How to Optimize Clarion Code for Better Performance

Hey guys! :blush:

I am currently working on a project using Clarion and I’m looking for advice on how to optimize my code for better performance.

Despite following standard practices, I’m still encountering issues with speed and efficiency, particularly with data processing tasks. Here are a few specific areas where I’m facing challenges:

  • Database Queries: What are the best practices for writing efficient database queries in Clarion? Are there any specific techniques to minimize execution time?
  • Looping Structures: Are there recommended ways to structure loops to enhance performance? I’ve noticed some lag when dealing with large datasets.
  • Memory Management: How can I manage memory usage effectively to prevent slowdowns and ensure smooth operation?
  • Code Profiling: What tools or methods are available for profiling Clarion code to identify bottlenecks?

I also checked this thread: https://clarionhub.com/t/need-help-for-clarion-learning-and-development-related-issuesnowflake but I did not find a solution there. Could anyone suggest the best approach? Any insights or resources you could share would be greatly appreciated. I believe improving these areas will significantly enhance the overall performance of my application.

Thank you in advance for your help! :saluting_face:

Respected community member :smiling_face_with_three_hearts:

Hi, Priyal,

What database backend are you using (SQL? If so, what kind? Or TPS?) If TPS, are data files on the local machine or accessed over a network?
Is this a single-user app, multi-user sharing the same data, web-based?

I have one app that processes a lot of JSON. My processing time was cut by more than 80% by using Capesoft’s Fastmem product.

For code profiling, Capesoft also sells a profiler tool.

What version of Clarion are you using?

Hi Priyal - You are asking for a pretty big book report. :slight_smile:

Maybe you could show where you’re having an issue to see if anyone can spot a bottleneck.

What is your database? What do your queries look like? What does your code look like?

Hi Priyal. As Jane suggested, Capesoft have a profiler. Assuming you're using SQL there are a few things you can do, but they tend to be specific, either to your procedure or to the backend. For example, there are a couple of connection parameters that make PostgreSQL quite a bit faster, but they don't exist for MSSQL.
Mostly the Clarion drivers produce pretty decent SQL, and they have got better over time.
There is also the SoftVelocity in-memory driver that can help in some situations.
But mostly you need to ask more specific questions.

Hi Priyal

as others have suggested, more information would be helpful.

obviously things like database query efficiency are largely outside the realm of Clarion, but can make an enormous difference. Having said that, you need to be sure any filter is being executed on the backend, and that all the data is not being passed over the network to your program to be filtered locally. Others will know a lot more about that than me, but you can do various traces and check what query is being passed to the backend database - and of course different backends will have different tools for query optimisation. Perhaps adding (or using) an index will make a large difference, so that only a small subset of the complete dataset needs to be accessed.
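
(as a minimal sketch of that tracing idea - the Customer file label and the log path here are just placeholders - the standard driver trace can be switched on per table with PROP:Profile:)

! turn on driver tracing for one table, run the slow operation, then turn it off
Customer{PROP:Profile} = 'C:\temp\cust_trace.log'  ! logs every driver call (and generated SQL) to this file
Customer{PROP:Details} = TRUE                      ! also log record buffer contents
! ... run the browse/process you want to inspect ...
Customer{PROP:Profile} = ''                        ! tracing off again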

on the Clarion side, there are lots of small things you can do to increase efficiency. These can add up and make a large difference. For example, I often see code that has unnecessary clip() statements in it, or that passes string data by value rather than by reference. All things that slow down code.
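
(to illustrate the by-value vs by-reference point - a rough sketch, all names invented: a STRING parameter copies the whole string on every call, while *STRING passes only a reference:)

  PROGRAM
  MAP
FastLen PROCEDURE(*STRING pText),LONG  ! *STRING passes by reference: no copy made
                                       ! declared as (STRING pText) it would copy all
                                       ! 60,000 bytes on every call
  END

BigText  STRING(60000)

  CODE
  MESSAGE('Len = ' & FastLen(BigText))

FastLen PROCEDURE(*STRING pText)
  CODE
  RETURN LEN(CLIP(pText))              ! one CLIP here beats CLIPping at every call site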

however the first thing you need to establish is where the bottleneck is. There is no point spending a large amount of time optimising code that is rarely used or only constitutes a tiny part of the elapsed time. Others have pointed out that Capesoft have a profiler which may be useful.

hth and cheers

Geoff R

Profiling: Capesoft's profiler will show you where your app is spending all its time. Once you have this you will be able to see which areas of the app are spending lots of time, such as loops, and whether any could be appearing unresponsive to the OS, which then triggers a pop-up message from Windows.

Once you have profiled and you know you have some loops in your app which are taking too long, you can decide to push the loop's work back to the server. For SQL databases this would be a stored procedure; for ISAM files like TopSpeed it would mean moving the code to SoftVelocity's IP driver server, which stops the raw data being pulled over the network to the workstation for processing.
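
(for example - a hypothetical sketch with invented table/column names, assuming Orders is an open SQL-backed table - the whole row-by-row loop collapses into one statement the server executes itself:)

! instead of LOOP / NEXT(Orders) / change a field / PUT(Orders) for every row:
Orders{PROP:SQL} = 'UPDATE orders SET status = ''SHIPPED'' ' & |
                   'WHERE ship_date <= ''2024-01-31'''
IF ERRORCODE() THEN MESSAGE('SQL error: ' & FILEERROR()).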

For memory management, leave this to the OS to manage, but make sure any queues/arrays/NEWed objects are freed after use instead of being freed when the app is closed. Some would consider the latter to be a memory leak.
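
(a small sketch of that housekeeping - the queue and reference names are made up:)

WorkQ    QUEUE
Name       STRING(50)
         END
Ptr      &LONG

  CODE
  ! ... fill and use WorkQ ...
  FREE(WorkQ)        ! release the queue's memory as soon as it is no longer needed

  Ptr &= NEW(LONG)   ! heap allocation via NEW
  ! ... use Ptr ...
  DISPOSE(Ptr)       ! pair every NEW with a DISPOSE; leaving it until app close is effectively a leak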

For writing queries in Clarion you need to enable the driver trace/logging functionality to see how the queries are being formed. This is more relevant for SQL backends, but you can also override the standard Clarion-generated queries by sending your own SQL commands and receiving the result set in a queue.
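
(a rough sketch of the "send your own SQL, read it into a queue" part - the Customer table and CUS: fields are placeholders for any open table on the target connection; the columns come back in the table's record buffer in SELECT order:)

ResultQ  QUEUE
Id         LONG
Name       STRING(50)
         END

  CODE
  Customer{PROP:SQL} = 'SELECT id, name FROM customers WHERE balance > 0'
  LOOP
    NEXT(Customer)                 ! each NEXT fetches one row of the result set
    IF ERRORCODE() THEN BREAK.
    ResultQ.Id   = CUS:Id          ! first SELECTed column lands in the first buffer field, etc.
    ResultQ.Name = CUS:Name
    ADD(ResultQ)
  END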

If you are using a SQL backend, then knowledge of optimising queries for that backend, rather than Clarion knowledge, will give you the best performance; in those cases, using Clarion to send the SQL command is all you need to do for maximum performance.

For ISAM files like TopSpeed, a trick I have used in the past is to create a RAM drive, store the TPS files on it, and run the IP driver server on the same machine as the RAM drive to gain maximum performance for intensive processing jobs. Just make sure you have a battery backup so you don't lose data when a power cut happens.

In all cases maximum performance is then limited by the hardware of the machine, and a nice Xeon CPU with plenty of RAM usually outperforms machines which are intended for workstation use.

https://www.capesoft.com/accessories/profilersp.htm

HTH

As others have noted, to improve performance you first need to understand what parts of your code are taking up the time. It's impossible to efficiently improve performance simply by guessing at what might be happening. In many cases the actual issue is not where you intuitively suspect it is.

We wrote Profiler (CapeSoft Profiler) specifically to address this issue. Once you have profiled your app you will have a better understanding of where the performance is going, and what to spend your time fixing.

With respect to the first two, and assuming you have a SQL backend…

Clarion writes good simple queries. If you need anything more complicated (for example, where you want to aggregate data in one table and add the result to some parent table), you are better off writing a view in SQL and importing the view as a Clarion table.
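
(for instance - a hypothetical declaration, all names invented - the SQL view is then just another table to Clarion:)

! dbo.CustomerTotals is a view on the server that does the aggregation;
! Clarion reads it like any other table
CustTotals  FILE,DRIVER('MSSQL'),OWNER(GloConnStr),NAME('dbo.CustomerTotals')
Record        RECORD
CustId          LONG,NAME('cust_id')
Total           DECIMAL(11,2),NAME('total_amount')
              END
            END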

Use the BUFFER command (it doesn't speed up the query itself, but it speeds up getting the results into Clarion).
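
(a sketch of BUFFER on a hand-coded VIEW - the names are invented; BUFFER also takes optional numeric parameters to tune the paging, which are documented in the Language Reference:)

CustView  VIEW(Customer)
            PROJECT(CUS:Id,CUS:Name)
          END

  CODE
  OPEN(CustView)
  BUFFER(CustView)       ! ask the driver to fetch rows in batches rather than one round trip per row
  SET(CustView)
  LOOP
    NEXT(CustView)
    IF ERRORCODE() THEN BREAK.
    ! ... use CUS:Id / CUS:Name ...
  END
  CLOSE(CustView)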

Make sure all your commonly used joins are supported by indexes in the database (this applies to any SQL database). Make sure that Clarion does not try to order things in a way the server's indexes do not support (the most common case is having a case-insensitive key in Clarion where the index on the server is case-sensitive).

IMO, if you have a SQL database you should not ever be using looping structures on data (including Clarion Process procedures). SQL databases are set up to provide fast set operations. When you are looping in Clarion you are doing everything row-by-row, slow-by-slow: read one row at a time, process one row at a time, write it back and wait for the database to tell you it has done it…
Write your update/insert query in the database, put it in a stored procedure, and call the stored procedure from Clarion.
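
(calling it from Clarion is then one line - a hypothetical sketch, the procedure name is invented, sent through any open table on that connection:)

Customer{PROP:SQL} = 'EXEC dbo.RecalcBalances 2024'   ! MSSQL syntax; CALL recalc_balances(2024) on MySQL/PostgreSQL
IF ERRORCODE() THEN MESSAGE('Stored procedure failed: ' & FILEERROR()).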

Another tip on LOOPing over datasets is to change your code to use PROP:Where to filter the data directly on the SQL server, instead of filtering inside the LOOP structure on the client side.
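
(along the same lines, here is a sketch using the view properties I can vouch for - PROP:Filter, which the SQL drivers translate into the WHERE clause where they can, and PROP:SQLFilter, which passes raw SQL through; the view and field names are invented:)

OPEN(CustView)
CustView{PROP:Filter} = 'CUS:Balance > 0'      ! Clarion expression, translated to WHERE by SQL drivers
CustView{PROP:SQLFilter} = 'region = ''EU'''   ! or raw SQL appended to the WHERE clause
SET(CustView)
LOOP
  NEXT(CustView)                               ! only matching rows cross the network
  IF ERRORCODE() THEN BREAK.
  ! ... process the row ...
END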

A quick way to speed up some SQL queries sent from Clarion is to add an INTO #temp clause within the query.

Change queries from:
select c.* from customers c where …

to:
select c.*
into #temp
from customers c
where …
select * from #temp

This can speed up queries significantly, at least in my experience with MSSQL and some queries via ODBC.

For queries of the FULL JOIN or UNION/UNION ALL type, it is necessary to change the code to:
select x.* into #temp from ( “full join query” ) as X
select * from #temp

This minimizes the problem of cursors in Clarion. I hope the advice is useful to you.
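
(a sketch of sending that pattern from Clarion - the table and field names are invented; this assumes both statements travel down the same connection, which they should when sent through the same open file, since #temp tables in MSSQL are connection-scoped:)

Customer{PROP:SQL} = 'SELECT c.* INTO #temp FROM customers c WHERE c.balance > 0'
Customer{PROP:SQL} = 'SELECT * FROM #temp'
LOOP
  NEXT(Customer)             ! rows now come from the already-materialised #temp
  IF ERRORCODE() THEN BREAK.
  ! ... process CUS: fields ...
END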

Hi Priyal, welcome!
I’m going to post my experience.
My clients run the EXE via a network shortcut:
1 - If you’re using TPS, it’s very bad for large queries over the network, even if you put a lot of keys on the table and use the ABC Browse template. Depending on the amount of data and the filters, it gets slow.
If you’re going to use it like this, I suggest changing the browse loading option from FILE to PAGE. The browse then loads only as much data as fits on screen, which speeds up the query, but it’s not good for totals, for example.
If you request 1000 rows, you’ll still be able to reach them all by scrolling down the browse’s vertical scrollbar, but you’ll only get the count of what the browse has loaded.
Another suggestion is to use queues: run the query yourself and populate a queue (see the sketch after this list), but that also gets slow depending on the amount of data.
2 - If you’re using MySQL, you need an ODBC connector. The speed is average and depends a lot on the server’s response. Both the browse and queue methods work.
3 - Per this documentation, GitHub - mikeduglas/PostgreSQL-for-Clarion: This is the Clarion application programmer's interface to PostgreSQL., you can use PostgreSQL, which is faster than MySQL.
4 - Another option is to work against an API, sending and receiving data via JSON. It is interesting because large result sets come back quickly, but it is more laborious, because you have to assemble the lists in queues and access the API endpoints for inserts, updates, and deletes.
At this link, mikeduglas (Mike) · GitHub, you can download libCurl and cJson, which are excellent for this method.
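
(the queue approach from item 1, as a rough sketch - the file and field names are invented, and Customer is assumed to be an open TPS file: load the data once into a queue and point the LIST at the queue:)

BrwQ  QUEUE
Id      LONG
Name    STRING(50)
      END

  CODE
  FREE(BrwQ)
  SET(CUS:KeyName)               ! read the TPS file in key order
  LOOP
    NEXT(Customer)
    IF ERRORCODE() THEN BREAK.
    BrwQ.Id   = CUS:Id
    BrwQ.Name = CUS:Name
    ADD(BrwQ)                    ! a LIST with FROM(BrwQ) then displays it with no further file reads
  END
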
I use:
→ TPS in the clients’ local software. We have problems with large network queries; the future project is to migrate this legacy data to SQL (but we haven’t had time yet).
→ MySQL on a VPS, accessed via DNS from my domain (api.mydomain), for:
  • my company’s management software.
  • user access control in the client software.
→ libCurl and cJson to access:
  • a postal address code API
  • a WhatsApp API (import and export in JSON)
  • an API for integrating data imported from a portal into the (TPS) software that I provide to my clients.
In both cases, the data is imported as JSON, listed in queues or groups, and finally saved to TPS.
I hope this helps.
Hugs to everyone in the community.