I have been working with Clarion for a while, but I am looking for best practices to optimize performance. My application has grown, and I have started noticing slowdowns, especially in reports and database queries.
Here are a few things I have tried:
Using indexes properly – Ensuring that frequently queried fields have indexes.
Minimizing loops – Reducing unnecessary loops in processing logic.
Optimizing file access – Trying to batch read/write operations instead of frequent disk access.
Using QUEUEs efficiently – Avoiding large in-memory queues where possible.
Despite these, I feel like I could be doing more. Are there any specific techniques or hidden tricks that seasoned Clarion developers use to speed up applications? I have also searched the forum for solutions related to my query and found this thread https://clarionhub.com/t/need-help-for-clarion-learning-and-development-related-issues- but couldn’t get any help.
I’m pretty sure you’ll get a lengthy reply from @vitesse (because it’s his pet subject)
But there are a few things one can do. Capesoft has a profiler template that can help a lot.
But it really comes down to how fast you actually want things and what you consider slow.
It’s very possible to spend an inordinate amount of time gaining time that the user of your program won’t even notice!
Almost all browses and reports in Clarion should be based on a single “file”, where that “file” can be a view in the SQL database. This is because for certain databases and certain combinations of joins, Clarion decides that the database is incapable of doing those joins and does them client-side, which is very slow. This may slightly complicate the Clarion program for browses (the update form for a view is probably going to be based on the primary file of the view, so you need to grab that row as you enter the form), but it’s a small price to pay.
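For the update-form wrinkle, a minimal sketch (the table, key and field names here are just assumptions): the browse is based on a SQL view declared in the dct, and on entering the form you fetch the real base-table row by its primary key.

```
! Browse rows come from CustomerOrdersView; the form updates the Orders table.
! Copy the key value out of the view buffer, then fetch the real row.
ORD:OrderID = COV:OrderID
GET(Orders, ORD:PK_OrderID)            ! PK_OrderID = primary key declared in the dct
IF ERRORCODE()
  MESSAGE('Order ' & COV:OrderID & ' not found: ' & ERROR())
END
```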
You should have no Clarion processes; they should all be replaced by a stored procedure in the database. The exception is where the result is not stored in the database but outside it (e.g. you are writing something to a .CSV file). This avoids a lot of unnecessary traffic back and forth to the client program.
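To illustrate pushing a process to the backend, a hedged sketch (the procedure and table names are assumptions, and the call syntax depends on the backend; this one assumes MS SQL Server): the whole batch runs inside the database and the Clarion side issues a single call through PROP:SQL.

```
! Replace a row-by-row Clarion process with one server-side call.
Orders{PROP:SQL} = 'EXEC dbo.ArchiveOldOrders'
IF ERRORCODE()
  MESSAGE('Archive failed: ' & FILEERROR())
END
```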
Browses should always have a WHERE clause when they open. In the ISAM world, we implicitly have browses that are populated like SELECT <columns> FROM <table> ORDER BY <key>. In the ISAM world that means: get the index values one by one, get the row corresponding to each index value, and stop when you have a full screen of rows. That works about equally fast whether your table has 1,000 or 1,000,000 rows. In the SQL world, the database has no idea when you send it that query that you really only want 20 or 30 rows, so it prepares to get them all, and that can be expensive. If you are going to use your browse for someone to look up a name, wait until you have the name, or part of it, before you ask the database anything.
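A rough ABC-style sketch of that idea (the field, control and sort-id names are assumptions, and it assumes your driver translates UPPER() and SUB() server-side — check with Trace.exe): keep the browse empty until the user has typed at least part of the name, then send a restrictive prefix filter.

```
! In the browse procedure, whenever the locator/search field changes:
IF CLIP(LOC:SearchName) = ''
  BRW1.SetFilter('CUS:CustomerID = -1','NameSearch')      ! no search yet - return no rows
ELSE
  BRW1.SetFilter('UPPER(SUB(CUS:Name,1,' & LEN(CLIP(LOC:SearchName)) & |
      ')) = UPPER(''' & CLIP(LOC:SearchName) & ''')','NameSearch')
END
BRW1.ResetFromFile()
```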
Use the BUFFER command to tell the driver to get multiple rows, maybe 200 at a time. That’s a trade-off between your third and fourth rules: fewer file accesses, more storage in memory.
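A sketch of that call, assuming your file driver supports multi-row fetch via BUFFER (behaviour varies by driver, so check the driver documentation):

```
OPEN(Customer)
BUFFER(Customer, 200)          ! ask for ~200 rows per round trip instead of one
SET(CUS:KeyName)
LOOP
  NEXT(Customer)
  IF ERRORCODE() THEN BREAK.
  ! ... process the row ...
END
```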
Make sure your database indexes match your Clarion keys. They don’t all have to exist in the database, but a common problem is that Clarion sorts strings case-insensitively while the database sorts them case-sensitively. So Clarion sends a query with ORDER BY UPPER(column), and the database index is on the bare column, so it cannot be used to sort by UPPER(column).
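One way to close that gap, sketched as a one-off DDL statement sent through PROP:SQL (this assumes a backend such as Oracle or PostgreSQL that supports function-based indexes; SQL Server would need a computed column instead, and the names are made up):

```
! Give the backend an index that ORDER BY UPPER(Name) can actually use.
Customer{PROP:SQL} = 'CREATE INDEX idx_customer_uname ON Customer (UPPER(Name))'
IF ERRORCODE() THEN MESSAGE('Index creation failed: ' & FILEERROR()).
```

The other common fix is to mark the Clarion key as case sensitive in the dct so the driver stops wrapping the column in UPPER() in the first place.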
There’s nothing on my list that isn’t already covered by yours, except maybe I’m suggesting going a little further.
But yes, in order to improve performance you need to first understand where the time is going. That starts by profiling the code so that you understand what is happening.
Yes, you can improve performance by guessing, or by asking for advice and then inspecting code to see what needs improvement. And yes, adding, say, ,AUTO to a local variable can “speed up performance”. But if you’re not addressing the reason why it’s slow, you’re just fiddling around; you’re not going to make real progress.
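For completeness, the ,AUTO micro-optimisation mentioned above looks like this; it just skips zero-initialising the local variable on every entry to the procedure (the procedure and helper names are made up):

```
SomeProc      PROCEDURE()
WorkBuffer      STRING(64000),AUTO   ! not cleared on entry - always assign before reading
  CODE
  WorkBuffer = BuildReportLine()     ! hypothetical helper; write before you read
```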
If you think a browse is slow, use Trace.Exe (or read prop:sql) to see the SQL being passed to the backend. Make use of your database profiling tools to understand (EXPLAIN) the data access.
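If you want to see the statement from inside the program, a small sketch (this assumes the SQL driver lets you read back the last generated statement through PROP:SQL, which is how I understand it to work):

```
! After the browse has filled, e.g. in a temporary debug-only embed:
MESSAGE('Driver sent: ' & Customer{PROP:SQL})
```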
Of course you can make your data access as fast as you like, but perhaps the best improvement is when you don’t use data access at all.
Most of all, a generic question like “make it faster” will get very generic answers. The more detailed your question and the more information you provide, the more topical advice you may receive.
Assuming you have a well-designed database and are using TPS files for your application, the best way, in my experience, to improve performance is to switch to a SQL database. This will be especially helpful in multi-user systems accessing data across a network.
Once you have done that you can use the database analysis tools, usually provided with the database, to analyse the bottlenecks in your use of the data.
So, please provide more information about your current setup for more specific advice.
I have used Capesoft’s Profiler when I needed to optimize Clarion code. I think I had to generate a full debug version first to be able to compile in debug mode, then turn off debug for all apps that didn’t need it. After that it was easy to navigate.
The SQL “First Responder Kit” helped me a lot with optimizing queries and identifying errors.
SQL Server Profiler also helped to identify queries that took a particularly long time, or queries where the criteria had gone missing.
The point of browses having a default filter is important, and I discovered a problem in that area: The “filters and range limits are ignored” error.
If you have ways for the user to set filters and for some reason haven’t covered all the possible mistakes they can make, which are more than you could possibly imagine, then the browse will instruct the backend to build a cursor on the entire table. I had a browse like that where the result was a 30 million record view. When I added an override to the ApplyFilter method that tested for possible errors (see the sketch below), the complaints plummeted.
Also, always make sure the filter is valid for server side filtering.
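A hedged sketch of that override (ABC; the exact method signature may differ between Clarion versions, and the validation shown is only an example of “testing for possible errors”): check that the user-built filter at least evaluates before handing it to the view, and fall back to a never-true filter rather than an unfiltered cursor over millions of rows.

```
BRW1.ApplyFilter PROCEDURE()
  CODE
  IF CLIP(LOC:UserFilter) <> ''
    EVALUATE(CLIP(LOC:UserFilter))                         ! fields used must be BINDed
    IF ERRORCODE()                                         ! bad expression - do not send it
      MESSAGE('That filter is not valid: ' & CLIP(LOC:UserFilter))
      SELF.SetFilter('CUS:CustomerID = -1','UserFilter')   ! never true - avoids a full-table cursor
    ELSE
      SELF.SetFilter(CLIP(LOC:UserFilter),'UserFilter')
    END
  END
  PARENT.ApplyFilter()
```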
Looks like these might be “Hit and Run” queries? I wonder if a client is interviewing candidates to fix their Clarion problems and the candidates are using the Clarionhub brains trust? If so, I hope they post their proposals here so we can have a laugh!
I am not sure of the reason for the posts but at least they gave a forum for people to share their respective experiences regarding optimising.
As Sean noted above, code and system optimisation has been something of a hobby (obsession?) of mine for around 40 years now. The first thing is to decide if something is worth optimising. If it is, then you need to know where the bottleneck is that is taking the most time. Sometimes you can optimise the code if it is written inefficiently, other times you can re-engineer it to attack the problem differently, maybe using different structures and/or algorithms.
Anyway it is good for people to share their experiences!
If using ISAM files, consider creating files which store just the data that would be used in these reports, and then have a periodic batch process which keeps these report files up to date. This is a bit like what SQL servers do automatically behind the scenes, in that they log the commonly used queries and then keep that data to hand, sometimes in memory, sometimes on disk.
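A minimal sketch of such a batch process (the file, key and field names are assumptions): rebuild a small report file from the main Orders file on a schedule, so the report itself only ever reads the small file.

```
RebuildOrderSummary  PROCEDURE()
  CODE
  OPEN(Orders)
  OPEN(OrderSummary)
  EMPTY(OrderSummary)                    ! start fresh each run (EMPTY needs exclusive access)
  SET(ORD:KeyOrderDate)                  ! sequential pass in date order
  LOOP
    NEXT(Orders)
    IF ERRORCODE() THEN BREAK.
    SUM:OrderDate  = ORD:OrderDate       ! copy only what the report needs
    SUM:CustomerID = ORD:CustomerID
    SUM:Total      = ORD:Total
    ADD(OrderSummary)
  END
  CLOSE(OrderSummary)
  CLOSE(Orders)
```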
If using a SQL server, consider writing your own stored procedures which create the views tuned for maximum performance for these reports and queries. A view created by a stored procedure is supposedly faster than a fixed view created in the SQL Server. Both forms of views can be stored in the dct so they can be accessed and used.
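To show how such a procedure can be consumed from Clarion, a sketch (the procedure name, parameters and table are assumptions): the result set is read back through a dct table’s buffer via PROP:SQL, so the report loop stays ordinary Clarion code. The columns returned must match the order of the fields in the dct definition.

```
ReportData{PROP:SQL} = 'EXEC dbo.GetMonthlySales @Year = 2024, @Month = 6'
IF ERRORCODE()
  MESSAGE('Query failed: ' & FILEERROR())
ELSE
  LOOP
    NEXT(ReportData)                     ! each NEXT fetches one result row into the buffer
    IF ERRORCODE() THEN BREAK.
    ! ... print the report line ...
  END
END
```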
Capesoft’s Profiler can help, but you can optimise some Clarion code to the point where you can’t make it any faster, so also look around for any Windows APIs that could be called to speed things up for you; if such an API exists, it will typically be one of the newer APIs introduced after the security model overhaul seen in Windows 8/8.1.