I'll post the result of a little test I did a while ago (which is in the databasedrivers newsgroup).
My test of buffer size involved reading a file of about 4400 rows from my server into a queue.
The server is on a WAN, so the round trip is slow: about 230 ms, compared to 20 ms in the
office. It yields a table like this:
Buffer (rows)  Time (1/100 sec)
1 19948
10 8596
20 7103
30 6707
40 6391
50 6332
60 6144
70 6169
80 6040
90 6053
100 5970
200 5861
300 5770
400 5744
500 5751
600 5806
700 5768
800 5741
900 5700
1000 5747
1200 5765
1400 6225
1600 6159
1800 6022
2000 5985
So: rapid improvement (from about 200 seconds down to about 60 seconds) for buffer sizes up to
100 rows. Only minor improvement after that; the lowest point was at 900 rows, and times seem
to creep up slightly beyond that.
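For what it's worth, the shape of the test is just "time a full pass that reads every row into a queue", once per buffer setting. Below is a minimal sketch of that loop, not the actual test code; the file, field and connection names are made up, and how you set the fetch-buffer size depends on which driver you are using:

```clarion
  PROGRAM
  MAP
  END

! Hypothetical MSSQL table; the real test read a file on the WAN server
ConnStr     STRING('server,database,user,password')  ! placeholder - exact format depends on the driver
Customers   FILE,DRIVER('MSSQL'),OWNER(ConnStr),NAME('customers'),PRE(Cus)
Record        RECORD
Id              LONG,NAME('id')
Name            STRING(60),NAME('name')
              END
            END

ResultQ     QUEUE
Id            LONG
Name          STRING(60)
            END

StartTime   LONG
Elapsed     LONG

  CODE
  OPEN(Customers)               ! fetch-buffer size is a driver-level setting; name varies by driver
  StartTime = CLOCK()           ! CLOCK() returns the time of day in hundredths of a second
  SET(Customers)                ! start of file, primary order
  LOOP
    NEXT(Customers)
    IF ERRORCODE() THEN BREAK.  ! stop at end of file (or on error)
    ResultQ.Id   = Cus:Id
    ResultQ.Name = Cus:Name
    ADD(ResultQ)                ! copy the row into the in-memory queue
  END
  Elapsed = CLOCK() - StartTime ! the "Time" column above, in 1/100 sec
  CLOSE(Customers)
```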
I would say that you have two main choices:
a) you load the whole file into your queue (using an appropriate buffer size, like 200), and then moving around in that data set is a matter of using Clarion queue commands, or
b) you get a smallish data set from the server (using LIMIT) and if you need another set, you send another query to get another smallish data set.
a) your effort is up-front and everything after that is quick, but you need memory on the client to hold the queue, and you are a bit on your own because the traditional Clarion way of working is closer to b)
b) you are potentially sending a bunch of different queries to the server, but each one should be relatively quick as long as you have the indexes to support it. The trick is that each query should return a small data set, either because your real-world filters already narrow it down, or because you use limits. You will still benefit from a reasonable buffer size (not 1, which is the default). There's a sketch of this approach just below.
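To make b) a bit more concrete, here is a hand-coded sketch of fetching one smallish page with PROP:SQL and loading it into a queue. It reuses the hypothetical Customers file from the sketch above; TOP is MSSQL syntax (use LIMIT on PostgreSQL/MySQL), the SELECT list has to line up with the file's record fields, and the string-built filter is just for illustration (no bind variables):

```clarion
PageQ      QUEUE
Id           LONG
Name         STRING(60)
           END
LastName   STRING(60)            ! highest name already shown; blank for the first page

  ! Option b): ask the server for one small, indexed slice at a time
  FREE(PageQ)
  Customers{PROP:SQL} = 'SELECT TOP 100 id, name FROM customers ' & |
                        'WHERE name > ''' & CLIP(LastName) & ''' ' & |
                        'ORDER BY name'
  LOOP
    NEXT(Customers)              ! NEXT() walks the result set of the PROP:SQL query
    IF ERRORCODE() THEN BREAK.
    PageQ.Id   = Cus:Id
    PageQ.Name = Cus:Name
    ADD(PageQ)
    LastName   = Cus:Name        ! remember where this page ends, for the next query
  END
```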
A sort of worst-of-both-worlds is the out-of-the-box Clarion method, which sends queries that result in large result sets being put together on the server ("send me everything in this order, starting from this row"), and sends a new query (though with bind variables) every time you need a new page of data.
Bruce's drivers are basically set up to optimize the b) method.
It really depends what you are doing. I do a lot of non-Clarion SQL work, and the way most of that is done (like in the MSSQL Management Studio) is that your initial query returns a data set limited to the first 50 or 100 matching rows, basically so you can check that you are getting what you want; then you can ask for the full data set, which you might want to export somewhere (a flat file, an Excel spreadsheet, or whatever).
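In plain SQL that workflow is just two variants of the same statement (MSSQL syntax, made-up table and filter):

```sql
-- First pass: a small slice, just to sanity-check the filter and the ordering
SELECT TOP 100 id, name, balance
FROM customers
WHERE region = 'EU'
ORDER BY name;

-- Once it looks right, drop the TOP and take the full result set for export
SELECT id, name, balance
FROM customers
WHERE region = 'EU'
ORDER BY name;
```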