Help to Use PROP:SQL

Hello everyone, I would like to know how to use PROP:SQL. I want to do searches in the database, but I'm used to TPS, so I've always used the normal filters.
I would also like to know what you recommend I use: ODBC, with the Clarion or the ABC templates?

Using ODBC with PostgreSQL and ABC.
Easiest is to use the same filter statements as you might embed with TPS.
For instance, in the browse ApplyFilter method before parent call:
SELF.SetFilter('MyPre:FieldNameA=' & Loc:FieldValueA, '1Filter')
SELF.SetFilter('MyPre:FieldNameB=' & SQuote(Loc:FieldStringB), '2Filter')

If you have a situation where you want to pass an SQL statement directly:
DctTableLabel{PROP:Alias} = 'v'
SELF.SetFilter('SQL(v.fieldnamea=' & Loc:FieldValueA & ');', '1Filter')
SELF.SetFilter('SQL(v.fieldnameb=' & SQuote(Loc:FieldStringB) & ');', '2Filter')

For both situations, the ODBC driver will do the work to generate a proper SQL query.
If you are not getting what you expect, you can always use ODBC Admin trace to see exactly what is being passed to PG.
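As an alternative to the ODBC Admin trace, the Clarion SQL drivers can also log what they generate via the LOGFILE driver string. A minimal sketch, assuming the ODBC driver accepts LOGFILE at runtime (the path is hypothetical; check the driver documentation for the exact switches):

```clarion
! Assumption: MyTable is an ODBC-driven file; LOGFILE starts driver tracing
SEND(MyTable, 'LOGFILE=c:\temp\sql.log')   ! log generated SQL to this file
! ... open the browse / run the query here ...
SEND(MyTable, 'LOGFILE')                   ! omitting the filename stops logging
```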


I am going to try.
I didn’t know Clarion interpreted it like that.

{PROP:SQL} allows you to send statements directly to the back end: SELECT, UPDATE, INSERT, etc.

I used it with MSSQL and ODBC (MS Access), and the speed is the same as using the original engine. Sometimes I don't even need to define the tables anywhere in Clarion; just run the query and get your results.



Furthermore, it is very efficient for data grouping, which I find hard to do in Clarion, while it takes seconds with millions of records using PROP:SQL.

Hi, are you using the original file that you query or are you using dummy table to execute prop:sql statement(s)?

I am using a dummy table.


Thank you. May I know at least an example structure of dummy table you are commonly using?

It can be any SQL table; you only have to take care of the following:

  • number of fields should be equal or more than the expected number of columns of the results (field names do not matter)
  • data types and length should match the results of the query

In brief, the dummy table should match the query's result-column specs, not those of the original table.
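A minimal sketch of what that can look like, with hypothetical names and a GROUP BY query whose two result columns (a string and a count) the dummy table's fields are shaped to match:

```clarion
! Hypothetical dummy table: field names are arbitrary, but the number,
! types and lengths of the fields must cover the query's result columns.
QryResult     FILE,DRIVER('ODBC'),NAME('qryresult'),PRE(QR)
Record          RECORD
GroupName         STRING(50)   ! matches the grouped text column
RowCount          LONG         ! matches COUNT(*)
                END
              END

  CODE
  OPEN(QryResult)
  QryResult{PROP:SQL} = 'SELECT category, COUNT(*) FROM products GROUP BY category'
  IF ERRORCODE() THEN MESSAGE('Query error: ' & FILEERROR()).
  LOOP
    NEXT(QryResult)
    IF ERRORCODE() THEN BREAK.
    ! QR:GroupName and QR:RowCount now hold one result row
  END
```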



One more thing: it is better to put /TurboSQL=TRUE in the driver options.
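For example (file and column names hypothetical), the switch goes in the driver string of the FILE declaration:

```clarion
! /TurboSQL=TRUE tells the driver to trust the declaration rather than
! querying the back end for column information, which suits dummy tables
! whose fields already match the result columns exactly.
QryResult  FILE,DRIVER('ODBC','/TurboSQL=TRUE'),NAME('qryresult'),PRE(QR)
Record       RECORD
GroupName      STRING(50)
RowCount       LONG
             END
           END
```

Per the driver documentation, the switch can also be issued at runtime with SEND(QryResult, '/TURBOSQL=TRUE') before opening the file.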


If this is for queries: PROP:SQL replaces the Clarion SET statement, and you still need a LOOP/NEXT to fetch the returned records. Also, it is more efficient to put any "range" limit into the PROP:SQL statement itself, rather than using an "if this isn't what I want, then break" approach. So, for example:

myresulttable{PROP:SQL} = 'select whatever columns from x join y on x.id = y.parent where … order by …'
if errorcode() then message('query problem ' & fileerror()).
loop
  next(myresulttable)
  if errorcode() then break.
end ! loop


For benchmarking purposes, I'm currently using the PROP:SQL approach against a dummy SQL file that I use for browsing, limiting the results to between 1000 and 5000 records. When we use our original SQL file in the browse, it takes 1–60 seconds to load because each record is approximately 500k (or about 600mb in size).

I'm not sure you are looking in the right direction. In the first answer you got from Douglas, he suggested you look at SetFilter in Clarion. PROP:SQL is not a magic bullet. In Clarion, if you set it up to retrieve a bunch of columns for rows that meet certain criteria, the Clarion driver will construct a statement that looks like:
select <some_columns> from <table> where <criteria> order by <something>
If you write and send a PROP:SQL statement that looks like:
select <some_columns> from <table> where <criteria> order by <something>
then you will get exactly the same performance and results when that query is sent to the server.

You can use trace to see what the Clarion driver sends to the server. I suggest you look at the following:

  1. does Clarion retrieve more columns than you actually want for the browse
  2. make sure you don't have any client-side joins happening (if Clarion thinks the server cannot join tables, it will do the join itself, and that can be very slow)
  3. make sure you have the indexes on the server to support any filtering or ORDER BY clauses

Note that if you see good performance in some other query tool (i.e. not clarion), that may be because a) the query tool has hinted the query to give you the first rows fast and/or b) has limited the initial result set to 20 or 50 rows.

No, I'm just asking how @sk2107 implements his code using PROP:SQL; it was @ThalesSilveirs who asked the original question. Anyway, I'm looking for a way to display a browse with a large number of records by limiting the rows loaded, more like lazy loading (each time the user scrolls to the last row, the next set of records is loaded). For example, say I have a products table and want a product catalog displaying all products. Since the SQL query takes some time to load all the records (more than 10k) compared to TopSpeed, I need to limit the displayed records to a number of rows that takes no more than a second or so to load. I tried a page-loaded browse; it improves the speed a little, but there is a slight delay when scrolling.

There is a class that allows other "pagination" approaches and has its use cases. There are more detailed explanations on the SV newsgroups; the last thread I remember was in comp.lang.clarion, topic "Changing buffer size in a browse" by Koen Tjoa, 2021-07-26. That thread has references to older ones too, plus the class and explanations.


I will check on this, Thank you so much!

You might also find this page useful. If you’re heading towards PROP:SQL, different databases do things differently when it comes to paging through results.

Whoops, and the link…

OFFSET is bad for skipping previous rows.
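A sketch of the keyset (seek) alternative through PROP:SQL, assuming a dummy table like the one described earlier in the thread (prefix QR, fields matching id and name), a hypothetical products table with an indexed id key, and PostgreSQL-style LIMIT syntax:

```clarion
! Keyset pagination: seek past the last key seen instead of making the
! server count and discard OFFSET rows, so each page costs about the same.
LastId    LONG        ! highest id fetched so far (0 before the first page)
PageSize  LONG(50)

  CODE
  QryResult{PROP:SQL} = 'SELECT id, name FROM products ' & |
                        'WHERE id > ' & LastId & ' ' & |
                        'ORDER BY id LIMIT ' & PageSize
  LOOP
    NEXT(QryResult)
    IF ERRORCODE() THEN BREAK.
    LastId = QR:Id    ! remember where this page ended for the next fetch
  END
```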


When it comes to browsing, defining the table/query in the dictionary is the best way. I do filtering the Clarion way, and even locator behavior / find-anywhere is super fast (the fields involved should be indexed in SQL). I have a table with 6 million records and it runs smoothly.
One problem that can occur is that records may appear duplicated in the browse list; the solution is adding Additional Sort fields and choosing the primary key index.



May I know how many columns you have in your table with 6 million records? I am using a table with 90-plus columns; maybe that's the reason queries slow down when the record count gets big.