I once spent over an hour on a support call where a junior programmer had coded something like the following, with no error codes checked:
IF Error() then
Message('File is locked')
Because 5 (Access Denied) was the error he saw when testing. The real error was 2 (File Not Found), so we spent a long time trying to find a locked file… that didn't exist.
You should check out AUTOINC in the help. If you set that up properly you will always have the auto-incremented ID available right after an insert, so you shouldn't have to change any code.
It should save you a bit of time. Basically, with the Clarion-style autoinc you are asking the server for the last number (that's a round trip), then adding a stub record with the incremented number (that's a second round trip), then updating the record with your actual values (that's a third round trip). There's also the extra overhead of saving and restoring buffers (see the PrimeAutoIncServer method in ABFile).
If you set your tables to auto-increment the primary key on the server, you insert the record (one round trip) and then ask the server what the new ID value was and store it in your file buffer (a second round trip). That's two round trips instead of three.
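The two-round-trip pattern can be sketched in Python, using sqlite3 as a stand-in for MySQL (the table and column names here are invented for illustration):

```python
import sqlite3

# sqlite3 stands in for MySQL here; the 'materials' table and 'name'
# column are made up for the sketch. The server assigns the key on INSERT.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE materials (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)"
)

# Round trip 1: insert without supplying the id.
cur = conn.execute("INSERT INTO materials (name) VALUES (?)", ("some name 1",))

# Round trip 2: ask what id the server assigned. This is the job that the
# /AUTOINC='select LAST_INSERT_ID()' driver string does against MySQL.
new_id = cur.lastrowid
```

With sqlite3 the assigned key comes back on the same handle, but against a real MySQL server the retrieval is a separate statement, which is why it still counts as a round trip.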
With respect to your "Attempts to automatically number…" error above: threading is not going to be of any benefit there. If you have two procedures on different threads adding records, then each of them goes to the database and asks for the last number, then each tries to add a record with an incremented number, which will fail if the other thread got there first. So it tries again. If it fails three times like that (the other thread got there first three times), you will get the error. Sure, each thread has its own file buffer, but they are both still adding to the same database table and are going to get primary key conflicts.
Also, I did check out Logout/Commit with the Oracle driver, as opposed to the ODBC driver, and in that case the performance for 5000 records improved from 530 seconds to 370 seconds.
There are always different ways to do things. Personally, when it comes to autoincrementing I would use whatever the server has available and, if necessary, use the methods described in AUTOINC in the help to retrieve the number it added.
If you have a special situation like the one in your little test program (two procedures in the same program adding a lot of records and not wanting to conflict over the primary key), one way to deal with it is a home-grown version of a sequence (the way Clarion and Postgres handle autonumbering, and also an option in MSSQL).
1. Get rid of autonumbering on your primary key; you will always fill the column manually.
2. Set up a global queue and fill it with, say, 1000 sequential numbers.
3. Right before each Insert, grab the first record from the queue, put that value into your primary key column, and delete the queue entry. If there is no record in the queue, get the last added value from the table and create another 1000 sequential numbers in your queue.
That way neither of your procedures can use a number that has already been used. A bit slower, but workable across multiple programs, is to have that "queue" of numbers be a table in the database that all procedures adding to the target table pull from, deleting the numbers as they grab them.
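A minimal sketch of that home-grown sequence, in Python, with threading.Lock standing in for a Clarion critical section. The `fetch_last_id` callback is hypothetical; in real code it would run something like `select max(id) from materials` against the server:

```python
import threading

class NumberBlock:
    """Home-grown sequence: hand out pre-allocated ids in blocks so that
    concurrent writers never collide on the primary key.

    fetch_last_id is a hypothetical callback standing in for a query like
    'select max(id) from materials' against the server."""

    def __init__(self, fetch_last_id, block_size=1000):
        self._fetch = fetch_last_id
        self._block_size = block_size
        self._queue = []                  # the 'global queue' of unused ids
        self._lock = threading.Lock()

    def next_id(self):
        with self._lock:
            if not self._queue:           # queue empty: generate another block
                last = self._fetch()
                self._queue = list(range(last + 1, last + 1 + self._block_size))
            return self._queue.pop(0)

# Pretend the highest id already in the table is 100.
seq = NumberBlock(lambda: 100, block_size=1000)
first = seq.next_id()    # 101
second = seq.next_id()   # 102
```

The lock makes the grab-and-delete step atomic, which is what keeps two threads from handing out the same number.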
Another option, if you don't really care what your primary key looks like, is to use GUIDs, where the chance of conflict is vanishingly small. You can generate them yourself in Clarion, or most databases have a method to create them.
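For illustration, a GUID key can be generated entirely client-side; here is a sketch using Python's uuid module (most languages and databases have an equivalent):

```python
import uuid

# Each client invents its own key; uuid4 is random, so the chance of two
# writers ever producing the same key is vanishingly small.
def new_guid() -> str:
    return str(uuid.uuid4()).upper()

a = new_guid()
b = new_guid()
```

No round trip and no coordination between threads or programs is needed, at the cost of a wider, non-sequential key.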
Finally found how it works with MySQL:
I tried /AUTOINC='select LAST_INSERT_ID()' in the DCT, but it did not help.
Then I deleted that and changed the default setting for ID (it is a unique key) to AUTO_INCREMENT on the server side, and it works with ABC .Insert() or ADD(file). With ADD it works 10 times faster (using @CarlBarnes' error-checking procedure).
The only "disadvantage" is that the IDs are not reset even if you delete all the records; the sequence continues to grow the next time you add new records.
Thank you all for your help.
/AUTOINC does not do the autonumbering; the auto-increment on the server side does that. The /AUTOINC='select last_insert_id()' switch is there to retrieve the number that MySQL put into the saved record and place it in your record buffer in Clarion.
For example, if you have a bunch of things in a queue that you want to add as child records right after entering the parent, you:
Save the parent record. In the Clarion buffer you don't have the ID filled in, because you are letting the server fill it in as the record is saved.
Now you want to copy the ID of the parent record into all your queued children so you can save them. But your parent buffer still has nothing in the ID column. That's what the /AUTOINC code does: it asks the server what the last ID entered was and copies it into the PAR:ID column. Now you can do CHI:Parent_ID = PAR:ID.
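That parent/child sequence can be sketched like so, with sqlite3 standing in for MySQL and invented table names (PAR:ID / CHI:Parent_ID become par_id / parent_id):

```python
import sqlite3

# sqlite3 stands in for MySQL; table and column names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
conn.execute("CREATE TABLE child (id INTEGER PRIMARY KEY AUTOINCREMENT,"
             " parent_id INTEGER, item TEXT)")

queued_children = ["bolt", "nut", "washer"]   # the queue of child records

# Save the parent; the server fills in the id as the record is saved.
cur = conn.execute("INSERT INTO parent (name) VALUES (?)", ("order 1",))
par_id = cur.lastrowid                        # what /AUTOINC retrieves for you

# Now CHI:Parent_ID = PAR:ID, for every queued child.
for item in queued_children:
    conn.execute("INSERT INTO child (parent_id, item) VALUES (?, ?)",
                 (par_id, item))

rows = conn.execute("SELECT parent_id, item FROM child ORDER BY id").fetchall()
```

The key point is that `par_id` is read back from the server after the parent insert, not computed on the client.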
So the /AUTOINC is there to deal with what I think you meant when you said: "I have no idea where I use the just-added ID (on the client side), so it can be dangerous to do this now."
Yes, it is useful when there are no records (because they were deleted) and you start adding them again. In that case the IDs will start from the previous last ID, and it looks ugly. Then you can run that ALTER and it will start with 1 (otherwise it will start with the last ID added, plus 1, from before the records were deleted).
Looking at your table/dct definition I see you have no driver switches in use.
I'm assuming your MySQL server is handling the auto-incrementing?
If yes: I've only worked with MS SQL, so I don't know whether this works with the ODBC driver (but it should…), but this MS SQL help page says at the bottom:
For the MSSQL, SQLAnywhere and Pervasive.SQL drivers, this is all the work you normally need to do. For the Oracle and ODBC drivers (and in special cases for the other drivers) you will also need to apply the following driver string:
/AUTOINC=your SQL SELECT statement
with the necessary SQL statement you want executed for retrieving the auto-incrementing fields and
/PREAUTOINC=[TRUE|FALSE]
to indicate whether the auto-incrementing code should be executed before or after the INSERT statement. For all the drivers (except Oracle), if /PREAUTOINC is not specified, the SQL code is issued after the INSERT. For Oracle, it is issued before the INSERT.
These properties can easily be set through the Dictionary Driver String Options. Example:
!ORACLE Example
OracleFile FILE,DRIVER('Oracle', '/AUTOINC=SELECT Myseq.nextVal')
I would also use the driver trace to do an On-Demand log and see what is happening with the ODBC driver on this section of code of yours.
You could also try this post's suggestion in your ODBC dct table; it might work. I know I've used switches in individual key definition properties and individual field/column definition properties, so that I only had to define the MARS (/MULTIPLEACTIVERESULTSETS) switch and the /BUSYHANDLING=2 switch in the MS SQL dct table definitions.
Also see towards the bottom of this online help page (link below), this section:
Application/Dictionary Implementation
There are essentially two steps involved to activate the retrieval of a server side auto-incrementing value.
It shows how to add the driver switches to an individual field/column instead of adding them to the dct table options section, which is my preferred way of working.
Basically you can use some of these driver switches in more than one place in the dct, but the format will be different depending on where you add them.
In your dictionary, for this table, do any of the fields have anything set on the Validation tab?
ABC will first enforce this validation when doing an Insert. For most validations this is not significant; however, for must-be-in-file it is very expensive. If you have that option set, you would see this kind of effect.
Yes. Since I added AUTO_INCREMENT (thanks @Max) on each table's primary key, it works perfectly.
In that case, my existing code works great:
loop L = 1 to 50000
  clear(m:record)
  m:name = 'some name ' & L
  ...
  access:materials.insert()
end
and I can run it from anywhere, at the same time, any number of times, from any place (from the same app or from different exes).
The thing is that the server takes care of the next unique ID, and I can get each ID after each .Insert().
ADD also works without any problem; searching for the previous unique ID is not needed, and I can use code like this:
loop L = 1 to 50000
  clear(m:record)
  m:name = 'some name ' & L
  ...
  add(materials)
end
and MySQL adds the new ID itself (I don't have to check for the previous ID), etc. I decided to use .Insert() and not ADD just because I do not want to change existing code.
I do not use Validation. The tables were created from the DCT (I used the CREATE attribute and, once a table was created, removed CREATE from the DCT).
The only problem is that Clarion does not add AUTO_INCREMENT under Default value (on the server side). I presume it should, because the key in the DCT is defined as unique (NOCASE, OPT, PRIMARY).
The more error checking you can offload to the server, be it MS SQL or MySQL, the faster your code can run, because it is not having to operate across a network, which is slow. It is the same reason doing work in RAM is faster than on the hard disk, although the new SSD/NVMe drives are quick.
You presume wrong. In programming it's best not to presume, but to check the docs and see what you need to do. AutoNumbering is assumed to be client-side unless you specify otherwise.
The Debugger has its limitations, but it would be a good way to step through the code executed by that specific .Insert() and see every line of ABC source it executes. That would reveal whether there is some kind of slow client-side validation like a "must be in file".
In the Debugger, set a breakpoint on that .Insert() line in the LOOP so it stops there, then single-step the source. If you don't know how to do this, call into the Wednesday webinar.
OK, I was wrong, because AUTO_INCREMENT on the server side is causing some other problems… Can you please tell me where in the docs or help this can be found, or how it should be designed?
I need to be able to insert (Access:MyFile.Insert) many records into the same table, at the same time, from the same or different threads.
Currently I have found this, which covers all the above cases:
mysql_file{prop:sql} = 'LOCK TABLES materials WRITE'
...my insert record code here
mysql_file{prop:sql} = 'UNLOCK TABLES'
but I would be happy to see how it can be solved otherwise.
Thank you.
Does the lock allow you to add from different threads while stopping any other process from writing to the table?
If so, then after applying the lock, why not increment a global (non-threaded) id wrapped in a critical section? You will effectively have exclusive update access, so no one else will be able to mess with your id sequence, and the critical section will arbitrate (avoid conflicts) between your threads.
Some years ago (2006) I did a write up or report on critical sections for ClarionMag based on a presentation Bruce did at an Aussie DevCon in Geelong.
Basically the steps were:
- Include('cwsynchC.inc')  ! in global embeds, after global includes
- In global data, insert an entry called CS. The type is Class and the base type is CriticalSection.
- This generates: CS Class(CriticalSection)
- Wrap global data access:
cs.Wait()
Global data = whatever
cs.Release()
so in this case, if we have set up such a critical section named CS, and a global:
glo:id long
the code for adding records could be something like:
cs.Wait()
if glo:id = 0
  ! not initialized yet...
  clear(sql:record)
  sql_file{prop:sql} = 'select max(id)+1 from materials where userId = 1'
  next(sql_file)
  glo:id = sql:f1
end
cs.Release()

loop L = 1 to 50000
  yield()
  clear(m:record)
  m:name = 'some name ' & L & ' on thread ' & thread()
  m:userId = 1
  cs.Wait()
  m:id = glo:id
  glo:id += 1
  cs.Release()
  add(materials)
  if errorcode() then <complain>; break.
end
Sure, you will have the overhead of a critical section, but that should be tiny compared with going to the server for the next id number.
Note this is written so that different threads can be writing almost simultaneously. If you wanted other threads to wait while you did the full 50000 adds, you would wrap the critical section (Wait/Release) around the loop rather than around the id access and increment for each individual insertion. That would give a faster time for adding 50000 records, at the cost of the other threads having to wait until it was all done.
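For comparison, here is the same idea sketched in Python, with a threading.Lock playing the part of the CS critical section and a plain list standing in for the materials table (a sketch under those assumptions, not a translation of the Clarion code):

```python
import threading

lock = threading.Lock()      # plays the part of CS (CriticalSection)
next_id = 1                  # glo:id; would be seeded from 'select max(id)+1 ...'
table = []                   # stand-in for the materials table

def add_records(count, thread_no):
    global next_id
    for i in range(1, count + 1):
        with lock:           # cs.Wait() ... cs.Release()
            my_id = next_id
            next_id += 1
        # Outside the lock: the slow part (the actual add) runs concurrently.
        table.append((my_id, "some name %d on thread %d" % (i, thread_no)))

threads = [threading.Thread(target=add_records, args=(1000, t)) for t in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Only the id handout is serialized, so both threads write at nearly full speed while still never reusing an id.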