Error: (Unable to frame transaction (48)) attempting to frame the transaction and LogoutTimeout on TPS driver

I occasionally get this error when updating records in one location where the database is the largest. I gather this could be due to using LOGOUT to frame transactions with the TopSpeed database driver.

Looking at Abfile.clw I can see that the default value is 2 seconds, which sometimes may not be sufficient. I’ve seen a note somewhere that LogoutTimeout should be set to more than 2 seconds on busier networks.

TransactionManager.Construct               PROCEDURE()
 CODE
    SELF.Files &= NEW(TransactionManagerQueue)
    SELF.RMList &= NEW(FilesOnTransactionQueue)
    SELF.UselogoutList &= NEW(FilesOnTransactionQueue)
    SELF.LogoutTimeout = 2
    SELF.LogoutOff = False
    SELF.AutoLogoutOff = True
[...]	
RelationManager.Init PROCEDURE(FileManager F,BYTE Log)
  CODE
    SELF.Me           &= F
    SELF.Relations    &= NEW RelationQueue
    SELF.UseLogout     = Log
    SELF.LogoutTimeout = 2
    FilesManager.AddFileMapping(SELF)	
	
TransactionManager.SetTimeout              PROCEDURE(BYTE pTimeout)
 CODE
    SELF.LogoutTimeout = pTimeout

I defined my own global variables GLO:LogoutTimeout & GLO:UseLogout and am adding lines of code before and after PARENT.Init in my data DLL:

Hide:Relate:PaNag.Init PROCEDURE(FileManager FM,BYTE UseLogout=0)

  CODE
  UseLogout = GLO:UseLogout  
  PARENT.Init(FM,UseLogout)
  PARENT.LogoutTimeout = GLO:LogoutTimeout

but doing it this way I have to add it there for all my tables. Is there a better place to put this code so that it affects all tables? I don’t want to change that in Abfile.clw; I’d rather have a setting in the INI file so that end users can choose.
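
What I have in mind is reading the values once at program start, roughly like this (just a sketch; the INI file name, section and entry names are made up, and GETINI returns the default when an entry is missing):

  ! Read once at startup, before any files are opened
  GLO:LogoutTimeout = GETINI('Transactions', 'LogoutTimeout', 2, '.\MyApp.ini')   ! seconds, default 2
  GLO:UseLogout     = GETINI('Transactions', 'UseLogout', 1, '.\MyApp.ini')       ! 1 = on, default on

That way end users could tune the timeout without a rebuild, but I still need a single place where these values get applied to every table.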

On a separate note, does it make sense at all to use UseLogout with the TPS driver? I’m aware that it locks the whole table during a transaction, but that has not been much of a problem, and I prefer to use transactions to prevent data corruption. Do you use this feature with TPS?

Using LOGOUT is using transactions, and it is the easiest/best way of handling transactions unless you want to code it all by hand. LOGOUT also improves the write speed for TPS processing, which also helps make it worthwhile.
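
Coding it by hand looks roughly like this (just a sketch with made-up Customer and Orders files; the generated ABC code does the equivalent for you):

  LOGOUT(2, Customer, Orders)        ! frame both files with a 2-second timeout
  IF ERRORCODE()                     ! error 48 = unable to frame the transaction
    MESSAGE('Could not start the transaction: ' & ERROR())
  ELSE
    ADD(Customer)
    IF ~ERRORCODE() THEN ADD(Orders).
    IF ERRORCODE()
      ROLLBACK                       ! discard everything written since LOGOUT
    ELSE
      COMMIT                         ! write all the changes out in one stream
    END
  END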

See SetTimeout() in the help if you want to increase the timeout value. I’d be curious to know if it helps?

The LOGOUT error you are getting is probably affected by many factors, but it might indicate that it’s time to consider an SQL database?

Something the ABC FileManager classes don’t use is Stream() and Flush().

If you have some batch processing to do, wrapping the code in Stream()/Flush() will keep the file open in Windows, which can speed things up.
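
For example, something along these lines (a rough sketch with a hypothetical Detail file that only this workstation is writing to):

  OPEN(Detail)                       ! deny-write access by default, unlike SHARE(Detail)
  STREAM(Detail)                     ! stop flushing the OS buffers on every write
  LOOP 100000 TIMES
    ! ... fill the Detail record buffer ...
    ADD(Detail)
  END
  FLUSH(Detail)                      ! flush the buffers and end the stream
  CLOSE(Detail)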

Delaying the PrimeRecord autoinc will also speed things up.

IMO the filemanager class could be overhauled.

The problem with Stream/Flush is that they require exclusive access, so they simply don’t work on a shared file. Logout/Commit do work and are damn near as fast.

Transactions are used by default in the Relation manager class to update records. So yes it makes sense.

There are a few scenarios that might lead to your error.

  • You have files in the logout using different drivers.
    In this case the logout would fail every time, so that may not be it.

  • You have a very long lived transaction.
    In which case you have to ask why the transaction is so long lived.

Transactions should be short. The templates, by default, only enact a logout when updating.
It should be noted that a special case here is the relation manager, which wraps an update; if you’re updating a linking field, one used as a key in a child table, that can take a bit of time.

I’d look into what’s going on and what’s being updated to make these transactions long lived.
There may be other reasons, but this seems to fit what you’ve told us.

All files involved use the TopSpeed driver. These are 2 form procedures updating a single TPS file, with some “other files” included for lookups. One file has 242k records - 33 MB - 43 errors this month; the other 315k records - 20 MB - 17 errors this month. Up to 10 simultaneous users. These errors are thrown when updating records (at least some of them; users report problems updating, not inserting). It only happens in one location this application runs in; in other locations there are no such errors, but the tables are significantly smaller and there are not as many simultaneous users.

I never used TransactionManager before, but it seems to help here; at least I can set timeouts for all files in the transaction with a single line of code. I added the extension to these forms and am using the SetTimeout method in ThisWindow.Init. It seems to work with a single user; we’ll see if it helps when 10 users start working on this database. I guess we’ll find out next week when it gets busier.
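
The call itself is just one line in ThisWindow.Init (the object name here is only a guess; it depends on what the extension template generates):

  Transaction.SetTimeout(10)         ! hypothetical object name; allow 10 seconds to frame the transaction instead of 2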

Definitely the way to go. I am using MySQL/MariaDB or PostgreSQL with other apps not written in Clarion, but using these with the Clarion ODBC driver causes problems. The most annoying one is disconnects: “Error: MySQL server has gone away (08S01). Press OK to end this application.” I had 122 of those errors this month from one lookup procedure that uses MySQL ODBC for a faster “filter” locator, compared to 43 + 17 errors on 2 different procedures using TopSpeed tables. Capesoft drivers are expected to resolve this and other issues, so as soon as the MySQL driver is available I will definitely look into migrating everything, but so far I only have some tables in SQL databases and mostly batch process these from Clarion.

Here: topspeed_other.htm [Clarion Community Help]

Batch Processing Performance

When writing a large number of records, use STREAM() or open the file in a deny write mode, that is, OPEN(file) rather than SHARE(file). After the records have been written, call FLUSH() to allow other users access.

It is very important to use STREAM() when ADDing/APPENDing/PUTting a large number of records. STREAM() will typically make processing about 20 times faster. For example, adding 1000 records might take nearly 2 minutes without STREAM(), but only 5 seconds with STREAM.

It is not necessary to use STREAM() or FLUSH() on a logged out file (performance on logged out files is always good).

STREAM has the effect of LOCKing the file.

Speedy Logging and Automatic Recovery

TopSpeed transaction logging is very fast (about 100 times faster than the Clarion driver). With LOGOUT, the TopSpeed engine posts all transactions to memory. ROLLBACK simply frees the memory, while COMMIT writes out the database changes in a stream.

If a system crashes during a transaction (LOGOUT–COMMIT), the recovery is automatically handled by the TopSpeed driver the next time the affected file is accessed.

So it looks like Prop:Logout is using Stream(), but I do see performance improvements when specifically using Stream()/Flush() in code. So maybe the ABC FileManager isn’t changing the file access mode from Shared (2h) to Locked (12h). I’ll have to use SysInternals to see whether Logout() or Stream()/Flush() is changing the access mode, but from the way Stream() reads, it looks like it is.

Some file systems flush the operating system’s buffers on each disk write keeping the file “logically closed” (for example, the Clarion and TopSpeed file drivers do this by default). The STREAM statement disables this automatic flushing operation. A STREAM operation is terminated by closing the file, which automatically flushes the buffers, or by issuing a FLUSH statement.

It looks like Stream/Flush is working with NTFS file streams.

As the RecordID auto increment in the ABC FileManager is performed every time a record is Added, this can also be an area which is slow in batch processing. So if it’s possible to use the MS SQL Auto Inc switches on the TPS RecordID field, then rolling your own code to do the AutoInc may also be a way to speed up the ABC FileManager for TPS files. I’ve never tried this, but if it works it could speed things up a bit in this area.
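
Rolling your own would look something like this rough sketch (hypothetical Detail file with a Det:KeyRecordID key on Det:RecordID; this bypasses FileManager.PrimeRecord, so it only makes sense where one workstation owns the whole batch):

NextID        LONG
  CODE
  SET(Det:KeyRecordID)               ! key order; PREVIOUS reads the highest existing RecordID
  PREVIOUS(Detail)
  IF ERRORCODE()
    NextID = 0                       ! empty file
  ELSE
    NextID = Det:RecordID
  END
  LOOP 100000 TIMES
    CLEAR(Det:Record)
    NextID += 1
    Det:RecordID = NextID            ! assign the id in memory instead of priming per record
    ! ... fill the rest of the record buffer ...
    ADD(Detail)
  END

Combined with Stream()/Flush() this removes both the per-record key read and the per-record flush.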

The PROP:ServerAutoInc file property is used to specify which column of the table will receive the auto incremented query value.

Those options and properties are used at the driver level. To allow the application templates to use them, two user options were also added.

IsIdentity and ServerAutoIncColumn are used at the FILE user option level in the Dictionary.

For more information regarding any of these options, refer to the Server Side Auto incrementing for Clarion SQL file drivers help topic.

This is one area the ABC FileManager could be improved, because when using a TPS file with batch processing, as it’s going to be just one workstation adding/changing a lot of records, the AutoInc could definitely be handled better.