Using Clarion 6.3, I’m trying to LOGOUT a single 80MB TPS file running locally, and I’m getting the ‘Unable to logout transaction’ error. I’m not getting the error on another, smaller version of the file.
Does anyone know what causes this error?
I don’t know what causes this error (48?), but I had this ‘Unable to log transaction’ error once. The advice then was: turn off RI - Global Properties - Actions… Tab: File control: ‘Enclose RI code in transaction frame’ (not sure if this is also the spot in C6).
Hello!
I asked the AI about your problem, maybe it helps: The “Unable to logout transaction” error in Clarion 6.3 typically occurs during a LOGOUT operation (used to bracket a transaction) when Clarion can’t flush the buffered data to disk properly. This can be caused by several issues, especially when dealing with a large TPS file (like your 80MB file). Here’s a breakdown of common causes and suggestions for resolution:
Solution: run TPSFix on the file, e.g.

TPSFix YourFile.TPS /V

(/V for verbose output.)

Solution: make sure the transaction is bracketed correctly (LOGOUT takes a time-out in seconds as its first parameter):

LOGOUT(1, YourFile)
! update records here
COMMIT

Solution: check LOGOUT() and COMMIT for success. LOGOUT doesn’t return a value; it posts error 48 (‘Unable to Log Transaction’) when it fails, so test ERRORCODE() after it:

LOGOUT(1, YourFile)
IF ERRORCODE()
  MESSAGE('Logout failed!')
  RETURN
END
Try performing the same transaction on a copy of the large file with fewer records (e.g. 75% or 50% of the data) to see when the error starts to appear. That helps isolate whether it’s related to data, size, or specific records.
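Something along these lines lets you point the same FILE declaration at a copy and retry just the LOGOUT bracket on it. This is only a sketch: the BigFile label, the BigCopy.TPS name and the SaveName variable are placeholders, not your actual declarations.

SaveName  CSTRING(261)

  CODE
  SaveName = BigFile{PROP:Name}          ! remember the real file name
  COPY(BigFile, 'BigCopy.TPS')           ! source file must be closed when copied
  IF ERRORCODE()
    MESSAGE('Copy failed: ' & ERROR())
    RETURN
  END
  BigFile{PROP:Name} = 'BigCopy.TPS'     ! point the declaration at the copy
  OPEN(BigFile, 42h)                     ! read/write, deny none
  LOGOUT(1, BigFile)
  IF ERRORCODE()                         ! 48 = Unable to Log Transaction
    MESSAGE('LOGOUT fails on the copy too: ' & ERROR())
  ELSE
    ROLLBACK                             ! only testing the bracket; nothing to commit
  END
  CLOSE(BigFile)
  BigFile{PROP:Name} = SaveName          ! restore the original name

You can then delete chunks of records from the copy between runs to narrow down when the error starts or stops appearing.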
Or run TPSFix on the file.

Thanks for that. I ran TPSFix on the file and it didn’t report any errors, but afterwards LOGOUT no longer reported the error.
Hi Geoff, I think this probably means your data was ok but there was corruption in the index blocks.
Hi Geoff, The first time I ran it I didn’t notice any errors, and I used the option to rebuild the file, but it was still corrupt. I know because I’m running a process to delete records before a certain date and it deleted all the records.
The next time I ran it, it found a corrupt block and rebuilt the file automatically and my process ran correctly.
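For reference, this is roughly the shape of that kind of purge, wrapped in a transaction frame so a failed run can be rolled back. It is only a sketch: the HistFile declaration, the HIS: fields and the cut-off date are made up for the example, not taken from the real app.

HistFile  FILE,DRIVER('TOPSPEED'),NAME('History.TPS'),PRE(HIS)
DateKey     KEY(HIS:Date),DUP
Record      RECORD
Date          LONG                       ! Clarion standard date
Notes         STRING(60)
            END
          END

CutOff    LONG
Failed    BYTE(0)

  CODE
  CutOff = DATE(1,1,2020)                ! example cut-off date
  OPEN(HistFile, 42h)                    ! read/write, deny none
  LOGOUT(1, HistFile)                    ! start the transaction frame
  IF ERRORCODE()                         ! 48 = Unable to Log Transaction
    MESSAGE('LOGOUT failed: ' & ERROR())
    RETURN
  END
  SET(HIS:DateKey)                       ! walk the file in date order
  LOOP
    NEXT(HistFile)
    IF ERRORCODE() THEN BREAK.           ! end of file
    IF HIS:Date >= CutOff THEN BREAK.    ! reached the cut-off; stop deleting
    DELETE(HistFile)
    IF ERRORCODE()
      Failed = 1
      BREAK
    END
  END
  IF Failed
    ROLLBACK                             ! undo the partial delete
  ELSE
    COMMIT
  END
  CLOSE(HistFile)

The Failed flag is there because NEXT() also sets ERRORCODE() when it hits end of file, so testing ERRORCODE() alone after the loop can’t tell a clean finish from a failed DELETE.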
Hmmm, now I’m wondering if I ran it once using C6, on my development VM and another time using C10?
Hmm, a couple of variables there. Anyway, glad it is sorted now!
Hi,
In my old TPS days, I used an example TPS file when running TPSFIX…
Doing it this way, TPSFIX detected some issues that didn’t appear without the example file…
Yes, using an example file is definitely recommended in case the header has been corrupted. The main thing is to make sure the example file is the same version as the data file!