LOGOUT - Unable to log transaction error

Using Clarion 6.3, I’m trying to LOGOUT a single 80Mb TPS file, running locally and I’m getting the ‘Unable to logout transaction’ error. I’m not getting the error on another smaller version of the file.

Does anyone know what causes this error?

I do not know what causes this error (48?), but I had this ‘Unable to log transaction’ error once. The advice then was to turn off RI: Global Properties - Actions… tab - File control: ‘Enclose RI code in transaction frame’ (not sure if this is also the spot in C6).

Hello!
I asked the AI about your problem; maybe it helps: “Unable to logout transaction” error in Clarion 6.3 typically occurs during a LOGOUT operation (used to bracket a transaction) when Clarion can’t flush the buffered data to disk properly. This can be caused by several issues, especially when dealing with a large TPS file (like your 80MB file). Here’s a breakdown of common causes and suggestions for resolution:


:magnifying_glass_tilted_left: Common Causes of the “Unable to logout transaction” Error

1. TPS File Corruption

  • Larger files are more prone to minor corruption, especially if a write operation was interrupted.
  • A corrupted index or data record can prevent Clarion from completing the LOGOUT properly.

:white_check_mark: Solution:

  • Run TPSFix (comes with Clarion) on the large file:

    TPSFix YourFile.TPS /V

  • Use /V for verbose output.
  • Check if TPSFix reports any errors or repairs the file.

2. File Locking or Sharing Issues

  • If another process or thread has the file open or locked, the LOGOUT/COMMIT can’t complete.
  • This is more likely in multi-threaded or multi-user environments, even locally.

:white_check_mark: Solution:

  • Ensure the file is not open elsewhere, including:
    • Another instance of your app
    • Antivirus or backup software
  • Add debug logging to confirm no other part of your program has the file open in a conflicting mode (a quick exclusive-open check is sketched below).
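
One quick (if crude) way to check for a conflicting handle from code is to attempt an exclusive open of the file before starting the transaction. This is only a sketch, assuming the file is declared as YourFile (a placeholder name):

  ! Sketch: detect a conflicting handle by attempting an exclusive open.
  ! 12h = Read/Write + Deny All; 42h = Read/Write + Deny None (normal shared open).
  CLOSE(YourFile)                               ! release this thread's own handle first
  OPEN(YourFile, 12h)                           ! exclusive open
  IF ERRORCODE()
    MESSAGE('No exclusive access: ' & ERROR())  ! something else still has the file open
  ELSE
    CLOSE(YourFile)
    OPEN(YourFile, 42h)                         ! reopen shared for normal processing
  END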

3. Disk Write or Flush Failure

  • If the disk subsystem fails to flush the write buffer (e.g. due to low space, permission issues, or hardware problems), the transaction can’t be completed.

:white_check_mark: Solution:

  • Check available disk space.
  • Test writing to the same directory with a simple test app (see the sketch below).
  • Ensure your app has write permissions to the file and directory.
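
Following up on the “simple test app” suggestion, you can do the write test from Clarion itself with a throwaway file in the data directory. A minimal sketch using the DOS driver; TestFile, the TST prefix and the file name are placeholders, not anything from the original post:

TestFile  FILE,DRIVER('DOS'),NAME('writetest.tmp'),PRE(TST),CREATE
Record      RECORD
Buffer        STRING(32)
            END
          END

  CODE
  CREATE(TestFile)                       ! fails if the directory is not writable
  IF ERRORCODE()
    MESSAGE('Cannot create test file: ' & ERROR())
    RETURN
  END
  OPEN(TestFile, 12h)
  TST:Buffer = 'disk write test'
  ADD(TestFile)                          ! fails on disk-full or flush problems
  IF ERRORCODE()
    MESSAGE('Write failed: ' & ERROR())
  END
  CLOSE(TestFile)
  REMOVE(TestFile)                       ! clean up the test file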

4. Insufficient Clarion File Buffers or Memory

  • The TPS driver uses internal buffers; if memory is constrained or the transaction is large (multiple inserts/updates), the LOGOUT may fail.

:white_check_mark: Solution:

  • Increase the TPS buffer size using the Clarion environment settings or at runtime.
  • Minimize the number of file operations inside a LOGOUT/COMMIT block, e.g. by committing in batches as sketched below.
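
For example, for a long-running purge like the date-based delete mentioned later in this thread, you could commit in batches rather than wrapping the whole run in one transaction. A rough sketch; YourFile, the YF: prefix, YF:Date and CutoffDate are placeholder names, and it assumes the file is already open shared:

Deleted   LONG                           ! running count of deleted records

  CODE
  SET(YourFile)                          ! process in physical record order
  LOGOUT(1, YourFile)                    ! 1-second lock-wait timeout
  IF ERRORCODE() THEN RETURN.
  LOOP
    NEXT(YourFile)
    IF ERRORCODE() THEN BREAK.           ! end of file (or read error)
    IF YF:Date >= CutoffDate THEN CYCLE. ! keep records on or after the cutoff
    DELETE(YourFile)
    Deleted += 1
    IF Deleted % 500 = 0                 ! commit every 500 deletes
      COMMIT
      LOGOUT(1, YourFile)                ! open the next small transaction
      IF ERRORCODE()
        MESSAGE('LOGOUT failed: ' & ERROR())
        RETURN
      END
    END
  END
  COMMIT                                 ! commit the final partial batch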

5. Incorrect LOGOUT/COMMIT Logic

  • If your code attempts a LOGOUT on a file that isn’t properly opened for updates or is already inside another transaction, you can get this error.

:white_check_mark: Solution:

  • Ensure you are doing something like:

  LOGOUT(1, YourFile)                    ! first parameter is the lock-wait timeout in seconds
  ! update records here
  COMMIT

  • Avoid nesting or overlapping LOGOUT blocks.
  • Check ERRORCODE() after LOGOUT (and after COMMIT) to confirm the transaction started and completed successfully; a fuller pattern with ROLLBACK is sketched after the example below:

  LOGOUT(1, YourFile)
  IF ERRORCODE()
    MESSAGE('Logout failed: ' & ERROR())
    RETURN
  END
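
A slightly fuller pattern (still only a sketch with placeholder names) also checks each update inside the frame and rolls back on failure, so a half-finished transaction is never committed:

  LOGOUT(1, YourFile)                    ! 1-second lock-wait timeout
  IF ERRORCODE()
    MESSAGE('Unable to start transaction: ' & ERROR())
    RETURN
  END
  ADD(YourFile)                          ! example update; check every operation in the frame
  IF ERRORCODE()
    ROLLBACK                             ! undo everything since LOGOUT
    MESSAGE('Update failed, rolled back: ' & ERROR())
  ELSE
    COMMIT
  END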

:test_tube: Diagnostic Tip:

Try performing the same transaction on a copy of the large file with fewer records (e.g. 75% or 50% of the data) to see when the error starts to appear. That helps isolate whether it’s related to data, size, or specific records.


:white_check_mark: Summary of Fix Steps

  1. Run TPSFix on the file.
  2. Ensure exclusive access to the file.
  3. Check disk space and permissions.
  4. Simplify the LOGOUT/COMMIT logic.
  5. Try on a trimmed-down copy of the file.

Thanks for that.
I ran TPSFix on the file and it didn’t report any errors, but LOGOUT subsequently stopped reporting the error.

Hi Geoff, I think this probably means your data was ok but there was corruption in the index blocks.

Hi Geoff. The first time I ran it I didn’t notice any errors, and I used the option to rebuild the file, but it was still corrupt. I know because I’m running a process to delete records before a certain date, and it deleted all the records.

The next time I ran it, it found a corrupt block and rebuilt the file automatically and my process ran correctly.

Hmmm, now I’m wondering if I ran it once using C6 on my development VM, and another time using C10?

Hmm, a couple of variables there. Anyway, glad it is sorted now!

Hi,

In my old TPS days, I used an example TPS file when running TPSFIX…
Doing it this way, TPSFIX detected some issues that didn’t appear without the example file…


Yes using an example file is definitely recommended in case the header has been corrupted. The main issue is to make sure the example file is the same version as the data file!
