I long ago made a note to ALWAYS call Reset(Force) for an In-Memory file AFTER the file was closed. Where this “requirement” originated I no longer remember, and I can’t find any further documentation.
Because I am now seeing some GPFs after procedures where In-Memory tables are used, I am beginning to question the practice. Any ideas or history?
To be clear Douglas, you are calling RESET on an object and not on the file itself?
I expect your habit originated with the idea that “the memory table is a temp store, so when I close it I want to empty it.”
That is likely true for some use cases, and not true for others. If your mem table does NOT have /THREADEDCONTENT then it is shared across multiple threads. Emptying the table for everyone just because one thread is done seems like a bug.
So context will matter in your case, but if you do this I recommend you make sure the file is declared with /THREADEDCONTENT.
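For illustration, a minimal declaration sketch of an In-Memory table with per-thread content, assuming the driver-string form of the switch (the file label and fields are made-up names):

IMFile               FILE,DRIVER('Memory','/THREADEDCONTENT'),PRE(IMF)
Record                 RECORD
ID                       LONG        ! hypothetical fields
Name                     STRING(50)
                       END
                     END

With /THREADEDCONTENT each thread sees its own copy of the data, so emptying the table on one thread cannot pull rows out from under another.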
I agree, that is most likely the idea. However, I can’t take credit for such deep thinking. It’s one of those bits I picked up from the docs, or from a respected someone, for work I was doing during the COVID lockdown.
EMPTY(IMFILE)                           ! EMPTY() needs exclusive use of the file, otherwise it fails
IF ErrorCode() OR Records(IMFILE) > 0   ! EMPTY failed, so fall back to deleting record by record
  SET(IMFILE)
  LOOP UNTIL Access:IMFILE.Next()
    Access:IMFILE.DeleteRecord(0)       ! 0 = no delete confirmation prompt
  END
END
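For reference, a sketch of the kind of exclusive open that EMPTY() requires (12h = Read/Write + Deny All; purely illustrative, outside the ABC open the code above relies on):

OPEN(IMFILE, 12h)          ! exclusive access: read/write, deny all
IF ~ErrorCode()
  EMPTY(IMFILE)            ! clears all records in one call
  CLOSE(IMFILE)
END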
To be absolutely sure the file was empty, when it was very important, I would loop and delete 9 times. I guess my concern was that a flaw in the driver’s DELETE might cause sequential processing with NEXT to skip a record.
I never saw this, but it’s just two extra lines of code, and the extra LOOP only happens “while Records > 0” … so never, if the driver’s NEXT works. E.g.
LOOP 9 TIMES                  ! 9 TIMES prevents an infinite loop should Delete fail to work
  SET(IMFILE)
  LOOP UNTIL Access:IMFILE.Next()
    Access:IMFILE.DeleteRecord(0)
  END
WHILE Records(IMFILE) > 0     ! all were not deleted, so loop and delete again ... driver bug
With the added complexity of ABC doing the DELETE inside its DeleteRecord method it may not work, so this gives up after 9 tries.
I don’t know what Jeff is referring to. In the link he posted there is no example or explanation that I can see, just the same statement.
If I had to guess… you think that PREVIOUS() will keep starting from the last record in the file?
IMO the Help says not: the 2nd … Nth PREVIOUS will move back one record. That is how I understand sequential processing in Clarion to work, and I write code with that assumption.
The first PREVIOUS following a SET reads the record at the position specified by the SET statement.
Subsequent PREVIOUS statements read subsequent records in reverse sequence. The sequence is not affected by any GET, REGET, ADD, PUT, or DELETE.
The way I could see it working is if the SET was repeated:
LOOP
  SET(Key)
  PREVIOUS(Key)                    ! move to the (current) last record
  IF ErrorCode() THEN BREAK.
  DELETE(File)
  !? IF ErrorCode() THEN BREAK.
END
I would not write that code without a LOOP limit like LOOP 2*Records(File) TIMES, in case a record could not be deleted, maybe because it was LOCKed. That is very unlikely, as LOCK is rarely used. The file could also be STREAMed, which locks it and prevents all deletes. In that case my original LOOP 9 TIMES at least gives up and does not hang. A sketch of the guarded variant follows.
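Putting those together, a sketch that combines the SET/PREVIOUS loop above with the limit (the 2* multiplier is just a generous safety margin; Records() is evaluated once when the LOOP starts):

LOOP 2 * Records(File) TIMES   ! guard: give up rather than hang if a record cannot delete
  SET(Key)
  PREVIOUS(Key)                ! read the (current) last record
  IF ErrorCode() THEN BREAK.   ! no records left -- done
  DELETE(File)
END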
This whole drift I started, about multiple tries of DELETE, is kind of academic. I have never seen it happen, and the docs say DELETE does not affect NEXT or PREVIOUS. But as I said, “with two lines of code (LOOP n TIMES + WHILE Records > 0) you can be sure.”
When looping through a table to delete specific selected records, go backwards. The reason is that when one record is deleted and the loop skips forward, it starts from the deleted one, which no longer exists, so it actually takes the next existing record, which is then not processed because the table has shrunk by one record.
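To make that skip concrete, here is a hypothetical sketch using a Clarion QUEUE, where positional access shows the same effect (Q, its Flag field, and the LONG counter I are all made-up names):

! Forward is buggy: after DELETE(Q), entry I holds what was entry I+1,
! so incrementing I skips a record. Going backwards avoids the shift,
! because deleting entry I never moves the entries not yet visited.
LOOP I = RECORDS(Q) TO 1 BY -1
  GET(Q, I)
  IF Q.Flag THEN DELETE(Q).    ! delete only the selected entries
END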