What happens to a Queue after 67,108,864 records have been added to it?
In practice, I don't expect there to be more than 100 records in the queue at any one time, because one thread adds a record and another deletes it. But over the course of a session, including the PC going to sleep, I expect more than 67,108,864 records to be added in total.
So does anyone know what happens to the queue once this magic number is reached?
When working with the template language, I have hit the limit on the number of items that can be added to a queue, and it just stops adding. It fails silently: the read/add loop keeps going, it just does not add. Going back and outputting the items from the queue shows the items missing from the end of the queue.
This does not mean that I did everything correctly. When I discovered the error, I could verify the problem but could not get past it, so I commented out the problem code and set that data set aside until a later time.
In a program, I have not run into the limit, so I cannot answer the specific question.
If the available memory gets used up, the OS begins writing to virtual memory. But if you're adding and deleting records, I doubt you'd ever get there, since the memory held by a queue entry is recovered when the record is deleted.
OTOH, if the help tells you there’s a hard limit, there must be a reason.
I made a simple queue with a single BYTE field in it and tried adding 2^26 records, but it bogged down from disk swapping after about 50,000,000 records. Pretty much unusable after that.
When I added the LARGE_ADDRESS flag so it could get past that 2GB limit, I passed that number of records, but I then ran into ERRORCODE 8, Insufficient Memory. I don't think it even tried to swap to disk with LARGE_ADDRESS enabled.
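For anyone who wants to reproduce the experiment, here is a minimal sketch of the kind of stress test described above. It assumes standard QUEUE verbs (ADD, RECORDS, FREE, ERRORCODE); the names TestQ and the loop limit are just illustrative.

```clarion
  PROGRAM
  MAP
  END

TestQ   QUEUE          ! simplest possible queue: one BYTE field
B         BYTE
        END
I       LONG

  CODE
  LOOP I = 1 TO 67108864            ! 2^26 attempted ADDs
    TestQ.B = 1
    ADD(TestQ)
    IF ERRORCODE()                  ! e.g. 08 Insufficient Memory
      ! RECORDS(TestQ) tells you how far it actually got
      BREAK
    END
  END
  FREE(TestQ)                       ! release the queue's memory
```

Note that, per the reports above, you cannot rely on ERRORCODE() being posted — the failure mode may also be a silent no-op ADD or a GPF, so checking RECORDS() against the expected count afterwards is the safer verification.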
ADD writes a new entry from the QUEUE structure data buffer to the QUEUE. If there is not enough memory to ADD a new entry, the “Insufficient Memory” error is posted.
Sometimes I have received the ERRORCODE(), other times it has failed silently, and sometimes it has GPFed.
So it is unpredictable.
In such cases (it has probably only happened to me a couple of times in 30 or so years), it is time to rethink your approach.
Maybe you use a disk file, with the queue as a cache so that recently used records stay in memory. When you add a queue entry and the RECORDS() count exceeds some limit, it is time to flush to disk and remove some queue entries. Mind you, that massively complicates your code (e.g. on a read, if the entry is not found in the queue you then go and check the disk) and therefore increases the capacity for, and probability of, bugs.
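Something along these lines — purely a sketch, with invented names (CacheQ, LogFile, FlushLimit) and a TOPSPEED file standing in for whatever backing store you'd actually use:

```clarion
FlushLimit  EQUATE(100000)          ! arbitrary cache ceiling

CacheQ  QUEUE,PRE(CQ)               ! in-memory cache
Id        LONG
Data      STRING(64)
        END

LogFile FILE,DRIVER('TOPSPEED'),PRE(Log),CREATE
Record    RECORD
Id          LONG
Data        STRING(64)
          END
        END

FlushCache PROCEDURE                ! call after each ADD(CacheQ)
  CODE
  LOOP WHILE RECORDS(CacheQ) > FlushLimit
    GET(CacheQ, 1)                  ! oldest entry first
    Log:Id   = CQ:Id
    Log:Data = CQ:Data
    ADD(LogFile)                    ! spill to disk
    DELETE(CacheQ)                  ! recover the queue memory
  END
```

The read path then becomes two-stage: GET against CacheQ first, and only on a miss go out to LogFile — which is exactly the extra complexity (and bug surface) mentioned above.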
I have heard of some people setting up a RAM disk instead, but I have not done that myself.
Writing to disk would be too slow. I've got events coming into a callback procedure in quick bursts, especially at startup and shutdown, so I'm using a Queue as a buffer before the individual event record data is passed over to ServiceMain, which can process that particular type of event and start a new Clarion thread to process the subsequent data from a single event.
I'm already having to wrap this in a critical section to maintain the integrity of the global queue, and I'm hoping the CS won't slow things down too much. I am also considering moving some of the initial event-processing code from ServiceMain back into the HandlerEx, which goes against MS guidelines, and then starting new Clarion threads to process the remaining data linked to the event.
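For what it's worth, the producer/consumer guard around a global queue can be kept quite small, which limits how much the CS costs you. A sketch, assuming the ICriticalSection / NewCriticalSection() sync objects from CWSYNCHM.INC (check your Clarion version); EventQ, eventKind and eventData are placeholders:

```clarion
EventQ  QUEUE,PRE(EvQ)              ! global buffer shared across threads
Kind      LONG
Data      STRING(256)
        END
QLock   &ICriticalSection

  CODE
  QLock &= NewCriticalSection()     ! create once, before the callback fires

! in the callback (producer) -- hold the lock only for the ADD:
  QLock.Wait()
  EvQ:Kind = eventKind              ! placeholder event fields
  EvQ:Data = eventData
  ADD(EventQ)
  QLock.Release()

! in ServiceMain (consumer) -- pop one entry, then process it
! OUTSIDE the critical section so bursts aren't serialized:
  QLock.Wait()
  IF RECORDS(EventQ)
    GET(EventQ, 1)                  ! oldest event
    DELETE(EventQ)
  END
  QLock.Release()
```

Keeping only the ADD/GET/DELETE inside the lock, and doing the actual event processing (and thread START) after releasing it, is what keeps the critical section from becoming the bottleneck during those startup/shutdown bursts.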