PROP:Timer set at runtime seemingly running slower on some laptops. Any ideas?


I have a display that cycles through stored images, showing them using the TIMER on the window. The period is set in minutes in a configuration window and stored; on open, it is applied from the stored value using DisplayWin{PROP:Timer} = StoredMins * 6000.
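For context, PROP:Timer is expressed in hundredths of a second, which is where the 6000 comes from. A minimal sketch of the conversion (window and variable names are from my app; the rest is illustrative):

```clarion
StoredMins  LONG              ! interval in minutes, read from the configuration file
  CODE
  ! PROP:Timer is in hundredths of a second:
  ! 100 ticks = 1 second, so minutes * 60 * 100 = minutes * 6000
  DisplayWin{PROP:Timer} = StoredMins * 6000
```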

In testing the timings are all as set, but when deployed on the customer laptops the image changes are far further apart than configured. The laptop clock shows perfectly correctly in the status bar, so we know the machine is not faulty in that respect. These laptops have hefty security (Sophos), but in this case they are only used to drive the display screen, so they run just the one user app (over a network; the app sits on a server).

I can obviously change the setting to seconds to reduce the interval, but I would just like to understand what is going on. Does anyone know why?

I don’t know why you’re having that problem, but then I’ve never used such a large interval on a TIMER myself.

I wonder if it’s possible that certain modal activities could be re-setting the timer? With such a large timer value, that could potentially contribute to the discrepancy.

Maybe you’d have better luck if you used a smaller PROP:Timer, and compared current time to the last triggered time.
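Something along these lines, perhaps (a sketch only; the variable names and the ShowNextImage routine are made up for illustration):

```clarion
! Sketch: fire the TIMER every second, and only change the image
! once the configured number of minutes has actually elapsed.
LastShown    LONG             ! CLOCK() value (hundredths of a second) at the last change
IntervalMin  LONG             ! configured interval in minutes
  CODE
  DisplayWin{PROP:Timer} = 100          ! tick once per second
  LastShown = CLOCK()
  ...
  CASE EVENT()
  OF EVENT:Timer
    IF CLOCK() - LastShown >= IntervalMin * 6000
      LastShown = CLOCK()
      DO ShowNextImage                  ! hypothetical routine
    END
  END
```

One caveat: CLOCK() rolls over at midnight, so a production version would need to handle that wrap-around (or compare dates as well).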

Hi JimM,

What is the StoredMins variable used for? PROP:Timer is in hundredths of a second, so a value of 100 fires once per second. Your minimum PROP:Timer value is 6000 (when StoredMins is 1), which means once per minute.


The idea from jslarve of shortening the timer interval and using another variable to trigger the change is an interesting one. Frankly, the idea that some event outside my program might reset the timer hadn’t crossed my mind before, and it is well worth a try, since the work involved is very little. I will let you know how I get on. Does anyone else know something germane to this?

As to why there is a user configuration at all: letting users choose and change the interval, rather than hard-coding it, is a specification requirement. To keep it humanly sensible, the interface offers units of a minute (or second). Offering seconds would allow settings of less than a minute, which could compensate for the inaccuracy we are seeing by setting a shorter interval than actually wanted (commonly 2 minutes is wanted, and setting 1 gives about 4), but that would be a very poor solution.

Regards, Jim

Many thanks to jslarve. I tried your suggestion with a shorter TIMER and it worked a treat!

Looking into your suggestion of other processes outside my program having an effect, I found the following about thread priorities:
“If a higher-priority thread becomes available to run, the system ceases to execute the lower-priority thread (without allowing it to finish using its time slice), and assigns a full time slice to the higher-priority thread. For more information, see Context Switches.”
I suspect the organisation has some higher-priority processes. By stepping the activity up with a lower TIMER value, I get many more shots at an execution (with the counter increasing). The odd one or two lost, when a higher-priority thread takes the slot, have far less impact on the overall period. I can’t be certain that is the true explanation, but the more frequent TIMER did deliver an accurate enough period.

So thanks again and regards, Jim

You’re most welcome. By “modal” stuff, I was thinking about perhaps the application menu being touched. I don’t know what happens to the timer (whether or not it is “reset”), but it does not fire while a menu is dropped. Since you have (had) a large timer interval, that might make (have made) for the variance you’re seeing.

I think that sometimes processes running during the timer interrupt will stop the timer, to prevent that process from being interrupted before it finishes. If that is happening, the time between the interrupts you experience can vary.

Look at the SetTimer API instead of the Clarion PROP:Timer. I find it very effective for long timer periods, without the program having to constantly check whether the interval has expired.
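A rough sketch of what that might look like from Clarion (the MAP prototypes here are my assumptions about how to declare the Win32 calls; check them against the SetTimer/KillTimer documentation before relying on them):

```clarion
! Sketch: driving a long interval with the Win32 SetTimer API.
  MAP
    MODULE('Win32')
      SetTimer(LONG hWnd, LONG nIDEvent, LONG uElapse, LONG lpTimerFunc),LONG,PASCAL,NAME('SetTimer')
      KillTimer(LONG hWnd, LONG nIDEvent),LONG,PASCAL,NAME('KillTimer')
    END
  END
  CODE
  ! uElapse is in milliseconds, so minutes * 60 * 1000.
  ! Passing 0 for lpTimerFunc posts WM_TIMER messages to the window
  ! instead of calling a callback.
  SetTimer(DisplayWin{PROP:Handle}, 1, StoredMins * 60000, 0)
  ...
  KillTimer(DisplayWin{PROP:Handle}, 1)   ! tidy up when the window closes
```

Note that Windows coalesces and delays timer messages under load too, so this is not immune to the priority effects discussed above, but it does remove the need to poll.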