I'm programming a tool that has to display a timer with a precision of 1/100 of a second in Tkinter with Python. I've tried using the after() method, starting it with self.timerLabel.after(0, self.update_timer):
def update_timer(self):
    self.timer += 0.01
    self.timerLabel.configure(text="%.2f" % self.timer)
    self.timerLabel.after(10, self.update_timer)
The problem is that it runs way slower than expected. Is there a workaround, or some way to have the timer run exactly on time?
Or maybe some way to use the computer's time to display the correct elapsed time on the screen.
Thank you in advance
The most accurate method is probably to record a start time and then, every time the update function is called, subtract the start time from the current time.
I've attached some example code to show you roughly how it would work. It should be fairly simple to adapt that to use after() for Tkinter.
import time

# Start time
startTime = time.time()

def update_timer():
    # Find the difference and print it
    timeDifference = time.time() - startTime
    print(timeDifference)
    # Sleep an appropriate amount of time before printing
    # the next value.
    time.sleep(0.1)
    # Recursively call update.
    update_timer()

# Start running
update_timer()
Example fiddle: https://repl.it/Hoqh/0
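Adapting that to Tkinter's after() might look roughly like the following sketch (the class and widget setup here are assumptions; only self.timerLabel and update_timer come from the question). Because the label shows the real elapsed time, it stays correct even when after() fires late.

import time
import tkinter as tk

class TimerApp(tk.Tk):
    def __init__(self):
        super().__init__()
        self.timerLabel = tk.Label(self, text="0.00")
        self.timerLabel.pack()
        self.start_time = time.time()
        self.update_timer()

    def update_timer(self):
        # Display the real elapsed time instead of accumulating 0.01 per call
        elapsed = time.time() - self.start_time
        self.timerLabel.configure(text="%.2f" % elapsed)
        # after() may fire late, but the displayed value stays correct
        self.timerLabel.after(10, self.update_timer)

TimerApp().mainloop()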
The after command makes no guarantees, other than it will not execute before the given time period. Tkinter is not a real time system.
You also have to take into account the time it takes to execute the code. For example, let's assume that your command starts precisely on time. In that code you have two lines of code:
self.timer += 0.01
self.timerLabel.configure(text="%.2f" % self.timer)
That code takes some amount of time to execute. It may be only a few microseconds, but it's definitely more than zero microseconds.
You then call after again:
self.timerLabel.after(10, self.update_timer)
The next run will be at least 10 ms after the moment after() is called, and that moment is itself a few microseconds or milliseconds after the callback started. So, if the first two commands take 1 ms, the next invocation will happen 11 ms after the first one (10 ms for the delay, plus 1 ms for the code to execute).
You can minimize the delay factor a little by calling after immediately, rather than waiting for the rest of the code to execute. As long as the delay is greater than the time that it takes to execute the other code, you'll notice a slight improvement:
def update_timer(self):
    self.timerLabel.after(10, self.update_timer)
    self.timer += 0.01
    self.timerLabel.configure(text="%.2f" % self.timer)
This still won't be precise, since there are other things that can prevent self.timer from being called exactly 10ms after it was requested. For example, if the window is resized or moved at 9.99ms, tkinter will have to handle the redraw before it can handle the scheduled task.
If you want to account for all of this drift, don't just automatically increment by 10ms each time. Instead, calculate the time between each invocation, and add the delta.
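A rough sketch of that delta-based update, meant as methods on the same class as your code above (self._last is an added, illustrative attribute; time.monotonic() avoids problems with system clock changes):

import time

def start_timer(self):
    self.timer = 0.0
    self._last = time.monotonic()       # remember when the previous tick happened
    self.update_timer()

def update_timer(self):
    self.timerLabel.after(10, self.update_timer)
    now = time.monotonic()
    self.timer += now - self._last      # add the real elapsed delta, not a fixed 0.01
    self._last = now
    self.timerLabel.configure(text="%.2f" % self.timer)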
Related
I'm making a client/server program and on the client I want a clock on the GUI that displays the running time. Now there's plenty of tutorials on here on how to make a clock/timer and I think I have that part down.
The issue is making one that runs in the background while the rest of the code executes. At the moment I have a loop for my timer that the code doesn't move past, so it just starts counting the timer then doesn't do anything else after that. At least until the timer is stopped anyway.
I'm guessing I need to find a way to make it run in the background, but I don't know how and I can't find the answer. It has been suggested to me that I should use threading/multithreading, but that looks kinda complicated and I can't quite figure it out.
Is there a better way to do it or is threading the way to go?
You can keep track of the time passed since a certain point by subtracting the start time from the current time. You can then update the timer display with this value (if you have a lot of other code running in between, the updates will become less frequent, so you might want to round the displayed value).
import time

start = time.time()
while doing_stuff:
    do_stuff()
    GUI.update_timer(time.time() - start)
I don't see any reason why threading is not a good idea. For one, if you have complex computations to run in your code, threading will enhance the performance by running your code and the timer in the background in tandem. Here's something that may help illustrate my point with a simple function to square numbers:
import time
import threading

def square():
    start_time = time.time()
    x = int(input('Enter number: '))
    squared = x * x
    print('Square is: %s' % squared)
    print('Time elapsed: %s seconds' % (time.time() - start_time))

set_thread = threading.Thread(target=square)  # run square() in its own thread
set_thread.start()
#Output:
Enter number: 5
Square is: 25
Time elapsed: 1.4820027351379395 seconds
Of course, the simple function above may take only a few seconds. The timer starts when the function is called and stops when the code in square() has finished. But imagine a situation where your code has much more complex computations, such as inserting multiple values into a database or sorting a large list of data while writing to a file at the same time.
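To make the concurrency aspect concrete, here is a small sketch (not the original answer's code; show_timer and the summing loop are just stand-ins) of a timer running in a daemon thread while the main thread does heavier work:

import time
import threading

def show_timer(start):
    # Runs in the background and prints the elapsed time twice a second
    while True:
        print('Elapsed: %.2f s' % (time.time() - start))
        time.sleep(0.5)

t = threading.Thread(target=show_timer, args=(time.time(),), daemon=True)
t.start()

# Stand-in for the "complex computations" on the main thread
total = sum(i * i for i in range(10_000_000))
print('Result:', total)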
What is the difference between the following two approaches?
Python sched:
from time import time, sleep
from sched import scheduler

def daemon(local_handler):
    print('hi')
    local_handler.enter(3, 1, daemon, (local_handler,))

if __name__ == '__main__':
    handler = scheduler(time, sleep)
    handler.enter(0, 1, daemon, (handler,))
    handler.run()
Python loop + sleep:
from time import sleep

while True:
    print('hello')
    sleep(3)
What is the difference between sched and loop + sleep, and will sched stop working if the system time is changed?
A big difference is that with the scheduler the delay before the next task is calculated as needed. With the loop, each iteration takes:
the time it needs to print('hello'), or to do whatever task you need to do, plus
the time it takes to sleep(3)
while if you change the order in your scheduler to:
local_handler.enter(3, 1, daemon, (local_handler,))
do_the_task
your next task will be run either after 3 seconds, or immediately after do_the_task if it took longer than 3 seconds.
So the decision really comes down to: do you want your task executed every X time units, or with X time units space between executions.
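As a rough, runnable illustration of that difference (do_the_task() here is a stand-in that just sleeps for one second):

from time import time, sleep
from sched import scheduler

def do_the_task():
    sleep(1)  # pretend the work takes 1 second

# Fixed rate: re-schedule first, then work -> runs roughly every 3 seconds
def daemon(local_handler):
    local_handler.enter(3, 1, daemon, (local_handler,))
    do_the_task()

# Fixed delay: work, then sleep -> runs roughly every 4 seconds (1 s work + 3 s gap)
def loop_version():
    while True:
        do_the_task()
        sleep(3)

if __name__ == '__main__':
    handler = scheduler(time, sleep)
    handler.enter(0, 1, daemon, (handler,))
    handler.run()  # or call loop_version() instead to compare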
Assuming you're using the typical (time, sleep) parameters, if the system time is changed, the task that is already waiting will still run after the expected delay (sleep takes care of this, unless some signal is received in the meantime), but the absolute times of subsequently scheduled tasks will be shifted, so I believe the next execution time will not be what you'd normally expect.
The difference between the two is that scheduler is more pythonic than loop + sleep for two reasons: elegance and modularity.
Long loops easily become difficult to read and require a lot more code to be written within. However, with a scheduler, a specific function can be called on a delay, containing all of the code within. This makes code much more readable and allows for moving code into classes and modules to be called within the main loop.
Python knows what the current time is by checking the local system. If the local system's time is changed, then that will affect a currently running program or script.
This is because Python's sched uses the system time to decide when the next iteration runs, whereas sleep just waits a relative delay before the next iteration.
I need to send repeating messages from a list quickly and precisely. One list needs to send the messages every 100ms, with a +/- 10ms window. I tried using the code below, but the problem is that the timer waits the 100ms, and then all the computation needs to be done, making the timer fall out of the acceptable window.
Simply decreasing the wait is a messy and unreliable hack. There is a Lock around the message loop in case the list gets edited during the loop.
Thoughts on how to get Python to send the messages consistently at around 100 ms intervals? Thanks.
from threading import Timer
from threading import Lock

class RepeatingTimer(object):
    def __init__(self, interval, function, *args, **kwargs):
        super(RepeatingTimer, self).__init__()
        self.args = args
        self.kwargs = kwargs
        self.function = function
        self.interval = interval
        self.start()

    def start(self):
        self.callback()

    def stop(self):
        self.interval = False

    def callback(self):
        if self.interval:
            self.function(*self.args, **self.kwargs)
            Timer(self.interval, self.callback).start()

def loop(messageList):
    listLock.acquire()
    for m in messageList:
        writeFunction(m)
    listLock.release()

MESSAGE_LIST = []  # Imagine this is populated with the messages
listLock = Lock()

rt = RepeatingTimer(0.1, loop, MESSAGE_LIST)
# Do other stuff after this
I do understand that the writeFunction will cause some delay, but not more than the 10ms allowed. I essentially need to call the function every 100ms for each message. The messagelist is small, usually less than elements.
The next challenge is to have this work with every 10ms, +/-1ms :P
Yes, the simple waiting is messy and there are better alternatives.
First off, you need a high-precision timer in Python. There are a few alternatives and depending on your OS, you might want to choose the most accurate one.
Second, you must be aware of the basics of preemptive multitasking and understand that there is no high-precision sleep function, and that its actual resolution differs from OS to OS as well. For example, on Windows, the minimal sleep interval might be around 10-13 ms.
And third, remember that it's always possible to wait for a very accurate interval of time (assuming you have a high-resolution timer), but with a trade-off of high CPU load. The technique is called busy waiting:
while True:
    # time.perf_counter() replaces the removed time.clock(); compare with >=, not ==
    if time.perf_counter() >= deadline:
        break
So, the actual solution is to create a hybrid timer. It will use the regular sleep function to wait the main bulk of the interval, and then it'll start probing the high-precision timer in the loop, while doing the sleep(0) trick. Sleep(0) will (depending on the platform) wait the least possible amount of time, releasing the rest of the remaining time slice to other processes and switching the CPU context. Here is a relevant discussion.
The idea is thoroughly described in Ryan Geiss's Timing in Win32 article. It's in C and for the Windows API, but the basic principles apply here as well.
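A minimal sketch of such a hybrid timer, using time.perf_counter() as the high-resolution timer (the 20 ms coarse margin and the demo loop are assumptions, not part of the original answer):

import time

def hybrid_wait(deadline, coarse_margin=0.02):
    """Wait until time.perf_counter() reaches `deadline` (in seconds)."""
    remaining = deadline - time.perf_counter()
    if remaining > coarse_margin:
        time.sleep(remaining - coarse_margin)   # coarse, low-CPU part of the wait
    while time.perf_counter() < deadline:
        time.sleep(0)                           # fine part: yield the time slice, then re-check

# Demo: print a timestamp every 100 ms, ten times, without cumulative drift
start = time.perf_counter()
for i in range(1, 11):
    print('%.4f' % (time.perf_counter() - start))
    hybrid_wait(start + i * 0.1)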
Store the start time. Send the message. Get the end time. Calculate timeTaken=end-start. Convert to FP seconds. Sleep(0.1-timeTaken). Loop back.
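In code, that recipe might look roughly like this (send_message() is a stand-in for the real send, and time.monotonic() is used so clock adjustments don't interfere):

import time

def send_message():
    print('sent at %.3f' % time.monotonic())   # stand-in for the real send

for _ in range(10):                            # "loop back" (bounded here for demo purposes)
    start = time.monotonic()
    send_message()
    taken = time.monotonic() - start           # elapsed time as floating-point seconds
    time.sleep(max(0.0, 0.1 - taken))          # sleep(0.1 - timeTaken), clamped at zero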
try this:
#!/usr/bin/python
import time  # required for the time module
from threading import Timer

def hello(start, interval, count):
    ticks = time.time()
    # schedule the next call relative to the original start time, so errors don't accumulate
    t = Timer(interval - (ticks - start - count * interval), hello, [start, interval, count + 1])
    t.start()
    print("Number of ticks since 12:00am, January 1, 1970:", ticks, " #", count)

dt = 1.25  # interval in sec
t = Timer(dt, hello, [round(time.time()), dt, 0])  # start over at a full second, round only for testing here
t.start()
I have a function that runs a tick() for all players and objects within my game server. I do this by looping through a set every .1 seconds. I need it to be a solid .1. Lots of timing and math depends on this pause being as exact as possible to .1 seconds. To achieve this, I added this to the tick thread:
start_time = time.time()

# loops and code and stuff for tick thread in here...

time_lapsed = time.time() - start_time  # get the time it took to run the above code

if 0.1 - time_lapsed > 0:
    time.sleep(0.1 - time_lapsed)
else:
    print("Server is overloaded!")
    # server lag is greater than .1, so don't sleep, and just eat it on this run.
    # the goal is to never see this.
My question is, is this the best way to do this? If the duration of my loop is 0.01, then time_lapsed == 0.01 ... and then the sleep should only be for 0.09. I ask, because it doesn't seem to be working. I started getting the overloaded server message the other day, and the server was most definitely not overloaded. Any thoughts on a good way to "dynamically" control the sleep? Maybe there's a different way to run code every tenth of a second without sleeping?
It would be better to base your "timing and math" on the amount of time actually passed since the last tick(). Depending on "very exact" timings will be fragile at the best of times.
Update: what I mean is that your tick() method would take an argument, say "t", of the elapsed time since the last call. Then, to do movement you'd store each thing's position (say in pixels) and velocity (in "pixels/second") so the magnitude of its movement for that call to tick() becomes "velocity * t".
This has the additional benefit of decoupling your physics simulation from the frame-rate.
I see pygame mentioned below: their "pygame.time.Clock.tick()" method is meant to be used this way, as it returns the number of milliseconds since the last time you called it.
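A small, self-contained sketch of that idea (the Thing class and its attributes are made up for illustration): each tick scales movement by the measured elapsed time instead of assuming exactly 0.1 seconds.

import time

class Thing:
    def __init__(self):
        self.position = 0.0   # pixels
        self.velocity = 5.0   # pixels per second

things = [Thing()]
last = time.monotonic()
for _ in range(50):                           # a few ticks for demonstration
    time.sleep(0.1)                           # target pace; exactness no longer matters
    now = time.monotonic()
    dt = now - last                           # actual seconds since the last tick
    last = now
    for thing in things:
        thing.position += thing.velocity * dt # movement is frame-rate independent
print('%.1f pixels after roughly 5 seconds' % things[0].position)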
Other Python threads may run in between leaving your thread less time. Also time.time() is subject to system time adjustments; it can be set back.
There is a similar function Clock.tick() in pygame. Its purpose is to limit the maximum frame rate.
To avoid outside influence you could keep an independent frame/turn-based counter to measure the game time.
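A tiny sketch of such a counter (the 0.1 s per turn is taken from the question; everything else is illustrative):

import time

turn = 0
while turn < 50:                 # the real server would loop until shutdown
    turn += 1
    game_time = turn * 0.1       # game time derived from turns only
    time.sleep(0.1)              # pacing; game_time does not depend on the wall clock
print('game time: %.1f s after %d turns' % (game_time, turn))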
What I want is to be able to run a function every second, irrelevant of how long the function takes (it should always be under a second). I've considered a number of options but not sure which is best.
If I just use the delay function it isn't going to take into account the time the function takes to run.
If I time the function and then subtract that from a second and make up the rest in the delay it's not going to take into account the time calculations.
I tried using threading.Timer (I'm not sure about the ins and outs of how it works), but it did seem to run slower than 1 s.
Here's the code I tried for testing threading.timer:
import sys
import threading

def update(i):
    sys.stdout.write(str(i) + '\r')
    sys.stdout.flush()
    print(i)
    i += 1
    threading.Timer(1, update, [i]).start()
Is there a way to do this irrelevant of the length of the time the function takes?
This will do it, and its accuracy won't drift with time.
import time

start_time = time.time()
interval = 1
for i in range(20):
    # max() guards against a tiny negative value on the first pass
    time.sleep(max(0.0, start_time + i * interval - time.time()))
    f()
The approach using a threading.Timer (see code below) should in fact not be used, as a new thread is launched at every interval and this loop can never be stopped cleanly.
# as seen here: https://stackoverflow.com/a/3393759/1025391
def update(i):
    threading.Timer(1, update, [i+1]).start()
    # business logic here
If you want a background loop, it is better to launch a new thread that runs a loop as described in the other answer, one that can receive a stop signal so that you can join() the thread eventually.
This related answer seems to be a great starting point to implement this.
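A sketch of what such a stoppable background loop could look like (threading.Event doubles as an interruptible sleep here; the names are illustrative):

import threading
import time

stop_event = threading.Event()

def loop(interval=1.0):
    start = time.monotonic()
    n = 0
    while not stop_event.is_set():
        # business logic here
        n += 1
        # wait until the next one-second boundary, or return early if stopped
        stop_event.wait(max(0.0, start + n * interval - time.monotonic()))

worker = threading.Thread(target=loop)
worker.start()

time.sleep(5)        # main program does its own work for a while
stop_event.set()     # signal the loop to stop...
worker.join()        # ...and wait for the thread to finish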
If f() always takes less than a second, then to run it on a one-second boundary (without drift):
import time

while True:
    time.sleep(1 - time.monotonic() % 1)
    f()
The idea is from #Dave Rove's answer to a similar question.
To understand how it works, consider an example:
1. time.monotonic() returns 13.7, so time.sleep(0.3) is called
2. f() is called around (± some error) 14 seconds (since the time.monotonic() epoch) and takes 0.1 (< 1) seconds to run
3. time.monotonic() returns around 14.1 seconds and time.sleep(0.9) is called
4. Step 2 is repeated around 15 seconds (since the time.monotonic() epoch); this time f() takes 0.3 (< 1) seconds (note: the value is different from Step 2)
5. time.monotonic() returns around 15.3 and time.sleep(0.7) is called
6. f() is called around 16 seconds and the loop repeats
At each step f() is called on a one second boundary (according to time.monotonic() timer). The errors do not accumulate. There is no drift.
See also: How to run a function periodically in python (using tkinter).
How about this: after each run, sleep for (1.0 minus the time the run took) seconds. You can change the termination condition by changing while True:. Note, though, that if your function takes more than 1 second to run, this will go wrong (sleep() would receive a negative value).
from time import time, sleep

while True:
    startTime = time()
    yourFunction()
    endTime = time() - startTime   # how long yourFunction() took
    sleep(1.0 - endTime)
Threading may be a good choice. The basic concept is as follows.
import threading

i = 1  # interval in seconds

def looper():
    threading.Timer(i, looper).start()
    # put your action here
    foo()

# to start
looper()
I would like to recommend the following code. You can replace the True with any condition if you want.
import time

while True:
    time.sleep(1)  # sleep for 1 second
    func()         # the function you want to trigger
Tell me if it works.