I have a function that runs a tick() for all players and objects within my game server. I do this by looping through a set every 0.1 seconds. I need it to be a solid 0.1 seconds: lots of timing and math depend on this pause being as close to exactly 0.1 seconds as possible. To achieve this, I added this to the tick thread:
import time

start_time = time.time()
# loops and code and stuff for tick thread in here...
time_lapsed = time.time() - start_time  # get the time it took to run the above code
if 0.1 - time_lapsed > 0:
    time.sleep(0.1 - time_lapsed)
else:
    print("Server is overloaded!")
    # server lag is greater than 0.1 s, so don't sleep, and just eat it on this run.
    # the goal is to never see this.
My question is: is this the best way to do this? If the duration of my loop is 0.01, then time_lapsed == 0.01, and the sleep should only be for 0.09. I ask because it doesn't seem to be working: I started getting the overloaded-server message the other day, and the server was most definitely not overloaded. Any thoughts on a good way to "dynamically" control the sleep? Maybe there's a different way to run code every tenth of a second without sleeping?
It would be better to base your "timing and math" on the amount of time actually passed since the last tick(). Depending on "very exact" timings will be fragile at the best of times.
Update: what I mean is that your tick() method would take an argument, say "t", of the elapsed time since the last call. Then, to do movement you'd store each thing's position (say in pixels) and velocity (in "pixels/second") so the magnitude of its movement for that call to tick() becomes "velocity * t".
This has the additional benefit of decoupling your physics simulation from the frame-rate.
I see pygame mentioned below: their pygame.time.Clock.tick() method is meant to be used this way, as it returns the number of milliseconds since the last time you called it (divide by 1000 to get seconds).
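For illustration, a minimal sketch of what such a delta-time tick() might look like (the Thing class, positions, and velocities here are made up for the example, not taken from the question's code):

import time

class Thing:
    def __init__(self, x, velocity):
        self.x = x                 # position in pixels
        self.velocity = velocity   # speed in pixels per second

    def tick(self, t):
        # movement scales with the elapsed time, so the result no
        # longer depends on how precisely the loop keeps its rate
        self.x += self.velocity * t

things = [Thing(0.0, 50.0)]
last = time.monotonic()
for _ in range(50):                # bounded here; a server would loop forever
    now = time.monotonic()
    t = now - last                 # seconds since the last tick
    last = now
    for thing in things:
        thing.tick(t)
    time.sleep(0.1)                # target rate; small drift is now harmless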
Other Python threads may run in between, leaving your thread less time. Also, time.time() is subject to system time adjustments; it can be set back.
There is a similar function Clock.tick() in pygame. Its purpose is to limit the maximum frame rate.
To avoid outside influence you could keep an independent frame/turn-based counter to measure the game time.
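As a rough sketch of that idea, the loop below derives game time from a tick counter and sleeps until absolute deadlines computed from time.monotonic(), so neither clock adjustments nor per-iteration jitter accumulate (all names and values here are illustrative):

import time

TICK = 0.1                        # seconds per game tick
start = time.monotonic()          # monotonic clock cannot be set back
tick_count = 0                    # independent game-time counter

for _ in range(50):               # bounded here; a server would loop forever
    tick_count += 1
    game_time = tick_count * TICK   # game time derives from the counter,
                                    # not from the wall clock
    # ... run tick() for players and objects here ...
    delay = (start + tick_count * TICK) - time.monotonic()
    if delay > 0:
        time.sleep(delay)         # sleep until the absolute deadline
    # if delay <= 0 this tick overran; the next deadline is still
    # anchored to `start`, so the schedule catches up without drift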
We are a team of bachelor students currently working on building a legged robot. At the moment our interface to the robot is written in Python, using an SDK for the master board we are using.
In order to communicate with the master board sdk, we need to send a command every millisecond.
To allow us to send tasks periodically, we have applied the rt-preempt patch to our Linux kernel (Ubuntu 20.04 LTS, kernel 5.10.27-rt36).
We are very new to writing real time applications, and have run into some issues where our task sometimes will have a much smaller time step than specified. In the figure below we have plotted the time of each cycle of the while loop where the command is being sent to the sdk. (x axis is time in seconds and y axis is the elapsed time of an iteration, also in seconds)
As seen in the plot, one step is much smaller than the rest. This seems to happen at the same exact time mark every time we run the script.
[Figure: cyclic_task_plot — elapsed time per cycle vs. time]
We set the priority of the entire script using:
import os

pid = os.getpid()
sched = os.SCHED_FIFO
param = os.sched_param(98)
os.sched_setscheduler(pid, sched, param)
Our cyclic task looks like this:
dt is set to 0.001
while _running:
    if direction:
        q = q + 0.0025
        if (q > np.pi/2).any():
            direction = False
    else:
        q = q - 0.0025
        if (q < -np.pi/2).any():
            direction = True

    master_board.track_reference(q, q_prime)

    # Terminate if duration has passed
    if time.perf_counter() - program_start > duration:
        _running = False

    cycle_time = time.perf_counter() - cycle_start
    time.sleep(dt - cycle_time)
    cycle_start = time.perf_counter()

    timestep_end = time.perf_counter()
    time_per_timestep_array.append(timestep_end - timestep_start)
    timestep_start = time.perf_counter()
We suspect the issue has to do with the way we define the sleep amount. cycle_time is meant to be the time that the calculations above time.sleep() take, so that sleep time + cycle time = 1 ms. However, we are not sure how to do this properly, and we're struggling to find resources on the subject.
How should one properly define a task such as this for a real time application?
We have quite loose requirements (several milliseconds), but it is very important to us that it is deterministic, as this is part of our thesis and we need to understand what is going on.
Any answers to our question or relevant resources are greatly appreciated.
Link to the full code: https://drive.google.com/drive/folders/12KE0EBaLc2rkTZK2FuX_goMF4MgWtknS?usp=sharing
timestep_end = time.perf_counter()
time_per_timestep_array.append(timestep_end - timestep_start)
timestep_start = time.perf_counter()
You're recording the time between timestep_start from the previous cycle and timestep_end from the current cycle. This interval does not accurately represent the cycle time step (even if we assume that no task preemption takes place); it excludes the time consumed by the array append function. Since the outlier seems to happen at the same exact time mark every time we run the script, we could suspect that at this point the array exceeds a certain size where an expensive memory reallocation has to take place. Regardless of the real reason, you should remove such timing inaccuracies by recording the time between cycle starts:
timestep_end = cycle_start
time_per_timestep_array.append(timestep_end - timestep_start)
timestep_start = cycle_start
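Separately from the measurement fix, a common pattern for cyclic tasks is to sleep until an absolute deadline instead of sleeping for dt - cycle_time. That both avoids accumulating drift and avoids passing a negative value to time.sleep() (which raises ValueError) when a cycle overruns. A minimal sketch, not tested on an rt-preempt kernel, with the control code elided:

import time

dt = 0.001                           # 1 ms cycle time
duration = 0.1                       # run briefly for this sketch
start = time.perf_counter()
next_deadline = start + dt

while time.perf_counter() - start < duration:
    # ... control computations and master_board.track_reference(...) go here ...
    delay = next_deadline - time.perf_counter()
    if delay > 0:
        time.sleep(delay)            # wait until the absolute deadline
    # if delay <= 0 the cycle overran; don't sleep, just continue
    next_deadline += dt              # deadlines advance on a fixed 1 ms grid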
I'm making a client/server program, and on the client I want a clock on the GUI that displays the running time. There are plenty of tutorials on here on how to make a clock/timer, and I think I have that part down.
The issue is making one that runs in the background while the rest of the code executes. At the moment I have a loop for my timer that the code doesn't move past, so it just starts counting and doesn't do anything else until the timer is stopped.
I'm guessing I need to find a way to make it run in the background, but I don't know how and I can't find the answer. It has been suggested to me that I should use threading/multithreading, but that looks kinda complicated and I can't quite figure it out.
Is there a better way to do it or is threading the way to go?
You can keep track of the time passed since a certain point by subtracting the start time from the current time, and update the timer value with the result (if a lot of other code runs between updates, the display will refresh less often, so you might want to round the value).
import time

start = time.time()
while doing_stuff:   # doing_stuff, do_stuff and GUI are placeholders
    do_stuff()
    GUI.update_timer(time.time() - start)
I don't see any reason why threading is not a good idea. For one, if you have complex computations to run, threading lets your code and the timer run in the background in tandem. Here's something that may help illustrate my point, with a simple function to square numbers:
import time
import threading

def square():
    start_time = time.time()
    x = int(input('Enter number: '))
    squared = x * x
    print('Square is: %s' % squared)
    print('Time elapsed: %s seconds' % (time.time() - start_time))

set_thread = threading.Thread(target=square)  # run square() on its own thread
set_thread.start()
#Output:
Enter number: 5
Square is: 25
Time elapsed: 1.4820027351379395 seconds
Of course, the simple function above may take only a few seconds. The timer begins when the function is called and stops when the code in the square() block has run. But imagine a situation where your code has much more complex computations, such as inserting multiple values into a database, or sorting a large list of data and writing it to a file at the same time.
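To make the background part concrete, here is a minimal sketch of a timer loop running in a daemon thread that keeps printing elapsed time while the main thread does other work (the 0.1 s update rate and all names are placeholders, not part of the question's code):

import threading
import time

def run_timer(start, stop_event):
    # runs in the background until the main thread sets stop_event
    while not stop_event.is_set():
        elapsed = time.monotonic() - start
        print('Elapsed: %.1f s' % elapsed, end='\r')
        stop_event.wait(0.1)        # acts as an interruptible sleep

stop = threading.Event()
t = threading.Thread(target=run_timer,
                     args=(time.monotonic(), stop),
                     daemon=True)
t.start()

time.sleep(2)                       # stand-in for the rest of the program
stop.set()                          # tell the timer thread to finish
t.join()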
I'm programming a tool that has to display a timer with 1/100th-of-a-second precision in Tkinter with Python. I've tried using the after() method, starting it with self.timerLabel.after(0, self.update_timer):
def update_timer(self):
    self.timer += 0.01
    self.timerLabel.configure(text="%.2f" % self.timer)
    self.timerLabel.after(10, self.update_timer)
The problem is that it runs way slower than expected and I need to know if there's a workaround or some way to have the timer run exactly on time.
Or maybe some way to use the computer's time to display the correct elapsed time on the screen.
Thank you in advance
The most accurate method is probably to find a start time, then every time you call the update timer function just subtract the start time from the current time.
I've attached some example code to show you roughly how it would work. It should be fairly simple to adapt that to use after() for Tkinter.
import time

# Start time
startTime = time.time()

def update_timer():
    # Find the difference and print it
    timeDifference = time.time() - startTime
    print(timeDifference)

# Loop rather than recursing, so Python's recursion limit is never hit
while True:
    update_timer()
    # Sleep an appropriate amount of time before printing the next value
    time.sleep(0.1)
Example fiddle: https://repl.it/Hoqh/0
The after command makes no guarantees, other than it will not execute before the given time period. Tkinter is not a real time system.
You also have to take into account the time it takes to execute the code. For example, let's assume that your command starts precisely on time. In that code you have two lines of code:
self.timer += 0.01
self.timerLabel.configure(text="%.2f" % self.timer)
That code takes some amount of time to execute. It may be only a few microseconds, but it's definitely more than zero microseconds.
You then call after again:
self.timerLabel.after(10, self.update_timer)
The next time it will run will be at least 10 ms after the current time, which is a few microseconds or milliseconds after it was called. So if the first two commands take 1 ms, the next call will happen 11 ms after the first one (10 ms for the delay, plus 1 ms for the code to execute).
You can minimize the delay factor a little by calling after immediately, rather than waiting for the rest of the code to execute. As long as the delay is greater than the time that it takes to execute the other code, you'll notice a slight improvement:
def update_timer(self):
    self.timerLabel.after(10, self.update_timer)
    self.timer += 0.01
    self.timerLabel.configure(text="%.2f" % self.timer)
This still won't be precise, since there are other things that can prevent self.timer from being called exactly 10ms after it was requested. For example, if the window is resized or moved at 9.99ms, tkinter will have to handle the redraw before it can handle the scheduled task.
If you want to account for all of this drift, don't just automatically increment by 10ms each time. Instead, calculate the time between each invocation, and add the delta.
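A rough sketch of that drift correction, assuming a small Tkinter app like the one in the question (the last_update bookkeeping with time.perf_counter() is an illustrative addition, not part of the original code):

import time
import tkinter as tk

class TimerApp:
    def __init__(self, root):
        self.timer = 0.0
        self.last_update = time.perf_counter()
        self.timerLabel = tk.Label(root, text="0.00")
        self.timerLabel.pack()
        self.update_timer()

    def update_timer(self):
        # reschedule first, so the code below doesn't add to the delay
        self.timerLabel.after(10, self.update_timer)
        now = time.perf_counter()
        # add the time that actually elapsed instead of a fixed 0.01,
        # so scheduling jitter in after() cannot accumulate as drift
        self.timer += now - self.last_update
        self.last_update = now
        self.timerLabel.configure(text="%.2f" % self.timer)

root = tk.Tk()
app = TimerApp(root)
root.mainloop()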
I am getting a timestamp every time a key is pressed like this:
import time

init_timestamp = time.time()
while True:
    c = getch()                           # getch as linked below
    offset = time.time() - init_timestamp
    print("%s,%s" % (c, offset), file=f)  # f is an already-open log file
(getch from this answer).
I am verifying the timestamps against an audio recording of me actually typing the keys. After lining the first timestamp up with the waveform, subsequent timestamps drift slightly but consistently. By this I mean that the saved timestamps are later than the keypress waveforms, and get later and later as time goes on.
I am reasonably sure the waveform timing is correct (i.e. the recording is not fast or slow), because in the recording I also included the ticking of a very accurate clock which lines up perfectly with the second markers.
I am aware that there are unavoidable limits to the accuracy of time.time(), but this does not seem to account for what I'm seeing - if it was equally wrong on both sides that would be acceptable, but I do not want it to gradually diverge more and more from the truth.
Why would I be seeing this drifting behaviour and what can I do to avoid it?
Just solved this by using time.monotonic() instead of time.time(). time.time() seems to use gettimeofday (at least here it does) which is apparently really bad for measuring walltime differences because of NTP syncing issues:
gettimeofday() and time() should only be used to get the current time if the current wall-clock time is actually what you want. They should never be used to measure time or schedule an event X time into the future.
You usually aren't running NTP on your wristwatch, so it probably won't jump a second or two (or 15 minutes) in a random direction because it happened to sync up against a proper clock at that point. Good NTP implementations try to not make the time jump like this. They instead make the clock go faster or slower so that it will drift to the correct time. But while it's drifting you either have a clock that's going too fast or too slow. It's not measuring the passage of time properly.
(link). So basically measuring differences between time.time() calls is a bad idea.
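The fix itself is just swapping the clock; something like:

import time

init_timestamp = time.monotonic()   # monotonic: unaffected by NTP or clock changes
# ... then, for each keypress:
offset = time.monotonic() - init_timestamp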
Depending on which OS you are using you will either need to use time.time() or time.clock().
For Windows you will need to use time.clock; this gives you wall-clock seconds as a float. If I remember correctly, time.time() on Windows is only accurate to within 16 ms.
For POSIX systems (Linux, macOS) you should be using time.time(), which returns the number of seconds since the epoch as a float.
In your code, add the following to make your application a little more cross-system compatible.
import os

if os.name == 'posix':
    from time import time as get_time
else:
    from time import clock as get_time

# now use get_time() to return the timestamp
init_timestamp = get_time()
while True:
    c = getch()
    offset = get_time() - init_timestamp
    print("%s,%s" % (c, offset), file=f)
...
What I want is to be able to run a function every second, regardless of how long the function takes (it should always be under a second). I've considered a number of options but am not sure which is best.
If I just use the delay function it isn't going to take into account the time the function takes to run.
If I time the function and then subtract that from a second and make up the rest in the delay it's not going to take into account the time calculations.
I tried using threading.Timer (I'm not sure about the ins and outs of how it works), but it did seem to be slower than the 1 s.
Here's the code I tried for testing threading.timer:
import sys
import threading

def update(i):
    sys.stdout.write(str(i) + '\r')
    sys.stdout.flush()
    print(i)
    i += 1
    threading.Timer(1, update, [i]).start()
Is there a way to do this regardless of how long the function takes?
This will do it, and its accuracy won't drift with time.
import time

start_time = time.time()
interval = 1

for i in range(20):
    # sleep until the next absolute deadline; the max() guard avoids a
    # ValueError from a negative argument if f() ever overruns
    time.sleep(max(0.0, start_time + i * interval - time.time()))
    f()   # f() stands for the function you want to run
The approach using a threading.Timer (see code below) should in fact not be used, as a new thread is launched at every interval and this loop can never be stopped cleanly.
# as seen here: https://stackoverflow.com/a/3393759/1025391
def update(i):
    threading.Timer(1, update, [i+1]).start()
    # business logic here
If you want a background loop, it is better to launch a new thread that runs a loop as described in the other answer. That thread can receive a stop signal, so that you can join() it eventually.
This related answer seems to be a great starting point to implement this.
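For reference, a minimal sketch of such a stoppable background loop built on threading.Event (the one-second interval and all names are placeholders):

import threading
import time

def every_second(stop_event, func):
    next_call = time.monotonic()
    while not stop_event.is_set():
        func()
        next_call += 1.0
        # wait() returns early if stop_event gets set, so the thread
        # can be stopped mid-interval and joined cleanly
        stop_event.wait(max(0.0, next_call - time.monotonic()))

stop = threading.Event()
worker = threading.Thread(target=every_second,
                          args=(stop, lambda: print(time.monotonic())))
worker.start()

time.sleep(5)      # the main program does its own work here
stop.set()         # signal the loop to finish
worker.join()      # and wait for it to exit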
If f() always takes less than a second, then to run it on a one-second boundary (without drift):
import time

while True:
    time.sleep(1 - time.monotonic() % 1)
    f()
The idea is from #Dave Rove's answer to a similar question.
To understand how it works, consider an example:
1. time.monotonic() returns 13.7 and time.sleep(0.3) is called
2. f() is called around (± some error) 14 seconds (since the time.monotonic() epoch)
3. f() runs and takes 0.1 (< 1) seconds
4. time.monotonic() returns around 14.1 seconds and time.sleep(0.9) is called
5. Step 2 is repeated around 15 seconds (since the time.monotonic() epoch)
6. f() runs and takes 0.3 (< 1) seconds (note: the value is different from step 3)
7. time.monotonic() returns around 15.3 and time.sleep(0.7) is called
8. f() is called around 16 seconds and the loop is repeated
At each step f() is called on a one second boundary (according to time.monotonic() timer). The errors do not accumulate. There is no drift.
See also: How to run a function periodically in python (using tkinter).
How about this: after each run, sleep for (1.0 - elapsed time) seconds. You can change the termination condition by changing while True:. Note that if your function takes more than 1 second to run, this will go wrong.
from time import time, sleep

while True:
    startTime = time()
    yourFunction()
    endTime = time() - startTime
    sleep(1.0 - endTime)
Threading may be a good choice. The basic concept is as follows.
import threading

def looper():
    # i is the interval in seconds
    threading.Timer(i, looper).start()
    # put your action here
    foo()

# to start
looper()
I would like to recommend the following code. You can replace the True with any condition if you want.
import time

while True:
    time.sleep(1)   # sleep for 1 second
    func()          # the function you want to trigger
Tell me if it works.