Properly enforce a pre-defined loop time - Python

I am trying to measure a voltage through an acquisition card at a precise frequency, but I can't find an effective way to force my loop to run at that frequency.
For now I have something like this:
import time

def get_data(frequency):
    last_t = time.time()
    while True:
        while (time.time() - last_t) < (1. / frequency):  # waiting loop
            time.sleep(1. / (100 * frequency))  # wait for 1/100 of the desired loop time
        last_t = time.time()
        data = sensor.acquire()
        # do stuff with the data
1/ This is not very precise, as it only enforces that the frequency will not be higher than the desired one (it can still be much lower).
2/ This is expensive in CPU time, because the waiting loop spins very quickly, and I don't know how to improve it.
Any ideas for improving one (or both) of these issues would be much appreciated.
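One common way to address both issues (not part of the original question, just a sketch) is to schedule each iteration against an absolute deadline with time.perf_counter and sleep for the remaining part of the interval, so the loop does not drift and barely spins. The sensor object is assumed to exist, as in the question.

import time

def get_data(frequency):
    period = 1.0 / frequency
    next_t = time.perf_counter()
    while True:
        data = sensor.acquire()  # assumed to exist, as in the question
        # do stuff with the data
        next_t += period  # absolute deadline: timing errors do not accumulate
        remaining = next_t - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)  # sleep instead of spinning, saving CPU
        else:
            next_t = time.perf_counter()  # fell behind; resynchronize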

Related

Iterate through a loop at timed intervals - Python 3.0

I thought I'd sign up to help me learn more coding with Python. I recently signed up to Codecademy and have been doing the online Python course, which has helped a lot.
I've decided to give myself a small project to continue learning, but have run into a problem (I searched on here and still found no help).
I want to write a small function for a MIDI step sequencer; for simplicity I'm omitting MIDI for now and looking at it in the most logical way I can.
What I want to do is:
input a set of midi note numbers
append these to a list
loop through this list at a timed interval (BPM) - for example 60,000 / 120 bpm = 500ms between quarter notes / 24 PPQN = 20.8333ms per pulse.
The trouble I have is I can't find any way to iterate through a list in the time domain. I have looked at the time.sleep function but read that it is not accurate enough. Is there another method? I don't want to use any libraries.
Any pointers would be a huge help, as I'm struggling to find any resources on running through a loop with a specified amount of time between each value.
Could you say why sleep is not accurate enough?
If you wish, you can keep track of the elapsed time yourself using something like time.thread_time_ns,
so:

import time

def sleep(pause_time):
    # pause_time is in nanoseconds; this busy-waits, so it burns CPU while waiting
    initial_time = time.thread_time_ns()
    while time.thread_time_ns() - initial_time < pause_time:
        pass

This gives you your own sleep function.
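For the 20.8333 ms pulse computed in the question, a call would look something like this (the argument is in nanoseconds):

sleep(20_833_333)  # roughly 20.83 ms, expressed in nanoseconds

Note that because the function busy-waits, it keeps one CPU core fully occupied while it waits.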
The reason time.sleep is not accurate enough for you might be the way you are using it. Try this:

import time

sleeptime = 0.0208333  # in seconds
while True:  # your loop here
    start = time.time()
    # do stuff here
    time.sleep(max(sleeptime - (time.time() - start), 0))

I have used this method to limit the frame rate in computer vision processing. All it does is account for the loop iteration time in the sleep, so that the total time for each loop is as close to the target as possible. It might work for you too. Hope this helps!
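A minimal sketch of that idea applied to the sequencer in the question (the note numbers and the print stand-in are placeholders, not from the original posts): scheduling each pulse against an absolute deadline keeps small sleep inaccuracies from accumulating.

import time

notes = [60, 62, 64, 65]        # placeholder MIDI note numbers
pulse = 60.0 / 120 / 24         # 120 bpm, 24 PPQN -> about 20.83 ms per pulse

next_tick = time.perf_counter()
for note in notes:
    print(note)                 # stand-in for sending the MIDI note
    next_tick += pulse          # absolute deadline, so drift does not build up
    delay = next_tick - time.perf_counter()
    if delay > 0:
        time.sleep(delay)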
Well of course there's a much more accurate way to do this, which would be to write all your code in assembly and finely adjust the clock speed of your CPU so that each iteration takes a fixed amount of time, but this might be too impractical for your use case.

Why is the execution time of functions not constant?

In my university class I studied the order of growth of functions theoretically, and I tried implementing it practically at home. Although the order of growth turned out to be exactly the same as in the textbooks, the execution times change every single time I run the program. Why is that?
Source Code
import time
import math
from tabulate import tabulate

n = eval(input("Enter the value of n: "))

# constant
t1 = time.time()
a = 12
t2 = time.time()
A = t2 - t1

# n
t3 = time.time()
b = n
t4 = time.time()
B = t4 - t3

# log n
t5 = time.time()
c = math.log10(n)
t6 = time.time()
C = t6 - t5

# n log n
t7 = time.time()
d = n * math.log10(n)
t8 = time.time()
D = t8 - t7

# n**2
t9 = time.time()
e = n ** 2
t10 = time.time()
E = t10 - t9

# 2**n
t11 = time.time()
f = 2 ** n
t12 = time.time()
F = t12 - t11

print(tabulate([['constant', a, A], ['n', b, B], ['logn', c, C], ['nlogn', d, D], ['n**2', e, E], ['2**n', f, F]],
               headers=['Function', 'Value', 'Time']))
templist = [A, B, C, D, E, F]
print("The time order in ascending order is: ", sorted(templist, key=int))
First Execution
naufil@naufil-Inspiron-7559:~/Desktop/python$ python3 time_order.py
Enter the value of n: 100
Function Value Time
---------- --------------- -----------
constant 12 2.14577e-06
n 100 1.43051e-06
logn 2 4.1008e-05
nlogn 200 3.57628e-06
n**2 10000 3.33786e-06
2**n 1.26765e+30 3.8147e-06
The time order in ascending order is: [2.1457672119140625e-06, 1.430511474609375e-06, 4.100799560546875e-05, 3.5762786865234375e-06, 3.337860107421875e-06, 3.814697265625e-06]
Second Execution
naufil@naufil-Inspiron-7559:~/Desktop/python$ python3 time_order.py
Enter the value of n: 100
Function Value Time
---------- --------------- -----------
constant 12 2.14577e-06
n 100 1.19209e-06
logn 2 4.64916e-05
nlogn 200 4.05312e-06
n**2 10000 3.33786e-06
2**n 1.26765e+30 3.57628e-06
The time order in ascending order is: [2.1457672119140625e-06, 1.1920928955078125e-06, 4.649162292480469e-05, 4.0531158447265625e-06, 3.337860107421875e-06, 3.5762786865234375e-06]
As other comments and answers have rightly pointed out, the differences in execution time that you observe come from the way operating systems work. But doing rigorous measurements is a complicated matter, so let me elaborate a bit and point you to where you might direct your experimentation.
What your OS does behind your back
You can see the OS as a conductor and programs as instrument players, and imagine there are only so many instruments that can play at the same time. The conductor must therefore choose at each moment who should play, while also making sure nobody is frustrated in the end! In the same way, the OS is constantly in charge of choosing which programs to execute, meaning which programs to dedicate CPU time to. The number of programs (or rather processes) that can execute at the same time is usually limited by the number of cores in your processor.
In practice, the way the OS chooses what to execute is a very complex and fascinating subject, which relies on experimentation-backed heuristics. (Read more here.) What you have to understand is that there is hardly any way for you to alter this behavior, and no way to guarantee the same execution time between two calls.
Using linux's time command
Calling Python's time like you do measures the physical (wall-clock) time elapsed between two calls, so because of what we have said, you don't only measure the time spent executing your program. If you want a better sense of how much time the OS actually dedicated to your program, you can use the Linux command time. The user time will give you the actual CPU time dedicated to the execution of your program. Check out this thread for more info. But understand that this time is subject to oscillations as well!
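You can see the same distinction from within Python itself (a small sketch, not part of the original answer): time.perf_counter measures wall-clock time, while time.process_time only counts CPU time charged to the process.

import time

wall_start = time.perf_counter()
cpu_start = time.process_time()

total = sum(i * i for i in range(10**6))   # some CPU-bound work
time.sleep(0.5)                            # sleeping costs wall time but no CPU time

print("wall time:", time.perf_counter() - wall_start)   # ~0.5 s plus the computation
print("CPU time: ", time.process_time() - cpu_start)    # only the computation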
What wisdom are you trying to draw from your measurements?
Finally, you should ask yourself whether the exact time is really what you want. Do you care about the value itself, or do you want to demonstrate a behavior?
Usually, what is done to measure performance is to average the execution times of repeated calls. This way, the effects of the OS's scheduling should be averaged out. (You can see that as building an unbiased estimator for a random process.) From what I understand, you are trying to show the difference in execution times for algorithms with different complexities, so the actual execution time is not very relevant; what matters is the relative order. That is why averaging multiple calls will reduce the variance of the observation, and you will be able to make stronger statements about the relative execution times.
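For example, a small sketch (not part of the original answer) using the standard library's timeit module to average many repetitions of statements similar to the ones in the question, with n fixed to 100:

import timeit

n = 100
setup = "import math; n = %d" % n

for label, stmt in [('constant', 'a = 12'),
                    ('logn', 'math.log10(n)'),
                    ('nlogn', 'n * math.log10(n)'),
                    ('n**2', 'n ** 2'),
                    ('2**n', '2 ** n')]:
    # 5 rounds, each running the statement 100000 times
    times = timeit.repeat(stmt, setup=setup, repeat=5, number=100_000)
    print(label, sum(times) / len(times) / 100_000)  # mean time per statement, in seconds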
You should address this question to your operating system. What else runs on your computer? List the various processes and see how many there are; all it takes is a process switch, or even a context switch, to alter your execution time. Among other things, calling time.time can trigger such a switch, since it involves a call into the operating system.
It also depends on which system support routines are already loaded when you call them, since many of those calls are implicit or secondary. If you need to allocate more memory for a particular instruction because another process took the last of your RAM and then got swapped out ... well, you get the idea, I hope.

Adjust sleep() to ensure a constant rate as accurately as possible

I have a function that takes x amount of time to run; this time follows approximately a normal distribution. Now I want this function to be executed y times per minute or per hour. How would I adjust the delays after each call to achieve that?
I am guessing that I would have to time and average the last 10 (maybe?) calls, see how long they took, and then adjust the delays based on that, similar to how fps is controlled. I can't, however, wrap my head around exactly how to do that.
Thank you.
pseudocode:

import time

def repeat_x_per_hour(reps_per_hour):
    while True:
        start_time = time.time()
        function(a, b, c)                       # the call to pace (placeholder)
        elapsed_time = time.time() - start_time
        sleep(some_func_of_elapsed_time(elapsed_time, reps_per_hour))
So I found this thread and it's pretty much exactly what I needed!
https://gist.github.com/gregburek/1441055
Thanks @SuperStew for suggesting looking into rate limiting tools.
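A minimal way to fill in the pseudocode above (just a sketch, not from the original posts; func stands for the call being paced): derive the target interval from reps_per_hour and sleep for whatever part of it the call did not already use.

import time

def repeat_x_per_hour(func, reps_per_hour):
    interval = 3600.0 / reps_per_hour           # target seconds between calls
    while True:
        start = time.time()
        func()
        elapsed = time.time() - start
        time.sleep(max(interval - elapsed, 0))  # skip the sleep if the call overran

repeat_x_per_hour(lambda: None, 120)            # example: 120 calls per hour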

Wasting CPU cycles with Python

I am trying to create a simple app that wastes CPU cycles for multi-core research. The one I created takes up 100% of a core. I want it to be around 30%, 60%, or 70% instead; what adjustments should I make to achieve this? Thanks in advance.
Current version:
a = 999999999
while True:
    a = a / 2
Starting at a large number isn't necessary, as dividing a number by 2 will quickly end up as 0/2 over and over again anyway. Besides, you don't have to actually do anything in a loop to consume CPU cycles - the mere action of looping is enough. This is why any infinite loop, even something as simple as while 1: pass, will eat up an entire CPU core until killed.
To avoid taking up an entire core, use time.sleep to pause execution of the thread for a certain period of time. This function takes a single argument representing the time in seconds for the thread to sleep. It accepts a floating-point number.
import time

while 1:
    time.sleep(0.0001)
Simply run an instance of this script (with an appropriate sleep time for the workload you'd like to put on your particular system) for each core you'd like to test.
Note that some operating systems may not support sleep times of less than one millisecond, causing shorter sleep times to come through as zero, making them incompatible with this strategy. See Python: high precision time.sleep and How accurate is python's time.sleep()? for more.
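If you want to target a rough percentage directly, one approach (not from the original answer, just a sketch assuming a busy/sleep duty cycle is acceptable) is to alternate a short burst of work with a proportional sleep:

import time

def burn(load=0.3, period=0.1):
    """Keep one core busy roughly `load` of the time, in cycles of `period` seconds."""
    while True:
        busy_until = time.perf_counter() + load * period
        while time.perf_counter() < busy_until:
            pass                                # busy part of the cycle
        time.sleep((1 - load) * period)         # idle part of the cycle

burn(0.6)   # aim for roughly 60% of one core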

How to keep track of execution time?

The Setup
I'm working on training some neural networks. These have lots of hyperparameters, and typically you see how each set of hyperparameters performs, then pick your favorite. This is often done by (say) training a network with the given parameters for n epochs, then evaluating its performance, yielding a numerical score for each set of parameters and allowing you to pick the best.
There's a problem with this, though. Some sets of parameters let you go through more epochs more quickly, but benefit less from each epoch. Additionally, pretty much any set of parameters will do better given more epochs, so given infinite time, they would all do really well (to a point, but that's not the point right now).
The Problem
What I would prefer to do is to let each process figure out how long it's been running, and cut itself off (gracefully) after a specified number of seconds. The problem is, I would like to multithread this, so just because the program has been running for 60 seconds doesn't mean the process has had 60 seconds of fair CPU time.
So how can I measure how much time the process has actually had available to it, within the process itself?
The time.clock() method gives system time, which is problematic (as above).
The timeit module seems a bit better, but it's external to the script, so the process wouldn't know when to stop.
Is there a better way? Am I wrong about one of the above ways?
Specific Question
How can a python process see how many seconds it has been allocated so far? Not the amount of time that has passed, but how many seconds it itself has been allowed to execute for?
Use os.times().
This gives you the user and system times for the current process. Below is an example limiting the amount of user time.
import os

start = os.times()
limit = 5  # seconds of user (CPU) time allowed

while True:
    # your code here
    check = os.times()
    if check.user - start.user > limit:
        break
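An equivalent sketch (not part of the original answer) uses time.process_time, which reports the combined user and system CPU time of the current process:

import time

start = time.process_time()
limit = 5  # seconds of CPU time allowed

while True:
    # your code here
    if time.process_time() - start > limit:
        break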
