I signed up to help me learn more coding with Python. I recently signed up to Codecademy and have been doing the online Python course, which has helped a lot.
I've decided to give myself a small project to continue learning, but I have run into a problem (I searched on here and still found no help).
I want to write a small function for a MIDI step sequencer; for simplicity I'm omitting MIDI for now and looking at it in the most logical way I can.
What I want to do is:
input a set of midi note numbers
append these to a list
loop through this list at a timed interval derived from the BPM: for example, 60,000 ms / 120 BPM = 500 ms between quarter notes, and 500 ms / 24 PPQN ≈ 20.8333 ms per pulse (see the sketch below).
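That arithmetic is easy to sanity-check; a minimal sketch (the 24 PPQN figure comes from the question, the helper name is mine):

def ms_per_pulse(bpm, ppqn=24):
    # 60,000 ms per minute divided by BPM gives ms per quarter note;
    # dividing again by pulses per quarter note gives ms per clock pulse
    return 60_000.0 / bpm / ppqn

print(ms_per_pulse(120))  # 20.833333333333332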
The trouble I have is that I can't find any way to iterate through a list in the time domain. I have looked at the time.sleep function but read that it is not accurate enough. Is there another method? I don't want to use any external libraries.
Any pointers would be a huge help, as I'm struggling to find resources on running through a loop with a specified amount of time between each value.
Could you say why sleep is not accurate enough?
If you wish, you can keep track of the elapsed time yourself using something like time.thread_time_ns, so:
import time

def sleep(pause_time):
    # pause_time is in nanoseconds; thread_time_ns() counts this
    # thread's CPU time, which a busy loop keeps ticking
    initial_time = time.thread_time_ns()
    while time.thread_time_ns() - initial_time < pause_time:
        pass
So this is your own sleep function
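Note that this busy-waits, pinning a CPU core for the whole pause. As a usage example, one 24 PPQN pulse at 120 BPM (the question's figure, converted to nanoseconds) would be:

sleep(20_833_333)  # ~20.8333 ms expressed in nanoseconds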
The reason time.sleep might not be accurate for you might be due to the way you are using it. Try this:
import time

sleeptime = 0.0208333  # target loop period in seconds
while True:  # your loop here
    start = time.time()
    # do stuff here
    # sleep off whatever is left of the period, never a negative amount
    time.sleep(max(sleeptime - (time.time() - start), 0))
I have used this method to limit frame rate in computer vision processing. All it does is account for the loop iteration time in the sleep call, so that the duration of each loop is as close to the target as possible. It might work for you too. Hope this helps!
Well of course there's a much more accurate way to do this, which would be to write all your code in assembly and finely adjust the clock speed of your CPU so that each iteration takes a fixed amount of time, but this might be too impractical for your use case.
Is switching to C an adequate approach to get the combinations faster than in Python, or am I on the wrong track? I only "speak" Python and hope for some guidance to decide on the next programming steps for my self-chosen learning project. I am working on a data science project, and based on your answers I will either invite a computer scientist to the project or drop my approach.
I have a list of 69 strings where I need all possible combinations of 8 elements.
At the moment I can do this in python with itertools.combinations()
for i in itertools.combinations(DictAthleteObjects.keys(), 8):
    # do stuff here on instances of classes
In Python, itertools.combinations works perfectly fine for a few combinations, but due to the huge number of combinations it is not time efficient, and it sometimes crashes (I think from running out of memory) when I don't break the loop after a few iterations. Generally the time complexity is very large.
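For scale, the exact count is available in the standard library (math.comb needs Python 3.8+):

import math
print(math.comb(69, 8))  # 8361453672, i.e. roughly 8.36 billion combinations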
According to this StackOverflow discussion, it could be a valid approach to generate the combinations in C, and to move the rest of the Python code to C as well, because it's much faster.
On the other hand, I have received a comment that itertools.combinations uses C itself, but I cannot find any sources on that.
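The comment is right, at least for CPython: itertools is a built-in C module, which you can verify directly:

import itertools
# Built-in C modules report 'built-in' as their origin rather than a .py path
print(itertools.__spec__.origin)     # -> 'built-in' on CPython
print(type(itertools.combinations))  # -> <class 'type'>, a C-implemented type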
The comments you've received so far basically answer your question (profiling, minor gains for lots of C hassle, code redesign) but in the past handful of days I went through a similar dilemma for a home project and wanted to throw in my thoughts.
On the profiling front I just used Python's time module and a global start time variable to get basic benchmarks as my program ran. For highly complex scenarios I recommend using one of the Python profilers mentioned in the comments instead.
import time

start_time = time.process_time()
# ... stuff ...
print(f'runtime(sec): {time.process_time() - start_time}')
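If you want more than a single stopwatch number, the standard-library profiler gives a per-function breakdown with no extra dependencies (the profiled expression here is just a placeholder):

import cProfile
# Sort the report by cumulative time to see which call trees dominate;
# from a shell, `python -m cProfile -s cumtime your_script.py` is equivalent
cProfile.run('sum(x * x for x in range(1_000_000))', sort='cumtime')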
This stopwatch approach let me get the 10,000 ft view of how long my code was taking to do various things; then I found a workable size of input data that didn't take too long to run but was a good representation of the larger dataset, and tried to make incremental improvements.
After a lot of messing around I figured out what the most expensive things needed to be done were, and at the top of the list was generating those unique combinations. So what I ended up doing was splitting things up into a pipeline of sorts that allowed me to cut down the total runtime to the amount of time it takes to perform the most expensive work.
In the case of itertools.combinations, it's not actually generating the real output values, so it returns insanely quickly; things get interesting when the for loop actually produces those values. On my machine it took about 3 ms to get back a generator that would produce ~31.2B combinations (the pair count alone) if I looped over it.
# Code to check how long itertools.combinations() takes to return
import itertools
import math
import time

data = list(range(250000))
for num_items in range(2, 9):
    start_time = time.process_time()
    g = itertools.combinations(data, num_items)
    ncombos = math.comb(len(data), num_items)  # exact count for this size
    print(f'combo_sz:{num_items} num_combos:{ncombos} elapsed(sec):{time.process_time() - start_time}')
In my case I couldn't find a way to nicely split up a generator into parts so I decided to use the multiprocessing module (Process, Queue, Lock) to pass off data as it came in (this saves big on memory as well). In summary it was really helpful to look at things from the subtask perspective because each subtask may require something different.
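Here is a minimal sketch of that producer/consumer idea (not my original pipeline; the worker count, queue size, and input data are placeholders):

import itertools
from multiprocessing import Process, Queue

SENTINEL = None  # tells a worker to stop

def worker(q):
    while True:
        combo = q.get()
        if combo is SENTINEL:
            break
        # do stuff with combo here

def main():
    q = Queue(maxsize=10_000)  # bounded queue keeps memory use flat
    workers = [Process(target=worker, args=(q,)) for _ in range(4)]
    for p in workers:
        p.start()
    for combo in itertools.combinations(range(100), 3):
        q.put(combo)  # blocks while the queue is full
    for _ in workers:
        q.put(SENTINEL)  # one stop signal per worker
    for p in workers:
        p.join()

if __name__ == '__main__':
    main()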
Also don't be like me and skim the documentation too quickly haha, many problems can be resolved by reading that stuff. I hope you found something useful out of this reply, good luck!
I have a function that takes x amount of time to run; this time follows approximately a normal distribution. Now I want this function to be executed y times per minute or per hour. How would I adjust the delays after each call to achieve that?
I am guessing that I would have to time and average the last 10 (maybe?) calls, see how long they took, and then adjust the delays based on that, similar to how FPS is controlled. I can't, however, wrap my head around exactly how to do that.
Thank you.
A runnable version of the idea:

import time

def repeat_x_per_hour(reps_per_hour):
    period = 3600.0 / reps_per_hour  # target seconds between calls
    while True:
        start_time = time.time()
        function(a, b, c)  # your function here
        elapsed_time = time.time() - start_time
        # sleep off whatever remains of the period after the call
        time.sleep(max(period - elapsed_time, 0))
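If you want the moving-average variant described in the question (smoothing over the last 10 calls), here is a sketch; the paced_call helper is made up for illustration:

import time
from collections import deque

durations = deque(maxlen=10)  # rolling window of the last 10 call times

def paced_call(func, period):
    # run func, record how long it took, and sleep off the remainder
    # of the period based on the recent average duration
    start = time.monotonic()
    func()
    durations.append(time.monotonic() - start)
    avg = sum(durations) / len(durations)
    time.sleep(max(period - avg, 0.0))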
So I found this thread and it's pretty much exactly what I needed!
https://gist.github.com/gregburek/1441055
Thanks #SuperStew for suggesting looking into rate limiting tools.
The Setup
I'm working on training some neural networks. These have lots of hyperparameters, and typically you see how each set of hyperparameters performs, then pick your favorite. This is often done by (say) training a network with the given parameters for n epochs, then evaluating its performance, yielding a numerical score for each set of parameters and allowing you to pick the best.
There's a problem with this, though. Some sets of parameters let you go through more epochs more quickly, but benefit less from each epoch. Additionally, pretty much any set of parameters will always do better, given more epochs, so given infinite time, they would all do really well (to a point, but that's not the point right now).
The Problem
What I would prefer to do is to let each process figure out how long it's been running, and cut itself off (gracefully) after a specified number of seconds. The problem is, I would like to multithread this, so just because the program has been running for 60 seconds doesn't mean the process has had 60 seconds of fair CPU time.
So how can I measure how much time the process has actually had available to it, within the process itself?
The time.clock() method gives system time, which is problematic (as above).
The timeit module seems a bit better, but it's external to the script, so the process wouldn't know when to stop.
Is there a better way? Am I wrong about one of the above ways?
Specific Question
How can a python process see how many seconds it has been allocated so far? Not the amount of time that has passed, but how many seconds it itself has been allowed to execute for?
Use os.times().
This gives you the user and system times for the current process. Below is an example limiting the amount of user time.
import os

start = os.times()
limit = 5  # seconds of user CPU time allowed
while True:
    # your code here
    check = os.times()
    if check.user - start.user > limit:
        break
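If you only need a single number rather than the user/system split, time.process_time() (Python 3.3+) returns the combined CPU time of the current process:

import time

start = time.process_time()
# ... your code here ...
elapsed_cpu = time.process_time() - start  # CPU seconds actually consumed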
I am trying to measure a voltage through an acquisition card at a precise frequency, and I can't find effective tools to force my loop to run at that frequency.
For now I have something like this:
import time

def get_data(frequency):
    last_t = time.time()
    while True:
        while (time.time() - last_t) < (1.0 / frequency):  # waiting loop
            time.sleep(1.0 / (100 * frequency))  # wait 1/100 of the desired loop time
        last_t = time.time()
        data = sensor.acquire()  # read from the acquisition card
        # do stuff with the data
1/ This is not very precise, as it only enforces that the frequency will not be higher than the wanted frequency (it can still be much lower).
2/ This is expensive in CPU time, because the waiting loop iterates very quickly, and I don't know how to improve it.
Any ideas for improving one (or both) of these issues would be much appreciated.
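One common improvement (a sketch, not an answer from the thread) is to schedule against absolute deadlines with time.monotonic and sleep most of each period in a single call; this removes the hot waiting loop and keeps late iterations from accumulating drift:

import time

def get_data(frequency):
    period = 1.0 / frequency
    next_t = time.monotonic() + period
    while True:
        data = sensor.acquire()  # the same acquisition-card read as above
        # do stuff with the data
        time.sleep(max(next_t - time.monotonic(), 0))
        next_t += period  # absolute deadline, so timing errors don't accumulate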
I did some research before posting but seem to be at a loss (I'm not too experienced in coding).
I am attempting to generate or compute a random number for a certain time interval with Python. I'm not looking for full code; I want help using the time library, if that is the correct one to use.
Pseudo-code:
Allow Python [PC] to compute random numbers for 3 seconds
------> Store the computed value (I can handle this)
I would then use the randomly generated value to access a Python list (which would also be generated via random number generation, but I can figure that out).
I'm not sure why you want to do this, but here's how to compute many random numbers, throwing most of them away, and then using the last one after 3 seconds have elapsed.
import random
import time

start = time.perf_counter()  # time.clock() is gone as of Python 3.8; see the note at the end
while time.perf_counter() - start < 3:
    random_number = random.randint(0, 100)
print(random_number)
This pointlessly throws away about 2 million perfectly good random numbers on my machine.
(And, as abarnert points out, this also maxes out one CPU core for the whole 3 seconds in a busy loop, which is very, very wasteful, but I think it's what you were asking for?)
EDIT: Originally updated to use time.clock instead of time.time, as suggested by abarnert again (thanks), because it gave better resolution across platforms and didn't suffer from problems when the system time is altered while the program is running. Since time.clock has been removed from Python, the snippet above now uses time.perf_counter, which has the same advantages (see the note at the end of this page).
First, you didn't say what kind of random number you want to generate, but given that your example is 10, I assume it's an integer in some range—let's say you're calling random.randrange(30).
Now, you want to compute a number every second for 3 seconds, then keep the last one. I don't know why you'd even want to do this, but you can do it like this:
import random
import time

for i in range(3):
    number = random.randrange(30)
    time.sleep(1.0)
At the end of 3 seconds, number will be the third random number generated.
The key here is that, to do something once per second (in a synchronous program—don't do this in a GUI or server!)—you just call time.sleep.
If the operation you were doing took a significant chunk of a second (or longer), this wouldn't be appropriate. Instead, you'd want to compute the start time, and sleep until a second after that:
t0 = time.monotonic()
for i in range(3):
    number = random.randrange(30)
    t0 += 1
    time.sleep(t0 - time.monotonic())
Note that I've used time.monotonic here. This function is specifically designed for this kind of use case. It returns as much precision as can be gotten with reasonable efficiency (in particular, unlike time.time, it doesn't give you 1s precision on some platforms), and it guarantees that you'll never go backward even if, e.g., you change the system clock in the middle of the program. If you're using 3.2 or earlier, either look through the docs for the best alternative (possibly using time.clock()), or look into using ctypes to call the appropriate platform native function.
But in this case, random.randrange is going to take somewhere on the order of a microsecond, which is so much less time than the minimum resolution of most systems' simple timers that there's no reason to do such a thing.
If you want to take 3 seconds to get a random number because you're concerned about the quality of the randomness, you can use os.urandom() to generate the value. If all you really want is to select an item from your list at random, use random.choice().
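For instance (the list here is a stand-in for the asker's generated one):

import random
import secrets

items = ['alpha', 'beta', 'gamma', 'delta']  # hypothetical list
print(random.choice(items))   # uniform pseudo-random pick
print(secrets.choice(items))  # draws from the OS entropy pool (os.urandom) instead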
Note: The function time.clock() has been removed in Python 3.8, after having been deprecated since Python 3.3: use time.perf_counter() or time.process_time() instead, depending on your requirements, to have well-defined behavior. (Contributed by Matthias Bussonnier in bpo-36895.)