Python periodic task inside an infinite loop

I need to execute a piece of code inside a while True loop every, let's say, 5 seconds. I know the threading.Timer(5, foo).start() will be run every 5 seconds but my foo() function depends on a variable inside my while loop.
The foo is actually run on another thread and I don't want to block the current thread just for timing's sake.
+------------------------ while #main-thread -------------------------------
|
+..........foo(val)..........foo(val)..........foo(val)..........foo(val)
      -5s-          -5s-           -5s-           -5s-
Something like this:
def input(self):
    vals = []
    while True:
        # fill the vals
        # run `foo(vals)` every 5 seconds

def foo(vals):
    print vals
Is there any Pythonic way of doing this?

Use the sleep function:
import time

def input(self):
    vals = []
    while True:
        # fill the vals
        foo(vals)
        time.sleep(5)

def foo(vals):
    print vals
Note that foo will run exactly every 5 seconds only if the time it takes to run is itself negligible.
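If foo itself takes a noticeable amount of time, a common refinement is to subtract the elapsed time from the sleep so the ticks stay 5 seconds apart. A minimal sketch (Python 3; the loop body and foo are placeholders):
import time

def input_loop():
    vals = []
    next_run = time.monotonic()
    while True:
        # fill the vals ...
        foo(vals)
        next_run += 5                          # next tick is 5 s after the previous one
        delay = next_run - time.monotonic()
        if delay > 0:
            time.sleep(delay)                  # sleep only for the time that is left

def foo(vals):
    print(vals)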

Organize Python function return into a clean for loop and check if time exceeded

I have a function that I run on a number of data frames. I'd like to be able to organize this a bit better and add in a timeout statement. I'm a bit new to this...
organize data frames -
d = {}
dfs = [1, 2, 3]
for name in dfs:
    d[name] = some_function()
And then I'd like to set up a clean loop that checks how long each df takes to run. So it would run df_1 and df_2 because they take 5 seconds, but would skip df_3 and print that it took x number of seconds.
def timeout():
    # check how long df_1, df_2, df_3 take; if one takes longer than 30 seconds, print out the df name
You could use a Timer (from the threading module), but your loop must cooperate and stop when the time has expired. This could also be done by checking the elapsed time at each iteration, but I believe the Timer approach would incur less overhead.
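For comparison, the elapsed-time variant could look roughly like this (the function name is mine):
import time

def stop_after(max_time, iterator):
    # alternative to the Timer: check the clock on every iteration
    deadline = time.monotonic() + max_time
    for r in iterator:
        if time.monotonic() > deadline:
            break
        yield r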
Assuming you are using an iterator for the loop, you can define a generic "timeOut" function to force it to stop after a given number of seconds:
from threading import Timer

def timeout(maxTime, iterator):
    stop = False
    def timedOut():              # function called when the timer expires
        nonlocal stop
        stop = True
    t = Timer(maxTime, timedOut)
    t.start()
    for r in iterator:
        if stop: break           # you could print the content of r here if needed.
        yield r
    t.cancel()
output:
for i in timeout(3, range(1000)):
    for _ in range(10000000): pass
    print(i)
1
2
3
4
5
6
7
8 # it stopped here after 3 seconds with 992 iterations to go
In your example, this could be:
d = {}
dfs = [1, 2, 3]
for name in timeout(5, dfs):
    d[name] = some_function()
Note that this will stop the loop on dfs when the total processing time exceeds 5 seconds, but it cannot interrupt what is going on inside some_function() if a single call exceeds the total time.
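If you also need to bound each individual call, one option (not from the original answer) is to run the call in a worker thread and wait on its result with a timeout, for example via concurrent.futures. Note that a timed-out call is abandoned rather than killed; it keeps running in its worker thread:
from concurrent.futures import ThreadPoolExecutor, TimeoutError

d = {}
dfs = [1, 2, 3]

with ThreadPoolExecutor(max_workers=len(dfs)) as pool:
    for name in dfs:
        future = pool.submit(some_function)       # some_function as in the question
        try:
            d[name] = future.result(timeout=30)   # give each call at most 30 seconds
        except TimeoutError:
            print(name, "took longer than 30 seconds, skipping")
# a timed-out call still occupies its worker; the pool's shutdown at the end
# of the with-block will wait for it to finish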
If you need a timeout that is not tied to a specific loop, you can create an instance of a Timer that you store in a global variable or in a singleton class and check its state at appropriate places in your code:
t = Timer(25, lambda: None)  # timer will not do anything when it expires
t.start() # but will no longer be alive after 25 seconds
...
# at appropriate places in your code
if not t.is_alive(): return # or break or continue ...

How to create a timer function that can be called multiple times before it ends?

I want to set multiple timers (same function) at the same time, but with different ending times. Coding in python 3.
My code currently is:
import time

def timer(t):
    start = time.time()
    stop = False
    while not stop:
        if time.time() > start + t:
            print("I'm done counting to %d" % t)
            stop = True

timer(4)
timer(1)
timer(5)
Now I would like it to first print 1, then 4 and finally 5, but instead it runs timer(4) completely and only after that continues to the next timer.
I've heard a bit about multi-threading, but couldn't find a good example of how to implement it in my code.
Eventually, I would also like to add an option to delay the start of the timer with n seconds.
Thanks a lot!
If it's just about timers, you can use timers directly, without more complicated multi-threading:
https://docs.python.org/3.8/library/threading.html
https://docs.python.org/3.8/library/threading.html#timer-objects
import threading

def hello():
    print("hello, world")

t = threading.Timer(30.0, hello)
t.start()  # after 30 seconds, "hello, world" will be printed
Try this one: (Tested on my machine, Python 3.8.2)
from threading import Timer

def hello(t):
    print("Counted to", t)

t1 = Timer(4, hello, [4])
t1.start()
t2 = Timer(1, hello, [1])
t2.start()
t3 = Timer(3, hello, [3])
t3.start()
To delay the start, add other timers that call a function which does nothing.
Note that these timers fire only once, at the end of their interval.
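For example, delaying the count by a couple of seconds could be done by nesting one timer inside another (a sketch of my own, not from the original answer):
from threading import Timer

def hello(t):
    print("Counted to", t)

def delayed_timer(delay, interval, arg):
    # the outer timer only starts the real one after `delay` seconds
    Timer(delay, lambda: Timer(interval, hello, [arg]).start()).start()

delayed_timer(2, 4, 4)   # waits 2 s, then counts 4 s, then prints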

Python For Loop List, Function Every 5min

matches = []
done = []
for item in matches:
    dofunctioneveryloop()
    done.extend(item)
    dofunctiononce5min()
How can I execute dofunctiononce5min() inside this loop only once every 5 minutes? It is a backup-to-file function; is this possible?
Not sure I understood the question. I'll assume that you want this function to be executed only once every five minutes, no matter how often it is really called.
This might be overkill, but why not use a decorator? This will create a new function for the 'decorated' function that will execute the original function only if X seconds have passed since the last execution. This makes sure the function is not executed more than once every 5 minutes (or whatever time interval in seconds you pass to the decorator), no matter whether it's called in that loop or elsewhere.
import time

def onceEveryXSeconds(seconds):          # this creates the decorator
    def wrapper(f):                      # decorator for the given 'seconds'
        f.last_execution = 0             # memorize the last execution time
        def decorated(*args, **kwargs):  # the 'decorated' function
            if f.last_execution < time.time() - seconds:
                f.last_execution = time.time()
                return f(*args, **kwargs)
        return decorated
    return wrapper
Usage:
@onceEveryXSeconds(3)
def function(foo):
    print foo

while True:
    print "loop"
    function("Hello again")
    time.sleep(1)
Output, with @onceEveryXSeconds(3):
loop
Hello again
loop
loop
loop
Hello again
loop
...
Assuming the loop takes longer than five minutes, you could use time.time() to determine when 5 minutes have elapsed.
import time

matches = []
done = []
starttime = time.time()
for item in matches:
    dofunctioneveryloop()
    done.extend(item)
    if time.time() - starttime > 300:
        dofunctiononce5min()
        starttime = time.time()
Doing it this way is not really recommended. Perhaps the best approach would be to schedule the task at the operating-system level so it runs periodically.
Anyway, if you want to run a statement every x seconds, here is an example:
import time

for i in range(5):
    print i
    time.sleep(3)  # seconds
The time parameter can also be fractional, e.g. 0.5 seconds.

Set a timer for running a process, pause the timer under certain conditions

I've got this program:
import multiprocessing
import time

def timer(sleepTime):
    time.sleep(sleepTime)
    fooProcess.terminate()
    fooProcess.join()  # line said to "cleanup", not sure if it is required, refer to goo.gl/Qes6KX

def foo():
    i = 0
    while 1:
        print i
        time.sleep(1)
        i += 1
        if i == 4:
            # pause timerProcess for X seconds
            pass

fooProcess = multiprocessing.Process(target=foo, name="Foo", args=())
timer()
fooProcess.start()
And as you can see in the comment, under certain conditions (in this example i has to be 4) the timer has to stop for a certain X time, while foo() keeps working.
Now, how do I implement this?
N.B.: this code is just an example, the point is that I want to pause a process under certain conditions for a certain amount of time.
I think you're going about this the wrong way for game design. Games always (no exceptions come to mind) use a primary event loop controlled in software.
Each time through the loop you check the time and fire off all the necessary events based on how much time has elapsed. At the end of the loop you sleep only as long as necessary before the next timer, event, refresh, AI check, or other state change.
This gives you the best performance regarding lag, consistency, predictability, and other timing features that matter in games.
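In rough Python terms (the names and structure are my own), such a loop might look like:
import time

def game_loop(events):
    # events: list of (fire_at, handler) pairs, sorted by fire time (time.time() based)
    while events:
        now = time.time()
        while events and events[0][0] <= now:
            _, handler = events.pop(0)
            handler()                                        # fire everything that is due
        if events:
            time.sleep(max(0, events[0][0] - time.time()))   # sleep only until the next event is due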
Roughly:
1. Get the current timestamp at the start (time.time(), I presume).
2. Sleep with Event.wait(timeout=...).
3. Wake up on an Event or on a timeout.
4. If woken by the Event: get a new timestamp, subtract the initial one, and subtract the result from the timer's remaining time.
5. Wait until foo() stops, then repeat Event.wait(timeout=[result from 4.]).
6. If woken by the timeout: exit.
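Put together, that recipe could look something like this sketch (the pause/resume events and the on_expire callback are my own wiring, not from the answer):
import threading, time

def pausable_timer(timeout, pause_request, resume, on_expire):
    remaining = timeout
    while True:
        start = time.time()
        if pause_request.wait(timeout=remaining):    # woken early: a pause was requested
            remaining -= time.time() - start         # keep the unused part of the timeout
            pause_request.clear()
            resume.wait()                            # block until the caller signals resume
            resume.clear()
        else:                                        # wait() timed out: the full time has elapsed
            on_expire()                              # e.g. fooProcess.terminate()
            return
You would run pausable_timer in its own thread and set pause_request / resume from foo's side when i reaches 4.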
Here is an example of what, as I understand it, your program should do:
import threading, time, datetime

ACTIVE = True

def main():
    while ACTIVE:
        print "im working"
        time.sleep(.3)

def run(thread, timeout):
    global ACTIVE
    thread.start()
    time.sleep(timeout)
    ACTIVE = False
    thread.join()

proc = threading.Thread(target=main)
print datetime.datetime.now()
run(proc, 2)  # run for 2 seconds
print datetime.datetime.now()
In main() it does a periodic task, here printing something. In the run() method you can say how long main should do the task.
This code produces the following output:
2014-05-25 17:10:54.390000
im working
im working
im working
im working
im working
im working
im working
2014-05-25 17:10:56.495000
Please correct me if I've understood you wrong.
I would use multiprocessing.Pipe for signaling, combined with select for timing:
#!/usr/bin/env python
import multiprocessing
import select
import time

def timer(sleeptime, pipe):
    start = time.time()
    while time.time() < start + sleeptime:
        n = select.select([pipe], [], [], 1)  # sleep in 1 s intervals
        for conn in n[0]:
            val = conn.recv()
            print 'got', val
            start += float(val)

def foo(pipe):
    i = 0
    while True:
        print i
        i += 1
        time.sleep(1)
        if i % 7 == 0:
            pipe.send(5)

if __name__ == '__main__':
    mainpipe, foopipe = multiprocessing.Pipe()
    fooProcess = multiprocessing.Process(target=foo, name="Foo", args=(foopipe,))
    fooProcess.start()
    timer(10, mainpipe)
    fooProcess.terminate()
    # since we terminated, mainpipe and foopipe are corrupt
    del mainpipe, foopipe
    # ...
    print 'Done'
I'm assuming that you want some condition in the foo process to extend the timer. In the sample I have set up, every time foo hits a multiple of 7 it extends the timer by 5 seconds while the timer initially counts down 10 seconds. At the end of the timer we terminate the process - foo won't finish nicely at all, and the pipes will get corrupted, but you can be certain that it'll die. Otherwise you can send a signal back along mainpipe that foo can listen for and exit nicely while you join.

Run multiple functions every second, write result to file

I'm trying to run three functions (each can take up to 1 second to execute) every second. I'd then like to store the output from each function, and write them to separate files.
At the moment I'm using Timers for my delay handling. (I could subclass Thread, but that's getting a bit complicated for this simple script)
def main():
    for i in range(3):
        set_up_function(i)
        t = Timer(1, run_function, [i])
        t.start()
    time.sleep(100)  # Without this, the main thread exits

def run_function(i):
    t = Timer(1, run_function, [i])
    t.start()
    print function_with_delay(i)
What's the best way to handle the output from function_with_delay? Append the result to a global list for each function?
Then I could put something like this at the end of my main function:
...
while True:
    time.sleep(30)  # or in a try/except with a loop of 1-second sleeps so I can interrupt
    for i in range(3):
        save_to_disk(data[i])
Thoughts?
Edit: Added my own answer as a possibility
I believe the python Queue module is designed for precisely this sort of scenario. You could do something like this, for example:
import Queue
import threading

def main():
    q = Queue.Queue()
    for i in range(3):
        t = threading.Timer(1, run_function, [q, i])
        t.start()
    while True:
        item = q.get()
        save_to_disk(item)
        q.task_done()

def run_function(q, i):
    t = threading.Timer(1, run_function, [q, i])
    t.start()
    q.put(function_with_delay(i))
I would say store a list of (bool, str) pairs, where the bool says whether the function has finished running and the str is its output. Each function locks the list with a mutex to append its output (or, if you don't care about thread safety, omit this). Then have a simple polling loop that checks whether all the bool values are True, and if so does your save_to_disk calls.
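A rough sketch of that idea for one round of the three functions (the structure and names other than function_with_delay and save_to_disk are mine):
import threading, time

lock = threading.Lock()
results = [[False, None] for _ in range(3)]   # one [finished, output] slot per function

def run_function(i):
    out = function_with_delay(i)
    with lock:
        results[i] = [True, out]

def wait_and_save():
    while True:
        with lock:
            if all(done for done, _ in results):
                for done, output in results:
                    save_to_disk(output)
                return
        time.sleep(0.1)   # poll until every function has finished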
Another alternative would be to implement a class (taken from this answer) that uses threading.Lock(). This has the advantage of being able to wait on the ItemStore, and save_to_disk can use getAll, rather than polling the queue. (More efficient for large data sets?)
This is particularly suited to writing at a set time interval (i.e. every 30 seconds), rather than once per second.
import threading

class ItemStore(object):
    def __init__(self):
        self.lock = threading.Lock()
        self.items = []

    def add(self, item):
        with self.lock:
            self.items.append(item)

    def getAll(self):
        with self.lock:
            items, self.items = self.items, []
            return items
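With that class, a writer loop that flushes every 30 seconds might look like this sketch (the wiring is my own; ItemStore, function_with_delay and save_to_disk are from above):
import threading, time

store = ItemStore()

def run_function(i):
    # reschedule itself and record this run's output, as in the Timer examples above
    threading.Timer(1, run_function, [i]).start()
    store.add((i, function_with_delay(i)))

def writer():
    while True:
        time.sleep(30)
        for i, output in store.getAll():   # drain everything collected since the last flush
            save_to_disk(output)           # i tells you which function produced it,
                                           # so you could route output to separate files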
