I have a Python 2.7 program running an infinite while loop and I want to incorporate a timer interrupt.
What I aim to do is to set off a timer at some point in the loop, and when 5 seconds have elapsed I want the code to branch to a specific part of the while loop.
What I have been doing so far is the following:
in each iteration of the while loop I check how much time has elapsed when I reach that point in the code, using
time.clock()
and if the difference exceeds 5 I run the chunk of code I mean to run.
However, that way 7 seconds might pass before I evaluate the time; the difference will be >5 sec, but I want to branch exactly when 5 seconds have passed.
Also, I need this to work for more than one counter (possibly up to 100), but I do not want the interrupts to interrupt each other. Using Timer did not work either.
I know this can be done using timer interrupts in assembly but how can I do that in python?
If a single event is to be handled, then the easiest way is to use the signal framework, which is a standard module of Python.
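For instance, a minimal sketch of handling a single 5-second timeout this way (Unix-only; the handler name is illustrative):

import signal
import time

def on_timeout(signum, frame):
    # this is where the program "branches" once 5 seconds have passed
    print('5 seconds elapsed')

signal.signal(signal.SIGALRM, on_timeout)
signal.alarm(5)               # ask the OS to deliver SIGALRM in 5 seconds

while True:
    time.sleep(0.01)          # sleep() is interruptible, so the handler runs promptly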
However, if we need a fully-fledged scheduler, then we have to resort to another module: sched. Here is a pointer to the official documentation. Please be aware, though, that in multi-threaded environments sched has limitations with respect to thread-safety.
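For instance, a small sketch of scheduling several non-overlapping timers with sched (the timer names and delays are illustrative):

import sched
import time

s = sched.scheduler(time.time, time.sleep)

def fire(name):
    print('%s fired at %.2f' % (name, time.time()))

for i in range(3):                       # several independent timers
    s.enter(5 + i, 1, fire, ('timer-%d' % i,))
s.run()                                  # runs the callbacks one at a time, in order

Because s.run() executes the callbacks sequentially in a single thread, they cannot interrupt each other.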
Another option is the Advanced Python Scheduler, which is not part of the standard distribution.
You can't get real-time without special hardware/software support. You don't need it in most cases (do you need to control giant robots?).
How to delay several function calls by known numbers of seconds depends on your needs, e.g., if the time it takes to run a function is negligible compared to the delay between the calls, then you could run all the functions in a single thread:
#!/usr/bin/env python
from __future__ import print_function
from Tkinter import Tk
root = Tk()
root.withdraw() # don't show the GUI window
root.after(1000, print, 'foo') # print foo in a second
root.after(0, print, 'bar') # print bar in a jiffy
root.after(2000, root.destroy) # exit mainloop in 2 seconds
root.mainloop()
print("done")
It satisfies your "I do not want the interrupts to interrupt each other" requirement, because the next callback is not called until the previous one is complete.
I'm writing a short application in python using tkinter. Everything works except for an unexpected pause - it should be generating an event twice a second, but frequently it will pause for 5 or 6 seconds between signals. I've put print statements to find where the delay is, and found it is the following statement:
self.frame.after(ms, self.tick_handler)
ms is 500 so this should send the event at around .5 seconds. Usually it does, but frequently it hangs for as much as 5 or 6 seconds before tick_handler() gets the signal. The program is pretty simple, with a single worker thread receiving all input from a single queue, events coming from a single tkinter frame. The after() statement is in the worker thread. I've tried shutting off gc (gc.disable()) but that makes no difference. There is minimal activity outside this on my computer.
If I send other input during the pause using mouse or keys it is handled immediately, so the worker thread is not blocked. It looks as if the signal request is received but not fired for some time. I know I can't expect real time performance so .6 seconds wouldn't be noteworthy, but 6.0 seconds?
This is the first time I've worked with tkinter. Is there something I am missing about event handling?
I think you did not include tkinter.mainloop() at the end.
PS: I'm not sure...
When using the after method in tkinter, make sure the arguments are in the right order: the delay in milliseconds comes first, then the callback, i.e. root.after(time_delayed, function_name). Note that writing root.after(function_name(), time_delayed) calls the function immediately and passes its return value to after, which is not what you want.
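For reference, a minimal self-contained sketch of the correct call order (and of re-arming the callback, since after() only schedules a single call):

import Tkinter as tk

root = tk.Tk()

def tick_handler():
    print 'tick'
    root.after(500, tick_handler)    # re-arm: after() fires only once

root.after(500, tick_handler)        # delay in milliseconds first, then the callback
root.mainloop()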
I'm trying to timeout a function if it runs for more than 3 seconds (for example). I'm using signals and alarms but the alarm never fires. I'd like a timeout mechanism that works for any function. As an example of the problem I'm facing:
import signal

class TimeoutException(Exception):   # the handler raises this, so it must be defined
    pass

def foobar():
    x = 42
    while x >= 20:
        if x >= 40:
            x = 23
    return x

def handle_alarm(*args):
    print("Alarm raised")
    raise TimeoutException("timeout reached")

signal.signal(signal.SIGALRM, handle_alarm)
signal.alarm(3)

try:
    print(foobar())
except:
    print("Exception Caught")
When run, my program just runs forever and my handler never runs. Any idea why this is the case?
As an aside, if I delete the if statement from foobar, then the alarm does trigger.
On my system, Mac OS X with MacPorts, I tested your code with many versions of Python. The only version that exhibits the "bug" you found is 2.7. The timeout works in 2.4, 2.5, 2.6, 3.3, and 3.4.
Now, why is this happening, and what can be done about it?
I think it happens because your foobar() is a tight loop which never "yields" control back to Python's main loop. It just runs as fast as it can, doing no useful work, yet preventing Python from processing the signal.
It will help to understand how signals in *nix are usually handled. Since few library functions are "async signal safe," not much can be done within a C signal handler directly. Python needs to invoke your signal handler which is written in Python, but it can't do that directly in the signal handler that it registers using C. So a typical thing that programs do in their signal handlers is to set some flag to indicate that a signal has been received, and then return. In the main loop, then, that flag is checked (either directly or by using a "pipe" which can be written to in the signal handler and poll()ed or select()ed on).
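As a rough illustration of that flag-setting pattern at the Python level (a sketch, not CPython's actual C implementation; the names are made up):

import signal
import time

alarm_fired = [False]             # the "flag" the main loop checks

def handler(signum, frame):
    alarm_fired[0] = True         # do as little as possible inside the handler

signal.signal(signal.SIGALRM, handler)
signal.alarm(2)

while not alarm_fired[0]:
    time.sleep(0.01)              # interruptible, so the handler gets a chance to run
print('SIGALRM was delivered')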
So I would suppose that the Python main loop is happily executing your foobar() function; a signal comes in, it sets some internal state to note that it needs to handle that signal, and then it waits for foobar() to end, or failing that, at least for foobar() to invoke some interruptible function, such as sleep() or print().
And indeed, if you add either a sleep (for any amount of time), or a print statement to foobar()'s loop, you will get the timeout you desire in Python 2.7 (as well as the other versions).
It is in general a good idea to put a short sleep in busy loops anyway, to "relax" them, thereby helping scheduling of other work which may need doing. You don't have to sleep on every iteration either--just a tiny sleep every 1000 times through the loop would work fine in this case.
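Applied to the foobar() above, that might look like this (a sketch; the counter and sleep length are arbitrary):

import time

def foobar():
    x = 42
    i = 0
    while x >= 20:
        if x >= 40:
            x = 23
        i += 1
        if i % 1000 == 0:
            time.sleep(0.001)     # briefly yields, letting the SIGALRM handler run
    return x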
I've been integrating the Python "apscheduler" package (Advanced Python Scheduler) into my app; so far it's going well, and I'm able to do almost everything that I had envisioned doing with it.
Only one kink left to iron out...
The function my events are calling will only accept around 3 calls a second or fail, as it is triggering very slow hardware I/O :(
I've tried limiting the max number of threads in the threadpool from 20 down to just 1 to try to slow down execution, but since I'm not really putting a big load on apscheduler, my events are still firing pretty much concurrently (well... very, very close together, at least).
Is there a way to 'stagger' different events that fire within the same second?
I have recently found this question because I, like yourself, was trying to stagger scheduled jobs slightly to compensate for slow hardware.
Including an argument like this in the scheduler add_job call staggers the start time for each job by 200ms (while incrementing idx for each job):
next_run_time=datetime.datetime.now() + datetime.timedelta(seconds=idx * 0.2)
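Put together, it might look like this (a sketch assuming APScheduler 3.x's BackgroundScheduler; jobs is a hypothetical list of callables):

import datetime
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()
for idx, job in enumerate(jobs):             # jobs: your own callables
    scheduler.add_job(
        job, 'interval', seconds=60,
        next_run_time=datetime.datetime.now() + datetime.timedelta(seconds=idx * 0.2))
scheduler.start()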
What you want to use is the 'jitter' option.
From the docs:
The jitter option enables you to add a random component to the
execution time. This might be useful if you have multiple servers and
don’t want them to run a job at the exact same moment or if you want
to prevent multiple jobs with similar options from always running
concurrently
Example:
# Run the `job_function` every hour with an extra-delay picked randomly
# in a [-120,+120] seconds window.
sched.add_job(job_function, 'interval', hours=1, jitter=120)
I don't know about apscheduler, but have you considered using a Redis LIST (queue) and simply serializing the event feed into that one critically bounded function, so that it fires no more than three times per second? For example, you could have it do a blocking POP with a one-second max delay, increment your trigger count for every event, sleep when it hits three, and zero the trigger count any time the blocking POP times out. (Or you could just use 333-millisecond sleeps after each event.)
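In sketch form, assuming the redis-py client and a hypothetical 'events' list key:

import time
import redis

r = redis.Redis()                             # assumes a local Redis server

def serialized_consumer(handle_event):
    fired = 0
    while True:
        item = r.blpop('events', timeout=1)   # blocking pop, one-second max wait
        if item is None:                      # queue was idle for a second: reset
            fired = 0
            continue
        handle_event(item[1])                 # the slow hardware I/O call
        fired += 1
        if fired >= 3:                        # cap at three calls per second
            time.sleep(1)
            fired = 0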
My solution for future reference:
I added a basic bool lock in the function being called, plus a wait, which seems to do the trick nicely - since it's not the calling of the function itself that raises the error, but rather a deadlock situation with what the function carries out :D
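In sketch form (hypothetical names; a real threading.Lock would be safer than a bare bool if calls can overlap):

import time

busy = False                      # the basic bool "lock"

def call_hardware(do_io):
    global busy
    while busy:                   # the wait: spin until the previous call finishes
        time.sleep(0.05)
    busy = True
    try:
        do_io()                   # the slow hardware I/O
    finally:
        busy = False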
I'm currently using Python (2.7) to write a GUI that has some threads going on. I've come to a point where I need roughly a one-second delay before getting a piece of information, but I can't afford to have the function take more than a few milliseconds to run. With that in mind, I'm trying to create a threaded timer that sets a flag timer.doneFlag, and have the main function keep poking to see whether it's done or not.
It is working, but not all the time. The problem I run into is that sometimes I feel like the time.sleep function in run doesn't wait fully for a second (sometimes it may not even wait). All I need is a flag that allows me to control the start time and is raised when it reaches 1 second.
I may be doing too much just to get a delay that is threadable; if you can suggest something, or help me find a bug in the following code, I'd be very grateful!
I've attached a portion of the code I used:
from main program:
class dataCollection:
    def __init__(self):
        self.timer = Timer(5)
        self.isTimerStarted = 0
        return

    def StateFunction(self): #Try to finish the function within a few milliseconds
        if self.isTimerStarted == 0:
            self.timer = Timer(1.0)
            self.timer.start()
            self.isTimerStarted = 1
        if self.timer.doneFlag:
            self.timer.doneFlag = 0
            self.isTimerStarted = 0
        #and all the other code

import time
import threading

class Timer(threading.Thread):
    def __init__(self, seconds):
        self.runTime = seconds
        self.doneFlag = 0
        threading.Thread.__init__(self)

    def run(self):
        time.sleep(self.runTime)
        self.doneFlag = 1
        print "Buzzzz"

x = dataCollection()
while 1:
    x.StateFunction()
    time.sleep(0.1)
First, you've effectively rebuilt threading.Timer with less flexibility. So I think you're better off using the existing class. (There are some obvious downsides with creating a thread for each timer instance. But if you just want a single one-shot timer, it's fine.)
More importantly, having your main thread repeatedly poll doneFlag is probably a bad idea. This means you have to call your state function as often as possible, burning CPU for no good reason.
Presumably the reason you have to return within a few milliseconds is that you're returning to some kind of event loop, presumably for your GUI (but, e.g., a network reactor has the same issue, with the same solutions, so I'll keep things general).
If so, almost all such event loops have a way to schedule a timed callback within the event loop—Timer in wx, callLater in twisted, etc. So, use that.
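For example, if the GUI is Tkinter, something like this replaces the polling entirely (a sketch; on_timeout stands in for whatever StateFunction did when doneFlag was set):

import Tkinter as tk

root = tk.Tk()

def on_timeout():
    print "Buzzzz"                # the code that used to run when doneFlag was set

root.after(1000, on_timeout)      # the event loop calls this back in about a second
root.mainloop()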
If you're using a framework that doesn't have anything like that, it hopefully at least has some way to send an event/fire a signal/post a message/whatever it's called from outside. (If it's a simple file-descriptor-based reactor, it may not have that, but you can add it yourself just by tossing a pipe into the reactor.) So, change your Timer callback to signal the event loop, instead of writing code that polls the Timer.
If for some reason you really do need to poll a variable shared across threads, you really, really, should be protecting it with a Condition or RLock. There is no guarantee in the language that, when thread 0 updates the value, thread 1 will see the new value immediately, or even ever. If you understand enough of the internals of (a specific version of) CPython, you can often prove that the GIL makes a lock unnecessary in specific cases. But otherwise, this is a race.
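A minimal sketch of guarding the flag with a Condition (the names are illustrative):

import threading
import time

cond = threading.Condition()
done = [False]                    # shared flag, only touched while holding cond

def timer_thread(seconds):
    time.sleep(seconds)
    with cond:
        done[0] = True
        cond.notify()             # wake any thread blocked in wait()

threading.Thread(target=timer_thread, args=(1.0,)).start()
with cond:
    while not done[0]:
        cond.wait()               # releases the lock while blocked
print('timer done')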
Finally:
The problem I run into is that sometimes I feel like the time.sleep function in run doesn't wait fully for a second (sometimes it may not even wait).
Well, the documentation clearly says this can happen:
The actual suspension time may be less than that requested because any caught signal will terminate the sleep() following execution of that signal’s catching routine.
So, if you need a guarantee that it actually sleeps for at least 1 second, the only way is something like this:
import time

t0 = time.time()
dur = 1.0
while True:
    time.sleep(dur)
    t1 = time.time()
    dur = 1.0 - (t1 - t0)
    if dur <= 0:
        break
First method:
import threading
import time

def keepalive():
    while True:
        print 'Alive.'
        time.sleep(200)

threading.Thread(target=keepalive).start()
Second method:
import threading

def keepalive():
    print 'Alive.'
    threading.Timer(200, keepalive).start()

threading.Timer(200, keepalive).start()
Which method takes up more RAM? And in the second method, does the thread end after being activated? or does it remain in the memory and start a new thread? (multiple threads)
Timer creates a new thread object for each started timer, so it certainly needs more resources when creating and garbage collecting these objects.
As each thread exits immediately after it spawns another, active_count stays constant, but new threads are constantly being created and destroyed, which causes overhead. I'd say the first method is definitely better.
Although you won't really see much difference unless the interval is very small.
Here's an example of how to test this yourself:
And in the second method, does the thread end after being activated? or does it remain in the memory and start a new thread? (multiple threads)
import threading

def keepalive():
    print 'Alive.'
    threading.Timer(200, keepalive).start()
    print threading.active_count()

threading.Timer(200, keepalive).start()
I also changed the 200 to .2 so it wouldn't take as long.
The thread count was 3 forever.
Then I did this:
top -pid 24767
The #TH column never changed.
So, there's your answer: we don't have enough info to know whether Python maintains a single timer thread for all of the timers, or ends and cleans up the thread as soon as the timer runs, but we can be sure the threads don't stick around and pile up. (If you do want to know which of the two is happening, you can, e.g., print the thread ids.)
An alternative way to find out is to look at the source. As the documentation says, "Timer is a subclass of Thread and as such also functions as an example of creating custom threads". The fact that it's a subclass of Thread already tells you that each Timer is a Thread. And the fact that it "functions as an example" implies that it ought to be easy to read. If you click the link from the documentation to the source, you can see how trivial it is. Most of the work is done by Event, but that's in the same source file, and it's almost as simple. Effectively, it just creates a condition variable, waits on it (so it blocks until it times out, or you notify the condition by calling cancel), then quits.
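A stripped-down sketch of what that source does (not the actual stdlib code):

import threading

class MiniTimer(threading.Thread):
    """Roughly what threading.Timer does internally."""
    def __init__(self, interval, function):
        threading.Thread.__init__(self)
        self.interval = interval
        self.function = function
        self.finished = threading.Event()

    def cancel(self):
        self.finished.set()                   # wakes the wait() below early

    def run(self):
        self.finished.wait(self.interval)     # condition-variable wait with a timeout
        if not self.finished.is_set():        # not cancelled: fire the callback
            self.function()
        self.finished.set()                   # then the thread simply exits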
The reason I'm answering one sub-question and explaining how I did it, rather than answering each sub-question, is because I think it would be more useful for you to walk through the same steps.
On further reflection, this probably isn't a question to be decided by optimization in the first place:
If you have a simple, synchronous program that needs to do nothing for 200 seconds, make a blocking call to sleep. Or, even simpler, just do the job and quit, and pick an external tool to schedule your script to run every 200s.
On the other hand, if your program is inherently asynchronous (especially if you've already got threads, signal handlers, and/or an event loop), there's just no way you're going to get sleep to work. If Timer is too inefficient, go to PyPI or ActiveState and find a better timer that lets you schedule repeatable timers (or even multiple timers) with a single instance and thread. (Or, if you're using signals, use signal.alarm or setitimer, and if you're using an event loop, build the timer into your main loop.)
I can't think of any use case where sleep and Timer would both be serious contenders.