End loop with user input in Python

I have this update function:

def update(self, interval=60):
    while True:
        # Do stuff
        time.sleep(interval)

I would like to know the possible ways to interrupt the loop via user input, once the function is called, while leaving the script running.
All I found were answers from 5+ years ago, mostly platform-dependent. Is there any new/reliable way to achieve this? I would rather avoid threading, if possible. I'm using Python 3.7.

You could create an interrupt handler with the signal module. This, however, still relies on the main thread for monitoring; I don't think it's costly, since that thread is already running.
In essence, you'd still need a sort of global flag that governs the loop. When the interrupt trigger happens (user input, etc.), the handler changes the value of the flag and the loop terminates, allowing other processing to continue.
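A minimal sketch of that idea applied to the question's update method, assuming Ctrl+C (SIGINT) is the user input that should end the loop; the class wrapper and flag name are illustrative:

import signal
import time

class Updater:
    def __init__(self):
        self.running = True

    def _stop(self, signum, frame):
        # signal handler: flip the flag so the loop exits cleanly
        self.running = False

    def update(self, interval=60):
        signal.signal(signal.SIGINT, self._stop)
        while self.running:
            # Do stuff
            time.sleep(interval)
        # restore normal Ctrl+C behaviour for the rest of the script
        signal.signal(signal.SIGINT, signal.SIG_DFL)

Note that on Python 3.5+ time.sleep resumes after a handled signal, so with a long interval the loop only notices the flag once the current sleep finishes; sleeping in shorter slices avoids that lag.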

Related

How to make a Python program get into a function and finish with Ctrl+X while running?

My Python program takes a lot of time to complete all the iterations of a for loop. The moment I hit a particular key/key combination on the keyboard while it is running, I want it to go into another method, save the variables to disk (using pickle, which I know how to use) and exit the program safely.
Any idea how I can do this?
Is KeyboardInterrupt a safe way to do this, just by wrapping the for loop in a try/except, catching the KeyboardInterrupt and then saving the variables in the except block?
It is only safe if, at every point in your loop, your variables are in a state which allows you to save them and resume later.
To be safe, you could instead catch the KeyboardInterrupt before it happens and set a flag for which you can test. To make this happen, you need to intercept the signal which causes the KeyboardInterrupt, which is SIGINT. In your signal handler, you can then set a flag which you test for in your calculation function. Example:
import signal
import time

interrupted = False

def on_interrupt(signum, stack):
    global interrupted
    interrupted = True

def long_running_function():
    signal.signal(signal.SIGINT, on_interrupt)
    while not interrupted:
        time.sleep(1)  # do your work here
    signal.signal(signal.SIGINT, signal.SIG_DFL)

long_running_function()
The key advantage is that you have control over the point at which the function is interrupted. You can add checks for the interrupted flag at any place you like. This helps ensure you are in a consistent, resumable state when the function is interrupted.
(With Python 3, this could be solved more neatly using nonlocal instead of a global flag; see the sketch after the next note.)
(This should work on Windows according to the documentation, but I have not tested it. Please report back if it does not so that future readers are warned.)
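For reference, a rough sketch of the nonlocal variant, functionally equivalent to the global-flag version above:

import signal
import time

def long_running_function():
    interrupted = False

    def on_interrupt(signum, stack):
        nonlocal interrupted
        interrupted = True

    signal.signal(signal.SIGINT, on_interrupt)
    while not interrupted:
        time.sleep(1)  # do your work here
    signal.signal(signal.SIGINT, signal.SIG_DFL)

long_running_function()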

Python whats the most efficient way to wait for input

I have a python program I want to run in the background (on a Raspberry Pi) that waits for GPIO input then performs an action and continues waiting for input until the process is killed.
What is the most efficient way to achieve this? My understanding is that using while True is not so efficient. Ideally it would use interrupts, and I could use GPIO.wait_for_edge, but that would need to be in some loop or need some way of continuing operation after the handler completes.
Thanks
According to this: http://raspi.tv/2013/how-to-use-interrupts-with-python-on-the-raspberry-pi-and-rpi-gpio GPIO.wait_for_edge(23, GPIO.FALLING) will wait for a transition on pin 23 using interrupts instead of polling. It'll only continue when triggered. You can enclose it in a try: / except KeyboardInterrupt to catch ctrl-c.
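A minimal sketch of that blocking style, assuming pin 23 with RPi.GPIO; handle_event is a placeholder for whatever the program does per event:

import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(23, GPIO.IN, pull_up_down=GPIO.PUD_UP)

try:
    while True:
        GPIO.wait_for_edge(23, GPIO.FALLING)  # blocks, using interrupts, until pin 23 falls
        handle_event()  # placeholder: do the action, then loop back and wait again
except KeyboardInterrupt:
    GPIO.cleanup()  # tidy up the pins on ctrl-c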
If you want to continue processing, then you should register a callback function for your interrupt. See: http://sourceforge.net/p/raspberry-gpio-python/wiki/Inputs/
def my_callback(channel):
    # do something here
    pass

GPIO.add_event_detect(channel, GPIO.RISING, callback=my_callback)
# continue your program here, likely in some sort of state machine
I understand that when you say "using while true" you mean polling, which is checking the GPIO state at some time interval to detect changes, at the expense of some processing time.
One alternative to avoid polling (from the docs) is wait_for_edge():
The wait_for_edge() function is designed to block execution of your program until an edge is detected.
Which seems to be what you are looking for; the program would suspend execution using epoll() IIUC.
Now, assuming you meant that you don't want to use GPIO.wait_for_edge() because you don't want to lose GPIO state changes while handling events, you'll need to use threading. One possible solution is putting events in a Queue and setting up:
One thread to do the while True: queue.put(GPIO.wait_for_edge(...)).
Another thread to perform the Queue.get().
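A rough sketch of that split, assuming the same RPi.GPIO setup as in the previous snippet; the print call stands in for the real event handling:

import queue
import threading
import RPi.GPIO as GPIO

events = queue.Queue()

def watcher():
    # producer: block on the GPIO interrupt and queue every detected edge
    while True:
        events.put(GPIO.wait_for_edge(23, GPIO.FALLING))

threading.Thread(target=watcher, daemon=True).start()

# consumer: handle events one at a time in the main thread, missing none
while True:
    channel = events.get()
    print('edge detected on channel', channel)  # placeholder for the real action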

How can I stop the execution of a Python function from outside of it?

So I have this library that I use, and within one of my functions I call a function from that library, which happens to take a really long time. At the same time, I have another thread running where I check for different conditions. What I want is that if a condition is met, the execution of the library function is cancelled.
Right now I'm checking the conditions at the start of the function, but if the conditions happen to change while the library function is running, I don't need its results and want to return from it.
Basically this is what I have now.
def my_function():
    if condition_checker.condition_met():
        return
    library.long_running_function()
Is there a way to run the condition check every second or so and return from my_function when the condition is met?
I've thought about decorators and coroutines. I'm using 2.7, but if this can only be done in 3.x I'd consider switching; it's just that I can't figure out how.
You cannot terminate a thread. Either the library supports cancellation by design, where it internally would have to check for a condition every once in a while to abort if requested, or you have to wait for it to finish.
What you can do is call the library in a subprocess rather than a thread, since processes can be terminated through signals. Python's multiprocessing module provides a threading-like API for spawning forks and handling IPC, including synchronization.
Or spawn a separate subprocess via subprocess.Popen if forking is too heavy on your resources (e.g. memory footprint through copying of the parent process).
I can't think of any other way, unfortunately.
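A minimal sketch of the multiprocessing route, reusing the question's condition_checker and library placeholders:

import multiprocessing
import time

def my_function():
    proc = multiprocessing.Process(target=library.long_running_function)
    proc.start()
    while proc.is_alive():
        if condition_checker.condition_met():
            proc.terminate()  # forcibly ends the child; its results are discarded
            proc.join()
            return
        time.sleep(1)  # re-check the condition every second
    proc.join()       # the library call finished on its own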
Generally, I think you want to run your long_running_function in a separate thread, and have it occasionally report its information to the main thread.
This post gives a similar example within a wxpython program.
Presuming you are doing this outside of wxpython, you should be able to replace the wx.CallAfter and wx.Publisher with threading.Thread and PubSub.
It would look something like this:
import threading
import time

def myfunction():
    # subscribe to the long_running_function and get the published data
    while True:
        if condition_met:
            # publish a stop command
            break
        time.sleep(1)

def long_running_function():
    for loop in loops:
        # subscribe to the main thread and check for a stop command; if found, break
        # do an iteration
        # publish some data
        pass

# launch long_running_function in the background without blocking flow
threading.Thread(group=None, target=long_running_function, args=()).start()
myfunction()
I haven't used pubsub a ton so I can't quickly whip up the code but it should get you there.
As an alternative, do you know the stop criteria before you launch the long_running_function? If so, you can just pass it as an argument and check whether it is met internally.
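If the stop criterion can be checked inside the worker, a threading.Event passed as an argument is a lighter alternative to pubsub; a sketch with placeholder work:

import threading
import time

def long_running_function(stop_event):
    while not stop_event.is_set():
        time.sleep(0.5)  # placeholder for one iteration of real work

stop = threading.Event()
worker = threading.Thread(target=long_running_function, args=(stop,))
worker.start()

# ... later, once the condition is met ...
stop.set()     # ask the worker to finish its current iteration and exit
worker.join()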

Python - wait on a condition without high cpu usage

In this case, say I wanted to wait on a condition that may become true at any random time.
while True:
    if condition:
        # Do whatever
        pass
    else:
        pass
As you can see, pass will just execute until the condition is True. But while the condition isn't True, the CPU is pegged spinning through the loop, causing high CPU usage, when I simply want it to wait until the condition occurs. How can I do this?
See Busy_loop#Busy-waiting_alternatives:
Most operating systems and threading libraries provide a variety of system calls that will block the process on an event, such as lock acquisition, timer changes, I/O availability or signals.
Basically, to wait for something, you have two options (same as IRL):
Check for it periodically with a reasonable interval (this is called "polling")
Make the event you're waiting for notify you: invoke (or, as a special case, unblock) your code somehow (this is called "event handling" or "notifications". For system calls that block, "blocking call" or "synchronous call" or call-specific terms are typically used instead)
As already mentioned, you can a) poll, i.e. check for the condition and, if it is not true, wait for some time interval; b) if your condition is an external event, arrange a blocking wait for the state to change; or c) take a look at the publish/subscribe model, pubsub, where your code registers an interest in a given item and other parts of the code publish the item.
This is not really a Python problem. Optimally, you want to put your process to sleep and wait for some sort of signal that the action has occurred, which will use no CPU while waiting. So it's not so much a case of writing Python code as of figuring out what mechanism makes the condition true, and then waiting on that.
If the condition is a simple flag set by another thread in your program rather than an external resource, you need to go back and learn from scratch how threading works.
Only if the thing that you're waiting for does not provide any sort of push notification that you can wait on should you consider polling it in a loop. A sleep will help reduce the CPU load but not eliminate it and it will also increase the response latency as the sleep has to complete before you can commence processing.
As for waiting on events, an event-driven paradigm might be what you want unless your program is utterly trivial. Python has the Twisted framework for this.
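For the simple case of a flag set by another thread in the same program, threading.Event gives you that blocking, zero-CPU wait; a minimal sketch:

import threading

condition = threading.Event()

def waiter():
    condition.wait()  # blocks here, using no CPU, until set() is called
    print('condition met, doing whatever')

threading.Thread(target=waiter).start()

# ... elsewhere, when the event actually happens ...
condition.set()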

Resource usage of "time.sleep" in loop vs. "threading.Timer"

First method:
import threading
import time

def keepalive():
    while True:
        print('Alive.')
        time.sleep(200)

threading.Thread(target=keepalive).start()
Second method:
import threading

def keepalive():
    print('Alive.')
    threading.Timer(200, keepalive).start()

threading.Timer(200, keepalive).start()
Which method takes up more RAM? And in the second method, does the thread end after being activated? or does it remain in the memory and start a new thread? (multiple threads)
Timer creates a new thread object for each started timer, so it certainly needs more resources when creating and garbage collecting these objects.
As each thread exits immediately after it spawned another active_count stays constant, but there are constantly new Threads created and destroyed, which causes overhead. I'd say the first method is definitely better.
Although you won't really see much difference unless the interval is very small.
And in the second method, does the thread end after being activated? or does it remain in the memory and start a new thread? (multiple threads)
Here's an example of how to test this yourself:

import threading

def keepalive():
    print('Alive.')
    threading.Timer(200, keepalive).start()
    print(threading.active_count())

threading.Timer(200, keepalive).start()
I also changed the 200 to .2 so it wouldn't take as long.
The thread count was 3 forever.
Then I did this:
top -pid 24767
The #TH column never changed.
So, there's your answer: we don't have enough info to know whether Python maintains a single timer thread for all of the timers, or ends and cleans up the thread as soon as the timer runs, but we can be sure the threads don't stick around and pile up. (If you do want to know which of the two is happening, you can, e.g., print the thread ids.)
An alternative way to find out is to look at the source. As the documentation says, "Timer is a subclass of Thread and as such also functions as an example of creating custom threads". The fact that it's a subclass of Thread already tells you that each Timer is a Thread. And the fact that it "functions as an example" implies that it ought to be easy to read. If you click the link from the documentation to the source, you can see how trivial it is. Most of the work is done by Event, but that's in the same source file, and it's almost as simple. Effectively, it just creates a condition variable, waits on it (so it blocks until it times out, or you notify the condition by calling cancel), then quits.
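Paraphrased, the source boils down to roughly this simplified version (argument handling and error checks trimmed):

from threading import Thread, Event

class Timer(Thread):
    def __init__(self, interval, function, args=(), kwargs=None):
        Thread.__init__(self)
        self.interval = interval
        self.function = function
        self.args = args
        self.kwargs = kwargs or {}
        self.finished = Event()

    def cancel(self):
        # setting the event wakes run() early and skips the call
        self.finished.set()

    def run(self):
        self.finished.wait(self.interval)   # block until timeout or cancel()
        if not self.finished.is_set():
            self.function(*self.args, **self.kwargs)
        self.finished.set()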
The reason I'm answering one sub-question and explaining how I did it, rather than answering each sub-question, is because I think it would be more useful for you to walk through the same steps.
On further reflection, this probably isn't a question to be decided by optimization in the first place:
If you have a simple, synchronous program that needs to do nothing for 200 seconds, make a blocking call to sleep. Or, even simpler, just do the job and quit, and pick an external tool to schedule your script to run every 200s.
On the other hand, if your program is inherently asynchronous (especially if you've already got threads, signal handlers, and/or an event loop), there's just no way you're going to get sleep to work. If Timer is too inefficient, go to PyPI or ActiveState and find a better timer that lets you schedule repeatable timers (or even multiple timers) with a single instance and thread. (Or, if you're using signals, use signal.alarm or setitimer, and if you're using an event loop, build the timer into your main loop.)
I can't think of any use case where sleep and Timer would both be serious contenders.
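For completeness, a Unix-only sketch of the signal route mentioned above, using signal.setitimer, which re-fires on a single thread with no extra Timer objects:

import signal

def on_timer(signum, frame):
    print('Alive.')

signal.signal(signal.SIGALRM, on_timer)
# first value: delay before the first SIGALRM; second: repeat interval (seconds)
signal.setitimer(signal.ITIMER_REAL, 200, 200)

while True:
    signal.pause()  # sleep until the next signal; real work could go here instead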
