Python: what's the most efficient way to wait for input - python

I have a Python program I want to run in the background (on a Raspberry Pi) that waits for GPIO input, then performs an action and continues waiting for input until the process is killed.
What is the most efficient way to achieve this? My understanding is that using while True is not very efficient. Ideally it would use interrupts - and I could use GPIO.wait_for_edge - but that would need to be in some loop, or there would need to be some way of continuing operation once the handler completes.
Thanks

According to this: http://raspi.tv/2013/how-to-use-interrupts-with-python-on-the-raspberry-pi-and-rpi-gpio GPIO.wait_for_edge(23, GPIO.FALLING) will wait for a transition on pin 23 using interrupts instead of polling. It'll only continue when triggered. You can enclose it in a try: / except KeyboardInterrupt to catch ctrl-c.
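A minimal sketch of that pattern (the pin number and pull-up configuration here are illustrative, not from the question):

import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(23, GPIO.IN, pull_up_down=GPIO.PUD_UP)

try:
    while True:
        GPIO.wait_for_edge(23, GPIO.FALLING)   # blocks until the edge occurs, no polling
        # perform the action here, then loop back and wait again
except KeyboardInterrupt:
    GPIO.cleanup()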
If you want to continue processing, then you should register a callback function for your interrupt. See: http://sourceforge.net/p/raspberry-gpio-python/wiki/Inputs/
def callback(channel):
    # do something here
    pass

GPIO.add_event_detect(channel, GPIO.RISING, callback=callback)  # channel is the pin being watched

# continue your program here, likely in some sort of state machine

I understand that when you say "using while true" you mean polling,
which is checking the GPIO state at some time interval to detect
changes, at the expense of some processing time.
One alternative to avoid polling (from the docs) is wait_for_edge():
The wait_for_edge() function is designed to block execution of your program
until an edge is detected.
That seems to be what you are looking for; the program would suspend
execution using epoll(), IIUC.
Now assuming you meant that you don't want to use GPIO.wait_for_edge()
because you don't want to lose GPIO state changes while handling
events, you'll need to use threading. One possible solution is putting
events in a Queue and setting up two threads (a rough sketch follows the list):
One thread to do the while True: queue.put(GPIO.wait_for_edge(...)).
Another thread to perform the Queue.get().
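A rough sketch of that arrangement (the pin number is illustrative, and the pin is assumed to already be configured with GPIO.setup):

import queue
import threading
import RPi.GPIO as GPIO

events = queue.Queue()

def watcher():
    # thread 1: block on the GPIO and push every edge into the queue
    while True:
        events.put(GPIO.wait_for_edge(23, GPIO.FALLING))

def worker():
    # thread 2: pull events off the queue and handle them
    while True:
        channel = events.get()
        print("edge detected on channel", channel)

threading.Thread(target=watcher, daemon=True).start()
worker()   # run the handler in the main thread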

Related

End loop with user input in Python

I have this update function:
def update(self, interval=60):
    while True:
        # Do stuff
        time.sleep(interval)
I would like to know the possible ways to interrupt the loop via user input, once the function is called, while leaving the script running.
All I found were answers from 5+ years ago, mostly platform-dependent. Is there any new/reliable way to achieve this? I would rather avoid threading, if possible. Using Python 3.7.
You could create an interrupt handler with the signal module. This, however, still relies on the system thread for monitoring; I don't think it's costly since the thread is already spawned.
In essence, you'd still need a sort of global flag that governs the loop. When the interrupt trigger happens (user input, etc.), the handler changes the value of the flag, and the loop terminates, allowing other processing to continue.
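A minimal sketch of that idea, assuming the "user input" is Ctrl+C (SIGINT); the names are illustrative:

import signal
import time

keep_running = True

def handle_interrupt(signum, frame):
    # flip the flag; the loop notices it on its next check
    global keep_running
    keep_running = False

signal.signal(signal.SIGINT, handle_interrupt)

def update(interval=60):
    while keep_running:
        # Do stuff
        time.sleep(interval)   # the flag is only re-checked after the sleep returns

update(interval=1)

With a long interval the loop may take up to one full interval to notice the flag, so sleeping in shorter slices (or waiting on an Event with a timeout) makes it react faster.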

How to make a Python program get into a function and finish with Ctrl+X while running?

My Python program takes a lot of time to complete all the iterations of a for loop. The moment I hit a particular key/key combination on the keyboard while it is running, I want it to go into another method, save the variables to disk (using pickle, which I know how to do) and exit the program safely.
Any idea how I can do this?
Is KeyboardInterrupt a safe way to do this - just wrapping the for loop in a try/except for KeyboardInterrupt, catching it and then saving the variables in the except block?
It is only safe if, at every point in your loop, your variables are in a state which allows you to save them and resume later.
To be safe, you could instead catch the interrupt before it turns into a KeyboardInterrupt and set a flag which you can test for. To make this happen, you need to intercept the signal which causes the KeyboardInterrupt, which is SIGINT. In your signal handler, you can then set a flag which you test for in your calculation function. Example:
import signal
import time

interrupted = False

def on_interrupt(signum, stack):
    global interrupted
    interrupted = True

def long_running_function():
    signal.signal(signal.SIGINT, on_interrupt)
    while not interrupted:
        time.sleep(1)  # do your work here
    signal.signal(signal.SIGINT, signal.SIG_DFL)

long_running_function()
The key advantage is that you have control over the point at which the function is interrupted. You can add checks for if interrupted at any place you like. This helps keep the function in a consistent, resumable state when it is interrupted.
(With Python 3, this could be solved more nicely using nonlocal; that is left as an exercise for the reader, as the asker did not specify which Python version they are using.)
(This should work on Windows according to the documentation, but I have not tested it. Please report back if it does not so that future readers are warned.)

Python - wait on a condition without high cpu usage

In this case, say I want to wait for a condition that may become true at any random time.
while True:
    if condition:
        # Do whatever
        pass
    else:
        pass
As you can see, pass will just execute until the condition is True. But while the condition isn't True, the CPU is pegged running pass, causing high CPU usage, when I simply want it to wait until the condition occurs. How may I do this?
See Busy_loop#Busy-waiting_alternatives:
Most operating systems and threading libraries provide a variety of system calls that will block the process on an event, such as lock acquisition, timer changes, I/O availability or signals.
Basically, to wait for something, you have two options (same as IRL):
Check for it periodically with a reasonable interval (this is called "polling")
Make the event you're waiting for notify you: have it invoke (or, as a special case, unblock) your code somehow. This is called "event handling" or "notifications"; for system calls that block, the terms "blocking call", "synchronous call" or call-specific names are typically used instead. (A minimal sketch of this option follows.)
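In Python, the second option often means blocking on a synchronization primitive instead of spinning. A minimal sketch using threading.Event (the names here are illustrative, not from the question):

import threading

condition_met = threading.Event()

def waiter():
    condition_met.wait()   # blocks without using CPU until set() is called
    # Do whatever should happen once the condition is true
    print("condition is now true")

def producer():
    # ... work that eventually makes the condition true ...
    condition_met.set()    # wakes everything blocked in wait()

threading.Thread(target=waiter).start()
threading.Thread(target=producer).start()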
As already mentioned, you can (a) poll, i.e. check for the condition and, if it is not true, wait for some time interval; (b) if your condition is an external event, arrange a blocking wait for the state to change; or (c) take a look at the publish-subscribe model, pubsub, where your code registers an interest in a given item and other parts of the code publish the item.
This is not really a Python problem. Optimally, you want to put your process to sleep and wait for some sort of signal that the action has occurred, which will use no CPU while waiting. So it's not so much a case of writing Python code as of figuring out what mechanism makes the condition true, and then waiting on that.
If the condition is a simple flag set by another thread in your program rather than an external resource, you need to go back and learn from scratch how threading works.
Only if the thing that you're waiting for does not provide any sort of push notification that you can wait on should you consider polling it in a loop. A sleep will help reduce the CPU load but not eliminate it and it will also increase the response latency as the sleep has to complete before you can commence processing.
As for waiting on events, an event-driven paradigm might be what you want unless your program is utterly trivial. Python has the Twisted framework for this.

Force Python to run in a single thread

I am using Python with the Raspbian OS (based on Linux) on the Raspberry Pi board. My Python script uses GPIOs (hardware inputs). I have noticed that when a GPIO activates, its callback will interrupt the current thread.
This has forced me to use locks to prevent issues when the threads access common resources. However, it is getting a bit complicated. It struck me that if the GPIO events were 'queued up' until the main thread went to sleep (e.g. hits a time.sleep), it would simplify things considerably (i.e. like the way that JavaScript deals with things).
Is there a way to implement this in Python?
Are you using the RPi.GPIO library, or do you call your Python code from C when a callback fires?
In the case of RPi.GPIO, it runs a valid Python thread, and you do not need extra synchronization if you organize the threads' interaction properly.
The most common pattern is to put your event in a queue (in case of Python 3 this library will do the job, Python 2 has this one). Then, when your main thread is ready to process the event, process all the events in your queue. The only problem is how you find a moment for processing them. The simplest solution is to implement a function that does that and call it from time to time. If you use a long sleep call, you may have to split it into many smaller sleeps to make sure the external events are processed often enough. You may even implement your own wrapper for sleep that splits one large delay into several smaller ones and processes the queue between them. The other solution is to use Queue.get with timeout parameter instead of sleep (it returns immediately after an event arrives into the queue), however, if you need to sleep exactly for a period you specified, you may have to do some extra magic such as measuring the time yourself and calling get again if you need to wait more after processing the events.
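A rough sketch of the get-with-timeout variant (Python 3 names; handle() and the five-second timeout are illustrative, and the GPIO callback thread is assumed to put events into the same queue):

import queue

events = queue.Queue()          # the GPIO callback thread puts events here

def handle(event):
    print("handling", event)

def main_loop():
    while True:
        try:
            event = events.get(timeout=5)   # acts like a sleep that wakes early
        except queue.Empty:
            continue                        # timeout expired, nothing to process
        handle(event)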
Use a Queue from the queue module (Queue in Python 2) to store the tasks you want to execute. The main loop periodically checks for entries in the queue and executes them one by one when it finds something.
Your GPIO monitoring threads put their tasks into the queue (only one queue is required to collect from many threads).
You can model your tasks as callable objects or function objects.
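For instance, the monitoring callbacks could enqueue small callables that the main loop simply executes (an illustrative sketch, not the library's API):

import queue

tasks = queue.Queue()

def gpio_callback(channel):
    # runs in the RPi.GPIO callback thread: only enqueue the work
    tasks.put(lambda: print("edge on channel", channel))

def main_loop():
    while True:
        task = tasks.get()   # blocks until a task arrives
        task()               # run it in the main thread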

Efficiency of infinite loop to service GPIO

I'm using Python on Raspbian (a type of linux) on the Raspberry Pi (an embedded processor board) to monitor GPIO inputs.
See simplified version of my code below. I have an infinite loop in the python script waiting for something to happen on a GPIO i/p. Is this the correct way to do it? I.e. does this mean that the CPU is running at full whack just going round this loop, leaving no CPU cycles for other stuff? Especially as I need to be running other things in parallel (e.g. the browser).
Also what happens if the CPU is busy doing something else and a GPIO i/p changes? Does the GPIO event get stored somewhere so it is eventually serviced, or does it just get lost?
Is there a better way of doing this?
(For your answers, please note that I'm new to linux, and v. new to python and real-time programming)
#!/usr/bin/python
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)
GPIO.setup(16, GPIO.IN, pull_up_down=GPIO.PUD_UP)

def ButtonHandler(channel):
    print "Button pressed " + str(channel)
    # do stuff here

GPIO.add_event_detect(16, GPIO.FALLING, callback=ButtonHandler, bouncetime=200)

while True:
    pass
Yes, doing while True: pass will burn 100% of your CPU (or as close to it as possible) doing nothing.
From what I understand (hopefully this is documented somewhere), the RPi.GPIO module spawns a background thread that waits on the GPIO and calls your callback function for each event. So your main thread really has nothing to do. If you want this to run as a service, make it sleep for long periods of time. If you want to run it interactively (in which case you probably want it easier to cancel), sleep for shorter periods of time, maybe 0.5 seconds, and add some way to exit the loop.
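For the service case, that could look like the following (a sketch reusing the question's setup; the 60-second sleep is arbitrary):

import time

try:
    while True:
        time.sleep(60)        # the RPi.GPIO callback thread keeps servicing events
except KeyboardInterrupt:
    GPIO.cleanup()            # a simple way out when running interactively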
It would be even nicer if you could do the GPIO select in the main thread, or get a handle to the GPIO background thread that you can just join, either of which would burn no CPU at all. However, the module doesn't seem to be designed in a way to make that easy.
However, looking at the source, there is a wait_for_edge method. Presumably you could loop around GPIO.wait_for_edge instead of setting a callback. But without the documentation, and without a device to test for myself, I'm not sure I'd want to recommend this to a novice.
Meanwhile:
Also what happens if the CPU is busy doing something else and a GPIO i/p changes? Does the GPIO event get stored somewhere so it is eventually serviced, or does it just get lost?
Well, while your thread isn't doing anything, the GPIO background thread seems to be waiting on select, and select won't let it miss events. (Based on the name, that wait_for_edge function sounds like it might be edge-triggered rather than level-triggered, however, which is part of the reason I'm wary of recommending it.)
