I'm writing a program to control GPIOs on my Raspberry Pi. I would like my program to ask me how long to keep a GPIO on before it turns off.
Is it possible to have it stay on for 1 hour and then turn off? The problem I'm having is that while a GPIO is on for an hour, I can't issue any other commands to turn on other GPIOs, because time.sleep is still being processed. I'd like to set multiple GPIOs for different durations at the same time.
There are many ways to solve this. Conceptually, instead of sleeping 30 seconds and then doing something, you can sleep one second, do a bunch of stuff, check the time, lather, rinse, repeat. And "sleep one second" could just as easily be a tenth of a second or five seconds or whatever seems reasonable.
Another solution is to create a second thread (or process) for this sleep command so that your main thread (or process) runs unabated.
The choice depends on what you need to do, how accurate the delay needs to be, what else is running on the system, and so on.
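A minimal sketch of the first approach (the names `run_until` and `do_other_stuff` are placeholders for this demo, not from the original post):

```python
import time

# Instead of one blocking time.sleep(3600), nap in short steps and do
# other work (handle commands, poll inputs) between naps.
def run_until(deadline, do_other_stuff):
    while time.monotonic() < deadline:
        do_other_stuff()
        time.sleep(0.1)  # could be 0.01 or 5.0; whatever is reasonable

ticks = []
run_until(time.monotonic() + 0.5, lambda: ticks.append(time.monotonic()))
# Roughly five iterations fit in the half-second window.
```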
Your current script only has one thread running; sleep() puts that thread into sleep mode, which blocks further commands.
From the Python documentation for time.sleep(secs):
time.sleep(secs)
    Suspend execution of the current thread for the given number of seconds.
You will need one more thread in the background that keeps the timer for you. In the meantime, the thread in the foreground can still take other commands.
I recommend you read this page
threading – Manage concurrent threads
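As a rough sketch of this approach, threading.Timer gives you one background timer per GPIO, so the main thread stays free. Here gpio_off just records the pin; on a real Pi it would call something like GPIO.output(pin, GPIO.LOW):

```python
import threading

turned_off = []  # stand-in for actually switching pins off

def gpio_off(pin):
    turned_off.append(pin)

# Each Timer runs in its own background thread, so several GPIOs can
# count down independently while the main thread accepts new commands.
t1 = threading.Timer(0.2, gpio_off, args=(17,))
t2 = threading.Timer(0.1, gpio_off, args=(27,))
t1.start()
t2.start()

# ... the main thread is not blocked here ...
t1.join()  # only for the demo, so we can see the result
t2.join()
print(turned_off)  # pin 27 times out first, then pin 17
```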
Related
Across the internet, I've seen several examples of a service that waits for messages from clients in an endless loop that includes a short sleep. Example:
while True:
    check_messages_stack()
    sleep(0.1)
What's the point of the sleep there? Is it supposed to save resources? If it does save some resources, would it be a meaningful amount?
sleep, as the others have said, relaxes CPU usage; in addition, if the loop is doing something like accessing network/web data, the sleep is needed so you don't hammer the host server and get yourself banned.
While your PC is running, your CPU needs to execute a lot of processes (to make your PC run).
As CPUs are extremely fast, they can appear to do multiple tasks at the same time, but they don't really (nowadays we have multi-core CPUs and multithreading, but set that aside for this explanation).
They just execute part of one process for a certain amount of time, then part of another process, then go back to the first, and so on.
Put simply, the CPU is allowed to switch from one process to another when it is not being used by the process it is actually running, for instance when that process does some I/O or waits for user interaction.
When you write a while True loop, it starts the next iteration as soon as it has finished executing the body. Here "as soon as possible" really means that: no other process gets a chance to run between two iterations, because the CPU is continuously busy.
When you put in a sleep, you allow the CPU to let other processes execute. How long the sleep is hardly matters, because a typical CPU can do billions of operations per second.
So in your question, the sleep(0.1) allows your CPU to execute hundreds of millions of operations between two check_messages_stack() calls.
For more information, look up "CPU scheduling".
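The difference is easy to measure. This sketch compares the CPU time burned by a busy loop against a sleeping loop over the same wall-clock window:

```python
import time

def busy_wait(seconds):
    # Spins flat out: the CPU never gets a chance to run anything else.
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        pass

def sleep_wait(seconds):
    # Wakes briefly ten times a second; the CPU is free in between.
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        time.sleep(0.1)

start = time.process_time()
busy_wait(0.5)
busy_cpu = time.process_time() - start

start = time.process_time()
sleep_wait(0.5)
sleep_cpu = time.process_time() - start

print(f"busy loop:  {busy_cpu:.3f}s CPU")
print(f"sleep loop: {sleep_cpu:.3f}s CPU")
```

The busy loop consumes roughly the full half second of CPU time; the sleeping loop consumes almost none.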
sleep doesn't use CPU resources, but constantly executing check_messages_stack() might (depending on what's in it), so if you don't need to run it constantly, it's good practice to give the CPU some time off.
If the function doesn't wait for anything, the loop will consume 100% of a CPU; adding that sleep gives the CPU time to execute other processes.
I have a python function that turns on some LEDs, then pauses (time.sleep), and then turns off the LEDs via the Raspberry Pi. It's a bit more complicated than that - it's actually a bunch of LEDs in various patterns so several hundred lines of code. This function does everything in an infinite loop. Originally, I called the function in a thread because I have some other code that runs continuously as well.
Now, I need to be able to terminate the function. This could be required after 10 seconds or 100 seconds; each time will just depend. From looking through the site and researching threading, it doesn't sound wise to just kill the thread, and I can't really use a flag because there are so many lines of code in the function.
Is there an alternative to using threads?
If you don't need much explicit data sharing between threads, you could use multiprocessing, which is very similar to the threading module, but uses processes (which can be terminated safely).
I have a project using two Raspberry Pis, where one sends control signals to the other, which receives and processes them to control some servomotors accordingly, all done in Python.
That part is not a problem for me; the problem lies in receiving the signals:
The method I use to ensure long range needs to be perfectly in sync and has a timeout period. If it is not perfectly in sync, it waits for the signal and can stall the whole program for approx. 5 seconds or so.
Now, is there any way to let the checking and the moving happen at the same time, so that the movement stops when the signal says to stop, but the checking does not interrupt the movement?
-Chrono
I am using Python with the Raspbian OS (based on Linux) on the Raspberry Pi board. My Python script uses GPIOs (hardware inputs). I have noticed that when a GPIO activates, its callback interrupts the current thread.
This has forced me to use locks to prevent issues when the threads access common resources. However, it is getting a bit complicated. It struck me that if the GPIO events were queued up until the main thread went to sleep (e.g. hit a time.sleep), it would simplify things considerably (i.e. like the way JavaScript deals with things).
Is there a way to implement this in Python?
Are you using the RPi.GPIO library? Or do you call your Python code from C when a callback fires?
In the case of RPi.GPIO, the callback runs in a valid Python thread, and you do not need extra synchronization if you organize the thread interaction properly.
The most common pattern is to put your event in a queue (in Python 3 the queue module will do the job; Python 2 has Queue). Then, when your main thread is ready to process the events, drain the queue and handle them all.
The only problem is finding a moment to process them. The simplest solution is to implement a function that does this and call it from time to time. If you use a long sleep call, you may have to split it into many smaller sleeps to make sure external events are processed often enough. You may even implement your own wrapper for sleep that splits one large delay into several smaller ones and processes the queue between them.
The other solution is to use Queue.get with its timeout parameter instead of sleep (it returns immediately once an event arrives in the queue). However, if you need to sleep for exactly the period you specified, you may have to do some extra work, such as measuring the elapsed time yourself and calling get again if you still need to wait after processing the events.
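A sketch of the Queue.get-with-timeout variant (the callback name and channel number are made up for the demo):

```python
import queue
import threading
import time

events = queue.Queue()
handled = []

def on_gpio_edge(channel):
    # GPIO callback thread: it only enqueues the event and never touches
    # shared state, so the main thread needs no locks.
    events.put(channel)

def sleep_processing_events(seconds):
    # Drop-in for time.sleep(seconds) that wakes early to handle queued
    # events, then waits out the remainder of the period.
    deadline = time.monotonic() + seconds
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return
        try:
            channel = events.get(timeout=remaining)
        except queue.Empty:
            return  # the full period elapsed with no events
        handled.append(channel)  # "process" the event in the main thread

# Simulate an edge arriving mid-sleep.
threading.Timer(0.1, on_gpio_edge, args=(17,)).start()
sleep_processing_events(0.4)
print(handled)
```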
Use a Queue from the queue module (a thread-safe queue) to store the tasks you want to execute. The main loop periodically checks for entries in the queue and executes them one by one when it finds something.
Your GPIO monitoring threads put their tasks into the queue (a single queue can collect from many threads).
You can model your tasks as callable objects or function objects.
I've been integrating the Python "apscheduler" package (Advanced Python Scheduler) into my app. So far it's going well, and I'm able to do almost everything I had envisioned doing with it.
Only one kink left to iron out...
The function my events call will only accept around 3 calls a second, or it fails, as it triggers very slow hardware I/O :(
I've tried limiting the max number of threads in the threadpool from 20 down to just 1 to try to slow down execution, but since I'm not really putting a big load on apscheduler, my events still fire pretty much concurrently (well... very, very close together, at least).
Is there a way to 'stagger' different events that fire within the same second?
I recently found this question because I, like you, was trying to stagger scheduled jobs slightly to compensate for slow hardware.
Including an argument like this in the scheduler's add_job call staggers the start time of each job by 200 ms (incrementing idx for each job):
next_run_time=datetime.datetime.now() + datetime.timedelta(seconds=idx * 0.2)
What you want to use is the 'jitter' option.
From the docs:
The jitter option enables you to add a random component to the execution time. This might be useful if you have multiple servers and don’t want them to run a job at the exact same moment or if you want to prevent multiple jobs with similar options from always running concurrently.
Example:
# Run the `job_function` every hour with an extra-delay picked randomly
# in a [-120,+120] seconds window.
sched.add_job(job_function, 'interval', hours=1, jitter=120)
I don't know about apscheduler, but have you considered using a Redis LIST (queue) and simply serializing the event feed into that one critically bounded function so that it fires no more than three times per second? For example, you could have it do a blocking POP with a one-second max delay, increment your trigger count for every event, sleep when the count hits three, and zero the trigger count any time the blocking POP times out. (Or you could just use 333-millisecond sleeps after each event.)
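The counting logic can be sketched with the standard library, with queue.Queue.get(timeout=1.0) standing in for the blocking Redis POP and slow_hardware_call as a placeholder for the bounded function:

```python
import queue
import time

feed = queue.Queue()   # stands in for the Redis LIST
calls = []             # timestamps of each hardware call, for the demo

def slow_hardware_call(event):
    calls.append(time.monotonic())  # placeholder for the real slow I/O

def drain(run_for=2.0):
    fired = 0
    window_start = time.monotonic()
    deadline = time.monotonic() + run_for
    while time.monotonic() < deadline:
        try:
            event = feed.get(timeout=1.0)  # blocking pop, 1 s max delay
        except queue.Empty:
            fired = 0                      # idle: zero the trigger count
            window_start = time.monotonic()
            continue
        slow_hardware_call(event)
        fired += 1
        if fired >= 3:                     # hit the 3-per-second cap
            elapsed = time.monotonic() - window_start
            if elapsed < 1.0:
                time.sleep(1.0 - elapsed)  # wait out the second
            fired = 0
            window_start = time.monotonic()

for i in range(6):
    feed.put(i)
drain()
# Six events at three per second: the fourth call lands about a second
# after the third.
```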
My solution, for future reference:
I added a basic boolean lock in the function being called, plus a wait, which seems to do the trick nicely - since it's not the calling of the function itself that raises the error, but rather a deadlock situation with what the function carries out :D