How to start, stop, and restart a thread? - python

Fairly new to Python; working on a Raspberry Pi 4 with Python 3.4.3.
Got a code working to listen for 2 distinct alarms in my lab - one for a -80 freezer getting too warm, and the other for a -20 freezer. Code listens on a microphone, streams data, Fourier-transforms it, detects the peaks I'm interested in, and triggers events when they're found - eventually going to email me and my team if an alarm goes off, but still just testing with Print commands atm. Let's call them Alarm A/EventA and Alarm B/Event B.
I want it to trigger Event A when Alarm A is detected, but then wait 1 hour before triggering Event A again (if Alarm A is still going off/goes off again in an hour).
Meanwhile, though, I also want it to continue listening for Alarm B and trigger Event B if detected - again, only once per hour.
Since I can't just do time.sleep(), I'm trying to do it with threads, but I'm having trouble starting, stopping, and restarting a thread for the 1-hour (currently just 10 seconds for testing purposes) delay.
I have variables CounterA and CounterB set to 0 to start. When Alarm A is detected I have the program execute EventA and up CounterA to 1; ditto for AlarmB/EventB/CounterB. EventA and EventB are only triggered if CounterA and CounterB are <1.
I'm having a real hard time resetting the counters after a time delay, though. Either I end up stalling the whole program after an event is triggered, or I get the error that threads can only be started once.
Here are the relevant sections of the code:
import time
import threading

CounterA = 0
CounterB = 0

def Aresetter():
    time.sleep(10)
    global CounterA
    CounterA = CounterA - 1
    thA.join()

def Bresetter():
    time.sleep(10)
    global CounterB
    CounterB = CounterB - 1
    thB.join()

thA = threading.Thread(target=Aresetter)
thB = threading.Thread(target=Bresetter)

if any(...) and CounterA < 1:    # Alarm A detection
    print('Alarm A!')
    CounterA = CounterA + 1
    thA.start()
elif any(...) and CounterB < 1:  # Alarm B detection
    print('Alarm B!')
    CounterB = CounterB + 1
    thB.start()
else:
    pass
I think the crux of my problem is I can't have the resetter functions join the threads to main once they're finished with their delayed maths - but I also don't know how to do that in the main program without making it wait for the same amount of time and thus stalling everything...

You don't need threads for this at all.
Just keep track of the last time (time.time()) you triggered each alarm, and don't trigger them if less than 60 minutes (or whatever the threshold is) has elapsed since the last time.
Something like (semi pseudocode)...
import time

last_alarm_1 = 0  # a long time ago, so alarm can trigger immediately

# ...

if alarm_1_cond_met():
    now = time.time()
    if now - last_alarm_1 > 60 * 60:  # seconds
        send_alarm_1_mail()
        last_alarm_1 = now
Repeat for alarm 2 :)
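Fleshed out for both alarms, a minimal sketch might look like this. The alarm_a_detected(), alarm_b_detected() and send_*_mail() names are placeholders for your own peak-detection and notification code, not anything from the question:
import time

ALARM_INTERVAL = 60 * 60  # seconds to wait before re-triggering the same alarm

last_alarm_a = 0  # "a long time ago", so the first detection always triggers
last_alarm_b = 0

def alarm_a_detected():
    return False  # placeholder for the real FFT peak check

def alarm_b_detected():
    return False  # placeholder

while True:
    # ... read the microphone, FFT, find peaks ...
    now = time.time()
    if alarm_a_detected() and now - last_alarm_a > ALARM_INTERVAL:
        print('Alarm A!')  # later: send_alarm_a_mail()
        last_alarm_a = now
    if alarm_b_detected() and now - last_alarm_b > ALARM_INTERVAL:
        print('Alarm B!')  # later: send_alarm_b_mail()
        last_alarm_b = now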

AKX has a better solution to your problem, but you should be aware of what this does when Aresetter() is called by the thA thread:
def Aresetter():
    ...
    thA.join()
The thA.join() method doesn't do anything to the thA thread. All it does is wait for the thread to die, and then it returns. But if it's the thA thread waiting for itself to die, it's going to be waiting for a very long time (which is why Python actually raises RuntimeError: cannot join current thread rather than letting it try).
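A tiny sketch of what that looks like in practice, stripped down from the asker's code:
import threading

def target():
    t.join()  # joining the current thread raises RuntimeError: cannot join current thread

t = threading.Thread(target=target)
t.start()
t.join()  # this join, from the main thread, is fine and returns once target() has died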

Also, there's this:
How to...restart a thread?
You can't. Never mind whether it would make sense to: you just can't do that. It's not how threads work. If you want your program to do the same task more than once "in another thread," you have a couple of options:
Create a new thread to do the task each time.
Create a single thread that does the same task again and again, possibly sleep()ing in between, or possibly awaiting some message/signal/trigger before each repetition.
Submit a task to a thread pool* each time you want the thing to be done.
Option (2) could be better than option (1) because creating and destroying threads is a lot of work. With option (2) you're only doing that once.
Option (1) could be better than option (2) because threads use a significant amount of memory. If the thread doesn't exist when it's not needed, then that memory could be used by something else.
Option (3) could be better than the both of them if the same thread pool is also used for other purposes in your program. The marginal cost of throwing a few more tasks at an already-existing thread pool is trivial.
* I don't know that Python has a ready-made, first-class ThreadPool class for you to use. It has this, https://stackoverflow.com/a/64373926/801894 , but I've never used it. It's not that hard though to create your own simple thread pool.
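For what it's worth, the standard library does ship a ready-made pool: concurrent.futures.ThreadPoolExecutor (available since Python 3.2, so it is on the asker's 3.4 as well). A minimal sketch of option (3) with it; the reset function below is an illustrative stand-in for the asker's Aresetter, not their exact code:
import time
from concurrent.futures import ThreadPoolExecutor

CounterA = 0
pool = ThreadPoolExecutor(max_workers=2)  # one worker per alarm is plenty here

def reset_counter_a():
    """Wait, then re-arm the alarm by decrementing the counter."""
    global CounterA
    time.sleep(10)
    CounterA -= 1

# Each time Alarm A fires: bump the counter and submit a fresh reset task.
# Submitting a new task reuses the pool's threads, so nothing has to be "restarted".
CounterA += 1
pool.submit(reset_counter_a)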

Related

Avoid increased runtime when opening threads in consecutive runs

I'm doing my final thesis and my topic is the creation of a software that will run and control an on-satellite experiment.
For that reason, I had to implement the reading of multiple sensors while the experiment is running. To do that, I wrote the code so that it will create a new thread for each sensor (multiprocessing might not work because I don't yet know which system the software will run on and therefore I can't say if there will be multiple processors available) and these threads run as daemons all the while the software does its thing. It works well, but now I need to test the whole thing and this is where it gets problematic:
To properly test each and every route the software could take, I have multiple variables that need to be set, so there will be a lot of test runs (I calculated around 17,000 but could be wrong). While the first few test runs go over quickly, each run takes longer and longer. I have fiddled around with my code a little bit and it turns out that without threading, each test takes about the same time. Unfortunately, I do not know why, and my knowledge of the matter is very limited. The code concerning the threading is as follows:
This sets up the creation of each thread (sensor_list will be populated with multiple sensors in non-test conditions)
sensor_list = [<a single sensor>]

for sensor in sensor_list:
    thread = threading.Thread(
        target=self.store_sensor_data,
        args=[sensor, query_frequency],
        daemon=True,
        name=f"Thread_{sensor}",
    )
    self.threads.append(thread)
    thread.start()
The function which actually deals with getting and writing the sensor data, self.store_sensor_data, looks like this:
def store_sensor_data(self, sensor, frequency):
    """Get the current reading and result from 'sensor' and store them.

    sensor (Sensor) - the sensor whose data shall be stored
    frequency (int) - the frequency (in 1/s) at which data shall be stored
    """
    value_id = 0
    while not self.HALT:
        value_id += 1
        sensor_reading = sensor.get_reading()
        sensor_result = sensor.get_result()
        try:
            # if there already is a list for that sensor, append the data to it
            self.experiment_report.sensor_data_raw[str(sensor)].append(
                (value_id, sensor_reading)
            )
        except KeyError:
            # if there is no list, create one containing the current sensor value
            self.experiment_report.sensor_data_raw[str(sensor)] = [
                (value_id, sensor_reading)
            ]
        # repeat the same for the 'result'
        try:
            self.experiment_report.sensor_data[str(sensor)].append(
                (value_id, sensor_result)
            )
        except KeyError:
            self.experiment_report.sensor_data[str(sensor)] = [
                (value_id, sensor_result)
            ]
        time.sleep(1 / frequency)
after the experiment is done, I stop the threads by calling
def interrupt_sensor_data_recording(self):
    """Interrupt the storing of sensor data by ending all daemon threads.

    threads (list) - a list of currently running threads
    """
    if len(self.threads) > 0:
        self.HALT = True
        for thread in self.threads:
            if thread.is_alive():
                logger.debug(f"Stopping thread '{thread.getName()}'")
                thread.join()
            else:
                thread.join()
                logger.debug(f"Thread '{thread.getName()}' was already stopped")
Now I am unsure whether the way I stop the daemon threads is appropriate, and this might be the source of my problems. But there might also be some implication that I don't know about yet; in both cases, it would be nice if someone with more knowledge than me could help me out here.
Thanks in advance!

Python - How to wake up a sleeping process- multiprocessing?

I need to wake up a sleeping process.
The time (t) for which it sleeps is calculated as t = D/S. Since S is varying (it can increase or decrease), I need to increase or decrease the sleeping time as well. The speed is received over a UDP protocol. So, how do I change the sleeping time of a process, keeping in mind the following:
As per the previous speed `S1`, the time to sleep is `D/S1`.
Now that the speed has changed, it should sleep for the new time, i.e. `D/S2`.
Since it has already slept for `D/S1`, it should now sleep for `D/S2 - D/S1`.
How would I do it?
As of right now, I'm just assuming that the speed will remain constant all throughout the program, hence not notifying the process. But how would I do that according to the above condition?
def process2():
    p = multiprocessing.current_process()
    time.sleep(secs1)
    # send some packet1 via UDP
    time.sleep(secs2)
    # send some packet2 via UDP
    time.sleep(secs3)
    # send some packet3 via UDP
Also, as with threads:
1) threading.activeCount(): Returns the number of Thread objects that are active.
2) threading.currentThread(): Returns the current Thread object, corresponding to the caller's thread of control.
3) threading.enumerate(): Returns a list of all Thread objects that are currently active.
What are the similar functions for getting the active count and enumerating in multiprocessing?
Not yet tested, but I think this could work:
Instead of using sleep, create a condition object and use its wait() method.
Create a Timer object which calls the notify() method of the condition object when it times out.
If you want to change the sleep time, just discard the old Timer (with the cancel() method) and create a new one.
* UPDATE *
I just tested this and it works.
This is the wait() in the process; don't forget to acquire the condition first.
def process(condition):
    condition.acquire()
    condition.wait()
    condition.release()
and this is the wake_up function, called from the main process:
def wake_up(condition):
    condition.acquire()
    condition.notify()
    condition.release()
and create and pass a condition object when creating a process (in your main, or other functions):
condition = multiprocessing.Condition(multiprocessing.Lock())
p = multiprocessing.Process(target=process, args=(condition,))
p.start()
create a Timer (this timer thread will be created in the main process):
timer = threading.Timer(wake_up_time, wake_up, args=(condition,))
start_time = time.time()
timer.start()
if you want to change the time, just cancel it and make a new Timer:
timer.cancel()
elapsed_time = time.time() - start_time
timer = threading.Timer(new_wake_up_time - elapsed_time, wake_up, args=(condition,))
timer.start()
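Putting those pieces together into one self-contained sketch; the wake-up times here are placeholder numbers standing in for D/S1 and D/S2:
import multiprocessing
import threading
import time

def process(condition):
    condition.acquire()
    condition.wait()          # "sleeps" until notify() is called
    condition.release()
    print('woke up, sending packet...')

def wake_up(condition):
    condition.acquire()
    condition.notify()
    condition.release()

if __name__ == '__main__':
    condition = multiprocessing.Condition(multiprocessing.Lock())
    p = multiprocessing.Process(target=process, args=(condition,))
    p.start()

    wake_up_time = 10                      # placeholder for D / S1
    timer = threading.Timer(wake_up_time, wake_up, args=(condition,))
    start_time = time.time()
    timer.start()

    # later, when a new speed S2 arrives over UDP:
    new_wake_up_time = 5                   # placeholder for D / S2
    timer.cancel()
    elapsed = time.time() - start_time
    timer = threading.Timer(max(new_wake_up_time - elapsed, 0), wake_up, args=(condition,))
    timer.start()

    p.join()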

Python Counter in Text-Based Game

I'm making a text-based farmville clone using objects but I need to be able to control growth rate. I need some sort of counter that will run in the background of my program and determine how grown a crop is.
for example:
class Grow(object):
    def growth(self, crop):
        self.grown = 0
        while self.grown < 5:
            <every x number of seconds add one to self.grown>
I need something like time.sleep() but something that does not stop the program from running.
Thanks =D
If you only need to know how much the crop would have grown since you last checked, you can build this into your Crop objects:
from datetime import datetime

class Crop:
    RATE = 1  # rate of growth, units per second

    def __init__(self, ..., grown=0):  # allow starting growth to be set
        ...
        self.last_update = datetime.now()
        self.grown = grown

    def grow(self):
        """Set current growth based on time since last update."""
        now = datetime.now()
        # total_seconds() counts the full gap, even across day boundaries
        self.grown += Crop.RATE * (now - self.last_update).total_seconds()
        self.last_update = now
Alternatively, you could define this functionality in a separate Growable class and have all objects that grow (e.g. Crop, Animal) inherit the grow method from that superclass.
class Growable:
    def __init__(self, grown=0):
        self.last_update = datetime.now()
        self.grown = grown

    def grow(self, rate):
        """Set current growth based on time since last update and rate."""
        now = datetime.now()
        self.grown += rate * (now - self.last_update).total_seconds()
        self.last_update = now

class Crop(Growable):
    RATE = 1

    def __init__(self, ..., grown=0):
        super().__init__(grown)
        ...

    def grow(self):
        super().grow(Crop.RATE)
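A quick usage sketch of that idea, with the ... placeholders dropped so it runs as-is:
import time
from datetime import datetime

class Crop:
    RATE = 1  # growth units per second

    def __init__(self, grown=0):
        self.last_update = datetime.now()
        self.grown = grown

    def grow(self):
        now = datetime.now()
        self.grown += Crop.RATE * (now - self.last_update).total_seconds()
        self.last_update = now

corn = Crop()
time.sleep(3)              # stand-in for the player doing other things for a while
corn.grow()                # catch up on growth only when the crop is next looked at
print(min(corn.grown, 5))  # cap at "fully grown", like the asker's < 5 loop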
There are different ways to do this, which depend on how you want to structure your app. Every game is basically running some kind of loop; the question is what kind of loop you're using.
For a simple "console mode" game, the loop is just a loop around input(). While you're waiting for the user to type his input, nothing else can happen. And that's the problem you're trying to solve.
One way to get around this is to fake it. You may not be able to run any code while you're waiting for the user's input, but you can figure out all the code you would have run, and do the same thing it would have done. If the crop is supposed to grow every 1.0 seconds up to 5 times, and it's been 3.7 seconds since the crop was planted, it's now grown 3 times. jonrsharpe's answer shows a great way to structure this.
This same idea works for graphical games that are driven by a frame-rate loop, like a traditional arcade game, but even simpler. Each frame, you check for input, update all of your objects, do any output, then sleep until it's time for the next frame. Because frames come at a fixed rate, you can just do things like this:
def grow(self, rate):
    self.grown += rate / FRAMES_PER_SECOND
A different solution is to use background threads. While your main thread can't run any code while it's waiting around for user input, any other threads keep running. So, you can spin off a background thread for the crop. You can use your original growth method, with the time.sleep(1.0) and everything, but instead of calling self.growth(crop), call threading.Thread(target=self.growth, args=[crop]).start(). That's about as simple as it gets—but that simplicity comes at a cost. If you have a thread for each of 80x25=2000 plots of land, you'll be using all your CPU time in the scheduler and all your memory for thread stacks. So, this option only works if you have only a few dozen independently-active objects. The other problem with threads is that you have to synchronize any objects that are used on multiple threads, or you end up with race conditions, and that can be complicated to get right.
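A minimal sketch of that thread-per-crop idea; Crop.growth here stands in for the asker's original sleep-based loop:
import threading
import time

class Crop:
    def __init__(self):
        self.grown = 0

    def growth(self):
        # the original blocking loop, unchanged, just running in its own thread
        while self.grown < 5:
            time.sleep(1.0)
            self.grown += 1

crop = Crop()
threading.Thread(target=crop.growth, daemon=True).start()
# meanwhile, the main thread is free to sit in its loop around input()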
A solution to the "too many threads" problem (but not the synchronization problem) is to use a Timer. The one built into the stdlib isn't really usable (because it creates a thread for each timer), but you can find third-party implementations that are, like timer2. So, instead of sleeping for a second and then doing the rest of your code, move the rest of your code into a function, and create a Timer that calls that function after a second:
def growth(self, crop):
    self.grown = 0
    def grow_callback():
        with self.lock:
            if self.grown < 5:
                self.grown += 1
                Timer(1.0, grow_callback).start()
    Timer(1.0, grow_callback).start()
Now you can call self.growth(crop) normally. But notice how the flow of control has been turned inside-out by having to move everything after the sleep (which was in the middle of a loop) into a separate function.
Finally, instead of a loop around input or sleeping until the next frame, you can use a full event loop: wait until something happens, where that "something" can be user input, or a timer expiring, or anything else. This is how most GUI apps and network servers work, and it's also used in many games. Scheduling a timer event in an event loop program looks just like scheduling a threaded timer, but without the locks. For example, with Tkinter, it looks like this:
def growth(self, crop):
    self.grown = 0
    def grow_callback():
        if self.grown < 5:
            self.grown += 1
            self.after(1000, grow_callback)
    self.after(1000, grow_callback)
One final option is to break your program up into two parts: an engine and an interface. Put them in two separate threads (or child processes, or even entirely independent programs) that communicate over queues (or pipes or sockets), and then you can write each one the way that's most natural. This also means you can replace the interface with a Tkinter GUI, a pygame full-screen graphics interface, or even a web app without rewriting any of your logic in the engine.
In particular, you can write the interface as a loop around input that just checks the input queue for any changes that happened while it was waiting, and then posts any commands on the output queue for the engine. Then write the engine as an event loop that treats new commands on the input queue as events, or a frame-rate loop that checks the queue every frame, or whatever else makes the most sense.
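A bare-bones sketch of that engine/interface split using two threads and queue.Queue; the 'grow' command is purely illustrative:
import queue
import threading

commands = queue.Queue()   # interface -> engine
updates = queue.Queue()    # engine -> interface

def engine():
    grown = 0
    while True:
        cmd = commands.get()
        if cmd == 'quit':
            break
        if cmd == 'grow':
            grown += 1
        updates.put('crop growth is %d' % grown)

threading.Thread(target=engine, daemon=True).start()

# the interface: still just a loop around input()
while True:
    cmd = input('> ')
    commands.put(cmd)
    if cmd == 'quit':
        break
    print(updates.get())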

Python: accessing a function by multiple thread concurrently without Lock mechansim

When multiple threads access the same function, do we need to implement a lock mechanism explicitly or not?
I have a program using threads.
There are two threads, t1 and t2. t1 is for add1() and t2 is for subtract1(). Both threads concurrently access the same function myfunction(caller, num).
1. I have defined a simple lock mechanism in the given program using a variable functionLock. Is this reliable, or does it need to be modified?
import time, threading

functionLock = ''  # blank means lock is open

def myfunction(caller, num):
    global functionLock
    while functionLock != '':  # check and wait until the lock is open
        print "locked by " + str(functionLock)
        time.sleep(1)
    functionLock = caller  # apply lock
    total = 0
    if caller == 'add1':
        total += num
        print "1. addition finish with Total:" + str(total)
        time.sleep(2)
        total += num
        print "2. addition finish with Total:" + str(total)
        time.sleep(2)
        total += num
        print "3. addition finish with Total:" + str(total)
    else:
        time.sleep(1)
        total -= num
        print "\nSubtraction finish with Total:" + str(total)
    print '\n For ' + caller + '() Total: ' + str(total)
    functionLock = ''  # release the lock

def add1(arg1, arg2):
    print '\n START add'
    myfunction('add1', 10)
    print '\n END add'

def subtract1():
    print '\n START Sub'
    myfunction('sub1', 100)
    print '\n END Sub'

def main():
    t1 = threading.Thread(target=add1, args=('arg1', 'arg2'))
    t2 = threading.Thread(target=subtract1)
    t1.start()
    t2.start()

if __name__ == "__main__":
    main()
The output is as follows:
START add
START Sub
1. addition finish with Total:10
locked by add1
locked by add1
2. addition finish with Total:20
locked by add1
locked by add1
3. addition finish with Total:30
locked by add1
For add1() Total: 30
END add
Subtraction finish with Total:-100
For sub1() Total: -100
END Sub
2. Is it OK if we do not use locks?
Even if I do not use the lock mechanism defined in the above program, the result is the same from both threads t1 and t2. Does this mean that Python implements locks automatically when multiple threads access the same function?
The output of the program without using the lock (functionLock) in the above program:
START add
START Sub
1. addition finish with Total:10
Subtraction finish with Total:-100
For sub1() Total: -100
END Sub
2. addition finish with Total:20
3. addition finish with Total:30
For add1() Total: 30
END add
Thanks!
In addition to the other comments on this thread about busy waiting on a variable, I would like to point out that the fact that you are not using any kind of atomic swap may cause concurrency bugs. Even though your test execution does not make them come up, with enough repetitions and different timings the following sequence of events may occur:
Thread #1 executes while functionLock!='' and gets False. Then Thread #1 is interrupted (preempted so something else can be executed), and Thread #2 executes the same line, while functionLock!='', also getting False. In this example, both threads have entered the critical section, which is clearly not what you wanted. In particular, on any line where threads modify total, the result may not be what you expected, since both threads can be in that section at the same time. See the following example:
total is 10. For the sake of simplicity, assume num is always 1. Thread #1 executes total+=num, which is composed of three operations: (i) loading the value of total, (ii) adding num to it, and (iii) storing the result in total. If after (i) Thread #1 gets preempted and Thread #2 then executes total-=num, total is set to 9. Then Thread #1 resumes. However, it had already loaded total = 10, so it adds 1 and stores 11 into the total variable. This effectively turns the decrement operation by Thread #2 into a no-op.
Notice that in the Wikipedia article linked by #ron-klein, the code uses an xchg operation, which atomically swaps a register with a variable. This is vital for the correctness of the lock. In conclusion, if you want to steer clear of incredibly hard-to-debug concurrency bugs, never implement your own locks out of plain variables as an alternative to atomic operations.
[edit] I just noticed that total is in fact a local variable in your code, so this could never happen. However, I believe you are not aware that this is why your code appears to work perfectly, since you ask "Does this mean that Python implements locks automatically when multiple threads access the same function", which is not true. Please try adding global total to the beginning of myfunction and running the threads several times, and you should see errors in the output. [/edit]
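Here is a small demonstration of that lost-update effect with a genuinely shared (global) counter and no lock. Depending on the interpreter version and timing you may need to raise N to see it, but the final total frequently comes out nonzero:
import threading

total = 0
N = 100000

def add():
    global total
    for _ in range(N):
        total += 1   # read-modify-write on a shared global: not atomic

def sub():
    global total
    for _ in range(N):
        total -= 1

t1 = threading.Thread(target=add)
t2 = threading.Thread(target=sub)
t1.start(); t2.start()
t1.join(); t2.join()

print(total)  # expected 0, but interleaved updates can be lost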
Although I don't know much Python, I would say this is like in any other language:
As long as there are no variables involved that have been declared outside of the function and can therefore be shared between threads, there shouldn't be a need for locks. And this doesn't seem to be the case with your function.
Output to console might be garbled, though.
You need to lock when the code you are writing is critical-section code, i.e. when the snippet modifies state shared between threads; if it does not, you don't need to worry about locking.
Whether methods should be locked or not is a design choice; ideally you should lock as close as possible to the shared-state access by the threads.
In your code you implement your own spin-lock. While this is possible, I don't think it's recommended in Python, since it might lead to a performance issue.
I used a well-known search engine (starts with G), querying "python lock". One of the first results is this one: Thread Synchronization Mechanisms in Python. It looks like a good article to start with.
For the code itself: You should lock whenever the operation(s) executed on a shared resource are not atomic. It currently looks like there's no such resource in your code.
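For completeness, the idiomatic way to protect shared state is a real threading.Lock used as a context manager rather than a hand-rolled flag. A minimal sketch, not the asker's exact program:
import threading

lock = threading.Lock()
total = 0  # shared state, now genuinely global

def add(num):
    global total
    with lock:      # only one thread at a time can run this block
        total += num

def subtract(num):
    global total
    with lock:
        total -= num

threads = [threading.Thread(target=add, args=(10,)),
           threading.Thread(target=subtract, args=(100,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total)  # always -90, however the threads interleave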

How to execute a function asynchronously every 60 seconds in Python?

I want to execute a function every 60 seconds on Python but I don't want to be blocked meanwhile.
How can I do it asynchronously?
import threading
import time

def f():
    print("hello world")
    threading.Timer(3, f).start()

if __name__ == '__main__':
    f()
    time.sleep(20)
With this code, the function f is executed every 3 seconds during the 20-second time.sleep(20).
At the end it gives an error, and I think that is because the threading.Timer has not been cancelled.
How can I cancel it?
You could try the threading.Timer class: http://docs.python.org/library/threading.html#timer-objects.
import threading

def f(f_stop):
    # do something here ...
    if not f_stop.is_set():
        # call f() again in 60 seconds
        threading.Timer(60, f, [f_stop]).start()

f_stop = threading.Event()
# start calling f now and every 60 sec thereafter
f(f_stop)

# stop the thread when needed
# f_stop.set()
The simplest way is to create a background thread that runs something every 60 seconds. A trivial implementation is:
import time
from threading import Thread

class BackgroundTimer(Thread):
    def run(self):
        while 1:
            time.sleep(60)
            # do something

# ... SNIP ...
# Inside your main thread
# ... SNIP ...

timer = BackgroundTimer()
timer.start()
Obviously, if the "do something" takes a long time, then you'll need to account for it in your sleep statement. But 60 seconds serves as a good approximation.
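If the work can take a noticeable fraction of the interval, one way to account for it is to time the work and sleep only for the remainder; a sketch of the same class:
import time
from threading import Thread

class BackgroundTimer(Thread):
    def run(self):
        while True:
            started = time.time()
            # do something
            elapsed = time.time() - started
            time.sleep(max(60 - elapsed, 0))  # keep the overall period close to 60 s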
I googled around and found the Python circuits Framework, which makes it possible to wait
for a particular event.
The .callEvent(self, event, *channels) method of circuits provides fire-and-suspend-until-response functionality; the documentation says:
Fire the given event to the specified channels and suspend execution
until it has been dispatched. This method may only be invoked as
argument to a yield on the top execution level of a handler (e.g.
"yield self.callEvent(event)"). It effectively creates and returns
a generator that will be invoked by the main loop until the event has
been dispatched (see :func:circuits.core.handlers.handler).
I hope you find it as useful as I do :)
./regards
It depends on what you actually want to do in the meantime. Threads are the most general and least preferred way of doing it; you should be aware of the issues with threading when you use it: not all (non-Python) code allows access from multiple threads simultaneously, communication between threads should be done using thread-safe data structures like Queue.Queue, you won't be able to interrupt the thread from outside it, and terminating the program while the thread is still running can lead to a hung interpreter or spurious tracebacks.
Often there's an easier way. If you're doing this in a GUI program, use the GUI library's timer or event functionality. All GUIs have this. Likewise, if you're using another event system, like Twisted or another server-process model, you should be able to hook into the main event loop to cause it to call your function regularly. The non-threading approaches do block your program while your function is running, but not between function calls.
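As an illustration of the GUI-timer route, Tkinter's after() reschedules a callback without blocking the event loop; a minimal sketch:
import tkinter as tk

def every_60s():
    print("hello world")
    root.after(60000, every_60s)  # reschedule in 60,000 ms; the GUI stays responsive

root = tk.Tk()
root.after(60000, every_60s)
root.mainloop()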
Why don't you create a dedicated thread in which you put a simple sleeping loop:
#!/usr/bin/env python
import time

while True:
    # Your code here
    time.sleep(60)
I think the right way to run a thread repeatedly is the following:
import threading
import time

def f():
    print("hello world")  # your code here
    myThread.run()

if __name__ == '__main__':
    myThread = threading.Timer(3, f)  # timer is set to 3 seconds
    myThread.start()
    time.sleep(10)  # it can be a loop or other time-consuming code here
    if myThread.is_alive():
        myThread.cancel()
With this code, the function f is executed every 3 seconds during the 10-second time.sleep(10). At the end, the running thread is cancelled.
If you want to invoke the method "on the clock" (e.g. every hour on the hour), you can integrate the following idea with whichever threading mechanism you choose:
import time

def wait(n):
    '''Wait until the next increment of n seconds'''
    x = time.time()
    time.sleep(n - (x % n))
    print(time.asctime())
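For example, to run a job at the top of every hour you could combine it with a plain loop; do_something here is a placeholder for the real work, and wait() is the function above minus the print:
import time

def wait(n):
    '''Wait until the next increment of n seconds'''
    time.sleep(n - (time.time() % n))

def do_something():
    print("on the hour:", time.asctime())

while True:
    wait(60 * 60)   # returns at the next whole hour
    do_something()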
[snip. removed non async version]
To do this asynchronously, you would use trio. I recommend trio to everyone who asks about async Python. It is much easier to work with, especially sockets. With sockets I have a nursery with one read and one write function; the write function writes data from a deque where it is placed by the read function, waiting to be sent. The following app works by using trio.run(function, parameters) and then opening a nursery where the program functions loop, with an await trio.sleep(60) between each pass to give the rest of the app a chance to run. This runs the program in a single process, but your machine can handle 1500 TCP connections instead of just 255 with the non-async method.
I have not yet mastered the cancellation statements, but I put in a move_on_after(70), which means the code will wait 10 seconds longer than the 60-second sleep before moving on to the next loop.
import trio

async def execTimer():
    '''This function gets executed in a nursery simultaneously with the rest of the program'''
    while True:
        with trio.move_on_after(70):
            await trio.sleep(60)
            print('60 Second Loop')

async def rest_of_program():
    # nursery.start_soon needs an async function, so the demo print lives in one
    print('do the rest of the program simultaneously')

async def OneTime_OneMinute():
    '''This function gets run by trio.run to start the entire program'''
    async with trio.open_nursery() as nursery:
        nursery.start_soon(execTimer)
        nursery.start_soon(rest_of_program)

def start():
    '''You may have only one trio.run in the entire application'''
    trio.run(OneTime_OneMinute)

if __name__ == '__main__':
    start()
This will run any number of functions simultaneously in the nursery. You can use any of the cancellable statements for checkpoints where the rest of the program gets to continue running. All trio statements are checkpoints, so use them a lot. I did not test this app, so if there are any questions just ask.
As you can see trio is the champion of easy-to-use functionality. It is based on using functions instead of objects but you can use objects if you wish.
Read more at: https://trio.readthedocs.io/en/stable/reference-core.html
