Python: starting/stopping a thread from another thread causes unexpected behavior

After some research on how to properly ask a thread to stop, I have run into some unexpected behavior.
I am working on a personal project. My aim is to run a program on a Raspberry Pi dedicated to domotics (home automation).
My code is structured as below:
a first thread is dedicated to scheduling: every day at the same hour, I send a signal on a GPIO output
a second thread is dedicated to monitoring the keyboard for manual events
whenever a specific key is pressed, I want to start a new thread dedicated to another routine, just like my first one
Here is how I proceed:
import schedule
from pynput import keyboard
import threading
first_thread = threading.Thread(target=heating, name="heating")
second_thread = threading.Thread(target=keyboard, name="keyboard")
first_thread.start()
second_thread.start()
stop_event = threading.Event()
My heating routine is defined by:
def heating():
    def job():
        GPIO.output(4, GPIO.HIGH)
        return
    schedule.every().day.at("01:00").do(job)
    while True:
        schedule.run_pending()
        time.sleep(0.5)
My keyboard monitor is defined as follows:
def keyboard():
    def on_press(key):
        if key == keyboard.Key.f4:
            shutter_thread = threading.Thread(name="shutter", target=shutter, args=(stop_event,))
            shutter_thread.start()
        if key == keyboard.Key.f5:
            stop_event.set()
    with keyboard.Listener(on_press=on_press, on_release=on_release) as listener:
        listener.join()
My shutter thread target is similar to the heating one:
def shutter(stop_event):
    def open():
        GPIO.output(6, GPIO.HIGH)
        return
    t = threading.currentThread()
    schedule.every().day.at("22:00").do(open)
    while not stop_event.is_set():
        schedule.run_pending()
        time.sleep(0.5)
Problem is, every time I press the key to start my shutter thread, the shutter routine is called, but:
the job within my shutter routine is executed twice
the job within the first thread is also now executed twice every time it is on schedule!
once I press the key to ask the shutter thread to stop, the heating (first) thread comes back to its original (and correct) behaviour, but the shutter thread does not stop
I have no idea why starting this new thread causes such a change in the behaviour of the other thread. And why is my stop event not working?
What am I doing wrong ?

Since you are using the schedule framework for managing tasks, a clean solution would be to use the same framework's API for cancelling jobs (instead of using threading.Event). That way task management remains within schedule and user interaction is handled by threading.
def keyboard():
    tasks = []
    def on_press(key):
        if key == keyboard.Key.f4:
            # Shutter task.
            tasks.append(
                schedule.every().day.at("22:00").do(lambda: GPIO.output(6, GPIO.HIGH))
            )
        if key == keyboard.Key.f5:
            schedule.cancel_job(tasks.pop(-1))
    with keyboard.Listener(on_press=on_press, on_release=on_release) as listener:
        listener.join()

# Heating task.
schedule.every().day.at("01:00").do(lambda: GPIO.output(4, GPIO.HIGH))
# Start keyboard listener.
ui = threading.Thread(target=keyboard)
ui.start()

while True:
    schedule.run_pending()
    time.sleep(0.5)

Even if a_guest's solution is a clean one, I can share a second solution for those who may face a similar situation.
A working solution is to define a specific scheduler in each thread instead of using the default one. (The module-level schedule functions all operate on a single shared default scheduler, so when two threads register jobs on it and both call run_pending(), every job ends up being run by both loops.)
Illustration:
def heating():
    def job():
        GPIO.output(4, GPIO.HIGH)
        return
    heat_sched = schedule.Scheduler()
    heat_sched.every().day.at("01:00").do(job)
    while True:
        heat_sched.run_pending()
        time.sleep(1)

def shutter(stop_event):
    def open():
        GPIO.output(6, GPIO.HIGH)
        return
    shutter_sched = schedule.Scheduler()
    shutter_sched.every().day.at("22:00").do(open)
    while True:
        if not stop_event.is_set():
            shutter_sched.run_pending()
            time.sleep(0.5)
        else:
            shutter_sched.clear()
            return

Related

Python run a function on main thread from a background thread

I created a background thread this way:
def listenReply(self):
    while self.SOCK_LISTENING:
        fromNodeRED = self.nodeRED_sock.recv(1024).decode()
        if fromNodeRED == "closeDoor":
            self.door_closed()

...
self.listenThread = Thread(target=self.listenReply, daemon=True)
self.SOCK_LISTENING = True
self.listenThread.start()
But self.door_closed() does some UI work, so that's no good. How do I call self.door_closed on the main thread instead?
One thing to keep in mind is that each thread is a sequential execution of a single flow of code, starting from the function the thread was started on.
It doesn't make much sense to simply run something on an existing thread, since that thread is already executing something, and doing so would disrupt its current flow.
However, it's quite easy to communicate between threads and it's possible to implement a thread's code such that it simply receives functions/events from other threads which tell it what to do. This is commonly known as an event loop.
For example, your main thread could look something like this:
from queue import Queue

tasks = Queue()

def event_loop():
    while True:
        next_task = tasks.get()
        print('Executing function {} on main thread'.format(next_task))
        next_task()
In your other threads you could ask the main thread to run a function by simply adding it to the tasks queue:
def listenReply(self):
    while self.SOCK_LISTENING:
        fromNodeRED = self.nodeRED_sock.recv(1024).decode()
        if fromNodeRED == "closeDoor":
            tasks.put(door_closed)
You can use threading.Event() and set it whenever you receive "closeDoor" from recv.
For example:
g_should_close_door = threading.Event()

def listenReply(self):
    while self.SOCK_LISTENING:
        fromNodeRED = self.nodeRED_sock.recv(1024).decode()
        if fromNodeRED == "closeDoor":
            g_should_close_door.set()

...
self.listenThread = Thread(target=self.listenReply, daemon=True)
self.SOCK_LISTENING = True
self.listenThread.start()

# In the main thread, check the event periodically:
if g_should_close_door.is_set():
    door_closed()
    g_should_close_door.clear()
Solved it myself using signals and slots of PyQt.
class App(QWidget):
    socketSignal = QtCore.pyqtSignal(object)  # must be defined at class level

    # BG THREAD
    def listenReply(self):
        while self.SOCK_LISTENING:
            fromNodeRED = self.nodeRED_sock.recv(1024).decode()
            print(fromNodeRED)
            self.socketSignal.emit(fromNodeRED)

.... somewhere in the init of the main thread:

        self.socketSignal.connect(self.executeOnMain)
        self.listenThread = Thread(target=self.listenReply, daemon=True)
        self.SOCK_LISTENING = True
        self.listenThread.start()
....

    def executeOnMain(self, data):
        if data == "closeDoor":
            self.door_closed()  # a method that changes the UI

Works great for me.

Async / Multi-Threading with blinker

I have a Raspberry Pi which I have hooked up with a 4-button keypad. Using the signal stuff from blinker I hooked it up to run some methods.
# sender
while True:
    if buttonIsDown == True:
        signal.send()

# receiver
@signal.connect
def sayHI():
    print("1")
    time.sleep(10)
    print("2")
This works fine; however, when I push the button a second time (within 10 seconds of the previous button press) it does not fire the method, as the thread is paused in the time.sleep(10).
How can I get it to fire the method again while it is still paused (possibly in another thread)?
It is an old question, but it may still be useful for someone else.
You can start a new thread every time the signal is emitted; that way you will be able to catch all the events as soon as they happen. Remember that in your code, since you have a while True, the signal is never connected to the function; you should have defined them in the opposite order.
Here is a working example, based on your code:
import threading
from blinker import signal
from time import sleep

custom_signal = signal(name='custom')

@custom_signal.connect
def slot(sender):
    def say_hello():
        print("1")
        sleep(10)
        print("2")
    threading.Thread(target=say_hello).start()

while True:
    value = int(input('Press 1 to continue: '))
    if value == 1:
        custom_signal.send()
    else:
        break

Python 3 Autoclicker On/Off Hotkey

I'm new to Python and I figured I'd make a simple autoclicker as a cool starter project.
I want a user to be able to specify a click interval and then turn the automatic clicking on and off with a hotkey.
I am aware of Ctrl-C, and you'll see that in my current code, but I want the program to work so that the hotkey doesn't have to be activated in the python window.
import pyautogui, sys

print("Press Ctrl + C to quit.")
interval = float(input("Please give me an interval for between clicks in seconds: "))

try:
    while True:
        pyautogui.click()
except KeyboardInterrupt:
    print("\n")
Do I need to make a tkinter message box to make the switch or can I use a hotkey?
Thanks for the help.
Update
import multiprocessing
import time
import pyHook, pyautogui, pythoncom
import queue

click_interval = float(input("Please give an interval between clicks in seconds: "))

class AutoClicker(multiprocessing.Process):
    def __init__(self, queue, interval):
        multiprocessing.Process.__init__(self)
        self.queue = queue
        self.click_interval = click_interval

    def run(self):
        while True:
            try:
                task = self.queue.get(block=False)
                if task == "Start":
                    print("Clicking...")
                    pyautogui.click(interval == click_interval)
            except task == "Exit":
                print("Exiting")
                self.queue.task_done()
                break
        return

def OnKeyboardEvent(event):
    key = event.Key
    if key == "F3":
        print("Starting auto clicker")
        # Start consumers
        queue.put("Start")
        queue.join
    elif key == "F4":
        print("Stopping auto clicker")
        # Add exit message to queue
        queue.put("Exit")
        # Wait for all of the tasks to finish
        queue.join()
    # return True to pass the event to other handlers
    return True

if __name__ == '__main__':
    # Establish communication queues
    queue = multiprocessing.JoinableQueue()
    # create a hook manager
    hm = pyHook.HookManager()
    # watch for all mouse events
    hm.KeyDown = OnKeyboardEvent
    # set the hook
    hm.HookKeyboard()
    # wait forever
    pythoncom.PumpMessages()
    AutoClicker.run(self)
First of all if you want to monitor global input outside the Python window you will need pyHook or something similar. It will allow you to monitor keyboard events. I chose to act upon the F3 and F4 key-presses, which are used for starting and stopping the autoclicker.
To accomplish what you've asked, the best way I'm aware of is to create a process which will do the clicking, and to communicate with it through the use of a Queue. When the F4 key is pressed, it will add an "Exit" string to the queue. The autoclicker will recognize this and then return.
Until the F4 key is pressed, the queue remains empty and the queue.Empty exception occurs on every loop iteration; each time it does, a single mouse click is executed.
import multiprocessing
import time
import pyHook, pyautogui, pythoncom
import queue

class AutoClicker(multiprocessing.Process):
    def __init__(self, queue, interval):
        multiprocessing.Process.__init__(self)
        self.queue = queue
        self.click_interval = interval

    def run(self):
        while True:
            try:
                task = self.queue.get(block=False)
                if task == "Exit":
                    print("Exiting")
                    self.queue.task_done()
                    break
            except queue.Empty:
                time.sleep(self.click_interval)
                print("Clicking...")
                pyautogui.click()
        return

def OnKeyboardEvent(event):
    key = event.Key
    if key == "F3":
        print("Starting auto clicker")
        # Start consumers
        clicker = AutoClicker(queue, 0.1)
        clicker.start()
    elif key == "F4":
        print("Stopping auto clicker")
        # Add exit message to queue
        queue.put("Exit")
        # Wait for all of the tasks to finish
        queue.join()
    # return True to pass the event to other handlers
    return True

if __name__ == '__main__':
    # Establish communication queues
    queue = multiprocessing.JoinableQueue()
    # create a hook manager
    hm = pyHook.HookManager()
    # watch for keyboard events
    hm.KeyDown = OnKeyboardEvent
    # set the hook
    hm.HookKeyboard()
    # wait forever
    pythoncom.PumpMessages()
Keep in mind this implementation is far from perfect, but hopefully it's helpful as a starting point.
Why can't I use a simple while loop or if/else statement?
A while loop is blocking, which means that while the loop is running it blocks all other code from executing.
So once the clicker loop starts, it will be stuck in that loop indefinitely. You can't check for F4 key presses while this is happening, because the loop blocks any other code from executing (i.e. the code that checks for key presses). Since we want the auto-clicker clicks and the key-press checks to occur simultaneously, they need to be separate processes (so they don't block each other).
The main process, which checks for key presses, needs to be able to communicate somehow with the auto-clicker process. So when the F4 key is pressed we want the clicking process to exit. Using a queue is one way to communicate (there are others). The auto-clicker can continuously check the queue from inside its while loop. When we want to stop the clicking, we add an "Exit" string to the queue from the main process. The next time the clicker process reads the queue it will see this and break.
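For readers who want to see that pattern in isolation, here is a minimal sketch of the same queue-based stop signal with the pyHook/pyautogui parts stripped out; the names worker and q are placeholders invented for this illustration, not part of the original answer.

import multiprocessing
import queue   # only needed for the queue.Empty exception
import time

def worker(q):
    while True:
        try:
            task = q.get(block=False)   # non-blocking check for a message
            if task == "Exit":
                q.task_done()
                break
        except queue.Empty:
            print("working...")         # stand-in for pyautogui.click()
            time.sleep(0.5)

if __name__ == "__main__":
    q = multiprocessing.JoinableQueue()
    p = multiprocessing.Process(target=worker, args=(q,))
    p.start()
    time.sleep(2)        # let the worker run for a bit
    q.put("Exit")        # ask it to stop
    q.join()             # returns once the worker has acknowledged the message
    p.join()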

Waking up a specific thread from sleep from another thread in python

I've been chasing my tail on this one for days now, and I'm going crazy. I'm a total amateur and totally new to Python, so please excuse my stupidity.
My main thread repeatedly checks a database for an entry and then starts threads for every new entry that it finds in the database.
The threads that it starts up basically poll the database for a value; if a thread doesn't find that value, it does some things, sleeps for 60 seconds, and starts over.
Simplified non-code for the started-up thread:
while True:
    stop = _Get_a_Value_From_Database_for_Exit()  # ..... a call to the DBMS
    if stop == 0:
        Do_stuff()
        time.sleep(60)
    else:
        break
There could be many of these threads running at any given time. What I'd like to do is have the main thread check another spot in the database for a specific value and then interrupt the sleep in the example above, in one specific thread that it started. The goal is to exit a specific thread like the one listed above without having to wait out the remainder of the sleep duration. All of these threads can be referenced by the database id that they share. I've seen references to event.wait() and event.set() and have been trying to figure out how to replace the time.sleep() with them, but I have no idea how I could use them to wake up a specific thread instead of all of them.
This is where my ignorance shows through: is there a way to do this based on the database id for the event.wait (like 12345.wait(60) in the started-up thread and 12345.set() in the main thread, all dynamic based on the ever-changing database id)?
Thanks for your help!!
The project is a little complex, and here's my version of it.
scan database file /tmp/db.dat, prefilled with two words
manager: create a thread for each word; default is a "whiskey" thread and a "syrup" thread
if a word ends in _stop, like syrup_stop, tell that thread to die by setting its stop event
each thread scans the database file and exits if it sees the word stop. It'll also exit if its stop event is set.
note that if the Manager thread sets a worker's stop_event, the worker will immediately exit. Each thread does a little bit of stuff, but spends most of its time in the stop_ev.wait() call. Thus, when the event does get set, it doesn't have to sit around, it can exit immediately.
The server is fun to play around with! Start it, then send commands to it by adding lines to the database. Try each one of the following:
$ echo pie >> /tmp/db.dat # start new thread
$ echo pie_stop >> /tmp/db.dat # stop thread by event
$ echo whiskey_stop >> /tmp/db.dat # stop another thread by event
$ echo stop >> /tmp/db.dat # stop all threads
source
import logging, sys, threading, time

STOP_VALUE = 'stop'

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)-4s %(threadName)s %(levelname)s %(message)s",
    datefmt="%H:%M:%S",
    stream=sys.stderr,
)

class Database(list):
    PATH = '/tmp/db.dat'
    def __init__(self):
        super(Database, self).__init__()
        self._update_lock = threading.Lock()
    def update(self):
        with self._update_lock:
            self[:] = [line.strip() for line in open(self.PATH)]

db = Database()

def spawn(events, key):
    events[key] = threading.Event()
    th = threading.Thread(
        target=search_worker,
        kwargs=dict(stop_ev=events[key]),
        name='thread-{}'.format(key),
    )
    th.daemon = True
    th.start()

def search_worker(stop_ev):
    """
    scan database until "stop" found, or our event is set
    """
    logging.info('start')
    while True:
        logging.debug('scan')
        db.update()
        if STOP_VALUE in db:
            logging.info('stopvalue: done')
            return
        if stop_ev.wait(timeout=10):
            logging.info('event: done')
            return

def manager():
    """
    scan database
    - word: spawn thread if none already
    - word_stop: tell thread to die by setting its stop event
    """
    logging.info('start')
    events = dict()
    while True:
        db.update()
        for key in db:
            if key == STOP_VALUE:
                continue
            if key in events:
                continue
            if key.endswith('_stop'):
                key = key.split('_')[0]
                if key not in events:
                    logging.error('stop: missing key=%s!', key)
                else:
                    # signal thread to stop
                    logging.info('stop: key=%s', key)
                    events[key].set()
                    del events[key]
            else:
                spawn(events, key)
                logging.info('spawn: key=%s', key)
        time.sleep(2)

if __name__ == '__main__':
    with open(Database.PATH, 'w') as dbf:
        dbf.write(
            'whiskey\nsyrup\n'
        )
    db.update()
    logging.info('start: db=%s -- %s', db.PATH, db)
    manager_t = threading.Thread(
        target=manager,
        name='manager',
    )
    manager_t.start()
    manager_t.join()
Rather than repetitively re-chasing a dbEngine in an infinite loop with repetitive dbSeek-s for a flag value, re-testing it for an inequality and then trying to "kill-a-sleep", change your design architecture and go for distributed process-to-process message-passing.
Both ZeroMQ and nanomsg are smart, broker-less messaging layers that are very good in this sense.
A desire to cross-breed fire and water IMHO does not bring anything good for a real-world system.
A smart, scaleable, distributed process-to-process design does.
(Figure: a simple distributed process-to-process messaging/coordination setup, courtesy imatix/ZeroMQ)
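As a rough illustration of the kind of process-to-process message-passing this answer points at (not the answerer's own code), here is a minimal sketch using pyzmq, assuming it is installed; the inproc endpoint names, the worker ids and the "stop" message are invented for the example. Each worker waits on a ZeroMQ socket instead of calling time.sleep(), so the main thread can wake and stop one specific worker immediately.

import threading
import zmq

ctx = zmq.Context.instance()

def worker(worker_id):
    # Each worker pulls from its own inproc endpoint, so the main thread can
    # address one specific worker by its id.
    sock = ctx.socket(zmq.PULL)
    sock.connect("inproc://worker-{}".format(worker_id))
    poller = zmq.Poller()
    poller.register(sock, zmq.POLLIN)
    while True:
        # The "sleep 60 seconds" becomes "poll for up to 60 seconds", which a
        # message can interrupt at any moment.
        ready = dict(poller.poll(timeout=60000))
        if sock in ready and sock.recv_string() == "stop":
            print("worker {} stopping".format(worker_id))
            sock.close()
            return
        print("worker {} doing its periodic work".format(worker_id))

# Main thread: bind one PUSH socket per worker id (inproc requires bind
# before connect), then start the workers.
controls, workers = {}, []
for wid in (12345, 67890):
    s = ctx.socket(zmq.PUSH)
    s.bind("inproc://worker-{}".format(wid))
    controls[wid] = s
    t = threading.Thread(target=worker, args=(wid,))
    t.start()
    workers.append(t)

controls[12345].send_string("stop")   # wakes and stops only worker 12345, immediately
controls[67890].send_string("stop")
for t in workers:
    t.join()
for s in controls.values():
    s.close()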

Block main thread until python background thread finishes side-task

I have a threaded python application with a long-running mainloop in the background thread. This background mainloop is actually a call to pyglet.app.run(), which drives a GUI window and also can be configured to call other code periodically. I need a do_stuff(duration) function to be called at will from the main thread to trigger an animation in the GUI, wait for the animation to stop, and then return. The actual animation must be done in the background thread because the GUI library can't handle being driven by separate threads.
I believe I need to do something like this:
import threading

class StuffDoer(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.max_n_times = 0
        self.total_n_times = 0
        self.paused_ev = threading.Event()

    def run(self):
        # this part is outside of my control
        while True:
            self._do_stuff()
            # do other stuff

    def _do_stuff(self):
        # this part is under my control
        if self.paused_ev.is_set():
            if self.max_n_times > self.total_n_times:
                self.paused_ev.clear()
        else:
            if self.total_n_times >= self.max_n_times:
                self.paused_ev.set()
        if not self.paused_ev.is_set():
            # do stuff that must execute in the background thread
            self.total_n_times += 1

sd = StuffDoer()
sd.start()

def do_stuff(n_times):
    sd.max_n_times += n_times
    sd.paused_ev.wait_for_clear()  # wait_for_clear() does not exist
    sd.paused_ev.wait()
    assert (sd.total_n_times == sd.max_n_times)
EDIT: use max_n_times instead of stop_time to clarify why Thread.join(duration) won't do the trick.
From the documentation for threading.Event:
wait([timeout])
Block until the internal flag is true. If the internal flag is true on entry, return immediately. Otherwise, block until another thread calls set() to set the flag to true, or until the optional timeout occurs.
I've found I can get the behavior I'm looking for if I have a pair of events, paused_ev and not_paused_ev, and use not_paused_ev.wait(). I could almost just use Thread.join(duration), except it needs to only return precisely when the background thread actually registers that the time is up. Is there some other synchronization object or other strategy I should be using instead?
I'd also be open to arguments that I'm approaching this whole thing the wrong way, provided they're good arguments.
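For illustration only (this is not code from the question), the pair-of-events workaround described above might look roughly like this; paused_ev and not_paused_ev mirror the names used in the question, and the worker is assumed to keep exactly one of the two events set at any time.

import threading

paused_ev = threading.Event()
not_paused_ev = threading.Event()
paused_ev.set()                      # the worker starts out paused

def mark_running():                  # called by the worker when it picks up new work
    paused_ev.clear()
    not_paused_ev.set()

def mark_paused():                   # called by the worker when it runs out of work
    not_paused_ev.clear()
    paused_ev.set()

def wait_until_done():               # called by the main thread after requesting work
    not_paused_ev.wait()             # wait until the worker has noticed the new work...
    paused_ev.wait()                 # ...then wait until it has paused again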
Hoping I get some revision or additional info from my comment, but I'm kind of wondering if you're not overworking things by subclassing Thread. You can do things like this:
from threading import Thread

class MyWorker(object):
    def __init__(self):
        t = Thread(target=self._do_work, name="Worker Owned Thread")
        t.daemon = True
        t.start()

    def _do_work(self):
        while True:
            # Something going on here, forever if necessary. This thread
            # will go away if the other non-daemon threads terminate, possibly
            # raising an exception depending on this function's body.
            pass
I find this makes more sense when the method you want to run is something that is more appropriately a member function of some other class than it would be as the run method of a Thread subclass. Additionally, this saves you from having to encapsulate a bunch of business logic inside of a Thread. All IMO, of course.
It appears that your GUI animation thread is using a spin-lock in its while True loop. This can be prevented using thread-safe queues. Based on my reading of your question, this approach would be functionally equivalent and efficient.
I'm omitting some details of your code above which would not change. I'm also assuming here that the run() method which you do not control uses the self.stop_time value to do its work; otherwise there is no need for a threadsafe queue.
from Queue import Queue
from threading import Event

class StuffDoer:
    def __init__(self, inq, ready):
        self.inq = inq
        self.ready = ready

    def _do_stuff(self):
        self.ready.set()
        self.stop_time = self.inq.get()

GUIqueue = Queue()
control = Event()
sd = StuffDoer(GUIqueue, control)

def do_stuff(duration):
    control.clear()
    GUIqueue.put(time.time() + duration)
    control.wait()
I ended up using a Queue similar to what @wberry suggested, making use of Queue.task_done and Queue.join:
import Queue
import threading

class StuffDoer(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.setDaemon(True)
        self.max_n_times = 0
        self.total_n_times = 0
        self.do_queue = Queue.Queue()

    def run(self):
        # this part is outside of my control
        while True:
            self._do_stuff()
            # do other stuff

    def _do_stuff(self):
        # this part is under my control
        if self.total_n_times >= self.max_n_times:
            try:
                self.max_n_times += self.do_queue.get(block=False)
            except Queue.Empty, e:
                pass
        if self.max_n_times > self.total_n_times:
            # do stuff that must execute in the background thread
            self.total_n_times += 1
            if self.total_n_times >= self.max_n_times:
                self.do_queue.task_done()

sd = StuffDoer()
sd.start()

def do_stuff(n_times):
    sd.do_queue.put(n_times)
    sd.do_queue.join()
    assert (sd.total_n_times == sd.max_n_times)
I made a solution based on @g.d.d.c's advice for this question. Here is my code:
threads = []

# initializing aux thread(s) in the main thread ...
t = threading.Thread(target=ThreadF, args=(...))
# t.setDaemon(True)  # I'm not sure whether it is really needed
t.start()
threads.append(t.ident)

# Block main thread
while filter(lambda thread: thread.ident in threads, threading.enumerate()):
    time.sleep(10)
Also, you can use Thread.join to block the main thread - it is a better way.
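For reference, a minimal sketch of the join-based variant (keeping the Thread objects themselves in the list instead of their idents; the worker function here is a made-up stand-in for the ThreadF target above):

import threading
import time

def worker():                       # stand-in for the ThreadF target above
    time.sleep(3)

threads = []
for _ in range(2):                  # start the auxiliary thread(s)
    t = threading.Thread(target=worker)
    t.start()
    threads.append(t)               # keep the Thread object itself, not t.ident

for t in threads:                   # block the main thread until all of them finish
    t.join()
print("all auxiliary threads finished")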
