Migrating from inherited QThread to worker model - Python

Through a lot of help in my previous questions
(Interrupting QThread sleep
and PySide passing signals from QThread to a slot in another QThread), I decided to attempt the change from the inherited QThread model to the worker model. I am tempted to stay with the inherited QThread model, since I had that working and the worker model is not, but I am not sure why the worker model isn't working for me.
This is what I am attempting to do; please let me know if there is something inherently wrong in my methodology.
I have a QtGui.QWidget that is my main GUI. I am using a QPushButton to signal.
I have attempted to reduce the code to the basics of where I believe the issue is. I have verified that the datagramHandled signal gets emitted, but the packet_handled slot doesn't seem to get called.
import time

from PySide import QtCore, QtGui


class myObject(QtCore.QObject):
    def __init__(self):
        super(myObject, self).__init__()
        self.ready = False

    @QtCore.Slot()
    def do_work(self):
        # send a packet
        self.ready = False
        while not self.ready:
            time.sleep(0.01)

    @QtCore.Slot(int)
    def packet_handled(self, errorCode):
        print "Packet received."
        self.ready = True


class myWidget(QtGui.QWidget):
    datagramHandled = QtCore.Signal(int)
    startRunThread = QtCore.Signal()

    def __init__(self, parent=None, **kwargs):
        super(myWidget, self).__init__(parent=parent)
        # Bunch of GUI setup stuff (working)
        self.myRunThread = QtCore.QThread()

    @QtCore.Slot()
    def run_command(self):
        self.myRunObj = myObject()
        self.myRunObj.moveToThread(self.myRunThread)
        self.datagramHandled.connect(self.myRunObj.packet_handled)
        self.startRunThread.connect(self.myRunObj.do_work)
        self.myRunThread.start()
        self.startRunThread.emit()

    @QtCore.Slot()
    def handle_datagram(self):
        # handle the incoming datagram
        errorCode = 0
        self.datagramHandled.emit(errorCode)

The first issue is that you need to connect your myObject.do_work method to QThread.started:
self.myRunThread.started.connect(self.myRunObj.do_work)
Secondly, your do_work method should include something along these lines to enable event processing (please forgive my rusty PyQt and pseudocode):
def do_work(self):
    while someCondition:
        # The next two lines are critical for events and queued signals
        if self.thread().eventDispatcher().hasPendingEvents():
            self.thread().eventDispatcher().processEvents(QEventLoop.AllEvents)
        if not self.meetsSomeConditionToContinueRunning():
            break
        elif self.hasWorkOfSomeKind():
            self.do_something_here()
        else:
            QThread.yieldCurrentThread()
For more on this, check out the docs for QAbstractEventDispatcher.
The logic here is that when a signal is emitted from one thread (myWidget.datagramHandled), it gets queued in your worker thread's event loop. Calling processEvents processes any pending events (including queued signals, which are really just events), invoking the appropriate slots for any queued signals (myRunObj.packet_handled).
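Putting the two points together, run_command might be wired roughly like this (a sketch reusing the names from the question, with the thread's started signal driving do_work):
def run_command(self):
    self.myRunObj = myObject()
    self.myRunObj.moveToThread(self.myRunThread)
    # queued connection: packet_handled will run in the worker thread
    self.datagramHandled.connect(self.myRunObj.packet_handled)
    # let the thread's started signal kick off the work
    self.myRunThread.started.connect(self.myRunObj.do_work)
    self.myRunThread.start()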
Further reading:
How To Really, Truly Use QThreads; The Full Explanation
Threading Basics

There are three possible ways of distributing computation or other load with Qt:
Explicitly putting the load onto a concrete QThread instance. That is thread-based concurrency.
Implicitly putting the load onto a pooled QThread instance. That is closer to task-based concurrency, yet 'manually' managed with your own logic; the QThreadPool class is used to maintain the pool of threads (see the sketch at the end of this answer).
Starting the task in its own threading context that we never explicitly manage. That is task-based concurrency, and the QtConcurrent namespace is used. My guess is that task-based concurrency and the "worker model" are the same thing (judging by your changes). Mind that QtConcurrent offers parallelization for tasks and uses exceptions (which may affect the way you write the code), unlike the rest of Qt.
Given that you use PyQt, you can also take advantage of QtConcurrent for PyQt, which is designed for the pattern you want to implement.
P.S. I see you use time.sleep(interval), and that is not good practice; it is one more indication that the proper technique should be used for implementing the 'worker model'.
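For reference, here is a minimal sketch of the pooled approach (option 2) using QThreadPool; the Worker class and its payload argument are my own illustration rather than anything from the question, and the same API exists in both PySide and PyQt:
from PySide import QtCore

class Worker(QtCore.QRunnable):
    def __init__(self, payload):
        super(Worker, self).__init__()
        self.payload = payload

    def run(self):
        # executes on a thread managed by the global QThreadPool
        self.payload()

QtCore.QThreadPool.globalInstance().start(Worker(lambda: None))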

An alternative to the solution provided by @JonHarper is to replace your while loop with a QTimer. Because you now have an event loop running in your worker thread, it can handle QTimer events correctly (as long as you construct the QTimer in the relevant thread).
This way, control is returned to the event loop periodically so that other slots can be run when required.
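For instance, do_work could start a QTimer in the worker thread instead of sleeping in a loop (a rough sketch; check_ready is a hypothetical slot, not part of the question's code):
def do_work(self):
    # send a packet, then poll for the reply via the worker thread's event loop
    self.timer = QtCore.QTimer()
    self.timer.timeout.connect(self.check_ready)
    self.timer.start(10)

def check_ready(self):
    if self.ready:
        self.timer.stop()
        # continue with whatever should happen after the reply arrives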

Related

Terminate application if subprocess ends

I have an application that does some data processing in its main thread. So far it has been a pure console application. Now I have had to add a Qt app for visualization purposes, and I did this as a separate thread.
If the Qt window is closed, the main thread of course still runs. How can I terminate the main thread once the window is closed?
import threading
import time

# (Qt imports for QApplication, QWidget, QGridLayout, QLabel, QTimer and the
# stylesheet module are omitted here, as in the original question.)


class Window(threading.Thread):
    def __init__(self, data_getter):
        super(Window, self).__init__()
        self.getter = data_getter

    def update(self):
        data = self.getter()
        # update all UI widgets

    def run(self):
        app: QApplication = QApplication([])
        app.setStyleSheet(style.load_stylesheet())
        window = QWidget()
        window.setWindowTitle("Test Widget")
        window.setGeometry(100, 100, 600, 300)
        layout = QGridLayout()
        self.LABEL_state: QLabel = QLabel("SM State: N/A")
        layout.addWidget(self.LABEL_state)
        window.setLayout(layout)
        window.show()
        timer = QTimer()
        timer.timeout.connect(self.update)
        timer.start(1000)
        app.exec_()


class Runner:
    def __init__(self):
        pass

    def data_container(self):
        return data

    def process_data(self):
        # do the data processing
        pass


def main():
    runner: Runner = Runner()
    time.sleep(1)
    w = Window(runner.data_container)
    w.start()
    while True:
        runner.process_data()
        time.sleep(2)


if __name__ == "__main__":
    main()
The best idea I have had is to give Window another function reference from Runner, register it inside Window with atexit, and have it set a termination flag that is frequently checked inside the main process (Runner). Is there a better approach? I know it might be better to have the Qt app run as the main process, but I'd like not to have to do that in this case.
There are basically two questions here: synchronising an event across two threads, and stopping a running thread from outside. The way you solve the latter problem will probably affect your solution to the former. Very broadly, you can either:
poll some flag inside a main loop (in your case the while True loop in main would be an obvious target, possibly moving the logic into process_data and having it run to completion), or
use some mechanism to stop the containing process (like a signal), optionally registering cleanup code to get things into a known state.
In either case you can then design your api however you like, but a .stop() or .cancel() method is a very normal solution.
The trouble with relying on polling is that the worst-case response time is an entire cycle of your main loop. If that's not acceptable, you probably want to trigger the containing process, or look for ways to check more frequently (if your process_data() takes << 2s to run, replace the sleep(2) with a loop of smaller delays and poll the flag there).
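A sketch of that polling pattern using a threading.Event as the flag (the names here are illustrative rather than taken from your code):
import threading
import time

stop_event = threading.Event()

def main_loop(runner):
    while not stop_event.is_set():
        runner.process_data()
        # poll the flag in small steps instead of one sleep(2)
        for _ in range(20):
            if stop_event.is_set():
                break
            time.sleep(0.1)

# the window's close/atexit handler would simply call stop_event.set()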
If stopping by setting a flag isn't workable, you can trigger the containing process. This normally implies that the triggering code is running in a different thread/process. Python's threads don't have a .terminate(), but multiprocessing.Processes do, so you could delegate your processing over to a process and then have the main code call .terminate() (or get the pid yourself and send the signal manually). In this case the main code would be doing nothing until signalled, or perhaps nothing at all.
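A minimal sketch of that delegation, assuming the processing loop can be moved into its own function (process_forever is a stand-in for your real processing code):
import multiprocessing
import time

def process_forever():
    while True:
        # stand-in for runner.process_data()
        time.sleep(2)

if __name__ == "__main__":
    p = multiprocessing.Process(target=process_forever)
    p.start()
    # ... later, for example when the window is closed:
    p.terminate()   # sends SIGTERM on POSIX
    p.join()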
Lastly, communication between the graphical thread and the processing thread depends on how you implement the rest. For simply setting a flag, exposing a method is fine. If you move the processing code to a Process and have the main thread idle, use a blocking event to avoid busy-looping.
And yes, it would be easier if the graphical thread were the main thread and it started and stopped the processing code itself. Unless you know this will greatly complicate things, have a look at how much you would need to change to do this: well-designed data processing code should just take data, process it, and push it out. If putting it in a thread is hard work, the design probably needs revisiting. Finally, there's the 'nuclear option' of just getting the pid of the main process inside your window loop and killing it. That's horribly hacky, but might be good enough for a demonstration job.
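Coming back to the flipped arrangement (GUI in the main thread, processing in a worker it owns), a rough sketch might look like this; processing_loop and build_window are hypothetical stand-ins, and the Qt names assume a binding such as PyQt5:
import threading

from PyQt5.QtWidgets import QApplication

def processing_loop(stop_event):
    while not stop_event.is_set():
        # stand-in for runner.process_data()
        stop_event.wait(2)

def main():
    stop_event = threading.Event()
    worker = threading.Thread(target=processing_loop, args=(stop_event,))
    worker.start()
    app = QApplication([])                     # the GUI owns the main thread
    window = build_window()                    # hypothetical window construction
    window.show()
    app.aboutToQuit.connect(stop_event.set)    # stop processing when the GUI exits
    app.exec_()
    worker.join()

if __name__ == "__main__":
    main()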

How can we seamlessly implement a blocking process in generic code?

How can I create a class, that starts a blocking process in the background without needing to specifically design consumers/peers of that class around asyncio/threading? For example, how can we start a websocket connection to run alongside an event-loop, without specific support for threading/asyncio in the event-loop?
I have solved the conflict by automatically starting a threading.Thread on object creation. The thread will run blocking-processes without interruption, and can reference the self of its parent. This allows me to wrap any blocking process into a supporting object, avoiding the need to specifically design around the implementation of said process.
An example is provided below. On creation, the blocking process self.process will start on a separate thread, allowing the main thread to continue operation.
import threading
import time


class SmartThread(threading.Thread):
    def __init__(self):
        super().__init__(target=self.process)
        self.start()

    def process(self):
        while True:
            print(self.clock)
            time.sleep(1)

    @property
    def clock(self):
        return time.time()
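Usage is then just construction (assuming the threading and time imports shown above); the caller needs no knowledge of threads:
t = SmartThread()   # starts printing the clock in the background immediately
print("the main thread is free to continue with other work")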

How can I use CoreBluetooth for Python without giving up the main thread

I am trying to implement a generic BLE interface that will run on OS/X and talk to a BLE peripheral device. The peripheral is very complex: It can be queried, sent hundreds of different commands, offers notifications, etc. I need to be able to connect to it, send it commands, read responses, get updates, etc.
I have all of the code I need but am being frustrated by one thing: From the limited information I can find online, it looks like the only way to make CoreBluetooth's delegate callbacks get called is by running:
from PyObjCTools import AppHelper
# [functional CoreBluetooth code that scans for peripherals]
AppHelper.runConsoleEventLoop()
The problem is that AppHelper.runConsoleEventLoop blocks the main thread from continuing, so I cannot execute code to interact with the peripheral.
I have tried running the event loop:
From a different thread ---> Delegate callbacks not called
From a subprocess ---> Delegate callbacks not called
From a forked child ---> Python crashes with error message: The process has forked and you cannot use this CoreFoundation functionality safely. You MUST exec().
From multiprocessing.Pool(1).apply_async(f) ---> Python crashes with error message: The process has forked and you cannot use this CoreFoundation functionality safely. You MUST exec().
all without success.
I do not understand the nature of AppHelper.runConsoleEventLoop. Why does it need to be run in order for the CoreBluetooth delegate callbacks to be called? Is there some other version that can be called that doesn't have to be run on the main thread? I read something on the web about it being GUI-related and therefore having to be run on the main thread, but my Python application does not have any GUI elements. Is there a flag or API that is less concerned with the GUI that I could use?
Any help would be enormously appreciated. Thanks for your time!
Update:
I spoke with an iOS/CoreBluetooth expert at work and found out that Dispatch Queues are probably the solution. I dug further and found that the author of the PyObjC package recently released a v4.1 that adds support for dispatch queues that was heretofore missing.
I've been reading Apple developer documentation for hours now and I understand that it's possible to create Dispatch Source objects that monitor certain system events (such as BLE peripheral events that I am interested in) and that configuring them involves creating and assigning a Dispatch Queue, which is the class that calls my CBCentralManager delegate callback methods. The one piece of the puzzle that I am still missing is how to connect the Dispatch Source/Queue stuff to the AppHelper.runConsoleEventLoop, which calls Foundation.NSRunLoop.currentRunLoop(). If I put the call to AppHelper on a separate thread, how do I tell it which Dispatch Source/Queue to work with in order to get event info?
So I finally figured it out. If you want to run an event loop on a separate thread so that you don't lose control of the main thread, you must create a new dispatch queue and initialize your CBCentralManager with it.
import logging
import threading

import CoreBluetooth
import libdispatch
from PyObjCTools import AppHelper


class CentralManager(object):
    def __init__(self):
        central_manager = CoreBluetooth.CBCentralManager.alloc()
        dispatch_queue = libdispatch.dispatch_queue_create('<queue name>', None)
        # 'delegate' is your CBCentralManager delegate object
        central_manager.initWithDelegate_queue_options_(delegate, dispatch_queue, None)

    def do_the_things(args):
        # scan, connect, send messages, w/e
        pass


class EventLoopThread(threading.Thread):
    def __init__(self):
        super(EventLoopThread, self).__init__()
        self.setDaemon(True)
        self.should_stop = False

    def run(self):
        logging.info('Starting event loop on background thread')
        AppHelper.runConsoleEventLoop(installInterrupt=True)

    def stop(self):
        logging.info('Stop the event loop')
        AppHelper.stopEventLoop()


event_loop_thread = EventLoopThread()
event_loop_thread.start()

# BLECentralDevice is the author's wrapper class (not shown here)
central_device = BLECentralDevice(service_uuid_list)
central_device.do_the_things('woo hoo')

event_loop_thread.stop()

Call a twisted protocol method from another thread

I have made a home security program in Python that uses a Raspberry Pi's GPIOs to sense movement and actuate the siren. The users activate/deactivate the system by presenting an NFC tag to an NFC reader that is also connected to the Raspberry Pi.
For this I need to constantly check for NFC tags in a non-blocking manner and at the same time constantly check the sensors for movement, also non-blocking. I have some more parallel tasks to do, but I think these two are enough to make my point.
Right now I use threads which I start/stop like this - Stopping a thread after a certain amount of time -
I'm not sure if this is the optimal way but as of now the system works fine.
Now I want to extend its functionality to offer notifications through websockets. I found that this can be done with Twisted, but I am confused.
Here is an example code of how I am trying to do it:
from threading import Thread, Event

from twisted.internet import reactor
from autobahn.websocket import WebSocketServerFactory, \
                               WebSocketServerProtocol, \
                               listenWS


def thread1(stop_event):
    while not stop_event.is_set():
        stop_event.wait(4)
        print "checking sensor"
        # sensor_state = GPIO.input(11)
        if sensor_state == 1:
            # how can I call send_m("sensor detected movement") #<---
            t1_stop_event.set()


t1_stop_event = Event()
t1 = Thread(target=thread1, args=(t1_stop_event,))


class EchoServerProtocol(WebSocketServerProtocol):
    def onMessage(self, msg, binary):
        print "received: " + msg
        print "stopping thread1"
        t1_stop_event.set()

    def send_m(self, msg):
        self.sendMessage(msg)


if __name__ == '__main__':
    t1.start()
    factory = WebSocketServerFactory("ws://localhost:9000")
    factory.protocol = EchoServerProtocol
    listenWS(factory)
    reactor.run()
So how can I call the send method of the server protocol from a thread like thread1?
As is often the case, the answer to your question about threads and Twisted is "don't use threads".
The reason you're starting a thread here appears to be so you can repeatedly check a GPIO sensor. Does checking the sensor block? I'm guessing not, since if it's a GPIO it's locally available hardware and its results will be available immediately. But I'll give you the answer both ways.
The main thing you are using threads for here is to do something repeatedly. If you want to do something repeatedly in Twisted, there is never a reason to use threads :). Twisted includes a great API for recurring tasks: LoopingCall. Your example, re-written to use LoopingCall (again, assuming that the GPIO call does not block) would look like this:
from somewhere import GPIO
from twisted.internet import reactor, task
from autobahn.websocket import WebSocketServerFactory, \
                               WebSocketServerProtocol, \
                               listenWS


class EchoServerProtocol(WebSocketServerProtocol):
    def check_movement(self):
        print "checking sensor"
        sensor_state = GPIO.input(11)
        if sensor_state == 1:
            self.send_m("sensor detected movement")

    def connectionMade(self):
        WebSocketServerProtocol.connectionMade(self)
        self.movement_checker = task.LoopingCall(self.check_movement)
        self.movement_checker.start(4)

    def onMessage(self, msg, binary):
        self.movement_checker.stop()

    def send_m(self, msg):
        self.sendMessage(msg)


if __name__ == '__main__':
    factory = WebSocketServerFactory("ws://localhost:9000")
    factory.protocol = EchoServerProtocol
    listenWS(factory)
    reactor.run()
Of course, there is one case where you still need to use threads: if the GPIO checker (or whatever your recurring task is) needs to run in a thread because it is a potentially blocking operation in a library that can't be modified to make better use of Twisted, and you don't want to block the main loop.
In that case, you still want to use LoopingCall, and take advantage of another one of its features: if you return a Deferred from the function that LoopingCall is calling, then it won't call that function again until the Deferred fires. This means you can shuttle a task off to a thread and not worry about the main loop piling up queries for that thread: you can just resume the loop on the main thread automatically when the thread completes.
To give you a more concrete idea of what I mean, here's the check_movement function modified to work with a long-running blocking call that's run in a thread, instead of a quick polling call that can be run on the main loop:
def check_movement(self):
    from twisted.internet.threads import deferToThread

    def get_input():
        # this is run in a thread
        return GPIO.input(11)

    def check_input(sensor_state):
        # this is back on the main thread, and can safely call send_m
        if sensor_state == 1:
            self.send_m("sensor movement detected")

    return deferToThread(get_input).addCallback(check_input)
Everything else about the above example stays exactly the same.
There are a few factors at play in your example. Short answer: study this documentation on threads in Twisted.
While you don't have to use Twisted's reactor to use protocol classes (threading and protocol implementation are decoupled), you have called reactor.run, so I consider all of the below applicable to you.
Let Twisted create threads for you. Going outside the framework can get you in trouble. There are no "public" APIs for IPC messaging with the reactor (I think), so if you use Twisted, you pretty much need to go all the way.
By default, Twisted does not switch threads to call your callbacks. To delegate to a worker thread from the main reactor thread (i.e. to perform blocking I/O), you don't have to create a thread yourself, you use reactor.callInThread and it will run in a worker thread. If you never do this, everything runs in the main reactor thread, meaning for example any I/O operations will block the reactor thread and you can't receive any events until your I/O completes.
Code running in worker threads should use reactor.callFromThread to do anything that is not thread-safe. Provide a callback, which will run in the main reactor thread. You're better safe than sorry here, trust me.
All of the above applies to Deferred processing also. So don't be afraid to use partial(reactor.callFromThread, mycallback) or partial(reactor.callInThread, mycallback) instead of simply mycallback when setting up callbacks. I learned that the hard way; without that, I found that any blocking I/O that I might do in deferred callbacks was either erroring out (due to thread safety issues) or blocking the main thread.
If you're just starting out in Twisted, it's a bit of a "trust fall". Learn to let go of managing your own threads and passing messages via Queue objects, etc. Once you figure out how Deferred and the reactor work (it's called "Twisted" for a reason!), it will seem perfectly natural to you. Twisted does force you to decouple and separate concerns in a functional programming style, but once you're over that, I've found that it's very clean and works well.
One tip: I wrote some decorators to use on all my callback functions so that I didn't have to be constantly calling callInThread and callFromThread and setting up Deferred for exception handling callbacks throughout the code; my decorators enable that behavior for me. It's likely prevented bugs from forgetting to do that, and it's certainly made Twisted development more pleasant for me.
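As an illustration of that decorator idea (my own naming, not the author's actual code), a callback can be wrapped so that it is always marshalled back onto the reactor thread:
from functools import wraps

from twisted.internet import reactor

def in_reactor_thread(f):
    """Run the decorated callback in the main reactor thread."""
    @wraps(f)
    def wrapper(*args, **kwargs):
        reactor.callFromThread(f, *args, **kwargs)
    return wrapper

@in_reactor_thread
def on_sensor_event(state):
    # safe to touch protocols/transports here
    print "sensor state: %r" % (state,)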

Adding the ability for a python Thread to report when finished

I am currently subclassing Python's threading.Thread class in order to add additional logging features to it. At the moment I am trying to get it to report both when the thread is started and when it has finished. Reporting that the thread has started is easy enough, since I can just extend the start() function. However, reporting exit has been more difficult. I tried to extend the _bootstrap and _bootstrap_inner functions to add logging after they complete, but that seems to have no effect; I cannot modify those functions at all.
Does anyone know of a way to add the ability for a thread to report that it has finished?
I usually use the target function argument to the Thread constructor, so I'd do it this way:
from threading import Thread


class MyThread(Thread):
    def __init__(self, target):
        Thread.__init__(self, target=self._target, args=(target,))

    def _target(self, target):
        print "thread starting"
        target()
        print "thread ended"
Now, it does seem like you're used to using Thread the other way, by overriding its run() method, but maybe this will be of some use anyway.
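Usage would then be (my_task stands for whatever callable you want wrapped):
def my_task():
    print "doing the work"

t = MyThread(my_task)
t.start()
t.join()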
It seems that your only option is to require your users to override some other method instead of run. That way, run in your CustomThread invokes that other method and reports when it is done.
This has an extra benefit: since the start function is non-trivial, you'll be able to report a successful start at the beginning of run instead of carefully dealing with an overridden start.
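Concretely, that could look something like this (a sketch; work is the hypothetical method your users override instead of run):
import threading

class CustomThread(threading.Thread):
    def run(self):
        print "thread starting"
        try:
            self.work()        # subclasses override work(), not run()
        finally:
            print "thread finished"

    def work(self):
        pass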
If you have a lot of asynchronous threads, maybe you should consider a message queue for inter-thread communication instead? Have the thread post messages to an exchange and then exit, and let the calling thread decide when to poll for messages. It kind of depends on your workload, though.
This has the advantage that you can later go multi-process rather than multi-threaded if you want.
I understand this suggestion may not be what you wanted.
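A sketch of the message-queue idea with the standard library's Queue module (Python 2 naming), where each thread posts a message just before it exits:
import threading
import Queue

results = Queue.Queue()

def worker(name):
    # ... do the actual work here ...
    results.put("%s finished" % name)

t = threading.Thread(target=worker, args=("worker-1",))
t.start()
print results.get()   # blocks until the worker reports completion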
