Terminate application if subprocess ends - python

I have an application that does some data processing in its main thread. So far it was a pure console application. Now I had to add a Qt app for visualization purposes, and did this as a separate thread.
If the Qt window is closed, the main thread of course still runs. How can I terminate the main thread once the window is closed?
class Window(threading.Thread):
    def __init__(self, data_getter):
        super(Window, self).__init__()
        self.getter = data_getter

    def update(self):
        data = self.getter()
        # update all UI widgets

    def run(self):
        app: QApplication = QApplication([])
        app.setStyleSheet(style.load_stylesheet())
        window = QWidget()
        window.setWindowTitle("Test Widget")
        window.setGeometry(100, 100, 600, 300)
        layout = QGridLayout()
        self.LABEL_state: QLabel = QLabel("SM State: N/A")
        layout.addWidget(self.LABEL_state)
        window.setLayout(layout)
        window.show()
        timer = QTimer()
        timer.timeout.connect(self.update)
        timer.start(1000)
        app.exec_()
class Runner:
    def __init__(self):
        self.data = None

    def data_container(self):
        return self.data

    def process_data(self):
        # do the data processing
        pass

def main():
    runner: Runner = Runner()
    time.sleep(1)
    w = Window(runner.data_container)
    w.start()
    while True:
        runner.process_data()
        time.sleep(2)

if __name__ == "__main__":
    main()
The best idea I had is to give Window another function reference from Runner, which Window would register with atexit; it would set a termination flag that is frequently checked inside the main process (Runner). Is there a better approach? I know it might be better to have the QApp run as the main process, but I'd like not to have to do that in this case.

There are basically two questions here: synchronising an event across two threads, and stopping a running thread from outside. The way you solve the latter problem will probably affect your solution to the former. Very broadly, you can either:
poll some flag inside a main loop (in your case the while True loop in main would be an obvious target, possibly moving the logic into process_data and having it run to completion), or
use some mechanism to stop the containing process (like a signal), optionally registering cleanup code to get things into a known state.
In either case you can then design your API however you like, but a .stop() or .cancel() method is a very normal solution.
The trouble with relying on polling is that the worst-case response time is an entire cycle of your main loop. If that's not acceptable, you probably want to trigger the containing process, or look for ways to check more frequently (if your process_data() takes << 2s to run, replace the sleep(2) with a loop of smaller delays and poll the flag there).
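For instance, the polling variant might look like this. It's a minimal sketch, assuming a threading.Event as the flag; the Runner name and the 0.1 s poll interval are illustrative:

```python
import threading
import time

class Runner:
    def __init__(self):
        self._stop = threading.Event()

    def stop(self):
        # called by the window code when the Qt window closes
        self._stop.set()

    def process_data(self):
        pass  # stand-in for the real processing step

    def main_loop(self):
        while not self._stop.is_set():
            self.process_data()
            # instead of one sleep(2), poll the flag every 0.1 s
            for _ in range(20):
                if self._stop.is_set():
                    break
                time.sleep(0.1)

runner = Runner()
t = threading.Thread(target=runner.main_loop)
t.start()
time.sleep(0.25)   # ...the GUI runs meanwhile...
runner.stop()
t.join(timeout=2)
```

With this shape, the worst-case response to stop() is one poll interval plus one process_data() call, rather than the full 2 s sleep.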
If stopping by setting a flag isn't workable, you can trigger the containing process. This normally implies that the triggering code is running in a different thread/process. Python's threads don't have a .terminate(), but multiprocessing.Processes do, so you could delegate your processing over to a process and then have the main code call .terminate() (or get the pid yourself and send the signal manually). In this case the main code would be doing nothing until signalled, or perhaps nothing at all.
Lastly, communication between the graphical thread and the processing thread depends on how you implement the rest. For simply setting a flag, exposing a method is fine. If you move the processing code to a Process and have the main thread idle, use a blocking event to avoid busy-looping.
And yes, it would be easier if the graphical thread were the main thread and started and stopped the processing code itself. Unless you know this will greatly complicate things, have a look at how much you would need to change to do this: well-designed data processing code should just take data, process it, and push it out. If putting it in a thread is hard work, the design probably needs revisiting. Finally, there's the 'nuclear option' of just getting the pid of the main thread inside your window loop and killing it. That's horribly hacky, but might be good enough for a demonstration job.

Related

migrating from Inherited QThread to Worker model

So through a lot of help in my previous questions
(Interrupting QThread sleep
and PySide passing signals from QThread to a slot in another QThread) I decided to attempt to change from the inherited QThread model to the Worker model. I am thinking I should stay with the QThread model as I had that working, and the other model is not. However I am not sure why the Worker model isn't working for me.
I am attempting to do the following; please let me know if there is something inherently wrong in my methodology.
I have a QtGui.QWidget that is my main GUI. I am using a QPushButton to signal
I have attempted to reduce the code to the basics of where I believe the issue is. I have verified that datagramHandled Signal gets emitted but the packet_handled Slot doesn't seem to get called.
class myObject(QtCore.QObject):
    def __init__(self):
        super(myObject, self).__init__()
        self.ready = False

    @QtCore.Slot()
    def do_work(self):
        # send a packet
        self.ready = False
        while not self.ready:
            time.sleep(0.01)

    @QtCore.Slot(int)
    def packet_handled(self, errorCode):
        print "Packet received."
        self.ready = True
class myWidget(QtGui.QWidget):
    datagramHandled = QtCore.Signal(int)
    startRunThread = QtCore.Signal()

    def __init__(self, parent=None, **kwargs):
        super(myWidget, self).__init__(parent=parent)
        # Bunch of GUI setup stuff (working)
        self.myRunThread = QtCore.QThread()

    @QtCore.Slot()
    def run_command(self):
        self.myRunObj = myObject()
        self.myRunObj.moveToThread(self.myRunThread)
        self.datagramHandled.connect(self.myRunObj.packet_handled)
        self.startRunThread.connect(self.myRunObj.do_work)
        self.myRunThread.start()
        self.startRunThread.emit()

    @QtCore.Slot()
    def handle_datagram(self):
        # handle the incoming datagram
        errorCode = 0
        self.datagramHandled.emit(errorCode)
The first issue is that you need to connect your myObject.do_work method to QThread.started:
self.myRunThread.started.connect(self.myRunObj.do_work)
Secondly, your do_work method should include something along these lines to enable event processing (please forgive my rusty PyQt and pseudocode):
def do_work(self):
    while someCondition:
        # The next two lines are critical for events and queued signals
        if self.thread().eventDispatcher().hasPendingEvents():
            self.thread().eventDispatcher().processEvents(QEventLoop.AllEvents)
        if not self.meetsSomeConditionToContinueRunning():
            break
        elif self.hasWorkOfSomeKind():
            self.do_something_here()
        else:
            QThread.yieldCurrentThread()
For more on this, check out the docs for QAbstractEventDispatcher.
The logic here is that when a signal is emitted from one thread (myWidget.datagramHandled), it gets queued in your worker thread's event loop. Calling processEvents processes any pending events (including queued signals, which are really just events), invoking the appropriate slots for any queued signals (myRunObj.packet_handled).
Further reading:
How To Really, Truly Use QThreads; The Full Explanation
Threading Basics
There are three possible ways of distributing computation or other load with Qt:
Explicitly putting the load on a concrete QThread instance. That is thread-based concurrency.
Implicitly putting the load on a pooled QThread instance. That is closer to task-based concurrency, yet 'manually' managed with your own logic. The QThreadPool class is used for maintaining the pool of threads.
Starting the task in its own threading context, which we never explicitly manage. That is task-based concurrency, and the QtConcurrent namespace is used. My guess is that task-based concurrency and the "worker model" are the same thing (based on your changes). Mind that QtConcurrent does offer parallelization for tasks and uses exceptions (which may affect the way you write the code), unlike the rest of Qt.
Given that you use PyQt, you can also take advantage of the feature designated for the pattern you want to implement with QtConcurrent for PyQt.
P.S. I see you use thread.sleep(interval); that is not good practice, and one more indication that a proper technique should be used for implementing the 'worker model'.
An alternative to the solution provided by @JonHarper is to replace your while loop with a QTimer. Because you have an event loop running in your worker thread now, it can handle QTimer events correctly (as long as you construct the QTimer in the relevant thread).
This way, control is returned to the event loop periodically so that other slots can be run when required.
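A minimal sketch of this idea, assuming PyQt5 (adapt the imports and Slot decorators for PySide). A QCoreApplication is used so it runs without a display; Worker, start_polling, and poll are illustrative names:

```python
import sys
from PyQt5.QtCore import QCoreApplication, QObject, QThread, QTimer, pyqtSlot

class Worker(QObject):
    def __init__(self):
        super().__init__()
        self.ticks = 0
        self.timer = None

    @pyqtSlot()
    def start_polling(self):
        # Construct the QTimer *in the worker thread*, so its timeout
        # events are delivered by that thread's event loop
        self.timer = QTimer()
        self.timer.timeout.connect(self.poll)
        self.timer.start(50)

    @pyqtSlot()
    def poll(self):
        self.ticks += 1  # stand-in for checking self.ready

app = QCoreApplication(sys.argv)
thread = QThread()
worker = Worker()
worker.moveToThread(thread)
thread.started.connect(worker.start_polling)
thread.start()
QTimer.singleShot(300, app.quit)  # end the demo after ~0.3 s
app.exec_()
thread.quit()
thread.wait()
```

Because the worker thread spends its time in its event loop rather than a busy while loop, queued signals (like packet_handled above) are delivered between timer ticks with no explicit processEvents() call.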

Writing to a serial link continuously from a GUI program: need to use threads?

I've written a GUI program with PyQt4 that has to send a message string over a serial data link.
I have implemented a GUI interface with two button widgets. I need to send the data over the serial link continuously, once per second when the first button is clicked, and then stop when the second button is clicked.
My current program is able to send data only at the instant a button is clicked. This is the method I wrote to handle the button click signal:
def sendMessage(self):
    while 1:
        print "Hello........"
        if checke == False:
            break
Do I need to use threads to solve this problem?
It depends... if the send operation is fast, you can use the QTimer class. It integrates with the Qt event loop so you don't have to worry about threading issues. Serial communications can be slow, depending on how much data you are sending, so I can't say for sure if this is the right solution for you.
Yes. The key to GUI programming is never do any long operation on the main thread, because it'll block the whole program until that operation is complete.
If you want to continuously send data over network, do it in a background thread.
Some example code for you.
class MessageWorker(QtCore.QThread):
    def __init__(self):
        super(MessageWorker, self).__init__()
        self.ok_to_send = False
        self.stop_requested = False

    def run(self):
        while not self.stop_requested:
            if self.ok_to_send:
                self.send_message()
            time.sleep(1)

    def start_send(self):
        self.ok_to_send = True

    def pause_send(self):
        self.ok_to_send = False

    def stop(self):
        self.stop_requested = True
Then in the main program just call
worker = MessageWorker()
worker.start()      # Start the background thread
worker.start_send() # Start sending messages
worker.pause_send() # Pause sending messages
worker.stop()       # Stop sending messages permanently
Yes, you need to use threads. In any GUI-based program, any work that's going to take a non-trivial amount of time should always happen on a separate thread to avoid blocking the UI—whenever you see an "unresponsive" program, that's almost always due to the program failing to process window messages because its UI thread is blocked inside some long operation.
One easy way to start a background thread is to use the threading module. Here's how you might use it to write data to the serial port once per second:
class MyClass:
    # This method will run on a separate thread
    def _serial_port_worker(self):
        while self._run_worker:
            self.send_data_to_serial_port()
            time.sleep(1)

    # Call this to start the worker thread
    def start_worker_thread(self):
        self._run_worker = True
        worker_thread = threading.Thread(target=self._serial_port_worker)
        worker_thread.start()

    # Call this to tell the worker thread to stop
    def stop_worker_thread(self):
        self._run_worker = False
Basically you have three options:
Use a second thread to do the serial comms. GUI toolkits aren't always thread-safe, so you should only make calls to them from the main thread. Additionally, there is a limitation to threading in Python (the Global Interpreter Lock): only one thread at a time can be executing Python bytecode.
Use the GUI toolkit's timeout function (might be called differently) to create an event every now and then. In the event handler do the serial comms. Make sure that you use non-blocking reads and writes (in pyserial, configure a timeout in the Serial object), otherwise your app might become unresponsive.
Do the serial communications from a second process using the multiprocessing module. Even if the second process blocks, it won't affect the GUI. You can use multiprocessing.Queue to communicate between the GUI and the other process.

QT/PySide processEvents call locks up

Why would a call to processEvents block doing nothing for up to 9 seconds?
I have an application with a PySide-based QT interface, where the UI code sits as a decoupled layer over the lower level actual application logic. When the user performs an action which executes lower level application logic that may run for a while, effectively what happens is:
Directly on the GUI thread a progress dialog is displayed.
Directly on the GUI thread, the lower level logic starts a worker thread.
Directly on the GUI thread, the lower level logic loops updating the progress dialog (indirectly/decoupled) and ticking the application event queue via QtGui.qApp.processEvents() (again indirectly/decoupled).
On the worker thread, QT functions are invoked (again indirectly/decoupled) in reaction to events, and these happen on the GUI thread via slots/signals, running when the GUI thread (as mentioned above) calls processEvents().
Directly on the GUI thread, before the loop exits, the last processEvents() call blocks for around 9 seconds. This is after all the logic on the worker thread is over and done with, and there are no more functions waiting to run on it via signal/slot calls. Nothing related to my application is happening in this call. What is it doing there? Why is it blocking? I've tried passing in a max processing time of 300 ms and seeing if it exits, but this makes no difference. The call locks up as long as it wants to.
The progress dialog closes and the user gets focus back.
This is all spread over a lot of files, implemented in a decoupled manner. I'll try and provide snippets to give a picture of the flow.
The decoupled lower level logic worker loop:
while not completed_event.wait(0.1) and not work_state.is_cancelled():
    work_completeness, work_description = work_state.get_completeness(), work_state.get_description()
    for client in self.clients:
        if work_completeness != last_completeness or work_description != last_description:
            client.event_prolonged_action_update(client is acting_client, work_description, step_count * work_completeness)
        # THE LAST CALL TO THE NEXT LINE LOCKS UP FOR NO REASON
        client.event_tick(client is acting_client)
    last_completeness, last_description = work_completeness, work_description
The PySide/QT layer client event_tick function:
def event_tick(self, active_client):
    # THIS IS WHERE THE LOCK UP HAPPENS
    QtGui.qApp.processEvents()
Signal/slot usage in the PySide/QT layer to get worker thread calls happening on the GUI thread:
def event_pre_line_change(self, active_client, line0, line_count):
    self.pre_line_change_signal.emit((line0, line_count))

def event_post_line_change(self, active_client, line0, line_count):
    self.post_line_change_signal.emit((line0, line_count))

def event_uncertain_reference_modification(self, active_client, data_type_from, data_type_to, address, length):
    self.uncertain_reference_modification_signal.emit((data_type_from, data_type_to, address, length))
The reason that I delegate the calls on the worker thread over to the GUI thread using signals/slots, is that this is what PySide/QT requires given they will update the UI.
Reproduction case if you want to dig deeper:
Download and get the code running according to the GitHub project readme text.
Download the test file "NEURO" (<200KB in size) from mega (apologies, it was easiest site to upload to).
Load NEURO in PeaSauce.
Go to offset 0x1A19E (CTRL-G)
Change the data type to code (Menu: Edit / Change address datatype / Code)
Observe progress dialog come and go.
Observe ~9 second lock up.

How do I make a button in my tk UI that makes the program do a whole ton of stuff when it is pressed not freeze the window?

I'm just getting started with Tk and using it in Python. I set up a button that does a ton (like two minutes' worth) of stuff behind the scenes when you press it. I also set up a status text to show what's going on while this is happening. I set it up like this:
status_text = Tkinter.StringVar()

def go(*args):
    status_text.set("Logging in...")
    do_lots_of_stuff()
    status_text.set("Doing stuff #1...")
    do_even_more_stuff()
    status_text.set("Success!")

ttk.Button(mainframe, text="Stats!", command=go).grid(column=1, row=4)
The problem is that when you press that button the window freezes, and the status text never actually changes. It looks broken, and doesn't come out of this state until all the processing finishes 2-3 minutes later. How do I make it not freeze and update the status text?
It's time to learn multithreading!
What's happening is that the GUI (main thread) is waiting for the method to return so that it can continue updating the interface.
You'll want to cause the action of a button to spawn a threading.Thread instead of running the heavy code in the main thread. You'll also want to create a Queue to access the data from the other thread (since sending GUI requests should ONLY be made from the main thread).
import threading, Queue  # "queue" in Python 3

my_queue = Queue.Queue()

def go(*args):
    my_thread = threading.Thread(target=function_that_does_stuff)
    my_thread.start()

def function_that_does_stuff():
    my_queue.put("Logging in...")
    do_lots_of_stuff()
    my_queue.put("Doing stuff #1...")
    do_even_more_stuff()
    my_queue.put("Success!")
Then you'll need a function that polls the queue for updates; schedule it to run periodically on the main thread (in Tk, typically with root.after). Use a non-blocking get so the GUI never stalls on an empty queue:
def OnUpdate(*args):
    try:
        status_text.set(my_queue.get_nowait())
    except Queue.Empty:
        pass
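Here is the same producer/consumer pattern stripped of the Tk specifics so the moving parts are visible (Python 3 module names; worker and poll_updates are illustrative):

```python
import queue      # "Queue" in Python 2
import threading

updates = queue.Queue()

def worker():
    # stand-ins for do_lots_of_stuff / do_even_more_stuff
    updates.put("Logging in...")
    updates.put("Doing stuff #1...")
    updates.put("Success!")

def poll_updates(collected):
    # In the GUI this would run every 100 ms via root.after(100, ...)
    try:
        while True:
            collected.append(updates.get_nowait())
    except queue.Empty:
        pass

t = threading.Thread(target=worker)
t.start()
t.join()          # the GUI would not join; it just keeps polling
seen = []
poll_updates(seen)
```

The key point is that only the main thread ever touches the widgets; the worker communicates exclusively through the queue.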
If you have control of do_lots_of_stuff and you can break it into small chunks, you can place small jobs on the event queue to do each chunk.
For example, if your do_lots_of_stuff is reading lines from a file, create a method that reads one line and then puts a job on the event queue to call itself after a millisecond or two. This way the event loop continues to run, and your code does a little processing on each iteration. This is a surprisingly effective technique, though it only works if you're able to break your 'lots of stuff' into atomic chunks.
If you can't do that you'll have to either use multiple threads or multiple processes. I personally prefer the latter -- multiprocessing code is (arguably) less difficult to write and debug.
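The chunked approach can be sketched without Tk at all; here a plain list of callbacks stands in for the event queue, and in real code the re-scheduling line would be root.after(1, process_chunk, index + 1):

```python
# "pending" stands in for Tk's event queue
pending = []
results = []
lines = ["line %d" % i for i in range(10)]

def process_chunk(index):
    if index >= len(lines):
        return                                        # all done
    results.append(lines[index].upper())              # one small chunk of work
    pending.append(lambda: process_chunk(index + 1))  # re-schedule ourselves

pending.append(lambda: process_chunk(0))
while pending:            # stand-in for the running event loop
    pending.pop(0)()
```

Between any two chunks the real event loop would also be dispatching redraws and button clicks, which is exactly why the window stays responsive.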

On-Demand Python Thread Start/Join Freezing Up from wxPython GUI

I'm attempting to build a very simple wxPython GUI that monitors and displays external data. There is a button that turns the monitoring on/off. When monitoring is turned on, the GUI updates a couple of wx StaticLabels with real-time data. When monitoring is turned off, the GUI idles.
The way I tried to build it was with a fairly simple Python Thread layout. When the 'Start Monitoring' button is clicked, the program spawns a thread that updates the labels with real-time information. When the 'Stop Monitoring' button is clicked, thread.join() is called, and it should stop.
The start function works and the real-time data updating works great, but when I click 'Stop', the whole program freezes. I'm running this on Windows 7 64-bit, so I get the usual "This Program has Stopped Responding" Windows dialog.
Here is the relevant code:
class MonGUI(wx.Panel):
    def __init__(self, parent):
        wx.Panel.__init__(self, parent)
        ...
        ... other code for the GUI here ...
        ...
        # Create the thread that will update the VFO information
        self.monThread = Thread(None, target=self.monThreadWork)
        self.monThread.daemon = True
        self.runThread = False

    def monThreadWork(self):
        while self.runThread:
            ...
            ... Update the StaticLabels with info
            ... (This part working)
            ...

    # Turn monitoring on/off when the button is pressed.
    def OnClick(self, event):
        if self.isMonitoring:
            self.button.SetLabel("Start Monitoring")
            self.isMonitoring = False
            self.runThread = False
            self.monThread.join()
        else:
            self.button.SetLabel("Stop Monitoring")
            self.isMonitoring = True
            # Start the monitor thread!
            self.runThread = True
            self.monThread.start()
I'm sure there is a better way to do this, but I'm fairly new to GUI programming and Python threads, and this was the first thing I came up with.
So, why does clicking the button to stop the thread make the whole thing freeze up?
In wxPython, GUI operations need to take place in the main thread. At places in your code you are calling the GUI from a different thread.
The easiest solution is to use wx.CallAfter(). A line of code would look like
wx.CallAfter(self.button.SetLabel, "Start Monitoring")
which will then call self.button.SetLabel("Start Monitoring") from the main thread after the current handler completes.
There are other ways around this as well, such as using a Python threading Queue or wx.PostEvent, but start with CallAfter because it's easiest.
Other issues are also relevant, like you can't restart the same thread, but using CallAfter will stop the crashing.
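The queue-based alternative mentioned above can be sketched without wx at all; the names here are illustrative, and the idea is simply to marshal callables to the main thread instead of calling the UI directly:

```python
import queue
import threading

ui_calls = queue.Queue()   # callables destined for the main thread

def call_after(fn, *args):
    # Worker threads put work here instead of touching the UI directly
    ui_calls.put((fn, args))

labels = []
def set_label(text):       # stand-in for self.button.SetLabel
    labels.append(text)

t = threading.Thread(target=call_after, args=(set_label, "Start Monitoring"))
t.start()
t.join()

# The main thread drains the queue from its event loop (e.g. on a timer)
while not ui_calls.empty():
    fn, args = ui_calls.get()
    fn(*args)
```

wx.CallAfter is essentially this pattern with the draining step built into wx's own event loop.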
It's likely hanging on join([timeout]), which blocks the calling thread until the thread whose join() method is called terminates – either normally or through an unhandled exception – or until the optional timeout occurs.
Do you have some inner loop in your thread, or a blocking call that waits for some source of data that may never come? When I wrote a basic serial program that grabbed COM port data, it would sometimes hang because a read function in my thread would block until it got something.
I would sprinkle in a few debugging print statements to see what's happening.
Edit:
I'd also use a threading.Event() instead of a Boolean flag, e.g.:
# in the init code...
self.runThread = threading.Event()
# when starting thread...
self.runThread.set()
self.monThread.start()
# in the thread...
while self.runThread.isSet():
pass # do stuff
# killing the thread...
self.runThread.clear()
self.monThread.join()
This shouldn't make it work differently, but it's a slightly safer way to do it.
tom10 has the right idea with avoiding UI updates from the monitor thread.
Also, it is probably not a good idea to have the blocking call self.monThread.join() in your UI thread. If you want the UI to give some feedback that the monitor thread has actually ended, have monThreadWorker issue a wx.CallAfter() or wx.PostEvent() just before it closes.
Avoid anything that blocks in your UI thread, and you will avoid deadlocking the UI.
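One way to drop the blocking join() entirely is to let the worker announce its own completion; here a plain callback stands in for the wx.CallAfter / wx.PostEvent notification, and the Monitor name is illustrative:

```python
import threading
import time

class Monitor:
    def __init__(self, on_finished):
        self.run_flag = threading.Event()
        self.on_finished = on_finished  # in wx, wrap this in wx.CallAfter
        self.thread = None

    def start(self):
        self.run_flag.set()
        self.thread = threading.Thread(target=self._work)
        self.thread.start()

    def stop(self):
        # Just request the stop; never join() from the UI thread
        self.run_flag.clear()

    def _work(self):
        while self.run_flag.is_set():
            time.sleep(0.01)   # stand-in for reading real-time data
        self.on_finished()     # tell the UI we are really done

events = []
mon = Monitor(on_finished=lambda: events.append("finished"))
mon.start()
mon.stop()
mon.thread.join()  # joined here only so the demo exits cleanly
```

Note also that a fresh Thread is created on every start(), which sidesteps the "can't restart the same thread" problem from the original code.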
