I started doing a simple command line chat, then I learned about tkinter and added GUI, then I added voice support.
As soon as I learned that Python threads don't run in parallel (thanks to the GIL), I started using processes and pipes for inter-process communication.
The main process launches three child processes to handle the GUI, sound input, and sound output.
The main process connects to the server, sends/receives messages (updating the GUI).
The GUI process runs the tkinter loop and communicates with the main process when needed.
The way I call procedures in another process is by sending, through a pipe, the name of the method and arguments for it, which the receiving process will execute. There is no return, only calls.
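A minimal sketch of that call-by-name mechanism, assuming the receiving process exposes its callable methods on a handler object (both Handler and show_message are hypothetical stand-ins):

from multiprocessing import Process, Pipe

class Handler:
    # stand-in for whatever API the receiving process exposes
    def show_message(self, text):
        print('GUI would display:', text)

def child(conn):
    handler = Handler()
    while True:
        name, args = conn.recv()       # blocks until a call arrives
        if name == 'stop':             # sentinel to shut the loop down
            break
        getattr(handler, name)(*args)  # dispatch by method name; no return value

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=child, args=(child_conn,))
    p.start()
    parent_conn.send(('show_message', ('hello',)))  # fire-and-forget call
    parent_conn.send(('stop', ()))
    p.join()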
I would like to know what is wrong with this architecture, and what the alternatives are.
What design patterns exist for this problem?
Related
I learned the hard way that tkinter is not thread-safe when starting independent threads with tkinter functionality from the main tkinter thread. I got error messages in a (for me) non-reproducible way, mostly "main thread is not in main loop" in connection with internal __del__ calls after I stopped my application. Sometimes the kernel crashed during or after execution; often everything just ran smoothly.
These independent threads should run data acquisitions (DAQ) at a couple of instruments, with different GUIs depending on the type of instrument. Threading seems feasible, as it is not known from the start which instrument will be needed at what time, DAQ tasks should be queued up if an instrument is busy, etc.
So, my idea now is to start the DAQ threads without any tkinter functionality from the main thread. The specific DAQ thread knows which specific GUI to use and puts this specific GUI class into a queue which is handled in the main GUI/tkinter thread. The instance of the GUI class will then be created in the GUI/tkinter thread.
Will this approach still violate thread-safety or is everything ok, as long as the GUI instances are created in the main tkinter thread?
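A minimal sketch of that idea, assuming Python 3 names (InstrumentGUI is a hypothetical per-instrument window):

import queue
import threading
import tkinter as tk

gui_requests = queue.Queue()

class InstrumentGUI(tk.Toplevel):
    # hypothetical per-instrument window
    pass

def daq_thread():
    # ... acquire data, decide which GUI is needed; no tkinter calls here ...
    gui_requests.put(InstrumentGUI)   # hand over the class, not an instance

root = tk.Tk()

def poll_queue():
    try:
        gui_class = gui_requests.get_nowait()
    except queue.Empty:
        pass
    else:
        gui_class(root)               # the widget is created in the tkinter thread
    root.after(100, poll_queue)       # check again in 100 ms

threading.Thread(target=daq_thread, daemon=True).start()
root.after(100, poll_queue)
root.mainloop()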
As long as you only access tkinter widgets and functions from a single thread, it should work just fine. One exception, as far as I understand, is that it's safe to call the event_generate method from other threads. You can push data onto a queue and then generate an event; the event can then be handled in the main thread, where the data can be pulled off the queue and processed.
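A minimal sketch of that queue-plus-event pattern, assuming Python 3 and tkinter (the <<NewData>> event name is arbitrary):

import queue
import threading
import tkinter as tk

data_queue = queue.Queue()
root = tk.Tk()

def worker():
    data_queue.put('result from worker thread')
    # reportedly safe to call from another thread; when='tail' queues the event
    root.event_generate('<<NewData>>', when='tail')

def on_new_data(event):
    print(data_queue.get_nowait())    # runs in the main/tkinter thread

root.bind('<<NewData>>', on_new_data)
threading.Thread(target=worker, daemon=True).start()
root.mainloop()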
I am a bit confused with multiprocessing. I have a video processing script which can be run from the command line or launched from a PySide application using a subprocess call. The script seems to run fine from the command line and basically initializes a pool of workers which each process a separate video file.
When I run the program from the GUI, however, the OS tells me my program is not responding. I would like to make use of all the cores on my system for multiprocessing, but I would also like to prevent this annoyance. What should I do to get around this? Do I start the initial script in a thread or something?
As you are speaking of PySide, I assume your program is a GUI one. In a GUI program, all processing must occur in a worker thread if you want to keep the UI responsive. So yes, the initial script must be started in a thread distinct from the main thread (the main one is reserved for the UI).
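A minimal sketch with PySide's QThread, where run_pool is a hypothetical stand-in for your existing multiprocessing.Pool code:

from PySide.QtCore import QThread, Signal

def run_pool():
    # placeholder for the existing multiprocessing.Pool video-processing code
    pass

class Worker(QThread):
    done = Signal()

    def run(self):
        run_pool()        # the blocking work happens here, off the GUI thread
        self.done.emit()  # tell the GUI it can update itself

# inside the main window:
#     self.worker = Worker()
#     self.worker.done.connect(self.on_done)  # on_done is a hypothetical slot
#     self.worker.start()                     # returns immediately; UI stays responsive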
I wanted to run a function in parallel to the main program in Python. Suppose I have a speech recognition function. I want it to run in the background and interrupt the main program if a particular utterance is heard. But at the same time, I have other tasks to perform. So the speech recognition should work as a separate process and perhaps call a function when a command is heard.
I tried the python multiprocessing module, the thread module and the threading module. But all of these required me to wait until the process or thread is finished. What I want is something that will allow me to run functions in the background. They must call some callback function if a specific event occurs.
I hope I will find an effective way of doing this.
I tried the threading module. The code looked like this (pseudocode):
import threading as t

def detected(text):
    commands = ['a', 'list', 'of', 'commands']
    if text in commands:
        pass  # call some function according to the command

def speech_recognition():
    while True:
        # if speech is detected:
        #     record it
        #     process it and convert it to text
        #     if the text is a specified command:
        #         call detected(text) with the recognized text as argument
        pass

pr = t.Thread(target=speech_recognition)
pr.start()

# from here starts the main program, which does some other things that
# don't need to be mentioned here
But this doesn't work. The speech recognition runs for a few seconds and then just quits. No exceptions raised, no system exits, nothing. It's the same when I try the multiprocessing and thread modules.
I don't know how CPU-intensive speech recognition is, but I am pretty sure the problem you are describing is best solved with maximum decoupling between entities, i.e. separation into processes. Simple scenario: one of your processes runs your "main program", the other process is entirely responsible for speech recognition. You then implement a communication protocol between those processes. The "main program" still needs some kind of event system and asynchronous execution based on threading, because it needs to be able to listen and immediately react to events sent by the speech recognition process. Hence, a working model would contain:
one main process which should not be CPU-bound
one child process, spawned via multiprocessing, handling speech recognition
a communication protocol enabling the transmission of data/events between main and child process
one additional thread in the main process that waits for events sent by the child process and makes sure that the main program reacts correspondingly
Main and child process run concurrently, as scheduled by the operating system. Of course this works best on a system with at least two CPU cores. In the main process, the main thread and the other thread do not really run in parallel -- only one thread can run at a time due to CPython's global interpreter lock (GIL).
The communication between main process and child process can be implemented with basic multiprocessing tools such as Queue or Pipe.
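A minimal sketch of that model with a multiprocessing Queue, assuming the real recognition loop replaces the placeholder:

import threading
from multiprocessing import Process, Queue

def speech_recognition(events):
    # placeholder: push a recognized command instead of doing real recognition
    events.put('stop recording')

def listener(events, on_command):
    while True:
        command = events.get()   # blocks until the child sends something
        if command is None:      # sentinel: we are done
            break
        on_command(command)      # react immediately in the main process

def handle_command(command):
    print('main program reacting to:', command)

if __name__ == '__main__':
    events = Queue()
    child = Process(target=speech_recognition, args=(events,))
    child.start()
    t = threading.Thread(target=listener, args=(events, handle_command))
    t.start()
    # ... the main program does its other work here ...
    child.join()
    events.put(None)             # stop the listener thread
    t.join()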
As you realize, you need to invest some serious thought into a problem like this. Don't try to solve this quick&dirty or just by trial and error. You need to make sure you understand your self-developed architecture.
Just use threading and pass your thread either a function handle to call when it has data ready or the application handle in the case of GUI applications so that the thread can create an event with the data attached.
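A minimal sketch of the function-handle variant (note the callback fires in the worker thread, so a GUI would post an event instead of touching widgets directly):

import threading

def worker(on_ready):
    data = 'recognized text'   # placeholder for the real work
    on_ready(data)             # hand the result to the supplied function handle

def got_data(data):
    # careful: this runs in the worker thread, not the main one
    print('callback received:', data)

threading.Thread(target=worker, args=(got_data,)).start()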
So I have written a small console application based on cmd.Cmd. The application has a command loop triggered by cmd.cmdloop.
On the other hand, my console application uses dbus to launch remote processes. I'm trying to make the launch asynchronous, but I get:
RuntimeError: To make asynchronous calls, receive signals or export objects, D-Bus
connections must be attached to a main loop by passing mainloop=... to the constructor or
calling dbus.set_default_main_loop(...)
So I would like to use gobject.MainLoop() as the main loop.
Is there a way that cmd.cmdloop and gobject.MainLoop can play together?
It looks like cmd.cmdloop isn't a main loop, just a way to get input from a user over-and-over. Your best bet here, if you want to make this as asynchronous as possible, and you're already using dbus may be to have a client process which uses cmd.cmdloop and sends signals to another process that uses the gobject mainloop to actually launch the remote processes. The client process would send signals to the gobject process which contain the command to run, the gobject process would execute them. I'm not sure this will do what you want it to do, but it looks like cmd.cmdloop blocks on user input and as such won't play nicely with a mainloop.
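A rough sketch of the gobject-side process, assuming dbus-python with the legacy gobject bindings; the bus name org.example.Launcher and the Launch method are made up:

import gobject
import dbus
import dbus.service
from dbus.mainloop.glib import DBusGMainLoop

DBusGMainLoop(set_as_default=True)   # attach dbus to the gobject main loop

class Launcher(dbus.service.Object):
    @dbus.service.method('org.example.Launcher', in_signature='s')
    def Launch(self, command):
        print('launching: %s' % command)  # spawn the remote process here

bus = dbus.SessionBus()
name = dbus.service.BusName('org.example.Launcher', bus)
Launcher(name, '/org/example/Launcher')
gobject.MainLoop().run()             # this process never blocks on user input

# the cmd.cmdloop client, in its own process, just sends method calls:
#     proxy = dbus.SessionBus().get_object('org.example.Launcher',
#                                          '/org/example/Launcher')
#     proxy.Launch('some-command', dbus_interface='org.example.Launcher')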
Python has been really bumpy for me, because the last time I created a GUI client, the client seemed to hang when spawning a process, calling a shell script, or calling an outside application.
This has been my major problem with Python since then, and now I'm in a new project. Can someone give me pointers and a word of advice so that my GUI Python application stays interactive when spawning another process?
Simplest (not necessarily "best" in an abstract sense): spawn the subprocess in a separate thread, communicating results back to the main thread via a Queue.Queue instance -- the main thread must periodically check that queue to see if the results have arrived yet, but periodic polling isn't hard to arrange in any event loop.
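A minimal sketch of that arrangement, assuming Python 3 (queue rather than Queue.Queue) and tkinter's after() as the periodic check; the echo command is just a stand-in:

import queue
import subprocess
import threading
import tkinter as tk

results = queue.Queue()

def run_command():
    out = subprocess.run(['echo', 'done'], capture_output=True, text=True)
    results.put(out.stdout)              # hand the result back to the main thread

root = tk.Tk()
label = tk.Label(root, text='working...')
label.pack()

def check_results():
    try:
        label.config(text=results.get_nowait())
    except queue.Empty:
        root.after(200, check_results)   # nothing yet; poll again in 200 ms

threading.Thread(target=run_command, daemon=True).start()
root.after(200, check_results)
root.mainloop()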
Your main GUI thread will freeze if you spawn off a process and wait for it to complete. Often, you can simply use subprocess and poll it now and then for completion rather than waiting for it to finish. This will keep your GUI from freezing.
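A minimal sketch of the polling variant, assuming a tkinter GUI and a POSIX sleep command as the stand-in subprocess:

import subprocess
import tkinter as tk

root = tk.Tk()
proc = subprocess.Popen(['sleep', '2'])   # example long-running command

def check_done():
    if proc.poll() is None:               # still running: try again later
        root.after(200, check_done)
    else:
        print('finished with return code', proc.returncode)

root.after(200, check_done)
root.mainloop()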