Unlike subprocess.Popen, multiprocessing.Process doesn't have a send_signal method. Why? Is there a recommended way to send signals like SIGINT to multiprocessing.Process? Should I use os.kill() for that purpose? Thanks in advance.
Your first question makes total sense.
I think that's because the multiprocessing and subprocess libraries have different design goals (as explained by this answer): the former is for making multiple Python processes cooperate across CPUs to achieve a common task, while the latter is for integrating external programs into your Python program. IPC (inter-process communication) is far easier between cooperating Python processes (there are queues and pipes, you can pass Python objects as arguments, ...) than with an external program, which we can only assume adheres to the OS interface (textual stdin/stdout, correct handling of signals, ...).
The default way to communicate with another process under multiprocessing is therefore not an OS signal, which is presumably why signal support was not considered worth integrating.
Also remember that (C)Python is open source, so you could contribute this integration yourself.
As for your second question, there is already an answer (cf. How can I send a signal from a python program?): yes,
use os.kill()
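For instance, a minimal sketch of that approach (POSIX, with a placeholder worker function; SIGINT shows up in the child as KeyboardInterrupt):

```
import multiprocessing
import os
import signal
import time

def worker():
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:          # SIGINT is delivered as KeyboardInterrupt
        print("worker: got SIGINT, shutting down")

if __name__ == "__main__":
    p = multiprocessing.Process(target=worker)
    p.start()
    time.sleep(1)
    os.kill(p.pid, signal.SIGINT)      # Process exposes the child's pid
    p.join()
```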
I am developing a data monitor using PyQt5. I need to read from multiple sensors through serial ports. One or two of the sensors require commands to be sent before their data can be read back, while the rest send data at a fixed rate.
How can I monitor multiple ports without interrupting the UI? I don't know if I should use QThread, threading, multiprocessing, subprocesses, or any other technique. I'm not trying to ask an opinionated question, and if these all "work," then what are the relevant pros and cons of each technique?
I am really struggling to find information on what I am doing. It is very frustrating, as my problem seems so simple, but I can't find any relevant projects, examples, or tutorials. A point in the right direction would be great.
Just as a real rough description:
threading maintains the same memory space as your main thread. This means that you can share and reference the same variables between threads.
QThread is similar to normal threading, but it also includes the ability to restart the thread and can use Qt's signals/slots. If you're using PyQt5, I would use this over the standard threading library where possible.
multiprocessing does not use the same memory space, so it has to create a copy of any variables the new process needs, and the process is then completely independent. You can use a multiprocessing queue to pass information between processes, though (see the sketch after this list).
subprocess lets you control other programs. It's used to integrate external programs into your project.
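A small sketch of the multiprocessing queue pattern mentioned above: a worker process pushes readings onto a Queue and the parent drains them. The values are placeholders for real sensor data.

```
from multiprocessing import Process, Queue

def producer(q):
    for reading in (1.0, 2.5, 3.7):   # stand-in for real sensor values
        q.put(reading)
    q.put(None)                       # sentinel: no more data

if __name__ == "__main__":
    q = Queue()
    p = Process(target=producer, args=(q,))
    p.start()
    while True:
        item = q.get()
        if item is None:
            break
        print("got", item)
    p.join()
```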
Here's a link to another similar question to yours: deciding among subprocess, multiprocessing, and thread in Python?
If you want someone to just tell you which one to use, I would start with QThread so you can still reference that data easily, and it plays well with the PyQt5 architecture in general. You can also check whether QSerialPort itself can handle the ports without disrupting the UI, as suggested by musicamante.
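To make the QThread suggestion concrete, here is a rough sketch (it assumes pyserial is installed; the port name, baud rate and parsing are placeholders): a worker thread blocks on the serial port and reports each line to the GUI through a signal, so the UI thread never blocks on I/O.

```
from PyQt5.QtCore import QThread, pyqtSignal
import serial                      # pyserial, assumed to be installed

class SensorWorker(QThread):
    data_ready = pyqtSignal(str)   # emitted once per decoded line

    def __init__(self, port_name, parent=None):
        super().__init__(parent)
        self.port_name = port_name
        self._running = True

    def run(self):
        with serial.Serial(self.port_name, 9600, timeout=1) as port:
            while self._running:
                line = port.readline()          # blocks for at most 1 s
                if line:
                    self.data_ready.emit(line.decode(errors="replace"))

    def stop(self):
        self._running = False
        self.wait()

# In the main window, something like:
#   self.worker = SensorWorker("/dev/ttyUSB0")
#   self.worker.data_ready.connect(self.update_display)   # queued across threads
#   self.worker.start()
```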
I have implemented a server program using Twisted. I am using twisted.protocols.basic.LineReceiver along with twisted.internet.protocol.ServerFactory.
I would like to have each client that connects to the server run a set of functions in parallel (I'm thinking of multi-threading for this).
I have some confusion with using twisted.internet.threads.deferToThread for this problem.
Should I call deferToThread in the ServerFactory for this purpose?
Are Twisted threads thread-safe with respect to race conditions?
Previously, I tried using multiprocessing in my server program, but it seemed not to work in combination with the Twisted reactor, while deferToThread did the job.
I'm wondering how Twisted threads are implemented. Don't they utilize multiprocessing?
Previously, I tried using multiprocessing in my server program, but it seemed not to work in combination with the Twisted reactor, while deferToThread did the job. I'm wondering how Twisted threads are implemented. Don't they utilize multiprocessing?
You didn't say whether you used the multi-threaded version of multiprocessing or the multi-process version of multiprocessing.
You can read about mixing Twisted and multiprocessing on Stack Overflow, though:
Mix Python Twisted with multiprocessing?
Twisted network client with multiprocessing workers?
is twisted incompatible with multiprocessing events and queues?
(And more)
To answer the shorter part of this question - no, Twisted does not use the stdlib multiprocessing package to implement its threading APIs. It uses the stdlib threading module.
Are Twisted threads thread-safe with respect to race conditions?
The answer to this is implied by the above answer: no. "Twisted threads" aren't really a thing. Twisted's threading APIs are just a layer on top of the stdlib threading module (which is itself really just a Python API for POSIX threads, or something similar but different on Windows). Twisted's threading APIs don't magically eliminate the possibility of race conditions. (If there is any magic in Twisted, it is the ability to do certain things concurrently without using threads at all, which helps reduce the number of race conditions in your program, though it doesn't entirely eliminate the possibility of creating them.)
Should I call deferToThread in the ServerFactory for this purpose?
I'm not quite sure what the point of this question is. Are you wondering if a method on your ServerFactory subclass is the best place to put your calls to deferToThread? That probably depends on the details of your implementation approach. It probably doesn't make a huge difference overall, though. If you like the pattern of having the factory provide services to protocol instances - go for it.
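For what it's worth, here is a rough sketch of the deferToThread pattern in a LineReceiver (the blocking_work function and port number are placeholders): the blocking call runs in Twisted's thread pool, and the callback fires back in the reactor thread, where it is safe to write to the transport.

```
from twisted.internet import reactor, threads
from twisted.internet.protocol import ServerFactory
from twisted.protocols.basic import LineReceiver

def blocking_work(line):
    # stand-in for whatever blocking work each client needs
    return line.upper()

class Worker(LineReceiver):
    def lineReceived(self, line):
        d = threads.deferToThread(blocking_work, line)
        d.addCallback(self.sendLine)   # callback runs in the reactor thread again

factory = ServerFactory()
factory.protocol = Worker

reactor.listenTCP(8123, factory)
reactor.run()
```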
I've written a script that uses two thread pools of ten threads each to pull in data from an API. The thread pool is based on this code on ActiveState. Each thread pool monitors a Redis database via PubSub for new entries. When a new entry is published, Python passes the data to a function that uses subprocess.Popen to execute a PHP shell to do the actual work of calling the API.
This system of launching PHP shells is necessary for functionality with my PHP web app, so launching PHP shells with Python can't be avoided.
This script will only be running on Linux servers.
How do I control the niceness (scheduling priority) of the application's threads?
Edit:
It seems controlling scheduling priority for individual threads in Python isn't possible. Is there a Python solution, or at the very least a UNIX command I can run along with my script, to control the priority?
Edit 2:
Well, I didn't end up finding a Python way to handle it. I'm just running my script with nice now, like this:
nice -n 19 python MyScript.py
I believe that thread priority is not controllable in Python because of how threads are implemented around the global interpreter lock (GIL). Having said that, even if you could give one thread more CPU priority, the CPython machinery that hands the GIL around between threads would not take that priority into account. If you were able to increase the niceness of a single thread in your pool (say it is doing a more important job), you would need your own lock implementation to give the higher-priority thread access to the GIL more often.
A Google search returns this article, which I believe is similar to what you are asking.
Explains why it doesn't work:
http://www.velocityreviews.com/forums/t329441-threading-priority.html
Explains the workaround I was suggesting:
http://bytes.com/topic/python/answers/645966-setting-thread-priorities
The Python threading docs explicitly mention that there is no support for setting thread priorities:
The design of this module is loosely based on Java’s threading model. However, where Java makes locks and condition variables basic behavior of every object, they are separate objects in Python. Python’s Thread class supports a subset of the behavior of Java’s Thread class; currently, there are no priorities, no thread groups, and threads cannot be destroyed, stopped, suspended, resumed, or interrupted. The static methods of Java’s Thread class, when implemented, are mapped to module-level functions.
It doesn't work, but I tried:
getting the parent pid and priority
launching threads using concurrent.futures.ThreadPoolExecutor
using ctypes to get the (Linux) thread id from within the thread (works)
using the tid with os.setpriority(os.PRIO_PROCESS, tid, parent_priority + 1)
calling pool.shutdown() from the parent.
Even with liberal sprinkling of os.sched_yield(), the child threads never actually run past the setpriority().
Reading man pages, it seems threads don't have the capability to change (even their own) scheduling priority; you have to do something with "capabilities" to give the thread the CAP_SYS_NICE capability. Running the process with root permissions didn't help either; the child threads still don't run.
I know, a lot of time has passed, but I recently came across this question, and I thought it would be useful to add another option.
Have a look at threading2, which is a drop-in replacement and extension for the default threading module, with support – sort of – for priority and affinity.
I was wondering if this answer at another related question might be useful in this scenario? (link)
As you are already using subprocess.Popen to launch your PHP script, it strikes me that you can use preexec_fn with either a predefined function or a lambda (as demonstrated in the answer linked above) to set the nice level of each launched PHP process.
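Something along these lines, assuming a Linux host (the PHP command line is a placeholder); preexec_fn runs in the child just before exec, so os.nice() only affects the launched PHP process:

```
import os
import subprocess

proc = subprocess.Popen(
    ["php", "worker.php"],                # placeholder for the real PHP call
    preexec_fn=lambda: os.nice(10),       # raise niceness of the child only
)
proc.wait()
```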
This is for a moderation bot for C&C Renegade, in case anyone wants some background.
I have a class which will act as a parent to a load of subclasses that provide IRC connections, connections to the gamelog (UDP socket), etc., and I want to know if it is possible to split some of these subclasses (notably the two socket connections [IRC, gamelog]) into their own threads using the threading module.
If anyone has any suggestions, even if it's just saying it can't be done, I'd appreciate the input.
Tom
Edit: I have experience with working with threaded applications, so I'm not a complete noob, honest.
It is feasible; take a look at:
multiprocessing
Besides simple process forking, it also provides memory sharing, which is likely to be needed.
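As a rough illustration of the shared-memory side of multiprocessing (the listener function here is only a placeholder for a real UDP read loop): a child process updates a shared counter that the parent can read.

```
from multiprocessing import Process, Value
import time

def gamelog_listener(packet_count):
    for _ in range(5):                    # pretend we received 5 packets
        time.sleep(0.1)
        with packet_count.get_lock():
            packet_count.value += 1

if __name__ == "__main__":
    packet_count = Value("i", 0)          # shared integer
    p = Process(target=gamelog_listener, args=(packet_count,))
    p.start()
    p.join()
    print("packets seen:", packet_count.value)
```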
The best option would be to run your app with gevent coroutines. Those are much more lightweight than threads and processes. The library is built on green threads as its execution units. Here you can find a good comparison and benchmark of the execution models of Eventlet (a Python library that provides a synchronous interface to asynchronous I/O operations, using green threads to achieve cooperative sockets) and node.js.
I'm writing a GUI application that regularly retrieves data through a web connection. Since this retrieval takes a while, this causes the UI to be unresponsive during the retrieval process (it cannot be split into smaller parts). This is why I'd like to outsource the web connection to a separate worker thread.
[Yes, I know, now I have two problems.]
Anyway, the application uses PyQt4, so I'd like to know what the better choice is: Use Qt's threads or use the Python threading module? What are advantages / disadvantages of each? Or do you have a totally different suggestion?
Edit (re bounty): While the solution in my particular case will probably be using a non-blocking network request like Jeff Ober and Lukáš Lalinský suggested (so basically leaving the concurrency problems to the networking implementation), I'd still like a more in-depth answer to the general question:
What are advantages and disadvantages of using PyQt4's (i.e. Qt's) threads over native Python threads (from the threading module)?
Edit 2: Thanks all for your answers. Although there's no 100% agreement, there seems to be widespread consensus that the answer is "use Qt", since the advantage of that is integration with the rest of the library, while causing no real disadvantages.
For anyone looking to choose between the two threading implementations, I highly recommend they read all the answers provided here, including the PyQt mailing list thread that abbot links to.
There were several answers I considered for the bounty; in the end I chose abbot's for the very relevant external reference; it was, however, a close call.
Thanks again.
This was discussed not too long ago in PyQt mailing list. Quoting Giovanni Bajo's comments on the subject:
It's mostly the same. The main difference is that QThreads are better integrated with Qt (asynchronous signals/slots, event loop, etc.). Also, you can't use Qt from a Python thread (you can't, for instance, post an event to the main thread through QApplication.postEvent): you need a QThread for that to work.
A general rule of thumb might be to use QThreads if you're going to interact somehow with Qt, and use Python threads otherwise.
And an earlier comment on the subject from PyQt's author: "they are both wrappers around the same native thread implementations". And both implementations use the GIL in the same way.
Python's threads will be simpler and safer, and since this is an I/O-based application, they can get around the GIL, which is released during blocking I/O. That said, have you considered non-blocking I/O using Twisted or non-blocking sockets/select?
EDIT: more on threads
Python threads
Python's threads are system threads. However, Python uses a global interpreter lock (GIL) to ensure that only one thread executes byte-code at a time; the interpreter switches between threads every so often (historically after a fixed number of byte-code instructions). Luckily, Python releases the GIL during input/output operations, making threads useful for simulating non-blocking I/O.
Important caveat: This can be misleading, since the number of byte-code instructions does not correspond to the number of lines in a program. Even a single assignment may not be atomic in Python, so a mutex lock is necessary for any block of code that must be executed atomically, even with the GIL.
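A small illustration of that caveat (how visible the race is varies by interpreter version, but the lock makes the result correct by construction):

```
import threading

count = 0
lock = threading.Lock()

def bump(n):
    global count
    for _ in range(n):
        with lock:          # drop this lock and updates may be lost
            count += 1      # read-modify-write: not atomic on its own

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)                # reliably 400000 with the lock held
```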
Qt threads
When Python hands off control to a third-party compiled module, that module can release the GIL. It then becomes the module's responsibility to ensure atomicity where required. When control is passed back, the Python code reacquires the GIL. This can make using third-party libraries in conjunction with threads confusing. It is even more difficult to use an external threading library, because it adds uncertainty as to where and when control is in the hands of the module versus the interpreter.
Qt threads operate with the GIL released. Qt threads are able to execute Qt library code (and other compiled module code that does not acquire the GIL) concurrently. However, Python code executed within the context of a Qt thread still acquires the GIL, and now you have to manage two sets of locking logic in your code.
In the end, both Qt threads and Python threads are wrappers around system threads. Python threads are marginally safer to use, since the parts that are not written in Python (implicitly using the GIL) use the GIL in any case (although the caveat above still applies).
Non-blocking I/O
Threads add extraordinary complexity to your application, especially when dealing with the already complex interaction between the Python interpreter and compiled module code. While many find event-based programming difficult to follow, event-based, non-blocking I/O is often much easier to reason about than threads.
With asynchronous I/O, you can always be sure that, for each open descriptor, the path of execution is consistent and orderly. There are, obviously, issues that must be addressed, such as what to do when code depending on one open channel further depends on the results of code to be called when another open channel returns data.
One nice solution for event-based, non-blocking I/O is the new Diesel library. It is restricted to Linux at the moment, but it is extraordinarily fast and quite elegant.
It is also worth your time to learn pyevent, a wrapper around the wonderful libevent library, which provides a basic framework for event-based programming using the fastest available method for your system (determined at compile time).
The advantage of QThread is that it's integrated with the rest of the Qt library. That is, thread-aware methods in Qt will need to know in which thread they run, and to move objects between threads, you will need to use QThread. Another useful feature is running your own event loop in a thread.
If you are accessing an HTTP server, you should consider QNetworkAccessManager.
I asked myself the same question when I was working on PyTalk.
If you are using Qt, you need to use QThread to be able to use the Qt framework, especially the signal/slot system.
With the signal/slot engine, you will be able to talk from one thread to another and to every part of your project.
Moreover, there is no real performance concern with this choice, since both are C++ bindings.
Here is my experience with PyQt and threads:
I encourage you to use QThread.
Jeff has some good points. Only the main thread can do GUI updates. If you do need to update the GUI from within another thread, Qt 4's queued connection signals make it easy to send data across threads and will automatically be used if you're using QThread; I'm not sure if they will be if you're using Python threads, although it's easy to add a parameter to connect().
I can't really recommend either, but I can try describing differences between CPython and Qt threads.
First of all, CPython threads do not run concurrently, at least not Python code. Yes, a system thread is created for each Python thread; however, only the thread currently holding the Global Interpreter Lock is allowed to run (C extensions and FFI code might bypass it, but Python bytecode is not executed while a thread doesn't hold the GIL).
On the other hand, we have Qt threads, which are basically a common layer over system threads, have no Global Interpreter Lock, and are thus capable of running concurrently. I'm not sure how PyQt deals with it; however, unless your Qt threads call Python code, they should be able to run concurrently (bar whatever extra locks might be implemented in various structures).
For extra fine-tuning, you can modify how much bytecode the interpreter executes before switching ownership of the GIL: lower values mean more context switching (and possibly higher responsiveness) but lower performance per individual thread (context switches have their cost; if you try switching every few instructions, it doesn't help speed).
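In older CPython versions that knob was sys.setcheckinterval() (a bytecode count); in Python 3.2 and later the switch is time-based and tuned with sys.setswitchinterval():

```
import sys

print(sys.getswitchinterval())   # default is 0.005 seconds
sys.setswitchinterval(0.001)     # switch threads more often: possibly more
                                 # responsive, but more switching overhead
```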
Hope it helps with your problems :)
I can't comment on the exact differences between Python and PyQt threads, but I've been doing what you're attempting to do using QThread, QNetworkAccessManager and making sure to call QApplication.processEvents() while the thread is alive. If GUI responsiveness is really the issue you're trying to solve, the latter will help.