Are Python threads buggy?

A reliable coder friend told me that Python's current multi-threading implementation is seriously buggy - enough to avoid using it altogether. What can be said about this rumor?

Python threads are good for concurrent I/O programming. A thread is swapped off the CPU as soon as it blocks waiting for input from a file, the network, etc., which lets other Python threads use the CPU in the meantime. This allows you to write, for example, a multi-threaded web server or web crawler.
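As a minimal sketch of that I/O-bound use of threads (the URLs are just placeholders): each thread fetches a URL, and while one thread waits on the network the others can run.

    import threading
    import urllib.request

    # Placeholder URLs, purely for illustration
    urls = ["https://example.com/", "https://www.python.org/"]

    def fetch(url):
        # The GIL is released while this thread waits on the network,
        # so the other threads get a chance to run.
        with urllib.request.urlopen(url) as resp:
            print(url, len(resp.read()), "bytes")

    threads = [threading.Thread(target=fetch, args=(u,)) for u in urls]
    for t in threads:
        t.start()
    for t in threads:
        t.join()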
However, Python threads are serialized by the GIL when they enter the interpreter core. This means that if two threads are crunching numbers, only one can run at any given moment. It also means that you can't take advantage of multi-core or multi-processor architectures.
There are solutions, like running multiple Python interpreters concurrently using a C-based threading library. This is not for the faint of heart and the benefits might not be worth the trouble. Let's hope for an all-Python solution in a future release.

The standard implementation of Python (generally known as CPython as it is written in C) uses OS threads, but since there is the Global Interpreter Lock, only one thread at a time is allowed to run Python code. But within those limitations, the threading libraries are robust and widely used.
If you want to be able to use multiple CPU cores, there are a few options. One is to run multiple Python interpreters concurrently, as mentioned by others. Another is to use a different implementation of Python that does not have a GIL; the two main ones are Jython and IronPython.
Jython is written in Java, and is now fairly mature, though some incompatibilities remain. For example, the web framework Django does not run perfectly yet, but is getting closer all the time. Jython is great for thread safety, comes out better in benchmarks and has a cheeky message for those wanting the GIL.
IronPython uses the .NET framework and is written in C#. Compatibility is reaching the stage where Django can run on IronPython (at least as a demo) and there are guides to using threads in IronPython.

The GIL (Global Interpreter Lock) might be a problem, but the API is quite OK. Try out the excellent processing module, which implements the threading API for separate processes. I am using it right now (albeit on OS X; I have yet to do some testing on Windows) and am really impressed. The Queue class is really saving my bacon in terms of managing complexity!
EDIT: it seems the processing module is being included in the standard library as of version 2.6 (import multiprocessing). Joy!
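To give a flavour of that API, here is a minimal sketch (not my production code) of the Process/Queue pattern, using the stdlib multiprocessing naming: a few worker processes pull tasks from one queue and push results onto another.

    from multiprocessing import Process, Queue

    def worker(tasks, results):
        # Each worker is a separate process, so the GIL is not shared
        for item in iter(tasks.get, None):   # None is the shutdown sentinel
            results.put(item * item)

    if __name__ == "__main__":
        tasks, results = Queue(), Queue()
        procs = [Process(target=worker, args=(tasks, results)) for _ in range(4)]
        for p in procs:
            p.start()
        for i in range(20):
            tasks.put(i)
        for _ in procs:
            tasks.put(None)                  # one sentinel per worker
        for p in procs:
            p.join()
        print(sorted(results.get() for _ in range(20)))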

As far as I know there are no real bugs, but the performance of threading in CPython is really bad (compared to most other threading implementations, but usually good enough if the threads spend most of their time blocking) due to the GIL (Global Interpreter Lock), so really it is implementation specific rather than language specific. Jython, for example, does not suffer from this because it uses the Java thread model.
See this post on why it is not really feasible to remove the GIL from the CPython implementation, and this one for some practical elaboration and workarounds.
Do a quick Google search for "Python GIL" for more information.

If you want to code in Python and get great threading support, you might want to check out IronPython or Jython. Since Python code in IronPython and Jython runs on the .NET CLR and the Java VM respectively, it enjoys the great threading support built into those platforms. In addition to that, IronPython doesn't have the GIL, an issue that prevents CPython threads from taking full advantage of multi-core architectures.

I've used threading in several applications and have never had nor heard of it being anything other than 100% reliable, as long as you know its limits. You can't spawn 1000 threads at the same time and expect your program to run properly on Windows; however, you can easily write a worker pool, feed it 1000 operations, and keep everything nice and under control.
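As a rough sketch of that worker-pool idea (do_work here is just a placeholder for the real operation), a fixed number of threads pull the 1000 operations off a queue:

    import queue
    import threading

    NUM_WORKERS = 8            # a small, fixed number of threads
    NUM_TASKS = 1000

    def do_work(item):
        return item * 2        # placeholder for the real operation

    def worker(tasks):
        while True:
            item = tasks.get()
            if item is None:   # sentinel: shut this worker down
                break
            do_work(item)
            tasks.task_done()

    tasks = queue.Queue()
    threads = [threading.Thread(target=worker, args=(tasks,)) for _ in range(NUM_WORKERS)]
    for t in threads:
        t.start()
    for i in range(NUM_TASKS):
        tasks.put(i)
    for _ in threads:
        tasks.put(None)
    for t in threads:
        t.join()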

Related

It is said that python doesn't support multithreading, then why does it have a threading module?

I have been working with the Python programming language. Python is arguably a slow language due to many factors, among them the lack of multithreading features. If it doesn't support multithreading, then why does it have a threading module?
Python's single-threaded nature is due to the GIL (Global Interpreter Lock). When people refer to Python being single-threaded, they mean that within one interpreter process only one thread can execute Python bytecode at a time, even when the threading module is used. You can still create multiple threads, or spin up multiple processes, but for each instance of the interpreter running your code, only one thread will be running Python code at any given moment.
Javascript for example can make use of multiple threads and doesn't require any additional "work" to make this happen.
Check out this video for some more info: https://www.youtube.com/watch?v=m2yeB94CxVQ
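Roughly speaking (the timings below are only illustrative), this is what the threading module still buys you: threads overlap nicely while they wait, but CPU-bound work is serialized by the GIL.

    import threading
    import time

    def run_in_threads(func, n=4):
        threads = [threading.Thread(target=func) for _ in range(n)]
        start = time.perf_counter()
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return time.perf_counter() - start

    def io_bound():
        time.sleep(1)                        # GIL released while waiting

    def cpu_bound():
        sum(i * i for i in range(10**6))     # holds the GIL while computing

    print("I/O-bound, 4 threads:", run_in_threads(io_bound))    # ~1s, not ~4s
    print("CPU-bound, 4 threads:", run_in_threads(cpu_bound))   # roughly serialized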

Python: Using modules which support different Python versions in one project

I have two Python modules: one only supports Python 2.x and the other only 3.x.
Unfortunately I need both for a project.
My workaround for now is to have them run on their own as separate programs and to build up their communication via the socket module.
I will end up with two executables, which is what I would like to avoid.
The "connection" between both modules has to be as fast as possible.
So my question is whether there is a way to somehow combine both into one executable in the end, and whether there is a better solution for fast communication than the client-server construction I have now.
There really is no good way to avoid that workaround.
Conceptually, there's no reason that you couldn't embed two interpreters into the same process. But practically, the CPython interpreter depends on some static/global state. While 3.7 is much better about that than, say, 3.0 or 2.6 was, that state still hasn't nearly been eliminated.[1] And, the way C linkage works, there's no way to get around that without changing the interpreter.
Also, embedding CPython isn't hard, but it's not trivial, in the way that running an interpreter as a subprocess is trivial—and it may be harder than coming up with an efficient way to pass or share state between subprocesses.
Of course there are other interpreters besides CPython. But the other major implementation with both 2.7 and 3.x versions isn't easily embeddable (PyPy), and the two that are easily embeddable don't have 3.x versions, and also can only be embedded in another VM, and can't run C extension modules (Jython and IronPython). It is possible to do something like using JEP to embed CPython 3.7 via JNI in a JVM while also using Jython 2.7 natively in that same JVM, but I doubt that approach will work for you.
Meanwhile, I mentioned that passing or sharing data between processes generally isn't that hard.
If you don't have that much data, you can usually just pass it pickled over a pipe.
If you do have a ton of data, it usually is, or could be, stored in memory in some structured form—numpy arrays, big hunks of ASCII or UTF-8 text, arrays of ctypes structs, etc.—that you can overlay on an mmap or shared memory segment.
Or, of course, you can come up with your own protocol and communicate with it over a (UNIX or IP) socket. But you don't necessarily have to jump right to that option.
Notice that multiprocessing supports both of the first two—although to take advantage of it with independent interpreters, you have to dig into its source and pull out the bits you need. And there are also third-party libraries that can help. (For example, if you need to pickle things that don't pickle natively, the answer is often as simple as "replace pickle with dill".)
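For the "pickle it over a pipe" option, a rough sketch might look like the following, where worker.py is a hypothetical script you would write for the other interpreter (it would unpickle a request from stdin and pickle its result to stdout); protocol 2 is used so a Python 2 process can read it.

    import pickle
    import subprocess

    # Launch the other interpreter as a subprocess ("python2" and worker.py
    # are assumptions for this sketch, not part of the stdlib).
    proc = subprocess.Popen(
        ["python2", "worker.py"],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )

    request = {"numbers": list(range(100))}
    # Protocol 2 is the newest pickle protocol Python 2 understands.
    out, _ = proc.communicate(pickle.dumps(request, protocol=2))
    result = pickle.loads(out)
    print(result)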
[1] Running multiple subinterpreters in various restricted ways does sort of work with things like mod_wsgi, and PEP 554 aims to get things to the state where you can easily and cleanly run multiple 3.7 subinterpreters in the same process, but still nothing like completely independent embeddings of CPython—the subinterpreters share a GIL, a cycle collector, an atexit handler, etc.

What parts of the standard library can run multi-core?

I am learning multi-threaded Python (CPython). I'm aware of the GIL and how it limits threading to a single core (in most circumstances).
I know that I/O functionality can run on multiple cores; however, I have been unable to find a list of which parts of the standard library can run across multiple cores. I believe that urllib can run multi-core, allowing downloading in a thread on a separate core (but I have been unable to find confirmation of this in the docs).
What I am trying to find out is which parts of the standard library will run multi-core, as this doesn't seem to be specified in the documentation.
Taken from the docs:
However, some extension modules, either standard or third-party, are designed so as to release the GIL when doing computationally-intensive tasks such as compression or hashing. Also, the GIL is always released when doing I/O.
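hashlib is one concrete example: for large enough inputs it releases the GIL around the C hashing code, so a sketch like this can keep more than one core busy from ordinary threads.

    import hashlib
    import threading

    data = b"x" * (50 * 1024 * 1024)     # a large buffer to hash

    def digest():
        # hashlib releases the GIL while hashing large buffers,
        # so these threads can actually run on separate cores
        hashlib.sha256(data).hexdigest()

    threads = [threading.Thread(target=digest) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()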
With the multiprocessing package you can write truly parallel programs where separate processes run on different cores. There is no limitation to which libraries (standard or not) each sub-process can use.
The tricky part about multi-process programming is when the processes need to exchange information (e.g., pass each other values, wait for each other to finish with a certain task). The multiprocessing package contains several tools for that.
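A minimal sketch of that, using a Pool to spread a placeholder cpu_heavy function over the available cores and collect the return values:

    from multiprocessing import Pool

    def cpu_heavy(n):
        # stand-in for real number crunching
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        with Pool() as pool:                            # one worker per core by default
            results = pool.map(cpu_heavy, [10**6] * 8)  # blocks until every worker is done
        print(results)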

Python and Threads with PyPy?

I have a kivy application in python which uses some threads.
As Python is not able to run these threads on different cores due to the Global Interpreter Lock, I would like to try PyPy and see if I can make the threads run faster on different cores, since PyPy is different and offers stackless (whatever that is? :).
Does somebody have some information to share on how to make a simple Python program, which launches some threads via the threading module, run with the PyPy interpreter such that it uses this stackless feature?
PyPy won't solve the problem of only one thread running at a time, since it also makes use of a GIL - http://doc.pypy.org/en/latest/faq.html#does-pypy-have-a-gil-why
Besides that, Kivy is a complex project embedding Python itself - although I don't know it very well, I doubt it is possible to switch the Python it uses to PyPy.
Depending on what you are doing, you may want to use the multiprocessing module instead of threading - it is a drop-in replacement that makes transparent inter-process calls to Python functions, and can therefore take advantage of multiple cores.
https://docs.python.org/3/library/multiprocessing.html
This is standard in CPython and can likely be used from within Kivy, if (and only if) all the code in the subprocesses just takes care of number-crunching and the like, and all user interaction and display updates are made in the main process.
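To sketch that "drop-in" aspect under those assumptions (crunch is just a placeholder for your number-crunching; in a Kivy app the main process would keep handling the UI), the worker code stays the same and only the import changes:

    # from threading import Thread                  # GIL-bound version
    from multiprocessing import Process as Thread   # near drop-in swap to real processes

    def crunch(n):
        # placeholder for CPU-heavy work; each call runs in its own process,
        # so several of them can use different cores
        print(sum(i * i for i in range(n)))

    if __name__ == "__main__":
        workers = [Thread(target=crunch, args=(10**6,)) for _ in range(4)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()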

What is LLVM and how is replacing the Python VM with LLVM increasing speeds 5x?

Google is sponsoring an Open Source project to increase the speed of Python by 5x.
Unladen-Swallow seems to have a good project plan
Why is concurrency such a hard problem?
Is LLVM going to solve the concurrency problem?
Are there solutions other than Multi-core for Hardware advancement?
LLVM is several things together - kind of a virtual machine/optimizing compiler, combined with different frontends that take the input in a particular language and output the result in an intermediate language. This intermediate output can be run with the virtual machine, or can be used to generate a standalone executable.
The problem with concurrency is that, although it has been used for a long time in scientific computing, it has only recently become common in consumer apps. So while it's widely known how to write a scientific calculation program that achieves great performance, it is a completely different thing to write a mail user agent/word processor that is good at concurrency. Also, most current OSs were designed with a single processor in mind, and they may not be fully prepared for multicore processors.
The benefit of LLVM with respect to concurrency is that you have an intermediate output, and if in the future there are advances in concurrency, then by updating your interpreter you instantly gain those benefits in all LLVM-compiled programs. This is not so easy if you have compiled to a standalone executable. So LLVM doesn't solve the concurrency problem per se, but it leaves an open door for future enhancements.
Sure, there are other possible hardware advances, like quantum computers, genetic computers, etc. But we have to wait for them to become a reality.
The switch to LLVM itself isn't solving the concurrency problem. That's being solved separately, by getting rid of the Global Interpreter Lock.
I'm not sure how I feel about that; I use threads mainly to deal with blocking I/O, not to take advantage of multicore processors (for that, I would use the multiprocessing module to spawn separate processes).
So I kind of like the GIL; it makes my life a lot easier not having to think about tricky synchronization issues.
LLVM takes care of the nitty-gritty of code generation, so it lets them rewrite Psyco in a way that's more general, portable, and maintainable. That in turn allows them to rewrite the CPython core, which lets them experiment with alternate GCs and other things needed to improve Python's support for concurrency.
In other words, LLVM doesn't solve the concurrency problem, it just frees up your hands so YOU can solve it.
