Multiprocessing vs Threading Python [duplicate] - python

This question already has answers here:
What are the differences between the threading and multiprocessing modules?
(6 answers)
Closed 3 years ago.
I am trying to understand the advantages of multiprocessing over threading. I know that multiprocessing gets around the Global Interpreter Lock, but what other advantages are there, and can threading not do the same thing?

Here are some pros/cons I came up with.
Multiprocessing
Pros
Separate memory space
Code is usually straightforward
Takes advantage of multiple CPUs & cores
Avoids GIL limitations for CPython
Eliminates most need for synchronization primitives unless you use shared memory (instead, it's more of a communication model for IPC)
Child processes are interruptible/killable
Python multiprocessing module includes useful abstractions with an interface much like threading.Thread (see the sketch after these lists)
A must with CPython for CPU-bound processing
Cons
IPC a little more complicated with more overhead (communication model vs. shared memory/objects)
Larger memory footprint
Threading
Pros
Lightweight - low memory footprint
Shared memory - makes access to state from another context easier
Allows you to easily make responsive UIs
CPython C extension modules that properly release the GIL will run in parallel
Great option for I/O-bound applications
Cons
CPython - subject to the GIL
Not interruptible/killable
If not following a command queue/message pump model (using the queue module), then manual use of synchronization primitives becomes a necessity (decisions are needed about the granularity of locking)
Code is usually harder to understand and to get right - the potential for race conditions increases dramatically
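
Since the multiprocessing API mirrors threading so closely, here is a minimal sketch showing how interchangeable the two interfaces are (the trivial worker is my own, just for illustration):

import multiprocessing
import threading

def worker(name):
    # Trivial stand-in for real work.
    print("hello from", name)

if __name__ == '__main__':
    t = threading.Thread(target=worker, args=("a thread",))
    p = multiprocessing.Process(target=worker, args=("a process",))
    t.start(); p.start()
    t.join(); p.join()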

The threading module uses threads; the multiprocessing module uses processes. The difference is that threads run in the same memory space, while processes have separate memory. This makes it a bit harder to share objects between processes with multiprocessing. Since threads share the same memory, precautions have to be taken, or two threads will write to the same memory at the same time. This is what the global interpreter lock helps with at the interpreter level, though your own data structures still need explicit synchronization.
Spawning processes is a bit slower than spawning threads.

Threading's job is to enable applications to be responsive. Suppose you have a database connection and you need to respond to user input. Without threading, if the database connection is busy the application will not be able to respond to the user. By splitting off the database connection into a separate thread you can make the application more responsive. Also because both threads are in the same process, they can access the same data structures - good performance, plus a flexible software design.
Note that due to the GIL the app isn't actually doing two things at once, but what we've done is put the resource lock on the database into a separate thread so that CPU time can be switched between it and the user interaction. CPU time gets rationed out between the threads.
Multiprocessing is for times when you really do want more than one thing to be done at any given time. Suppose your application needs to connect to 6 databases and perform a complex matrix transformation on each dataset. Putting each job in a separate thread might help a little because when one connection is idle another one could get some CPU time, but the processing would not be done in parallel because the GIL means that you're only ever using the resources of one CPU. By putting each job in a Multiprocessing process, each can run on its own CPU and run at full efficiency.
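
A minimal sketch of that idea with a process pool; the helpers are trivial stand-ins for the database read and the CPU-heavy matrix transformation:

from multiprocessing import Pool

def fetch_dataset(db_name):
    # Stand-in for a real database read.
    return [[1.0, 2.0], [3.0, 4.0]]

def matrix_transform(dataset):
    # Stand-in for the CPU-heavy matrix work.
    return [[x * x for x in row] for row in dataset]

def fetch_and_transform(db_name):
    return matrix_transform(fetch_dataset(db_name))

if __name__ == '__main__':
    databases = ['db1', 'db2', 'db3', 'db4', 'db5', 'db6']
    with Pool(processes=6) as pool:
        # Each job runs in its own process, so each can use its own CPU.
        results = pool.map(fetch_and_transform, databases)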

Python documentation quotes
The canonical version of this answer is now at the duplicate question: What are the differences between the threading and multiprocessing modules?
I've highlighted the key Python documentation quotes about Process vs Threads and the GIL at: What is the global interpreter lock (GIL) in CPython?
Process vs thread experiments
I did a bit of benchmarking in order to show the difference more concretely.
In the benchmark, I timed CPU and IO bound work for various numbers of threads on an 8 hyperthread CPU. The work supplied per thread is always the same, such that more threads means more total work supplied.
The results were:
[Results plot omitted; see the "Plot data" link in the original answer.]
Conclusions:
for CPU-bound work, multiprocessing is always faster, presumably due to the GIL
for IO-bound work, both are exactly the same speed
threads only scale up to about 4x instead of the expected 8x, since I'm on an 8-hyperthread machine.
Contrast that with C POSIX CPU-bound work, which reaches the expected 8x speedup: What do 'real', 'user' and 'sys' mean in the output of time(1)?
TODO: I don't know the reason for this; there must be other Python inefficiencies coming into play.
Test code:
#!/usr/bin/env python3

import multiprocessing
import threading
import time
import sys

def cpu_func(result, niters):
    '''
    A useless CPU bound function.
    '''
    for i in range(niters):
        result = (result * result * i + 2 * result * i * i + 3) % 10000000
    return result

class CpuThread(threading.Thread):
    def __init__(self, niters):
        super().__init__()
        self.niters = niters
        self.result = 1
    def run(self):
        self.result = cpu_func(self.result, self.niters)

class CpuProcess(multiprocessing.Process):
    def __init__(self, niters):
        super().__init__()
        self.niters = niters
        self.result = 1
    def run(self):
        self.result = cpu_func(self.result, self.niters)

class IoThread(threading.Thread):
    def __init__(self, sleep):
        super().__init__()
        self.sleep = sleep
        self.result = self.sleep
    def run(self):
        time.sleep(self.sleep)

class IoProcess(multiprocessing.Process):
    def __init__(self, sleep):
        super().__init__()
        self.sleep = sleep
        self.result = self.sleep
    def run(self):
        time.sleep(self.sleep)

if __name__ == '__main__':
    cpu_n_iters = int(sys.argv[1])
    sleep = 1
    cpu_count = multiprocessing.cpu_count()
    input_params = [
        (CpuThread, cpu_n_iters),
        (CpuProcess, cpu_n_iters),
        (IoThread, sleep),
        (IoProcess, sleep),
    ]
    header = ['nthreads']
    for thread_class, _ in input_params:
        header.append(thread_class.__name__)
    print(' '.join(header))
    for nthreads in range(1, 2 * cpu_count):
        results = [nthreads]
        for thread_class, work_size in input_params:
            start_time = time.time()
            threads = []
            for i in range(nthreads):
                thread = thread_class(work_size)
                threads.append(thread)
                thread.start()
            for i, thread in enumerate(threads):
                thread.join()
            results.append(time.time() - start_time)
        print(' '.join('{:.6e}'.format(result) for result in results))
GitHub upstream + plotting code in the same directory.
Tested on Ubuntu 18.10, Python 3.6.7, on a Lenovo ThinkPad P51 laptop with CPU: Intel Core i7-7820HQ (4 cores / 8 threads), RAM: 2x Samsung M471A2K43BB1-CRC (2x 16GiB), SSD: Samsung MZVLB512HAJQ-000L7 (3,000 MB/s).
Visualize which threads are running at a given time
This post https://rohanvarma.me/GIL/ taught me that you can run a callback whenever a thread is scheduled with the target= argument of threading.Thread and the same for multiprocessing.Process.
This lets us see exactly which thread is running at any given time. If we did this, we would see something like the following (I made this particular graph up):
+--------------------------------------+
+ Active threads / processes +
+-----------+--------------------------------------+
|Thread 1 |******** ************ |
| 2 | ***** *************|
+-----------+--------------------------------------+
|Process 1 |*** ************** ****** **** |
| 2 |** **** ****** ** ********* **********|
+-----------+--------------------------------------+
+ Time --> +
+--------------------------------------+
which would show that:
threads are fully serialized by the GIL
processes can run in parallel

The key advantage is isolation. A crashing process won't bring down other processes, whereas a crashing thread will probably wreak havoc with other threads.

As mentioned in the question, Multiprocessing in Python is the only real way to achieve true parallelism. Multithreading cannot achieve this because the GIL prevents threads from running in parallel.
As a consequence, threading may not always be useful in Python, and in fact may even result in worse performance, depending on what you are trying to achieve. For example, if you are performing a CPU-bound task such as decompressing gzip files or 3D rendering (anything CPU intensive), then threading may actually hinder your performance rather than help. In such a case you would want to use Multiprocessing, as only this method actually runs in parallel and will help distribute the weight of the task at hand (see the sketch after the TL;DR below). There can be some overhead to this, since Multiprocessing involves copying the memory of a script into each subprocess, which may cause issues for larger-sized applications.
However, Multithreading becomes useful when your task is IO-bound. For example, if most of your task involves waiting on API calls, you would use Multithreading, because why not start up another request in another thread while you wait, rather than have your CPU sit idly by?
TL;DR
Multithreading is concurrent and is used for IO-bound tasks
Multiprocessing achieves true parallelism and is used for CPU-bound tasks
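
A minimal sketch of the CPU-bound gzip case from above, using a process pool (the sample files are generated here so the sketch is self-contained):

import concurrent.futures
import gzip

def decompress(path):
    # CPU-bound work: runs in a separate process, outside the parent's GIL.
    with gzip.open(path, 'rb') as f:
        return len(f.read())

if __name__ == '__main__':
    # Create a few sample files so the sketch runs as-is.
    paths = ['a.gz', 'b.gz', 'c.gz']
    for p in paths:
        with gzip.open(p, 'wb') as f:
            f.write(b'x' * 10_000_000)
    with concurrent.futures.ProcessPoolExecutor() as executor:
        print(list(executor.map(decompress, paths)))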

Another thing not mentioned is that, where speed is concerned, it depends on what OS you are using. On Windows, processes are costly to create, so threads tend to be better there; on Unix, processes are much cheaper than their Windows counterparts, so using processes on Unix is safer and quick to spawn.

Other answers have focused more on the multithreading vs. multiprocessing aspect, but in Python the Global Interpreter Lock (GIL) has to be taken into account. When more threads (say k) are created, they generally will not increase performance by k times, as the application will still run much like a single-threaded one. The GIL is a global lock that allows only one thread to execute Python bytecode at a time, effectively utilizing only a single core. Performance does increase in places where C extensions like numpy, or network and file I/O, are being used, because a lot of the work is done in the background and the GIL is released there. Note that Python threads are real operating-system threads; the GIL just prevents more than one of them from interpreting Python at once, with preemption taking place between them. If the CPU runs at maximum capacity, you may want to switch to multiprocessing.
Now, for self-contained units of execution, you can instead opt for a pool. But when the data overlaps and you want the processes to communicate, you should use multiprocessing.Process.
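
A minimal sketch of both options (the trivial workers are my own, just for illustration):

from multiprocessing import Pool, Process, Queue

def square(x):
    # Self-contained work: a pool fits well.
    return x * x

def producer(q):
    # Communicating processes: pass results through a Queue.
    q.put('some shared result')

if __name__ == '__main__':
    with Pool(4) as pool:
        print(pool.map(square, range(8)))

    q = Queue()
    p = Process(target=producer, args=(q,))
    p.start()
    print(q.get())
    p.join()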

MULTIPROCESSING
Multiprocessing adds CPUs to increase computing power.
Multiple processes are executed concurrently.
Creation of a process is time-consuming and resource intensive.
Multiprocessing can be symmetric or asymmetric.
The multiprocessing library in Python uses separate memory spaces and multiple CPU cores, bypasses GIL limitations in CPython, makes child processes killable, and is much easier to use than you might expect.
Some caveats of the module are a larger memory footprint and IPC being a little more complicated, with more overhead.
MULTITHREADING
Multithreading creates multiple threads of a single process to increase computing power.
Multiple threads of a single process are executed concurrently.
Creation of a thread is economical in terms of both time and resources.
The multithreading library is lightweight, shares memory, is well suited to responsive UIs, and works well for I/O-bound applications.
Threads aren't killable and are subject to the GIL.
Multiple threads live in the same process in the same space; each thread does a specific task, has its own code, its own stack memory, and its own instruction pointer, and shares heap memory with the others.
If a thread has a memory leak, it can damage the other threads and the parent process.
Example of Multithreading and Multiprocessing using Python
Python 3 can launch parallel tasks through the concurrent.futures module, which makes this work easier. It provides both thread pooling and process pooling.
The following gives some insight:
ThreadPoolExecutor Example
import concurrent.futures
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

# Retrieve a single page and report the URL and contents
def load_url(url, timeout):
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its URL
    future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))
ProcessPoolExecutor
import concurrent.futures
import math

PRIMES = [
    112272535095293,
    112582705942171,
    112272535095293,
    115280095190773,
    115797848077099,
    1099726899285419]

def is_prime(n):
    # Handle n < 2 and n == 2 so small inputs are classified correctly.
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False
    sqrt_n = int(math.floor(math.sqrt(n)))
    for i in range(3, sqrt_n + 1, 2):
        if n % i == 0:
            return False
    return True

def main():
    with concurrent.futures.ProcessPoolExecutor() as executor:
        for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
            print('%d is prime: %s' % (number, prime))

if __name__ == '__main__':
    main()

Threads share the same memory space. To guarantee that two threads don't write to the same memory location at the same time, special precautions must be taken. The CPython interpreter handles this using a mechanism called the GIL, or the Global Interpreter Lock.
What is the GIL? (Just to clarify, since the GIL is mentioned above.)
In CPython, the global interpreter lock, or GIL, is a mutex that protects access to Python objects, preventing multiple threads from executing Python bytecodes at once. This lock is necessary mainly because CPython's memory management is not thread-safe.
For the main question, we can compare using use cases. How?
1. Use cases for threading: for GUI programs, threading can be used to make the application responsive. For example, in a text-editing program, one thread can take care of recording the user's input, another can be responsible for displaying the text, and a third can do spell-checking, and so on. Here the program has to wait for user interaction, which is the biggest bottleneck. Another use case for threading is programs that are IO-bound or network-bound, such as web scrapers.
2. Use cases for multiprocessing: multiprocessing outshines threading in cases where the program is CPU-intensive and doesn't have to do any IO or user interaction.

A process may have multiple threads. These threads may share memory and are the units of execution within a process.
Processes run on the CPU, and threads reside under each process. Processes are individual entities which run independently. If you want to share data or state between processes, you can use a memory-storage tool such as a cache (Redis, memcached), files, or a database.

As I learned at university, most of the answers above are right. In PRACTICE, on different platforms (always using Python), spawning multiple threads ends up like spawning one process: the multiple cores share the load instead of one core processing everything at 100%. So if you spawn, for example, 10 threads on a 4-core PC, you will end up getting only 25% of the CPU's power. And if you spawn 10 processes, you will end up with the CPU processing at 100% (if you don't have other limitations). I'm not an expert in all the new technologies; I'm answering from my own real-world experience.

Related

Why use multithreading when multiprocessing is available in python? [duplicate]

I found that in Python 3.4 there are a few different libraries for multiprocessing/threading: multiprocessing vs threading vs asyncio.
But I don't know which one to use, or which is the "recommended" one. Do they do the same thing, or are they different? If so, which one is used for what? I want to write a program that uses the multiple cores in my computer. But I don't know which library I should learn.
TL;DR
Making the Right Choice:
We have walked through the most popular forms of concurrency. But the question remains: when should you choose which one? It really depends on the use case. From my experience (and reading), I tend to follow this pseudocode:
if io_bound:
    if io_very_slow:
        print("Use Asyncio")
    else:
        print("Use Threads")
else:
    print("Multi Processing")
CPU Bound => Multi Processing
I/O Bound, Fast I/O, Limited Number of Connections => Multi Threading
I/O Bound, Slow I/O, Many connections => Asyncio
Reference
[NOTE]:
If you have a long call method (e.g. a method containing a sleep time or lazy I/O), the best choice is the asyncio, Twisted, or Tornado approach (coroutine methods), which provides concurrency on a single thread.
asyncio works on Python 3.4 and later.
Tornado and Twisted have been available since Python 2.7.
uvloop is an ultra fast asyncio event loop (uvloop makes asyncio 2-4x faster).
[UPDATE (2019)]:
Japranto (GitHub) is a very fast pipelining HTTP server based on uvloop.
They are intended for (slightly) different purposes and/or requirements. CPython (a typical, mainline Python implementation) still has the global interpreter lock, so a multi-threaded application (a standard way to implement parallel processing nowadays) is suboptimal. That's why multiprocessing may be preferred over threading. But not every problem may be effectively split into [almost independent] pieces, so there may be a need for heavy interprocess communication. That's why multiprocessing may not be preferred over threading in general.
asyncio (this technique is available not only in Python; other languages and/or frameworks also have it, e.g. Boost.ASIO) is a method to effectively handle a lot of I/O operations from many simultaneous sources without the need for parallel code execution. So it's just a solution (a good one indeed!) for a particular task, not for parallel processing in general.
In multiprocessing you leverage multiple CPUs to distribute your calculations. Since each of the CPUs runs in parallel, you're effectively able to run multiple tasks simultaneously. You would want to use multiprocessing for CPU-bound tasks. An example would be trying to calculate a sum of all elements of a huge list. If your machine has 8 cores, you can "cut" the list into 8 smaller lists and calculate the sum of each of those lists separately on separate core and then just add up those numbers. You'll get a ~8x speedup by doing that.
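
A minimal sketch of that chunked-sum idea (assuming 8 workers; the striped slicing is just one way to split the list):

from multiprocessing import Pool

def chunk_sum(chunk):
    return sum(chunk)

if __name__ == '__main__':
    data = list(range(1_000_000))
    n = 8
    chunks = [data[i::n] for i in range(n)]  # 8 roughly equal slices
    with Pool(n) as pool:
        total = sum(pool.map(chunk_sum, chunks))
    print(total == sum(data))  # True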
In (multi)threading you don't need multiple CPUs. Imagine a program that sends lots of HTTP requests to the web. If you used a single-threaded program, it would stop the execution (block) at each request, wait for a response, and then continue once received a response. The problem here is that your CPU isn't really doing work while waiting for some external server to do the job; it could have actually done some useful work in the meantime! The fix is to use threads - you can create many of them, each responsible for requesting some content from the web. The nice thing about threads is that, even if they run on one CPU, the CPU from time to time "freezes" the execution of one thread and jumps to executing the other one (it's called context switching and it happens constantly at non-deterministic intervals). So if your task is I/O bound - use threading.
asyncio is essentially threading where not the CPU but you, as a programmer (or actually your application), decide where and when the context switch happens. In Python you use the await keyword to suspend the execution of your coroutine (defined using the async keyword).
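
A minimal sketch of that cooperative switching (asyncio.sleep stands in for slow I/O):

import asyncio

async def fetch(name, delay):
    await asyncio.sleep(delay)  # the context switch happens at this await
    return name

async def main():
    # Both coroutines wait concurrently, so this takes ~1s, not ~2s.
    results = await asyncio.gather(fetch('a', 1), fetch('b', 1))
    print(results)

asyncio.run(main())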
This is the basic idea:
Is it IO-BOUND ? -----------> USE asyncio
IS IT CPU-HEAVY ? ---------> USE multiprocessing
ELSE ? ----------------------> USE threading
So basically stick to threading unless you have IO/CPU problems.
Many of the answers suggest how to choose only one option, but why not be able to use all three? In this answer I explain how you can use asyncio to manage combining all three forms of concurrency, as well as easily swap between them later if need be.
The short answer
Many developers that are first-timers to concurrency in Python will end up using multiprocessing.Process and threading.Thread. However, these are the low-level APIs which have been merged together by the high-level API provided by the concurrent.futures module. Furthermore, spawning processes and threads has overhead, such as requiring more memory, a problem which plagued one of the examples I show below. To an extent, concurrent.futures manages this for you, so that you cannot as easily do something like spawning a thousand processes and crashing your computer; instead it spawns only a few processes and then re-uses them each time one finishes.
These high-level APIs are provided through concurrent.futures.Executor, which are then implemented by concurrent.futures.ProcessPoolExecutor and concurrent.futures.ThreadPoolExecutor. In most cases, you should use these over the multiprocessing.Process and threading.Thread, because it's easier to change from one to the other in the future when you use concurrent.futures and you don't have to learn the detailed differences of each.
Since these share a unified interface, you'll also find that code using multiprocessing or threading will often use concurrent.futures. asyncio is no exception to this, and provides a way to use it via the following code:
import asyncio
from concurrent.futures import Executor
from functools import partial
from typing import Any, Callable, Optional, TypeVar

T = TypeVar("T")

async def run_in_executor(
    executor: Optional[Executor],
    func: Callable[..., T],
    /,
    *args: Any,
    **kwargs: Any,
) -> T:
    """
    Run `func(*args, **kwargs)` asynchronously, using an executor.

    If the executor is None, use the default ThreadPoolExecutor.
    """
    return await asyncio.get_running_loop().run_in_executor(
        executor,
        partial(func, *args, **kwargs),
    )

# Example usage for running `print` in a thread.
async def main():
    await run_in_executor(None, print, "O" * 100_000)

asyncio.run(main())
In fact it turns out that using threading with asyncio was so common that in Python 3.9 they added asyncio.to_thread(func, *args, **kwargs) to shorten it for the default ThreadPoolExecutor.
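
For example, a minimal sketch of asyncio.to_thread (Python 3.9+; time.sleep stands in for any blocking call):

import asyncio
import time

async def main():
    # The blocking call runs in the default ThreadPoolExecutor,
    # so the event loop stays free to run other tasks meanwhile.
    await asyncio.to_thread(time.sleep, 1)

asyncio.run(main())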
The long answer
Are there any disadvantages to this approach?
Yes. With asyncio, the biggest disadvantage is that asynchronous functions aren't the same as synchronous functions. This can trip up new users of asyncio a lot and cause a lot of rework to be done if you didn't start programming with asyncio in mind from the beginning.
Another disadvantage is that users of your code will also become forced to use asyncio. All of this necessary rework will often leave first-time asyncio users with a really sour taste in their mouth.
Are there any non-performance advantages to this?
Yes. Similar to how using concurrent.futures is advantageous over threading.Thread and multiprocessing.Process for its unified interface, this approach can be considered a further abstraction from an Executor to an asynchronous function. You can start off using asyncio, and if later you find a part of it you need threading or multiprocessing, you can use asyncio.to_thread or run_in_executor. Likewise, you may later discover that an asynchronous version of what you're trying to run with threading already exists, so you can easily step back from using threading and switch to asyncio instead.
Are there any performance advantages to this?
Yes... and no. Ultimately it depends on the task. In some cases, it may not help (though it likely does not hurt), while in other cases it may help a lot. The rest of this answer provides some explanations as to why using asyncio to run an Executor may be advantageous.
- Combining multiple executors and other asynchronous code
asyncio essentially provides significantly more control over concurrency, at the cost of you needing to take more control of the concurrency yourself. If you want to simultaneously run some code using a ThreadPoolExecutor alongside some other code using a ProcessPoolExecutor, it is not so easy to manage this with synchronous code, but it is very easy with asyncio:
import asyncio
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

async def with_processing():
    with ProcessPoolExecutor() as executor:
        tasks = [...]
        for task in asyncio.as_completed(tasks):
            result = await task
            ...

async def with_threading():
    with ThreadPoolExecutor() as executor:
        tasks = [...]
        for task in asyncio.as_completed(tasks):
            result = await task
            ...

async def main():
    await asyncio.gather(with_processing(), with_threading())

asyncio.run(main())
How does this work? Essentially asyncio asks the executors to run their functions. Then, while an executor is running, asyncio will go run other code. For example, the ProcessPoolExecutor starts a bunch of processes, and then while waiting for those processes to finish, the ThreadPoolExecutor starts a bunch of threads. asyncio will then check in on these executors and collect their results when they are done. Furthermore, if you have other code using asyncio, you can run them while waiting for the processes and threads to finish.
- Narrowing in on which sections of code need executors
It is not common to have many executors in your code, but a common problem I have seen when people use threads/processes is that they shove the entirety of their code into a thread/process, expecting it to work. For example, I once saw the following code (approximately):
from concurrent.futures import ThreadPoolExecutor
import requests

def get_data(url):
    return requests.get(url).json()["data"]

urls = [...]
with ThreadPoolExecutor() as executor:
    for data in executor.map(get_data, urls):
        print(data)
The funny thing about this piece of code is that it was slower with concurrency than without. Why? Because the resulting json was large, and having many threads consume a huge amount of memory was disastrous. Luckily the solution was simple:
from concurrent.futures import ThreadPoolExecutor
import requests

urls = [...]
with ThreadPoolExecutor() as executor:
    for response in executor.map(requests.get, urls):
        print(response.json()["data"])
Now only one JSON body is loaded into memory at a time, and everything is fine.
The lesson here?
You shouldn't try to just slap all of your code into threads/processes; you should instead focus on which part of the code actually needs concurrency.
But what if get_data were not a function as simple as this one? What if we had to apply the executor somewhere deep in the middle of the function? This is where asyncio comes in:
import asyncio
import requests

async def get_data(url):
    # A lot of code.
    ...
    # The specific part that needs threading.
    response = await asyncio.to_thread(requests.get, url, some_other_params)
    # A lot of code.
    ...
    return data

urls = [...]

async def main():
    tasks = [get_data(url) for url in urls]
    for task in asyncio.as_completed(tasks):
        data = await task
        print(data)

asyncio.run(main())
Attempting the same with concurrent.futures is by no means pretty. You could use things such as callbacks, queues, etc., but it would be significantly harder to manage than basic asyncio code.
There are already a lot of good answers, so I can't elaborate more on when to use each one. This is more an interesting combination of two of them. Multiprocessing + asyncio: https://pypi.org/project/aiomultiprocess/.
The use case for which it was designed was high I/O, but still utilizing as many of the available cores as possible. Facebook used this library to write some kind of Python-based file server. asyncio allows for IO-bound traffic, while multiprocessing allows multiple event loops and threads on multiple cores.
Example code from the repo:
import asyncio
from aiohttp import request
from aiomultiprocess import Pool

async def get(url):
    async with request("GET", url) as response:
        return await response.text("utf-8")

async def main():
    urls = ["https://jreese.sh", ...]
    async with Pool() as pool:
        async for result in pool.map(get, urls):
            ...  # process result

if __name__ == '__main__':
    # Python 3.7
    asyncio.run(main())

    # Python 3.6
    # loop = asyncio.get_event_loop()
    # loop.run_until_complete(main())
Just an addition here: this would not work well in, say, a Jupyter notebook, as the notebook already has an asyncio loop running. Just a little note so you don't pull your hair out.
I’m not a professional Python user, but as a student of computer architecture I think I can share some of my considerations when choosing between multiprocessing and multithreading. Besides, some of the other answers (even among those with higher votes) misuse technical terminology, so I think it’s also necessary to make some clarifications on those as well, and I’ll do that first.
The fundamental difference between multiprocessing and multithreading is whether they share the same memory space. Threads share access to the same virtual memory space, so it is efficient and easy for threads to exchange their computation results (zero copy, and totally user-space execution).
Processes on the other hand have separate virtual memory spaces. They cannot directly read or write the other process’ memory space, just like a person cannot read or alter the mind of another person without talking to him. (Allowing so would be a violation of memory protection and defeat the purpose of using virtual memory.) To exchange data between processes, they have to rely on the operating system’s facility (e.g. message passing), and for more than one reason this is more costly to do than the “shared memory” scheme used by threads. One reason is that invoking the OS’ message-passing mechanism requires making a system call which will switch the code execution from user mode to kernel mode, which is time consuming; another reason is likely that the OS message-passing scheme will have to copy the data bytes from the sender’s memory space to the receiver’s memory space, so there is a non-zero copy cost.
It is incorrect to say a multithread program can only use one CPU. The reason why many people say so is due to an artifact of the CPython implementation: global interpreter lock (GIL). Because of the GIL, threads in a CPython process are serialized. As a result, it appears that the multithreaded python program only uses one CPU.
But multithreaded computer programs in general are not restricted to one core, and for Python, implementations that do not use the GIL can indeed run many threads in parallel, that is, run on more than one CPU at the same time. (See https://wiki.python.org/moin/GlobalInterpreterLock).
Given that CPython is the predominant implementation of Python, it’s understandable why multithreaded python programs are commonly equated to being bound to a single core.
With a GIL-based Python, the only way to unleash the power of multicores is to use multiprocessing (there are exceptions, as mentioned below). But your problem had better be easily partitionable into parallel sub-problems that have minimal intercommunication; otherwise a lot of inter-process communication will have to take place, and as explained above, the overhead of using the OS’ message-passing mechanism will be costly, sometimes so costly that the benefits of parallel processing are totally offset. If the nature of your problem requires intense communication between concurrent routines, multithreading is the natural way to go. Unfortunately, with CPython, true, effectively parallel multithreading is not possible due to the GIL. In this case you should realize Python is not the optimal tool for your project and consider using another language.
There’s one alternative solution, that is to implement the concurrent processing routines in an external library written in C (or other languages), and import that module to Python. The CPython GIL will not bother to block the threads spawned by that external library.
So, with the burdens of GIL, is multithreading in CPython any good? It still offers benefits though, as other answers have mentioned, if you’re doing IO or network communication. In these cases the relevant computation is not done by your CPU but done by other devices (in the case of IO, the disk controller and DMA (direct memory access) controller will transfer the data with minimal CPU participation; in the case of networking, the NIC (network interface card) and DMA will take care of much of the task without CPU’s participation), so once a thread delegates such task to the NIC or disk controller, the OS can put that thread to a sleeping state and switch to other threads of the same program to do useful work.
In my understanding, the asyncio module is essentially a specific case of multithreading for IO operations.
So:
CPU-intensive programs that can easily be partitioned to run on multiple processes with limited communication: use multithreading if the GIL does not exist (e.g. Jython), or use multiprocessing if the GIL is present (e.g. CPython).
CPU-intensive programs that require intensive communication between concurrent routines: use multithreading if the GIL does not exist, or use another programming language.
Lots of IO: asyncio
Multiprocessing can run in parallel.
Multithreading and asyncio cannot run in parallel.
With an Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz and 32.0 GB RAM, I timed how long it takes to count the prime numbers between 2 and 100000 with 2 processes, 2 threads, and 2 asyncio tasks, as shown below. This is a CPU-bound calculation:
Multiprocessing: 23.87 seconds
Multithreading: 45.24 seconds
asyncio: 44.77 seconds
Because multiprocessing can run in parallel, it is roughly twice as fast as multithreading and asyncio, as shown above.
I used 3 sets of code below:
Multiprocessing:
# "process_test.py"
from multiprocessing import Process
import time
start_time = time.time()
def test():
num = 100000
primes = 0
for i in range(2, num + 1):
for j in range(2, i):
if i % j == 0:
break
else:
primes += 1
print(primes)
if __name__ == "__main__": # This is needed to run processes on Windows
process_list = []
for _ in range(0, 2): # 2 processes
process = Process(target=test)
process_list.append(process)
for process in process_list:
process.start()
for process in process_list:
process.join()
print(round((time.time() - start_time), 2), "seconds") # 23.87 seconds
Result:
...
9592
9592
23.87 seconds
Multithreading:
# "thread_test.py"
from threading import Thread
import time
start_time = time.time()
def test():
num = 100000
primes = 0
for i in range(2, num + 1):
for j in range(2, i):
if i % j == 0:
break
else:
primes += 1
print(primes)
thread_list = []
for _ in range(0, 2): # 2 threads
thread = Thread(target=test)
thread_list.append(thread)
for thread in thread_list:
thread.start()
for thread in thread_list:
thread.join()
print(round((time.time() - start_time), 2), "seconds") # 45.24 seconds
Result:
...
9592
9592
45.24 seconds
Asyncio:
# "asyncio_test.py"
import asyncio
import time
start_time = time.time()
async def test():
num = 100000
primes = 0
for i in range(2, num + 1):
for j in range(2, i):
if i % j == 0:
break
else:
primes += 1
print(primes)
async def call_tests():
tasks = []
for _ in range(0, 2): # 2 asyncio tasks
tasks.append(test())
await asyncio.gather(*tasks)
asyncio.run(call_tests())
print(round((time.time() - start_time), 2), "seconds") # 44.77 seconds
Result:
...
9592
9592
44.77 seconds
Multiprocessing
Each process has its own Python interpreter and can run on a separate core of a processor. Python multiprocessing is a package that supports spawning processes using an API similar to the threading module. The multiprocessing package offers true parallelism, effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads.
Use multiprocessing when you have CPU intensive tasks.
Multithreading
Python multithreading allows you to spawn multiple threads within a process. These threads can share the memory and resources of the process. In CPython, due to the Global Interpreter Lock, at any given time only a single thread can run, hence you cannot utilize multiple cores. Multithreading in Python does not offer true parallelism due to the GIL limitation.
Asyncio
Asyncio works on cooperative multitasking concepts. Asyncio tasks run on the same thread, so there is no parallelism, but it gives control over context switches to the developer rather than the OS, which is the case in multithreading.
There is a nice discussion on this link regarding the advantages of asyncio over threads.
There is a nice blog by Lei Mao on Python concurrency here
Multiprocessing VS Threading VS AsyncIO in Python Summary

Why isn't my threaded python script much faster?

I wanted to speed up a python script I have that iterates over 300 records. So I figured I'd try to use threading. My non-threaded version takes just under 1 minute to execute. My threaded version is only about 1 second better. Here are the pertinent parts of my threaded version of the script:
... other imports ...
import threading
import concurrent.futures

# global vars
threads = []
check_records = []
default_max_problems = 5
problems_found = 0
lock = threading.Lock()

... some functions ...

def check_host(rec):
    with lock:
        global problems_found
        global max_problems
        if problems_found >= max_problems:
            # I'd prefer to stop all threads and stop new ones from starting,
            # but I don't know how to do that.
            return
        ... bunch of function calls that do network stuff ...
        check_records.append(rec)
        if not (reachable and dns_ready):
            problems_found += 1
            logging.debug(f"check_host problems_found is {problems_found}.")

if __name__ == '__main__':
    ... handle command line args ...
    try:
        with concurrent.futures.ThreadPoolExecutor() as executor:
            for ip in get_ips():
                req_rec = find_dns_req_record(ip, dns_record_reqs)
                executor.submit(check_host, req_rec)
Why is the performance of my threaded script almost the same as my non-threaded version?
The kind of work you are performing is important to answering the question. If you are performing many IO-bound tasks (network calls, disk reads, etc.), then using Python's multi-threading should provide a good speed increase, since you can now have multiple threads waiting for multiple IO calls.
However, if you are performing raw computation, then multi-threading won't help you, because of Python's GIL (global interpreter lock), which basically only allows one thread to run at a time. To speed up non-IO-bound computation, you will need to use the multiprocessing module and spin up multiple Python processes. One of the disadvantages of multiple processes vs. multiple threads is that it is harder to share data/memory between processes (because they have separate address spaces) than between threads (threads share memory because they are part of the same process).
Another thing that is important to consider is how you are using locks. If you put too much code under a lock, then threads won't be able to concurrently execute that code. You should try to have the smallest amount of code possible under any given lock, and only in places where shared data is accessed. If your entire thread function body is under a lock, then you eliminate the potential for speed improvement via multi-threading; a sketch of that narrowing follows.
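
Here is a hedged sketch of that narrowing, reusing the names from the question; do_network_checks is a hypothetical helper bundling the network calls so they run without the lock held:

def check_host(rec):
    global problems_found
    with lock:
        # Only the shared-state check happens under the lock.
        if problems_found >= max_problems:
            return
    reachable, dns_ready = do_network_checks(rec)  # hypothetical helper, runs unlocked
    with lock:
        # Re-acquire only to mutate shared state.
        check_records.append(rec)
        if not (reachable and dns_ready):
            problems_found += 1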

How can a python multithreaded program run on different CPU cores simultaneously despite the GIL?

In this video, he shows how multithreading runs on physical (Intel or AMD) processor cores:
https://youtu.be/ecKWiaHCEKs
and
is python capable of running on multiple cores?
All these links basically say:
Python threads cannot take advantage of many physical cores. This is due to an internal implementation detail called the GIL (global interpreter lock); if we want to utilize multiple physical cores of the CPU, we must use the truly parallel multiprocessing module.
But when I ran the code below on my laptop:
import threading
import math

def worker(argument):
    for i in range(200000):
        print(math.sqrt(i))
    return

for i in range(3):
    t = threading.Thread(target=worker, args=[i])
    t.start()
I got this result (a screenshot, omitted here): all of my CPU cores were busy.
Questions:
1. Why did the code run on all of my physical CPU cores, instead of using just one of the four? If threads can do that, what is the point of the multiprocessing module?
2. The second time, I changed the above code to create only one thread, and that also used all 4/4 physical cores. Why is that?
https://docs.python.org/3/library/math.html
The math module consists mostly of thin wrappers around the platform C math library functions.
While python itself can only execute a single instruction at a time, a low level c function that is called by python does not have this limitation.
So it's not python that is using multiple cores but your system's well optimized math library that is wrapped by python's math module.
That basically answers both your questions.
Regarding the usefulness of multiprocessing: it is still useful in those cases where you're trying to parallelize pure Python code, or code that does not call libraries that already use multiple cores.
However, it comes with inter-process communication (IPC) overhead that may or may not be larger than the performance gain you get from using multiple cores. Tuning IPC is therefore often crucial for multiprocessing in Python.
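
One common knob for that tuning is the chunksize argument of Pool.map, which batches work items to reduce IPC round trips. A minimal sketch (the worker is a trivial stand-in):

from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == '__main__':
    with Pool() as pool:
        # A larger chunksize means fewer IPC round trips,
        # at the cost of coarser load balancing.
        results = pool.map(square, range(1_000_000), chunksize=10_000)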
First to answer one misconception / error from your code:
import threading
import math

def worker(argument):
    for i in range(200000):
        print(math.sqrt(i))
    return

for i in range(3):
    t = threading.Thread(target=worker, args=[i])
    t.start()
You are NOT using multiprocessing there, you are using threading. They are similar BUT NOT THE SAME. Threading IS affected by the GIL; multiprocessing IS NOT.
So when you run 4 threads, a CPU-intensive worker/job like sqrt() will NOT benefit from the threads, because it is CPU-bound, and CPU-bound tasks ARE hindered by the GIL (when used in a Thread() vs. a Process()).
To have all four cores work well without the GIL, change your code to:
import multiprocessing
import math

def worker(argument):
    for i in range(200000):
        print(math.sqrt(i))
    return

if __name__ == '__main__':  # guard needed on spawn-based platforms (Windows/macOS)
    for i in range(3):
        t = multiprocessing.Process(target=worker, args=[i])
        t.start()

Python ThreadPool from multiprocessing.pool cannot ultilize all CPUs

I have some string-processing jobs in Python, and I wish to speed them up by using a thread pool. The string-processing jobs have no dependencies on each other. The results will be stored in a mongodb database.
I wrote my code as follows:
import multiprocessing
from multiprocessing.pool import ThreadPool

def _process(s):
    # Do stuff: pure python string manipulation.
    # Save the output to a database (pyMongo).
    ...

thread_pool_size = multiprocessing.cpu_count()
pool = ThreadPool(thread_pool_size)
for single_string in string_list:
    pool.apply_async(_process, [single_string])
pool.close()
pool.join()
I tried to run the code on a Linux machine with 8 CPU cores, and it turns out that the maximum CPU usage is only around 130% (read from top) when I run the job for a few minutes.
Is my approach correct for using a thread pool? Is there any better way to do this?
You might check using multiple processes instead of multiple threads. Here is a good comparison of both options. In one of the comments it is stated that Python is not able to use multiple CPUs while working with multiple threads (due to the Global Interpreter Lock). So instead of using a thread pool, you should use a process pool to take full advantage of your machine.
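
A minimal sketch of that change, with a trivial stand-in for the real string manipulation:

import multiprocessing
from multiprocessing import Pool

def _process(s):
    return s.upper()  # stand-in for the pure-Python string work

if __name__ == '__main__':
    string_list = ['a', 'b', 'c']  # assumed input
    with Pool(multiprocessing.cpu_count()) as pool:
        results = pool.map(_process, string_list)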
Perhaps _process isn't CPU bound; it might be slowed by the file system or network if you're writing to a database. You could see if the CPU usage rises if you make your process truly CPU bound, for example:
def _process(s):
    for i in xrange(100000000):
        j = i * i

What are the advantages of multithreaded programming in Python?

When I hear about multithreaded programming, I think about the opportunity to accelerate my program, but that is not the case, is it?
import eventlet
from eventlet.green import socket
from iptools import IpRangeList

class Scanner(object):
    def __init__(self, ip_range, port_range, workers_num):
        self.workers_num = workers_num or 1000
        self.ip_range = self._get_ip_range(ip_range)
        self.port_range = self._get_port_range(port_range)
        self.scaned_range = self._get_scaned_range()

    def _get_ip_range(self, ip_range):
        return [ip for ip in IpRangeList(ip_range)]

    def _get_port_range(self, port_range):
        return [r for r in range(*port_range)]

    def _get_scaned_range(self):
        for ip in self.ip_range:
            for port in self.port_range:
                yield (ip, port)

    def scan(self, address):
        try:
            return bool(socket.create_connection(address))
        except:
            return False

    def run(self):
        pool = eventlet.GreenPool(self.workers_num)
        for status in pool.imap(self.scan, self.scaned_range):
            if status:
                yield True

    def run_std(self):
        for status in map(self.scan, self.scaned_range):
            if status:
                yield True

if __name__ == '__main__':
    s = Scanner(('127.0.0.1'), (1, 65000), 100000)

    import time
    now = time.time()
    open_ports = [i for i in s.run()]
    print 'Eventlet time: %s (sec) open: %s' % (now - time.time(),
                                                len(open_ports))
    del s

    s = Scanner(('127.0.0.1'), (1, 65000), 100000)
    now = time.time()
    open_ports = [i for i in s.run()]
    print 'CPython time: %s (sec) open: %s' % (now - time.time(),
                                               len(open_ports))
and results:
Eventlet time: -4.40343403816 (sec) open: 2
CPython time: -4.48356699944 (sec) open: 2
And my question is: if I run this code not on my laptop but on a server, and with a higher number of workers, will it run faster than the plain CPython version?
What are the advantages of threads?
ADDED:
And so I rewrote the app to use the original CPython threads:
import socket
from threading import Thread
from Queue import Queue
from iptools import IpRangeList

class Scanner(object):
    def __init__(self, ip_range, port_range, workers_num):
        self.workers_num = workers_num or 1000
        self.ip_range = self._get_ip_range(ip_range)
        self.port_range = self._get_port_range(port_range)
        self.scaned_range = [i for i in self._get_scaned_range()]

    def _get_ip_range(self, ip_range):
        return [ip for ip in IpRangeList(ip_range)]

    def _get_port_range(self, port_range):
        return [r for r in range(*port_range)]

    def _get_scaned_range(self):
        for ip in self.ip_range:
            for port in self.port_range:
                yield (ip, port)

    def scan(self, q):
        while True:
            try:
                r = bool(socket.create_connection(q.get()))
            except Exception:
                r = False
            q.task_done()

    def run(self):
        queue = Queue()
        for address in self.scaned_range:
            queue.put(address)
        for i in range(self.workers_num):
            worker = Thread(target=self.scan, args=(queue,))
            worker.setDaemon(True)
            worker.start()
        queue.join()

if __name__ == '__main__':
    s = Scanner(('127.0.0.1'), (1, 65000), 5)

    import time
    now = time.time()
    s.run()
    print time.time() - now
and the result is:
CPython threads: 1.4 sec
And I think this is a very good result. I take nmap's scanning time as a baseline:
$ nmap 127.0.0.1 -p1-65000
Starting Nmap 5.21 ( http://nmap.org ) at 2012-10-22 18:43 MSK
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00021s latency).
Not shown: 64986 closed ports
PORT STATE SERVICE
53/tcp open domain
80/tcp open http
443/tcp open https
631/tcp open ipp
3306/tcp open mysql
6379/tcp open unknown
8000/tcp open http-alt
8020/tcp open unknown
8888/tcp open sun-answerbook
9980/tcp open unknown
27017/tcp open unknown
27634/tcp open unknown
28017/tcp open unknown
39900/tcp open unknown
Nmap done: 1 IP address (1 host up) scanned in 0.85 seconds
And my question now is: how are threads implemented in Eventlet? As far as I can understand, these are not real threads but something special to Eventlet, so why don't they speed up the task?
Eventlet is used by many major projects, such as OpenStack. But why? Just for doing heavy queries to a DB in an asynchronous manner, or something else?
CPython threads:
Each CPython thread maps to an OS-level thread (a lightweight process/pthread in user space).
If there are many CPython threads executing Python code concurrently: due to the global interpreter lock, only one CPython thread can interpret Python at one time. The remaining threads will be blocked on the GIL when they need to interpret Python instructions. When there are many Python threads, this slows things down a lot.
Now, if your Python code is spending most of its time inside networking operations (send, connect, etc.): in this case there will be fewer threads fighting for the GIL to interpret code, so the effect of the GIL is not so bad.
Eventlet/Green threads:
From the above we know that CPython has a performance limitation with threads. Eventlet tries to solve the problem by using a single thread running on a single core and using non-blocking I/O for everything.
Green threads are not real OS level threads. They are a user space abstraction for concurrency. Most importantly, N green threads will map to 1 OS thread. This avoids the GIL problem.
Green threads cooperatively yield to each other instead of preemptively being scheduled.
For networking operations, the socket libraries are patched in run time (monkey patching) so that all calls are non-blocking.
So even when you create a pool of eventlet green threads, you are actually creating only one OS level thread. This single OS level thread will execute all the eventlets. The idea is that if all the networking calls are non blocking, this should be faster than python threads, in some cases.
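
A minimal sketch of that model in current Python (assumes eventlet is installed; monkey_patch swaps blocking stdlib calls for green, non-blocking versions):

import eventlet
eventlet.monkey_patch()  # patch socket, time, etc. to cooperate with green threads

import socket

def check(address):
    try:
        socket.create_connection(address, timeout=1).close()
        return True
    except OSError:
        return False

pool = eventlet.GreenPool(1000)  # 1000 green threads on one OS thread
addresses = [('127.0.0.1', port) for port in range(1, 1025)]
print(sum(pool.imap(check, addresses)), 'ports open')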
Summary
For your program above, "true" concurrency happens to be faster (the CPython version, 5 threads running on multiple processors) than the eventlet model (a single thread running on 1 processor).
There are some CPython workloads that will perform badly with many threads/cores (e.g. if you have 100 clients connecting to a server, and one thread per client). Eventlet is an elegant programming model for such workloads, so it's used in several places.
The title of your question is "What are the advantages of multithreaded programming in Python?", so I am giving you an example rather than trying to solve your problem. I have a Python program running on a Pentium Core Duo I bought in 2005, running Windows XP, that downloads 500 csv files from finance.yahoo.com, each being about 2K bytes, one for each stock in the S&P 500. It uses urllib2. If I do not use threads it takes over 2 minutes; using standard Python threads (40 threads) it is between 3 and 4 seconds, with an average of around 1/4 second each (this is wall clock time and includes compute and I/O). When I look at the start and stop times of each thread (wall clock), there is tremendous overlap. I have the same thing running as a Java program, and the performance is almost identical between Python and Java. It is also the same as C++ using curllib, though curllib is just a tad slower than Java or Python. I am using standard Python version 2.2.6.
Python has a Global Interpreter Lock (http://en.wikipedia.org/wiki/Global_Interpreter_Lock) which prevents two threads from ever executing Python bytecode at the same time.
If you're using something like cython, the C portions can execute concurrently, which is why you see the speedup.
In pure python programs there's no performance benefit (in terms of amount of computation you can get done), but it's sometimes the easiest way to write code which does a lot of IO (e.g. leave a thread waiting for a socket read to finish while you do something else).
The main advantages of multithreaded programming, regardless of programming language are:
If you have a system with multiple CPUs or cores, then you can have all CPUs executing application code at the same time. So for example, if you have a system with four CPUs, a process could potentially run up to 4 times faster with multithreading (though it is highly unlikely it will be that fast in most cases, since typical applications require threads to synchronize their access to shared resources, creating contention).
If the process needs to block for some reason (disk I/O, user input, network I/O) then while a thread or threads are blocked waiting for I/O completion other thread(s) can be doing other work. Note that for this type of concurrency you do not need multiple CPUs or cores, a process running on a single CPU can also benefit greatly from threading.
Whether these benefits apply to your process or not will largely depend on what your process does. In some cases you will get a considerable performance improvement, in other cases you won't, and the threaded version might even be slower. Note that writing good and efficient multithreaded apps is hard.
Now, since you are asking about Python in particular, let's discuss how these benefits apply to Python.
Due to the Global Interpreter Lock that is present in Python, running code in parallel in multiple CPUs is not possible. The GIL ensures that only one thread is interpreting Python code at a time, so there isn't really a way to take full advantage of multiple CPUs.
If a Python thread performs a blocking operation, another thread will get the CPU and continue to run, while the first thread is blocked waiting. When the blocking event completes, the blocked thread will resume. So this is a good reason to implement multithreading in a Python script (though it isn't the only way to achieve this type of concurrency, non-blocking I/O can achieve similar results).
Here are some examples that benefit from using multiple threads:
a GUI program that is doing a lengthy operation can have a thread that continues to keep the application window refreshed and responsive, maybe even showing a progress report on the long operation and a cancel button.
a process that needs to repeatedly read records from disk, do some processing on them, and finally write them back to disk can benefit from threading, because while one thread is blocked waiting for a record from disk, another thread can be processing a record that was already read, and yet another thread can be writing a processed record back to disk. Without threads, nothing else can happen while the process is reading from or writing to disk. For a language that does not have a GIL (say C++) the benefit is even greater, as you can also have multiple threads, each running on a different core, all processing different records.
I hope this helps!
Using the threading or multiprocessing modules enables you to use the multiple cores that are prevalent in modern CPUs.
This comes with a price: added complexity in your program, needed to regulate access to shared data (especially writes). If one thread were iterating over a list while another thread was updating it, the result would be undetermined. This also applies to the internal data of the Python interpreter.
Therefore standard CPython has an important limitation with regard to using threads: only one thread at a time can execute Python bytecode.
If you want to parallelize a job that doesn't require a lot of communication between instances, multiprocessing (and especially multiprocessing.Pool) is often a better choice than threads, because those jobs run in different processes that do not influence each other.
Adding threads won't necessarily make a process faster, as there is an overhead associated with the management of the threads which may outweigh any performance gain you get from them.
If you are running this on a machine with few CPUs, as opposed to one with many, you may well find that it runs slower as it swaps each thread in and out of execution. There may be other factors at play as well. If the threads need access to some other subsystem or hardware that can't handle concurrent requests (a serial port, for example), then multithreading won't help you improve performance.
