Get the number of physical cores with Python [duplicate]

I have a function foo which consumes a lot of memory and which I would like to run several instances of in parallel.
Suppose I have a CPU with 4 physical cores, each with two logical cores.
My system has enough memory to accommodate 4 instances of foo in parallel but not 8. Moreover, since 4 of these 8 cores are logical ones anyway, I also do not expect using all 8 cores will provide much gains above and beyond using the 4 physical ones only.
So I want to run foo on the 4 physical cores only. In other words, I would like to ensure that doing multiprocessing.Pool(4) (4 being the maximum number of concurrent run of the function I can accommodate on this machine due to memory limitations) dispatches the job to the four physical cores (and not, for example, to a combo of two physical cores and their two logical offsprings).
How to do that in python?
Edit:
I earlier used a code example from multiprocessing, but I am library agnostic, so to avoid confusion I removed it.

I know this topic is quite old now, but as it still appears as the first result when typing 'multiprocessing logical core' into Google, I feel I have to give an additional answer: I can see how people in 2018 (or even later) could easily get confused here, as some of the answers are indeed a little bit confusing.
I can see no better place than here to warn readers about some of the answers above, so sorry for bringing the topic back to life.
--> TO COUNT THE CPUs (LOGICAL/PHYSICAL) USE THE PSUTIL MODULE
For a 4-physical-core / 8-thread i7, for example, it will return:
import psutil

psutil.cpu_count(logical=False)  # -> 4 physical cores
psutil.cpu_count(logical=True)   # -> 8 logical cores
As simple as that.
There you won't have to worry about the OS, the platform, or the hardware itself. I am convinced it is much better than multiprocessing.cpu_count(), which can sometimes give weird results, at least from my own experience.
--> TO USE N PHYSICAL CORES (up to your choice) USE THE MULTIPROCESSING MODULE DESCRIBED BY YUGI
Just count how many physical cores you have with psutil, then launch a multiprocessing.Pool with that many workers.
Or you can also try the joblib.Parallel() function.
joblib in 2018 is not part of Python's standard distribution; it is essentially a wrapper around the multiprocessing module described by Yugi.
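A minimal sketch tying the two together, assuming a placeholder function foo (note that psutil.cpu_count(logical=False) can return None on some platforms, hence the fallback):

import multiprocessing

import psutil

def foo(x):
    return x * x  # stand-in for your real, memory-hungry work

if __name__ == "__main__":
    # Fall back to 1 if psutil cannot determine the physical count
    n_physical = psutil.cpu_count(logical=False) or 1
    with multiprocessing.Pool(processes=n_physical) as pool:
        results = pool.map(foo, range(100))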
--> MOST OF THE TIME, DON'T USE MORE CORES THAN AVAILABLE (unless you have benchmarked a very specific code and proved it was worth it)
Misinformation abounds that "the OS will handle things if you specify more cores than are available". It is absolutely 100% false: if you use more cores than available, you will face huge performance drops. The exception is when the worker processes are IO-bound. The OS scheduler tries to give every task the same attention, switching regularly from one to another, and depending on the OS it can spend a large fraction of its working time just switching between processes, which is disastrous.
Don't just trust me: try it, benchmark it, you will see how clear it is.
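For instance, a rough benchmark sketch along those lines (busy_work and the worker counts are illustrative; only the relative timings matter):

import multiprocessing
import time

def busy_work(n):
    # Pure-Python, CPU-bound loop
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    for n_workers in (2, 4, 8, 16):
        start = time.perf_counter()
        with multiprocessing.Pool(processes=n_workers) as pool:
            pool.map(busy_work, [2_000_000] * 32)
        print(f"{n_workers} workers: {time.perf_counter() - start:.2f}s")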
--> IS IT POSSIBLE TO DECIDE WHETHER THE CODE WILL BE EXECUTED ON LOGICAL OR PHYSICAL CORES?
If you are asking this question, it means you are unsure how physical and logical cores are designed, so maybe you should read a little more about processor architecture.
If you want to run on core 3 rather than core 1, for example, there are indeed some solutions (CPU-affinity tools such as taskset, used in the answers below), but in general you should not need them.
If you launch 4 CPU-intensive processes on a 4-physical / 8-logical processor, the scheduler will assign each of your processes to a distinct physical core (and the 4 extra logical cores will remain unused or poorly used). But on a 4-physical / 8-logical proc, if the processing units are (0,1), (2,3), (4,5), (6,7), then it makes no difference whether the process is executed on 0 or on 1: it is the same processing unit.
To my knowledge at least (though an expert could confirm, and it may differ on very specific hardware), I think there is no or very little difference between executing code on 0 or on 1. Within the processing unit (0,1), I am not sure that 0 is the logical core and 1 the physical one, or vice versa. To my understanding (which may be wrong), both are hardware threads of the same processing unit: they share the cache and access to the hardware (RAM included), and 0 is no more a physical unit than 1 is.
Beyond that, you should let the OS decide. The OS scheduler can take advantage of features like turbo boost that exist on some platforms (e.g., i7, i5, i3), something you have no control over and that could be truly helpful to you.
If you launch 5 CPU-intensive tasks on a 4-physical / 8-logical-core machine, the behaviour will be chaotic, almost unpredictable, mostly dependent on your hardware and OS. The scheduler will try its best, but almost every time you will see really bad performance.
Let's presume for a moment that we are still talking about a classical 4(8) architecture: because the scheduler tries its best (and therefore often switches attributions), depending on the process you are executing, launching on 5 logical cores can be even worse than launching on 8 (where the scheduler at least knows everything will be used at 100% anyway, so, lost for lost, it won't try much to avoid it, won't switch too often, and therefore won't lose too much time switching).
It is 99% sure, however (benchmark it on your hardware to be certain), that almost any multiprocessing program will run slower if you use more workers than you have physical cores.
A lot of things can intervene: the program, the hardware, the state of the OS, the scheduler it uses, the fruit you ate this morning, your sister's name... If you doubt something, just benchmark it; there is no other easy way to see whether you are losing performance. Sometimes computing can be really weird.
--> MOST OF THE TIME, ADDITIONAL LOGICAL CORES ARE INDEED USELESS IN PYTHON (but not always)
There are 2 main ways of doing truly parallel tasks in Python:
multiprocessing (cannot take advantage of logical cores)
multithreading (can take advantage of logical cores)
For example, to run 4 tasks in parallel:
--> multiprocessing will create 4 different Python interpreters. For each of them you have to start an interpreter, set up read/write permissions, set up the environment, allocate a lot of memory, etc. Let's say it as it is: you are starting a whole new program instance from zero. It can take a huge amount of time, so you have to be sure the new program will run long enough to be worth it.
If your program has enough work (say, at least a few seconds of work), then because the OS allocates CPU-consuming processes to different physical cores, it works, and you can gain a lot of performance, which is great. And because the OS almost always allows processes to communicate with each other (although slowly), they can even exchange (a little bit of) data.
--> multithreading is different. Within your Python interpreter, it just creates a small amount of state that several threads share and work on at the same time. It is WAY quicker to spawn (where spawning a new process on an old computer can sometimes take seconds, spawning a thread takes a ridiculously small fraction of that time). You don't create new processes, but "threads", which are much lighter.
Threads can share memory very quickly, because they literally work together on the same memory (while it has to be copied/exchanged when working across different processes).
BUT: WHY CAN'T WE USE MULTITHREADING IN MOST SITUATIONS? IT LOOKS VERY CONVENIENT!
There is a very BIG limitation in Python: only one line of Python can be executed at a time per interpreter, because of the GIL (Global Interpreter Lock). So most of the time you will even LOSE performance by using multithreading, because different threads have to wait to access the same resource. For pure computational work (with no IO), multithreading is USELESS and even WORSE if your code is pure Python. However, if your threads involve any waiting on IO, multithreading can be very beneficial.
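A small experiment that makes the GIL's effect visible (fib is an illustrative pure-Python, CPU-bound function; on CPython, expect the thread pool to be no faster than serial execution, while the process pool scales):

import concurrent.futures
import time

def fib(n):
    # Deliberately slow, pure-Python, CPU-bound recursion
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def timed(executor_cls, label):
    start = time.perf_counter()
    with executor_cls(max_workers=4) as ex:
        list(ex.map(fib, [27] * 8))
    print(f"{label}: {time.perf_counter() - start:.2f}s")

if __name__ == "__main__":
    timed(concurrent.futures.ThreadPoolExecutor, "threads (GIL-bound)")
    timed(concurrent.futures.ProcessPoolExecutor, "processes")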
--> WHY SHOULDN'T I USE LOGICAL CORES WHEN USING MULTIPROCESSING ?
Logical cores don't have their own memory access: they can only work through the memory access and cache of their host physical core. For example, it is very likely (and often done indeed) that the logical and the physical core of the same processing unit both run the same C/C++ function on different regions of the cache at the same time, making the processing hugely faster indeed.
But... these are C/C++ functions! Python is a big C/C++ wrapper that needs much more memory and CPU than its equivalent C++ code. It is very likely in 2018 that, whatever you want to do, 2 big Python processes will need much, much more memory and cache traffic than a single physical+logical unit can afford, and much more than the equivalent truly-multithreaded C/C++ code would consume. This, once again, almost always causes performance to drop. Remember that every variable not available in the processor's cache takes orders of magnitude longer to read from main memory. If your cache is already completely full for 1 single Python process, guess what happens if you force 2 processes to use it: they will use it one at a time, switching permanently, causing data to be stupidly flushed and re-read at every switch. When data is being read or written from memory, you might think your CPU "is" working, but it's not: it's waiting for the data, doing nothing.
--> HOW CAN YOU TAKE ADVANTAGE OF LOGICAL CORES THEN ?
As I said, there is no true multithreading (so no real use of logical cores) in default Python, because of the Global Interpreter Lock. You can force the GIL to be released during some parts of a program, but I think it is wise advice not to touch it if you don't know exactly what you are doing.
Removing the GIL has definitely been the subject of a lot of research (see for example the experimental PyPy and Cython projects, which each attack the problem in their own way).
For now, no complete solution exists, as it is a much more complex problem than it seems.
There is, I admit, another solution that can work:
Code your function in C
Wrap it in Python with ctypes
Use the Python threading module to call your wrapped C function
This will work 100%, and you will be able to use all the logical cores, in Python, with multithreading, for real. The GIL won't bother you, because you won't be executing true Python functions, but C functions instead.
For example, some libraries like NumPy can work on all available threads, because they are coded in C. But if you get to this point, I have always thought it could be wise to consider writing your program in C/C++ directly, because it is a consideration very far from the original Pythonic spirit.
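A hypothetical sketch of the recipe above (libwork.so and heavy_compute are assumed names, not a real library; ctypes releases the GIL for the duration of a foreign call, which is what lets these threads truly run in parallel):

import ctypes
import concurrent.futures

# Hypothetical compiled C library exposing: long heavy_compute(long)
lib = ctypes.CDLL("./libwork.so")
lib.heavy_compute.argtypes = [ctypes.c_long]
lib.heavy_compute.restype = ctypes.c_long

with concurrent.futures.ThreadPoolExecutor(max_workers=8) as ex:
    # Each call drops the GIL while it runs inside C
    results = list(ex.map(lib.heavy_compute, range(8)))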
--> DON'T ALWAYS USE ALL AVAILABLE PHYSICAL CORES
I often see people go "Ok, I have 8 physical cores, so I'll take 8 cores for my job". It often works, but it sometimes turns out to be a poor idea, especially if your job needs a lot of I/O.
Try with N-1 cores (once again, especially for highly I/O-demanding tasks), and you will very often see that, per task on average, single tasks run faster on N-1 cores. Indeed, your computer does a lot of other things: USB, mouse, keyboard, network, hard drive, etc. Even on a workstation, periodic tasks run in the background at any time that you have no idea about. If you don't leave 1 physical core to manage those tasks, your calculation will be regularly interrupted (flushed out from memory / placed back in memory), which can also lead to performance issues.
You might think "Well, background tasks will only use 5% of CPU time, so there is 95% left". But that's not the case.
The processor handles one task at a time, and every time it switches, a considerably high amount of time is wasted putting everything back in its place in the cache/registers. Then, if for some weird reason the OS scheduler does this switching too often (something you have no control over), all of that computing time is lost forever and there's nothing you can do about it.
If (and it sometimes happens) this scheduler problem impacts the performance of not 1 but 30 tasks, it can result in really intriguing situations where running on 29/30 physical cores is significantly faster than on 30/30.
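In code, that rule of thumb is essentially one line (a sketch; the fallback of 2 is an arbitrary safety net in case psutil cannot detect the count):

import psutil

# Leave one physical core free for the OS and background I/O
n_workers = max(1, (psutil.cpu_count(logical=False) or 2) - 1)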
--> MORE CPUs ARE NOT ALWAYS BEST
It is very common, when you use a multiprocessing.Pool, to also use a multiprocessing.Queue or a manager queue shared between processes, to allow some basic communication between them. Sometimes (I must have said it 100 times, but I repeat it), in a hardware-dependent manner, it can occur (but you should benchmark it for your specific application, code implementation and hardware) that using more CPUs creates a bottleneck when you make processes communicate/synchronize. In those specific cases, it could be interesting to run on a lower CPU number, or even to move the synchronization task to a faster processor (here I'm talking about scientific intensive calculation run on a cluster, of course).
As multiprocessing is often meant to be used on clusters, note that clusters are often underclocked for energy-saving purposes. Because of that, single-core performance can be really bad (balanced by a much higher number of CPUs), making the problem even worse when you scale your code from your local computer (few cores, high single-core performance) to a cluster (lots of cores, lower single-core performance), because your code bottlenecks on the single_core_perf/nb_cpu ratio, which is sometimes really annoying.
Everyone is tempted to use as many CPUs as possible, but benchmarking in those cases is mandatory.
The typical case (in data science, for example) is to have N processes running in parallel whose results you want to summarize in one file. Because you cannot wait for the whole job to be done, you do it through a dedicated writer process. The writer writes to the output file everything that is pushed into its multiprocessing.Queue (a single-core, hard-drive-limited process), while the N worker processes fill the queue.
It is easy then to imagine that if you have 31 CPUs writing information to one really slow CPU, your performance will drop (and possibly something will crash if you exceed the system's capacity to handle the temporary data).
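A minimal sketch of that writer pattern (the file name, worker body and task count are illustrative; a Manager queue is used because a plain multiprocessing.Queue cannot be passed to Pool workers as an argument):

import multiprocessing

def worker(task, queue):
    queue.put(f"result for {task}\n")  # stand-in for real computation

def writer(queue, path):
    with open(path, "w") as f:
        for line in iter(queue.get, None):  # None is the stop sentinel
            f.write(line)

if __name__ == "__main__":
    queue = multiprocessing.Manager().Queue()
    w = multiprocessing.Process(target=writer, args=(queue, "out.txt"))
    w.start()
    with multiprocessing.Pool(4) as pool:
        pool.starmap(worker, [(t, queue) for t in range(20)])
    queue.put(None)  # tell the writer to stop
    w.join()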
--> Take-home message
Use psutil to count logical/physical processors, rather than multiprocessing.cpu_count() or the like.
Multiprocessing can only take advantage of physical cores (or at least benchmark it to prove this is not true in your case).
Multithreading will work on logical cores, BUT you will have to code and wrap your functions in C, or release the Global Interpreter Lock (and every time you do so, one kitten atrociously dies somewhere in the world).
If you are trying to run multithreaded pure-Python code, you will see huge performance drops, so you should use multiprocessing instead 99% of the time.
Unless your processes/threads have long pauses you can exploit, never use more workers than cores available, and benchmark properly if you want to try.
If your task is I/O-intensive, you should leave 1 physical core to handle the I/O, and if you have enough physical cores, it will be worth it. For multiprocessing implementations, that means using N-1 physical cores. For classical 2-way multithreading, it means using N-2 logical cores.
If you need more performance, try PyPy (not production-ready) or Cython, or even write it in C.
Last but not least, and most important of all: if you are really seeking performance, you should absolutely, always, always benchmark, and not guess anything. Benchmarks often reveal strange platform-, hardware-, or driver-specific behaviour that you would have no idea about.

Note: this approach doesn't work on Windows, and it has been tested only on Linux.
Using multiprocessing.Process:
Assigning a physical core to each process is quite easy when using Process(). You can create a for loop that iterates through each core and pins the new process to it using taskset -p [mask] [pid]:
import multiprocessing
import os

def foo():
    return

if __name__ == "__main__":
    for process_idx in range(multiprocessing.cpu_count()):
        p = multiprocessing.Process(target=foo)
        # Pin the parent to one core before start(); the child inherits this affinity
        os.system("taskset -p -c %d %d" % (process_idx % multiprocessing.cpu_count(), os.getpid()))
        p.start()
I have 32 cores on my workstation so I'll put partial results here:
pid 520811's current affinity list: 0-31
pid 520811's new affinity list: 0
pid 520811's current affinity list: 0
pid 520811's new affinity list: 1
pid 520811's current affinity list: 1
pid 520811's new affinity list: 2
pid 520811's current affinity list: 2
pid 520811's new affinity list: 3
pid 520811's current affinity list: 3
pid 520811's new affinity list: 4
pid 520811's current affinity list: 4
pid 520811's new affinity list: 5
...
As you can see, the previous and the new affinity are printed for each process. The first one starts with affinity for all cores (0-31) and is then pinned to core 0; the second process is by default assigned to core 0 and then has its affinity changed to the next core (1); and so forth.
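For what it's worth, the same pinning can be done without shelling out to taskset, via os.sched_setaffinity (a sketch; Linux-only, like the approach above):

import multiprocessing
import os

def foo():
    return

if __name__ == "__main__":
    n = multiprocessing.cpu_count()
    for process_idx in range(n):
        # Pin the parent to one core; the child inherits it on start()
        os.sched_setaffinity(0, {process_idx % n})
        p = multiprocessing.Process(target=foo)
        p.start()
    os.sched_setaffinity(0, set(range(n)))  # restore the parent's affinity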
Using multiprocessing.Pool:
Warning: this approach needs tweaking of the pool.py module, since there is no way I know of to extract the pid from Pool(). Also, these changes have been tested on Python 2.7 and multiprocessing.__version__ = '0.70a1'.
In pool.py, find the line where the _task_handler_start() method is called. On the next line, you can assign each process in the pool to a "physical" core using the following (I put the import os here so the reader doesn't forget to import it):
import os

for worker in range(len(self._pool)):
    p = self._pool[worker]
    os.system("taskset -p -c %d %d" % (worker % cpu_count(), p.pid))
and you're done. Test:
import multiprocessing

def foo(i):
    return

if __name__ == "__main__":
    pool = multiprocessing.Pool(multiprocessing.cpu_count())
    pool.map(foo, 'iterable here')
result:
pid 524730's current affinity list: 0-31
pid 524730's new affinity list: 0
pid 524731's current affinity list: 0-31
pid 524731's new affinity list: 1
pid 524732's current affinity list: 0-31
pid 524732's new affinity list: 2
pid 524733's current affinity list: 0-31
pid 524733's new affinity list: 3
pid 524734's current affinity list: 0-31
pid 524734's new affinity list: 4
pid 524735's current affinity list: 0-31
pid 524735's new affinity list: 5
...
Note that this modification to pool.py assigns the jobs to cores round-robin, so if you start more jobs than there are CPU cores, several of them will end up on the same core.
EDIT:
What the OP is looking for is a pool() capable of starting the pool on specific cores. For this, more tweaks to multiprocessing are needed (first undo the above-mentioned changes).
Warning:
Don't try to copy-paste the function definitions and function calls; only copy-paste the part that is supposed to be added after self._worker_handler.start() (you'll see it below). Note that my multiprocessing.__version__ tells me the version is '0.70a1', but that doesn't matter as long as you just add what you need to add:
multiprocessing's pool.py:
add a cores_idx=None argument to the __init__() definition. In my version it looks like this after adding it:
def __init__(self, processes=None, initializer=None, initargs=(),
             maxtasksperchild=None, cores_idx=None):
You should also add the following code after self._worker_handler.start():
if cores_idx is not None:
    import os
    for worker in range(len(self._pool)):
        p = self._pool[worker]
        os.system("taskset -p -c %d %d" % (cores_idx[worker % len(cores_idx)], p.pid))
multiprocessing's __init__.py:
Add a cores_idx=None argument to the definition of Pool(), as well as to the other Pool() call in the return part. In my version it looks like:
def Pool(processes=None, initializer=None, initargs=(), maxtasksperchild=None,
         cores_idx=None):
    '''
    Returns a process pool object
    '''
    from multiprocessing.pool import Pool
    return Pool(processes, initializer, initargs, maxtasksperchild, cores_idx)
And you're done. The following example runs a pool of 5 workers on cores 0 and 2 only:
import multiprocessing

def foo(i):
    return

if __name__ == "__main__":
    pool = multiprocessing.Pool(processes=5, cores_idx=[0, 2])
    pool.map(foo, 'iterable here')
result:
pid 705235's current affinity list: 0-31
pid 705235's new affinity list: 0
pid 705236's current affinity list: 0-31
pid 705236's new affinity list: 2
pid 705237's current affinity list: 0-31
pid 705237's new affinity list: 0
pid 705238's current affinity list: 0-31
pid 705238's new affinity list: 2
pid 705239's current affinity list: 0-31
pid 705239's new affinity list: 0
Of course you can still have the usual functionality of multiprocessing.Pool() by omitting the cores_idx argument.

I found a solution that doesn't involve changing the source code of a Python module. It uses the approach suggested here. After running that script, one can check that only the physical cores are active by typing:
lscpu
in bash, which returns:
CPU(s): 8
On-line CPU(s) list: 0,2,4,6
Off-line CPU(s) list: 1,3,5,7
Thread(s) per core: 1
(One can run the script linked above from within Python.) In any case, after running it, typing these commands in Python:
import multiprocessing
multiprocessing.cpu_count()
returns 4.

Related

Multiprocessing with Multithreading? How do I make this more efficient?

I have an interesting problem on my hands. I have access to a 128 CPU ec2 instance. I need to run a program that accepts a 10 million row csv, and sends a request to a DB for each row in that csv to augment the existing data in the csv. In order to speed this up, I use:
import concurrent.futures

executor = concurrent.futures.ProcessPoolExecutor(len(chunks))
futures = [executor.submit(<func_name>, chnk) for chnk in chunks]
successes = concurrent.futures.wait(futures)
I chunk up the 10 million row csv into 128 portions and then use futures to spin up 128 processes (+1 for the main one, so total 129). Each process takes a chunk, and retrieves the records for its chunk and spits the output into a file. At the end of the process, I merge all the files together and voila.
I have a few questions about this.
is this the most efficient way to do this?
by creating 128 subprocesses, am I really using the 128 CPUs of the machine?
would multithreading be better/more efficient?
can I multithread on each CPU?
advice on what to read up on?
Thanks in advance!
Is this most efficient?
Hard to tell without profiling. There's always a bottleneck somewhere. For example if you are cpu limited, and the algorithm can't be made more efficient, that's probably a hard limit. If you're storage bandwidth limited, and you're already using efficient read/write caching (typically handled by the OS or by low level drivers), that's probably a hard limit.
Are all cores of the machine actually used?
(Assuming python is running on a single physical machine, and you mean individual cores of one cpu) Yes, python's mp.Process creates a new OS level process with a single thread which is then assigned to execute for a given amount of time on a physical core by the OS's scheduler. Scheduling algorithms are typically quite good, so if you have an equal number of busy threads as logical cores, the OS will keep all the cores busy.
Would threads be better?
Not likely. CPython is not thread-safe, so the GIL only allows a single thread per process to run Python code at a time. There are specific exceptions to this when a function written in C or C++ calls the Python macro Py_BEGIN_ALLOW_THREADS, though this is not extremely common. If most of your time is spent in such functions, threads will actually be allowed to run concurrently, and they have less overhead than processes. Threads also share memory, making it easier to pass results back after completion (threads can simply modify some shared state rather than passing results via a queue or similar).
multithreading on each CPU?
Again, I think what you probably have is a single machine with 128 logical cores. The OS scheduler decides which threads run on each core at any given time. Unless the threads release the GIL, only one thread from each process can run at a time. For example, running 128 processes each with 8 threads would give 1024 threads, but still only 128 of them could ever run at once, so the extra threads would only add overhead.
what to read up on?
When you want to make code fast, you need to be profiling. Profiling for parallel processing is more challenging, and profiling for a remote / virtualized computer can sometimes be challenging as well. It is not always obvious what is making a particular piece of code slow, and the only way to be sure is to test it. Also look into the tools you're using. I'm specifically thinking about the database you're using, because most database software has had a great deal of work put into optimization, but you must use it in the correct way to get the most speed out of it. Batched requests come to mind rather than accessing a single row at a time.
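For instance, a sketch of the batching idea (chunked is a helper defined here; fetch_batch stands in for whatever bulk-lookup call your DB client provides):

def chunked(rows, size):
    # Yield successive fixed-size batches from rows
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

def augment_all(rows, fetch_batch, batch_size=1000):
    out = []
    for batch in chunked(rows, batch_size):
        out.extend(fetch_batch(batch))  # one DB round trip per batch
    return out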

concurrent.futures.ThreadPoolExecutor scaling poorly on HPC

I'm new to using HPC and had some questions regarding parallelization of code. I have some python code which I've successfully parallelized using multi-threading which works great on my personal machine and a server. However, I just got access to HPC resources at my university. The general section of code which I use looks like this:
# Iterate over weathers
print(f'Number of CPU: {os.cpu_count()}')
with concurrent.futures.ThreadPoolExecutor() as executor:
    futures = [executor.submit(run_weather, i) for i in range(len(weather_list))]
    for f in concurrent.futures.as_completed(futures):
        results.append(f.result())
On my personal machine, when running two weathers (the item I've parallelized) I get the following results:
Number of CPU: 8
done in 29.645377159118652 seconds
On the HPC when running on 1 node with 32 cores, I get the following results:
Number of CPU: 32
done in 86.95256996154785 seconds
So it is running almost three times slower, as if it were running serially with the same processing overhead. I also tried switching the code to ProcessPoolExecutor, and it was much slower than ThreadPoolExecutor. I assume this must be something to do with data transfer (I've seen that multiprocessing across multiple nodes attempts to pass the entire program, packages and all), but as I said, I'm very new to HPC and the wiki provided by my university leaves much to be desired.
Adding threads will only slow down the execution of CPU-bound programs (so much time is wasted acquiring and releasing locks). Unlike in some other programming languages, threads in Python are bound by a global interpreter lock (GIL), so you may not see the behavior you have seen elsewhere.
As a rule of thumb for Python: multiple processes are good for CPU-bound work and for spreading workloads across multiple cores. Threads are great for things like concurrent IO. Threads will not speed up any kind of computational work.
Additionally, running a number of threads equal to the number of CPUs probably does not make sense for your program, particularly because threading won't spread work across CPU cores anyhow. More threads add more overhead in acquiring and releasing the GIL, which is why your execution times grow with the thread count.
Next steps:
Determine if threading is speeding up the execution of your program at all.
Try running your code single-threaded, then try again with a small number of threads. Does it speed up at all?
Chances are, you will find threading is not helping you at all here.
Explore multiprocessing instead of threading, i.e. use ProcessPoolExecutor instead of ThreadPoolExecutor (a sketch follows at the end of this answer).
Keep in mind, processes have their own caveats just like threads.
Here is a useful reference to learn more.
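As a starting point, a sketch of the process-based variant (run_weather and weather_list are the names from the question; the worker count is capped at the CPU count):

import concurrent.futures
import os

if __name__ == "__main__":
    results = []
    with concurrent.futures.ProcessPoolExecutor(max_workers=os.cpu_count()) as executor:
        futures = [executor.submit(run_weather, i) for i in range(len(weather_list))]
        for f in concurrent.futures.as_completed(futures):
            results.append(f.result())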

Launching multiple C processes in Python

I have two programs, one written in C and one written in Python. I want to pass a few arguments to C program from Python and do it many times in parallel, because I have about 1 million of such C calls.
Essentially I did like this:
from subprocess import check_call
import multiprocessing as mp
from itertools import combinations

def run_parallel(f1, f2):
    check_call(f"./c_compiled {f1} {f2} &", cwd='.', shell=True)

if __name__ == '__main__':
    pairs = combinations(fns, 2)
    pool = mp.Pool(processes=32)
    pool.starmap(run_parallel, pairs)
    pool.close()
However, sometimes I get the following errors (though the main process is still running)
/bin/sh: fork: retry: No child processes
Moreover, sometimes the whole program in Python fails with
BlockingIOError: [Errno 11] Resource temporarily unavailable
I found that while it's still running, I can see a lot of processes spawned for my user (around 500), while I have at most 512 available.
This does not happen all the time (it depends on the arguments), but it does happen often. How can I avoid these problems?
I'd wager you're running up against a process/file descriptor/... limit there.
You can "save" one process per invocation by not using shell=True:
check_call(["./c_compiled", f1, f2], cwd='.')
But it'd be better still to make that C code callable from Python instead of creating processes to do so. By far the easiest way to interface "random" C code with Python is Cython.
"many times in parallel" you can certainly do, for reasonable values of "many", but "about 1 million of such C calls" all running at the same time on the same individual machine is almost surely out of the question.
You can lighten the load by running the jobs without interposing a shell, as discussed in #AKX's answer, but that's not enough to bring your objective into range. Better would be to queue up the jobs so as to run only a few at a time -- once you reach that number of jobs, start a new one only when a previous one has finished. The exact number you should try to keep running concurrently depends on your machine and on the details of the computation, but something around the number of CPU cores might be a good first guess.
Note in particular that it is counterproductive to have more jobs at any one time than the machine has resources to run concurrently. If your processes do little or no I/O then the number of cores in your machine puts a cap on that, for only the processes that are scheduled on a core at any given time (at most one per core) will make progress while the others wait. Switching among many processes so as to attempt to avoid starving any of them will add overhead. If your processes do a lot of I/O then they will probably spend a fair proportion of their time blocked on I/O, and therefore not (directly) requiring a core, but in this case your I/O devices may well create a bottleneck, which might prove even worse than the limitation from number of cores.
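Putting both suggestions together, a sketch of a bounded version (fns comes from the question; the worker count is a first guess to tune):

from subprocess import check_call
from itertools import combinations
import multiprocessing as mp

def run_one(pair):
    f1, f2 = pair
    # No shell, no "&": blocks until this C process finishes
    check_call(["./c_compiled", str(f1), str(f2)])

if __name__ == "__main__":
    with mp.Pool(processes=mp.cpu_count()) as pool:
        # At most cpu_count() C processes are alive at any moment
        pool.map(run_one, combinations(fns, 2), chunksize=100)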

Python: Does `multiprocessing.Process(...).start()` start all processes on the same cpu?

Suppose I have a function that takes several seconds to compute and I have 8 CPUs (according to multiprocessing.cpu_count()).
What happens when I start this process less than 8 times and more than 8 times?
When I tried this, I observed that even when I started 20 processes, they were all running in parallel. I expected them to run 8 at a time, with the others waiting for their turn.
What happens depends on the underlying operating system.
But generally there are two things: physical CPU cores and threads/processes (all major systems have them). The OS keeps track of this abstract thing called a thread/process (which, among other things, contains code to execute) and maps it to some CPU core for a given time, or until some syscall (for example, network access) is made. If you have more threads than cores (which is usually the case; there are lots of things running in the background of a modern OS), then some of them wait for a context switch (i.e., for their turn).
Now, if you have CPU-intensive tasks (heavy calculations), you won't see any benefit from running more than 8 threads/processes. But even then, they won't run one after another. The OS will interrupt a CPU core at some point to allow other threads to run, mixing the execution: a little bit of this, a little bit of that. That way every thread/process slowly progresses and none waits, possibly forever.
On the other hand, if your tasks are I/O-bound (for example, you make an HTTP call and wait on the network), then having 20 tasks will increase performance, because when those threads wait for I/O, the OS puts them on hold and lets other threads run. Those other threads can do I/O as well; that way you achieve a higher level of concurrency.
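A tiny experiment matching this description: start 20 CPU-bound processes on an 8-core machine and note that they all finish around the same (late) time, rather than in batches of 8 (the work size is illustrative):

import multiprocessing
import time

def spin(idx, start):
    total = 0
    for i in range(20_000_000):  # fixed amount of CPU-bound work
        total += i
    print(f"process {idx} done after {time.time() - start:.1f}s")

if __name__ == "__main__":
    start = time.time()
    procs = [multiprocessing.Process(target=spin, args=(i, start)) for i in range(20)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()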

Why would using 8 threads be faster than 4 threads on a 4 core Hyper Threaded CPU?

I have a quad core i7 920 CPU. It is Hyperthreaded, so the computer thinks it has 8 cores.
From what I've read on the interweb, when doing parallel tasks, I should use the number of physical cores, not the number of hyper threaded cores.
So I have done some timings, and was surprised that using 8 threads in a parallel loop is faster than using 4 threads.
Why is this? My example code is too long to post here, but can be found by running the example here: https://github.com/jsphon/MTVectorizer
A chart of the performance is here:
(Intel) hyperthreaded cores act like (up to) two CPUs.
The observation is that a single CPU has a set of resources that are ideally busy continuously, but in practice sit idle surprising often while the CPU waits for some external event, typically memory reads or writes.
By adding a bit of additional state for another hardware thread (e.g., another copy of the registers plus some extra bookkeeping), the "single" CPU can switch its attention to executing the other thread when the first one blocks. (One can generalize this to N hardware threads, and other architectures have done so; Intel stopped at 2.)
If both hardware threads spend their time waiting for various events, the CPU can do the corresponding processing for both hardware threads. 40 nanoseconds for a memory wait is a long time. So if your program fetches lots of memory, I'd expect it to look as if both hardware threads were fully effective, i.e., you should get nearly 2x.
If the two hardware threads are doing work that is highly local (e.g., intense computations in just the registers), then internal waits become minimal and the single CPU can't switch fast enough to service both hardware threads as fast as they generate work. In this case, performance will degrade.
I don't recall where I heard it, and I heard this a long time ago: under such circumstances the net effect is more like 1.3x than the idealized 2x. (Expecting the SO audience to correct me on this).
Your application may switch back and forth in its needs depending on which part is running at the moment. Then you will get a mix of performance. I'm happy with any speed up I can get.
Ira Baxter has explained your question pretty well, but I want to add one more thing (can't comment on his answer because I don't have enough rep yet): there is an overhead to switching from one thread to another. This process, called context switching (http://wiki.osdev.org/Context_Switching#Hardware_Context_Switching), requires at minimum that the CPU core change its registers to reflect the new thread's data. The cost is significant for process-level context switches, but gets quite a bit cheaper for thread-level switching. This means two things:
1) Hyperthreading will never give you the theoretical 2x performance boost, because the cost of context switching is non-trivial. This is also why threads doing highly local work degrade performance, per Ira: frequent context switching multiplies that cost.
2) 8 single-threaded processes will run slower than 4 double-threaded processes doing the same work. Thus, you should make use of Python's threading library, or the excellent greenlet library (https://greenlet.readthedocs.org/en/latest/), if you plan on doing multithreaded work.
