Using multithreading for maximum CPU efficiency - python

I am currently working in Python and my program looks like this:
function(1)
function(2)
...
function(100)
Performing a function takes ~30 minutes at 100% CPU, so executing the program takes a lot of time. The functions access the same file for inputs, do a lot of math and print the results.
Would introducing multithreading decrease the time the program takes to complete (I am working on a multi-core machine)? If so, how many threads should I use?
Thank you!

It depends.
If none of the functions depend on each other at all, you can of course run them on separate threads (or even processes using multiprocessing, to avoid the global interpreter lock). You can either run one process per core, or run 100 processes, or any number in between, depending on the resource constraints of your system. (If you don't own the system, some admins don't like users who spam the process table.)
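For the independent case, a minimal sketch with a process pool, assuming function(i) only needs the index and the shared input file is only read, never written:

from multiprocessing import Pool, cpu_count

def function(i):
    ...  # read the shared input file, ~30 minutes of math, print/save the results

if __name__ == "__main__":
    # One worker process per core; the 100 calls are spread across them.
    with Pool(processes=cpu_count()) as pool:
        pool.map(function, range(1, 101))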
If the functions must be run one after the other, then you can't do that. You have to restructure the program to try and isolate independent tasks, or accept that you might have a P-complete (inherently hard to parallelize) problem and move on.

Related

Multiprocessing with Multithreading? How do I make this more efficient?

I have an interesting problem on my hands. I have access to a 128 CPU ec2 instance. I need to run a program that accepts a 10 million row csv, and sends a request to a DB for each row in that csv to augment the existing data in the csv. In order to speed this up, I use:
import concurrent.futures

# one worker process per chunk
executor = concurrent.futures.ProcessPoolExecutor(len(chunks))
futures = [executor.submit(<func_name>, chnk) for chnk in chunks]
successes = concurrent.futures.wait(futures)
I chunk up the 10 million row csv into 128 portions and then use futures to spin up 128 processes (+1 for the main one, so total 129). Each process takes a chunk, and retrieves the records for its chunk and spits the output into a file. At the end of the process, I merge all the files together and voila.
I have a few questions about this.
is this the most efficient way to do this?
by creating 128 subprocesses, am I really using the 128 CPUs of the machine?
would multithreading be better/more efficient?
can I multithread on each CPU?
advice on what to read up on?
Thanks in advance!
Is this the most efficient way?
Hard to tell without profiling. There's always a bottleneck somewhere. For example, if you are CPU-limited and the algorithm can't be made more efficient, that's probably a hard limit. If you're limited by storage bandwidth and you're already using efficient read/write caching (typically handled by the OS or by low-level drivers), that's probably a hard limit too.
Are all cores of the machine actually used?
(Assuming Python is running on a single physical machine, and you mean individual cores of one CPU.) Yes, Python's mp.Process creates a new OS-level process with a single thread, which the OS's scheduler then assigns to run on a physical core for a given amount of time. Scheduling algorithms are typically quite good, so if you have as many busy threads as logical cores, the OS will keep all the cores busy.
Would threads be better?
Not likely. CPython is not thread-safe, so it only allows a single thread per process to run Python code at a time. There are specific exceptions to this when a function is written in C or C++ and calls the Python macro Py_BEGIN_ALLOW_THREADS, though this is not especially common. If most of your time is spent in such functions, threads will actually be allowed to run concurrently, and they have less overhead than processes. Threads also share memory, which makes passing results back after completion easier (threads can simply modify some global state rather than passing results via a queue or similar).
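As a rough sketch of that last point, assuming the heavy part of each chunk is a GIL-releasing call such as a DB driver waiting on the network (fetch_rows_for_chunk and chunks are placeholders):

import threading

results = {}                  # shared between threads; no pickling or queues needed
results_lock = threading.Lock()

def worker(chunk_id, chunk):
    rows = fetch_rows_for_chunk(chunk)   # placeholder; blocks on network I/O, GIL released
    with results_lock:                   # guard the shared dict while writing
        results[chunk_id] = rows

threads = [threading.Thread(target=worker, args=(i, c)) for i, c in enumerate(chunks)]
for t in threads:
    t.start()
for t in threads:
    t.join()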
multithreading on each CPU?
Again, I think what you probably have is a single CPU with 128 cores. The OS scheduler decides which threads should run on each core at any given time. Unless the threads are releasing the GIL, only one thread from each process can run at a time. For example, running 128 processes each with 8 threads would give 1024 threads, but still only 128 of them could ever run at once, so the extra threads would only add overhead.
what to read up on?
When you want to make code fast, you need to profile it. Profiling parallel code is more challenging, and profiling on a remote / virtualized machine can be challenging as well. It is not always obvious what is making a particular piece of code slow, and the only way to be sure is to test it. Also look into the tools you're using. I'm specifically thinking about the database: most database software has had a great deal of optimization work put into it, but you must use it the right way to get the most speed out of it. Batched requests come to mind, rather than querying a single row at a time.
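A minimal profiling sketch, assuming a process_chunk function that stands in for the per-chunk work and a sample_chunk loaded elsewhere:

import cProfile
import pstats

def process_chunk(chunk):
    ...  # placeholder: DB lookups, augmenting rows, writing the output file

if __name__ == "__main__":
    # Profile one representative chunk in a single process first; the hot spots
    # seen here are usually the same ones you'll hit under the full pool.
    profiler = cProfile.Profile()
    profiler.enable()
    process_chunk(sample_chunk)   # sample_chunk assumed to be loaded elsewhere
    profiler.disable()
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(20)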

Threads are not happening at the same time?

I have a program that fetches data via an API. I created a function that takes only the target data as an argument, and I call this function 10 times in a for-loop.
The program takes quite some time to display the data because each call only starts after the previous one has finished.
I want to use Threads to make it all happen quicker. However, I'm confused. On realpython.org I read this:
A thread is a separate flow of execution. This means that your program will have two things happening at once. But for most Python 3 implementations the different threads do not actually execute at the same time: they merely appear to. It’s tempting to think of threading as having two (or more) different processors running on your program, each one doing an independent task at the same time. That’s almost right. The threads may be running on different processors, but they will only be running one at a time.
First they say: "This means that your program will have two things happening at once", and then they say "but they will only be running one at a time". So my threads do not run simultaneously?
I want to make a decision on whether to use Threads or Multiprocessing but I can't figure it out.
Can somebody help?
With both Threads and Multiprocessing you must assume that execution of your program can jump from one thread/process to another at any moment. The difference is that with Threads, code is never really executed at the same time, so there is always only one CPU core doing your work. With Multiprocessing, your code runs on multiple cores at the same time, so only Multiprocessing can make your computation roughly N times faster with N processes. (There will be some overhead, of course.) If you are not doing any heavy computation but need to create the illusion of things running in parallel, use threads. This is especially useful for GUIs.
The confusing part is that IO (copying files or loading something from the web, for example) is not CPU-bound, as it does not require many CPU instructions. So always use threads for that. To understand it a bit more, realise that a thread waiting for an IO operation to finish is in a blocked state, which allows other threads to run. So if you use threads to fetch data, the first thread will start loading and then block. That makes room for the second thread to do the same, and so on. When one of the threads has the data ready, it will unblock, run the rest of its code and finish.
(Note that when multiple threads are running they can pause randomly and give room for other threads to run for a while and then carry on. (See first sentence of this answer.))
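For example, a minimal sketch of the fetch-10-items case with a thread pool (the URL and the fetch_data function are placeholders for your API call):

from concurrent.futures import ThreadPoolExecutor
import urllib.request

def fetch_data(item_id):
    # Placeholder for your API call; the thread blocks here waiting on IO,
    # which lets the other threads start their own requests in the meantime.
    with urllib.request.urlopen(f"https://api.example.com/data/{item_id}") as resp:
        return resp.read()

with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(fetch_data, range(10)))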
Generally, always use threads unless you need to do something CPU-heavy in parallel. Multiprocessing has a lot of limitations in how it works internally, and using it is more complicated and heavyweight.
This only applies to some implementations of Python though, most notably the most commonly used "official" implementation, CPython. In other languages, or in less common Python implementations, threads are often able to execute instructions on multiple cores at the same time.

Using shell vs python for parallelism

Let's say I have N tasks to run (~thousands) and each task takes a good amount of time X (a few minutes to hours). Luckily, each of these tasks can be run independently. Each task is a shell command invoked through Python.
Edit: Each task is almost identical, so I don't really need to abstract tasks.
Which is better? (in terms of memory required, cpu usage, background task limits ...)
Invoke each task as a background process from a single-threaded Python script (using files/redirection for tracking), or
Multiple Python threads, each one calling the shell command.
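To make option 2 concrete, I'm imagining roughly this (the command and worker count are just placeholders):

import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_task(task_args):
    # The thread just blocks while the external process runs; the GIL is
    # released during the wait, so threads are cheap here.
    return subprocess.run(["./task.sh", *task_args], capture_output=True, text=True)

tasks = [["--input", f"part_{i}.dat"] for i in range(1000)]   # placeholder arguments
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(run_task, tasks))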
I need Python here primarily to interact with the DB, do some logic, read files, etc.
Is this a tradeoff worth considering or either way is fine?
Ideally it would be nice if there are some statistics/graphs around the two approaches for different N/X values.
PS: Google and SO search didn't give me any leads. I am sorry if there is something like this already.
Thanks!

Python multiprocessing on Amazon EC2 eventually resorts to single core

I am running into a very weird problem on an Amazon instance running python and multiprocessing.
The context
I want to use pool.map or something similar (imap_unordered would do the trick too) to apply a CPU intensive task to an iterable. The iterable isn't that big (few hundred) but the task takes a long time.
I'm using the multiprocessing module of Python, in Python 2.7.11
The general structure is:
for _ in longer_loop:
    for _ in small_loop:
        pool = Pool(processes=18)
        pool.map(f, iterable)
        pool.close()
        pool.join()
The problem
I start the run. I go and look at "top" and see that Python is nicely using all them cores. I go and work on something else. I come back and see that Python is still in the longer loop but now uses only one core and has completely stopped taking advantage of the multiprocessing. To emphasize: it doesn't hang. Stuff is still happening. But it's happening one item at a time, instead of 18 at once.
Things I tried (that didn't help)
First instinct: This is a load balancing issue, since the function takes a long yet slightly variable time, so some cores are just finishing earlier. Set chunksize to 1 since the bottleneck is definitely the function being applied, not the creation of lots of chunks. That didn't help.
Second instinct: I vaguely remember that numpy and Python multiprocessing do not gel very well. Set OMP_NUM_THREADS=1 in the environment variables. While that seemed to help at first (by making everything run faster), a run with a longer execution time (more data than my "let's test this stuff on small things first" runs) still got stuck at the "only one thread" behaviour.
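(For reference, this is how I set it from inside the script; my assumption is that it needs to happen before numpy is imported so the OpenMP runtime picks it up:)

import os
os.environ["OMP_NUM_THREADS"] = "1"   # set before numpy/BLAS load

import numpy as np
from multiprocessing import Pool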
Note: I had the creation of the pool outside the small loop, but that didn't change anything. The actual execution of the map takes the most time, so closing and recreating Pool objects would be insignificant.
More suspicions of what it could be
Currently trying a run where I deal with that core affinity issue, but I feel like if that was the issue then I should see it from the start, not at some undetermined later time.
Is there something weird about the Amazon EC2 instance that says "enough cores for you, fool!" after creating too many processes?
Could it have to do with using too much memory? But then I'd just expect to still see 18 hardworking (and 1 monitoring) python processes, just now they're all busy swapping stuff because they're out of memory. But I really just see a single working process (and the 1 monitoring process) toiling away at the loop, as if map (or imap_unordered) decided that 1 was enough now. Which... just shouldn't happen.
Happy for any clues and pointers, and happy to provide more information if required.

Python: Interruptable threading in wx

My wx GUI shows thumbnails, but they're slow to generate, so:
The program should remain usable while the thumbnails are generating.
Switching to a new folder should stop generating thumbnails for the old folder.
If possible, thumbnail generation should make use of multiple processors.
What is the best way to do this?
Putting the thumbnail generation in a background thread with threading.Thread will solve your first problem, making the program usable.
If you want a way to interrupt it, the usual way is to add a "stop" variable which the background thread checks every so often (e.g., once per thumbnail), and the GUI thread sets when it wants to stop it. Ideally you should protect this with a threading.Condition. (The condition isn't actually necessary in most cases—the same GIL that prevents your code from parallelizing well also protects you from certain kinds of race conditions. But you shouldn't rely on that.)
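A minimal sketch of that pattern, using threading.Event as the stop flag (generate_thumbnail, folder_paths and the wx plumbing are placeholders):

import threading

stop_event = threading.Event()

def thumbnail_worker(paths):
    for path in paths:
        if stop_event.is_set():            # checked once per thumbnail
            return
        thumb = generate_thumbnail(path)   # placeholder for the real work
        # In wx, hand the result back to the GUI thread with wx.CallAfter(...).

worker = threading.Thread(target=thumbnail_worker, args=(folder_paths,), daemon=True)
worker.start()

# Later, when the user switches folders:
stop_event.set()
worker.join()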
For the third problem, the first question is: Is thumbnail generation actually CPU-bound? If you're spending more time reading and writing images from disk, it probably isn't, so there's no point trying to parallelize it. But, let's assume that it is.
First, if you have N cores, you want a pool of N threads, or N-1 if the main thread has a lot of work to do too, or maybe something like 2N or 2N-1 to trade off a bit of best-case performance for a bit of worst-case performance.
However, if that CPU work is done in Python, or in a C extension that nevertheless holds the Python GIL, this won't help, because most of the time, only one of those threads will actually be running.
One solution to this is to switch from threads to processes, ideally using the standard multiprocessing module. It has built-in APIs to create a pool of processes, and to submit jobs to the pool with simple load-balancing.
The problem with using processes is that you no longer get automatic sharing of data, so that "stop flag" won't work. You need to explicitly create a flag in shared memory, or use a pipe or some other mechanism for communication instead. The multiprocessing docs explain the various ways to do this.
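A possible sketch of that, passing a Manager Event to the workers through the pool initializer (generate_thumbnail and paths are again placeholders):

import multiprocessing as mp

_stop = None

def _init_worker(stop_event):
    # Runs once in each worker process and stores the shared flag.
    global _stop
    _stop = stop_event

def make_thumbnail(path):
    if _stop.is_set():
        return None                      # cancel requested: skip the work
    return generate_thumbnail(path)      # placeholder for the real work

if __name__ == "__main__":
    manager = mp.Manager()
    stop_event = manager.Event()
    with mp.Pool(processes=mp.cpu_count(),
                 initializer=_init_worker, initargs=(stop_event,)) as pool:
        async_result = pool.map_async(make_thumbnail, paths)
        # Calling stop_event.set() from the GUI asks the workers to finish early.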
You can actually just kill the subprocesses. However, you may not want to do this. First, unless you've written your code carefully, it may leave your thumbnail cache in an inconsistent state that will confuse the rest of your code. Also, if you want this to be efficient on Windows, creating the subprocesses takes some time (not as in "30 minutes" or anything, but enough to affect the perceived responsiveness of your code if you recreate the pool every time a user clicks a new folder), so you probably want to create the pool before you need it, and keep it for the entire life of the program.
Other than that, all you have to get right is the job size. Hopefully creating one thumbnail isn't too big of a job—but if it's too small of a job, you can batch multiple thumbnails up into a single job—or, more simply, look at the multiprocessing API and change the way it batches jobs when load-balancing.
Meanwhile, if you go with a pool solution (whether threads or processes), if your jobs are small enough, you may not really need to cancel. Just drain the job queue—each worker will finish whichever job it's working on now, but then sleep until you feed in more jobs. Remember to also drain the queue (and then maybe join the pool) when it's time to quit.
One last thing to keep in mind is that if you successfully generate thumbnails as fast as your computer is capable of generating them, you may actually cause the whole computer—and therefore your GUI—to become sluggish and unresponsive. This usually comes up when your code is actually I/O bound and you're using most of the disk bandwidth, or when you use lots of memory and trigger swap thrash, but if your code really is CPU-bound, and you're having problems because you're using all the CPU, you may want to either use 1 fewer core, or look into setting thread/process priorities.
