I have code that uses a Python multiprocessing pool, but it consumes excessive memory. I have tested both pool.map and pool.imap_unordered, and both use the same amount of memory. Below is a simplified version of my code.
import random, time, multiprocessing
def func(arg):
    y = arg**arg  # Don't look into this, because my original function is
                  # much more complicated and I can't change anything here.
    print y

p = multiprocessing.Pool(24)
count = 0
start = time.time()
for res in p.imap_unordered(func, range(48000), chunksize=2):
#for res in p.map(func, range(48000), chunksize=2):
    print "[%5.2f] res:%s count:%s"%(time.time()-start, res, count)
    count += 1
The function saves its output to files and doesn't have a return statement. The above code took:
p.map: CPU Utilized: 03:18:31, Job Wall-clock time: 00:08:17, Memory Utilized: 162.92 MB
p.imap_unordered: CPU Utilized: 04:00:13, Job Wall-clock time: 00:10:02, Memory Utilized: 161.16 MB
I have 128 GB of memory in total, and my original code stops because it exceeds that limit. Both map and imap_unordered show the same problem; I was expecting imap_unordered to use much less memory. What should I modify to reduce memory consumption without changing func (the original one)?
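One mitigation that is sometimes suggested for this kind of growth (a minimal sketch, untested against the original workload; run, n_tasks and n_workers are made-up names) is to recycle the workers with Pool's maxtasksperchild parameter, so that whatever each worker accumulates is released periodically, and to keep results from piling up on the parent side:
import multiprocessing

def run(func, n_tasks=48000, n_workers=24):
    # maxtasksperchild replaces each worker after it has finished 100 tasks,
    # which frees any memory that worker has accumulated.
    pool = multiprocessing.Pool(n_workers, maxtasksperchild=100)
    try:
        for _ in pool.imap_unordered(func, range(n_tasks), chunksize=16):
            pass  # func returns nothing, so nothing is kept in the parent
    finally:
        pool.close()
        pool.join()
Whether this helps depends on where the memory actually grows (in the workers or in the parent), so it is only a starting point.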
I have written a simple function to demonstrate this behavior which iteratively creates a list and I pass that function to the concurrent.futures.ProcessPoolExecutor. The actual function isn't important as this seems to happen for a wide variety of functions I've tested. As I increase the number of processors it takes longer to run the underlying function. At only 10 processors the total execution time per processor increases by 2.5 times! For this function it continues to increase at a rate of about 15% per processor up to the capacity limits of my machine. I have a Windows machine with 48 processors and my total CPU and memory usage doesn't exceed 25% for this test. I have nothing else running. Is there some blocking lurking somewhere?
from datetime import datetime
import concurrent.futures

def process(num_jobs=1, **kwargs):
    from functools import partial
    iterobj = range(num_jobs)
    args = []
    func = globals()['test_multi']
    with concurrent.futures.ProcessPoolExecutor(max_workers=num_jobs) as ex:
        ## using map
        result = ex.map(partial(func, *args, **kwargs), iterobj)
    return result

def test_multi(*args, **kwargs):
    starttime = datetime.utcnow()
    iternum = args[-1]
    test = []
    for i in range(200000):
        test = test + [i]
    return iternum, (datetime.utcnow() - starttime)

if __name__ == '__main__':
    max_processors = 10
    for i in range(max_processors):
        starttime = datetime.utcnow()
        result = process(i + 1)
        finishtime = datetime.utcnow() - starttime
        if i == 0:
            chng = 0
            total = 0
            firsttime = finishtime
        else:
            chng = finishtime / lasttime * 100 - 100
            total = finishtime / firsttime * 100 - 100
        lasttime = finishtime
        print(f'Multi took {finishtime} for {i+1} processes changed by {round(chng,2)}%, total change {round(total,2)}%')
This gives the following results on my machine:
Multi took 0:00:52.433927 for 1 processes changed by 0%, total change 0%
Multi took 0:00:52.597822 for 2 processes changed by 0.31%, total change 0.31%
Multi took 0:01:13.158140 for 3 processes changed by 39.09%, total change 39.52%
Multi took 0:01:26.666043 for 4 processes changed by 18.46%, total change 65.29%
Multi took 0:01:43.412213 for 5 processes changed by 19.32%, total change 97.22%
Multi took 0:01:41.687714 for 6 processes changed by -1.67%, total change 93.93%
Multi took 0:01:38.316035 for 7 processes changed by -3.32%, total change 87.5%
Multi took 0:01:51.106467 for 8 processes changed by 13.01%, total change 111.9%
Multi took 0:02:15.046646 for 9 processes changed by 21.55%, total change 157.56%
Multi took 0:02:13.467514 for 10 processes changed by -1.17%, total change 154.54%
The increases are not linear and vary from test to test, but they always end up significantly increasing the time to run the function. Given the ample free resources on this machine and the very simple function, I would have expected the total time to remain fairly constant, or perhaps increase slightly with the spawning of new processes, not to increase dramatically for pure calculation.
Yes there is, and it's called MEMORY BANDWIDTH.
While the memory controller is good at pipelining read/write instructions to improve throughput for parallel programs, if too many processes are reading from and writing to your RAM sticks at once, you will see a slowdown, because the RAM pipeline is being bombarded by all of them at the same time.
Other applications running at the same time may not be hitting RAM as heavily, because each core has caches (L1, L2 and a shared L3) that keep them running without contending for RAM bandwidth; only applications that do heavy memory operations contend for it, and your application is clearly contending with itself for RAM bandwidth.
This is one of the hard limits on parallel programs, and the solution is obviously to make a more "cache friendly" application that reaches out to RAM less often.
"Cache friendly" applications are more easily written in C/C++ than Python, but they are entirely doable in Python, as computers have a few MB of cache, which can fit an entire application in a lot of cases.
I'm brute-forcing an 8-digit PIN on an ELF executable (it's for a CTF) using asynchronous parallel processing. The code is very fast, but it fills the memory even faster.
It takes about 10% of the total iterations to fill 8 GB of RAM, and I have no idea what's causing it. Any help?
from pwn import *
import multiprocessing as mp
from tqdm import tqdm
def check_pin(pin):
    program = process('elf_exe')
    program.recvn(36)
    program.sendline(str(pin))
    program.recvline()
    program.recvline()
    res = program.recvline()
    program.close()
    if 'Access denied.' in str(res):
        return None, None
    else:
        return res, pin

def process_result(result):
    res, pin = result
    if res is not None:
        print(pin)

if __name__ == '__main__':
    print(f'Starting bruteforce on {mp.cpu_count()} cores :)\n')
    pool = mp.Pool(mp.cpu_count())
    min = 10000000
    max = 99999999
    for pin in tqdm(range(min, max)):
        pool.apply_async(check_pin, args=(pin,), callback=process_result)
    pool.close()
    pool.join()
Multiprocessing pools create several worker processes. Each call to apply_async creates a task that is added to a shared data structure (e.g. a queue), which the workers read through inter-process communication (IPC). The thing is, apply_async returns a synchronization object that you never use, so there is no synchronization at all. Items appended to that data structure take some memory (at least 32*3 = 96 bytes, since 3 CPython objects are allocated per item), and the structure grows to hold the 89,999,999 items, hence at least 8 GiB of RAM. The workers are nowhere near fast enough to keep up with the submissions. What tqdm prints is completely misleading: it tracks the number of tasks submitted, not the number executed, which is only a tiny fraction. Almost all of the actual work still remains when tqdm prints 100% and the submission loop is done. I also doubt that "the code is very fast", since it appears to launch 90 million copies of the target process, and launching a process is known to be an expensive operation.
To speed up this code and avoid huge memory usage, you need to aggregate the work into bigger tasks. For example, pass a range of PINs to each task and add a loop inside check_pin; a reasonable range size is around 1000. Additionally, accumulate the AsyncResult objects returned by apply_async in a list and perform a periodic synchronization when the list becomes too big, so that the workers never fall too far behind and the shared data structure stays small. Here is a simple untested example:
lst = []
for rng in allRanges:
    lst.append(pool.apply_async(check_pin, args=(rng,), callback=process_result))
    if len(lst) > 100:
        # Naive synchronization
        for i in lst:
            i.wait()
        lst = []
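A slightly fuller (still untested) sketch of the same idea; check_pin_range, process_hits and the block size are hypothetical additions layered on top of the original check_pin:
import multiprocessing as mp

def check_pin_range(pin_range):
    # One task now covers a whole block of PINs, so far fewer items
    # end up in the pool's internal queue.
    hits = []
    for pin in pin_range:
        res, found = check_pin(pin)   # the original per-PIN check
        if res is not None:
            hits.append((res, found))
    return hits

def process_hits(hits):
    for res, pin in hits:
        print(pin)

if __name__ == '__main__':
    pool = mp.Pool(mp.cpu_count())
    block = 1000                      # ~1000 PINs per task, as suggested above
    pending = []
    for start in range(10000000, 99999999, block):
        rng = range(start, min(start + block, 99999999))
        pending.append(pool.apply_async(check_pin_range, args=(rng,),
                                        callback=process_hits))
        if len(pending) > 100:        # naive periodic synchronization
            for r in pending:
                r.wait()
            pending = []
    pool.close()
    pool.join()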
I am trying to speed up some code with multiprocessing in Python, but I cannot understand one point. Assume I have the following dumb function:
import time
from multiprocessing.pool import Pool
def foo(_):
    for _ in range(100000000):
        a = 3
When I run this code without using multiprocessing (see the code below) on my laptop (Intel, 8-core CPU), the time taken is ~2.31 seconds.
t1 = time.time()
foo(1)
print(f"Without multiprocessing {time.time() - t1}")
Instead, when I run this code using Python's multiprocessing library (see the code below), the time taken is ~6.0 seconds.
pool = Pool(8)
t1 = time.time()
pool.map(foo, range(8))
print(f"Sample multiprocessing {time.time() - t1}")
To the best of my knowledge, I understand that when using multiprocessing there is some time overhead, mainly caused by the need to spawn the new processes and to copy the memory state. However, this operation should be performed just once, when the processes are initially spawned at the very beginning, and should not be that large.
So what I am missing here? Is there something wrong in my reasoning?
Edit: I think it is better to be more explicit about my question. What I expected here was the multiprocessed code to be slightly slower than the sequential one. It is true that I don't split the whole work across the 8 cores, but I am using 8 cores in parallel to do the same job (hence, in an ideal world, the processing time should stay more or less the same). Considering the overhead of spawning new processes, I expected a total increase in time of some (not too big) percentage, but not the ~2.6x increase I got here.
Well, multiprocessing can't possibly make this faster: you're not dividing the work across 8 processes, you're asking each of 8 processes to do the entire thing. Each process will take at least as long as your code doing it just once without using multiprocessing.
So if multiprocessing weren't helping at all, you'd expect it to take about 8 times as long (it's doing 8x the work!) as your single-processor run. But you said it's not taking 2.31 * 8 ~= 18.5 seconds, but "only" about 6. So you're getting better than a factor of 3 speedup.
Why not more than that? Can't guess from here. That will depend on how many physical cores your machine has, and how much other stuff you're running at the same time. Each process will be 100% CPU-bound for this specific function, so the number of "logical" cores is pretty much irrelevant - there's scant opportunity for processor hyper-threading to help. So I'm guessing you have 4 physical cores.
On my box
Sample timing on my box, which has 8 logical cores but only 4 physical cores, and otherwise left the box pretty quiet:
Without multiprocessing 2.468580484390259
Sample multiprocessing 4.78624415397644
As above, none of that surprises me. In fact, I was a little surprised (but pleasantly) at how effectively the program used up the machine's true capacity.
@TimPeters already answered that you are actually just running the job 8 times across the 8 Pool subprocesses, so it is slower, not faster.
That answers the issue, but it does not really address your underlying question. It is clear from your surprise at this result that you were expecting the single job to somehow be automatically split up and run in parts across the 8 Pool processes. That is not the way it works: you have to build in/tell it how to split up the work.
Different kinds of jobs need to be subdivided in different ways, but to continue with your example you might do something like this:
import time
from multiprocessing.pool import Pool
def foo(_):
    for _ in range(100000000):
        a = 3

def foo2(job_desc):
    start, stop = job_desc
    print(f"{start}, {stop}")
    for _ in range(start, stop):
        a = 3

def main():
    t1 = time.time()
    foo(1)
    print(f"Without multiprocessing {time.time() - t1}")
    pool_size = 8
    pool = Pool(pool_size)
    t1 = time.time()
    top_num = 100000000
    size = top_num // pool_size
    job_desc_list = [[size * j, size * (j + 1)] for j in range(pool_size)]
    # this is in case the upper bound is not a multiple of pool_size
    job_desc_list[-1][-1] = top_num
    pool.map(foo2, job_desc_list)
    print(f"Sample multiprocessing {time.time() - t1}")

if __name__ == "__main__":
    main()
Which results in:
Without multiprocessing 3.080709171295166
0, 12500000
12500000, 25000000
25000000, 37500000
37500000, 50000000
50000000, 62500000
62500000, 75000000
75000000, 87500000
87500000, 100000000
Sample multiprocessing 1.5312283039093018
As this shows, splitting the job up does allow it to take less time. The speedup will depend on the number of CPUs. In a CPU-bound job you should try to limit the pool size to the number of CPUs. My laptop has plenty more CPUs, but some of the benefit is lost to the overhead. If the jobs were longer, this would look more useful.
I'm trying to parallelize some calculations that use numpy with the help of Python's multiprocessing module. Consider this simplified example:
import time
import numpy
from multiprocessing import Pool
def test_func(i):
    a = numpy.random.normal(size=1000000)
    b = numpy.random.normal(size=1000000)
    for i in range(2000):
        a = a + b
        b = a - b
        a = a - b
    return 1
t1 = time.time()
test_func(0)
single_time = time.time() - t1
print("Single time:", single_time)
n_par = 4
pool = Pool()
t1 = time.time()
results_async = [
pool.apply_async(test_func, [i])
for i in range(n_par)]
results = [r.get() for r in results_async]
multicore_time = time.time() - t1
print("Multicore time:", multicore_time)
print("Efficiency:", single_time / multicore_time)
When I execute it, the multicore_time is roughly equal to single_time * n_par, while I would expect it to be close to single_time. Indeed, if I replace numpy calculations with just time.sleep(10), this is what I get — perfect efficiency. But for some reason it does not work with numpy. Can this be solved, or is it some internal limitation of numpy?
Some additional info which may be useful:
I'm using OSX 10.9.5, Python 3.4.2 and the CPU is Core i7 with (as reported by the system info) 4 cores (although the above program only takes 50% of CPU time in total, so the system info may not be taking into account hyperthreading).
when I run this I see n_par processes in top working at 100% CPU
if I replace numpy array operations with a loop and per-index operations, the efficiency rises significantly (to about 75% for n_par = 4).
It looks like the test function you're using is memory bound. That means that the run time you're seeing is limited by how fast the computer can pull the arrays from memory into cache. For example, the line a = a + b actually uses 3 arrays: a, b, and a new array that will replace a. These three arrays are about 8 MB each (1e6 floats * 8 bytes per float). I believe the different i7s have something like 3 MB to 8 MB of shared L3 cache, so you cannot fit all 3 arrays in cache at once. Your CPU adds the floats faster than the arrays can be loaded into cache, so most of the time is spent waiting for the arrays to be read from memory. Because the cache is shared between the cores, you don't see any speedup by spreading the work onto multiple cores.
Memory bound operations are an issue for numpy in general and the only way I know to deal with them is to use something like cython or numba.
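For example, here is a rough sketch of the numba route mentioned above (fused_update is a made-up name, and this assumes numba is installed): fusing the three updates into one element-wise pass keeps each value in registers instead of materializing temporary arrays:
import numpy as np
from numba import njit

@njit
def fused_update(a, b, iters):
    # Equivalent to: a = a + b; b = a - b; a = a - b, repeated iters times,
    # but done element by element so no temporary arrays are allocated.
    for _ in range(iters):
        for j in range(a.shape[0]):
            s = a[j] + b[j]   # new a
            d = s - b[j]      # new b
            a[j] = s - d
            b[j] = d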
One easy thing that should bump efficiency up is to do in-place array operations, if possible -- so add(a, b, a) will not create a new array, while a = a + b will. If your for loop over numpy arrays could be rewritten as vector operations, that should be more efficient as well. Another possibility would be to use numpy.ctypeslib to enable shared-memory numpy arrays (see: https://stackoverflow.com/a/5550156/2379433).
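Concretely, the inner loop of test_func could be written with explicit output arguments so that no temporaries are allocated (a sketch of the in-place suggestion, not a measured fix; test_func_inplace is a made-up name):
import numpy

def test_func_inplace(i):
    a = numpy.random.normal(size=1000000)
    b = numpy.random.normal(size=1000000)
    for _ in range(2000):
        numpy.add(a, b, out=a)       # a = a + b
        numpy.subtract(a, b, out=b)  # b = a - b
        numpy.subtract(a, b, out=a)  # a = a - b
    return 1
Because each operation writes into an existing array, the working set stays at two 8 MB arrays instead of three, which eases (though does not eliminate) the pressure on the shared cache.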
I have been programming numerical methods for mathematics and ran into the same problem: I wasn't seeing any speed-up for a supposedly CPU-bound problem. It turns out my problem was hitting the CPU cache memory limit.
I have been using Intel PCM (Intel® Performance Counter Monitor) to see how the CPU cache was behaving (displaying it inside Linux ksysguard). I also disabled 2 of my processors to get clearer results (2 remained active).
Here is what I have found out with this code:
import time
import numpy as np
import multiprocessing as mp

def somethinglong(b):
    n = 200000
    m = 5000
    shared = np.arange(n)
    for i in np.arange(m):
        0.01 * shared

pool = mp.Pool(2)
jobs = [() for i in range(8)]
for i in range(5):
    timei = time.time()
    pool.map(somethinglong, jobs, chunksize=1)
    #for job in jobs:
    #    somethinglong(job)
    print(time.time() - timei)
Example that doesn't reach the cache memory limit:
n=10000
m=100000
Sequential execution: 15s
2 processor pool no cache memory limit: 8s
It can be seen that there are no cache misses (all cache hits), so the speed-up is almost perfect: 15/8.
[Screenshot: cache hit counters for the 2-process pool]
Example that reaches the cache memory limit:
n=200000
m=5000
Sequential execution: 14s
2 processor pool cache memory limit: 14s
In this case I increased the size of the vector we operate on (and decreased the loop size, to keep execution times reasonable). Now the cache gets full and the processes constantly miss it, so there is no speedup at all: 14/14.
[Screenshot: cache miss counters for the 2-process pool]
Observation: assigning the result of the operation to a variable (aux = 0.01*shared) also consumes cache memory and can make the problem memory bound (without increasing any vector size).
For a map task from a list src_list to dest_list, where len(src_list) is on the order of thousands:
def my_func(elem):
    # some complex work, for example a minimizing task
    return new_elem
dest_list[i] = my_func(src_list[i])
I use multiprocessing.Pool
pool = Pool(4)
# took 543 seconds
dest_list = list(pool.map(my_func, src_list, chunksize=len(src_list)/8))
# took 514 seconds
dest_list = list(pool.map(my_func, src_list, chunksize=4))
# took 167 seconds
dest_list = [my_func(elem) for elem in src_list]
I am confused. Can someone explain why the multiprocessing version runs even slower?
I also wonder what the considerations are for the choice of chunksize and for choosing between multi-threading and multi-processing, especially for my problem. Currently, I measure time by summing all the time spent in my_func, because directly using
t = time.time()
dest_list = pool.map...
print time.time() - t
doesn't work. However, the documentation here says map() blocks until the result is ready, which seems different from my result. Is there another way to measure this, rather than simply summing the time? I have tried pool.close() with pool.join(), which does not work.
src_list is of length around 2000. time.time() - t doesn't work because it does not sum up all the time spent in my_func inside pool.map. And a strange thing happened when I used timeit.
def wrap_func(src_list):
    pool = Pool(4)
    dest_list = list(pool.map(my_func, src_list, chunksize=4))
print timeit("wrap_func(src_list)", setup="import ...")
It ran into
OSError: Cannot allocate memory
I guess I have used timeit in the wrong way...
I use python 2.7.6 under Ubuntu 14.04.
Thanks!
Multiprocessing requires overhead to pass the data between processes, because processes do not share memory. Any object passed between processes must be pickled (serialized to a string) and unpickled. This includes objects passed to the function in your list src_list and any object returned to dest_list. This takes time. To illustrate this, you might try timing the following function in a single process and in parallel.
def NothingButAPickle(elem):
    return elem
If you loop over your src_list in a single process this should be extremely fast because Python only has to make one copy of each object in the list in memory. If instead you call this function in parallel with the multiprocessing package it has to (1) pickle each object to send it from the main process to a subprocess as a string (2) depickle each object in the subprocess to go from a string representation to an object in memory (3) pickle the object to return it to the main process represented as a string, and then (4) depickle the object to represent it in memory in the main process. Without seeing your data or the actual function, this overhead cost typically only exceeds the multiprocessing gains if the objects you are passing are extremely large and/or the function is actually not that computationally intensive.