Python: Parallel Processing in Joblib Makes the Code Run Even Slower

I want to integrate parallel processing to make my for loops run faster.
However, I noticed that it has just made my code run slower. See the example below, where I am using joblib with a simple function on a list of random integers. Notice that the version without parallel processing runs faster than the one with it.
Any insight as to what is happening?
import random
import time
from joblib import Parallel, delayed

def f(x):
    return x**x

if __name__ == '__main__':
    s = [random.randint(0, 100) for _ in range(0, 10000)]

    # without parallel processing
    t0 = time.time()
    out1 = [f(x) for x in s]
    t1 = time.time()
    print("without parallel processing: ", t1 - t0)

    # with parallel processing
    t0 = time.time()
    out2 = Parallel(n_jobs=8, batch_size=len(s), backend="threading")(delayed(f)(x) for x in s)
    t1 = time.time()
    print("with parallel processing: ", t1 - t0)
I am getting the following output:
without parallel processing: 0.0070569515228271484
with parallel processing: 0.10714387893676758

The parameter batch_size=len(s) effectively says: hand out the tasks in batches of len(s). Since there are only len(s) tasks in total, you create 8 threads but then give the entire workload to a single thread.
Also, you might want to increase the per-task workload to see a measurable advantage. I prefer to use time.sleep delays:
def f(x):
    time.sleep(0.001)
    return x**x

out2 = Parallel(n_jobs=8,
                # batch_size=len(s),
                backend="threading")(delayed(f)(x) for x in s)
without parallel processing: 11.562264442443848
with parallel processing: 1.412865400314331
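A side note on the original example: f is a cheap, CPU-bound, pure-Python function, so the threading backend cannot speed it up no matter how the batching is tuned, because the GIL allows only one thread to execute Python bytecode at a time. Below is a minimal sketch of the process-based alternative, assuming joblib's loky backend and automatic batching; whether it actually beats the plain list comprehension still depends on how expensive each call to f is.

import random
import time
from joblib import Parallel, delayed

def f(x):
    return x**x

if __name__ == '__main__':
    s = [random.randint(0, 100) for _ in range(10000)]

    t0 = time.time()
    # loky runs worker processes, so the GIL is not a bottleneck;
    # batch_size="auto" lets joblib group many tiny tasks per dispatch
    out = Parallel(n_jobs=8, backend="loky", batch_size="auto")(delayed(f)(x) for x in s)
    print("loky backend: ", time.time() - t0)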

Related

Thread Pool Executor using Concurrent: no improvement for various number of workers

I'm trying to run a task in parallel using concurrent.futures. Please find below a piece of code for it:
import os
import time
from concurrent.futures import ProcessPoolExecutor as PE
import concurrent.futures

# num CPUs
cpu_num = len(os.sched_getaffinity(0))
print("Number of cpu available : ", cpu_num)

# max_Worker = cpu_num
max_Worker = 1

# A fake input array
n = 1000000
array = list(range(n))
results = []

# A fake function being applied to each element of array
def task(i):
    return i**2

x = time.time()
with concurrent.futures.ThreadPoolExecutor(max_workers=max_Worker) as executor:
    features = {executor.submit(task, j) for j in array}

    # the real function is heavy and we need to be sure of completeness of each run
    for future in concurrent.futures.as_completed(features):
        results.append(future.result())

results = [future.result() for future in features]

y = time.time()
print('=========================================')
print(f"Train data preparation time (s): {(y-x)}")
print('=========================================')
And now my questions:
Although there is no error, is it correct/optimized?
While playing with the number of workers, there seems to be no improvement in speed (e.g., 1 vs 16, no difference). What is the problem, and how can it be solved?
Thanks in advance.
See my comment to your question. To the overhead I mentioned in that comment you also need to add the overhead of creating the process pool itself.
The following is a benchmark with several results. The first is a timing from just calling the worker function task 100000 times and creating a results list and printing out the last element of that list. It will become apparent why I have reduced the number of times I am calling task from 1000000 to 100000.
The next attempt is to use multiprocessing to accomplish the same thing using a ProcessPoolExecutor with the submit method and then processing the Future instances that are returned.
The next attempt is to instead use the map method with its default chunksize argument of 1. It is important to understand this argument. With a chunksize value of 1, each element of the iterable passed to the map method is written individually to the task queue as its own chunk, to be processed by the processes in the pool. When a pool process becomes idle looking for work, it pulls the next chunk of tasks from the queue, processes each task in the chunk, and then becomes idle again. When a lot of tasks are being submitted via map, a chunksize value of 1 is inefficient; you would expect its performance to be equivalent to repeatedly issuing submit calls for each element of the iterable.
The next attempt specifies a chunksize value that roughly approximates the value the map function of the Pool class in the multiprocessing package would have used by default. As you can see, the improvement is dramatic, but still not an improvement over the non-multiprocessing case.
The final attempt uses the multiprocessing facility provided by the multiprocessing package and its multiprocessing.pool.Pool class. What differs in this benchmark is that its map function uses a more intelligent default chunksize when no chunksize argument is specified.
import os
import time
from concurrent.futures import ProcessPoolExecutor as PE
from multiprocessing import Pool

# A fake function being applied to each element of array
def task(i):
    return i**2

# required for Windows:
if __name__ == '__main__':
    n = 100000

    t1 = time.time()
    results = [task(i) for i in range(n)]
    print('Non-multiprocessing time:', time.time() - t1, results[-1])

    # num CPUs
    cpu_num = os.cpu_count()
    print("Number of CPUs available: ", cpu_num)

    t1 = time.time()
    with PE(max_workers=cpu_num) as executor:
        futures = [executor.submit(task, i) for i in range(n)]
        results = [future.result() for future in futures]
    print('Multiprocessing time using submit:', time.time() - t1, results[-1])

    t1 = time.time()
    with PE(max_workers=cpu_num) as executor:
        results = list(executor.map(task, range(n)))
    print('Multiprocessing time using map:', time.time() - t1, results[-1])

    t1 = time.time()
    chunksize = n // (4 * cpu_num)
    with PE(max_workers=cpu_num) as executor:
        results = list(executor.map(task, range(n), chunksize=chunksize))
    print(f'Multiprocessing time using map: {time.time() - t1}, chunksize: {chunksize}', results[-1])

    t1 = time.time()
    with Pool(cpu_num) as executor:
        results = executor.map(task, range(n))
    print('Multiprocessing time using Pool.map:', time.time() - t1, results[-1])
Prints:
Non-multiprocessing time: 0.027019739151000977 9999800001
Number of CPUs available: 8
Multiprocessing time using submit: 77.34723353385925 9999800001
Multiprocessing time using map: 79.52981925010681 9999800001
Multiprocessing time using map: 0.30500149726867676, chunksize: 3125 9999800001
Multiprocessing time using Pool.map: 0.2799997329711914 9999800001
Update
The following benchmarks use a version of task that is very CPU-intensive and show the benefit of multiprocessing. It would also seem that for this small iterable size (100), forcing a chunksize value of 1 for the Pool.map case (it would by default compute a chunksize of 4) is slightly more performant.
import os
import time
from concurrent.futures import ProcessPoolExecutor as PE
from multiprocessing import Pool

# A fake function being applied to each element of array
def task(i):
    for _ in range(1_000_000):
        result = i ** 2
    return result

def compute_chunksize(iterable_size, pool_size):
    chunksize, remainder = divmod(iterable_size, pool_size * 4)
    if remainder:
        chunksize += 1
    return chunksize

# required for Windows:
if __name__ == '__main__':
    n = 100
    cpu_num = os.cpu_count()
    chunksize = compute_chunksize(n, cpu_num)

    t1 = time.time()
    results = [task(i) for i in range(n)]
    t2 = time.time()
    print('Non-multiprocessing time:', t2 - t1, results[-1])

    # num CPUs
    print("Number of CPUs available: ", cpu_num)

    t1 = time.time()
    with PE(max_workers=cpu_num) as executor:
        futures = [executor.submit(task, i) for i in range(n)]
        results = [future.result() for future in futures]
    t2 = time.time()
    print('Multiprocessing time using submit:', t2 - t1, results[-1])

    t1 = time.time()
    with PE(max_workers=cpu_num) as executor:
        results = list(executor.map(task, range(n)))
    t2 = time.time()
    print('Multiprocessing time using map:', t2 - t1, results[-1])

    t1 = time.time()
    with PE(max_workers=cpu_num) as executor:
        results = list(executor.map(task, range(n), chunksize=chunksize))
    t2 = time.time()
    print(f'Multiprocessing time using map: {t2 - t1}, chunksize: {chunksize}', results[-1])

    t1 = time.time()
    with Pool(cpu_num) as executor:
        results = executor.map(task, range(n))
    t2 = time.time()
    print('Multiprocessing time using Pool.map:', t2 - t1, results[-1])

    t1 = time.time()
    with Pool(cpu_num) as executor:
        results = executor.map(task, range(n), chunksize=1)
    t2 = time.time()
    print('Multiprocessing time using Pool.map (chunksize=1):', t2 - t1, results[-1])
Prints:
Non-multiprocessing time: 23.12758779525757 9801
Number of CPUs available: 8
Multiprocessing time using submit: 5.336004018783569 9801
Multiprocessing time using map: 5.364996671676636 9801
Multiprocessing time using map: 5.444890975952148, chunksize: 4 9801
Multiprocessing time using Pool.map: 5.400001287460327 9801
Multiprocessing time using Pool.map (chunksize=1): 4.698001146316528 9801

Multiprocessing in Python: Parallelize a for loop to fill a Numpy array

I've been reading threads like this one, but none of them seems to work for my case. I'm trying to parallelize the following toy example to fill a Numpy array inside a for loop using multiprocessing in Python:
import numpy as np
from multiprocessing import Pool
import time

def func1(x, y=1):
    return x**2 + y

def func2(n, parallel=False):
    my_array = np.zeros((n))
    # Parallelized version:
    if parallel:
        pool = Pool(processes=6)
        for idx, val in enumerate(range(1, n+1)):
            result = pool.apply_async(func1, [val])
            my_array[idx] = result.get()
        pool.close()
    # Not parallelized version:
    else:
        for i in range(1, n+1):
            my_array[i-1] = func1(i)
    return my_array

def main():
    start = time.time()
    my_array = func2(60000)
    end = time.time()
    print(my_array)
    print("Normal time: {}\n".format(end-start))

    start_parallel = time.time()
    my_array_parallelized = func2(60000, parallel=True)
    end_parallel = time.time()
    print(my_array_parallelized)
    print("Time based on multiprocessing: {}".format(end_parallel-start_parallel))

if __name__ == '__main__':
    main()
The multiprocessing-based lines seem to work and give the right results. However, they take far longer than the non-parallelized version. Here is the output of both versions:
[2.00000e+00 5.00000e+00 1.00000e+01 ... 3.59976e+09 3.59988e+09
3.60000e+09]
Normal time: 0.01605963706970215
[2.00000e+00 5.00000e+00 1.00000e+01 ... 3.59976e+09 3.59988e+09
3.60000e+09]
Time based on multiprocessing: 2.8775112628936768
My intuition tells me that there should be a better way of capturing the results from pool.apply_async(). What am I doing wrong? What is the most efficient way to accomplish this? Thanks.
Creating processes is expensive. On my machine it takes at least several hundred microseconds per process created. Moreover, the multiprocessing module copies the data to be computed between processes and then gathers the results from the process pool. This inter-process communication is very slow too. The problem is that your computation is trivial and can be done very quickly, likely much faster than all of the introduced overhead. The multiprocessing module is only useful when you are dealing with quite small datasets and performing intensive computations (compared to the amount of computed data).
Fortunately, when it comes to numerical computations with Numpy, there is a simple and fast way to parallelize your application: the Numba JIT. Numba can parallelize code if you explicitly use parallel constructs (parallel=True and prange). It uses threads rather than heavyweight processes, and those threads work in shared memory. Numba can bypass the GIL as long as your code works on native types and Numpy arrays rather than pure-Python dynamic objects (lists, big integers, classes, etc.). Here is an example:
import numpy as np
import numba as nb
import time

@nb.njit
def func1(x, y=1):
    return x**2 + y

@nb.njit('float64[:](int64)', parallel=True)
def func2(n):
    my_array = np.zeros(n)
    for i in nb.prange(1, n+1):
        my_array[i-1] = func1(i)
    return my_array

def main():
    start = time.time()
    my_array = func2(60000)
    end = time.time()
    print(my_array)
    print("Numba time: {}\n".format(end-start))

if __name__ == '__main__':
    main()
Because Numba compiles the code at runtime, it is able to heavily optimize the loop, resulting in a time close to 0 seconds in this case.
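One timing caveat worth adding: the explicit signature passed to nb.njit above makes Numba compile func2 eagerly at decoration time, so the timed call does not include compilation. Without a signature, compilation happens lazily on the first call, and a naive timing of that call mostly measures the JIT compiler. Here is a minimal sketch of the usual workaround, a throwaway warm-up call before timing (fill is a hypothetical stand-in, not code from the answer):

import time
import numpy as np
import numba as nb

@nb.njit(parallel=True)  # no explicit signature: compiled lazily on the first call
def fill(n):
    out = np.empty(n)
    for i in nb.prange(n):
        out[i] = (i + 1) ** 2 + 1
    return out

if __name__ == '__main__':
    fill(10)                   # warm-up call triggers JIT compilation
    start = time.time()
    my_array = fill(60000)     # steady-state timing, compilation excluded
    print("Numba time (warm):", time.time() - start)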
Here is the solution proposed by @thisisalsomypassword that improves on my initial proposal. That is, "collecting the AsyncResult objects in a list within the loop and then calling AsyncResult.get() after all processes have started on each result object":
import numpy as np
from multiprocessing import Pool
import time

def func1(x, y=1):
    time.sleep(0.1)
    return x**2 + y

def func2(n, parallel=False):
    my_array = np.zeros((n))
    # Parallelized version:
    if parallel:
        pool = Pool(processes=6)
        ####### HERE COMES THE CHANGE #######
        results = [pool.apply_async(func1, [val]) for val in range(1, n+1)]
        for idx, val in enumerate(results):
            my_array[idx] = val.get()
        #######
        pool.close()
    # Not parallelized version:
    else:
        for i in range(1, n+1):
            my_array[i-1] = func1(i)
    return my_array

def main():
    start = time.time()
    my_array = func2(600)
    end = time.time()
    print(my_array)
    print("Normal time: {}\n".format(end-start))

    start_parallel = time.time()
    my_array_parallelized = func2(600, parallel=True)
    end_parallel = time.time()
    print(my_array_parallelized)
    print("Time based on multiprocessing: {}".format(end_parallel-start_parallel))

if __name__ == '__main__':
    main()
Now it works. Time is reduced considerably with Multiprocessing:
Normal time: 60.107836008071
Time based on multiprocessing: 10.049324989318848
time.sleep(0.1) was added in func1 to cancel out the effect of being a super trivial task.
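A further simplification worth considering (a sketch of an alternative, not part of the answer above): since the results are wanted in input order anyway, Pool.map can replace the apply_async/get bookkeeping entirely, under the same assumptions (6 workers, the same sleeping func1):

import numpy as np
from multiprocessing import Pool
import time

def func1(x, y=1):
    time.sleep(0.1)
    return x**2 + y

if __name__ == '__main__':
    n = 600
    start = time.time()
    with Pool(processes=6) as pool:
        # map preserves input order and batches the work internally,
        # so there is no per-task get() round trip in the caller
        my_array = np.array(pool.map(func1, range(1, n + 1)))
    print("Time based on Pool.map: {}".format(time.time() - start))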

Python Multiprocessing Query

I am learning about python's multiprocessing module. I want to make my code use all my CPU resources. This is the code I wrote:
from multiprocessing import Process
import time

def work():
    for i in range(1000):
        x = 5
        y = 10
        z = x + y

if __name__ == '__main__':
    start1 = time.time()
    for i in range(100):
        p = Process(target=work)
        p.start()
        p.join()
    end1 = time.time()

    start = time.time()
    for i in range(100):
        work()
    end = time.time()

    print(f'With Parallel {end1-start1}')
    print(f'Without Parallel {end-start}')
The output I get is this:
With Parallel 0.8802454471588135
Without Parallel 0.00039649009704589844
I tried experimenting with different range values in the for loops, and with using only a print statement in the work function, but every time the version without parallelism runs faster. Is there something I am missing?
Thanks in advance!
Your benchmark method is problematic:
for i in range(100):
    p = Process(target=work)
    p.start()
    p.join()
I guess you want to run 100 processes in parallel, but Process.join() blocks until the process exits, so you effectively run them serially. Besides, running more busy processes than there are CPU cores leads to high CPU contention, which is a performance penalty. And, as a comment pointed out, your work() function is too simple compared to the overhead of process creation.
A better version:
import multiprocessing
import time

def work():
    for i in range(2000000):
        pow(i, 10)

n_processes = multiprocessing.cpu_count()  # 8
total_runs = n_processes * 4

ps = []
n = total_runs
start1 = time.time()
while n:
    # ensure processes number limit
    ps = [p for p in ps if p.is_alive()]
    if len(ps) < n_processes:
        p = multiprocessing.Process(target=work)
        p.start()
        ps.append(p)
        n = n - 1
    else:
        time.sleep(0.01)

# wait for all processes to finish
while any(p.is_alive() for p in ps):
    time.sleep(0.01)
end1 = time.time()

start = time.time()
for i in range(total_runs):
    work()
end = time.time()

print(f'With Parallel {end1-start1:.4f}s')
print(f'Without Parallel {end-start:.4f}s')
print(f'Acceleration factor {(end-start)/(end1-start1):.2f}')
result:
With Parallel 4.2835s
Without Parallel 33.0244s
Acceleration factor 7.71
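The hand-rolled throttling loop above can usually be replaced by a process pool, which keeps exactly n_processes workers busy and waits for them on exit. A minimal sketch of that alternative, using the same work function and run count (the _run_work wrapper is a hypothetical helper so the zero-argument work() can be mapped over a range):

import multiprocessing
import time

def work():
    for i in range(2000000):
        pow(i, 10)

def _run_work(_):
    # thin wrapper: ignore the mapped value and just run work()
    work()

if __name__ == '__main__':
    n_processes = multiprocessing.cpu_count()
    total_runs = n_processes * 4

    start = time.time()
    with multiprocessing.Pool(processes=n_processes) as pool:
        # the pool keeps n_processes workers busy until all runs complete
        pool.map(_run_work, range(total_runs))
    print(f'With Pool {time.time() - start:.4f}s')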

python and multiprocessing

I have three functions in Python; each one takes an image (image path) as input, does some simple image processing, and creates a new image (image path) as output.
In the example below the functions depend on one another: alg2 takes as input the image generated by alg1, and alg3 takes as input the image generated by alg2 (which in turn depends on alg1). I hope that makes sense.
Because of their relatively high execution time (that's image processing for you), I would like to ask whether I can parallelize them using Python multiprocessing.
I have read about multiprocessing map and Pool, but I was pretty confused.
To summarize: I have three interdependent functions and I would like to run them in parallel if that can be done.
I would also like to know how I would run these three functions concurrently if they were not interdependent, i.e., if each were autonomous.
def alg1(input_path_image, output_path_image):
    start = timeit.default_timer()
    ### processing ###
    stop = timeit.default_timer()
    print stop - start
    return output_path_image

def alg2(output_path_image, output_path_image1):
    start = timeit.default_timer()
    ### processing ###
    stop = timeit.default_timer()
    print stop - start
    return output_path_image1

def alg3(output_path_image1, output_path_image2):
    start = timeit.default_timer()
    ### processing ###
    stop = timeit.default_timer()
    print stop - start
    return output_path_image2

if __name__ == '__main__':
    alg1(input_path_image, output_path_image)
    alg2(output_path_image, output_path_image1)
    alg3(output_path_image1, output_path_image2)
Here is what I would do:
I would split the list of images into smaller parts. Then I would make one function out of those three functions (making the other two functions private, just for the sake of simplicity). Then you can speed up the whole process by doing:
from multiprocessing import Process

image_list = this_is_your_huge_image_list

# create smaller image lists e.g. [[1, 2, 3], [4, 5, 6], ..]
chunked_lists = [image_list[x:x+100] for x in xrange(0, len(image_list), 100)]

for img_list in chunked_lists:
    p = Process(target=your_main_func, args=(img_list,))
    p.start()
    # without .join() here
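A variation on the same idea, sketched below with a pool so the number of concurrent processes stays bounded and the caller waits for all chunks to finish (your_main_func and image_list are the same hypothetical names as above; written for Python 3):

from multiprocessing import Pool, cpu_count

def your_main_func(img_list):
    # placeholder: run alg1 -> alg2 -> alg3 on every image path in img_list
    pass

if __name__ == '__main__':
    image_list = []  # this_is_your_huge_image_list
    chunk = 100
    chunked_lists = [image_list[x:x + chunk] for x in range(0, len(image_list), chunk)]

    # the pool caps the number of worker processes and joins them on exit
    with Pool(processes=cpu_count()) as pool:
        pool.map(your_main_func, chunked_lists)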
It sounds like you're doing something CPU-intensive, so you'll need to use the multiprocessing.Process object rather than threading.Thread. Because of this, you can't get a return value out of a multiprocessing.Process directly, and therefore will need to use a multiprocessing.Manager.
So this is an adaptation of your code which will work with multiprocessing.Process:
from multiprocessing import Process, Manager

def alg1(input_path_image, output_path_image, return_dict):
    start = timeit.default_timer()
    ### processing ###
    stop = timeit.default_timer()
    print stop - start
    return_dict['algo1'] = output_path_image

def alg2(output_path_image, output_path_image1, return_dict):
    start = timeit.default_timer()
    ### processing ###
    stop = timeit.default_timer()
    print stop - start
    return_dict['algo2'] = output_path_image1

def alg3(output_path_image1, output_path_image2, return_dict):
    start = timeit.default_timer()
    ### processing ###
    stop = timeit.default_timer()
    print stop - start
    return_dict['algo3'] = output_path_image2

if __name__ == '__main__':
    manager = Manager()
    return_dict = manager.dict()

    a1 = Process(target=alg1, args=(output_path_image, output_path_image, return_dict))
    a2 = Process(target=alg2, args=(output_path_image1, output_path_image1, return_dict))
    a3 = Process(target=alg3, args=(output_path_image2, output_path_image2, return_dict))

    jobs = [a1, a2, a3]
    for job in jobs:
        job.start()
    for job in jobs:
        job.join()

    a1_return = return_dict['algo1']
    a2_return = return_dict['algo2']
    a3_return = return_dict['algo3']
You'll need to modify this further to give your print statements a little more distinction. At the moment, they will only print a number, and you won't be able to distinguish between them.
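If return values are all you need, a concurrent.futures.ProcessPoolExecutor is another way to get results back from worker processes without a Manager dict. A minimal sketch, assuming the three functions really are independent of one another (the path arguments here are placeholder strings and the bodies stand in for the real processing); written for Python 3:

from concurrent.futures import ProcessPoolExecutor

def alg1(input_path, output_path):
    # placeholder for the real image processing
    return output_path

def alg2(input_path, output_path):
    return output_path

def alg3(input_path, output_path):
    return output_path

if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=3) as executor:
        f1 = executor.submit(alg1, 'in1.png', 'out1.png')
        f2 = executor.submit(alg2, 'in2.png', 'out2.png')
        f3 = executor.submit(alg3, 'in3.png', 'out3.png')
        # result() blocks until the corresponding worker process finishes
        results = [f.result() for f in (f1, f2, f3)]
    print(results)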

Pooled processes are slower when compared to non-pool

I am a newbie to Python and I am trying to use multiprocessing for one of my applications.
I actually have a very simple multiplication program, and I was trying to asynchronously spawn parallel processes to calculate the squares of a range of numbers. When I do this without pooling, it is at least twice, and sometimes even four times, faster than with pooling. I am not sure what the reason for this behavior could be.
I am using python 2.7.1
Non-Pool.py
#!/usr/bin/python
import time

def f(x):
    return x*x

st = time.time()
t = 10000000
f(t)
map(f, range(t))
et = time.time()
tt = (str((et-st)%60)+'--'+str((et-st/60)))
print tt
Pool.py
#!/usr/bin/python
from multiprocessing import Pool
import time

def f(x):
    return x*x

st = time.time()
t = 10000000

if __name__ == '__main__':
    pool = Pool(processes=4)           # start 4 worker processes
    result = pool.apply_async(f, [t])  # evaluate "f(t)" asynchronously
    result.get(timeout=1)              # returns t*t unless your computer is *very* slow
    pool.map(f, range(t))              # applies f to every element of range(t)
    et = time.time()
    tt = (str((et-st)%60)+'--'+str((et-st/60)))
    print tt
    exit(0)
Execution Times: (Format >> minutes--seconds)
Macha-MacBook-Pro:Downloads me$ ./nonpool.py
2.03456997871--1352551406.28
Macha-MacBook-Pro:Downloads me$ ./pool.py
4.69528508186--1352551417.28
You might check related answers, e.g., python prime crunching: processing pool is slower? -- the overhead of setting up a processing pool is high, but so is sending and receiving single integers in arguments and results.
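To illustrate the point about per-item communication cost, here is a minimal sketch (not from the linked answer, and written for Python 3) that amortizes it by passing a chunksize to Pool.map, so pickling and queueing are paid per batch rather than per integer:

from multiprocessing import Pool
import time

def f(x):
    return x * x

if __name__ == '__main__':
    t = 10000000
    st = time.time()
    with Pool(processes=4) as pool:
        # each worker receives 100000 integers at a time instead of one
        results = pool.map(f, range(t), chunksize=100000)
    print('pool.map with chunksize:', time.time() - st)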
