I have a global NumPy array ys_final and a function that generates an array ys. The ys array is generated from an input parameter, and I want to accumulate these ys arrays into the global array, i.e. ys_final = ys_final + ys.
The order of addition doesn't matter, so I want to use Pool.apply_async() from the multiprocessing library, but I can't write to the global array. The code for reference is:
import multiprocessing as mp
import numpy as np

ys_final = np.zeros(len)

def ys_genrator(i):
    # code to generate ys array
    return ys

pool = mp.Pool(mp.cpu_count())
for i in range(3954):
    ys_final = ys_final + pool.apply_async(ys_genrator, args=(i)).get()
pool.close()
pool.join()
The above block of code keeps running forever and nothing happens. I've also tried mp.Process and I face the same problem: there I defined a target function that adds directly to the global array, but the block also runs forever. Reference:
def func(i):
    # code to generate ys
    global ys_final
    ys_final = ys_final + ys

for i in range(3954):
    p = mp.Process(target=func, args=(i,))
    p.start()
    p.join()
Any suggestions will be really helpful.
EDIT:
My ys_genrator is a function for linear interpolation. Based on the parameter i, which is an index for rows in a 2D image, the function creates an array of interpolated amplitudes that will be superimposed with all the interpolated amplitudes from the image, so ys needs to be added to ys_final.
The variable len is the length of the interpolated array, which is the same for all rows.
For reference, a simpler version of ys_genrator(i) is as follows:
def ys_genrator(i):
    ys = np.ones(10)*i
    return ys
A few points:
pool.apply_async(ys_genrator, args=(i)) needs to be pool.apply_async(ys_genrator, args=(i,)). Note the comma after the i.
pool.apply_async(ys_genrator, args=(i,)).get() is exactly equivalent to pool.apply(ys_genrator, args=(i,)). That is, you will block because of your immediate call to get and you will have absolutely no parallelism. You would need to do all your calls to pool.apply_async, save the returned AsyncResult instances, and only then call get on these instances (see the sketch after these points).
If you are running under Windows, you will have a problem. The code that creates new processes must be within a block governed by if __name__ == '__main__':
If you are running under something like Jupyter Notebook or IPython, you will have a problem. The worker function, ys_genrator, would need to be in an external file and imported.
Using apply_async for submitting a lot of tasks is inefficient. You are better off using imap or imap_unordered, where the tasks get submitted in "chunks" and you can process the results one by one as they become available. But you must choose a "suitable" chunksize argument.
Any code you have at the global level, such as ys_final = np.zeros(len), will be executed by every sub-process if you are running under Windows, and this can be wasteful if the subprocesses do not need to "see" this variable. If they do need to see it, be aware that each process in the pool will be working with its own copy of the variable, so it had better be read-only usage. Even then, it can be very wasteful of storage if the variable is large. There are ways of sharing such a variable across the processes, but it is not perfectly clear whether you need to (you haven't even defined the variable len). So it is difficult to give you improved code. However, it appears that your worker function does not need to "see" ys_final, so I will take a shot at an improved solution.
But be aware that if your function ys_genrator is very trivial, nothing will be gained by using multiprocessing, because there is overhead both in creating the processing pool and in passing arguments from one process to another. Also, if ys_genrator uses numpy, that can be another source of problems, since numpy itself uses multithreading for some of its functions and you are better off not mixing that with your own multiprocessing.
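To illustrate point 2, here is a minimal sketch of the submit-first, get-later pattern (using a dummy ys_genrator; the imap_unordered solution below is still what I would use for this many tasks):
import multiprocessing as mp
import numpy as np

SIZE = 3

def ys_genrator(i):
    # dummy stand-in for the real interpolation code
    return np.ones(SIZE) * i

if __name__ == '__main__':
    ys_final = np.zeros(SIZE)
    pool = mp.Pool(mp.cpu_count())
    # submit everything first ...
    async_results = [pool.apply_async(ys_genrator, args=(i,)) for i in range(3954)]
    # ... and only then block on the results
    for async_result in async_results:
        ys_final += async_result.get()
    pool.close()
    pool.join()
    print(ys_final)
And here is the imap_unordered version: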
import multiprocessing as mp
import numpy as np

SIZE = 3

def ys_genrator(i):
    # code to generate ys array
    # for this dummy example all SIZE entries will end up with the same result:
    ys = [i] * SIZE  # for example: [1, 1, 1]
    return ys

def compute_chunksize(poolsize, iterable_size):
    chunksize, remainder = divmod(iterable_size, 4 * poolsize)
    if remainder:
        chunksize += 1
    return chunksize

if __name__ == '__main__':
    ys_final = np.zeros(SIZE)
    n_iterations = 3954
    poolsize = min(mp.cpu_count(), n_iterations)
    chunksize = compute_chunksize(poolsize, n_iterations)
    print('poolsize =', poolsize, 'chunksize =', chunksize)
    pool = mp.Pool(poolsize)
    for result in pool.imap_unordered(ys_genrator, range(n_iterations), chunksize):
        ys_final += result
    print(ys_final)
Prints:
poolsize = 8 chunksize = 124
[7815081. 7815081. 7815081.]
Update
You can also just use:
for result in pool.map(ys_genrator, range(n_iterations)):
    ys_final += result
The issue is that when you use method map, the method wants to compute an efficient chunksize argument based on the size of the iterable argument (see my compute_chunksize function above, which is essentially what pool.map will use). But to do this, it will have to first convert the iterable to a list to get its size. If n_iterations is very large, this is not very efficient, although it's probably not a major issue for a size of 3954. Still, you would be better off using my compute_chunksize function in this case, since you know the size of the iterable, and then passing the chunksize argument explicitly to map, as I have done in the code above using imap_unordered.
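For example, reusing the names from the solution above:
chunksize = compute_chunksize(poolsize, n_iterations)
for result in pool.map(ys_genrator, range(n_iterations), chunksize=chunksize):
    ys_final += result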
Related
In Python 2, I would like to fill a global array in parallel, with different processes (or threads) each filling a different sub-array (there are 16 blocks in total). I must point out that each block is independent of the others, i.e. when I perform the assignment of the cells of the current block.
1) From what I have found, I would benefit greatly from a multi-core CPU by using different "processes", but it seems a little complicated to share the global array with all the other processes.
2) From another point of view, I can use "threads" instead of "processes" since the implementation is less difficult. I have found that the "ThreadPool" class from "multiprocessing.dummy" allows this global array to be shared by all the concurrent threads.
For example, with Python 2.7, the following code works:
from multiprocessing.dummy import Pool as ThreadPool
import numpy as np

# Number of blocks along each axis: N total blocks = dimBlocks*dimBlocks = 16 here
dimBlocks = 4
# Size of dimension along k and mu axis
dimPoints = 100
# dimension along one dimension of global arrayFullCross
dimMatCovCross = dimBlocks*dimPoints

## discretization along x-axis and y-axis for each block
arrayCross_k = np.linspace(kMIN, kMAX, dimPoints)
arrayCross_mu = np.linspace(-1, 1, dimPoints)

# Build the big global matrix of N total blocks
arrayFullCross = np.zeros((dimBlocks, dimBlocks, arrayCross_k.size, arrayCross_mu.size))

# Build cross-correlation matrix
def buildCrossMatrix_loop(params_array):
    # rows indices
    xb = params_array[0]
    # columns indices
    yb = params_array[1]
    # Current redshift
    z = zrange[params_array[2]]
    # Loop inside block
    for ub in range(dimPoints):
        for vb in range(dimPoints):
            # Diagonal blocks
            if (xb == yb):
                # Fill the (xb,yb) sub-block of the global array
                arrayFullCross[xb][xb][ub][vb] = 2*P_obs_cross(arrayCross_k[ub], arrayCross_mu[vb], z, 10**P_m(np.log10(arrayCross_k[ub])),
                ...
                ...
# End of function buildCrossMatrix_loop

# Main loop over redshift indices
i = 0
while i < len(zrange):
    def generatorCrossMatrix(index):
        for igen in range(dimBlocks):
            for lgen in range(dimBlocks):
                yield igen, lgen, index

    if __name__ == '__main__':
        # Use 20 threads
        pool = ThreadPool(20)
        pool.map(buildCrossMatrix_loop, generatorCrossMatrix(i))
    # Increment index "i"
    i = i+1
But unfortunately, even when using 20 threads, I see that the cores of my CPU are not fully used (with the 'top' or 'htop' command, I only see a single process at 100%).
3) What strategy should I choose if I want to fully exploit the 16 cores of my CPU (as would be the case with pool.map(function, generator)) while also sharing the global array?
4) Some people told me to do I/O for each sub-array (basically, write each block to a file, then read all the sub-arrays back to assemble the full array). This solution works, but I would like to avoid I/O (unless there really is no other solution).
5) I have used the MPI library with C, and the operation of filling sub-arrays and then gathering them to build a big array is not very complicated. However, I would rather not use MPI with Python (I don't even know whether it exists).
6) I also tried to use Process with the target equal to my filling function (buildCrossMatrix_loop), like this inside the main while loop above:
from multiprocessing import Process

# Main loop on z range
while i < len(zrange):
    params_p = []
    for ip in range(4):
        for jp in range(4):
            params_p.append(ip)
            params_p.append(jp)
            params_p.append(i)
            p = Process(target=buildCrossMatrix_loop, args=(params_p,))
            params_p = []
            p.start()
    # Finished : wait everybody
    p.join()
    ...
    ...
    i = i+1
# End of main while loop
But the final 2D global array is filled only with zeros. So I must conclude that Process does not share the array to be filled?
7) So which strategy should I look at?:
1. Using a "pool of processes" and finding a way to share the global array, knowing all my 16 cores will be running.
2. Using "threads" and sharing the global array, although performance, at first sight, seems worse than with a "pool of processes". Maybe there is a way to increase the power of each "thread", I mean like with a "pool of processes"?
I tried to follow the different examples on https://docs.python.org/2/library/multiprocessing.html but without success, that is to say, without any relevant speed-up.
I think that in my case, the major issue is the gathering of all the sub-arrays, or the fact that the global array arrayFullCross is not shared by the other processes or threads.
If someone had a simple example of sharing a global variable (here an array) in a multi-threading context, it would be nice to post it here.
UPDATE 1: I tested with threading (and not multiprocessing) but performance remains rather bad. The GIL is apparently not released, i.e. only one process appears in the htop command (maybe the version of the threading library is not the right one).
So I am going to try to handle my issue using the "return" approach.
Naively, I tried to return the whole array at the end of the function to which I apply the map function, like this:
# Build cross-correlation matrix
def buildCrossMatrix_loop(params_array):
    # rows indices
    xb = params_array[0]
    # columns indices
    yb = params_array[1]
    # Current redshift
    z = zrange[params_array[2]]
    # Loop inside block
    for ub in range(dimPoints):
        for vb in range(dimPoints):
            # Diagonal blocks
            if (xb == yb):
                arrayFullCross[xb][xb][ub][vb] = 2*P_obs_cross(arrayCross_k[ub], arrayCross_mu[vb])
            ...
            ...  # other assignments on arrayFullCross elements
    # Return global array to main process
    return arrayFullCross
Then, I tried to receive this global array from map like this:
if __name__ == '__main__':
    pool = Pool(16)
    outputArray = pool.map(buildCrossMatrix_loop, generatorCrossMatrix(i))
    pool.terminate()
    ## Print outputArray
    print 'outputArray = ', outputArray
    ## Reshape 4D outputArray to 2D array
    arrayFullCross2D_swap = np.array(outputArray).swapaxes(1,2).reshape(dimMatCovCross,dimMatCovCross)
Unfortunately, when I print outputArray, I get:
outputArray = [None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None]
This is not the expected 4D outputArray, just a list of 16 None (I think the number 16 corresponds to the number of tasks provided by generatorCrossMatrix(i)).
How could I get back the whole 4D array once map has been launched and has finished?
First of all, I believe multiprocessing.ThreadPool is a semi-private API, so you should avoid it. And multiprocessing.dummy will not help you here: it is just a thread-based wrapper, so because of the GIL it gives you no real parallelism for CPU-bound Python code, which is why you don't see any benefit. You should use the "plain" multiprocessing module.
The second code does not work because it uses multiple processes. Processes do not share memory, so the changes you make in a subprocess are not reflected in the other subprocesses or in the main process. You either want to:
Return the values and combine them in the main process, for example using multiprocessing.Pool.map (see the sketch below)
Use threading instead of multiprocessing: just replace import multiprocessing with import threading and multiprocessing.Process with threading.Thread, and the code should work.
Note that the threading version will work only because numpy releases the GIL during computations, otherwise it would be stuck at 1 CPU.
You may want to look at this similar question which I answered a couple of minutes ago.
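As a rough illustration of the first option (return-and-combine), here is a minimal sketch; buildCrossBlock is a hypothetical rework of your buildCrossMatrix_loop that builds and returns only its own (xb, yb) sub-block instead of writing into the global arrayFullCross:
from multiprocessing import Pool
import numpy as np

def buildCrossBlock(params):
    xb, yb, iz = params
    block = np.zeros((dimPoints, dimPoints))
    # ... fill block exactly as buildCrossMatrix_loop fills arrayFullCross[xb][yb] ...
    return xb, yb, block

if __name__ == '__main__':
    pool = Pool(16)
    for xb, yb, block in pool.map(buildCrossBlock, generatorCrossMatrix(i)):
        arrayFullCross[xb][yb] = block   # assemble in the main process
    pool.close()
    pool.join()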
I have a function which I am applying to different chunks of my data. Since each chunk is independent of the rest, I wish to execute the function for all chunks in parallel.
I have a result dictionary which should hold the output of calculations for each chunk.
Here is how I did it:
from joblib import Parallel, delayed
import multiprocessing

cpu_count = multiprocessing.cpu_count()
# I have 8 cores, so I divide the data into 8 chunks.
endIndeces = divideIndecesUniformly(myData.shape[0], cpu_count)  # e.g., [0, 125, 250, ..., 875, 1000]
# initialize result dictionary with empty lists.
result = dict()
for i in range(cpu_count):
    result[i] = []
# Parallel execution for 8 chunks
Parallel(n_jobs=cpu_count)(delayed(myFunction)(myData, result, i, start_idx=endIndeces[i], end_idx=endIndeces[i+1]-1) for i in range(cpu_count))
However, when the execution is finished, result still contains only the initial empty lists. I figured that if I execute the function serially over each chunk of data, it works just fine. For example, if I replace the last line with the following, result will have all the calculated values.
# Instead of parallel execution, call the function in a for-loop.
for i in range(cpu_count):
    myFunction(myData, result, i, start_idx=endIndeces[i], end_idx=endIndeces[i+1]-1)
In this case, result values are updated.
It seems that when the function is executed in parallel, it cannot write to the given dictionary (result). So, I was wondering how I can obtain the output of the function for each chunk of data?
joblib, by default, uses the multiprocessing module in Python. According to this SO answer, when arguments are passed to new processes, a fork is created, which copies the memory space of the current process. This means that myFunction is essentially working on a copy of result and does not modify the original.
My suggestion is to have myFunction return the desired data as a list. The call to Parallel will then return a list of the lists generated by myFunction. From there, it is simple to add them to result. It could look something like this:
from joblib import Parallel, delayed
import multiprocessing

if __name__ == '__main__':
    cpu_count = multiprocessing.cpu_count()
    endIndeces = divideIndecesUniformly(myData.shape[0], cpu_count)
    # make sure myFunction returns the grouped results in a list
    r = Parallel(n_jobs=cpu_count)(delayed(myFunction)(myData, start_idx=endIndeces[i], end_idx=endIndeces[i+1]-1) for i in range(cpu_count))
    result = dict()
    for i, data in enumerate(r):  # cycles through each resultant chunk, numbered and in the original order
        result[i] = data
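For instance, myFunction might be reworked along these lines (a hypothetical sketch, since the original function is not shown; some_calculation stands in for whatever you compute per row):
def myFunction(myData, start_idx, end_idx):
    # compute and return this chunk's results instead of writing to a shared dict
    chunk_result = []
    for row in myData[start_idx:end_idx + 1]:
        chunk_result.append(some_calculation(row))
    return chunk_result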
I understand from simple examples that Pool.map is supposed to behave identically to the 'normal' Python code below, except in parallel:
def f(x):
    # complicated processing
    return x+1

y_serial = []
x = range(100)
for i in x: y_serial += [f(i)]
y_parallel = pool.map(f, x)
# y_serial == y_parallel!
However I have two bits of code that I believe should follow this example:
# Linear version
price_datas = []
for csv_file in loop_through_zips(data_directory):
    price_datas += [process_bf_data_csv(csv_file)]

# Parallel version
p = Pool()
price_data_parallel = p.map(process_bf_data_csv, loop_through_zips(data_directory))
However, the parallel code doesn't work, whereas the linear code does. From what I can observe, the parallel version appears to loop through the generator (it prints log lines from the generator function) but then never actually runs the process_bf_data_csv function. What am I doing wrong here?
Pool.map tries to pull all values from your generator to build a complete iterable before it actually starts any work.
Try waiting longer (until the generator runs out), or use multithreading and a queue instead.
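A rough sketch of the threading-plus-queue idea, assuming loop_through_zips and process_bf_data_csv behave as in your snippet (note that threads only help if process_bf_data_csv spends much of its time in I/O or in code that releases the GIL):
import threading
import queue

def worker(q, results):
    while True:
        csv_file = q.get()
        if csv_file is None:   # sentinel: no more work
            break
        results.append(process_bf_data_csv(csv_file))

work_queue = queue.Queue(maxsize=8)   # small buffer, so the generator is consumed lazily
price_datas = []
threads = [threading.Thread(target=worker, args=(work_queue, price_datas)) for _ in range(4)]
for t in threads:
    t.start()
for csv_file in loop_through_zips(data_directory):
    work_queue.put(csv_file)
for _ in threads:
    work_queue.put(None)               # one sentinel per worker
for t in threads:
    t.join()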
I have a function
def dist_to_center(ra_center, dec_center):
    # finding theta
    cos_ra = np.cos(ra_center-var1['ra'])
    cos_dec = np.cos(dec_center-var1['dec'])
    sin_dec = np.sin(dec_center)*np.sin(var1['dec'])
    theta = np.arccos((cos_ra*cos_dec)+sin_dec*(1-cos_ra))
    numerator = theta*comoving_dist
    denominator = 1+var1['zcosmo']
    # THE FINAL CALCULATED DISTANCE TO CENTRE
    dist_to_center = (numerator/denominator)
    return dist_to_center
I want to make use of my processors, so I am using multiprocess pool like this:
if __name__ == '__main__':
    pool = Pool(processes=6)
    pool.map(dist_to_center, ra_center, dec_center)  # calling the function with its inputs
    pool.close()
    pool.join()
The code seems to be correct and it runs, but only 1 process is running instead of the 6 I asked for. What am I doing wrong here?
You are passing a pair of one-dimensional arrays to the Pool. You need to slice the arrays yourself to make the Pool understand how to process them efficiently. For example:
def dist_to_center_mapper(arrays):
    return dist_to_center(arrays[0], arrays[1])

ra = np.split(ra_center, 6)
dec = np.split(dec_center, 6)

pool = Pool(processes=6)
pool.map(dist_to_center_mapper, zip(ra, dec))
I think the "mapper" function is required because Pool.map() takes only a single iterable of arguments. So we zip together the two lists of array slices so they get doled out together to the multiple processes. Note that you could split the arrays into more pieces than the number of processes if you want, if some pieces may take different amounts of time etc.
I have a code like this
def plotFrame(n):
    a = data[n, :]
    do_something_with(a)

data = loadtxt(filename)
ids = data[:,0]  # some numbers from the first column of data

map(plotFrame, ids)
That worked fine for me. Now I want to try replacing map() with pool.map() as follows:
pools = multiprocessing.Pool(processes=1)
pools.map(plotFrame, ids)
But that won't work, saying:
NameError: global name 'data' is not defined
The question is: what is going on? Why does map() not complain about the data variable not being passed to the function, while pool.map() does?
EDIT:
I'm using Linux.
EDIT 2:
Based on @Bill's second suggestion, I now have the following code:
def plotFrame_v2(line):
    plot_with(line)

if __name__ == "__main__":
    ff = np.loadtxt(filename)
    m = int( max(ff[:,-1]) )  # max id
    l = ff.shape[0]
    nfig = 0
    pool = Pool(processes=1)
    for i in range(0, l/m, 50):
        data = ff[i*m:(i+1)*m, :]  # data of one frame contains several ids
        pool.map(plotFrame_v2, data)
        nfig += 1
        plt.savefig("figs_bot/%.3d.png"%nfig)
        plt.clf()
That works just as expected. However, now I have another unexpected problem: The produced figures are blank, whereas the above code with map() produces figures with the content of data.
Using multiprocessing.Pool, you are spawning individual processes to work with the shared (global) resource data. Typically, you can allow the processes to work with a resource created in the parent process by making that resource explicitly global. However, it is better practice to explicitly pass all needed resources to the child processes as function arguments. This is required if you are working on Windows. Check out the multiprocessing guidelines here.
So you could try doing
data = loadtxt(filename)

def plotFrame(n):
    global data
    a = data[n, :]
    do_something_with(a)

ids = data[:,0]  # some numbers from the first column of data

pools = multiprocessing.Pool(processes=1)
pools.map(plotFrame, ids)
or even better see this thread about feeding multiple arguments to a function with multiprocessing.pool. A simple way could be
def plotFrameWrapper(args):
    return plotFrame(*args)

def plotFrame(n, data):
    a = data[n, :]
    do_something_with(a)

if __name__ == "__main__":
    from multiprocessing import Pool
    data = loadtxt(filename)
    pools = Pool(1)
    ids = data[:,0]
    results = pools.map(plotFrameWrapper, zip([data]*len(ids), ids))
    print results
One last thing: since it looks like the only thing you are doing from your example is slicing the array, you can simply slice first then pass the sliced arrays to your function:
def plotFrame(sliced_data):
    do_something_with(sliced_data)

if __name__ == "__main__":
    from multiprocessing import Pool
    data = loadtxt(filename)
    pools = Pool(1)
    ids = data[:,0]
    results = pools.map(plotFrame, data[ids])
    print results
To avoid "unexpected" problems, avoid globals.
To reproduce your first code example with builtin map that calls plotFrame:
def plotFrame(n):
    a = data[n, :]
    do_something_with(a)
using multiprocessing.Pool.map, the first thing is to deal with the global data. If do_something_with(a) also uses some global data then it should also be changed.
To see how to pass a numpy array to a child process, see Use numpy array in shared memory for multiprocessing. If you don't need to modify the array then it is even simpler:
import numpy as np
from multiprocessing import Pool

def init(data_):  # inherit data
    global data   # NOTE: no other globals in the program
    data = data_

def main():
    data = np.loadtxt(filename)
    ids = data[:,0]  # some numbers from the first column of data
    pool = Pool(initializer=init, initargs=[data])
    pool.map(plotFrame, ids)

if __name__=="__main__":
    main()
All arguments should either be passed explicitly to plotFrame or inherited via init().
Your second code example tries to manipulate global data again (via plt calls):
import matplotlib.pyplot as plt
#XXX BROKEN, DO NOT USE
pool.map(plotFrame_v2, data)
nfig += 1
plt.savefig("figs_bot/%.3d.png"%nfig)
plt.clf()
Unless you draw something in the main process, this code saves blank figures. Either plot in the child processes or send the data to be plotted to the parent process explicitly, e.g., by returning it from plotFrame and using the value returned by pool.map(). Here's a code example: how to plot in child processes.
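As a rough sketch of the return-and-plot-in-the-parent variant (here plt.plot(line) is only a guess at what your plot_with() does, and in a real case the worker would do the expensive computation and return just the values to draw):
import numpy as np
import matplotlib.pyplot as plt
from multiprocessing import Pool

def plotFrame_v2(line):
    # child process: return what should be drawn instead of drawing it
    return line

if __name__ == "__main__":
    ff = np.loadtxt(filename)
    m = int(max(ff[:, -1]))
    l = ff.shape[0]
    nfig = 0
    pool = Pool(processes=2)
    for i in range(0, l // m, 50):
        data = ff[i * m:(i + 1) * m, :]
        for line in pool.map(plotFrame_v2, data):
            plt.plot(line)   # plot in the parent, where the pyplot state lives
        nfig += 1
        plt.savefig("figs_bot/%.3d.png" % nfig)
        plt.clf()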