1. I have a function var. I want to know the best way to run the loop inside this function quickly using multiprocessing/parallel processing, utilizing all the processors, cores, threads, and RAM the system has.
import numpy as np
from pysheds.grid import Grid

xs = 82.1206, 72.4542, 65.0431, 83.8056, 35.6744
ys = 25.2111, 17.9458, 13.8844, 10.0833, 24.8306

a = r'/home/test/image1.tif'
b = r'/home/test/image2.tif'

def var(interest):
    variable_avg = []
    for (x, y) in zip(xs, ys):
        grid = Grid.from_raster(interest, data_name='map')
        grid.catchment(data='map', x=x, y=y, out_name='catch')
        variable = grid.view('catch', nodata=np.nan)
        variable = np.array(variable)
        variablemean = variable.mean()
        variable_avg.append(variablemean)
    return variable_avg
2. It would be great if I could run both the function var and the loop inside it in parallel for the given multiple parameters of the function,
e.g. var(a) and var(b) at the same time, since that would take much less time than parallelizing the loop alone.
Ignore 2 if it does not make sense.
TLDR:
You can use the multiprocessing library to run your var function in parallel. However, as written you likely don't make enough calls to var for multiprocessing to have a performance benefit because of its overhead. If all you need to do is run those two calls, running in serial is likely the fastest you'll get. However, if you need to make a lot of calls, multiprocessing can help you out.
We'll need to use a process pool to run this in parallel; threads won't work here because Python's global interpreter lock prevents true parallelism. The drawback of process pools is that processes are heavyweight to spin up. In the example of just running two calls to var, the time to create the pool overwhelms the time spent running var itself.
To illustrate this, let's use a process pool and asyncio to run calls to var in parallel and compare it to just running things sequentially. Note that to run this example I used an image from the Pysheds library https://github.com/mdbartos/pysheds/tree/master/data - if your image is much larger, the results below may not hold true.
import functools
import time
from concurrent.futures.process import ProcessPoolExecutor
import asyncio

a = 'diem.tif'
xs = 10, 20, 30, 40, 50
ys = 10, 20, 30, 40, 50

async def main():
    loop = asyncio.get_event_loop()
    pool_start = time.time()
    with ProcessPoolExecutor() as pool:
        task_one = loop.run_in_executor(pool, functools.partial(var, a))
        task_two = loop.run_in_executor(pool, functools.partial(var, a))
        results = await asyncio.gather(task_one, task_two)
    pool_end = time.time()
    print(f'Process pool took {pool_end-pool_start}')

    serial_start = time.time()
    result_one = var(a)
    result_two = var(a)
    serial_end = time.time()
    print(f'Running in serial took {serial_end - serial_start}')

if __name__ == "__main__":
    asyncio.run(main())
Running the above on my machine (a 2.4 GHz 8-Core Intel Core i9) I get the following output:
Process pool took 1.7581260204315186
Running in serial took 0.32335805892944336
In this example, a process pool is over five times slower! This is due to the overhead of creating and managing multiple processes. That said, if you need to call var more than just a few times, a process pool may make more sense. Let's adapt this to run var 100 times and compare the results:
async def main():
    loop = asyncio.get_event_loop()
    pool_start = time.time()
    tasks = []
    with ProcessPoolExecutor() as pool:
        for _ in range(100):
            tasks.append(loop.run_in_executor(pool, functools.partial(var, a)))
        results = await asyncio.gather(*tasks)
    pool_end = time.time()
    print(f'Process pool took {pool_end-pool_start}')

    serial_start = time.time()
    for _ in range(100):
        result = var(a)
    serial_end = time.time()
    print(f'Running in serial took {serial_end - serial_start}')
Running 100 times, I get the following output:
Process pool took 3.442288875579834
Running in serial took 13.769982099533081
In this case, running in a process pool is about 4x faster. You may also wish to try running each iteration of your loop concurrently. You can do this by creating a function that processes one x, y coordinate at a time and then running each point you want to examine in a process pool:
import numpy as np

def process_poi(interest, x, y):
    grid = Grid.from_raster(interest, data_name='map')
    grid.catchment(data='map', x=x, y=y, out_name='catch')
    variable = grid.view('catch', nodata=np.nan)
    variable = np.array(variable)
    return variable.mean()

async def var_loop_async(interest, pool, loop):
    tasks = []
    for (x, y) in zip(xs, ys):
        function_call = functools.partial(process_poi, interest, x, y)
        tasks.append(loop.run_in_executor(pool, function_call))
    return await asyncio.gather(*tasks)

async def main():
    loop = asyncio.get_event_loop()
    pool_start = time.time()
    tasks = []
    with ProcessPoolExecutor() as pool:
        for _ in range(100):
            tasks.append(var_loop_async(a, pool, loop))
        results = await asyncio.gather(*tasks)
    pool_end = time.time()
    print(f'Process pool took {pool_end-pool_start}')
In this case I get Process pool took 3.2950568199157715 - so not really any faster than our first version with one process per call of var. This is likely because the limiting factor at this point is how many cores we have available on our CPU; splitting our work into smaller increments does not add much value.
That said, if you have 1000 x and y coordinates you wish to examine across two images, this last approach may yield a performance gain.
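For example, a minimal sketch of that idea (reusing the process_poi and var_loop_async helpers above, and assuming a and b point at your two rasters) fans every (x, y) point of both images out over one shared pool:

async def main():
    loop = asyncio.get_event_loop()
    with ProcessPoolExecutor() as pool:
        # Submit every (x, y) point of both images to the same pool.
        results = await asyncio.gather(
            var_loop_async(a, pool, loop),
            var_loop_async(b, pool, loop),
        )
    means_a, means_b = results  # one list of catchment means per image
    print(means_a, means_b)

if __name__ == "__main__":
    asyncio.run(main())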
I think this is a reasonable and straightforward way of speeding up your code by parallelizing only the main loop. You can saturate your cores with this, so there is no need to also parallelize over the interest variable. I can't test the code, so I assume that your function is correct; I have just moved the loop body into a new function and parallelized it inside var().
from multiprocessing import Pool

def var(interest, xs, ys):
    grid = Grid.from_raster(interest, data_name='map')
    with Pool(4) as p:  # uses 4 cores, adjust this as you need
        variable_avg = p.starmap(loop, [(x, y, grid) for x, y in zip(xs, ys)])
    return variable_avg

def loop(x, y, grid):
    grid.catchment(data='map', x=x, y=y, out_name='catch')
    variable = grid.view('catch', nodata=np.nan)
    variable = np.array(variable)
    return variable.mean()
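Called like this with the point lists and image path from the question (a minimal usage sketch; the if __name__ == '__main__' guard matters if your platform uses the spawn start method):

if __name__ == '__main__':
    # xs, ys and a defined as in the question
    averages = var(a, xs, ys)
    print(averages)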
I'm new to Python. I want to learn how to do parallel processing in Python. I saw the following example:
import multiprocessing as mp
import numpy as np

np.random.RandomState(100)
arr = np.random.randint(0, 10, size=[20, 5])
data = arr.tolist()

def howmany_within_range_rowonly(row, minimum=4, maximum=8):
    count = 0
    for n in row:
        if minimum <= n <= maximum:
            count = count + 1
    return count

pool = mp.Pool(mp.cpu_count())
results = pool.map(howmany_within_range_rowonly, [row for row in data])
pool.close()

print(results[:10])
but when I run it, this error happened:
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
What should I do?
If you place everything that is currently at global scope inside an if __name__ == "__main__" block as follows, you should find that your program behaves as you expect:
import multiprocessing as mp
import numpy as np

def howmany_within_range_rowonly(row, minimum=4, maximum=8):
    count = 0
    for n in row:
        if minimum <= n <= maximum:
            count = count + 1
    return count

if __name__ == "__main__":
    np.random.RandomState(100)
    arr = np.random.randint(0, 10, size=[20, 5])
    data = arr.tolist()

    pool = mp.Pool(mp.cpu_count())
    results = pool.map(howmany_within_range_rowonly, [row for row in data])
    pool.close()

    print(results[:10])
Without this protection, your multiprocessing code would be executed whenever the module is imported from another module. In particular, that import can happen inside a non-main process spawned by the Pool itself, and spawning processes from sub-processes while they are still bootstrapping is not allowed, hence the error and the need for the guard.
I had a real-world case where I faced the same RuntimeError when executing a specific tool on macOS machines (on Linux machines it was fine). However, I'm not sure about the exact cause of the problem, because the if __name__ == "__main__" encapsulation seemed to be properly in place.
Following one comment on this Stack Overflow entry, I suspected that using Python >= 3.8, which uses spawn as the default start method for subprocesses on macOS, might be the problem.
My solution:
Using Python 3.7 did the trick.
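If downgrading is not an option, another thing worth trying (a sketch, assuming a Unix-like OS, since fork is not available on Windows) is to explicitly request the fork start method before creating any pools:

import multiprocessing as mp

if __name__ == "__main__":
    # Revert to the pre-3.8 default start method on macOS.
    mp.set_start_method("fork")
    # howmany_within_range_rowonly and data defined as in the question above
    pool = mp.Pool(mp.cpu_count())
    results = pool.map(howmany_within_range_rowonly, data)
    pool.close()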
I'm currently working on Windows in a Jupyter notebook and have been struggling to get multiprocessing to work. It does not run all my async calls in parallel; it runs them one at a time. Please provide some guidance on where I am going wrong. I need to put the results into variables for future use. What am I not understanding?
import multiprocessing as mp
import cylib
Pool = mp.Pool(processes=4)
result1 = Pool.apply_async(cylib.f, [v]) # evaluate asynchronously
result2 = Pool.apply_async(cylib.f, [x]) # evaluate asynchronously
result3 = Pool.apply_async(cylib.f, [y]) # evaluate asynchronously
result4 = Pool.apply_async(cylib.f, [z]) # evaluate asynchronously
vr = result1.get(timeout=420)
xr = result2.get(timeout=420)
yr = result3.get(timeout=420)
zr = result4.get(timeout=420)
The tasks are executing in parallel.
However, this is fetching the results synchronously i.e. "wait until result1 is ready, then wait until result2 is ready, .." and so on.
vr = result1.get(timeout=420)
xr = result2.get(timeout=420)
yr = result3.get(timeout=420)
zr = result4.get(timeout=420)
Consider the following example code, where each task is polled asynchronously
from time import sleep
import multiprocessing as mp

pool = mp.Pool(processes=4)

# Create tasks with longer wait first
tasks = {i: pool.apply_async(sleep, [t]) for i, t in enumerate(reversed(range(3)))}

done = set()

# Keep polling until all tasks complete
while len(done) < len(tasks):
    for i, t in tasks.items():
        # Skip completed tasks
        if i in done:
            continue

        result = None
        try:
            result = t.get(timeout=0)
        except mp.TimeoutError:
            pass
        else:
            print("Task #:{} complete".format(i))
            done.add(i)
You can replicate something like the above or use the callback argument on apply_async to perform some handling automatically as tasks complete.
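For instance, a minimal sketch of the callback approach (slow_square is a made-up helper for illustration) looks like this; each callback fires in the main process as soon as its task finishes:

from time import sleep
import multiprocessing as mp

def slow_square(x):
    sleep(x)
    return x * x

def on_done(result):
    # Called back in the main process as each task completes.
    print("Got result:", result)

if __name__ == "__main__":
    pool = mp.Pool(processes=4)
    for t in reversed(range(3)):
        pool.apply_async(slow_square, [t], callback=on_done)
    pool.close()  # no more tasks will be submitted
    pool.join()   # wait for all workers (and callbacks) to finish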
I am trying to run a script with multiple threads to decrease the time taken by the script to complete.
I need to know how to implement multithreading in a program like this.
Script example:
import requests

someOtherArray = []

def getnetworkdata():
    data = ["somesite.com/1", "somesite.com/2", "somesite.com/3", "somesite.com/4"]
    for url in data:
        r = requests.get(url)
        someOtherArray.append(r.text)
The threads should keep the results in sequence for the required task.
The output I am expecting:
someOtherArray = [1, 2, 3, 4]
I am using Python 2.x
import requests
from multiprocessing.dummy import Pool as ThreadPool

data = ["somesite.com/1", "somesite.com/2", "somesite.com/3", "somesite.com/4"]

# Make the Pool of workers
pool = ThreadPool(4)

# Open the urls in their own threads
# and return the results
results = pool.map(requests.get, data)
someOtherArray = map(lambda x: x.text, results)

# Close the pool and wait for the work to finish
pool.close()
pool.join()
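One thing worth noting: pool.map returns the responses in the same order as data, so someOtherArray will line up with the sequence your expected output shows, even though the requests themselves run concurrently.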
I have a for loop which uses some boolean conditions and finally writes a file accordingly. The problem I have is that the conditions are true for many files (sometimes around 1000 files need to be written), so writing them takes a long time (around 10 minutes). I know I can somehow use Python's multiprocessing and utilise some of the cores.
This is the code that works, but only uses one core.
for i, n in enumerate(halo_param.strip()):
    mask = var1['halo_id'] == n
    newtbdata = tbdata1[mask]
    hdu = pyfits.BinTableHDU(newtbdata)
    hdu.writeto('/home/Documments/file_{0}.fits'.format(i))
I came across the suggestion that it can be done using Pool from multiprocessing.
if __name__ == '__main__':
    pool = Pool(processes=4)
I would like to know how to do it and utilise at least 4 of my cores.
Restructure the for loop body as a function, and use Pool.map with the function.
from multiprocessing import Pool

def work(arg):
    i, n = arg
    mask = var1['halo_id'] == n
    newtbdata = tbdata1[mask]
    hdu = pyfits.BinTableHDU(newtbdata)
    hdu.writeto('/home/Documments/file_{0}.fits'.format(i))

if __name__ == '__main__':
    pool = Pool(processes=4)
    pool.map(work, enumerate(halo_param.strip()))
    pool.close()
    pool.join()
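As a small variation (a sketch, assuming Python 3, where Pool supports the context-manager protocol, and that work, var1, tbdata1, and halo_param are defined at module level so the worker processes can see them), the same thing can be written with a with block:

from multiprocessing import Pool

if __name__ == '__main__':
    with Pool(processes=4) as pool:
        # map blocks until every file has been written,
        # then the with block tears the pool down for us.
        pool.map(work, enumerate(halo_param.strip()))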