I have one machine with two CPUs, and each CPU has a different number of cores. I have one function in my Python code. How can I run this function on each of the CPUs?
In this case, I need to run the function two times, because I have two CPUs.
I want this because I want to compare the performance of the different CPUs.
This can be part of the code. Please let me know if the code is not written correctly.
import multiprocessing

def my_function():
    print("This function needs high computation")
    # Add code of function

pool = multiprocessing.Pool()
jobs = []
for j in range(2):  # how can I run the function depending on the number of CPUs?
    p = multiprocessing.Process(target=my_function)
    jobs.append(p)
    p.start()
I have read many posts, but have not found a suitable answer for my problem.
The concurrent.futures package handles the allocation of resources in an easy way, so that you don't have to specify any particular process/thread IDs, something that is OS-specific anyway.
If you want to run a function using either multiple processes or multiple threads, you can have a class that does it for you:
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
from typing import Generator


class ConcurrentExecutor:

    @staticmethod
    def _concurrent_execution(executor, func, values):
        with executor() as ex:
            if isinstance(values, Generator):
                # unpack argument tuples; note that the lambda cannot be pickled,
                # so this branch only works with the thread-based executor
                return list(ex.map(lambda args: func(*args), values))
            return list(ex.map(func, values))

    @staticmethod
    def concurrent_process_execution(func, values):
        return ConcurrentExecutor._concurrent_execution(
            ProcessPoolExecutor, func, values,
        )

    @staticmethod
    def concurrent_thread_execution(func, values):
        return ConcurrentExecutor._concurrent_execution(
            ThreadPoolExecutor, func, values,
        )
Then you can execute any function with it, even with arguments. If it's a single-argument function:
from concurrency import ConcurrentExecutor as concex

# Single argument function that prints the input
def single_arg_func(arg):
    print(arg)

# Dummy list of 5 different input values
n_values = 5
arg_values = [x for x in range(n_values)]

# We want to run the function concurrently for each value in values
concex.concurrent_thread_execution(single_arg_func, arg_values)
Or with multiple arguments:
from concurrency import ConcurrentExecutor as concex

# Multi argument function that prints the input
def multi_arg_func(arg1, arg2):
    print(arg1, arg2)

# Dummy list of 5 different input values per argument
n_values = 5
arg1_values = [x for x in range(n_values)]
arg2_values = [2*x for x in range(n_values)]

# Create a generator of combinations of values for the 2 arguments
args_values = ((arg1_values[i], arg2_values[i]) for i in range(n_values))

# We want to run the function concurrently for each value combination
concex.concurrent_thread_execution(multi_arg_func, args_values)
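If, as in the original loop, you want the number of workers to follow the machine's CPU count, both executor types accept a max_workers argument. Here is a minimal sketch, assuming a toy busy_function as a stand-in for the real workload (pinning work to one specific physical CPU would additionally need something OS-specific such as os.sched_setaffinity on Linux):
import os
from concurrent.futures import ProcessPoolExecutor

def busy_function(i):
    # stand-in for the real CPU-heavy work
    return i + sum(x * x for x in range(100000))

if __name__ == "__main__":
    n_workers = os.cpu_count()  # total number of logical cores across both CPUs
    with ProcessPoolExecutor(max_workers=n_workers) as ex:
        results = list(ex.map(busy_function, range(n_workers)))
    print(results)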
I have multiprocessing code in which each process has to analyse the same data differently.
I have implemented:
with concurrent.futures.ProcessPoolExecutor() as executor:
    res = executor.map(goal_fcn, p, [global_DataFrame], [global_String])
    for f in concurrent.futures.as_completed(res):
        fp = res
and the function:
def goal_fcn(x, DataFrame, String):
    return heavy_calculation(x, DataFrame, String)
The problem is that goal_fcn is called only once, while it should be called multiple times.
In the debugger, I checked what the variable p looks like: it has multiple columns and rows. Inside goal_fcn, the variable x has only the first row - that looks good.
But the function is called only once. There is no error; the code just executes the next steps.
Even if I modify the variable to p = [1,3,4,5] (and adapt the code accordingly), goal_fcn is executed only once.
I have to use map() because keeping the order between input and output is required.
map works like zip: it terminates once at least one input sequence is at its end. Your [global_DataFrame] and [global_String] lists have one element each, so that is where map ends.
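A minimal illustration of that truncation, using a toy add function that is not part of the original code:
from concurrent.futures import ProcessPoolExecutor

def add(a, b):
    return a + b

if __name__ == "__main__":
    with ProcessPoolExecutor() as ex:
        # the second iterable has only one element, so only one call is made
        print(list(ex.map(add, [1, 2, 3], [10])))  # -> [11]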
There are two ways around this:
Use itertools.product. This is the equivalent of running "for all data frames, for all strings, for all p". Something like this:
import itertools

def goal_fcn(x_DataFrame_String):
    x, DataFrame, String = x_DataFrame_String
    ...

executor.map(goal_fcn, itertools.product(p, [global_DataFrame], [global_String]))
Bind the fixed arguments instead of abusing the sequence arguments.
import functools

def goal_fcn(x, DataFrame, String):
    pass

bound = functools.partial(goal_fcn, DataFrame=global_DataFrame, String=global_String)
executor.map(bound, p)
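Putting the second option together into a runnable sketch - dummy_goal_fcn and the literal lists are placeholders for the real goal_fcn, DataFrame and String, which are not shown here:
import concurrent.futures
import functools

def dummy_goal_fcn(x, DataFrame, String):
    # placeholder for heavy_calculation(x, DataFrame, String)
    return "%s:%s:%d" % (String, x, len(DataFrame))

if __name__ == "__main__":
    p = [1, 3, 4, 5]
    bound = functools.partial(dummy_goal_fcn, DataFrame=[10, 20, 30], String="tag")
    with concurrent.futures.ProcessPoolExecutor() as executor:
        results = list(executor.map(bound, p))  # map preserves the order of p
    print(results)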
I have a function returning a tuple of two elements. The function is called with pool.starmap to generate a list of tuples, which are unpacked into two lists.
import multiprocessing

def func():
    # ...some operations
    return (x, y)

def MP_a_func(func, iterable, proc, chunk):
    pool = multiprocessing.Pool(processes=proc)
    Result = pool.starmap(func, iterable, chunksize=chunk)
    pool.close()
    return Result

if __name__ == '__main__':
    results = MP_a_func(func, iterable, proc, chunk)
    a, b = zip(*results)
I now wish to use the dask delayed API as follows:
if __name__ == '__main__':
    results = delayed(MP_a_func(func, iterable, proc, chunk))
Is it possible to unpack the tuples in the delayed object without using results.compute()?
Thank you for your help.
It is possible for another delayed function to unpack the tuple. In the example below, the delayed value of return_tuple(1) is not computed, but passed on as a delayed object:
import dask

@dask.delayed
def return_tuple(x):
    return x+1, x-1

@dask.delayed
def process_first_item(some_tuple):
    return some_tuple[0]+10

result = process_first_item(return_tuple(1))
dask.compute(result)
As per @mdurant's answer, it turns out the delayed function/decorator has an nout parameter; also see this answer.
If you know the number of outputs, the delayed function (or decorator) takes an optional nout argument, and this will split the single delayed into that many delayed outputs. This sounds like exactly what you need.
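A minimal sketch of the nout route, reusing the return_tuple function from above:
import dask
from dask import delayed

def return_tuple(x):
    return x + 1, x - 1

# nout=2 splits the single delayed call into two delayed outputs,
# so the tuple can be unpacked without calling compute() first
a, b = delayed(return_tuple, nout=2)(1)
print(dask.compute(a, b))  # -> (2, 0)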
I'm programming an optimizer that has to run through several possible variations. The team wants to implement multithreading to get through those variants faster. This means I've had to put all my functions inside a thread class. My problem is with my call of the wrapper function:
class variant_thread(threading.Thread):
    def __init__(self, name, variant, frequencies, fit_vals):
        threading.Thread.__init__(self)
        self.name = name
        self.elementCount = variant
        self.frequencies = frequencies
        self.fit_vals = fit_vals

    def run(self):
        print("Running Variant:", self.elementCount)  # display thread currently running
        fitFunction = self.Wrapper_Function(self.elementCount)
        self.popt, pcov, self.infoRes = curve_fit_my(fitFunction, self.frequencies, self.fit_vals)

    def Optimize_Wrapper(self, frequencies, *params):  # wrapper which returns values in manner which optimizer can work with
        cut = int(len(frequencies)/2)  # <---- ERROR OCCURS HERE
        freq = frequencies[:cut]
        vals = (stuff happens here)
        return (stuff in proper form for optimizer)
I've cut out as much as I could to simplify the example, and I hope you can understand what's going on. Essentially, after the thread is created it calls the optimizer. The optimizer sends the list of frequencies and the parameters it wants to change to the Optimize_Wrapper function.
The problem is that Optimize_Wrapper takes the frequencies list and saves it to "self". This means that the "frequencies" variable becomes a single float value, as opposed to the list of floats it should be. Of course this throws an error when I try to take len(frequencies). Keep in mind I also need to use self later in the function, so I can't just create a static method.
I've never had the problem that a class method saved any values to "self". I know it has to be declared explicitly in Python, but anything I've ever passed to the class method always skips "self" and saves to my declared variables. What's going on here?
Don't pass instance variables to methods. They are already accessible through self. And be careful about which variable is which. The first parameter of Optimize_Wrapper is called "frequencies", but you call it as self.Wrapper_Function(self.elementCount) - so you have a self.frequencies and a frequencies ... and they are different things. Very confusing!
class variant_thread(threading.Thread):
    def __init__(self, name, variant, frequencies, fit_vals):
        threading.Thread.__init__(self)
        self.name = name
        self.elementCount = variant
        self.frequencies = frequencies
        self.fit_vals = fit_vals

    def run(self):
        print("Running Variant:", self.elementCount)  # display thread currently running
        fitFunction = self.Wrapper_Function()
        self.popt, pcov, self.infoRes = curve_fit_my(fitFunction, self.frequencies, self.fit_vals)

    def Optimize_Wrapper(self):  # wrapper which returns values in manner which optimizer can work with
        cut = int(len(self.frequencies)/2)  # formerly the error line; now reads the instance attribute
        freq = self.frequencies[:cut]
        vals = (stuff happens here)
        return (stuff in proper form for optimizer)
You don't have to subclass Thread to run a thread. It's frequently easier to define a function and have Thread call that function. In your case, you may be able to put the variant processing in a function and use a thread pool to run them. This would save all the tedious handling of the thread object itself.
def run_variant(name, variant, frequencies, fit_vals):
    cut = int(len(frequencies)/2)  # plain parameter now, no self needed
    freq = frequencies[:cut]
    vals = (stuff happens here)
    fit_function = (stuff in proper form for optimizer)
    return curve_fit_my(fit_function, frequencies, fit_vals)

if __name__ == "__main__":
    variants = (make the variants)
    name = "name"
    frequencies = (make the frequencies)
    fit_vals = (make the fit_vals)

    from multiprocessing.pool import ThreadPool
    with ThreadPool() as pool:
        for popt, pcov, infoRes in pool.starmap(run_variant,
                ((name, variant, frequencies, fit_vals) for variant in variants)):
            # do the other work here
            pass
I'm trying to get my function, which makes web requests, to run for all elements of a list:
import myClass

def getForAll(elements):
    myList = []
    for element in elements:
        myList.append([element] + myClass.doThing(element))
    return myList
I have tried the following, but it always times out:
import myClass
from multiprocessing import Pool

def getForAll(elements):
    pool = Pool()
    queries = []
    for element in elements:
        queries.append(pool.apply_async(myClass.doThing, element))
    myList = []
    for query in queries:
        myList.append(query.get(timeout=10))
    return myList
It is not a timing issue, however, because removing the timeout just causes it to run forever. Using
queries.append(pool.apply_async(myClass.doThing, [element]))
instead also didn't make any difference.
For clarification: I call getForAll() with a list of strings, and the function doThing() returns a list of lists of strings. I don't need the order to stay the same, but it would be nice if possible. Also, I don't need to separate the CPU work onto several cores; I just don't want to wait about one second per element. doThing calls requests.get() twice, and I believe that can be done for all elements at the same time, without waiting for the responses, so that the code runs at the same speed regardless of the number of elements?
import myClass
from multiprocessing import Pool

def getForAll(elements):
    pool = Pool()
    return pool.map(myClass.doThing, elements)
This works perfectly for my use case. I also had to split my elements up into smaller groups because of the limitations of the API that I was using, with this code from SO:
f = lambda A, n=5: [A[i:i + n] for i in range(0, len(A), n)]
queries = f(elements)
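Since the work here is I/O-bound (waiting on requests.get) rather than CPU-bound, a thread pool is arguably an even better fit and avoids pickling the arguments. A sketch of the same call with multiprocessing's ThreadPool, with myClass.doThing as before and a pool size chosen arbitrarily:
import myClass
from multiprocessing.pool import ThreadPool

def getForAll(elements):
    # threads instead of processes; fine here because the work is network I/O
    with ThreadPool(processes=10) as pool:
        return pool.map(myClass.doThing, elements)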
I ran some tests on the following setup, which unexpectedly turned out to be a quick fix for my problem:
I want to call multiprocessing.Pool.map() from inside a main function (that sets up the parameters). However, it is simpler for me to give a locally defined function as one of the args. Since the latter can't be pickled, I tried the laziest solution of declaring it as global. Should I expect some weird results? Would you advise a different strategy?
Here is some example (dummy) code:
#!/usr/bin/env python3
import random
import multiprocessing as mp

def processfunc(arg_and_func):
    arg, func = arg_and_func
    return "%7.4f:%s" % (func(arg), arg)

def main(*args):
    # the content of var depends on main:
    var = random.random()

    # Now I need to pass a func that uses `var`
    global thisfunc
    def thisfunc(x):
        return x+var

    # Test regular use
    for x in range(-5,0):
        print(processfunc((x, thisfunc)))

    # Test parallel runs.
    with mp.Pool(2) as pool:
        for r in pool.imap_unordered(processfunc, [(x, thisfunc) for x in range(20)]):
            print(r)

if __name__=='__main__':
    main()
PS: I know I could define thisfunc at module level, and pass the var argument through processfunc, but since my actual processfunc in real life already takes a lot of arguments, it seemed more readable to pass a single object thisfunc instead of many parameters...
What you have now looks OK, but might be fragile for later changes.
I might use partial in order to simplify the explicit passing of var to a globally defined function.
import random
import multiprocessing as mp
from functools import partial

def processfunc(arg_and_func):
    arg, func = arg_and_func
    return "%7.4f:%s" % (func(arg), arg)

def thisfunc(var, x):
    return x + var

def main(*args):
    # the content of var depends on main:
    var = random.random()
    f = partial(thisfunc, var)

    # Test regular use
    for x in range(-5,0):
        print(processfunc((x, f)))  # note: f, not thisfunc, since var is already bound

    # Test parallel runs.
    with mp.Pool(2) as pool:
        for r in pool.imap_unordered(processfunc, [(x, f) for x in range(20)]):
            print(r)

if __name__=='__main__':
    main()