So I create a class variable called executor in my class
executor = ThreadPoolExecutor(100)
and instead of defining extra functions and methods and using decorators, I simply use the following line to handle my blocking tasks (like IO, hash creation, and so on) in my async methods:
result = await to_tornado_future(self.executor.submit(blocking_method, param1, param2))
I decided to use this style because:
1- decorators are slower by nature
2- there is no need for extra methods and functions
3- it works as expected and creates no threads before they are needed
Am I right? Please give reasons (I want to know whether the way I use it is slower, uses more resources, and so on).
Update
Based on Ben's answer, my approach above was not correct,
so I ended up using the following function as needed; I think it's the best way to go:
from tornado.concurrent import Future

def pool(pool_executor, fn, *args, **kwargs):
    new_future = Future()
    result_future = pool_executor.submit(fn, *args, **kwargs)
    # Copy the worker's result onto the awaitable Tornado future once done
    result_future.add_done_callback(lambda f: new_future.set_result(f.result()))
    return new_future
usage:
result = await pool(self.executor, time.sleep, 3)
This is safe as long as all your blocking methods are thread-safe. Since you mentioned doing IO in these threads, I'll point out that doing file IO here is fine but all network IO in Tornado must occur on the IOLoop's thread.
Why do you say "decorators are slower by nature"? Which decorators are slower than what? Some decorators have no performance overhead at all (although most do have some runtime cost). to_tornado_future(executor.submit()) isn't free either. (BTW, I think you want tornado.gen.convert_yielded instead of tornado.platform.asyncio.to_tornado_future. executor.submit doesn't return an asyncio.Future).
As a general rule, running blocking_method on a thread pool is going to be slower than just calling it directly. You should do this only when blocking_method is likely to block for long enough that you want the main thread free to do other things in the meantime.
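To make Ben's suggestion concrete, here is a minimal sketch of the convert_yielded approach (expensive_hash is a made-up stand-in for whatever blocking work you have):

from concurrent.futures import ThreadPoolExecutor
from tornado import gen

executor = ThreadPoolExecutor(max_workers=4)

def expensive_hash(data):
    ...  # blocking or CPU-heavy work runs on the executor's thread

async def handle(data):
    # convert_yielded wraps the concurrent.futures.Future from submit()
    # into something awaitable on the IOLoop's thread
    return await gen.convert_yielded(executor.submit(expensive_hash, data))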
According to a number of sources, including this question, passing a runnable as the target parameter in __init__ (with or without args and kwargs) is preferable to extending the Thread class.
If I create a runnable, how can I pass the thread it is running on as self to it without extending the Thread class? For example, the following would work fine:
class MyTask(Thread):
    def run(self):
        print(self.name)

MyTask().start()
However, I can't see a good way to get this version to work:
def my_task(t):
    print(t.name)

Thread(target=my_task, args=(), kwargs={}).start()
This question is a followup to Python - How can I implement a 'stoppable' thread?, which I answered, but possibly incompletely.
Update
I've thought of a hack to do this using current_thread():
def my_task():
    print(current_thread().name)

Thread(target=my_task).start()
Problem: calling a function to get a parameter that should ideally be passed in.
Update #2
I have found an even hackier solution that makes current_thread seem much more attractive:
class ThreadWithSelf(Thread):
    def __init__(self, **kwargs):
        args = kwargs.get('args', ())
        # Prepend the thread itself to the user-supplied args
        args = (self,) + tuple(args)
        kwargs['args'] = args
        super().__init__(**kwargs)

ThreadWithSelf(target=my_task).start()
Besides being incredibly ugly (e.g. by forcing the user to use keywords only, even if that is the recommended way in the documentation), this completely defeats the purpose of not extending Thread.
Update #3
Another ridiculous (and unsafe) solution: to pass in a mutable object via args and to update it afterwards:
def my_task(t):
    print(t[0].name)

container = []
t = Thread(target=my_task, args=(container,))
container.append(t)
t.start()
To avoid synchronization issues, you could kick it up a notch and implement another layer of ridiculousness:
def my_task(t, i):
    print(t[i].name)

container = []
container.append(Thread(target=my_task, args=(container, 0)))
container.append(Thread(target=my_task, args=(container, 1)))

for t in container:
    t.start()
I am still looking for a legitimate answer.
It seems like your goal is to get access to the thread currently executing a task from within the task itself. You can't add the thread as an argument to the threading.Thread constructor, because it's not yet constructed. I think there are two real options.
If your task runs many times, potentially on many different threads, I think the best option is to use threading.current_thread() from within the task. This gives you access directly to the thread object, with which you can do whatever you want. This seems to be exactly the kind of use-case this function was designed for.
On the other hand, if your goal is implement a thread with some special characteristics, the natural choice is to subclass threading.Thread, implementing whatever extended behavior you wish.
Also, as you noted in your comment, isinstance(current_thread(), ThreadSubclass) will return True, meaning you can use both options and be assured that your task will have access to whatever extra behavior you've implemented in your subclass.
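A minimal sketch combining the two options (WorkerThread and greet are hypothetical names):

import threading

class WorkerThread(threading.Thread):
    def greet(self):
        print(f"hello from {self.name}")

def task():
    t = threading.current_thread()
    if isinstance(t, WorkerThread):  # True when started via WorkerThread
        t.greet()

WorkerThread(target=task).start()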
The simplest and most readable answer here is: use current_thread(). You can use various weird ways to pass the thread as a parameter, but there's no good reason for that. Calling current_thread() is the standard approach which is shorter than all the alternatives and will not confuse other developers. Don't try to overthink/overengineer this:
def runnable():
    thread = threading.current_thread()

Thread(target=runnable).start()
If you want to hide it a bit and change for aesthetic reasons, you can try:
def with_current_thread(f):
    # Defer the lookup: the wrapper runs inside the new thread and
    # passes that thread to f as its first argument
    def wrapper(*args, **kwargs):
        return f(threading.current_thread(), *args, **kwargs)
    return wrapper

@with_current_thread
def runnable(thread):
    print(thread.name)

Thread(target=runnable).start()
If this is not good enough, you may get better answers by describing why you think the parameter passing is better / more correct for your use case.
I'm writing a library which uses Tornado Web's tornado.httpclient.AsyncHTTPClient to make requests, which gives my code an async interface of:
async def my_library_function():
    return await ...
I want to make this interface optionally serial if the user provides a kwarg - something like serial=True. Though obviously you can't call a function defined with the async keyword from a normal function without await. This would be ideal - though almost certainly impossible in the language at the moment:
async def here_we_go():
    result = await my_library_function()
    result = my_library_function(serial=True)
I haven't been able to find anything online where someone has come up with a nice solution to this. I don't want to have to reimplement basically the same code without the awaits splattered throughout.
Is this something that can be solved or would it need support from the language?
Solution (though use Jesse's instead - explained below)
Jesse's solution below is pretty much what I'm going to go with. I did end up getting the interface I originally wanted by using a decorator. Something like this:
import asyncio
from functools import wraps

def serializable(f):
    @wraps(f)
    def wrapper(*args, asynchronous=False, **kwargs):
        if asynchronous:
            return f(*args, **kwargs)
        else:
            # Run the coroutine to completion on the current event loop
            loop = asyncio.get_event_loop()
            return loop.run_until_complete(f(*args, **kwargs))
    return wrapper
This gives you this interface:
result = await my_library_function(asynchronous=True)
result = my_library_function(asynchronous=False)
I sanity-checked this on Python's async mailing list, and I was lucky enough to have Guido respond; he politely shot it down for this reason:
Code smell -- being able to call the same function both asynchronously
and synchronously is highly surprising. Also it violates the rule of
thumb that the value of an argument shouldn't affect the return type.
Nice to know it's possible though if not considered a great interface. Guido essentially suggested Jesse's answer and introducing the wrapping function as a helper util in the library instead of hiding it in a decorator.
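In other words, expose an explicitly named synchronous entry point alongside the coroutine; a hedged sketch (the _sync suffix is just one naming choice):

import asyncio

async def my_library_function():
    ...

def my_library_function_sync():
    # A separate, clearly named blocking wrapper instead of a mode flag
    return asyncio.get_event_loop().run_until_complete(my_library_function())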
When you want to call such a function synchronously, use run_until_complete:
asyncio.get_event_loop().run_until_complete(here_we_go())
Of course, if you do this often in your code, you should come up with an abbreviation for this statement, perhaps just:
def sync(fn, *args, **kwargs):
    return asyncio.get_event_loop().run_until_complete(fn(*args, **kwargs))
Then you could do:
result = sync(here_we_go)
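On Python 3.7 and later, asyncio.run is the idiomatic spelling of the same idea; note that it creates a fresh event loop, so it cannot be called while another loop is already running:

import asyncio

result = asyncio.run(here_we_go())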
I have an algorithm that I am trying to parallelize because of very long run times in serial. However, the function that needs to be parallelized is inside a class. multiprocessing.Pool seems to be the best and fastest way to do this, but there is a problem: its target function cannot be a method of an object instance. Meaning this: you declare a Pool in the following way:
import multiprocessing as mp
cpus = mp.cpu_count()
poolCount = cpus*2
pool = mp.Pool(processes = poolCount, maxtasksperchild = 2)
And then actually use it like so:
pool.map(self.TargetFunction, args)
But this throws an error, because object instances cannot be pickled, which is what Pool does to pass information to all of its child processes. But I have to use self.TargetFunction.
So I had an idea: I would create a new Python file named parallel and simply write a couple of functions without putting them in a class, and call those functions from within my original class (whose function I want to parallelize).
So I tried this:
import multiprocessing as mp

def MatrixHelper(args):
    # Unpack the (object, parameters...) tuple built in Start
    WM, sigmaI, sigmaX, i = args
    result = WM.CreateMatrixMp(sigmaI, sigmaX, i)
    print(result)
    return result

def Start(sigmaI, sigmaX, numPixels, WM):
    cpus = mp.cpu_count()
    poolCount = cpus * 2
    args = [(WM, sigmaI, sigmaX, i) for i in range(numPixels)]
    print("Number of CPUs to process WM: %d" % cpus)
    pool = mp.Pool(processes=poolCount, maxtasksperchild=2)
    tempData = pool.map(MatrixHelper, args)
    return tempData
These functions are not part of a class, and using MatrixHelper in Pool's map function works fine. But I realized while doing this that it was no way out. The function in need of parallelization (CreateMatrixMp) expects an object to be passed to it (it is declared as def CreateMatrixMp(self, sigmaI, sigmaX, i)).
Since it is not being called from within its class, it doesn't get a self passed to it. To solve this, I passed the Start function the calling object itself, as in parallel.Start(sigmaI, sigmaX, self.numPixels, self). The object self then becomes WM, so that I am finally able to call the desired function as WM.CreateMatrixMp().
I'm sure that this is a very sloppy way of coding, but I just wanted to see if it would work. But nope, pickling error again: the map function cannot handle any object instances at all.
So my question is: why is it designed this way? It seems useless, completely dysfunctional in any program that uses classes at all.
I tried using Process rather than Pool, but this requires the array that I am ultimately writing to to be shared, which requires processes to wait for each other. If I don't want it to be shared, then I have each process write its own smaller array and do one big write at the end. But both of these result in slower run times than when I was doing this serially! Python's built-in multiprocessing seems absolutely useless!
Can someone please give me some guidance on how to actually save time with multiprocessing, in the context of my target function being inside a class? I have read in posts here to use pathos.multiprocessing instead, but I am on Windows and am working on this project with multiple people who all have different setups. Having everyone try to install it would be inconvenient.
I was having a similar issue with trying to use multiprocessing within a class. I was able to solve it with a relatively easy workaround I found online. Basically, you use a function outside of your class that unwraps/unpacks the method inside your class that you're trying to parallelize. Here are the two websites I found that explain how to do it.
Website 1 (joblib example)
Website 2 (multiprocessing module example)
For both, the idea is to do something like this:
from multiprocessing import Pool
import time

def unwrap_self_f(arg, **kwarg):
    # Unpack the (instance, name) tuple and call the method on the instance
    return C.f(*arg, **kwarg)

class C:
    def f(self, name):
        print('hello %s,' % name)
        time.sleep(5)
        print('nice to meet you.')

    def run(self):
        pool = Pool(processes=2)
        names = ('frank', 'justin', 'osi', 'thomas')
        pool.map(unwrap_self_f, zip([self] * len(names), names))

if __name__ == '__main__':
    c = C()
    c.run()
The essence of how multiprocessing works is that it spawns sub-processes that receive the parameters needed to run a certain function. In order to pass these arguments, it needs them to be, well, passable: not exclusive to the main process, as sockets, file descriptors and other low-level, OS-related things are.
This translates into "need to be pickleable or serializable".
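A quick way to check whether a given object would survive the trip (a hedged sketch using only the standard library; multiprocessing uses pickle under the hood, so this is a reasonable proxy):

import pickle

def is_picklable(obj):
    # If pickle.dumps succeeds, the object can be shipped to a Pool worker
    try:
        pickle.dumps(obj)
        return True
    except (pickle.PicklingError, TypeError, AttributeError):
        return False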
On the same topic, parallel processing works best when you (can) have self-contained divisions of a problem. I can tell you want to share some sort of input/stream/database source, but this will probably create a bottleneck that you'll have to tackle at some point (at least from the "python script" side, rather than the "OS/database" side). Fortunately, you have to tackle it early now.
You can re-code your classes to spawn/create these non-picklable resources when needed rather than at start:
def targetFunction(self, range_params):
    if not self.ready():
        self._init_source()
    # rest of the code
You kinda tackled the problem the other way around (initialized an object based on params). And yes, parallel processing comes with a cost.
You can see the multiprocessing programming guidelines for an even more thorough insight on this matter.
This is an old post but it is still one of the top results when you search for the topic. Some good info for this question can be found at this Stack Overflow question: python subclassing multiprocessing.Process
I tried some workarounds to try calling pool.starmap from inside a class to another function in the class. Making it a staticmethod or having an outside function call it didn't work and gave the same error. A class instance just can't be pickled, so we need to create the instance after we start the multiprocessing.
What I ended up doing that worked for me was to separate my class into two classes. Basically, the function you are calling the multiprocessing on needs to be called right after you instantiate a new object for the class it belongs to.
Something like this:
from multiprocessing import Pool

class B:
    ...
    def process_feature(self, idx, feature):
        # do stuff in the new process
        pass
    ...

def multiprocess_feature(process_args):
    # Module-level helper: instantiate B inside the worker process,
    # so no instance ever has to be pickled
    b_instance = B()
    return b_instance.process_feature(*process_args)

class A:
    ...
    def process_stuff(self):
        ...
        with Pool(processes=num_processes, maxtasksperchild=10) as pool:
            results = pool.starmap(
                multiprocess_feature,
                [
                    (idx, feature)
                    for idx, feature in enumerate(features)
                ],
                chunksize=100,
            )
        ...
    ...
...
I have an embarrassingly parallel problem consisting of a bunch of tasks that get solved independently of each other. Solving each of the tasks is quite lengthy, so this is a prime candidate for multi-processing.
The problem is that solving my tasks requires creating a specific object that is very time consuming on its own but can be reused for all the tasks (think of an external binary program that needs to be launched), so in the serial version I do something like this:
def costly_function(task, my_object):
    solution = solve_task_using_my_object(task, my_object)
    return solution

def solve_problem():
    my_object = create_costly_object()
    tasks = get_list_of_tasks()
    all_solutions = [costly_function(task, my_object) for task in tasks]
    return all_solutions
When I try to parallelize this program using multiprocessing, my_object cannot be passed as a parameter for a number of reasons (it cannot be pickled, and it should not run more than one task at the same time), so I have to resort to creating a separate instance of the object for each task:
import multiprocessing

def costly_function(task):
    my_object = create_costly_object()
    solution = solve_task_using_my_object(task, my_object)
    return solution

def psolve_problem():
    pool = multiprocessing.Pool()
    tasks = get_list_of_tasks()
    all_solutions = pool.map_async(costly_function, tasks)
    return all_solutions.get()
but the added cost of creating multiple instances of my_object makes this code only marginally faster than the serial one.
If I could create a separate instance of my_object in each process and then reuse them for all the tasks that get run in that process, my timings would significantly improve. Any pointers on how to do that?
I found a simple way of solving my own problem without bringing in any tools besides the standard library; I thought I'd write it down here in case somebody else has a similar problem.
multiprocessing.Pool accepts an initializer function (with arguments) that gets run when each process is launched. The return value of this function is not stored anywhere, but one can take advantage of the function to set up a global variable:
import multiprocessing

def init_process():
    # Runs once in each worker process; stores the costly object globally
    global my_object
    my_object = create_costly_object()

def costly_function(task):
    global my_object
    solution = solve_task_using_my_object(task, my_object)
    return solution

def psolve_problem():
    pool = multiprocessing.Pool(initializer=init_process)
    tasks = get_list_of_tasks()
    all_solutions = pool.map_async(costly_function, tasks)
    return all_solutions.get()
Since each process has a separate global namespace, the instantiated objects do not clash, and they are created only once per process.
Probably not the most elegant solution, but it's simple enough and gives me a near-linear speedup.
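If the costly object needs configuration, Pool also accepts initargs, which are passed to the initializer; a hedged sketch (config_path and "settings.ini" are made up for illustration):

def init_process(config_path):
    global my_object
    my_object = create_costly_object(config_path)

pool = multiprocessing.Pool(initializer=init_process, initargs=("settings.ini",))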
You can have the Celery project handle all this for you; among many other features, it also has a way to run some task initialization that can later be used by all tasks.
You are right that you are constrained to picklable objects when using multiprocessing. Are you absolutely sure that your object is un-picklable?
Have you tried dill? If you import it, any time pickle is called it will use the dill bindings. It worked for me when I was trying to use multiprocessing on sympy equations.
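For illustration, a minimal sketch of something dill serializes that the stdlib pickle rejects (using dill's own dumps/loads rather than relying on it patching pickle):

import dill

# Lambdas are a classic example of an object pickle refuses to serialize
payload = dill.dumps(lambda x: x * 2)
restored = dill.loads(payload)
assert restored(3) == 6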
I'm trying to make a memoize decorator that works with multiple threads.
I understood that I need to use the cache as a shared object between the threads, and acquire/lock the shared object. I'm of course launching the threads:
for i in range(5):
    thread = threading.Thread(target=self.worker, args=(self.call_queue,))
    thread.daemon = True
    thread.start()
where worker is:
def worker(self, call):
    func, args, kwargs = call.get()
    self.returns.put(func(*args, **kwargs))
    call.task_done()
The problem starts, of course, when I'm sending a function decorated with a memo function (like this) to many threads at the same time.
How can I implement the memo's cache as a shared object among threads?
The most straightforward way is to employ a single lock for the entire cache, and require that any writes to the cache grab the lock first.
In the example code you posted, at line 31, you would acquire the lock and check to see if the result is still missing, in which case you would go ahead and compute and cache the result. Something like this:
lock = threading.Lock()

...

except KeyError:
    with lock:
        if key in self.cache:
            v = self.cache[key]
        else:
            v = self.cache[key] = f(*args, **kwargs), time.time()
The example you posted stores a cache per function in a dictionary, so you'd need to store a lock per function as well.
If you were using this code in a highly contentious environment, though, it would probably be unacceptably inefficient, since threads would have to wait on each other even if they weren't calculating the same thing. You could probably improve on this by storing a lock per key in your cache. You'll need to globally lock access to the lock storage as well, though, or else there's a race condition in creating the per-key locks.
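A minimal sketch of that per-key scheme (the names key_locks and locks_guard are made up; the global guard exists only to make lock creation itself race-free):

import threading

locks_guard = threading.Lock()  # protects the registry of per-key locks
key_locks = {}

def lock_for(key):
    # Create-or-fetch this key's lock under the global guard, so two
    # threads can never end up holding different locks for the same key
    with locks_guard:
        if key not in key_locks:
            key_locks[key] = threading.Lock()
        return key_locks[key]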