I have a loop (all this is being done in Python 3.10) that runs relatively fast compared to a function that needs to consume data from the loop. I don't want to slow down the data stream, and I'm trying to find a way to run the function asynchronously but only execute it again after the previous call has completed... basically:
import time
import multiprocessing

queue = []

def flow():
    thing = queue[0]
    time.sleep(.5)
    print(str(thing))
    delete = queue.pop(0)

p1 = multiprocessing.Process(target=flow)

while True:
    print('stream')
    time.sleep(.25)
    if len(queue) < 1:
        print('emptyQ')
        queue.append('flow')
        p1.start()
I've tried running the function in both a thread and a process, and both seem to try to start another instance while the function is still running. I tried using a queue to pass the data and also as a semaphore, by not removing the item until the end of the function and only adding an item and starting the thread or process if the queue was empty, but that didn't seem to work either.
EDIT: to add an explicit question...
Is there a way to execute a function asynchronously without executing it multiple times simultaneously?
EDIT2: Updated with functional test code (it accurately reproduces the failure) since the real code is a bit more substantial... I have noticed that it seems to work on the first execution of the function (although the print doesn't work inside the function...) but the next execution fails for whatever reason. I assume it tries to load the process twice?
The error I get is: RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase...
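A minimal sketch of one way to get that behaviour, assuming the real code can be restructured around a single long-lived worker (the worker body, the maxsize=1 choice and the 'flow' payload below are placeholders): the process is created and started exactly once inside the __main__ guard, which is what the bootstrapping RuntimeError is asking for, and because there is only one consumer the function can never run twice at the same time.

import time
import queue            # only for the Full exception
import multiprocessing

def flow(q):
    # Single consumer: items are handled strictly one after another,
    # so the work can never overlap with itself.
    while True:
        thing = q.get()              # blocks until something arrives
        time.sleep(.5)               # stand-in for the real slow work
        print(str(thing))

if __name__ == '__main__':
    q = multiprocessing.Queue(maxsize=1)
    p1 = multiprocessing.Process(target=flow, args=(q,), daemon=True)
    p1.start()                       # the process is started exactly once
    while True:
        print('stream')
        time.sleep(.25)
        try:
            q.put_nowait('flow')     # hand the data over without blocking the loop
        except queue.Full:
            pass                     # the worker is still busy; drop this item

Items put while the worker is busy either wait briefly (at most one, given maxsize=1) or are dropped, so the producing loop is never slowed down.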
I'm developing a standard Python script (no servers, no async, no multiprocessing, ...), i.e. a classic data science program where I load data, process it as dataframes, and so on. Everything is synchronous.
At some point, I need to call a function from an external library that is completely opaque to me (I have no control over it and I don't know how it does what it does), like:
def inside_my_function(...):
    # My code
    result = the_function(params)
    # Other code
Now, this the_function sometimes never terminates (I don't know why; probably there are bugs or some condition that makes it get stuck, but it's completely random), and when that happens my program gets stuck as well.
Since I have to use it and it cannot be modified, I would like to know if there is a way to, for example, wrap it in another function which calls the_function and waits for some timeout; if the_function returns before the timeout, the result is returned, otherwise the_function is somehow killed, aborted, skipped, whatever, and retried up to n times.
I realise that in order to execute the_function and check for the timeout at the same time I will probably need something like multithreading, but I'm not sure whether it makes sense and how to implement it correctly without falling into bad practices.
How would you proceed?
EDIT: I would avoid multiprocessing because of the great overhead and because I don't want to overcomplicate things with serializability and so on.
Thank you
import time
import random
import threading

def func_that_waits():
    t1 = time.time()
    while (time.time() - t1) <= 3:
        time.sleep(1)
        if check_unreliable_func.worked:
            break
    if not check_unreliable_func.worked:
        print("unreliable function has been working for too long, it's killed.")

def check_unreliable_func(func):
    check_unreliable_func.worked = False
    def inner(*args, **kwargs):
        func(*args, **kwargs)
        check_unreliable_func.worked = True
    return inner

def unreliable_func():
    working_time = random.randint(1, 6)
    time.sleep(working_time)
    print(f"unreliable_func has been working for {working_time} seconds")

to_wait = threading.Thread(target=func_that_waits)
main_func = threading.Thread(target=check_unreliable_func(unreliable_func), daemon=True)

main_func.start()
to_wait.start()
unreliable_func is the function we do not know will return or not.
check_unreliable_func(func) is a decorator whose only purpose is to let the to_wait thread know that unreliable_func has returned something, so there is no reason for to_wait to keep waiting.
The main thing to understand is that main_func is a daemon thread: as soon as all non-daemon threads (here the main thread and to_wait) have finished, every daemon thread is terminated automatically, no matter what it is doing at that moment.
Of course this is really far from best practice, I'm just showing how it can be done. How it should be done, I would be glad to see myself too.
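A slightly tidier sketch of the same waiting idea, built on threading.Thread.join(timeout) (the timeout, retry count and the call_with_timeout name are made up for illustration); like the daemon-thread version above it cannot actually kill the stuck call, it only stops waiting for it and abandons the daemon thread:

import threading

def call_with_timeout(func, *args, timeout=3, retries=3, **kwargs):
    # Run func in a daemon thread and wait at most `timeout` seconds per
    # attempt. A timed-out thread is not killed; being a daemon it is
    # simply abandoned and will not keep the interpreter alive at exit.
    for attempt in range(retries):
        result = []
        worker = threading.Thread(
            # bind this attempt's own result list so a stale, late-finishing
            # attempt cannot write into a newer one
            target=lambda box=result: box.append(func(*args, **kwargs)),
            daemon=True)
        worker.start()
        worker.join(timeout)
        if result:                       # func returned within the timeout
            return result[0]
        print(f"attempt {attempt + 1} timed out, retrying")
    raise TimeoutError(f"no result after {retries} attempts")

Usage would then be something like result = call_with_timeout(the_function, params, timeout=3, retries=5), with whatever fallback is appropriate if TimeoutError is raised.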
This is not the first time I have problems with Pool() while trying to debug something.
Please note that I am not trying to debug the code which is parallelized. I want to debug code which gets executed much later. The problem is that this later code is never reached, and the cause is that pool.map() is not terminating.
I have created a simple pipeline which does a few things. Among those things is a very simple preprocessing step for textual data.
To speed things up I am running:
print('Preprocess text')
with Pool() as pool:
    df_preprocessed['text'] = pool.map(preprocessor.preprocess, df.text)
But here is the thing:
For some reason this code runs only one time. The second time I end up in an endless loop in _handle_workers() of the pool.py module:
@staticmethod
def _handle_workers(pool):
    thread = threading.current_thread()

    # Keep maintaining workers until the cache gets drained, unless the pool
    # is terminated.
    while thread._state == RUN or (pool._cache and thread._state != TERMINATE):
        pool._maintain_pool()
        time.sleep(0.1)
    # send sentinel to stop workers
    pool._taskqueue.put(None)
    util.debug('worker handler exiting')
Note the while-loop. My script simply ends up there when calling my preprocessing function a second time.
Please note: This only happens during a debugging session! If the script is executed without a debugger everything is working fine.
What could be the reason for this?
Environment
$ python --version
Python 3.5.6 :: Anaconda, Inc.
Update
This could be a known bug: infinite waiting in multiprocessing.Pool
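Not an explanation of the hang itself, but one mitigation that is worth trying under the debugger, sketched under the assumption that the preprocessing step can share a single pool that is created once and shut down explicitly (preprocessor, df and df_preprocessed are taken from the snippet above):

from multiprocessing import Pool

if __name__ == '__main__':
    pool = Pool()      # create the pool once and reuse it for every call
    try:
        print('Preprocess text')
        df_preprocessed['text'] = pool.map(preprocessor.preprocess, df.text)
        # ... later pool.map() calls reuse the same pool instead of tearing
        # one down and spawning a fresh one under the debugger ...
    finally:
        pool.close()   # no more tasks will be submitted
        pool.join()    # let the workers and handler threads exit cleanly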
I have a loop with a highly time-consuming process, and instead of waiting for each process to complete before moving to the next iteration, is it possible to start the process and just move to the next iteration without waiting for it to complete?
Example: Given a text, the script should try to find the matching links from the Web and files from the local disk. Both simply return a list of links or paths.
for proc in (web_search, file_search):
    results = proc(text)
    yield from results
What I have as a solution is to use a timer while doing the job, and if the time exceeds the waiting time, the process is moved to a tray and asked to keep working from there. I then go to the next iteration and repeat the same. After my loop is over, I collect the results from the processes that were moved to the tray.
For simple cases, where the objective is just to let each process run simultaneously, we can use the Thread class of the threading module.
So we can tackle the issue like this: we wrap each process in a Thread and ask it to put its results in a list or some other collection. The code is given below:
from threading import Thread

results = []

def add_to_collection(proc, args, collection):
    '''proc is the function, args are the arguments to pass to it.
    collection is our container (here it is the list results) for
    collecting results.'''
    result = proc(*args)
    collection.append(result)
    print("Completed:", proc)

# Now we do our time consuming tasks
for proc in (web_search, file_search):
    t = Thread(target=add_to_collection, args=(proc, (), results))
    # We assume proc takes no arguments
    t.start()
For complex tasks, as mentioned in the comments, it's better to go with multiprocessing.pool.Pool, as sketched below.
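A minimal sketch of that Pool variant, assuming web_search and file_search are plain top-level functions whose arguments and results are picklable (the names text, web_search and file_search reuse the example above):

from multiprocessing import Pool

if __name__ == '__main__':
    with Pool() as pool:
        # Submit both searches without waiting for either one to finish.
        pending = [pool.apply_async(proc, (text,))
                   for proc in (web_search, file_search)]
        # ... do other work here while the searches run in the background ...
        results = [job.get() for job in pending]   # collect when actually needed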
So I have a Python program with one very expensive function that gets executed on demand at times, but its result is not needed straight away (it can be delayed by a few cycles).
def heavy_function(arguments):
    return calc_obtained_from_arguments

def main():
    a = None
    if some_condition:
        a = heavy_function(x)
    else:
        do_something_with(a)
The thing is that whenever I calculate heavy_function, the rest of the program hangs. However, I need it to keep running with an empty a value, or better, to know that a is being computed separately and thus should not be accessed yet. How can I move heavy_function to a separate process and keep calling the main function all the time until heavy_function is done executing, then read the obtained a value and use it in the main function?
You could use a simple queue.
Put your heavy_function inside a separate process that idles as long as there is no input in the input queue. Use Queue.get(block=True) to do so. Put the result of the computation in another queue.
Run your normal process with the empty a-value and check from time to time whether the output queue is still empty, for example by keeping the normal work going while Queue.empty() is true.
If an item becomes available, because your heavy_function has finished, switch to a calculation with the value a from your output queue.
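A minimal sketch of that queue pattern (heavy_function, some_condition, x and do_something_with come from the question; the waiting flag and the 0.1-second pace of the main loop are illustrative):

import multiprocessing
import time

def heavy_worker(in_q, out_q):
    # Idles on get(block=True) until arguments arrive, then ships the result back.
    while True:
        arguments = in_q.get(block=True)
        out_q.put(heavy_function(arguments))

if __name__ == '__main__':
    in_q, out_q = multiprocessing.Queue(), multiprocessing.Queue()
    worker = multiprocessing.Process(target=heavy_worker, args=(in_q, out_q), daemon=True)
    worker.start()

    a = None
    waiting = False
    while True:
        if some_condition and not waiting:   # request the expensive result once
            in_q.put(x)
            waiting = True
        if waiting and not out_q.empty():    # the finished result is ready
            a = out_q.get()
            waiting = False
        if a is not None:
            do_something_with(a)
        time.sleep(0.1)                      # the rest of the main loop keeps running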
I am using the multiprocessing module in python. Here is a sample of the code I am using:
import multiprocessing as mp

def function(fun_var1, fun_var2):
    b = fun_var1 + fun_var2
    # and more computationally intensive stuff happens here
    return b
    # my program freezes after the return command

class Worker(mp.Process):

    def __init__(self, queue_obj, func_var1, func_var2):
        mp.Process.__init__(self)
        self.queue_obj = queue_obj
        self.func_var1 = func_var1
        self.func_var2 = func_var2

    def run(self):
        self.var = function(self.func_var1, self.func_var2)
        self.queue_obj.put(self.var)

if __name__ == '__main__':
    mp.freeze_support()
    queue_list = []
    processes = []
    result = []

    for i in range(2):
        queue_list.append(mp.Queue())
        # var1 and var2 are defined in the real program
        processes.append(Worker(queue_list[i], var1, var2))
        processes[i].start()

    for i in range(2):
        processes[i].join()
        result.append(queue_list[i].get())
During runtime of the program two instances of the Worker class are generated which work simultaneously. One instance finishes after about 2 minutes and the other takes about 7 minutes. The first instance returns its results fine. However, the second instance freezes the program when the function() that is called within the run() method returns its value. No error is thrown, the program just does not continue to execute. The console also indicates that it is busy but does not display the >>> prompt. I am completely clueless why this behavior occurs.

The same code works fine for slightly different inputs in the two Worker instances. The only difference I can make out is that the workloads are more equal when it executes correctly. Could the time difference cause trouble? Does anyone have experience with this kind of behavior?

Also note that if I run a serial setup of the program in which function() is just called twice by the main program, the code executes flawlessly. Could there be some timeout involved in the Worker instance that makes it impossible for function() to return its value to the Worker instance? The return value of function() is actually a fairly small list; it contains about 100 float values.
Any suggestions are welcomed!
This is a bit of an educated guess without actually seeing what's going on in your Worker, but is it possible that your child process has put items into the Queue that haven't been consumed? The documentation has a warning about this:
Warning

As mentioned above, if a child process has put items on a queue (and it has not used JoinableQueue.cancel_join_thread), then that process will not terminate until all buffered items have been flushed to the pipe.

This means that if you try joining that process you may get a deadlock unless you are sure that all items which have been put on the queue have been consumed. Similarly, if the child process is non-daemonic then the parent process may hang on exit when it tries to join all its non-daemonic children.

Note that a queue created using a manager does not have this issue. See Programming guidelines.
It might be worth trying to create your Queue objects using mp.Manager().Queue() and seeing if the issue goes away.
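The other fix the warning points at is to drain each queue before joining the corresponding process; a minimal reshuffle of the final loop from the question, assuming each Worker puts exactly one item:

# Drain each queue before joining: get() blocks until the item arrives,
# so by the time join() is called the child's queue buffer is already
# flushed and it can exit without deadlocking.
for i in range(2):
    result.append(queue_list[i].get())
    processes[i].join()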