Python asynchronous method

I'm having trouble with async methods. I've already tried many examples from stack overflow, such as:
loop = asyncio.get_event_loop()
loop.run_until_complete(method())
loop.close()
-----------
loop = asyncio.get_event_loop()
tasks = [
    asyncio.ensure_future(method()),
    asyncio.ensure_future(method()),
]
loop.run_until_complete(asyncio.wait(tasks))
loop.close()
The problem with those approaches is that the number of calls and the methods have to be specified before executing. What I need is a way to call a specific method many times from the terminal, without the program getting stuck waiting for the last call.

You can use the multiprocessing module:
import multiprocessing
import random
import time

def printRandomNumber():
    print(random.randint(0, 100))  # output a random number 0-100
    time.sleep(30)  # pause 30 seconds

FUNCS = {
    'foo': printRandomNumber,
}

def createWorker(name):
    func = FUNCS.get(name, lambda: None)
    process = multiprocessing.Process(target=func, name=name)
    process.start()

while True:
    func_name = input('next function name: ')
    createWorker(func_name)
When executed, you can type "foo" and printRandomNumber will run in the background. You can, of course, change "foo" to whatever you like (like "printRandomNumber"), and with a little work, you could even get some arguments working.
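For arguments, one option is to read them on the same input line and pass them through Process(args=...). A minimal sketch (the printNumberAfter function and the input format are only illustrations, not part of the original answer):

import multiprocessing
import time

def printNumberAfter(delay, number):
    time.sleep(int(delay))   # pause for the requested number of seconds
    print(number)            # then print the requested value

FUNCS = {
    'foo': printNumberAfter,
}

def createWorker(name, args):
    func = FUNCS.get(name, lambda *a: None)
    multiprocessing.Process(target=func, name=name, args=args).start()

while True:
    # e.g. type: foo 5 42  ->  prints 42 after 5 seconds, in the background
    name, *args = input('next function name and args: ').split()
    createWorker(name, tuple(args))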

Related

How to run multiple functions sequentially with multiprocessing python?

I'm trying to run multiple functions with multiprocessing and running into a bit of a wall. I want to run an initial function to completion on all processes/inputs and then run 2 or 3 other functions in parallel on the output of the first function. I've already got my search function; the code below is there for the sake of explanation.
I'm not sure how to continue the code from here. I've put my initial attempt below. I want all instances of process1 to finish and then process2 and process3 to start in parallel.
Code is something like:
from multiprocessing import Pool
import multiprocessing

def init(*args):
    global working_dir
    [working_dir] = args

def process1(InFile):
    python.DoStuffWith.InFile
    Output.save.in(working_dir)

def process2(queue):
    inputfiles2 = []
    python.searchfunction.appendOutputof.process1.to.inputfiles2
    python.DoStuffWith.process1.Output
    python.Output

def process3(queue):
    inputfiles2 = []
    python.searchfunction.appendOutputof.process1.to.inputfiles2
    python.DoStuffWith.process1.Output
    python.Output

def MCprocess():
    working_dir = input("enter input: ")
    inputfiles1 = []
    python.searchfunction.appendfilesin.working_dir.to.inputfiles1
    with Pool(initializer=init, initargs=[working_dir], processes=16) as pool:
        pool.map(process1, inputfiles1)
        pool.close()

    # Edited code
    queue = multiprocessing.Queue
    queue.put(working_dir)
    queue.put(working_dir)
    ProcessTwo = multiprocessing.Process(target=process2, args=(queue,))
    ProcessThree = multiprocessing.Process(target=process3, args=(queue,))
    ProcessTwo.start()
    ProcessThree.start()

    # OLD CODE
    # with Pool(initializer=init, initargs=[working_dir], processes=16) as pool:
    #     pool.map_async(process2)
    #     pool.map_async(process3)

if __name__ == '__main__':
    MCprocess()
Your best bet is to use an Event. The first process calls event.set() when it is done, to indicate that the event has happened. The waiting processes use event.wait() or one of its variants to block until the event has been set.
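A minimal sketch of that pattern (the worker function and names below are illustrative, not taken from the question):

import multiprocessing

def worker(event, name):
    event.wait()  # block until the first stage signals completion
    print(f'{name}: stage one finished, starting now')

if __name__ == '__main__':
    done = multiprocessing.Event()

    p2 = multiprocessing.Process(target=worker, args=(done, 'process2'))
    p3 = multiprocessing.Process(target=worker, args=(done, 'process3'))
    p2.start()
    p3.start()

    # ... run the first stage here, e.g. pool.map(process1, inputfiles1) ...

    done.set()  # wake up both waiting processes
    p2.join()
    p3.join()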

How do I have a forever loop running in the background [duplicate]

I researched first and couldn't find an answer to my question. I am trying to run multiple functions in parallel in Python.
I have something like this:
files.py
import time
import common  # common is a util class that handles all the IO stuff

dir1 = 'C:\folder1'
dir2 = 'C:\folder2'
filename = 'test.txt'
addFiles = [25, 5, 15, 35, 45, 25, 5, 15, 35, 45]

def func1():
    c = common.Common()
    for i in range(len(addFiles)):
        c.createFiles(addFiles[i], filename, dir1)
        c.getFiles(dir1)
        time.sleep(10)
        c.removeFiles(addFiles[i], dir1)
        c.getFiles(dir1)

def func2():
    c = common.Common()
    for i in range(len(addFiles)):
        c.createFiles(addFiles[i], filename, dir2)
        c.getFiles(dir2)
        time.sleep(10)
        c.removeFiles(addFiles[i], dir2)
        c.getFiles(dir2)
I want to call func1 and func2 and have them run at the same time. The functions do not interact with each other or operate on the same object. Right now I have to wait for func1 to finish before func2 starts. How do I do something like below:
process.py
from files import func1, func2
runBothFunc(func1(), func2())
I want to be able to create both directories pretty close to the same time because every minute I am counting how many files are being created. If the directory isn't there it will throw off my timing.
You could use threading or multiprocessing.
Due to the Global Interpreter Lock in CPython, threading is unlikely to achieve true parallelism. For this reason, multiprocessing is generally a better bet.
Here is a complete example:
from multiprocessing import Process

def func1():
    print('func1: starting')
    for i in range(10000000): pass
    print('func1: finishing')

def func2():
    print('func2: starting')
    for i in range(10000000): pass
    print('func2: finishing')

if __name__ == '__main__':
    p1 = Process(target=func1)
    p1.start()
    p2 = Process(target=func2)
    p2.start()
    p1.join()
    p2.join()
The mechanics of starting/joining child processes can easily be encapsulated into a function along the lines of your runBothFunc:
def runInParallel(*fns):
    proc = []
    for fn in fns:
        p = Process(target=fn)
        p.start()
        proc.append(p)
    for p in proc:
        p.join()

runInParallel(func1, func2)
If your functions are mainly doing I/O work (and less CPU work) and you have Python 3.2+, you can use a ThreadPoolExecutor:
from concurrent.futures import ThreadPoolExecutor

def run_io_tasks_in_parallel(tasks):
    with ThreadPoolExecutor() as executor:
        running_tasks = [executor.submit(task) for task in tasks]
        for running_task in running_tasks:
            running_task.result()

run_io_tasks_in_parallel([
    lambda: print('IO task 1 running!'),
    lambda: print('IO task 2 running!'),
])
If your functions are mainly doing CPU work (and less I/O work) and you have Python 2.6+, you can use the multiprocessing module:
from multiprocessing import Process

def run_cpu_tasks_in_parallel(tasks):
    running_tasks = [Process(target=task) for task in tasks]
    for running_task in running_tasks:
        running_task.start()
    for running_task in running_tasks:
        running_task.join()

run_cpu_tasks_in_parallel([
    lambda: print('CPU task 1 running!'),
    lambda: print('CPU task 2 running!'),
])
This can be done elegantly with Ray, a system that allows you to easily parallelize and distribute your Python code.
To parallelize your example, you'd need to define your functions with the @ray.remote decorator, and then invoke them with .remote.
import ray

ray.init()

dir1 = 'C:\\folder1'
dir2 = 'C:\\folder2'
filename = 'test.txt'
addFiles = [25, 5, 15, 35, 45, 25, 5, 15, 35, 45]

# Define the functions.
# You need to pass every global variable used by the function as an argument.
# This is needed because each remote function runs in a different process,
# and thus it does not have access to the global variables defined in
# the current process.

@ray.remote
def func1(filename, addFiles, dir):
    # func1() code here...
    pass

@ray.remote
def func2(filename, addFiles, dir):
    # func2() code here...
    pass

# Start two tasks in the background and wait for them to finish.
ray.get([func1.remote(filename, addFiles, dir1), func2.remote(filename, addFiles, dir2)])
If you pass the same argument to both functions and the argument is large, a more efficient way to do this is to use ray.put(). This avoids serializing the large argument twice and creating two memory copies of it:
largeData_id = ray.put(largeData)
ray.get([func1.remote(largeData_id), func2.remote(largeData_id)])
Important - If func1() and func2() return results, you need to rewrite the code as follows:
ret_id1 = func1.remote(filename, addFiles, dir1)
ret_id2 = func2.remote(filename, addFiles, dir2)
ret1, ret2 = ray.get([ret_id1, ret_id2])
There are a number of advantages of using Ray over the multiprocessing module. In particular, the same code will run on a single machine as well as on a cluster of machines. For more advantages of Ray see this related post.
It seems like you have a single function that you need to call with two different parameters. This can be done elegantly using a combination of concurrent.futures and map with Python 3.2+.
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def sleep_secs(seconds):
    time.sleep(seconds)
    print(f'{seconds} has been processed')

secs_list = [2, 4, 6, 8, 10, 12]
Now, if your operation is IO bound, then you can use the ThreadPoolExecutor as such:
with ThreadPoolExecutor() as executor:
    results = executor.map(sleep_secs, secs_list)
Note how map is used here to map your function to the list of arguments.
Now, if your function is CPU bound, then you can use the ProcessPoolExecutor:
with ProcessPoolExecutor() as executor:
    results = executor.map(sleep_secs, secs_list)
If you are not sure, you can simply try both and see which one gives you better results.
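For instance, a quick way to compare the two (a sketch that reuses the sleep_secs and secs_list defined above):

import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def run_with(executor_cls):
    start = time.perf_counter()
    with executor_cls() as executor:
        list(executor.map(sleep_secs, secs_list))  # drain the iterator so all tasks finish
    return time.perf_counter() - start

if __name__ == '__main__':
    print('threads:  ', run_with(ThreadPoolExecutor))
    print('processes:', run_with(ProcessPoolExecutor))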
Finally, if you are looking to print out your results, you can simply do this:
with ThreadPoolExecutor() as executor:
    results = executor.map(sleep_secs, secs_list)
    for result in results:
        print(result)
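Note that sleep_secs as written above returns None, so this loop would just print None for each item; to get something meaningful back, return a value instead of printing inside the worker, for example:

def sleep_secs(seconds):
    time.sleep(seconds)
    return f'{seconds} has been processed'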
In 2021 the easiest way is to use asyncio:
import asyncio, time

async def say_after(delay, what):
    await asyncio.sleep(delay)
    print(what)

async def main():
    task1 = asyncio.create_task(
        say_after(4, 'hello'))
    task2 = asyncio.create_task(
        say_after(3, 'world'))

    print(f"started at {time.strftime('%X')}")

    # Wait until both tasks are completed (should take
    # around 4 seconds, the longer of the two delays.)
    await task1
    await task2

    print(f"finished at {time.strftime('%X')}")

asyncio.run(main())
References:
[1] https://docs.python.org/3/library/asyncio-task.html
If you are a Windows user using Python 3, then this post will help you do parallel programming in Python. When you run an ordinary multiprocessing Pool program, you will get an error regarding the main function in your program. This is because Windows has no fork() functionality. The post below gives a solution to the problem:
http://python.6.x6.nabble.com/Multiprocessing-Pool-woes-td5047050.html
Since I was using Python 3, I changed the program a little, like this:
from types import FunctionType
import marshal

def _applicable(*args, **kwargs):
    name = kwargs['__pw_name']
    code = marshal.loads(kwargs['__pw_code'])
    gbls = globals()  # gbls = marshal.loads(kwargs['__pw_gbls'])
    defs = marshal.loads(kwargs['__pw_defs'])
    clsr = marshal.loads(kwargs['__pw_clsr'])
    fdct = marshal.loads(kwargs['__pw_fdct'])
    func = FunctionType(code, gbls, name, defs, clsr)
    func.fdct = fdct
    del kwargs['__pw_name']
    del kwargs['__pw_code']
    del kwargs['__pw_defs']
    del kwargs['__pw_clsr']
    del kwargs['__pw_fdct']
    return func(*args, **kwargs)

def make_applicable(f, *args, **kwargs):
    if not isinstance(f, FunctionType):
        raise ValueError('argument must be a function')
    kwargs['__pw_name'] = f.__name__  # edited
    kwargs['__pw_code'] = marshal.dumps(f.__code__)  # edited
    kwargs['__pw_defs'] = marshal.dumps(f.__defaults__)  # edited
    kwargs['__pw_clsr'] = marshal.dumps(f.__closure__)  # edited
    kwargs['__pw_fdct'] = marshal.dumps(f.__dict__)  # edited
    return _applicable, args, kwargs

def _mappable(x):
    x, name, code, defs, clsr, fdct = x
    code = marshal.loads(code)
    gbls = globals()  # gbls = marshal.loads(gbls)
    defs = marshal.loads(defs)
    clsr = marshal.loads(clsr)
    fdct = marshal.loads(fdct)
    func = FunctionType(code, gbls, name, defs, clsr)
    func.fdct = fdct
    return func(x)

def make_mappable(f, iterable):
    if not isinstance(f, FunctionType):
        raise ValueError('argument must be a function')
    name = f.__name__  # edited
    code = marshal.dumps(f.__code__)  # edited
    defs = marshal.dumps(f.__defaults__)  # edited
    clsr = marshal.dumps(f.__closure__)  # edited
    fdct = marshal.dumps(f.__dict__)  # edited
    return _mappable, ((i, name, code, defs, clsr, fdct) for i in iterable)
After adding these functions, the problem code above also changes a little, like this:
from multiprocessing import Pool
from poolable import make_applicable, make_mappable

def cube(x):
    return x**3

if __name__ == "__main__":
    pool = Pool(processes=2)
    results = [pool.apply_async(*make_applicable(cube, x)) for x in range(1, 7)]
    print([result.get(timeout=10) for result in results])
And I got the output:
[1, 8, 27, 64, 125, 216]
I think this post may be useful for some Windows users.
There's no way to guarantee that two functions will execute in sync with each other, which seems to be what you want to do.
The best you can do is to split the function up into several steps, then wait for both processes to finish at critical synchronization points using Process.join, as @aix's answer mentions.
This is better than time.sleep(10) because you can't guarantee exact timings. With explicit waiting, you're saying that the functions must be done executing that step before moving to the next, instead of assuming it will be done within 10 seconds, which isn't guaranteed based on what else is going on on the machine.
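A minimal sketch of that idea (the step function and phase names here are only illustrative):

from multiprocessing import Process

def step(name, phase):
    print(f'{name}: running phase {phase}')

def run_phase(phase):
    # Start both functions' work for this phase, then wait for both
    # to finish before the next phase is allowed to start.
    procs = [Process(target=step, args=(name, phase)) for name in ('func1', 'func2')]
    for p in procs:
        p.start()
    for p in procs:
        p.join()  # synchronization point

if __name__ == '__main__':
    for phase in (1, 2, 3):
        run_phase(phase)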
(about How can I simultaneously run two (or more) functions in python?)
With asyncio, sync and async tasks can be run concurrently:
import asyncio
import time

def function1():
    # performing blocking tasks
    while True:
        print("function 1: blocking task ...")
        time.sleep(1)

async def function2():
    # perform non-blocking tasks
    while True:
        print("function 2: non-blocking task ...")
        await asyncio.sleep(1)

async def main():
    loop = asyncio.get_running_loop()
    await asyncio.gather(
        # https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.run_in_executor
        loop.run_in_executor(None, function1),
        function2(),
    )

if __name__ == '__main__':
    asyncio.run(main())

How to call an async coroutine periodically using an RxPY interval observable?

I need to create an Observable stream which emits the result of an async coroutine at regular intervals.
intervalRead is a function which returns an Observable and takes as parameters the interval rate and an async coroutine function fun, which needs to be called at the defined interval.
My first approach was to create an observable with the interval factory method, then use map to call the coroutine, using from_future to wrap it in an Observable and get the value returned by the coroutine.
async def foo():
    await asyncio.sleep(1)
    return 42

def intervalRead(rate, fun) -> Observable:
    loop = asyncio.get_event_loop()
    return rx.interval(rate).pipe(
        map(lambda i: rx.from_future(loop.create_task(fun()))),
    )

async def main():
    obs = intervalRead(5, foo)
    obs.subscribe(
        on_next=lambda item: print(item)
    )

loop = asyncio.get_event_loop()
loop.create_task(main())
loop.run_forever()
Yet the output I get is not the result of the coroutine, but the Observable returned by from_future, emitted at the specified interval:
output: <rx.core.observable.observable.Observable object at 0x033B5650>
How could I get the actual value returned by that Observable? I would expect 42.
My second approach was to create a custom observable:
def intervalRead(rate, fun) -> rx.Observable:
    interval = rx.interval(rate)

    def subs(observer: Observer, scheduler=None):
        loop = asyncio.get_event_loop()

        def on_timer(i):
            task = loop.create_task(fun())
            from_future(task).subscribe(
                on_next=lambda i: observer.on_next(i),
                on_error=lambda e: observer.on_error(e),
                on_completed=lambda: print('coro completed')
            )

        interval.subscribe(on_next=on_timer, on_error=lambda e: print(e))

    return rx.create(subs)
However, on subscription, from_future(task) never emits a value. Why does this happen?
Yet if I write intervalRead like this:
def intervalRead(rate, fun):
    loop = asyncio.get_event_loop()
    task = loop.create_task(fun())
    return from_future(task)
I get the expected result: 42. Obviously this doesn't solve my issue, but it confuses me why it doesn't work in my second approach.
Finally, I experimented with a third approach using the rx.concurrency CurrentThreadScheduler and scheduling an action periodically with the schedule_periodic method. Yet I'm facing the same issue I get with the second approach.
def funWithScheduler(rate, fun):
    loop = asyncio.get_event_loop()
    scheduler = CurrentThreadScheduler()
    subject = rx.subjects.Subject()

    def action(param):
        obs = rx.from_future(loop.create_task(fun())).subscribe(
            on_next=lambda item: subject.on_next(item),
            on_error=lambda e: print(f'error in action {e}'),
            on_completed=lambda: print('action completed')
        )
        obs.dispose()

    scheduler.schedule_periodic(rate, action)
    return subject
I would appreciate any insight into what I am missing, or any other suggestions to accomplish what I need. This is my first project with asyncio and RxPY; I have only used RxJS in the context of an Angular project, so any help is welcome.
Your first example almost works. There are only two changes needed to get it working:
First, the result of from_future is an observable that emits a single item (the value of the future when it completes). So the output of map is a higher-order observable (an observable that emits observables). These child observables can be flattened by using the merge_all operator after map, or by using flat_map instead of map.
Then the interval operator must schedule its timer on the AsyncIO loop, which is not the case by default: the default scheduler is the TimeoutScheduler, and it spawns a new thread. So in the original code, the task cannot be scheduled on the AsyncIO event loop because create_task is called from another thread. Using the scheduler parameter on the call to subscribe declares the default scheduler to use for the whole operator chain.
The following code works (42 is printed every 5 seconds):
import asyncio

import rx
import rx.operators as ops
from rx.scheduler.eventloop import AsyncIOScheduler

async def foo():
    await asyncio.sleep(1)
    return 42

def intervalRead(rate, fun) -> rx.Observable:
    loop = asyncio.get_event_loop()
    return rx.interval(rate).pipe(
        ops.map(lambda i: rx.from_future(loop.create_task(fun()))),
        ops.merge_all()
    )

async def main(loop):
    obs = intervalRead(5, foo)
    obs.subscribe(
        on_next=lambda item: print(item),
        scheduler=AsyncIOScheduler(loop)
    )

loop = asyncio.get_event_loop()
loop.create_task(main(loop))
loop.run_forever()
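For reference, a sketch of the flat_map variant mentioned above (the same imports and the same subscribe call with the AsyncIOScheduler are assumed):

def intervalRead(rate, fun) -> rx.Observable:
    loop = asyncio.get_event_loop()
    return rx.interval(rate).pipe(
        # flat_map maps each tick to an inner observable and flattens the results
        ops.flat_map(lambda i: rx.from_future(loop.create_task(fun()))),
    )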

How to stop a function at a specific time and continue with next function in python?

I have this code:
function_1()
function_2()
Normally, function_1() takes 10 hours to finish.
But I want function_1() to run for only 2 hours; after 2 hours, function_1 must return and the program must continue with function_2(). It shouldn't wait for function_1() to be completed. Is there a way to do this in Python?
What makes functions in Python able to interrupt their execution and resume is the use of the "yield" statement -- your function then works as a generator object. You call next() on this object to have it start, or to continue after the last yield:
import time

def function_1():
    start_time = time.time()
    while True:
        # do long stuff
        running_time = time.time() - start_time
        if running_time > 2 * 60 * 60:  # 2 hours
            yield  # <partial results can be yielded here, if you want>
            start_time = time.time()

runner = function_1()
while True:
    try:
        next(runner)
    except StopIteration:
        # function_1 has run to the end
        break
    # do other stuff
If you don't mind leaving function_1 running:
from threading import Thread
import time
Thread(target=function_1).start()
time.sleep(60*60*2)
Thread(target=function_2).start()
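If the leftover function_1 thread should not keep the program alive once the main work is done, one option (a sketch, not part of the original answer) is to mark it as a daemon thread:

from threading import Thread
import time

# daemon=True means this thread will not block interpreter exit
Thread(target=function_1, daemon=True).start()
time.sleep(60 * 60 * 2)   # let it run for 2 hours
Thread(target=function_2).start()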
You can try to use the gevent module: start the function in a greenlet and kill that greenlet after some time.
Here is an example:
import gevent

# function which you can't modify
def func1(some_arg):
    # do something
    pass

def func2():
    # do something
    pass

if __name__ == '__main__':
    g = gevent.Greenlet(func1, 'Some Argument in func1')
    g.start()
    gevent.sleep(60*60*2)
    g.kill()

    # call the rest of the functions
    func2()
from multiprocessing import Process

p1 = Process(target=function_1)
p1.start()
p1.join(60*60*2)
if p1.is_alive():
    p1.terminate()

function_2()
I hope this helps
I just tested this using the following code:
import time
from multiprocessing import Process

def f1():
    print(0)
    time.sleep(10000)
    print(1)

def f2():
    print(2)

p1 = Process(target=f1)
p1.start()
p1.join(6)
if p1.is_alive():
    p1.terminate()
f2()
Output is as expected:
0
2
You can time the execution using the datetime module. Your optimizer function probably has a loop somewhere; inside the loop you can test how much time has passed since you started the function.
import datetime

def function_1():
    t_end = datetime.datetime.now() + datetime.timedelta(hours=2)
    while not converged:
        # do your thing
        if datetime.datetime.now() > t_end:
            return
