Nothing is printed while using concurrent.futures - python

I want to run processes in parallel, so I am using concurrent.futures. The problem is that it does not execute the function hello().
import time
import concurrent.futures

def hello(name):
    print(f'hello {name}')
    time.sleep(1)

if __name__ == "__main__":
    t1 = time.perf_counter()
    names = ["Jack", "John", "Lily", "Stephen"]
    with concurrent.futures.ProcessPoolExecutor() as executor:
        executor.map(hello, names)
    t2 = time.perf_counter()
    print(f'{t2-t1} seconds')
Output
0.5415315 seconds

After going through the concurrent.futures documentation, I found that ProcessPoolExecutor does not work in the interactive interpreter, so you need to put the code in a file and run it via the command prompt / bash shell.
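For reference, once the script is in a file, a small variation (my own sketch, not part of the original answer) that uses submit and as_completed also makes any exception raised inside a worker visible in the main process:

import concurrent.futures
import time

def hello(name):
    print(f'hello {name}')
    time.sleep(1)
    return name

if __name__ == "__main__":
    names = ["Jack", "John", "Lily", "Stephen"]
    with concurrent.futures.ProcessPoolExecutor() as executor:
        futures = [executor.submit(hello, n) for n in names]
        for fut in concurrent.futures.as_completed(futures):
            print(f'done: {fut.result()}')  # result() re-raises any worker exception here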

Related

Capture / redirect all output of ProcessPoolExecutor

I am trying to capture all output from a ProcessPoolExecutor.
Imagine you have a file func.py:
print("imported") # I do not want this print in subprocesses
def f(x):
return x
then you run that function with a ProcessPoolExecutor like
from concurrent.futures import ProcessPoolExecutor
from func import f  # ⚠️ the import will print! ⚠️

if __name__ == "__main__":
    with ProcessPoolExecutor() as ex:  # ⚠️ the import will happen here again and print! ⚠️
        futs = [ex.submit(f, i) for i in range(15)]
        for fut in futs:
            fut.result()
Now I can capture the output of the first import using, e.g., contextlib.redirect_stdout; however, I want to capture all output from the subprocesses too and redirect it to the stdout of the main process.
In my real use case, I get warnings that I want to capture, but a simple print reproduces the problem.
This is relevant for preventing the following bug: https://github.com/Textualize/rich/issues/2371.
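One possible approach (my own sketch, not an answer from the original thread): give every worker an initializer that replaces sys.stdout with a small file-like object writing to a shared queue, then drain that queue in the main process. Output produced inside the workers after the initializer runs is forwarded; whether the import-time print is caught depends on the start method and on when the module actually gets imported in the worker.

import sys
from concurrent.futures import ProcessPoolExecutor
from multiprocessing import Manager

class QueueWriter:
    """Sketch: file-like object that forwards every write to a multiprocessing queue."""
    def __init__(self, queue):
        self.queue = queue
    def write(self, text):
        if text:
            self.queue.put(text)
    def flush(self):
        pass

def redirect_stdout_to_queue(queue):
    # Runs once in each worker process before it executes any task.
    sys.stdout = QueueWriter(queue)

def f(x):
    print(f"working on {x}")  # goes to the queue, not the worker's own stdout
    return x

if __name__ == "__main__":
    with Manager() as manager:
        queue = manager.Queue()
        with ProcessPoolExecutor(initializer=redirect_stdout_to_queue,
                                 initargs=(queue,)) as ex:
            results = list(ex.map(f, range(15)))
        # Replay everything the workers printed on the main process's stdout.
        while not queue.empty():
            sys.stdout.write(queue.get())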

Get current process id when using `concurrent.futures.ProcessPoolExecutor`

I am using concurrent.futures.ProcessPoolExecutor as a high-level API to multiprocessing.
I want to identify the current process in the worker functions.
With the low-level multiprocessing API I can do it like this:
import multiprocessing

def worker():
    print(multiprocessing.current_process())
Is there a current_process() equivalent when using workers with the ProcessPoolExecutor()?
Since each execution happens in a separate process, you can simply do
import os

def worker():
    # Get the process ID of the current process
    pid = os.getpid()
    # ... do something with pid
For example,
from concurrent.futures import ProcessPoolExecutor
import os
import time

def task():
    time.sleep(1)
    print("Executing on Process {}".format(os.getpid()))

def main():
    with ProcessPoolExecutor(max_workers=3) as executor:
        for i in range(3):
            executor.submit(task)

if __name__ == '__main__':
    main()
➜ python3.9 so.py
Executing on Process 71137
Executing on Process 71136
Executing on Process 71138
Note that if the task at hand is small and executes fast enough, your pid might stay the same, because the same worker process can pick up several tasks. Try it out by removing the time.sleep call from my example.
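For completeness (not part of the answer above): the workers created by ProcessPoolExecutor are ordinary multiprocessing processes, so multiprocessing.current_process() also works inside them and gives you the worker's name as well as its pid:

from concurrent.futures import ProcessPoolExecutor
import multiprocessing
import time

def task():
    time.sleep(1)
    proc = multiprocessing.current_process()
    print(f"Executing on {proc.name} (pid {proc.pid})")

if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=3) as executor:
        for i in range(3):
            executor.submit(task)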

How could a function look that adds another function to a list of functions that run in parallel in Python?

def update():
    while True:
        loadData()

def main():
    doStuff()
    addToParallelFunctions(update)
    doOtherStuff()

if __name__ == '__main__':
    addToParallelFunctions(main)
How could that addToParallelFunctions function look, so that update runs in parallel with main and I can add other functions that also run in parallel with main?
I've tried this, but it paused the main function and ran the other function until it was finished.
from multiprocessing import Process

def addProcess(*funcs):
    for func in funcs:
        Process(target=func).start()
I enhanced your example; it works for me. Both the main script and the main function continue.
import time
from multiprocessing import Process

def addProcess(*funcs):
    for func in funcs:
        Process(target=func).start()

def update():
    for i in range(20):
        print(i)
        time.sleep(2)

def main():
    print("start")
    addProcess(update)
    print("Main function continues")

if __name__ == '__main__':
    addProcess(main)
    print("main script continues")
Next time, please post an example that is reproducible.
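A small extension (my own sketch, not part of the answer above): have addProcess return the started Process objects, so the caller keeps a list of the parallel "functions" and can join or terminate them later:

import time
from multiprocessing import Process

def addProcess(*funcs):
    procs = [Process(target=func) for func in funcs]
    for p in procs:
        p.start()
    return procs

def update():
    for i in range(3):
        print("update", i)
        time.sleep(1)

if __name__ == '__main__':
    running = addProcess(update)
    print("main script continues")
    for p in running:   # wait for the background work when you need it done
        p.join()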

How do I run a task in the background with a delay?

I have the following code:
import time

def wait10seconds():
    for i in range(10):
        time.sleep(1)
    return 'Counted to 10!'

print(wait10seconds())
print('test')
Now my question is: how do you make print('test') run before wait10seconds() has finished, without swapping the two lines?
I want the output to be the following:
test
Counted to 10!
Anyone know how to fix this?
You can use threads for this, like:
from threading import Thread

my_thread = Thread(target=wait10seconds)  # create a new thread that executes the function
my_thread.start()                         # start it
print('test')                             # print the test
my_thread.join()                          # wait for the function to end
You can use a Timer. Taken from the Python docs page:
from threading import Timer

def hello():
    print("hello, world")

t = Timer(30.0, hello)
t.start()  # after 30 seconds, "hello, world" will be printed
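Adapted to the question (my own variation of the docs snippet, not part of the quoted answer), the delayed message arrives ten seconds later while print('test') runs immediately:

from threading import Timer

def counted():
    print('Counted to 10!')

t = Timer(10.0, counted)  # run counted() in a background thread after 10 seconds
t.start()
print('test')             # printed immediately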
If you are using Python 3.5+, you can use asyncio:
import asyncio

async def wait10seconds():
    for i in range(10):
        await asyncio.sleep(1)
    return 'Counted to 10!'

print(asyncio.run(wait10seconds()))
asyncio.run is new in Python 3.7; on Python 3.5 and 3.6 you won't be able to use asyncio.run, but you can achieve the same thing by working with the event loop directly.
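For reference, a rough Python 3.5/3.6 equivalent using the event loop directly (a sketch based on the note above):

import asyncio

async def wait10seconds():
    for i in range(10):
        await asyncio.sleep(1)
    return 'Counted to 10!'

loop = asyncio.get_event_loop()                     # pre-3.7 replacement for asyncio.run
print(loop.run_until_complete(wait10seconds()))
loop.close()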

How to run 10 python programs simultaneously?

I have a_1.py~a_10.py
I want to run 10 python programs in parallel.
I tried:
from multiprocessing import Process
import os
import subprocess

def info(title):
    print(title)

def f(name):
    # I want to execute each python program
    for i in range(1, 11):
        subprocess.Popen(['python3', f'a_{i}.py'])

if __name__ == '__main__':
    info('main line')
    p = Process(target=f)
    p.start()
    p.join()
but it doesn't work
How do I solve this?
I would suggest using the subprocess module instead of multiprocessing:
import os
import subprocess
import sys

MAX_SUB_PROCESSES = 10

def info(title):
    print(title, flush=True)

if __name__ == '__main__':
    info('main line')

    # Create a list of subprocesses.
    processes = []
    for i in range(1, MAX_SUB_PROCESSES+1):
        pgm_path = f'a_{i}.py'  # Path to Python program.
        command = [sys.executable, pgm_path, os.path.basename(pgm_path)]
        process = subprocess.Popen(command, bufsize=0)
        processes.append(process)

    # Wait for all of them to finish.
    for process in processes:
        process.wait()

    print('Done')
If you just need to call 10 external .py scripts (a_1.py ~ a_10.py) as separate processes, use the subprocess.Popen class:
import subprocess, sys

for i in range(1, 11):
    subprocess.Popen(['python3', f'a_{i}.py'])
# sys.exit()  # optional
It's worth looking at the rich subprocess.Popen signature (you may find some useful params/options).
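If you also want the launching script to wait until all ten have finished (an addition of mine, not part of the answer above), keep the Popen handles and call wait() on each:

import subprocess, sys

# Launch all ten scripts, then block until every one has exited.
procs = [subprocess.Popen([sys.executable, f'a_{i}.py']) for i in range(1, 11)]
for p in procs:
    p.wait()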
You can use a multiprocessing pool to run them concurrently.
import multiprocessing as mp

def worker(module_name):
    """Executes a module externally with python."""
    __import__(module_name)
    return

if __name__ == "__main__":
    max_processes = 5
    module_names = [f"a_{i}" for i in range(1, 11)]
    print(module_names)
    with mp.Pool(max_processes) as pool:
        pool.map(worker, module_names)
The max_processes variable is the maximum number of workers working at any given time. In other words, it's the number of processes spawned by your program. The pool.map(worker, module_names) call uses the available processes and calls worker on each item in your module_names list. We don't include the .py because we're running each module by importing it.
Note: This might not work if the code you want to run in your modules is contained inside if __name__ == "__main__" blocks. If that is the case, then my recommendation would be to move all the code in the if __name__ == "__main__" blocks of the a_{} modules into a main function. Additionally, you would have to change the worker to something like:
def worker(module_name):
    module = __import__(module_name)  # kind of like 'import module_name as module'
    module.main()
    return
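For completeness, the two ideas can be combined (my own sketch, not one of the answers above; it assumes the scripts a_1.py ~ a_10.py exist next to it): run each script in its own interpreter with subprocess, but cap how many run at once with an executor from concurrent.futures. A ThreadPoolExecutor is enough here, because each thread only waits on an external process.

import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run_script(path):
    # Run one script in its own interpreter and report its exit code.
    return path, subprocess.run([sys.executable, path]).returncode

if __name__ == "__main__":
    scripts = [f"a_{i}.py" for i in range(1, 11)]
    with ThreadPoolExecutor(max_workers=5) as executor:
        for path, code in executor.map(run_script, scripts):
            print(f"{path} finished with exit code {code}")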
