If I use the print() function in a subprocess, the subprocess terminates as soon as the main process terminates.
The following programs terminate at the same time.
# main.py
import time
from subprocess import Popen

if __name__ == '__main__':
    proc = Popen(['python', 'sub.py'])

# sub.py
import time

for i in range(10):
    time.sleep(1)
    print(i)
However, if I comment out the print() in sub.py, the subprocess continues after the main process terminates.
Also, if I redirect its stdout in main.py (see below), the subprocess continues as well.
# main.py
import time
from subprocess import Popen

if __name__ == '__main__':
    with open('a.txt', 'w') as out:
        proc = Popen(['python', 'sub.py'], stdout=out)
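If the output isn't needed at all, redirecting to subprocess.DEVNULL (a variant sketch, not from the original post) likewise detaches the child's stdout from the parent:

# main.py (hypothetical variant)
from subprocess import Popen, DEVNULL

if __name__ == '__main__':
    # sub.py's print() calls go nowhere, so they should not fail
    # when the parent's console/pipe disappears.
    proc = Popen(['python', 'sub.py'], stdout=DEVNULL)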
I am trying to use multiprocessing from inside another process that was spawned with Popen. I want to be able to communicate between this process and a new child process, but this "middle" process has a polling read on the pipe with its parent, which seems to block execution of its child process.
Here is my file structure:
entry.py
import subprocess, threading, time, sys

def start():
    # Create process 2
    worker = subprocess.Popen([sys.executable, "-u", "mproc.py"],
                              # When creating the subprocess with an open pipe to stdin and
                              # subsequently polling that pipe, it blocks further communication
                              # between subprocesses
                              stdin=subprocess.PIPE,
                              close_fds=False)
    t = threading.Thread(args=(worker,))  # note: no target is given here
    t.start()
    time.sleep(4)

if __name__ == '__main__':
    start()
mproc.py
import multiprocessing as mp
import time, sys, threading

def exit_on_stdin_close():
    try:
        while sys.stdin.read():
            pass
    except:
        pass

def put_hello(q):
    # We never reach this line if exit_poll.start() is uncommented
    q.put("hello")
    time.sleep(2.4)

def start():
    exit_poll = threading.Thread(target=exit_on_stdin_close, name="exit-poll")
    exit_poll.daemon = True
    # This daemon thread polling stdin blocks execution of subprocesses,
    # but ONLY if running in another process with stdin connected
    # to its parent by PIPE
    exit_poll.start()
    ctx = mp.get_context('spawn')
    q = ctx.Queue()
    p = ctx.Process(target=put_hello, args=(q,))
    # Create process 3
    p.start()
    p.join()
    print(f"result: {q.get()}")

if __name__ == '__main__':
    start()
My desired behavior is that when running entry.py, mproc.py should run as a subprocess and be able to communicate with its own child process to get the Queue output, and this does happen if I don't start the exit-poll daemon thread:
$ python -u entry.py
result: hello
but if exit-poll is running, process 3 blocks as soon as it's started. The put_hello function isn't even entered until the exit-poll thread ends.
Is there a way to create a process 3 from process 2 and communicate between the two, even while the pipe between processes 1 and 2 is being used?
Edit: I can only consistently reproduce this problem on Windows. On Linux (Ubuntu 20.04 WSL) the Queues are able to communicate even with exit-poll running, but only if I'm using the spawn multiprocessing context. If I change it to fork then I get the same behavior that I see on Windows.
I'm trying to capture the stdout of a Popen object while it's running, display the data in a GUI, and log it. However, whenever I try to read from the stdout attribute, my program freezes. Minimal code is below: 'here' prints, then the process's string representation, but then it hangs when it tries to read the first byte of stdout. Why is this the case?
Main script
import subprocess
import os
import sys
from threading import Thread

def print_to_terminal(process):
    print(process)
    print(process.stdout.read(1), flush=True)
    sys.stdout.flush()

runner = subprocess.Popen(['python', 'print_and_wait.py'], env=os.environ, stdout=subprocess.PIPE)
print('here')
# Note: .run() executes the target in the calling thread; .start() would spawn a new thread.
t = Thread(target=print_to_terminal, args=[runner]).run()
print('there')
runner.wait()
The script Popen is calling:
from time import sleep

for _ in range(10):
    print('hello')
    sleep(1)
After comments: this did work once I added flush=True to the print() call in print_and_wait.py. When stdout is a pipe rather than a terminal it is block-buffered, so without the flush the child's output sits in its buffer instead of reaching the parent. See below:
from time import sleep

for _ in range(10):
    print('hello', flush=True)
    sleep(1)
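For the GUI/logging use case, a common pattern (a sketch under assumptions, not the poster's original code) is to start, rather than run, a reader thread that consumes the pipe line by line and hands each line to a callback; print stands in for the GUI/logger callback here:

import subprocess
import sys
from threading import Thread

def pump_output(process, handle_line):
    # Read line by line until the child closes its stdout (EOF).
    for line in process.stdout:
        handle_line(line.rstrip())

# -u makes the child's output unbuffered, so lines arrive as they are printed.
runner = subprocess.Popen([sys.executable, '-u', 'print_and_wait.py'],
                          stdout=subprocess.PIPE, text=True)
t = Thread(target=pump_output, args=(runner, print), daemon=True)
t.start()  # start(), not run(), so reading happens concurrently
runner.wait()
t.join()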
I run this python file to spawn a process:
import os
import pwd
import subprocess
import sys

p = subprocess.Popen(['python', 'process_script.py'],
                     cwd="/execute",
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
process_script.py looks like this:
import time
import random
import string
import helper

def run():
    while True:
        filename = "/execute/" + "".join([random.choice(string.ascii_letters) for j in range(8)]) + ".txt"
        helper.execute(f"echo foo > {filename}")
        time.sleep(10)

run()
[EDIT] In fact, ps shows no other processes, so it looks like the child process terminates... but how and why?
If I run process_script.py directly, the files are created.
As explained in "Popen child process dies when the parent exits", you can add p.wait() at the end of your first script to prevent the parent from exiting first.
This related question is also useful, check it out:
subprocess gets killed even with nohup
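A minimal sketch of that suggestion applied to the first script (unchanged apart from the last line):

import subprocess

p = subprocess.Popen(['python', 'process_script.py'],
                     cwd="/execute",
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
# Block until the child exits, so the parent cannot finish first.
# Caution: with stdout=PIPE a chatty child can fill the pipe buffer
# and deadlock; p.communicate() would be safer if it printed a lot.
p.wait()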
I have a_1.py ~ a_10.py.
I want to run these 10 Python programs in parallel.
I tried:
from multiprocessing import Process
import subprocess
import os

def info(title):
    print(title)

def f():
    # I want to execute each python program
    for i in range(1, 11):
        subprocess.Popen(['python3', f'a_{i}.py'])

if __name__ == '__main__':
    info('main line')
    p = Process(target=f)
    p.start()
    p.join()
but it doesn't work
How do I solve this?
I would suggest using the subprocess module instead of multiprocessing:
import os
import subprocess
import sys

MAX_SUB_PROCESSES = 10

def info(title):
    print(title, flush=True)

if __name__ == '__main__':
    info('main line')
    # Create a list of subprocesses.
    processes = []
    for i in range(1, MAX_SUB_PROCESSES+1):
        pgm_path = f'a_{i}.py'  # Path to Python program.
        # Note: a command string like this works on Windows; on POSIX,
        # pass a list of arguments or use shell=True.
        command = f'"{sys.executable}" "{pgm_path}" "{os.path.basename(pgm_path)}"'
        process = subprocess.Popen(command, bufsize=0)
        processes.append(process)
    # Wait for all of them to finish.
    for process in processes:
        process.wait()
    print('Done')
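If the goal is to also cap how many scripts run at once (an extension, not part of the answer above), one sketch uses a thread pool where each worker blocks on one script at a time; the cap of 3 is an arbitrary example:

import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run_script(path):
    # subprocess.run blocks until the script exits, so each pool
    # thread runs at most one script at a time.
    return subprocess.run([sys.executable, path]).returncode

paths = [f'a_{i}.py' for i in range(1, 11)]
with ThreadPoolExecutor(max_workers=3) as pool:
    return_codes = list(pool.map(run_script, paths))
print(return_codes)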
If you just need to call 10 external .py scripts (a_1.py ~ a_10.py) as separate processes, use the subprocess.Popen class:
import subprocess, sys

for i in range(1, 11):
    subprocess.Popen(['python3', f'a_{i}.py'])
# sys.exit()  # optional
It's worth looking at the rich subprocess.Popen signature; you may find some useful parameters/options.
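For instance (an illustrative sketch; the MODE variable and the redirections are made-up examples, not requirements):

import os, subprocess, sys

proc = subprocess.Popen(
    [sys.executable, 'a_1.py'],
    cwd='.',                              # working directory for the child
    env={**os.environ, 'MODE': 'batch'},  # extend the environment (MODE is hypothetical)
    stdout=subprocess.DEVNULL,            # discard the script's output
    stderr=subprocess.PIPE,               # but capture its errors
)
proc.wait()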
You can use a multiprocessing pool to run them concurrently.
import multiprocessing as mp

def worker(module_name):
    """Executes a module by importing it inside a pool worker process."""
    __import__(module_name)
    return

if __name__ == "__main__":
    max_processes = 5
    module_names = [f"a_{i}" for i in range(1, 11)]
    print(module_names)
    with mp.Pool(max_processes) as pool:
        pool.map(worker, module_names)
The max_processes variable is the maximum number of workers working at any given time. In other words, it's the number of processes spawned by your program. The pool.map(worker, module_names) call uses the available processes and calls worker on each item in the module_names list. We don't include the .py because we're running each module by importing it.
Note: This might not work if the code you want to run in your modules is contained inside if __name__ == "__main__" blocks. If that is the case, then my recommendation would be to move all the code in the if __name__ == "__main__" blocks of the a_{} modules into a main function. Additionally, you would have to change the worker to something like:
def worker(module_name):
    module = __import__(module_name)  # Kind of like 'import module_name as module'
    module.main()
    return
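Each a_{i}.py would then look something like this (a hypothetical sketch of that refactor):

# a_1.py (hypothetical refactor)
def main():
    ...  # the code that used to live under the __name__ guard

if __name__ == "__main__":
    main()  # the script stays runnable on its own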
I'm struggling with some processes that I start with Popen and which themselves start subprocesses. When I start these processes manually in a terminal, every process terminates as expected when I send CTRL+C. But when they run inside a Python program using subprocess.Popen, any attempt to terminate them only gets rid of the parent, not its children.
I tried .terminate() and .kill(), as well as .send_signal() with signal.SIGBREAK and signal.SIGTERM, but in every case I only terminated the parent process.
With this parent process I can reproduce the misbehavior:
#!/usr/bin/python
import time
import sys
import os
import subprocess
import signal

if __name__ == "__main__":
    print os.getpid(), "MAIN: start a process.."
    p = subprocess.Popen([sys.executable, 'process_to_shutdown.py'])
    print os.getpid(), "MAIN: started process", p.pid
    time.sleep(2)
    print os.getpid(), "MAIN: kill the process"
    # these just terminate the parent:
    #p.terminate()
    #p.kill()
    #os.kill(p.pid, signal.SIGINT)
    #os.kill(p.pid, signal.SIGTERM)
    os.kill(p.pid, signal.SIGABRT)
    p.wait()
    print os.getpid(), "MAIN: job done - ciao"
The real-life child process is manage.py from Django, which spawns a few subprocesses and waits for CTRL-C. But the following example seems to work as a stand-in, too:
#!/usr/bin/python
import time
import sys
import os
import subprocess

if __name__ == "__main__":
    timeout = int(sys.argv[1]) if len(sys.argv) >= 2 else 0
    if timeout == 0:
        p = subprocess.Popen([sys.executable, '-u', __file__, '13'])
        print os.getpid(), "just waiting..."
        p.wait()
    else:
        for i in range(timeout):
            time.sleep(1)
            print os.getpid(), i, "alive!"
            sys.stdout.flush()
        print os.getpid(), "ciao"
So, my question in short: how do I kill the process in the first example and get rid of its child processes as well? On Windows, os.kill(p.pid, signal.CTRL_C_EVENT) seems to work in some cases, but what's the right way to do it? And how does a terminal do it?
As Henri Korhonen mentioned in a comment, grouping processes should help. Additionally, if you are on Windows and this is Cygwin Python starting Windows applications, it appears Cygwin Python cannot kill the children. For those cases you would need to run TASKKILL. TASKKILL also takes a group parameter.
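A Python 3 sketch of the grouping approach (the child script name comes from the question; the rest is an assumption about how you would wire it up):

import os
import signal
import subprocess
import sys
import time

if sys.platform == 'win32':
    # Put the child in its own process group; only CTRL_BREAK_EVENT
    # can target a specific group, so send that instead of CTRL_C.
    p = subprocess.Popen([sys.executable, 'process_to_shutdown.py'],
                         creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
    time.sleep(2)
    p.send_signal(signal.CTRL_BREAK_EVENT)
else:
    # start_new_session=True gives the child its own session and
    # process group; killpg then signals every process in that group.
    p = subprocess.Popen([sys.executable, 'process_to_shutdown.py'],
                         start_new_session=True)
    time.sleep(2)
    os.killpg(os.getpgid(p.pid), signal.SIGTERM)
p.wait()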