Python child process doesn't seem to function

I run this python file to spawn a process:
import os
import pwd
import subprocess
import sys
p = subprocess.Popen(['python', 'process_script.py'],
                     cwd="/execute",
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
process_script.py looks like this:
import time
import random
import string
import helper

def run():
    while True:
        filename = "/execute/" + "".join([random.choice(string.ascii_letters) for j in range(8)]) + ".txt"
        helper.execute(f"echo foo > {filename}")
        time.sleep(10)

run()
[EDIT] In fact, ps shows no other processes, so it looks like the child process terminates... but how and why?
If I run process_script.py directly, the files are created.

As explained in Popen child process dies when the parent exits, you can add p.wait() at the end of your first script to prevent the parent from exiting.
This link is also useful, check it out:
subprocess gets killed even with nohup
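A minimal sketch of that fix, reusing the Popen call from the question (the p.wait() line is the only addition):

import subprocess

p = subprocess.Popen(['python', 'process_script.py'],
                     cwd="/execute",
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
p.wait()  # block until the child exits, so the parent cannot exit first and orphan it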

Related

Real time multiprocess stdout monitoring

Right now, I'm using subprocess to run a long-running job in the background. For multiple reasons (PyInstaller + AWS CLI) I can't use subprocess anymore.
Is there an easy way to achieve the same thing as below? Running a long-running Python function in a multiprocessing pool (or something else) and doing real-time processing of stdout/stderr?
import subprocess

process = subprocess.Popen(
    ["python", "long-job.py"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    shell=True,
)
while True:
    out = process.stdout.read(2000).decode()
    if not out:
        err = process.stderr.read().decode()
    else:
        err = ""
    if (out == "" or err == "") and process.poll() is not None:
        break
    live_stdout_process(out)
Thanks
Getting this to work cross-platform is messy... first of all, the Windows implementation of non-blocking pipes is neither user-friendly nor portable.
One option is to have your application read its command-line arguments and conditionally execute a file; you then get to keep using subprocess, since you will be launching yourself with a different argument (see the sketch after the example below).
But to keep it to multiprocessing:
The output must be logged to queues instead of pipes.
You need the child to execute a Python file; this can be done using runpy to execute the file as __main__.
That runpy call should run inside a multiprocessing child, and the child must first redirect its stdout and stderr in the initializer.
When an error happens, your main application must catch it... but if it is too busy reading the output it cannot also wait for the error, so a child thread has to start the multiprocessing pool and wait for the error.
The main process has to create the queues, launch the child thread, and read the output.
Putting it all together:
import multiprocessing
from multiprocessing import Queue
import sys
import concurrent.futures
import threading
import traceback
import runpy
import time

class StdoutQueueWrapper:
    def __init__(self, queue: Queue):
        self._queue = queue

    def write(self, text):
        self._queue.put(text)

    def flush(self):
        pass

def function_to_run():
    # runpy.run_path("long-job.py", run_name="__main__")  # run long-job.py
    print("hello")  # print something
    raise ValueError  # error out

def initializer(stdout_queue: Queue, stderr_queue: Queue):
    sys.stdout = StdoutQueueWrapper(stdout_queue)
    sys.stderr = StdoutQueueWrapper(stderr_queue)

def thread_function(child_stdout_queue, child_stderr_queue):
    with concurrent.futures.ProcessPoolExecutor(
            1, initializer=initializer,
            initargs=(child_stdout_queue, child_stderr_queue)) as pool:
        result = pool.submit(function_to_run)
        try:
            result.result()
        except Exception:
            child_stderr_queue.put(traceback.format_exc())

if __name__ == "__main__":
    child_stdout_queue = multiprocessing.Queue()
    child_stderr_queue = multiprocessing.Queue()

    child_thread = threading.Thread(
        target=thread_function,
        args=(child_stdout_queue, child_stderr_queue),
        daemon=True)
    child_thread.start()

    while True:
        while not child_stdout_queue.empty():
            var = child_stdout_queue.get()
            print(var, end='')
        while not child_stderr_queue.empty():
            var = child_stderr_queue.get()
            print(var, end='')
        if not child_thread.is_alive():
            break
        time.sleep(0.01)  # check output every 0.01 seconds
Note that a direct consequence of running under multiprocessing is that if the child runs into a segmentation fault or some other unrecoverable error, the parent will also die; hence running yourself under subprocess might seem a better option if segfaults are expected.
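For the command-line-argument alternative mentioned above, here's a minimal sketch (the --run-job flag and the line-by-line handling are my own illustrative assumptions, not part of the original answer):

import runpy
import subprocess
import sys

if len(sys.argv) > 1 and sys.argv[1] == "--run-job":
    # child mode: execute the long job inside this same program
    runpy.run_path("long-job.py", run_name="__main__")
    sys.exit(0)

# parent mode: relaunch ourselves as the worker and stream its output;
# under PyInstaller, sys.executable is the frozen app itself, so drop __file__
proc = subprocess.Popen([sys.executable, __file__, "--run-job"],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT,
                        text=True)
for line in proc.stdout:
    print(line, end='')  # real-time, line-by-line processing
proc.wait()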

Deleting NamedTemporaryFile in Python after a subprocess call

I don't want to delete the temp file until the subprocess execution completes, hence I invoke the subprocess script as:
import os
import tempfile
import subprocess
def main():
    with tempfile.NamedTemporaryFile("w", delete=False) as temp:
        temp.write("Hello World")
        temp.flush()
        print(f"Temp file is: {temp.name}")
        args = ["python3",
                os.path.dirname(__file__) + "/hello_world.py",
                "--temp-file", temp.name]
        subprocess.Popen(args)
    return

main()
hello_world.py
import argparse
import sys
def print_hello():
    print("Hello World")
    return

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="""Test case""")
    parser.add_argument('--temp-file',
                        required=True,
                        help='For test')
    args = parser.parse_args()
    print(args)
    print_hello()
    sys.exit(0)
I was hoping the temp file would be deleted once the subprocess execution finishes.
Do I need to manually delete the temp file in this case?
Calling subprocess.Popen() starts the process but does not wait for it to finish.
If you want to wait for the process to finish before exiting the with block, you can use subprocess.run() instead.
Edit: Per your comment, you don't want to wait for the process to finish. Since you are creating the file with delete=False, it won't be deleted when the file pointer is closed at the end of the with block, so you will need to manually delete the path, either in the parent or child process.
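For instance, a minimal sketch of cleaning up in the parent (an illustration, not the only option: the child could equally call os.remove on its --temp-file argument when it is done):

import os
import subprocess
import tempfile

with tempfile.NamedTemporaryFile("w", delete=False) as temp:
    temp.write("Hello World")
    temp.flush()
    path = temp.name  # the file survives the with block because delete=False

try:
    # subprocess.run() waits, so the file is guaranteed to exist while the child runs
    subprocess.run(["python3", "hello_world.py", "--temp-file", path], check=True)
finally:
    os.remove(path)  # manual cleanup is our responsibility with delete=False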

python kill parent process but child process left

When I kill a Python process, the child process started via os.system is not terminated at the same time.
I have looked at Killing child process when parent crashes in python and
Python Process won't call atexit
(atexit does not seem to work with signals).
Does that mean I need to handle this situation by myself? If so, what is the preferred way to do so?
> python main.py
> ps
4792 ttys002 0:00.03 python run.py
4793 ttys002 0:00.03 python loop.py
> kill -15 4792
> ps
4793 ttys002 0:00.03 python loop.py
Sample Code:
main.py
import os
os.system('python loop.py')
loop.py
import time
while True:
    time.sleep(1000)
UPDATE1
I did some experiments and found a workable version, but I am still confused about the logic.
import os
import sys
import signal
import subprocess

def sigterm_handler(_signo, _stack_frame):
    # it raises SystemExit(0):
    print 'go die'
    sys.exit(0)

signal.signal(signal.SIGTERM, sigterm_handler)

try:
    # os.system('python loop.py')
    # os.system won't work; it even seems to ignore SIGTERM entirely for some reason
    subprocess.call(['python', 'loop.py'])
except:
    os.killpg(0, signal.SIGKILL)
kill -15 4792 sends SIGTERM to run.py in your example -- it sends nothing to loop.py (or its parent shell). SIGTERM is not propagated to other processes in the process tree by default.
os.system('python loop.py') starts at least two processes: the shell and the python process. You don't need it; use subprocess.check_call() to run a single child process without the implicit shell. By the way, if your subprocess is a Python script, consider importing it and running the corresponding functions instead.
os.killpg(0, SIGKILL) sends the SIGKILL signal to the current process group. A shell creates a new process group (a job) for each pipeline, and therefore the os.killpg() in the parent has no effect on the child (but see the update). See How to terminate a python subprocess launched with shell=True.
#!/usr/bin/env python
import subprocess
import sys

try:
    p = subprocess.Popen([sys.executable, 'loop.py'])
except EnvironmentError as e:
    sys.exit('failed to start %r, reason: %s' % (sys.executable, e))
else:
    try:  # wait for the child process to finish
        p.wait()
    except KeyboardInterrupt:  # on Ctrl+C (SIGINT)
        # NOTE: the shell sends SIGINT (on Ctrl+C) to the executable itself if
        # the child process is in the same foreground process group as its parent
        sys.exit("interrupted")
Update
It seems os.system(cmd) doesn't create a new process group for cmd:
>>> import os
>>> os.getpgrp()
16180
>>> import sys
>>> cmd = sys.executable + ' -c "import os; print(os.getpgrp())"'
>>> os.system(cmd) #!!! same process group
16180
0
>>> import subprocess
>>> import shlex
>>> subprocess.check_call(shlex.split(cmd))
16180
0
>>> subprocess.check_call(cmd, shell=True)
16180
0
>>> subprocess.check_call(cmd, shell=True, preexec_fn=os.setpgrp) #!!! new
18644
0
and therefore os.system(cmd) in your example should be killed by the os.killpg() call.
Though if I run it in bash, it does create a new process group for each pipeline:
$ python -c "import os; print(os.getpgrp())"
25225
$ python -c "import os; print(os.getpgrp())"
25248
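As a practical takeaway, here's a minimal sketch (my own addition; POSIX-only, and start_new_session needs Python 3.2+) of putting the child in its own process group so you can kill it together with anything it spawns:

import os
import signal
import subprocess

# start_new_session=True gives the child its own session and process group
p = subprocess.Popen(['python', 'loop.py'], start_new_session=True)

# ... later, signal the whole group that the child leads
os.killpg(os.getpgid(p.pid), signal.SIGTERM)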

Python subprocess communicate kills my process

Why does communicate kill my process? I want an interactive process but communicate does something so that I cannot take raw_input any more in my process.
from sys import stdin
from threading import Thread
from time import sleep

if __name__ == '__main__':
    print("Still Running\n")
    x = raw_input()
    i = 0
    while 'n' not in x:
        print("Still Running " + str(i) + " \r\n")
        x = raw_input()
        i += 1
    print("quit")
print(aSubProc.theProcess.communicate('y'))
print(aSubProc.theProcess.communicate('y'))
exception!
self.stdin.write(input)
ValueError: I/O operation on closed file
The communicate and wait methods of Popen objects close the PIPE after the process returns. If you want to stay in communication with the process, try something like this:
import subprocess

proc = subprocess.Popen("some_process", stdout=subprocess.PIPE, stdin=subprocess.PIPE)
proc.stdin.write("input\n")  # include a newline so the child's read of the line completes
proc.stdin.flush()           # push the data through the pipe's buffer
proc.stdout.readline()
Why does communicate kill my process?
From the docs for Popen.communicate(input=None, timeout=None):
Interact with process: Send data to stdin. Read data from stdout and
stderr, until end-of-file is reached. Wait for process to terminate.
emphasis mine
You may call .communicate() only once. It means that you should provide all input at once:
#!/usr/bin/env python
import os
import sys
from subprocess import Popen, PIPE
p = Popen([sys.executable, 'child.py'], stdin=PIPE, stdout=PIPE)
print p.communicate(os.linesep.join('yyn'))[0]
Output
Still Running
Still Running 0
Still Running 1
quit
Notice the doubled newlines: one from '\r\n' and another from the print statement itself in your script for the child process.
Output shows that the child process received three input lines successfully ('y', 'y', and 'n').
Here's a similar code using subprocess.check_output()'s input parameter from Python3.4:
#!/usr/bin/env python3.4
import os
import sys
from subprocess import check_output
output = check_output(['python2', 'child.py'], universal_newlines=True,
                      input='\n'.join('yyn'))
print(output, end='')
It produces the same output.
If you want to provide different input depending on responses from the child process, then use the pexpect module or its analogs, to avoid the issues mentioned in Why not just use a pipe (popen())?
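For example, a minimal pexpect sketch driving the child.py script above (assuming pexpect is installed; it is POSIX-only):

import pexpect

child = pexpect.spawn('python2 child.py', encoding='utf-8')
child.expect('Still Running')    # wait until the child prints its prompt
child.sendline('y')              # choose the reply based on what we just read
child.expect('Still Running 0')
child.sendline('n')              # 'n' makes the child's loop exit
child.expect(pexpect.EOF)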

Process closing

Can I use Popen from the python subprocess module to close a started process? For example, from Popen I run some application. At some point in my code I have to close that running app.
For example, from console in Linux I do:
./some_bin
... It works and logs stdout here ...
Ctrl + C and it breaks
I need something like Ctrl + C but in my program code.
from subprocess import Popen

process = Popen(['slow', 'running', 'program'])
while process.poll() is None:  # poll() returns None while the process is still running
    if raw_input() == 'Kill':
        if process.poll() is None:
            process.kill()
kill() will kill a process. See more here: Python subprocess module
Use the subprocess module.
import subprocess
# all arguments must be passed one at a time inside a list
# they must all be string elements
arguments = ["sleep", "3600"] # first argument is the program's name
process = subprocess.Popen(arguments)
# do whatever you want
process.terminate()
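If you specifically want Ctrl+C-like behavior on Linux, so the program gets a chance to run its cleanup handlers, here is a minimal sketch (my own addition, assuming Python 3.3+ for wait(timeout=...)):

import signal
import subprocess

process = subprocess.Popen(['slow', 'running', 'program'])
# ... later:
process.send_signal(signal.SIGINT)  # the same signal Ctrl+C delivers
try:
    process.wait(timeout=10)        # give it a chance to exit cleanly
except subprocess.TimeoutExpired:
    process.kill()                  # force-kill if it ignored SIGINT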
Some time ago I needed a 'gentle' shutdown for a process by sending CTRL+C in Windows console.
Here's what I have:
import win32api
import win32con
import subprocess
import time
import shlex
cmdline = 'cmd.exe /k "timeout 60"'
args = shlex.split(cmdline)
myprocess = subprocess.Popen(args)
pid = myprocess.pid
print(myprocess, pid)
time.sleep(5)
win32api.GenerateConsoleCtrlEvent(win32con.CTRL_C_EVENT, pid)
# ^^^^^^^^^^^^^^^^^^^^ instead of myprocess.terminate()
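A related variant without pywin32, assuming Python 3 on Windows: create the child in its own console process group and deliver CTRL_BREAK_EVENT, since CTRL_C_EVENT cannot reliably target a single child:

import os
import signal
import subprocess
import time

# a new process group lets us signal this child without hitting our own console
myprocess = subprocess.Popen('cmd.exe /k "timeout 60"',
                             creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
time.sleep(5)
os.kill(myprocess.pid, signal.CTRL_BREAK_EVENT)  # the group id equals the leader's pid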
