Terminate subprocess in Python

I have a wrapper script that runs many other test scripts. Inside one of those test scripts I start a subprocess using the Popen class. The problem is that I don't know how to terminate that child process, return to the main process, and continue with the next test script. My wrapper stops at the test script that spawns the child process and never continues. Can you give me a hint? Thanks.
P.S. Neither kill() nor terminate() nor any other function I considered useful puts me back in the main process. I want to terminate the subprocess and continue with the main process.

Keep a reference to the child in the main script, and call terminate() on that reference:
from subprocess import Popen

class TestApp(object):
    app = None

    def start(self):
        self.app = Popen(['your command'])

    def stop(self):
        self.app.terminate()
In the main script:
app1 = TestApp()
app1.start()
app2 = TestApp()
app2.start()
#do something here
app1.stop()
app2.stop()
#do more here
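A detail worth adding to this answer: on POSIX, terminate() sends SIGTERM but does not reap the child, so it is worth calling wait() afterwards to avoid leaving a zombie. A minimal sketch of a stricter stop(), assuming Python 3 (wait(timeout=...) and subprocess.TimeoutExpired do not exist on Python 2), with an arbitrary 5-second grace period:

    def stop(self):
        self.app.terminate()               # ask politely (SIGTERM on POSIX)
        try:
            self.app.wait(timeout=5)       # reap the child so it cannot become a zombie
        except subprocess.TimeoutExpired:  # requires: import subprocess
            self.app.kill()                # escalate (SIGKILL on POSIX)
            self.app.wait()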

Related

Real time multiprocess stdout monitoring

Right now, I'm using subprocess to run a long-running job in the background. For multiple reasons (PyInstaller + AWS CLI) I can't use subprocess anymore.
Is there an easy way to achieve the same thing as below? Running a long-running Python function in a multiprocessing pool (or something else) and doing real-time processing of stdout/stderr?
import subprocess

process = subprocess.Popen(
    ["python", "long-job.py"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    shell=True,
)
while True:
    out = process.stdout.read(2000).decode()
    if not out:
        err = process.stderr.read().decode()
    else:
        err = ""
    if (out == "" or err == "") and process.poll() is not None:
        break
    live_stdout_process(out)
Thanks
Getting this to work cross-platform is messy. First of all, the Windows implementation of non-blocking pipes is neither user-friendly nor portable.
One option is to have your application read its own command line arguments and conditionally execute a file; that way you still get to use subprocess, since you will be launching yourself with a different argument.
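As an illustration of that self-relaunch idea (a minimal sketch; the --run-job flag and the long_job function are made up for this example):

import subprocess
import sys

def long_job():
    print("doing the long-running work")

if __name__ == "__main__":
    if "--run-job" in sys.argv:
        long_job()  # child mode: we were relaunched with the flag
    else:
        # parent mode: relaunch this same script and stream its output;
        # a PyInstaller binary would use [sys.executable, "--run-job"]
        # instead, since sys.executable is then the frozen app itself
        proc = subprocess.Popen(
            [sys.executable, sys.argv[0], "--run-job"],
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            text=True,
        )
        for line in proc.stdout:
            print("child said:", line, end="")
        proc.wait()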
But to keep it to multiprocessing:
The output must be logged to queues instead of pipes.
You need the child to execute a Python file; this can be done with runpy, which can execute the file as __main__.
That runpy call should run in a multiprocessing child, and the child must first redirect its stdout and stderr in the pool initializer.
When an error happens, your main application must catch it. But if the main thread is busy reading the output, it cannot also wait for the error, so a helper thread has to start the process pool and wait for the error.
The main process then creates the queues, launches the helper thread, and reads the output.
Putting it all together:
import multiprocessing
from multiprocessing import Queue
import sys
import concurrent.futures
import threading
import traceback
import runpy
import time

class StdoutQueueWrapper:
    def __init__(self, queue: Queue):
        self._queue = queue

    def write(self, text):
        self._queue.put(text)

    def flush(self):
        pass

def function_to_run():
    # runpy.run_path("long-job.py", run_name="__main__")  # run long-job.py
    print("hello")    # print something
    raise ValueError  # error out

def initializer(stdout_queue: Queue, stderr_queue: Queue):
    sys.stdout = StdoutQueueWrapper(stdout_queue)
    sys.stderr = StdoutQueueWrapper(stderr_queue)

def thread_function(child_stdout_queue, child_stderr_queue):
    with concurrent.futures.ProcessPoolExecutor(
            1, initializer=initializer,
            initargs=(child_stdout_queue, child_stderr_queue)) as pool:
        result = pool.submit(function_to_run)
        try:
            result.result()
        except Exception:
            child_stderr_queue.put(traceback.format_exc())

if __name__ == "__main__":
    child_stdout_queue = multiprocessing.Queue()
    child_stderr_queue = multiprocessing.Queue()

    child_thread = threading.Thread(
        target=thread_function,
        args=(child_stdout_queue, child_stderr_queue),
        daemon=True)
    child_thread.start()

    while True:
        while not child_stdout_queue.empty():
            var = child_stdout_queue.get()
            print(var, end='')

        while not child_stderr_queue.empty():
            var = child_stderr_queue.get()
            print(var, end='')

        if not child_thread.is_alive():
            break

        time.sleep(0.01)  # check output every 0.01 seconds
Note that a direct consequence of running the job under multiprocessing is that if the child hits a segmentation fault or some other unrecoverable error, the pool breaks and the parent has to handle that failure too, hence running yourself under subprocess may still be the better option if segfaults are expected.

multiprocessing.Process target executing only 2 out of 3 times

I'm using the multiprocessing library to launch a Process in parallel with the main one. I use the target argument at initialisation to specify a function to execute. But the function is not executed approximately 1 out of 3 times.
After digging into the multiprocessing library and using monkey patches to debug, I found out that the _bootstrap method of BaseProcess (the Process class inherits from BaseProcess), which is supposed to call the function specified in the target parameter, was not being called when the Process's start() method was invoked.
As my OS is Ubuntu 18.04, the default method to start the process is fork, so the Popen used to launch the process lives in the file popen_fork.py of the multiprocessing library. In that Popen class, the _launch method calls os.fork() and then calls the Process's _bootstrap method.
With a monkey patch, I found out that the code supposed to run in the child process was not executed at all, and this is why the function specified in the target parameter was not executed when start() was called.
It is not possible to reproduce the problem in a simpler environment than the one I am working on, but here is some code that represents what I am doing and what my problem is:
import time
from multiprocessing import Process
from multiprocessing.managers import BaseManager

class A:
    def __init__(self, manager):
        # manager is an object created by registering it in
        # multiprocessing.managers.BaseManager, so it is made for
        # interprocess communication
        self.manager = manager
        self.p = Process(target=self.process_method, args=(self.manager,))

    def start(self):
        self.p.start()

    def process_method(self, manager):
        # This is the method that is not executed 2 out of 3 times
        print("(A.process_method) Entering method")
        c = 0
        while True:
            print(f"(A.process_method) Sending message : c = {c}")
            manager.on_reception(f"c = {c}")
            time.sleep(5)

class Manager:
    def __init__(self):
        self.msg = None
        self.unread_msg = False

    def on_reception(self, msg):
        self.msg = msg
        self.unread_msg = True

    def get_last_msg(self):
        if self.unread_msg:
            self.unread_msg = False
            return self.msg
        else:
            return None

if __name__ == "__main__":
    BaseManager.register("Manager", Manager)
    bm = BaseManager()
    bm.start()
    manager = bm.Manager()

    a = A(manager)
    a.start()

    while True:
        msg = manager.get_last_msg()
        if msg is not None:
            print(msg)
The method that should be executed every time is A.process_method. In this example it is executed every time, but in my environment it is not.
Has anyone ever had this problem, and do you know how to fix it?
After digging more, I found out that a Flask server was being launched in a Thread and not in a Process. I changed it to run in a Process instead of a Thread, and now everything runs as it is supposed to.
Both Flask and my Process were using the logging package, and that can deadlock when launching a new Process: with the fork start method, the child inherits whatever locks other threads hold at fork time, so if the Flask thread happens to hold the logging lock at that moment, the forked child blocks on it forever before ever reaching the target function.
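A common general mitigation (not mentioned in the original answer) is to use the spawn start method, so the child begins with a fresh interpreter instead of inheriting locks held by other threads at fork time:

import multiprocessing

def worker():
    print("child started cleanly")

if __name__ == "__main__":
    # "spawn" starts a brand-new interpreter for the child, so it cannot
    # inherit a logging lock that a Flask (or any other) thread was holding
    multiprocessing.set_start_method("spawn")
    p = multiprocessing.Process(target=worker)
    p.start()
    p.join()

Note that spawn requires the target and its arguments to be picklable, which fork does not.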

Python Threading - Make threads start without waiting for previous thread to finish

I want all of my threads to start at the same time, but in my code each one waits for the previous thread to finish its work before starting. I want all of the threads to start in parallel.
My Code:
class Main(object):
    start = True
    config = True
    givenName = True

    def obscure(self, i):
        i = i
        while self.start:
            Config.userInfo(i)
            break
        while self.config:
            Config.open()
            break
        while self.givenName:
            Browser.openSession()
            break

Main = Main()

while __name__ == '__main__':
    Config.userInfo()
    Config.open()
    for i in range(len(Config.names)):
        Task = Thread(target=Main.obscure(i))
        Task.start()
    break
This line is the main problem:
Task = Thread(target=Main.obscure(i))
target is passed the result of calling Main.obscure(i), not the function itself. You are currently running the function in the main thread and then passing, essentially, target=None.
You want:
Task = Thread(target=Main.obscure, args=(i,))
Then, Thread will arrange to call Main.obscure with the listed arguments inside the thread.
Also, Main = Main() overwrites the class Main with an instance of Main, and you'll never be able to make another instance since you've lost the reference to the class. Use another name, such as main = Main().
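Putting both fixes together, a self-contained version of the pattern (the worker body here is just a stand-in for the question's obscure method):

from threading import Thread

def obscure(i):
    print(f"worker {i} running")

if __name__ == '__main__':
    threads = [Thread(target=obscure, args=(i,)) for i in range(3)]
    for t in threads:
        t.start()   # all three start immediately, without waiting on each other
    for t in threads:
        t.join()    # wait for all of them to finish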

Exit while-looped child process when parent process is exited?

I'm trying to close a child process (which runs a while loop) when the parent process exits, whether the parent exits cleanly, is force-killed, or dies from an exception, so that the child does not become a zombie process.
I'm making a game that communicates with an Arduino (over serial), and the main process runs Panda3D's ShowBase instance (the game engine, which does rendering and many other things), so the main process must not be blocked.
So I created a subprocess using the multiprocessing module, so that the main process is safe from blocking while waiting for serial input.
But the problem is that when I close the Panda3D window, call sys.exit(), or exit because of an exception, the main process exits immediately and can't join() the subprocess or send it False, so the subprocess becomes a zombie.
I have no idea how to solve this. What should I do to make it work as I expected?
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from multiprocessing import Process, Queue
from panda3d.core import *

class HW_support:
    def hardware_event_handler(self, process_status):
        self.process_alive = True
        while self.process_alive:
            print('Working!')
            self.process_alive = process_status.get()
        return

if __name__ == '__main__':
    from direct.showbase.ShowBase import ShowBase
    import sys

    class TestApp(ShowBase):
        def __init__(self):
            ShowBase.__init__(self)
            self.process_status_argv = Queue()
            self.HW_sub_process = Process(
                target=HW_support().hardware_event_handler,
                args=(self.process_status_argv,))
            self.HW_sub_process.start()

            base.messenger.toggleVerbose()
            taskMgr.add(self.task_make_alive, 'task_make_alive')
            base.accept('escape', self.exit_taskloop)

        def exit_taskloop(self, task=None):
            taskMgr.stop()

        def task_make_alive(self, task=None):
            self.process_status_argv.put(True)
            return task.cont

    app = TestApp()
    app.run()
    # app.HW_sub_process.join()
    app.process_status_argv.put(False)
In the main program, add this near the top (below import multiprocessing; it also needs import os):

if multiprocessing.current_process().name == 'MainProcess':
    import atexit
    # create a sentinel file, and remove it automatically
    # when the main process exits
    atexit.register(lambda *a: os.remove("running.txt"))
    open("running.txt", "wb").close()
In the subprocess, change your while True loop to while os.path.exists("running.txt"): (see the sketch below).
Alternatively, you could have atexit place a message in the queue, or do whatever else signals to the subprocess that it should exit.
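Applied to the question's worker, the changed loop might look like this (a sketch; the queue read is given a timeout so the file check still runs regularly):

import os
import queue

def hardware_event_handler(process_status):
    # keep running only while the parent's sentinel file exists
    while os.path.exists("running.txt"):
        print('Working!')
        try:
            process_status.get(timeout=0.1)  # don't block forever on the queue
        except queue.Empty:
            pass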
Multiple processes make things a lot more complicated.
To shut down the HW_support process cleanly, you need to send it the message via your Queue object, and then the parent needs to join() it (wait for it to exit) before exiting itself.
Anything that could make the parent exit unexpectedly (console interrupt, thrown exception, sys.exit(), etc.) needs to be carefully caught and managed so that you can still shut down the child cleanly before exiting; one way is sketched below.
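For the question's main block, that careful shutdown could be expressed with a try/finally (a sketch reusing the question's names; the finally clause runs on clean exit, sys.exit(), and uncaught exceptions alike, though not on a hard kill):

app = TestApp()
try:
    app.run()  # blocks until the window closes or an exception escapes
finally:
    app.process_status_argv.put(False)  # tell the child to leave its loop
    app.HW_sub_process.join()           # reap the child so it cannot linger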

python daemon thread exits but its process still runs in the background

I am using Python 2.7, and my Python thread doesn't kill its spawned process after the main program exits (I am checking this with the ps -ax command on an Ubuntu machine).
I have the below thread class,
import os
import threading

class captureLogs(threading.Thread):
    '''
    initialize the constructor
    '''
    def __init__(self, deviceIp, fileTag):
        threading.Thread.__init__(self)
        super(captureLogs, self).__init__()
        self._stop = threading.Event()
        self.deviceIp = deviceIp
        self.fileTag = fileTag

    def stop(self):
        self._stop.set()

    def stopped(self):
        return self._stop.isSet()

    '''
    define the run method
    '''
    def run(self):
        '''
        Make the thread capture logs
        '''
        cmdTorun = "adb logcat > " + self.deviceIp + '_' + self.fileTag + '.log'
        os.system(cmdTorun)
And I am creating a thread in another file sample.py,
import logCapture
import os
import time
c = logCapture.captureLogs('100.21.143.168','somefile')
c.setDaemon(True)
c.start()
print "Started the log capture. now sleeping. is this a dameon?", c.isDaemon()
time.sleep(5)
print "Sleep tiime is over"
c.stop()
print "Calling stop was successful:", c.stopped()
print "Thread is now completed and main program exiting"
I get the below output from the command line:
Started the log capture. now sleeping. is this a dameon? True
Sleep tiime is over
Calling stop was successful: True
Thread is now completed and main program exiting
And the sample.py exits.
But when I run the below command in a terminal,
ps -ax | grep "adb"
I still see the process running. (I am killing them manually for now, using kill -9 17681 17682.)
Not sure what I am missing here.
My questions are:
1) Why is the process still alive when I already killed it in my program?
2) Will it create any problem if I don't bother about it?
3) Is there any other, better way to capture logs using a thread and monitor them?
EDIT: As suggested by @bug Killer, I added the below method to my thread class,

    def getProcessID(self):
        return os.getpid()

and used os.kill(c.getProcessID(), SIGTERM) in my sample.py. The program doesn't exit at all.
It is likely because you are using os.system in your thread. The process spawned by os.system will stay alive even after the thread is killed; in fact, it will stay alive forever unless you explicitly terminate it in your code or by hand (which it sounds like you are ultimately doing), or until the spawned process exits on its own. You can do this instead:
import atexit
import subprocess

deviceIp = '100.21.143.168'
fileTag = 'somefile'

# this is spawned in the background, so no threading code is needed
cmdTorun = "adb logcat > " + deviceIp + '_' + fileTag + '.log'
proc = subprocess.Popen(cmdTorun, shell=True)

# or register proc.kill if you feel like living on the edge
atexit.register(proc.terminate)

# Here is where all the other awesome code goes
Since all you are doing is spawning a process, creating a thread to do it is overkill and only complicates your program logic. Just spawn the process in the background as shown above and then let atexit terminate it when your program exits. And/or call proc.terminate explicitly; it should be fine to call repeatedly (much like close on a file object) so having atexit call it again later shouldn't hurt anything.
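One caveat worth adding (not from the original answer): with shell=True, proc.terminate() kills the shell, and the adb it spawned may survive. On POSIX you can put the child in its own process group and kill the whole group (start_new_session and ProcessLookupError are Python 3; on 2.7 the rough equivalent is preexec_fn=os.setsid):

import atexit
import os
import signal
import subprocess

proc = subprocess.Popen("adb logcat > device.log", shell=True,
                        start_new_session=True)  # child gets its own process group

def kill_group():
    try:
        os.killpg(os.getpgid(proc.pid), signal.SIGTERM)  # kill shell and adb together
    except ProcessLookupError:
        pass  # the group is already gone

atexit.register(kill_group)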
