I have an environment variable that allows a suite of applications to run under certain conditions, and the applications cannot run with the environment variable off.
My python script uses
p = subprocess.Popen(cmdStr3, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
to open the subprocess.
To test if the application is running, I have tried
try:
    os.kill(pid, 0)
    return True
except OSError:
    return False
and also checking p.returncode. However, these always indicate that the process is running, because even if the application never appears on screen, small helper processes of ~1 MB keep running without the application fully starting, so the OS sees those and returns True. Is there a way around this?
Another issue is that os.kill doesn't work; the only way I have found to terminate the application is os.killpg at the end.
What I've learned from the comments is that the subprocess I call is actually a starter that launches a child, which is another application. My subprocess always runs, but with the environment variable off the child application does not run. Is there a way to see if the child is running?
You can use p.poll() to check if the process is still running; it returns None while the process is alive and the exit code once it has finished.
Another issue is that os.kill doesn't work; the only way I have found to terminate the application is os.killpg at the end.
If you want to kill a process launched using subprocess, you can use p.terminate().
Finally, if you want to wait until the child process exits, you can use p.wait().
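A minimal sketch of how those calls fit together, assuming p is the Popen object from the question and that a 30-second timeout is acceptable (both assumptions are illustrative):

import time

# Poll until the launcher exits or the timeout elapses.
deadline = time.time() + 30
while p.poll() is None and time.time() < deadline:
    time.sleep(1)

if p.poll() is None:    # still running after the timeout
    p.terminate()       # ask it to stop
    p.wait()            # block until it has actually exited
print("exit code: {0}".format(p.returncode))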
I am running some Python code using a SLURM script on a remote server accessed through SSH. At some point, issues related to licenses on the SLURM platform may happen, generating errors in Python and ending the subprocess. I want to use try-except to let the Python subprocess wait until the issue is fixed, after that it can keep running from where it stopped.
What are some smart implementations for that?
My most obvious solution is to keep Python inside a loop when the error occurs and have it read a file every X seconds; once I finally fix the error and want it to continue from where it stopped, I would write something to the file and break the loop. I wonder if there is a smarter way to provide input to the Python subprocess while it is running through the SLURM script.
One idea might be to add a signal handler for the USR1 signal to your Python script, as sketched below.
In the signal handler function, you can set a global variable or send a message or set a threading.Event that the main process is waiting on.
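A minimal sketch of that handler, assuming the main code blocks on a threading.Event (the names and the 5-second poll interval are illustrative):

import signal
import threading

resume = threading.Event()

def handle_usr1(signum, frame):
    # Runs when the process receives SIGUSR1; unblock the main code.
    resume.set()

signal.signal(signal.SIGUSR1, handle_usr1)

# ... when the license error is caught ...
resume.clear()
while not resume.is_set():
    resume.wait(timeout=5)   # wake periodically so the handler gets a chance to run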
Then you can signal the process with:
kill -USR1 <PID>
or with the Python os.kill() equivalent.
Though I do have to agree there is something to be said for the simplicity of your process doing:
touch /tmp/blocked.$$
and your program waiting in a loop with a 1s sleep for that file to be removed. This way you can tell which process id is blocked.
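A minimal sketch of that file-based wait on the Python side (the path mirrors the touch command above):

import os
import time

block_file = "/tmp/blocked.{0}".format(os.getpid())
open(block_file, "w").close()        # mark this process as blocked

while os.path.exists(block_file):    # resume once someone removes the file
    time.sleep(1)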
index.js
const childProcess = require("child_process");
childProcess.spawn("python", ["main.py"]);
main.py
import time

while True:
    time.sleep(1)
When running the NodeJS process with node index.js, it runs forever since the Python child process it spawns runs forever.
When the NodeJS process is ended by x-ing out of the Command Prompt, the Python process ends as well, which is desired, but how can you run some cleanup code in the Python code before it exits?
Previous attempts
Looked in the documentation for child_process.spawn for how this termination is communicated from parent to child, perhaps by a signal. Didn't find it.
In Python, used signal.signal(signal.SIGTERM, handler) (and likewise signal.SIGINT), as sketched below. Didn't get the handler to run (though ending the NodeJS process with Ctrl-C instead of closing the window did get the SIGINT handler to run, even though I'm not explicitly forwarding input from the NodeJS process to the Python child process).
Lastly, though this reproducible example is valid and much simpler, my real-life use case involves Electron, so in case that introduces a complication, or a solution, I figured I'd mention it.
Windows 10, NodeJS 12.7.0, Python 3.8.3
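For reference, a minimal sketch of the handler registration described in the second attempt above (the handler name and cleanup body are illustrative):

import signal
import sys
import time

def handle_exit(signum, frame):
    # Cleanup would go here before the process exits.
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_exit)
signal.signal(signal.SIGINT, handle_exit)

while True:
    time.sleep(1)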
I have two scripts: "autorun.py" and "main.py". I added "autorun.py" as a service to the autorun in my Linux system. Works perfectly!
Now my question is: when I launch "main.py" from my autorun script and "main.py" runs forever, "autorun.py" never terminates either! So when I do
sudo service autorun-test start
the command also never finishes!
How can I run "main.py" and then exit, and, to finish it up, how can I then stop "main.py" when "autorun.py" is launched with the parameter "stop"? (this is how all other services work, I think)
EDIT:
Solution:
if sys.argv[1] == "start":
    print "Starting..."
    with daemon.DaemonContext(working_directory="/home/pi/python"):
        execfile("main.py")
else:
    pid = int(open("/home/pi/python/main.pid").read())
    try:
        os.kill(pid, 9)
        print "Stopped!"
    except:
        print "No process with PID " + str(pid)
First, if you're trying to create a system daemon, you almost certainly want to follow PEP 3143, and you almost certainly want to use the daemon module to do that for you.
When I launch "main.py" from my autorun script and "main.py" runs forever, "autorun.py" never terminates either!
You didn't say how you're running it. If you're doing anything that launches main.py as a child and waits (or, worse, tries to import/execfile/etc. in the same process), you can't do that. Either autorun.py has to launch and detach main.py (or do so indirectly via some external tool), or main.py has to daemonize when launched.
how can I then stop "main.py" when "autorun.py" is launched with the parameter "stop"?
You need some form of inter-process communication (IPC), and some way for autorun to find the right IPC channel to use.
If you're building a network server, the right answer might be to connect to it as a client. But otherwise, the simplest thing to do is kill the process with a signal.
If you're using the daemon module, it can easily map signals to callbacks. Or, if you don't need any cleanup, just use SIGTERM, which by default will abruptly terminate. If neither of those applies, you will have to set up a custom signal handler (and within that handler do something useful—e.g., set a flag that your main code checks periodically).
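A minimal sketch of mapping a signal to a callback with the python-daemon package (the handler is illustrative; the working directory is taken from the question's solution):

import signal
import daemon

def shutdown(signum, frame):
    # Do any cleanup here, then let the process exit.
    raise SystemExit(0)

context = daemon.DaemonContext(
    working_directory="/home/pi/python",
    signal_map={signal.SIGTERM: shutdown},
)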
How do you know what process to send the signal to? The standard way to do this is to have main.py record its PID in a pidfile at startup. You read that pidfile, and signal whatever process is specified there. (If you get an error because there is no process with that PID, that just means the daemon already quit for some reason—possibly because of an unhandled exception, or even a segfault. You may want to log that, but treat the "stop" as successful otherwise.) Again, if you're using daemon, it does the pidfile stuff for you; if not, you have to do it yourself.
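A minimal sketch of that pidfile-and-signal "stop" path (the pidfile path is taken from the question; the rest is illustrative):

import os
import signal

PIDFILE = "/home/pi/python/main.pid"

try:
    pid = int(open(PIDFILE).read())
except (IOError, ValueError):
    pid = None                         # no usable pidfile: the daemon was never started

if pid is not None:
    try:
        os.kill(pid, signal.SIGTERM)   # ask the daemon to shut down
    except OSError:
        pass                           # already gone; treat "stop" as successful anyway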
You may want to take a look at the service scripts for daemons that came with your computer. They're probably all written in bash rather than Python, but it's not that hard to figure out what they're doing. Or… just use one of them as a skeleton, in which case you don't really need any bash knowledge; it's just search-and-replace on the name.
If your distro has LSB-style init functions, you can use something like this example. That one does a whole lot more than you need to, but it's a good example of all of the details. Or do it all from scratch with something like this example. This one is doing the pidfile management and the backgrounding from the service script (turning a non-daemon program into a daemon), which you don't need if you're using daemon properly, and it's using SIGHUP instead of SIGTERM. You can google yourself for other examples of init.d service scripts.
But again, if you're just trying to do this for your own system, the best thing to do is look inside the /etc/init.d on your distro. There will be dozens of examples there, and 90% of them will be exactly the same except for the name of the daemon.
My application creates subprocesses. Usually, these processes run and terminate without any problems. However, sometimes they crash.
I am currently using the python subprocess module to create these subprocesses. I check if a subprocess crashed by invoking the Popen.poll() method. Unfortunately, since my debugger is activated at the time of a crash, polling doesn't return the expected output.
I'd like to be able to see the debugging window (not terminate it) and still be able to detect in the Python code whether a process has crashed.
Is there a way to do this?
When your debugger opens, the process isn't finished yet - and subprocess only knows if a process is running or finished. So no, there is not a way to do this via subprocess.
I found a workaround for this problem. I used the solution given in another question Can the "Application Error" dialog box be disabled?
Items of consideration:
subprocess.check_output() for your child processes' return codes
psutil for process & child analysis (and much more)
the threading library, to monitor these child states in your script as well, once you've decided how you want to handle the crashing, if desired
import psutil

myprocess = psutil.Process(process_id)  # you can find your process id in various ways of your choosing
for child in myprocess.children():
    print("Status of child process is: {0}".format(child.status()))
You can also use the threading library to load your subprocess into a separate thread, and then perform the above psutil analyses concurrently with your other process.
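A minimal sketch of such a monitoring thread, reusing process_id from the snippet above (the poll interval is illustrative):

import threading
import time
import psutil

def watch_children(pid, interval=5):
    parent = psutil.Process(pid)
    while parent.is_running():
        for child in parent.children():
            print("Status of child process is: {0}".format(child.status()))
        time.sleep(interval)

watcher = threading.Thread(target=watch_children, args=(process_id,))
watcher.daemon = True    # don't keep the program alive just for the watcher
watcher.start()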
If you find more, let me know; it's no coincidence I've found this post.
I have a python web application that needs to launch a long running process. The catch is I don't want it to wait around for the process to finish. Just launch and finish.
I'm running on Windows XP, and the web app is running under IIS (if that matters).
So far I tried popen but that didn't seem to work. It waited until the child process finished.
Ok, I finally figured this out! This seems to work:
from subprocess import Popen
from win32process import DETACHED_PROCESS
pid = Popen(["C:\python24\python.exe", "long_run.py"],creationflags=DETACHED_PROCESS,shell=True).pid
print pid
print 'done'
#I can now close the console or anything I want and long_run.py continues!
Note: I added shell=True. Otherwise calling print in the child process gave me the error "IOError: [Errno 9] Bad file descriptor"
DETACHED_PROCESS is a Process Creation Flag that is passed to the underlying WINAPI CreateProcess function.
Instead of directly starting processes from your webapp, you could write jobs into a message queue. A separate service reads from the message queue and runs the jobs. Have a look at Celery, a Distributed Task Queue written in Python.
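A minimal sketch of that approach with Celery (the broker URL and task body are illustrative, and it assumes a Celery worker process is running alongside the web app):

# tasks.py
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")

@app.task
def long_run():
    pass    # the long-running work goes here

# In the web app, enqueue the job and return immediately:
#   from tasks import long_run
#   long_run.delay()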
This almost works (from here):
from subprocess import Popen
pid = Popen(["C:\python24\python.exe", "long_run.py"]).pid
print pid
print 'done'
'done' will get printed right away. The problem is that the process above keeps running until long_run.py returns, and if I close the process, it kills long_run.py's process.
Surely there is some way to make a process completely independent of the parent process.
subprocess.Popen does that.