Python: Exiting python.exe after Popen?

Small nagging issue:
I have a python script that is working as expected, except when I select a menu option to Popen another python script:
myPath = r"c:\Python27\myScript.py"
cmd = r"c:\Python27\python.exe '{}'".format(myPath)
py_process = Popen(cmd, stdout=PIPE, stdin=PIPE, stderr=STDOUT)
When I run that snippet (on Windows), the child process is kicked off in the background as expected, but when I attempt to exit the primary script while leaving the child process running in the background:
raise SystemExit
...an empty window "c:\python27\python.exe" remains. I've tried other EXIT methods with a similar result. Note: When I exit the primary script without running that snippet, the python window disappears as desired.
My goal is to leave no trace/window once the primary script is exited in all cases, but child process should remain running in background.
Any suggestions to accomplish this goal?
Thanks!

If you want to first communicate to the started process and then leave it alone to run further, you have a few options:
Handle SIGPIPE in your long-running process, do not die on it. Live without stdin after the launcher process exits.
Pass whatever you wanted using arguments, environment, or a temporary file.
If you want bidirectional communication, consider using a named pipe (man mkfifo) or a socket, or writing a proper server.
Make the long-running process fork after the initial bidirectional communication phase is done.
Note that fork alone does not create "a completely independent process" (that is what the python-daemon package does). In any case, you should redirect the child's stdin/stdout/stderr to os.devnull, to avoid it waiting for input and/or writing spurious output to the terminal.
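For the Windows case in the question specifically, a hedged sketch of the "detach and silence the child" approach follows. It assumes Python 3.3+ for subprocess.DEVNULL and 3.7+ for subprocess.DETACHED_PROCESS (on older versions you can pass the raw flag value 0x00000008 instead):
import subprocess

# DETACHED_PROCESS: the child gets no console, so no empty window lingers.
# CREATE_NEW_PROCESS_GROUP: the child does not share our process group.
flags = subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP

child = subprocess.Popen(
    [r"c:\Python27\python.exe", r"c:\Python27\myScript.py"],
    stdin=subprocess.DEVNULL,   # never waits for our input
    stdout=subprocess.DEVNULL,  # never blocks on an unread pipe
    stderr=subprocess.DEVNULL,
    creationflags=flags,
)
# The parent can now raise SystemExit; the child keeps running, windowless.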

Related

Do subprocess.Popen calls inside a function get garbage collected when the function exits scope?

I have a Flask application where there are links that open Jupyter notebooks. In the function that handles the url, the Jupyter notebooks are opened by a call to subprocess.Popen. Especially on Windows, after some time, the notebooks seems to be dead, i.e. they have lost connection to the kernel, and I can only get them to work again by clicking on the Flask link again. I have not noticed this behavior on a Mac. This makes me think that maybe the subprocess is getting closed. It isn't stored in a variable or anything, so once the function exits, there is no scope for it to be in. Does anyone know if this happens, and if so what happens to the process that should be running?
Here is an example of what one of these functions looks like. When you click on a link it calls the open_lecture function, constructs a cmd and Popens it. Then the function exits.
@app.route("/lecture/<label>")
def open_lecture(label):
    fname = 'lectures/{}.ipynb'.format(label)
    # Now open the notebook.
    cmd = [JUPYTER]
    cmd += [fname]
    print(cmd)
    subprocess.Popen(cmd, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE,
                     stdin=subprocess.PIPE)
    return redirect(url_for('hello'))
Is there a way to keep this from happening? Or a better way to programmatically open a jupyter notebook?
Executed processes should continue after any Popen object has been garbage collected, but in your case you have asked for the output to be returned to Python and then never read it. This can do different things depending on how things get cleaned up in Python and how the invoked process handles errors.
as an example, if I run:
proc = subprocess.Popen(["yes"], stdout=subprocess.PIPE)
(yes writes lots of stuff to its stdout) the child freezes after writing 64 KiB of output, which is the buffer size my OS/kernel (Linux 5.3) gives to pipes between processes. You can confirm this by waiting a second, calling proc.terminate(), then print(len(proc.communicate()[0])).
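A minimal, self-contained version of that experiment (assuming a Unix-like system where the yes utility exists):
import subprocess
import time

# "yes" floods its stdout; with stdout=PIPE and nobody reading it,
# the pipe buffer fills and the child blocks on write().
proc = subprocess.Popen(["yes"], stdout=subprocess.PIPE)
time.sleep(1)        # give the child time to fill the pipe
proc.terminate()     # it is stuck on a blocked write by now
out, _ = proc.communicate()
print(len(out))      # roughly 65536 bytes on Linux (the pipe capacity)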
as you're invoking Jupyter, it'll probably just write status and other informational messages to stdout, so it will take a while to fill this buffer, which is why you're seeing a sporadic timeout.
Python's garbage collector only works on its own heap (i.e. not on other processes; each Python process is independent), so invoked processes will just run as they would according to the semantics of your OS
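A minimal sketch of the fix this implies: don't request output you never read. This assumes Python 3.3+ (for subprocess.DEVNULL) and reuses cmd from the question's handler:
import subprocess

# Discard the notebook server's chatter instead of piping it back to us;
# the pipe can never fill up, so the child never blocks.
subprocess.Popen(cmd,
                 stdin=subprocess.DEVNULL,
                 stdout=subprocess.DEVNULL,
                 stderr=subprocess.DEVNULL)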

Kill program run with exec(open(file).read()) in python

In my code I am calling a pipeline from a tkinter GUI. When the user presses the Run button the entire pipeline starts running. If certain settings are selected, a toplevel of the main GUI is called which asks for an additional file. This all works except when the cancel button or the close window X is pressed: the toplevel closes but the program keeps running. Eventually it will crash because the file is absent. Calling sys.exit() isn't the solution because then the entire GUI shuts down, and I only want the specific toplevel to close and the running file to stop.
How do I kill a file running with exec(open(file).read()) without killing the entire program?
Honestly, you probably shouldn't use exec at all, but assuming you do: exec still runs in the same process and thread as your main program, so there's no exiting it without killing the main program.
You should open it in another process, subprocess, or thread. Since your exec seems to be running Python code, you could simply use:
from subprocess import Popen
p = Popen(['python', filename])
And then it runs in the background, your normal process continues, and you can kill it at any point with:
p.kill()
It gets more complicated than that if you want to give that process input or read out its output, but that's a matter for a different question. You can start here to see how to read the output: Store output of subprocess.Popen call in a string
A small example to get the output would be something like this:
from subprocess import Popen, PIPE
p = Popen(['python', filename], stdout=PIPE, stderr=PIPE)
output, errors = p.communicate()
However, this will wait until the process completes its run, so you may want to start all that from another thread (see the sketch below), or find another way to get the output (perhaps a log file).
Notice I used just 'python' in Popen; if the python executable is not on your PATH or has a different name, you should replace that with the full path to the executable.
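As a hedged illustration of the "start it from another thread" suggestion (the helper name run_and_log is mine, not from the original answer):
from subprocess import Popen, PIPE
from threading import Thread

def run_and_log(filename, logfile):
    # Run the script and dump its output to a log file when it finishes,
    # without blocking the caller.
    def worker():
        p = Popen(['python', filename], stdout=PIPE, stderr=PIPE)
        output, errors = p.communicate()  # blocks, but only in this thread
        with open(logfile, 'wb') as f:
            f.write(output)
            f.write(errors)
    t = Thread(target=worker, daemon=True)
    t.start()
    return t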

Windows equivalent for spawning and killing separate process group in Python 3?

I have a web server that needs to manage a separate multi-process subprocess (i.e. starting it and killing it).
For Unix-based systems, the following works fine:
# save the pid as `pid`
ps = subprocess.Popen(cmd, preexec_fn=os.setsid)
# elsewhere:
os.killpg(os.getpgid(pid), signal.SIGTERM)
I'm doing it this way (with os.setsid) because otherwise killing the process group will also kill the web server.
On Windows these os functions are not available -- so if I wanted to accomplish something similar on Windows, how would I do this?
I'm using Python 3.5.
(This answer was provided by eryksun in a comment. I'm reproducing it here to highlight it for anyone who runs into the same problem.)
Here is what he said:
You can create a new process group via ps = subprocess.Popen(cmd, creationflags=subprocess.CREATE_NEW_PROCESS_GROUP). The group ID is the process ID of the lead process. That said, it's only useful for processes in the tree that are attached to the same console (conhost.exe instance) as your process, if your process even has a console. In this case, you can send the group a Ctrl+Break via ps.send_signal(signal.CTRL_BREAK_EVENT). Processes shouldn't ignore Ctrl+Break. They should either exit gracefully or let the default handler execute, which calls ExitProcess(STATUS_CONTROL_C_EXIT)
I tried it with this and succeed:
process = Popen(args=shlex.split(command), shell=shell, cwd=cwd,
                stdout=PIPE, stderr=PIPE,
                creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
# ...
process.send_signal(signal.CTRL_BREAK_EVENT)
process.kill()
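Putting the two platforms together, a hedged sketch of a helper that starts and kills a whole process tree on either system (start_group and kill_group are illustrative names, not from the answer):
import os
import signal
import subprocess
import sys

def start_group(cmd):
    # Start cmd in its own process group (Windows) or session (Unix).
    if sys.platform == 'win32':
        return subprocess.Popen(
            cmd, creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
    return subprocess.Popen(cmd, preexec_fn=os.setsid)

def kill_group(proc):
    # Signal the whole group, not just the lead process.
    if sys.platform == 'win32':
        proc.send_signal(signal.CTRL_BREAK_EVENT)  # group-wide Ctrl+Break
    else:
        os.killpg(os.getpgid(proc.pid), signal.SIGTERM)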

Use python subprocess module like a command line simulator

I am writing a test framework in Python for a command line application. The application will create directories, call other shell scripts in the current directory, and write output to stdout.
I am trying to treat the {Python subprocess, command line} combo as equivalent to {Selenium, browser}: the first component drives the second and checks if the output is as expected. I am facing the following problems:
The Popen construct takes a command and returns back after that command is completed. What I want is a live handle to the process so I can run further commands + verifications and finally close the shell once done
I am okay with writing some infrastructure code for achieving this, since we have a lot of command line applications that need testing like this.
Here is a sample code that I am running
p = subprocess.Popen("/bin/bash", cwd = test_dir)
p.communicate(input = "hostname") --> I expect the hostname to be printed out
p.communicate(input = "time") --> I expect current time to be printed out
but the process hangs, or maybe I am doing something wrong. Also, how do I "grab" the output of that subprocess so I can assert that something exists?
subprocess.Popen allows you to continue execution after starting a process. Popen objects expose wait(), poll() and many other methods for communicating with a child process while it is running. Isn't that what you need?
See Popen constructor and Popen objects description for details.
Here is a small example that runs Bash on Unix systems and executes a command:
from subprocess import Popen, PIPE
p = Popen(['/bin/sh'], stdout=PIPE, stderr=PIPE, stdin=PIPE)
sout, serr = p.communicate('ls\n')
print 'OUT:'
print sout
print 'ERR:'
print serr
UPD: communicate() waits for process termination. If you do not need that, you may use the appropriate pipes directly, though that usually gives you rather ugly code.
UPD2: You updated the question. Yes, you cannot call communicate twice for a single process. You may either give all commands you need to execute in a single call to communicate and check the whole output, or work with pipes (Popen.stdin, Popen.stdout, Popen.stderr). If possible, I strongly recommend the first solution (using communicate).
Otherwise you will have to write a command to its input and then wait some time for the desired output. What you need is a non-blocking read, to avoid hanging when there is nothing to read. There is a well-known recipe for emulating non-blocking mode on pipes using threads. The code is ugly and strangely complicated for such a trivial purpose, but that's how it's done.
Another option could be using p.stdout.fileno() for select.select() call, but that won't work on Windows (on Windows select operates only on objects originating from WinSock). You may consider it if you are not on Windows.
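A condensed sketch of that thread-based recipe (assuming Python 3; the full recipe is more careful about EOF and multiple streams):
from queue import Empty, Queue
from subprocess import PIPE, Popen
from threading import Thread

def enqueue_output(stream, queue):
    # Runs in a background thread, where blocking reads are harmless.
    for line in iter(stream.readline, ''):
        queue.put(line)
    stream.close()

p = Popen(['/bin/sh'], stdin=PIPE, stdout=PIPE, universal_newlines=True)
q = Queue()
Thread(target=enqueue_output, args=(p.stdout, q), daemon=True).start()

p.stdin.write('hostname\n')
p.stdin.flush()
try:
    print(q.get(timeout=2.0))  # non-blocking from the caller's point of view
except Empty:
    print('no output yet')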
Instead of using plain subprocess you might find the Python sh library very useful:
http://amoffat.github.com/sh/
Here is an example how to build in an asynchronous interaction loop with sh:
http://amoffat.github.com/sh/tutorials/2-interacting_with_processes.html
Another (old) library for solving this problem is pexpect:
http://www.noah.org/wiki/pexpect

Executing multiple commands using Popen.stdin

I'd like to execute multiple commands in a standalone application launched from a python script, using pipes. The only way I could reliably pass the commands to the stdin of the program was using Popen.communicate, but it closes the program after the command gets executed. If I use Popen.stdin.write, then the command executes only 1 time out of 5 or so; it does not work reliably. What am I doing wrong?
To elaborate a bit :
I have an application that listens to stdin for commands and executes them line by line.
I'd like to be able to run the application and pass various commands to it, based on the user's interaction with a GUI.
This is a simple test example:
import os, string
from subprocess import Popen, PIPE
command = "anApplication"
process = Popen(command, shell=False, stderr=None, stdin=PIPE)
process.stdin.write("doSomething1\n")
process.stdin.flush()
process.stdin.write("doSomething2\n")
process.stdin.flush()
I'd expect to see the result of both commands but I don't get any response. (If I execute one of the Popen.write lines multiple times it occasionally works.)
And if I execute:
process.communicate("doSomething1")
it works perfectly but the application terminates.
If I understand your problem correctly, you want to interact (i.e. send commands and read the responses) with a console application.
If so, you may want to check an Expect-like library, like pexpect for Python: http://pexpect.sourceforge.net
It will make your life easier, because it will take care of synchronization, the problem that ddaa also describes. See also:
http://www.noah.org/wiki/Pexpect#Q:_Why_not_just_use_a_pipe_.28popen.28.29.29.3F
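A hedged sketch of the pexpect approach, using /bin/sh as a stand-in for the question's application (the prompt pattern is an assumption about what the child prints):
import pexpect

# pexpect allocates a pseudo-terminal, so the child thinks it is talking
# to a user and typically line-buffers its output (unlike with plain pipes).
child = pexpect.spawn('/bin/sh')
child.sendline('hostname')
child.expect(r'\$')            # wait for the shell prompt to come back
print(child.before.decode())   # everything printed before the prompt
child.sendline('exit')
child.close()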
The real issue here is whether the application is buffering its output, and if it is whether there's anything you can do to stop it. Presumably when the user generates a command and clicks a button on your GUI you want to see the output from that command before you require the user to enter the next.
Unfortunately there's nothing you can do on the client side of subprocess.Popen to ensure that, when you have passed the application a command, the application flushes all its output to the final destination. You can call flush() all you like on your end, but if the application doesn't do the same, and you can't make it, then you are doomed to looking for workarounds.
Your code in the question should work as is. If it doesn't, then either your actual code is different (e.g., you might use stdout=PIPE, which may change the child's buffering behavior) or it might indicate a bug in the child application itself, such as the read-ahead bug in Python 2: your input is sent correctly by the parent process but it gets stuck in the child's internal input buffer.
The following works on my Ubuntu machine:
#!/usr/bin/env python
import time
from subprocess import Popen, PIPE

LINE_BUFFERED = 1
# NOTE: the first argument is a list
p = Popen(['cat'], bufsize=LINE_BUFFERED, stdin=PIPE,
          universal_newlines=True)
with p.stdin:
    for cmd in ["doSomething1\n", "doSomethingElse\n"]:
        time.sleep(1)  # a delay to see that the commands appear one by one
        p.stdin.write(cmd)
        p.stdin.flush()  # use explicit flush() to work around
                         # buffering bugs on some Python versions
rc = p.wait()
It sounds like your application is treating input from a pipe in a strange way. This means it won't get all of the commands you send until you close the pipe.
So the approach I would suggest is just to do this:
process.stdin.write("command1\n")
process.stdin.write("command2\n")
process.stdin.write("command3\n")
process.stdin.close()
It doesn't sound like your Python program is reading output from the application, so it shouldn't matter if you send the commands all at once like that.
