Python forked processes not executing with os.execlp

I've got this simple python script that ought to fork new processes and then have each execute a command using os.execlp, but the execution only occurs once. I'm curious if there's a timing issue going on that is preventing the additional forks from executing:
import os

for n in range(5):
    PID = os.fork()
    if PID == 0:  # the child...
        print("This child's PID is: %s" % os.getpid())
        os.execlp('open', '-n', '-a', 'Calculator')
        # os.popen('open -n -a Calculator')
        # os._exit(0)
    else:
        print("new child forked: %d" % PID)
On OS X, the "open -n -a Appname" command launches a new instance of the specified application, so the above code should replace each forked child with the "open" command, and that should happen 5 times. However, it only runs once, so only one instance of Calculator is opened. Despite this, the parent lists 5 forked child PIDs.
If I comment out the os.execlp line and uncomment the os.popen and os._exit lines following it, everything works properly and the child processes all run the "open" command. So why is replacing the forked process with execlp (or execvp and other similar variants) not working? Clearly the child processes are running, as I can use piping to run the "open" command just fine.
This is with Python 3.4.3.

The first argument after the executable is argv[0], which by convention is the name of the executable itself. This is useful if you have symbolic links that determine the behavior of the program. In your call, you name the program '-n', so the real arguments are only -a and Calculator and the -n flag is silently swallowed. You have to repeat 'open':
os.execlp('open', 'open', '-n', '-a', 'Calculator')
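For reference, a minimal corrected version of the question's loop (the only change is the repeated 'open'; an os._exit() guard is added so a child that fails to exec does not fall back into the parent's loop):

import os

for n in range(5):
    pid = os.fork()
    if pid == 0:  # the child
        print("This child's PID is: %s" % os.getpid())
        os.execlp('open', 'open', '-n', '-a', 'Calculator')
        os._exit(1)  # only reached if execlp itself fails
    else:
        print("new child forked: %d" % pid)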

Related

How to run a background process and do *not* wait?

My goal is simple: kick off rsync and DO NOT WAIT.
Python 2.7.9 on Debian
Sample code:
import os
import shlex
import subprocess

rsync_cmd = "/usr/bin/rsync -a -e 'ssh -i /home/myuser/.ssh/id_rsa' {0}@{1}:'{2}' {3}".format(remote_user, remote_server, file1, file1)
rsync_cmd2 = "/usr/bin/rsync -a -e 'ssh -i /home/myuser/.ssh/id_rsa' {0}@{1}:'{2}' {3} &".format(remote_user, remote_server, file1, file1)
rsync_path = "/usr/bin/rsync"
rsync_args = shlex.split("-a -e 'ssh -i /home/myuser/.ssh/id_rsa' {0}@{1}:'{2}' {3}".format(remote_user, remote_server, file1, file1))

#subprocess.call(rsync_cmd, shell=True)  # This isn't supposed to work but I tried it
#subprocess.Popen(rsync_cmd, shell=True)  # This is supposed to be the solution but not for me
#subprocess.Popen(rsync_cmd2, shell=True)  # Adding my own shell "&" to background it, still fails
#subprocess.Popen(rsync_cmd, shell=True, stdin=None, stdout=None, stderr=None, close_fds=True)  # This doesn't work
#subprocess.Popen(shlex.split(rsync_cmd))  # This doesn't work
#os.execv(rsync_path, rsync_args)  # This doesn't work
#os.spawnv(os.P_NOWAIT, rsync_path, rsync_args)  # This doesn't work
#os.system(rsync_cmd2)  # This doesn't work
print "DONE"
(I've commented out the execution commands only because I'm actually keeping all of my trials in my code so that I know what I've done and what I haven't done. Obviously, I would run the script with the right line uncommented.)
What happens is this: I can watch the transfer on the server, and only when it's finished do I get a "DONE" printed to the screen.
What I'd like to have happen is a "DONE" printed immediately after issuing the rsync command and for the transfer to start.
Seems very straightforward. I've followed details outlined in other posts, like this one and this one, but something is preventing it from working for me.
Thanks ahead of time.
(I have tried everything I can find in StackExchange and don't feel like this is a duplicate because I still can't get it to work. Something isn't right in my setup and need help.)
Here is a verified example in the Python REPL:
>>> import subprocess
>>> import sys
>>> p = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(100)'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT); print('finished')
finished
How to verify that via another terminal window:
$ ps aux | grep python
Output:
user 32820 0.0 0.0 2447684 3972 s003 S+ 10:11PM 0:00.01 /Users/user/venv/bin/python -c import time; time.sleep(100)
Popen() starts a child process; it does not wait for it to exit. You have to call the .wait() method explicitly if you want to wait for the child process. In that sense, all subprocesses are background processes.
On the other hand, the child process may inherit various properties and resources from the parent, such as open file descriptors, the process group, its controlling terminal, and some signal configuration. This may prevent ancestor processes from exiting (e.g., see Python subprocess .check_call vs .check_output), or the child may die prematurely on Ctrl-C (the SIGINT signal is sent to the foreground process group) or if the terminal session is closed (SIGHUP).
To disassociate the child process completely, you should make it a daemon. Sometimes something in between is enough; e.g., it can suffice to redirect the inherited stdout in a grandchild so that .communicate() in the parent returns when its immediate child exits.
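A minimal sketch of that daemonization idea, using the classic POSIX double-fork idiom (an illustration, not code from the original answer): the first fork plus setsid() moves the child into its own session, and the second fork guarantees the grandchild is not a session leader and so can never reacquire a controlling terminal.

import os

def spawn_detached(argv):
    pid = os.fork()
    if pid > 0:
        os.waitpid(pid, 0)  # reap the short-lived intermediate child
        return
    os.setsid()             # new session, no controlling terminal
    if os.fork() > 0:
        os._exit(0)         # intermediate child exits immediately
    # detach stdio from the inherited terminal
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):
        os.dup2(devnull, fd)
    os.execvp(argv[0], argv)  # grandchild becomes the target program

spawn_detached(['sleep', '100'])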
I encountered a similar issue while working with QNX devices and wanted a subprocess that runs independently of the main process and even keeps running after the main process terminates.
Here's the solution I found that actually works: creationflags=subprocess.DETACHED_PROCESS:
import subprocess
import time

# note the raw string: without it, the "\t" in the path would be parsed as a tab
pid = subprocess.Popen(["python", r"path_to_script\turn_ecu_on.py"],
                       creationflags=subprocess.DETACHED_PROCESS)
time.sleep(15)
print("Done")
Link to the doc: https://docs.python.org/3/library/subprocess.html#subprocess.Popen
On Ubuntu, the following commands keep working even if the Python app exits.

import os

url = "https://www.youtube.com/watch?v=t3kcqTE6x4A"
cmd = f"mpv '{url}' && zenity --info --text 'you have watched {url}' &"
os.system(cmd)
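On POSIX, a shell-free variant (my sketch, not part of the original answers) is to let Popen start the child in its own session, so it survives the parent and does not receive the terminal's SIGHUP or Ctrl-C SIGINT:

import subprocess

# start_new_session=True calls setsid() in the child (POSIX only, Python 3.2+);
# DEVNULL keeps the inherited stdio from tying the child to our terminal
p = subprocess.Popen(
    ["sleep", "100"],
    stdin=subprocess.DEVNULL,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    start_new_session=True,
)
print("launched pid", p.pid)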

Launch a single python script as different processes differing by command line arguments

I have a Python script that takes command line arguments. I get the command line arguments by reading a mongo database, and I need to iterate over the mongo query and launch a separate process of the single script for each set of command line arguments returned.
Key is, I need the launched processes to be:
separate processes that share nothing
killable as a group: I need to be able to kill them all easily
I think the command killall -9 script.py would work and satisfies the second constraint.
Edit 1
From the answer below, the launcher.py program looks like this:

import subprocess

def main():
    symbolPreDict = initializeGetMongoAllSymbols()
    keys = sorted(symbolPreDict.keys())
    for symbol in keys:
        # Display key.
        print(symbol)
        command = ['python', 'mc.py', '-s', str(symbol)]
        print(command)
        subprocess.call(command)

if __name__ == '__main__':
    main()
The problem is that mc.py has a call that blocks:

receiver = multicast.MulticastUDPReceiver("192.168.0.2", symbolMCIPAddrStr, symbolMCPort)
while True:
    try:
        b = MD()
        data = receiver.read()  # This blocks
        ...
    except Exception, e:
        print str(e)
When I run the launcher, it just executes one of the mc.py instances (there are at least 39). How do I modify the launcher program to run each launched script in the background, so that control returns to the launcher to launch more scripts?
Edit 2
The problem is solved by replacing subprocess.call(command) with subprocess.Popen(command).
One thing I noticed, though: if I run ps ax | grep mc.py, the PIDs all seem to be different. I don't think I care, since I can kill them all pretty easily with killall.
[Correction] kill them with pkill -f xxx.py
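A hedged variation on that fix: if the launcher keeps the Popen handles, it can terminate every child itself instead of relying on killall or pkill (mc.py, the -s flag, and keys are taken from the question's code):

import subprocess

procs = []
for symbol in keys:
    # Popen returns immediately, so all the scripts run concurrently
    procs.append(subprocess.Popen(['python', 'mc.py', '-s', str(symbol)]))

# later, to stop them all:
for p in procs:
    p.terminate()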
There are several options for launching scripts from a script. The easiest are probably to use the subprocess or os modules.
I have done this several times to launch things to separate nodes on a cluster. Using os it might look something like this:
import os

for i in range(len(operations)):
    os.system("python myScript.py {:} {:} > out.log".format(arg1, arg2))
Using killall you should have no problem terminating processes spawned this way.
Another option is to use subprocess, which has a wide range of features and is much more flexible than os.system. An example might look like:
import subprocess

for i in range(len(operations)):
    command = ['python', 'myScript.py', 'arg1', 'arg2']
    subprocess.call(command)
In both of these methods, the processes are independent and share nothing other than a parent PID.
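Note that os.system and subprocess.call both block until each command finishes, which is exactly what Edit 2 above ran into; a non-blocking sketch of the same loop (same placeholder names) uses Popen instead and waits only at the end, if at all:

import subprocess

# launch every script without waiting (Popen, unlike call, does not block)
procs = [subprocess.Popen(['python', 'myScript.py', 'arg1', 'arg2'])
         for i in range(len(operations))]

# optionally wait for all of them once everything has been launched
for p in procs:
    p.wait()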

How do I fork a new process with independent stdout, stderr, and stdin?

I have read most of the related questions about subprocess and os.fork(), including all the discussions about the double-forking trick. However, none of those solutions appears to work correctly for my scenario.
I want to fork a new process and allow the parent to terminate (normally) without screwing up the child's stdin, stdout, and stderr and without killing the child.
My first attempt was to use subprocess.Popen().
#!/usr/bin/python
from subprocess import call, Popen

Popen("/bin/bash", shell=True)
call("echo Hello > /tmp/FooBar", shell=True)
This fails because the child process is killed once the parent exits. I am aware of creationflags, but that is Windows-specific and I am running on Linux. Note that the above code works beautifully if we simply keep the parent process alive by adding an infinite loop to the end of it. This is undesirable because the parent has already finished its job and there's no real reason for it to stick around.
The second attempt was to use os.fork().
#!/usr/bin/python
from subprocess import call
from os import fork

try:
    pid = fork()
    if pid > 0:
        pass
    else:  # child process will start interactive process
        call("/bin/bash", shell=True)
except:
    print "Forking failed!"

call("echo Hello > /tmp/FooBar", shell=True)
Here, the child process no longer dies with the parent, but after the parent's death the child can no longer read input and write output.
Thus, I want to know how to fork a new process with utterly independent stdout, stderr, and stdin. Independence means that the parent process can terminate (normally) and the child process (whether it is bash or tmux or any other interactive program) behaves exactly as though the parent program had not terminated. To be even more precise, consider the following variation of the original program.
#!/usr/bin/python
from subprocess import call, Popen

Popen("/bin/bash", shell=True)
call("echo Hello > /tmp/FooBar", shell=True)
while True:
    pass
The above code has all the behaviors I seek, but it keeps the Python process alive artificially. I am trying to achieve this exact behavior, without the Python process being alive.
Caveat: I am running these applications over ssh, so spawning a new GUI window is not a viable solution.
Desired Behavior:
I run the python code.
I get a shiny new bash shell that works exactly like the bash shell I started with.
The file /tmp/FooBar is created.
The original Python script finishes.
I continue on with my shiny new bash shell, and the output of ps aux | grep python does not include the Python script I just ran.
Your first example has an extra, unnecessary call to Popen(): the call convenience function already executes its command in a shell and waits for it, so it would work if you just ran:
from subprocess import call, Popen
call("echo Hello > /tmp/FooBar", shell=True)
However, if you want to send a series of commands to a shell, then the shell needs to be opened with stdin attached to a pipe so it doesn't get mixed up with the parent's stdin (which is effectively what you're asking for in terms of obtaining independent stdio):
import subprocess

p = subprocess.Popen("/bin/bash", shell=True, stdin=subprocess.PIPE)
# on Python 3, write bytes instead: p.stdin.write(b"...\n")
p.stdin.write("echo Hello > /tmp/FooBar\n")
p.stdin.write("date >> /tmp/FooBar\n")
p.terminate()
If you want to control the output as well, redirect stdout and stderr to PIPE (i.e. stdout=subprocess.PIPE, stderr=subprocess.PIPE) and then call p.stdout.read() to obtain output as needed.
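A small sketch of that idea (mine, not from the original answer): mixing direct stdin.write() with stdout.read() can deadlock once a pipe buffer fills up, so for a one-shot exchange communicate() is the safer form:

from subprocess import Popen, PIPE

p = Popen("/bin/bash", shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE)
# communicate() writes the input, closes stdin, and reads both streams to EOF
out, err = p.communicate(b"echo Hello; date\n")
print(out)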
To allow the process to continue running after Python exits, add the shell's & operator to the end of the command so it runs in the background, e.g.:
call("nc -l 2000 > /tmp/nc < /dev/null &", shell=True)
To have a process run with both stdin and stdout still connected, one can use shell redirects. To maintain access to stdin, create a named pipe using mkfifo, e.g.:
call("mkfifo /tmp/pipe",shell=True)
call("tail -f /tmp/pipe > /tmp/out &",shell=True)
To provide input on stdin, just send data to the pipe, e.g. from the shell:
$ echo 'test' > /tmp/pipe
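The same named-pipe trick can be driven from Python with os.mkfifo (a sketch that assumes the tail -f reader above is already running):

import os

fifo = '/tmp/pipe'
if not os.path.exists(fifo):
    os.mkfifo(fifo)  # create the named pipe if the shell didn't already

# opening a FIFO for writing blocks until a reader (here tail -f) has it open
with open(fifo, 'w') as f:
    f.write('test\n')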
I recently encountered this problem and found a solution that might help:
using creationflags to tell Popen that the child process should not inherit the parent process's console, so that it has its own stdout, stdin and stderr as if it were a parent process.
subprocess.Popen("/bin/bash", creationflags= subprocess.CREATE_NEW_CONSOLE)
Popen supports the creationflags keyword argument according to the docs:
creationflags, if given, can be one or more of the following flags:
CREATE_NEW_CONSOLE
CREATE_NEW_PROCESS_GROUP
ABOVE_NORMAL_PRIORITY_CLASS
BELOW_NORMAL_PRIORITY_CLASS
HIGH_PRIORITY_CLASS
IDLE_PRIORITY_CLASS
NORMAL_PRIORITY_CLASS
REALTIME_PRIORITY_CLASS
CREATE_NO_WINDOW
DETACHED_PROCESS
CREATE_DEFAULT_ERROR_MODE
CREATE_BREAKAWAY_FROM_JOB
The ones you are interested in are DETACHED_PROCESS and CREATE_NEW_CONSOLE. I used CREATE_NEW_CONSOLE: it spawns the new process with its own console, as if it were a parent process. I used Python 3.5 on Windows. The DETACHED_PROCESS constant was added to subprocess in Python 3.7; it is similar, except that the detached child gets no console at all rather than a new one.
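As a combined Windows sketch (worker.py is a stand-in name, not from the answer): DETACHED_PROCESS is often paired with CREATE_NEW_PROCESS_GROUP so the child gets no console and also does not receive Ctrl-C events aimed at the parent's process group:

import subprocess

# Windows-only flags; DETACHED_PROCESS is a subprocess constant since
# Python 3.7 (earlier versions can pass the raw value 0x00000008)
flags = subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP
p = subprocess.Popen(["python", "worker.py"], creationflags=flags)
print("detached child pid:", p.pid)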

Python script doesn't restart itself properly

I have a Python script and I want to have it restart itself. I found the following lines by Googling around:
import os
import sys

def restart_program():
    """Restarts the current program.

    Note: this function does not return. Any cleanup action (like
    saving data) must be done before calling this function."""
    python = sys.executable
    os.execl(python, python, *sys.argv)
but problems became apparent right after trying this out. I'm running on a really small embedded system and I ran out of memory very quickly (after two or three iterations of this function). Checking the process list, I can see a whole bunch of Python processes.
Now, I realize I could check the process list and kill all processes that have a PID other than my own. Is this what I have to do, or is there a better Python solution?
This spawns a new child process using the same invocation that was used to spawn the first process, but it does not stop the existing process (more precisely: the existing process waits for the child to exit).
The easier way would be to refactor your program so you don't have to restart it. Why do you need to do this?
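If a restart really is needed, one refactoring in that spirit (my sketch, with a hypothetical main_task.py and a made-up exit-code convention) is a small supervisor loop: the worker simply exits when it wants a restart, so stale processes never pile up:

import subprocess
import sys

# exit code 42 is an arbitrary convention meaning "please restart me"
while True:
    ret = subprocess.call([sys.executable, 'main_task.py'])
    if ret != 42:
        break  # normal exit or failure: stop supervising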
I rewrote my restart function as follows; it will kill every Python process other than itself before launching the new subprocess:
import os
import signal
import subprocess
import sys

def restart_program():
    """Restarts the current program.

    Note: this function does not return. Any cleanup action (like
    saving data) must be done before calling this function."""
    logger.info("RESTARTING SCRIPT")
    # command to extract the PIDs of all the python processes
    # in the process list
    CMD = "/bin/ps ax | grep python | grep -v grep | awk '{ print $1 }'"
    # execute the above command, redirecting its stdout into a subprocess instance
    p = subprocess.Popen(CMD, shell=True, stdout=subprocess.PIPE)
    # read the output into a string
    pidstr = p.communicate()[0]
    # load the pid string into a list by splitting at "\n"
    pidlist = pidstr.split("\n")
    # get the pid of this current process
    mypid = str(os.getpid())
    # iterate through the list, killing all leftover python processes other than this one
    for pid in pidlist:
        # find mypid
        if mypid in pid:
            logger.debug("THIS PID " + pid)
        else:
            # kill all others
            logger.debug("KILL " + pid)
            try:
                pidint = int(pid)
                os.kill(pidint, signal.SIGTERM)
            except:
                logger.error("CAN NOT KILL PID: " + pid)
    python = sys.executable
    os.execl(python, python, *sys.argv)
Not exactly sure if this is the best solution, but it works in the interim anyway...

Run multiple programs sequentially in one Windows command prompt?

I need to run multiple programs one after the other and they each run in a console window.
I want the console window to be visible, but a new window is created for each program. This is annoying because each window opens in a new position from where the previous one closed, and it steals focus when I'm working in Eclipse.
This is the initial code I was using:
def runCommand(self, cmd, instream=None, outstream=None, errstream=None):
    proc = subprocess.Popen(cmd, stdin=instream, stdout=outstream, stderr=errstream)
    # poll once a second until the process exits or a build abort is requested
    while True:
        retcode = proc.poll()
        if retcode is None:
            if mAbortBuild:
                proc.terminate()
                return False
            else:
                time.sleep(1)
        else:
            if retcode == 0:
                return True
            else:
                return False
I switched to opening a command prompt with 'cmd' when calling subprocess.Popen, and then writing each program to its stdin with proc.stdin.write(b'program.exe\r\n').
This seems to solve the one-command-window problem, but now I can't tell when the first program is done so that I can start the second. I want to stop and interrogate the log file from the first program before running the second program.
Any tips on how I can achieve this? Is there another option for running the programs in one window I haven't found yet?
Since you're using Windows, you can just create a batch file listing each program you want to run; everything in it will execute in a single console window. Since it's a batch script, you can put conditional statements in it, as shown in the example.
import os
import subprocess
import textwrap

# create a batch file with some commands in it
batch_filename = 'commands.bat'
with open(batch_filename, "wt") as batchfile:
    batchfile.write(textwrap.dedent("""
        python hello.py
        if errorlevel 1 (
            @echo non-zero exit code: %errorlevel% - terminating
            exit
        )
        time /t
        date /t
    """))

# execute the batch file as a separate process and echo its output
kwargs = dict(stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
              universal_newlines=True)
with subprocess.Popen(batch_filename, **kwargs).stdout as output:
    for line in output:
        print line,

try: os.remove(batch_filename)  # clean up
except os.error: pass
In section 17.5.3.1. Constants of the subprocess module documentation there's a description of the subprocess.CREATE_NEW_CONSOLE constant:
The new process has a new console, instead of inheriting its parent’s
console (the default).
As we can see, by default a new process inherits its parent's console. The reason you observe multiple consoles being opened is that you call your scripts from within Eclipse, which itself has no console, so each subprocess creates its own console as there is no console to inherit. To simulate this behavior, it's enough to run a Python script that creates subprocesses using pythonw.exe instead of python.exe; the difference between the two is that the former does not open a console whereas the latter does.
The solution is to have a helper script (the launcher) which, by default, creates a console and runs your programs in subprocesses. This way each program inherits one and the same console from its parent, the launcher. To run the programs sequentially we use the Popen.wait() method.
--- script_run_from_eclipse.py ---

import subprocess
import sys

subprocess.Popen([sys.executable, 'helper.py'])

--- helper.py ---

import subprocess

programs = ['first_program.exe', 'second_program.exe']
for program in programs:
    subprocess.Popen([program]).wait()
    if input('Do you want to continue? (y/n): ').upper() == 'N':
        break
