Is it possible to start subprocess without waiting for it to terminate?
I have a Windows program that I need to execute from a Python script, and I want to leave it running in the background without waiting for the subprocess to quit, since the program expects input to terminate itself (press q to quit).
I have tried many ways but none of them worked.
What I basically want to achieve is the following:
args = [os.path.join(path, 'myProgram.exe'), '/run']
p = subprocess.Popen(args)
and then do some other stuff here while myProgram.exe is still running.
myProgram.exe can also be installed as a service. When I tried this approach with
subprocess.call('net start myService', shell=True)
the service always fails to start. It fails with system error code 1067, which means the process terminated unexpectedly.
NOTE: I'm using Python 2.7
Thanks for the advice.
Edit:
I have found a workaround which I don't understand...
As workaround I've created a myProgram.bat file which starts myProgram.exe.
BUT there's a catch: if I do only
start myProgram.exe
it behaves exactly the same as when calling subprocess: the program terminates. However, if I add a timeout 0 (i.e. wait zero seconds) before it
timeout 0
start myProgram.exe
the program starts normally.
You could try:
args = ['start', os.path.join(path, 'myProgram.exe'), '/run']
Note that start is a cmd.exe built-in rather than an executable, so it only works when the command is run through the shell (shell=True). See start /? for help (or http://www.computerhope.com/starthlp.htm).
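A minimal sketch of that idea, reusing path from the question (the empty "" is the window-title argument that start expects before a quoted program path):

import os
import subprocess

# 'start' is a cmd.exe built-in, so the whole command line must go
# through the shell; the empty "" is the window title for start.
exe = os.path.join(path, 'myProgram.exe')
subprocess.Popen('start "" "%s" /run' % exe, shell=True)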
If the program expects a q to quit, maybe send it one?
args = [os.path.join(path, 'myProgram.exe'), '/run']
p = subprocess.Popen(args, stdin=subprocess.PIPE)
...
p.stdin.write("q\n")
p.stdin.flush()  # the pipe is buffered, so flush to make sure the 'q' actually arrives
Related
I'm trying to have one subprocess open a program (rv in this case, using a specific tag) and then wait for this program to start before running two additional rvpush commands against this same rv session:
import pathlib
import subprocess

tag = "my-tag"
path = "/path/to/file/to/open/my_file"
cmd = f"rvpush -tag {tag} merge {path}"
cmd += f"; sleep 7; rvpush -tag {tag} py-eval \"rv.commands.setViewNode('master')\""
cmd += f"; rvpush -tag {tag} py-eval \"[rv.commands.setStringProperty(f'{{stack}}.composite.type', ['replace'], True) for stack in rv.commands.nodesOfType('RVStack')]\""
proc = subprocess.Popen(cmd, shell=True)
print("Starting the subprocess.")
proc.wait()
print(f"Subprocess ended. Deleting temporary file {path}.")
pathlib.Path(path).unlink()
I can do this by adding a sleep in between, as above, to let the program start, but I was hoping there would be a better solution.
I've played around with call, Popen, and check_call, as well as proc.wait(), proc.communicate(), proc.poll(), etc., but I can't make it work without adding a sleep.
Does anyone know if you can somehow track via subprocess whether the program has started successfully? It feels like it's not possible, as subprocess only seems to know whether the command has been executed or has finished, not the context in which it runs.
Edit: For some context, the problem with using proc.wait() is that it waits for the process to finish, which is not what I want. I want my next commands to execute once the subprocess has finished opening the program via that initial command. This adds some complexity that subprocess doesn't handle out of the box, as it knows nothing about the context of the command it executes.
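One way to avoid a fixed sleep is to poll the session until it answers. This is only a sketch: it assumes rvpush exits with a non-zero code when it cannot reach a session with the given tag, and that py-eval "True" is a harmless no-op; check how your rvpush version actually behaves.

import subprocess
import time

def wait_for_rv(tag, timeout=30.0, interval=0.5):
    # Poll the rv session with a no-op py-eval until it answers or we
    # give up. Assumes a non-zero exit code means "no session reachable".
    deadline = time.time() + timeout
    while time.time() < deadline:
        rc = subprocess.call(["rvpush", "-tag", tag, "py-eval", "True"])
        if rc == 0:
            return True
        time.sleep(interval)
    return False

If that assumption holds, the sleep 7 in the command string can be replaced by a wait_for_rv(tag) call before sending the two py-eval commands.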
I'm running an application from within my code, and it rewrites files which I need to read later on in the code. There is no output that goes directly into my program. I can't get my code to wait until the subprocess has finished; it just goes ahead and reads the unchanged files.
I've tried subprocess.Popen.wait(), subprocess.call(), and subprocess.check_call(), but none of them work for my problem. Does anyone have any idea how to make this work? Thanks.
Edit: Here is the relevant part of my code:
os.chdir(r'C:\Users\Jeremy\Documents\FORCAST\dusty')
t = subprocess.Popen('start dusty.exe', shell=True)
t.wait()
os.chdir(r'C:\Users\Jeremy\Documents\FORCAST')
Do you use the return object of subprocess.Popen()?
p = subprocess.Popen(command)
p.wait()
should work.
Are you sure that the command does not end instantly?
If you execute a program with
t = subprocess.Popen(prog, shell=True)
Python won't throw an error, regardless of whether the program exists or not. If you try to start a non-existing program with Popen and shell=False, you will get an error. My guess would be that your program either doesn't exist in the folder or doesn't execute. Try executing it from the Python IDLE environment with shell=False and see if you get a new window.
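In the snippet above, the command really does end instantly: start hands dusty.exe off to a new process and returns immediately, so t.wait() has nothing to wait for. A minimal sketch that runs the executable directly (paths taken from the question), so that wait() blocks until dusty.exe itself exits:

import subprocess

# Run dusty.exe directly instead of via `start`, so wait() blocks until
# dusty.exe exits rather than until the `start` command returns.
t = subprocess.Popen([r'C:\Users\Jeremy\Documents\FORCAST\dusty\dusty.exe'],
                     cwd=r'C:\Users\Jeremy\Documents\FORCAST\dusty')
t.wait()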
I am working on some scripts (at the company I work for) that are loaded/unloaded into hypervisors to fire a piece of code when an event occurs. The only way to actually unload a script is to hit Ctrl-C. I am writing a function in Python that automates the process.
As soon as it sees the string "done" in the output of the program, it should kill the vprobe.
I am using subprocess.Popen to execute the command:
lineList = buff.readlines()
cmd = "vprobe /vprobe/myhello.emt"
p = subprocess.Popen(args=cmd, shell=True, stdout=buff, universal_newlines=True, preexec_fn=os.setsid)
while not re.search("done", lineList[-1]):
    print "waiting"
os.kill(p.pid, signal.CTRL_C_EVENT)
As you can see, I am writing the output to the buff file descriptor, opened in read+write mode. I check the last line; if it has 'done', I kill the process. Unfortunately, CTRL_C_EVENT is only valid for Windows.
What can I do for Linux?
I think you can just send the Linux equivalent, signal.SIGINT (the interrupt signal).
(Edit: I used to have something here discouraging the use of this strategy for controlling subprocesses, but on more careful reading it sounds like you've already decided you need control-C in this specific case... So, SIGINT should do it.)
In Linux, a Ctrl-C keyboard interrupt can be sent programmatically to a process using Popen.send_signal(signal.SIGINT). For example:
import subprocess
import signal
..
process = subprocess.Popen(..)
..
process.send_signal(signal.SIGINT)
..
Don't use Popen.communicate() for blocking commands.
Maybe I misunderstand something, but the way you are doing it, it is difficult to get the desired result.
Whatever buff is, you query it first, then use it in the context of Popen(), and then you hope that lineList will somehow fill itself up.
What you probably want is something like
import os
import re
import signal
import subprocess

logfile = open("mylogfile", "a")
p = subprocess.Popen(['vprobe', '/vprobe/myhello.emt'], stdout=subprocess.PIPE,
                     universal_newlines=True, preexec_fn=os.setsid)
for line in p.stdout:
    logfile.write(line)
    if re.search("done", line):
        break
    print "waiting"
os.killpg(os.getpgid(p.pid), signal.SIGINT)  # SIGINT is the Linux Ctrl-C; CTRL_C_EVENT is Windows-only
This gives you a pipe end fed by your vprobe script, which you can read out line by line and act upon appropriately. Since preexec_fn=os.setsid puts vprobe into its own process group, signalling the whole group with os.killpg ensures the SIGINT reaches it and any children it spawned.
I am developing a wrapper around gdb using python. Basically, I just want to be able to detect a few setup annoyances up-front and be able to run a single command to invoke gdb, rather than a huge string I have to remember each time.
That said, there are two cases that I am using. The first, which works fine, is invoking gdb by creating a new process and attaching to it. Here's the code that I have for this one:
def spawnNewProcessInGDB():
    global gObjDir, gGDBProcess
    from subprocess import Popen
    from os.path import join
    import subprocess

    binLoc = join(gObjDir, 'dist')
    binLoc = join(binLoc, 'bin')
    binLoc = join(binLoc, 'mycommand')
    profileDir = join(gObjDir, '..')
    profileDir = join(profileDir, 'trash-profile')
    try:
        gGDBProcess = Popen(['gdb', '--args', binLoc, '-profile', profileDir], cwd=gObjDir)
        gGDBProcess.wait()
    except KeyboardInterrupt:
        # Send a termination signal to the GDB process, if it's running
        promptAndTerminate(gGDBProcess)
Now, if the user presses CTRL-C while this is running, it breaks (i.e. it forwards the CTRL-C to GDB). This is the behavior I want.
The second case is a bit more complicated. It might be the case that I already had this program running on my system and it crashed, but was caught. In this case, I want to be able to connect to it using gdb to get a stack trace (or perhaps I was already running it, and I simply now want to connect to the process that's already in memory).
As a convenience feature, I've created a mirror function, which will connect to a running process using gdb:
def connectToProcess(procNum):
    global gObjDir, gGDBProcess
    from subprocess import Popen
    import subprocess
    import signal

    print("Connecting to mycommand process number " + str(procNum) + "...")
    try:
        gGDBProcess = Popen(['gdb', '-p', procNum], cwd=gObjDir)
        gGDBProcess.wait()
    except KeyboardInterrupt:
        promptAndTerminate(gGDBProcess)
Again, this seems to work as expected. It starts gdb, I can set breakpoints, run the program, etc. The only catch is that it doesn't forward CTRL-C to gdb if I press it while the program is running. Instead, it jumps immediately to promptAndTerminate().
I'm wondering if anyone can see why this is happening - the two calls to subprocess.Popen() seem identical to me, albeit that one is running gdb in a different mode.
I have also tried replacing the call to subprocess.Popen() with the following:
gGDBProcess = Popen(['gdb', '-p', procNum], cwd=gObjDir, stdin=subprocess.PIPE)
but this leads to undesirable results as well, because it doesn't actually pass anything through to the child gdb process (e.g. if I type c to continue the program after gdb breaks on attach, nothing happens). Again, it terminates the running Python process when I type CTRL-C.
Any help would be appreciated!
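One thing that might be worth trying (a sketch, not a verified fix): ignore SIGINT in the wrapper while gdb runs, so that Ctrl-C no longer raises KeyboardInterrupt in Python and gdb, which shares the terminal, can handle it itself. The assumption here is that gdb installs its own SIGINT handler rather than keeping the inherited "ignore" disposition.

import signal
from subprocess import Popen

# Ignore Ctrl-C in the wrapper while gdb runs, then restore the old handler.
# Assumes gdb sets up its own SIGINT handler instead of inheriting SIG_IGN.
oldHandler = signal.signal(signal.SIGINT, signal.SIG_IGN)
try:
    gGDBProcess = Popen(['gdb', '-p', procNum], cwd=gObjDir)
    gGDBProcess.wait()
finally:
    signal.signal(signal.SIGINT, oldHandler)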
Prior to this, I ran two commands in a for loop, like
for x in $set:
    command
To save time, I want to run these two commands at the same time, like the parallel feature in a makefile.
Thanks
Lyn
The threading module won't give you much performance-wise because of the Global Interpreter Lock.
I think the best way to do this is to use the subprocess module and open each command with its own stdout.
import select
import subprocess

processes = {}
for cmd in ['cmd1', 'cmd2', 'cmd3']:
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    processes[p.stdout] = p

while len(processes):
    rfds, _, _ = select.select(processes.keys(), [], [])
    for fd in rfds:
        process = processes[fd]
        print fd.readline()  # readline, since read() would block until EOF
        if process.poll() is not None:  # returncode is only set after poll()
            print "Process {0} returned with code {1}".format(process.pid, process.returncode)
            del processes[fd]
You basically have to use select to see which file descriptors are ready, and you have to check each process's return code (via poll()) to see whether the read hit the end of its output because the process exited. Processes basically go into a wait state until their stdout is closed. If you would like to do some things while you're waiting, you can put a timeout on select.select() so you'll stop waiting after that long. You can then test the length of rfds: if it is 0, you know that the timeout happened.
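For example, with a five-second timeout (a sketch dropped into the same loop):

# The optional fourth argument to select.select() is a timeout in seconds.
rfds, _, _ = select.select(processes.keys(), [], [], 5.0)
if not rfds:
    pass  # nothing became readable within 5 seconds; do periodic work here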
twisted or the select module is probably what you're after.
If all you want to do is run a bunch of batch commands, a shell script, i.e.
#!/bin/sh
for i in command1 command2 command3; do
    $i &
done
wait  # block until all the background jobs have finished
might work better. Alternatively, a Makefile like you said.
Look at the threading module.
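If the commands' output doesn't matter and you just want them running at once, plain Popen plus wait() is often enough; a minimal sketch (the command names are placeholders):

import subprocess

# Start every command without waiting, then wait for each in turn;
# all of the processes run in parallel in the meantime.
procs = [subprocess.Popen(cmd, shell=True) for cmd in ['command1', 'command2']]
for p in procs:
    p.wait()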