I'm trying to have one subprocess open a program (rv in this case, using a specific tag) and then wait for this program to start before running two additional rvpush commands against this same rv session:
tag = "my-tag"
path = "/path/to/file/to/open/my_file"
cmd = f"rvpush -tag {tag} merge {path}"
cmd += f"; sleep 7; rvpush -tag {tag} py-eval \"rv.commands.setViewNode('master')\""
cmd += f"; rvpush -tag {tag} py-eval \"[rv.commands.setStringProperty(f'{{stack}}.composite.type', ['replace'], True) for stack in rv.commands.nodesOfType('RVStack')]\""
proc = subprocess.Popen(cmd, shell=True)
print("Starting the subprocess.")
proc.wait()
print(f"Subprocess ended. Deleting temporary file {path}.")
pathlib.Path(path).unlink()
I can make this work by adding a sleep in between, as above, to give the program time to start, but I was hoping there would be a better solution.
I've played around with call, Popen, and check_call, as well as proc.wait(), proc.communicate(), proc.poll(), etc., but I can't make it work without the sleep.
Does anyone know if you can somehow track, via subprocess, whether the program has started successfully? I suspect it's not possible, as subprocess only seems to know whether the command has been executed or has finished, not anything about the context the command runs in.
Edit: For some context, the problem with using proc.wait() is that it waits for the process to finish, which is not what I want. I want my next commands to execute once the subprocess has finished opening the program via that initial command. This adds complexity that subprocess doesn't handle out of the box, as it knows nothing about what the command it executes is doing.
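For what it's worth, one pattern that avoids a fixed sleep is to poll the session with a cheap no-op command until it answers. This is only a sketch: it assumes rvpush exits non-zero while it cannot reach a session with the given tag, which is worth verifying against your RV version.

import subprocess
import time

def wait_for_rv(tag, timeout=30.0, interval=0.5):
    # Poll with a harmless py-eval until the session responds.
    # Assumption: rvpush returns a non-zero exit code while no
    # session with this tag is reachable yet.
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = subprocess.run(
            ["rvpush", "-tag", tag, "py-eval", "True"],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        if result.returncode == 0:
            return True
        time.sleep(interval)
    return False

With something like this you could drop the sleep entirely: launch the merge, call wait_for_rv(tag), then fire the two py-eval commands from Python instead of chaining everything into one shell string.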
Related
I need to save some image files from my simulation at different times. So my idea was to open a subprocess, save some image files, and then close it.
import subprocess
cmd = "rosrun pcl_ros pointcloud_to_pcd input:=camera/depth/points"
proc = subprocess.Popen(cmd, shell=True)
When it comes to closing it, I tried different things:
import os
import signal
import subprocess
cmd = "rosrun pcl_ros pointcloud_to_pcd input:=camera/depth/points"
pro = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                       shell=True, preexec_fn=os.setsid)
os.killpg(os.getpgid(pro.pid), signal.SIGTERM)
The command did not execute, so that doesn't work for me. I also tried a solution with psutil, and it didn't work either...
You probably don't need shell=True here, and it is the cause of your problems. I suspect that when you kill the process group in your second snippet, the shell process is killed before the process you want to run has a chance to start...
Try passing the parameters as a list of strings (so you don't need shell=True), wait a bit, and call terminate() on the Popen object. You don't need a process group or psutil to kill the process and its children; plain old terminate() on the process object does the trick.
cmd = ["rosrun","pcl_ros","pointcloud_to_pcd","input:=camera/depth/points"]
proc = subprocess.Popen(cmd)
time.sleep(1) # maybe needed to wait the process to do something useful
proc.terminate()
Note that proc.terminate() asks the process to exit gracefully, whereas proc.kill() just kills it outright (there's a difference under Un*x systems, not under Windows).
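A common pattern, for what it's worth, is to combine the two: terminate first, and fall back to kill if the process ignores the request. A minimal sketch (Popen.wait() grew its timeout parameter in Python 3.3):

proc.terminate()              # polite request (SIGTERM on POSIX)
try:
    proc.wait(timeout=5)      # give it a few seconds to clean up
except subprocess.TimeoutExpired:
    proc.kill()               # force it down (SIGKILL on POSIX)
    proc.wait()               # reap the process to avoid a zombie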
Another plea for "do not use shell=True unless forced at gunpoint".
I'm trying to run a simple script on Windows in the same shell.
When I run
subprocess.call(["python.exe", "a.py"], shell=False)
It works fine.
But when I run
subprocess.Popen(["python.exe", "a.py"], shell=False)
It opens a new shell, and shell=False has no effect.
a.py just prints a message to the screen.
First, calling Popen with shell=False doesn't mean that the underlying python won't try to open a window/console. It just means that the current Python instance executes python.exe directly rather than through a system shell (cmd or sh).
Second, Popen returns a handle on the process, and you have to call wait() on this handle for it to end properly, or you could create a defunct (zombie) process, depending on the platform you're running on. I suggest that you try
p = subprocess.Popen(["python.exe", "a.py"], shell=False)
return_code = p.wait()
to wait for process termination and get return code.
Note that Popen on its own is a poor way to run a process in the background while your own code keeps working. The best way would be to use a separate thread:
import subprocess
import threading
def run_it():
    subprocess.call(["python.exe", "a.py"], shell=False)

t = threading.Thread(target=run_it)
t.start()
# do your stuff
# in the end
t.join()
Is it possible to start a subprocess without waiting for it to terminate?
I have a Windows program that I need to execute from a Python script, and I want to leave it running in the background without waiting for the subprocess to quit, since the program expects input to terminate itself (press q to quit).
I have tried many ways, but none of them worked.
What I basically want to achieve is the following:
args = [os.path.join(path, 'myProgram.exe'), '/run']
p = subprocess.Popen(args)
and do some other stuff here with the myProgram.exe still running.
myProgram.exe can also be installed as a service. When I tried this approach with
subprocess.call('net start myService', shell=True)
the service always fails to start. It fails with system error code 1067, which means the process terminated unexpectedly.
NOTE: I'm using Python 2.7
Thanks for the advice.
Edit:
I have found a workaround which I don't understand...
As a workaround, I've created a myProgram.bat file which starts myProgram.exe.
BUT there's a catch: if I do only
start myProgram.exe
it behaves exactly the same as when calling subprocess: it terminates. However, if I put a timeout 0 (meaning: wait 0 seconds) in front,
timeout 0
start myProgram.exe
the program starts normally.
you could try:
args = ['start', os.path.join(path, 'myProgram.exe'), '/run']
p = subprocess.Popen(args, shell=True)  # start is a cmd built-in, so the shell is needed
see start /? for help (or http://www.computerhope.com/starthlp.htm)
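If going through start feels too indirect, another sketch (my own assumption, not part of the original answer) is to pass a Windows process-creation flag directly so the child is detached from your console. DETACHED_PROCESS is not exported by the subprocess module on 2.7, hence the literal:

import os
import subprocess

DETACHED_PROCESS = 0x00000008  # Windows API constant, not in subprocess on 2.7

args = [os.path.join(path, 'myProgram.exe'), '/run']
p = subprocess.Popen(args, creationflags=DETACHED_PROCESS)
# p keeps running with no console attached; the script can carry on or exit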
If the program expects a q to quit, maybe send it one?
args = [os.path.join(path, 'myProgram.exe'), '/run']
p = subprocess.Popen(args, stdin=subprocess.PIPE)
...
p.stdin.write("q\n")
p.stdin.flush()  # make sure the q actually reaches the program
I have a python script that opens a .exe program using the subprocess module. This .exe program is an infinitely iterative script, in that it will continue to print the results of each iteration until the user closes the window. Every so often, it prints the results of the iteration into a file, replacing the previous data in the file.
My aims here are to:
Run the .exe program, and test for the existence of the file it outputs.
Once the file has been shown to exist, I need to run a test on the file to see if the iteration has converged to within a given tolerance. Once the iteration has converged, I need to kill the .exe subprocess.
This is my current code. It is designed to kill the subprocess once the iterate file has been created:
import os
import subprocess
from subprocess import Popen, PIPE

fileexists = False
iteratecomms = Popen('iterate.exe', stdout=PIPE, stdin=PIPE, stderr=PIPE)
# Begin the iteration. Need to select options 1 and then 1 again at the program menu
out, err = iteratecomms.communicate("1\n1\n".encode())
while fileexists == False:
    fileexists = os.path.exists(filelocation)
else:
    Popen.kill(iteratecomms)
I know that this is incorrect; the issue is that as soon as the out, err = iteratecomms.communicate("1\n1\n".encode()) line runs, the program begins iterating and the script never moves on to the next Python code, because communicate() waits for the process to exit. Essentially, I need to start the .exe program and, at the same time, test whether the file has been created. I can't do this, however, because the program runs indefinitely.
How could I get around this? I have assumed that moving on to step 2 (testing the file and killing the subprocess under certain conditions) would not take much extra work on top of this; if that is not true, how would I go about completing all of my aims?
Thank you very much for the help!
Edit: Clarified that the external file is overwritten.
I would use the multiprocessing module.
import multiprocessing
import os
from subprocess import Popen, PIPE

pool = multiprocessing.Pool()

def start_iteration():
    return Popen('iterate.exe', stdout=PIPE, stdin=PIPE, stderr=PIPE)

pool.apply_async(start_iteration)
fileexists = False
while fileexists == False:
    fileexists = os.path.exists(filelocation)
Popen.kill(???)
The only problem now is that you'll have to somehow get hold of the process to kill it without waiting for the pool result to come back (because iterate.exe never exits, that result never arrives).
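For what it's worth, a simpler sketch that sidesteps the PID problem entirely: write the menu choices to stdin yourself instead of calling communicate() (which blocks until the process exits), and poll for the file in the parent. This assumes Python 3 and reuses filelocation from the question:

import os
import time
from subprocess import DEVNULL, PIPE, Popen

# Discard the program's continuous console output so the pipe
# buffer cannot fill up and stall the child.
proc = Popen('iterate.exe', stdin=PIPE, stdout=DEVNULL, stderr=DEVNULL)
proc.stdin.write(b"1\n1\n")   # pick the menu options without blocking
proc.stdin.flush()

while not os.path.exists(filelocation):   # filelocation as in the question
    time.sleep(0.5)

# ...read the file and test for convergence here, then:
proc.kill()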
Assuming that you're continuously trying to read this file, I would suggest running a tail on the file in question. This can be done from a separate terminal in any *nix-family OS, but otherwise I would check out this article for a Python implementation:
http://code.activestate.com/recipes/157035-tail-f-in-python/
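The linked recipe boils down to something like the sketch below. Note the caveat: it assumes the file is appended to, while the question says the file is overwritten, so re-reading the whole file on each pass may actually fit better here.

import time

def follow(path):
    # Minimal tail -f: yield lines as they are appended to the file.
    with open(path) as f:
        f.seek(0, 2)               # start at the current end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.25)   # nothing new yet; wait and retry
                continue
            yield line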
After that, if you want to kill the running program, you should just be able to call terminate() on the process:
import subprocess
sub = subprocess.Popen(#Whatever)
#Do something
sub.terminate()
Prior to this, I ran two commands in a for loop, like
for x in $set:
    command
To save time, I want to run these two commands at the same time, like the parallel option in a makefile.
Thanks
Lyn
The threading module won't give you much performance-wise for CPU-bound Python code because of the Global Interpreter Lock (although threads that just sit waiting on a subprocess are largely unaffected).
I think the best way to do this is to use the subprocess module and give each command its own stdout.
import select
import subprocess

processes = {}
for cmd in ['cmd1', 'cmd2', 'cmd3']:
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    processes[p.stdout] = p

while len(processes):
    rfds, _, _ = select.select(processes.keys(), [], [])
    for fd in rfds:
        process = processes[fd]
        print(fd.readline())
        if process.poll() is not None:  # poll() sets returncode once the process exits
            print("Process {0} returned with code {1}".format(process.pid, process.returncode))
            del processes[fd]
You basically have to use select to see which file descriptors are ready, and you have to poll for the return code to see whether a read drained the last of a process's output. A process will also block once its stdout pipe buffer fills up if nobody reads it, so you do need to keep reading. If you would like to do other things while you're waiting, you can put a timeout on select.select() so you stop waiting after so long; if rfds comes back empty, you know the timeout happened.
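Concretely, the timeout variant is just a fourth argument to select.select():

# Wait at most one second for output; an empty rfds means we timed out.
rfds, _, _ = select.select(processes.keys(), [], [], 1.0)
if not rfds:
    pass  # timeout expired: do other work here, then loop around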
Twisted or the select module is probably what you're after.
If all you want is to run a bunch of batch commands, a shell script, i.e.
#!/bin/sh
for i in command1 command2 command3; do
    $i &
done
wait    # block until all the background jobs finish
might work better. Alternately, a Makefile like you said.
Look at the threading module.
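Since each thread just blocks on its subprocess call (releasing the GIL while it waits), here is a minimal sketch of that approach, with command1 and command2 standing in for your two commands:

import subprocess
import threading

def run(cmd):
    subprocess.call(cmd, shell=True)

threads = [threading.Thread(target=run, args=(c,))
           for c in ("command1", "command2")]
for t in threads:
    t.start()
for t in threads:
    t.join()   # both commands run in parallel; join() waits for each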