Calling 2 external applications at the same time - python

I have a Python script in which I am trying to call two external applications at the same time.
I have written it as:
os.system('externalize {0}'.format(result))
os.system('viewer {0} -b {1}'.format(img_list[0], img_list[1]))
However, done this way, the second application only opens/appears after I quit/exit the first one.
I tried using subprocess as follows:
subprocess.call('externalize {0}'.format(result), shell=True)
subprocess.call('viewer {0} -b {1}'.format(img_list[0], img_list[1]))
But I am not getting much success. Am I doing it wrong somewhere?

Run them as subprocesses without waiting for finish:
p1=subprocess.Popen(<args1>)
p2=subprocess.Popen(<args2>)
If/when you then need to wait for them to finish and/or check their exit codes, call wait() (or whatever else is applicable) on these objects.
(In general, you should never ignore the object that Popen() returns, nor its exit code, if you need to do something as a result of the subprocess' work, e.g. clean up the files you fed them if they're temporary.)
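A minimal sketch of that pattern (`sleep` is just a placeholder for the real applications):

```python
import subprocess

# Start both programs without waiting; neither call blocks the other.
p1 = subprocess.Popen(["sleep", "0.1"])
p2 = subprocess.Popen(["sleep", "0.1"])

# ... both are now running concurrently ...

# Later: wait for each one and check its exit code before any cleanup.
for p in (p1, p2):
    p.wait()
    if p.returncode != 0:
        print("process", p.pid, "exited with code", p.returncode)
```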

Several subprocess functions such as call() are just convenience wrappers around the Popen object, which executes programs asynchronously. You can use it directly instead:
import subprocess as subp
result = 'foo'
img_list = ['bar', 'baz']
proc1 = subp.Popen('externalize {0}'.format(result), shell=True)
proc2 = subp.Popen('viewer {0} -b {1}'.format(img_list[0], img_list[1]), shell=True)
proc1.wait()
proc2.wait()

Related

Python: subprocess.Popen() returns None

I need to execute a CLI binary with args, keep the process alive and run multiple commands throughout the python script. So I am using Python and subprocess.Popen() in the following way:
from subprocess import Popen, PIPE
cmd = ["/full/path/to/binary","--arg1"]
process = Popen(cmd,stdin=PIPE, stdout=None)
process.stdin.write(f"command-for-the-CLI-tool".encode())
process.stdin.flush()
However, no matter how I call Popen(), the returned process object is None.
If I run process = Popen(cmd), without specifying stdin and stdout, I can see the process running correctly in the output console, meaning that the binary path and args are correct, but the process object is still None, meaning that I cannot issue other commands afterwards.
EDIT: The point of this is that I want to execute the following:
command = (
    f"cat << EOF | {cmd}\n"
    f"use {dbname};\n"
    "set optimizer_switch='hypergraph_optimizer=on';\n"
    f"SET forced_plan='{forced_plan}';\n"
    f"{query_text}\n"
    "EOF"
)
runtimes = []
for _ in trange(runs):
    start = time.time()
    subprocess.run(command, shell=True, stdout=sys.stdout)
    runtimes.append(time.time() - start)
But this clearly measures the time of all the commands, whereas I am only interested in measuring the "query_text" command.
This is why I am looking for a solution where I can send the commands separately and time only the one I am interested in.
If I use multiple subprocess.run(), then the process instances will be different. I want the instance to be the same because the query depends on the previous commands.
With subprocess.run you can pass the entire input via the input argument.
command = f"""\
use {dbname};
set optimizer_switch='hypergraph_optimizer=on';
SET forced_plan='{forced_plan}';
{query_text}
"""
runtimes = []
for _ in trange(runs):
    start = time.time()
    subprocess.run(cmd, text=True, input=command, stdout=sys.stdout)
    runtimes.append(time.time() - start)
I took out shell=True; see Actual meaning of shell=True in subprocess, and perhaps also Running Bash commands in Python, which elaborates on several of the changes here.
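If the session really must stay alive between commands, a rough sketch with Popen and a pipe is possible. Here `cat` stands in for the real CLI binary, and the command strings are purely illustrative:

```python
import subprocess
import time

# `cat` stands in for the long-lived CLI binary; it echoes its input back.
proc = subprocess.Popen(["cat"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True)

# Send the setup commands first; the process stays alive in between.
proc.stdin.write("use mydb;\n")
proc.stdin.flush()

# Time only the final query: write it, close stdin, and read until EOF.
start = time.time()
proc.stdin.write("SELECT 1;\n")
proc.stdin.close()
output = proc.stdout.read()
elapsed = time.time() - start
proc.wait()
```

Note that closing stdin ends the session, so this only isolates the last command; to time a query mid-session, the tool would need some recognizable end-of-output marker to read up to.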
Try using subprocess.run() instead of subprocess.Popen().
If you still use subprocess.Popen(), you can check on the process with the .poll() method.
Note that it is poll(), not Popen() itself, that returns None while the command has not yet completed, and the exit code once it has finished.
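A small poll() sketch (`sleep` stands in for the actual command):

```python
import subprocess
import time

proc = subprocess.Popen(["sleep", "0.2"])  # placeholder command
while proc.poll() is None:  # None means the process is still running
    time.sleep(0.05)        # do other work here instead of sleeping
print("exit code:", proc.returncode)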

Using subprocess.call to execute python file?

I am using subprocess.call in order to execute another Python file. Considering that the called script will never terminate, as it's inside an infinite loop, how can I make the original script continue execution after the subprocess call?
Example:
I have script1.py, which does some calculations and then calls script2.py using subprocess.call(["python", "script2.py"]). Since script2.py is inside an infinite loop, script1 gets stuck. Is there another way to run the file, other than using the subprocess module?
subprocess.call(["python", "script2.py"]) waits for the sub-process to finish.
Just use Popen instead:
proc = subprocess.Popen(["python", "script2.py"])
You can later do proc.poll() to see whether it is finished or not, or proc.wait() to wait for it to finish (as call does), or just forget about it and do other things instead.
BTW, you might want to ensure that the same python is called, and that the OS can find it, by using sys.executable instead of just "python":
subprocess.Popen([sys.executable, "script2.py"])

How to check if a shell command is over in Python

Let's say that I have this simple line in python:
os.system("sudo apt-get update")
of course, apt-get will take some time until it's finished. How can I check in Python whether the command has finished yet?
Edit: this is the code with Popen:
os.environ['packagename'] = entry.get_text()
process = Popen(['dpkg-repack', '$packagename'])
if process.poll() is None:
    print "It still working.."
else:
    print "It finished"
Now the problem is, it never prints "It finished", even when it really finishes.
As the documentation states:
This is implemented by calling the Standard C function system(), and
has the same limitations
The C call to system simply runs the program until it exits. Calling os.system blocks your Python code until the shell command has finished, so you'll know it is finished when os.system returns. If you'd like to do other stuff while waiting for the call to finish, there are several possibilities. The preferred way is to use the subprocess module.
from subprocess import Popen
...
# Runs the command in another process. Doesn't block
process = Popen(['ls', '-l'])
# Later
# Returns the return code of the command. None if it hasn't finished
if process.poll() is None:
    pass  # Still running
else:
    pass  # Has finished
Check the link above for more things you can do with Popen
For a more general approach at running code concurrently, you can run that in another thread or process. Here's example code:
from threading import Thread
...
thread = Thread(group=None, target=lambda: os.system("ls -l"))
thread.start()  # start(), not run(): run() would execute in the calling thread and block
# Later
if thread.is_alive():
    pass  # Still running
else:
    pass  # Has finished
Another option would be to use the concurrent.futures module.
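For instance, a sketch with concurrent.futures (using ls -l as the example command):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run(cmd):
    # subprocess.run blocks this worker thread, not the main program.
    return subprocess.run(cmd, capture_output=True, text=True).returncode

with ThreadPoolExecutor() as pool:
    future = pool.submit(run, ["ls", "-l"])
    # ... do other work here ...
    print("finished yet?", future.done())
    print("exit code:", future.result())  # blocks until the command finishes
```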
os.system actually waits for the command to finish and returns its exit status (in a platform-dependent format).
os.system is blocking; it runs the command, waits for its completion, and returns its return code.
So, it'll be finished once os.system returns.
If your code isn't working, that could be caused by one of sudo's quirks: it refuses to grant rights in certain environments (I don't know the details, though).

Executing a command and storing its output in a variable

I'm currently trying to write a python script that, among many things, calls an executable and stores what that executable sends to stdout in a variable. Here is what I have:
#!/usr/bin/python
import subprocess

subprocess.call("./pmm", shell=True)
How would I get the output of pmm to be stored in a variable?
In Python 2.7 (and 3.1 or above), you can use subprocess.check_output(). Example from the documentation:
>>> subprocess.check_output(["ls", "-l", "/dev/null"])
'crw-rw-rw- 1 root root 1, 3 Oct 18 2007 /dev/null\n'
p = subprocess.Popen(["./pmm"], shell=False, stdout=subprocess.PIPE)
output = p.stdout.read()
I wrote a post about this some time ago:
http://trifoliummedium.blogspot.com/2010/12/running-command-line-with-python-and.html
Use p.communicate() to get both stdout and stderr
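For example (a minimal sketch, with echo standing in for ./pmm):

```python
import subprocess

# `echo` stands in for ./pmm here; it just writes a line to stdout.
p = subprocess.Popen(["echo", "hello"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                     text=True)
out, err = p.communicate()  # waits for the process and drains both pipes
```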
First you have to save a reference to the subprocess (bind it to a name, which in other languages, and more informally, is referred to as "assigning it to a variable"). So you should use something like proc = subprocess.Popen(...)
From there I recommend that you call proc.poll() to test if the program has completed, and either sleep (using the time.sleep() function, for example) or perform other work (using select.select(), for example) and then check again later. Or you can call proc.wait() so that you're sure this ./pmm command has completed its work before your program continues. The poll() method on a subprocess instance will return None if the subprocess is still running; otherwise it'll return the exit value of the command that was running in that subprocess. The wait() method will cause your program to block and then return the exit value.
After that you can call (output, errormsgs) = proc.communicate() to capture any output or error messages from your subprocess. If the output is too large it could cause problems; reading the process instance's .stdout (a PIPE file descriptor) directly is tricky, and if you were going to attempt this you should use features in the fcntl (file descriptor control) module to switch it into non-blocking mode and be prepared to handle the exceptions raised when attempting read() calls on the buffer when it's empty.

python: run multiple commands at the same time

Prior to this, I ran two commands in a for loop, like:
for x in $set:
command
In order to save time, I want to run these two commands at the same time, like the parallel mode in a makefile.
Thanks
Lyn
The threading module won't give you much performance-wise because of the Global Interpreter Lock.
I think the best way to do this is to use the subprocess module and open each command with its own stdout.
processes = {}
for cmd in ['cmd1', 'cmd2', 'cmd3']:
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    processes[p.stdout] = p
while len(processes):
    rfds, _, _ = select.select(processes.keys(), [], [])
    for fd in rfds:
        process = processes[fd]
        print fd.read()
        if process.poll() is not None:
            print "Process {0} returned with code {1}".format(process.pid, process.returncode)
            del processes[fd]
You basically have to use select to see which file descriptors are ready, and you have to check each process with poll() to see whether it has exited. A process will block writing to its stdout once the pipe buffer fills, so you must keep reading. If you would like to do some things while you're waiting, you can put a timeout on select.select() so you'll stop waiting after so long. You can test the length of rfds, and if it is 0 then you know that the timeout happened.
The twisted or select modules are probably what you're after.
If all you want to do is run a bunch of batch commands, a shell script, i.e.
#!/bin/sh
for i in command1 command2 command3; do
    $i &
done
Might work better. Alternately, a Makefile like you said.
Look at the threading module.
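A minimal threading sketch (`sleep` stands in for the two real commands):

```python
import subprocess
import threading
import time

commands = [["sleep", "0.5"], ["sleep", "0.5"]]  # placeholders for the real commands

def run(cmd):
    subprocess.run(cmd)

threads = [threading.Thread(target=run, args=(c,)) for c in commands]
start = time.time()
for t in threads:
    t.start()  # start(), not run(): run() would execute them one after another
for t in threads:
    t.join()
elapsed = time.time() - start  # roughly 0.5s, not 1s, since the commands overlap
```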
