I am using subprocess.call to execute another Python file. Since the called script never terminates (it runs an infinite loop), how can I let the original script continue executing after the subprocess call?
Example:
I have script1.py, which does some calculations and then calls script2.py using subprocess.call(["python", "script2.py"]). Since script2.py runs an infinite loop, script1.py gets stuck at that call. Is there another way to run the file, other than using the subprocess module?
subprocess.call(["python", "script2.py"]) waits for the sub-process to finish.
Just use Popen instead:
proc = subprocess.Popen(["python", "script2.py"])
You can later do proc.poll() to see whether it is finished or not, or proc.wait() to wait for it to finish (as call does), or just forget about it and do other things instead.
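A minimal runnable sketch of those options, with a short inline child standing in for the long-running script2.py:

```python
import subprocess
import sys

# Launch the child without blocking; a short inline script stands in
# for the long-running script2.py from the question.
proc = subprocess.Popen([sys.executable, "-c", "print('child running')"])

# poll() returns None while the child is still running, or its exit
# code once it has finished.
if proc.poll() is None:
    print("child still running")

# wait() blocks until the child exits (this is what call does for you)
# and returns the exit code.
code = proc.wait()
print("child exited with", code)
```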
BTW, you might want to ensure that the same python is called, and that the OS can find it, by using sys.executable instead of just "python":
import sys
subprocess.Popen([sys.executable, "script2.py"])
Related
I have a Python script in which I am trying to launch two applications at the same time.
I have written it as:
os.system('externalize {0}'.format(result))
os.system('viewer {0} -b {1}'.format(img_list[0], img_list[1]))
However, by doing so, the second application only opens/appears after I quit/exit the first application.
I tried using subprocess as follows:
subprocess.call('externalize {0}'.format(result), shell=True)
subprocess.call('viewer {0} -b {1}'.format(img_list[0], img_list[1]))
But I am not having much success. Am I doing something wrong?
Run them as subprocesses without waiting for them to finish:
p1=subprocess.Popen(<args1>)
p2=subprocess.Popen(<args2>)
If/when you later need to wait for them to finish and/or check their exit codes, call wait() (or whatever else is applicable) on these objects.
(In general, you should never ignore the object that Popen() returns, or its exit code, if you need to do something as a result of the subprocess's work, e.g. clean up any temporary files you fed it.)
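A runnable sketch of that pattern, with short inline commands standing in for externalize and viewer:

```python
import subprocess
import sys

# Short inline commands stand in for 'externalize' and 'viewer'.
p1 = subprocess.Popen([sys.executable, "-c", "print('first app')"])
p2 = subprocess.Popen([sys.executable, "-c", "print('second app')"])

# Both children are now running concurrently; wait() collects
# each exit code once they finish.
rc1 = p1.wait()
rc2 = p2.wait()
print("exit codes:", rc1, rc2)
```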
Several subprocess functions, such as call, are just convenience wrappers around the Popen object, which executes programs asynchronously. You can use Popen directly instead:
import subprocess as subp
result = 'foo'
img_list = ['bar', 'baz']
proc1 = subp.Popen('externalize {0}'.format(result), shell=True)
proc2 = subp.Popen('viewer {0} -b {1}'.format(img_list[0], img_list[1]), shell=True)
proc1.wait()
proc2.wait()
I am writing a Django application where I need to call a Python script, say foo.py, when a method bar is called. The script foo.py can take a long time to execute, as it iterates over millions of rows in the database. That is why I don't want to wait for its output; I want the file to be executed purely by the OS. I have tried:
execfile
os.system
subprocess.Popen
subprocess.call
But they all wait for the file to produce output. How can I achieve this? Is there a module I am missing, or should I write an "observer script" that watches for the bar method being called and then runs foo.py independently, letting the method finish executing instead of waiting?
You probably did something incorrect, because a plain subprocess.Popen doesn't wait for the child process to end.
I just tried it with the following example:
bar.py:
import subprocess
subprocess.Popen(['python', 'foo.py'])
print '123'
foo.py:
import time
time.sleep(50)
Run bar.py:
I immediately see the "123" output, and I also see "python" in the process list.
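If you want the child fully detached for the Django case, a Python 3 / POSIX sketch along these lines might help; start_new_session and subprocess.DEVNULL are the assumed mechanisms, and a sleeping child stands in for foo.py:

```python
import subprocess
import sys

# A sleeping child stands in for the long-running foo.py. Its output
# is discarded and it gets its own session, so the parent neither
# waits for it nor keeps pipes open to it.
proc = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(1)"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    start_new_session=True,
)

# The parent continues immediately while the child is still running.
print("parent continues, child pid:", proc.pid)
```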
From what I can tell, execv takes over the current process, and once the called executable finishes, the program terminates. I want to call execv multiple times within the same script, but because of this, that cannot be done.
Is there an alternative to execv that runs within the current process (i.e. prints to same stdout) and won't terminate my program? If so, what is it?
Yes, use subprocess.
os.execv* is not appropriate for your task; from the docs:
These functions all execute a new program, replacing the current
process; they do not return. On Unix, the new executable is loaded
into the current process, and will have the same process id as the
caller.
So, as you want the external exe to print to the same output, this is what you might do:
import subprocess
output = subprocess.check_output(['your_exe', 'arg1'])
By default, check_output() only returns output written to standard output. If you want standard error collected as well, pass the stderr argument:
output = subprocess.check_output(['your_exe', 'arg1'], stderr=subprocess.STDOUT)
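One thing worth handling: check_output raises CalledProcessError when the command exits non-zero. A sketch of that, using sys.executable as a stand-in for your_exe:

```python
import subprocess
import sys

# check_output raises CalledProcessError when the command exits
# non-zero; the exception still carries what the command printed.
try:
    output = subprocess.check_output(
        [sys.executable, "-c", "print('hello'); raise SystemExit(2)"],
        stderr=subprocess.STDOUT,
    )
    code = 0
except subprocess.CalledProcessError as exc:
    output = exc.output
    code = exc.returncode

print("exit code:", code)
```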
The subprocess module in the stdlib is the best way to create processes.
Let's say that I have this simple line in python:
os.system("sudo apt-get update")
Of course, apt-get will take some time until it finishes. How can I check in Python whether the command has finished or not?
Edit: this is the code with Popen:
os.environ['packagename'] = entry.get_text()
process = Popen(['dpkg-repack', '$packagename'])
if process.poll() is None:
    print "It still working.."
else:
    print "It finished"
Now the problem is, it never prints "It finished", even when the command has really finished.
As the documentation states:
This is implemented by calling the Standard C function system(), and
has the same limitations
The C call to system simply runs the program until it exits. Calling os.system blocks your Python code until the shell command has finished; thus you'll know that it is finished when os.system returns. If you'd like to do other things while waiting for the call to finish, there are several possibilities. The preferred way is to use the subprocess module.
from subprocess import Popen
...
# Runs the command in another process. Doesn't block
process = Popen(['ls', '-l'])
# Later
# Returns the return code of the command. None if it hasn't finished
if process.poll() is None:
    pass  # Still running
else:
    pass  # Has finished
Check the link above for more things you can do with Popen.
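For instance, a small runnable polling loop along those lines (a short sleeping child stands in for the real command, and the sleep in the loop stands in for other work):

```python
import time
import sys
from subprocess import Popen

# A short sleeping child stands in for the long-running command.
process = Popen([sys.executable, "-c", "import time; time.sleep(0.2)"])

# Poll periodically; poll() stays None until the child exits.
while process.poll() is None:
    time.sleep(0.05)  # do other work here instead of sleeping

print("finished with code", process.returncode)
```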
For a more general approach at running code concurrently, you can run that in another thread or process. Here's example code:
from threading import Thread
...
thread = Thread(group=None, target=lambda: os.system("ls -l"))
thread.start()
# Later
if thread.is_alive():
    pass  # Still running
else:
    pass  # Has finished
Another option would be to use the concurrent.futures module.
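For instance, a sketch with ThreadPoolExecutor; the "echo hello" command is just a stand-in:

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Hand the blocking os.system call to a worker thread; the main
# thread is free until it asks for the result.
with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(os.system, "echo hello")
    # future.done() would be False while the command is running.
    status = future.result()  # blocks only here, until it finishes

print("exit status:", status)
```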
os.system actually waits for the command to finish and returns the exit status (in an OS-dependent format).
os.system is blocking; it runs the command, waits for its completion, and returns its return code.
So, it'll be finished once os.system returns.
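A minimal check of that behavior (the exact encoding of the non-zero status is OS-dependent):

```python
import os

# os.system blocks until the shell command exits and returns its
# status: 0 on success. On Unix a non-zero exit code comes back in a
# wait()-style encoding (e.g. an exit code of 3 becomes 3 << 8).
ok = os.system("exit 0")
fail = os.system("exit 3")
print(ok, fail)
```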
If your code isn't working, it could be caused by one of sudo's quirks: it refuses to grant rights in certain environments (I don't know the details, though).
How do I execute a program from within my program without blocking until the executed program finishes?
I have tried:
os.system()
But it stops my program till the executed program is stopped/closed. Is there a way to allow my program to keep running after the execution of the external program?
Consider using the subprocess module.
Python 2: http://docs.python.org/2/library/subprocess.html
Python 3: http://docs.python.org/3/library/subprocess.html
subprocess spawns a new process in which your external application is run. Your application continues execution while the other application runs.
You want subprocess.
You could use the subprocess module, but os.system will also work. It works through a shell, so you just have to put an '&' at the end of your command string. Just like in an interactive shell, the command will then run in the background.
If you need to get some kind of output from it, however, you will most likely want to use the subprocess module.
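A tiny sketch of the '&' trick (POSIX shells only; 'sleep 1' stands in for the real program):

```python
import os
import time

# A trailing '&' tells a POSIX shell to background the command, so
# os.system returns right away instead of blocking for the second
# that 'sleep 1' would otherwise take.
start = time.monotonic()
status = os.system("sleep 1 &")
elapsed = time.monotonic() - start
print("status:", status, "returned after", round(elapsed, 2), "s")
```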
You can use subprocess for that:
import os
import subprocess

# start 'yourexecutable' with some parameters
# and throw the output away
with open(os.devnull, 'wb') as devnull:
    subprocess.check_call(["yourexecutable",
                           "-param",
                           "value"],
                          stdout=devnull, stderr=subprocess.STDOUT)
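On Python 3, subprocess.DEVNULL makes this simpler; a sketch, with sys.executable as a stand-in executable:

```python
import subprocess
import sys

# Python 3's subprocess.DEVNULL avoids opening os.devnull by hand;
# stdout is discarded and stderr is folded into it.
rc = subprocess.check_call(
    [sys.executable, "-c", "print('discarded')"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.STDOUT,
)
print("check_call returned", rc)
```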