Let's say that I have this simple line in python:
os.system("sudo apt-get update")
Of course, apt-get will take some time until it's finished. How can I check in Python whether the command has finished or not yet?
Edit: this is the code with Popen:
os.environ['packagename'] = entry.get_text()
process = Popen(['dpkg-repack', '$packagename'])
if process.poll() is None:
    print "It still working.."
else:
    print "It finished"
Now the problem is, it never prints "It finished", even when it has really finished.
As the documentation states:
This is implemented by calling the Standard C function system(), and
has the same limitations
The C call to system simply runs the program until it exits. Calling os.system blocks your Python code until the command has finished, so you'll know that it is finished when os.system returns. If you'd like to do other stuff while waiting for the call to finish, there are several possibilities. The preferred way is to use the subprocess module.
from subprocess import Popen
...
# Runs the command in another process. Doesn't block
process = Popen(['ls', '-l'])
# Later
# Returns the return code of the command. None if it hasn't finished
if process.poll() is None:
    pass  # Still running
else:
    pass  # Has finished
Check the subprocess documentation for more things you can do with Popen.
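For instance, a minimal sketch of doing other work while polling (the command and the sleep are just placeholders):
import time
from subprocess import Popen

process = Popen(['ls', '-l'])

# Do other work while the command runs in the background
while process.poll() is None:
    time.sleep(0.1)  # stand-in for the other work

print('Finished with return code', process.returncode)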
For a more general approach at running code concurrently, you can run that in another thread or process. Here's example code:
import os
from threading import Thread
...
thread = Thread(group=None, target=lambda: os.system("ls -l"))
thread.start()  # note: start(), not run(); run() would execute the target in the current thread and block
# Later
if thread.is_alive():
    pass  # Still running
else:
    pass  # Has finished
Another option would be to use the concurrent.futures module.
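A minimal sketch of that approach, assuming the same placeholder command as above:
import os
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=1)
# submit() returns immediately; the blocking os.system call runs in a worker thread
future = executor.submit(os.system, 'ls -l')

# Later
if future.done():
    print('Has finished with exit status', future.result())
else:
    print('Still running')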
os.system will actually wait for the command to finish and return its exit status (in a platform-dependent format).
os.system is blocking; it calls the command, waits for its completion, and returns its return code.
So, it'll be finished once os.system returns.
If your code isn't working, I think it could be caused by one of sudo's quirks: it refuses to grant rights in certain environments (I don't know the details, though).
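To illustrate the blocking behaviour described above (the exit-status encoding is platform-dependent, so treat anything other than 0 as a generic failure here):
import os

# Blocks until apt-get has finished, then returns its exit status
status = os.system('sudo apt-get update')
if status == 0:
    print('apt-get finished successfully')
else:
    print('apt-get failed with status', status)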
Summary: I want to start an external process from Python (version 3.6), poll the result nonblocking, and kill after a timeout.
Details: there is an external process with 2 "bad habits":
It prints out the relevant result after an undefined time.
It does not stop after it has printed the result.
Example: the following simple application roughly resembles the actual program to be called (mytest.py; source code not available):
import random
import time
print('begin')
time.sleep(10*random.random())
print('result=5')
while True: pass
This is how I am trying to call it:
import subprocess, time
myprocess = subprocess.Popen(['python', 'mytest.py'], stdout=subprocess.PIPE)
for i in range(15):
    time.sleep(1)
    # check if something has been printed, but do not wait for anything to be printed
    # check if the result is there
    # if the result is there, then break

myprocess.kill()
I want to implement the logic in the comments.
Analysis
The following are not appropriate:
Use myprocess.communicate(), as it waits for termination, and the subprocess does not terminate.
Kill the process and then call myprocess.communicate(), because we don't know when exactly the result is printed out.
Use process.stdout.readline(), because that is a blocking call, so it waits until something is printed. But here, at the end, the program does not print anything.
The type of myprocess.stdout is io.BufferedReader. So the question practically is: is there a way to check whether something has been printed to the io.BufferedReader and, if so, read it, but otherwise not wait?
I think I got the exact package you need.
Meet command_runner, which is a subprocess wrapper and allows:
Live stdout / stderr output
timeouts regardless of execution
killing of the process tree, including child processes, in case of timeout
stdout / stderr redirection to queues, files or callback functions
Install with pip install command_runner
Usage:
from command_runner import command_runner
def callback(stdout_output):
    # Do whatever you want here with the output
    print(stdout_output)
exit_code, output = command_runner("python mytest.py", timeout=300, stdout=callback, method='poller')
if exit_code == -254:
    print("Oh no, we got a timeout")
print(output)
# Check for good exit_code and full stdout output here
If the timeout is reached, you'll get exit_code -254, but output will still be filled with whatever your subprocess wrote to stdout/stderr.
Disclaimer: I'm the author of command_runner
Additional non-blocking examples using queues can be seen on the GitHub page.
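For reference, the queue idea can also be sketched with only the standard library; this is a rough outline using a reader thread, with the mytest.py example taken from the question:
import queue
import subprocess
import threading
import time

def enqueue_output(pipe, q):
    # readline blocks, but only in this background thread
    for line in iter(pipe.readline, b''):
        q.put(line)

myprocess = subprocess.Popen(['python', 'mytest.py'], stdout=subprocess.PIPE)
q = queue.Queue()
threading.Thread(target=enqueue_output, args=(myprocess.stdout, q), daemon=True).start()

result = None
for i in range(15):
    time.sleep(1)
    try:
        line = q.get_nowait()  # do not wait if nothing has been printed yet
    except queue.Empty:
        continue
    if line.startswith(b'result='):
        result = line.strip()
        break

myprocess.kill()
print(result)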
I have a Python script in which I am trying to launch two external applications at the same time.
I have written it as:
os.system('externalize {0}'.format(result))
os.system('viewer {0} -b {1}'.format(img_list[0], img_list[1]))
However, by doing so, the second application will only open/appear once I quit/exit the first application.
I tried using subprocess as follows:
subprocess.call('externalize {0}'.format(result), shell=True)
subprocess.call('viewer {0} -b {1}'.format(img_list[0], img_list[1]))
But I am not getting much success. Am I doing it wrong somewhere?
Run them as subprocesses without waiting for them to finish:
p1=subprocess.Popen(<args1>)
p2=subprocess.Popen(<args2>)
If/when you then need to wait for them to finish and/or check their exit codes, call wait() (or whatever else is applicable) on these objects.
(In general, you should never ignore the object that Popen() returns, or its exit code, if you need to do something as a result of the subprocess's work, e.g. clean up the files you fed it if they're temporary.)
Several subprocess functions such as call are just convenience wrappers around the Popen object, which executes programs asynchronously. You can use it directly instead:
import subprocess as subp
result = 'foo'
img_list = ['bar', 'baz']
proc1 = subp.Popen('externalize {0}'.format(result), shell=True)
proc2 = subp.Popen('viewer {0} -b {1}'.format(img_list[0], img_list[1]), shell=True)
proc1.wait()
proc2.wait()
I have a script that is supposed to run 24/7 unless interrupted. This script is script A.
I want script A to call Script B, and have script A exit while B is running. Is this possible?
This is what I thought would work
#script_A.py
while True:
    # do some stuff
    # do even more stuff
    if True:
        os.system("python script_B.py")
        sys.exit(0)
#script_B.py
time.sleep(some_time)
# do something
os.system("python script_A.py")
sys.exit(0)
But it seems as if A doesn't actually exit until B has finished executing (which is not what I want to happen).
Is there another way to do this?
What you are describing sounds a lot like a function call:
def doScriptB():
    # do some stuff
    # do some more stuff
    pass

def doScriptA():
    while True:
        # do some stuff
        if your_condition:
            doScriptB()
            return

while True:
    doScriptA()
If this is insufficient for you, then you have to detach the process from your Python process. This normally involves spawning the process in the background, which is done by appending an ampersand to the command in bash:
yes 'This is a background process' &
And detaching said process from the current shell, which, in a simple C program, is done by forking the process twice. I don't know how to do this in Python offhand, but I would bet that there is a module for it.
This way, when the calling python process exits, it won't terminate the spawned child, since it is now independent.
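For what it's worth, a minimal sketch of the detaching idea in Python (assuming a POSIX system; start_new_session makes the child its own session leader, so it is not tied to the parent):
import subprocess
import sys

# Launch script_B.py detached from this process, then let this process exit;
# the child keeps running on its own.
subprocess.Popen(
    [sys.executable, 'script_B.py'],
    stdin=subprocess.DEVNULL,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    start_new_session=True,  # POSIX only: roughly what setsid() does
)
sys.exit(0)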
It seems you want to detach a system call to another thread.
script_A.py
import subprocess
import sys
while True:
    # do some stuff
    # do even more stuff
    if True:
        pid = subprocess.Popen([sys.executable, "script_B.py"])  # call subprocess
        sys.exit(0)
Anyway, it does not seem like good practice at all. Why don't you have script A check the process list, and if it finds script B running, stop? This is another example of how you could do it:
import subprocess
import sys
import psutil
while True:
    # This section queries the currently running processes
    for proc in psutil.process_iter():
        pinfo = proc.as_dict(attrs=['pid', 'name'])
        if pinfo['name'] == "script_B.py":
            sys.exit(0)
    # do some stuff
    # do even more stuff
    if True:
        pid = subprocess.Popen([sys.executable, "script_B.py"])  # call subprocess
        sys.exit(0)
I'm launching a number of subprocesses with subprocess.Popen in Python.
I'd like to check whether one such process has completed. I've found two ways of checking the status of a subprocess, but both seem to force the process to complete.
One is using process.communicate() and printing the returncode, as explained here: checking status of process with subprocess.Popen in Python.
Another is simply calling process.wait() and checking that it returns 0.
Is there a way to check if a process is still running without waiting for it to complete if it is?
Question: ... a way to check if a process is still running ...
You can do it for instance:
import subprocess

p = subprocess.Popen(...)  # your command and arguments here

# A None value indicates that the process hasn't terminated yet.
poll = p.poll()
if poll is None:
    # p is still running
    pass
Python 3.6.1 documentation: Popen Objects
Tested with Python 3.4.2
Doing
myProcessIsRunning = p.poll() is None
as suggested by the main answer, is the recommended and simplest way to check whether a process is running (and it works in Jython as well).
If you do not have the process instance in hand to check it, then use the operating system's tasklist / ps commands.
On Windows, my command will look as follows:
filterByPid = "PID eq %s" % pid
pidStr = str(pid)
commandArguments = ['cmd', '/c', "tasklist", "/FI", filterByPid, "|", "findstr", pidStr ]
This is essentially doing the same thing as the following command line:
cmd /c "tasklist /FI "PID eq 55588" | findstr 55588"
And on Linux, I do exactly the same using:
pidStr = str(pid)
commandArguments = ['ps', '-p', pidStr ]
The ps command will already return exit code 0 / 1 depending on whether the process is found, while on Windows you need the findstr command.
This is the same approach that is discussed on the following stack overflow thread:
Verify if a process is running using its PID in JAVA
NOTE:
If you use this approach, remember to wrap your command call in a try/except:
try:
    foundRunningProcess = subprocess.check_output(commandArguments)
    return True
except Exception as err:
    return False
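Putting the pieces above together, a small helper could look roughly like this (a sketch based on the commands shown above; adjust to your environment):
import os
import subprocess

def is_pid_running(pid):
    pidStr = str(pid)
    if os.name == 'nt':
        # findstr fails (non-zero exit) when the PID is not in the tasklist output
        commandArguments = ['cmd', '/c', 'tasklist', '/FI', 'PID eq %s' % pidStr, '|', 'findstr', pidStr]
    else:
        # ps -p fails (non-zero exit) when no process with that PID exists
        commandArguments = ['ps', '-p', pidStr]
    try:
        subprocess.check_output(commandArguments)
        return True
    except Exception:
        return False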
Note, be careful if you are developing with VS Code and using pure Python and Jython.
In my environment, I was under the illusion that the poll() method did not work, because a process that I suspected must have ended was in fact still running.
This process had launched Wildfly, and after I had asked Wildfly to stop, the shell was still waiting for the user to "Press any key to continue . . .".
In order to finish off this process, in pure Python the following code worked:
process.stdin.write(os.linesep)
On Jython, I had to change this code to look as follows:
print >>process.stdin, os.linesep
And with this difference the process did indeed finish.
And poll() in Jython started telling me that the process had indeed finished.
As suggested by the other answers, None is the designed placeholder for the "return code" when no code has been returned yet by the subprocess.
The documentation for the returncode attribute backs this up (emphasis mine):
The child return code, set by poll() and wait() (and indirectly by communicate()). A None value indicates that the process hasn’t terminated yet.
A negative value -N indicates that the child was terminated by signal N (POSIX only).
An interesting place where this None value occurs is when using the timeout parameter for wait or communicate.
If the process does not terminate after timeout seconds, a TimeoutExpired exception will be raised.
If you catch that exception and check the returncode attribute, it will indeed be None:
import subprocess
with subprocess.Popen(['ping', '127.0.0.1']) as p:
    try:
        p.wait(timeout=3)
    except subprocess.TimeoutExpired:
        assert p.returncode is None
If you look at the source for subprocess you can see the exception being raised.
https://github.com/python/cpython/blob/47be7d0108b4021ede111dbd15a095c725be46b7/Lib/subprocess.py#L1930-L1931
If you search that source for self.returncode is, you'll find many places where the library authors lean on that None return code design to infer whether an app is running or not. The returncode attribute is initialized to None and only ever changes in a few spots, mainly in invocations of _handle_exitstatus, which passes on the actual return code.
You could use subprocess.check_output to have a look at your output.
Try this code:
import subprocess
subprocess.check_output(['your command here'], shell=True, stderr=subprocess.STDOUT)
Hope this helped!
I am programming in Python, implementing a shell in Python on Linux. I am trying to run standard Unix commands using os.execvp(). I need to keep asking the user for commands, so I have used an infinite while loop. However, the infinite while loop doesn't work. I have tried searching online, but there isn't much available for Python. Any help would be appreciated. Thanks.
This is the code I have written so far:
import os
import shlex
def word_list(line):
    """Break the line into shell words."""
    lexer = shlex.shlex(line, posix=True)
    lexer.whitespace_split = False
    lexer.wordchars += '#$+-,./?#^='
    args = list(lexer)
    return args

def main():
    while(True):
        line = input('psh>')
        split_line = word_list(line)
        if len(split_line) == 1:
            print(os.execvp(split_line[0],[" "]))
        else:
            print(os.execvp(split_line[0],split_line))

if __name__ == "__main__":
    main()
So when I run this and put in the input "ls", I get the output "HelloWorld.py" (which is correct) and "Process finished with exit code 0". However, I don't get the "psh>" prompt waiting for the next command. No exceptions are thrown when I run this code.
Your code does not work because it uses os.execvp. os.execvp completely replaces the current process image with the executed program; your running process becomes the ls.
To execute a subprocess use the aptly named subprocess module.
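For example, a hedged sketch of the loop from the question rewritten with subprocess (shlex.split stands in for the question's word_list helper):
import shlex
import subprocess

def main():
    while True:
        line = input('psh>')
        args = shlex.split(line)  # stand-in for the question's word_list helper
        if args:
            # Runs the command in a child process and waits for it,
            # so the loop keeps prompting afterwards.
            subprocess.run(args)

if __name__ == "__main__":
    main()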
If this is an (ill-advised) programming exercise, then you need to:
# warning, never do this at home!
pid = os.fork()
if not pid:
    os.execvp(split_line[0], split_line)  # in child: replaces this process with the command
else:
    os.waitpid(pid, 0)  # in parent: wait for the child to finish
os.fork returns twice: it gives the pid of the child in the parent process, and zero in the child process.
If you want it to run like a shell, you are looking for os.fork(). Call this before you call os.execvp() and it will create a child process. os.fork() returns the process id: if it is 0, then you are in the child process and can call os.execvp(); otherwise, continue with the code. This will keep the while loop running. You can have the original process either wait for the child to complete with os.wait(), or continue without waiting back to the start of the while loop. The pseudocode on page 2 of this link should help: https://www.cs.auckland.ac.nz/courses/compsci340s2c/assignments/A1/A1.pdf
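A rough sketch of that fork/exec/wait pattern applied to the loop from the question (POSIX only; shlex.split again stands in for the question's word_list helper):
import os
import shlex

while True:
    line = input('psh>')
    args = shlex.split(line)
    if not args:
        continue
    pid = os.fork()
    if pid == 0:
        # Child: replace this process image with the requested command
        os.execvp(args[0], args)
    else:
        # Parent: wait for the child so the prompt comes back afterwards
        os.waitpid(pid, 0)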