Pipe subprocess standard output to a variable [duplicate] - python

This question already has answers here:
Store output of subprocess.Popen call in a string [duplicate]
(15 answers)
Closed 4 years ago.
I want to run a command in Python, using the subprocess module, and store the output in a variable. However, I do not want the command's output to be printed to the terminal.
For this code:
def storels():
    a = subprocess.Popen("ls", shell=True)

storels()
I get the directory listing in the terminal, instead of having it stored in a. I've also tried:
def storels():
    subprocess.Popen("ls > tmp", shell=True)
    a = open("./tmp")
    [Rest of Code]

storels()
This also prints the output of ls to my terminal. I've even tried this command with the somewhat dated os.system method, since running ls > tmp in the terminal doesn't print ls to the terminal at all, but stores it in tmp. However, the same thing happens.
Edit:
I get the following error after following marcog's advice, but only when running a more complex command, cdrecord --help. Python spits this out:
Traceback (most recent call last):
  File "./install.py", line 52, in <module>
    burntrack2("hi")
  File "./install.py", line 46, in burntrack2
    a = subprocess.Popen("cdrecord --help",stdout = subprocess.PIPE)
  File "/usr/lib/python2.6/subprocess.py", line 633, in __init__
    errread, errwrite)
  File "/usr/lib/python2.6/subprocess.py", line 1139, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory

To get the output of ls, use stdout=subprocess.PIPE.
>>> proc = subprocess.Popen('ls', stdout=subprocess.PIPE)
>>> output = proc.stdout.read()
>>> print output
bar
baz
foo
The command cdrecord --help outputs to stderr, so you need to pipe that instead. You should also break the command up into a list of tokens as I've done below; the alternative is to pass the shell=True argument, but that fires up a full-blown shell, which can be dangerous if you don't control the contents of the command string.
>>> proc = subprocess.Popen(['cdrecord', '--help'], stderr=subprocess.PIPE)
>>> output = proc.stderr.read()
>>> print output
Usage: wodim [options] track1...trackn
Options:
-version print version information and exit
dev=target SCSI target to use as CD/DVD-Recorder
gracetime=# set the grace time before starting to write to #.
...
If you have a command that outputs to both stdout and stderr and you want to merge them, you can do that by piping stderr to stdout and then catching stdout.
subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
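For example, a minimal sketch of reading the merged stream, reusing the cdrecord command from above (this is illustrative, not the only way to do it):
proc = subprocess.Popen(['cdrecord', '--help'],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT)
merged, _ = proc.communicate()  # the second element is None because stderr was merged into stdout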
As mentioned by Chris Morgan, you should be using proc.communicate() instead of proc.stdout.read().
>>> proc = subprocess.Popen(['cdrecord', '--help'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
>>> out, err = proc.communicate()
>>> print 'stdout:', out
stdout:
>>> print 'stderr:', err
stderr: Usage: wodim [options] track1...trackn
Options:
-version print version information and exit
dev=target SCSI target to use as CD/DVD-Recorder
gracetime=# set the grace time before starting to write to #.
...

If you are using Python 2.7 or later, the easiest way to do this is to use the subprocess.check_output() function. Here is an example:
output = subprocess.check_output('ls')
To also redirect stderr you can use the following:
output = subprocess.check_output('ls', stderr=subprocess.STDOUT)
In the case that you want to pass parameters to the command, you can either use a list or invoke a shell and use a single string.
output = subprocess.check_output(['ls', '-a'])
output = subprocess.check_output('ls -a', shell=True)

With a = subprocess.Popen("cdrecord --help", stdout=subprocess.PIPE), you need to either use a list or pass shell=True.
Either of these will work; the former is preferable:
a = subprocess.Popen(['cdrecord', '--help'], stdout=subprocess.PIPE)
a = subprocess.Popen('cdrecord --help', shell=True, stdout=subprocess.PIPE)
Also, instead of using Popen.stdout.read/Popen.stderr.read, you should use .communicate() (refer to the subprocess documentation for why).
proc = subprocess.Popen(['cdrecord', '--help'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = proc.communicate()

Related

How can I get the value from os.system with Python 3 [duplicate]

I want to write a function that will execute a shell command and return its output as a string, no matter whether it is an error or a success message. I just want to get the same result that I would have gotten with the command line.
What would be a code example that would do such a thing?
For example:
def run_command(cmd):
    # ??????

print run_command('mysqladmin create test -uroot -pmysqladmin12')
# Should output something like:
# mysqladmin: CREATE DATABASE failed; error: 'Can't create database 'test'; database exists'
In all officially maintained versions of Python, the simplest approach is to use the subprocess.check_output function:
>>> subprocess.check_output(['ls', '-l'])
b'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'
check_output runs a single program that takes only arguments as input.1 It returns the result exactly as printed to stdout. If you need to write input to stdin, skip ahead to the run or Popen sections. If you want to execute complex shell commands, see the note on shell=True at the end of this answer.
The check_output function works in all officially maintained versions of Python. But for more recent versions, a more flexible approach is available.
Modern versions of Python (3.5 or higher): run
If you're using Python 3.5+, and do not need backwards compatibility, the new run function is recommended by the official documentation for most tasks. It provides a very general, high-level API for the subprocess module. To capture the output of a program, pass the subprocess.PIPE flag to the stdout keyword argument. Then access the stdout attribute of the returned CompletedProcess object:
>>> import subprocess
>>> result = subprocess.run(['ls', '-l'], stdout=subprocess.PIPE)
>>> result.stdout
b'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'
The return value is a bytes object, so if you want a proper string, you'll need to decode it. Assuming the called process returns a UTF-8-encoded string:
>>> result.stdout.decode('utf-8')
'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'
This can all be compressed to a one-liner if desired:
>>> subprocess.run(['ls', '-l'], stdout=subprocess.PIPE).stdout.decode('utf-8')
'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'
If you want to pass input to the process's stdin, you can pass a bytes object to the input keyword argument:
>>> cmd = ['awk', 'length($0) > 5']
>>> ip = 'foo\nfoofoo\n'.encode('utf-8')
>>> result = subprocess.run(cmd, stdout=subprocess.PIPE, input=ip)
>>> result.stdout.decode('utf-8')
'foofoo\n'
You can capture errors by passing stderr=subprocess.PIPE (capture to result.stderr) or stderr=subprocess.STDOUT (capture to result.stdout along with regular output). If you want run to throw an exception when the process returns a nonzero exit code, you can pass check=True. (Or you can check the returncode attribute of result above.) When security is not a concern, you can also run more complex shell commands by passing shell=True as described at the end of this answer.
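For instance, here is a small sketch of check=True in action (the failing directory name is illustrative):
import subprocess
try:
    subprocess.run(['ls', 'no-such-dir'], stdout=subprocess.PIPE,
                   stderr=subprocess.PIPE, check=True)
except subprocess.CalledProcessError as e:
    print('exit code:', e.returncode)
    print('stderr:', e.stderr.decode('utf-8'))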
Later versions of Python streamline the above further. In Python 3.7+, the above one-liner can be spelled like this:
>>> subprocess.run(['ls', '-l'], capture_output=True, text=True).stdout
'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'
Using run this way adds just a bit of complexity, compared to the old way of doing things. But now you can do almost anything you need to do with the run function alone.
Older versions of Python (3-3.4): more about check_output
If you are using an older version of Python, or need modest backwards compatibility, you can use the check_output function as briefly described above. It has been available since Python 2.7.
subprocess.check_output(*popenargs, **kwargs)
It takes the same arguments as Popen (see below) and returns a string containing the program's output. The beginning of this answer has a more detailed usage example. In Python 3.5+, check_output is equivalent to executing run with check=True and stdout=PIPE, and returning just the stdout attribute.
You can pass stderr=subprocess.STDOUT to ensure that error messages are included in the returned output. When security is not a concern, you can also run more complex shell commands by passing shell=True as described at the end of this answer.
If you need to pipe from stderr or pass input to the process, check_output won't be up to the task. See the Popen examples below in that case.
Complex applications and legacy versions of Python (2.6 and below): Popen
If you need deep backwards compatibility, or if you need more sophisticated functionality than check_output or run provide, you'll have to work directly with Popen objects, which encapsulate the low-level API for subprocesses.
The Popen constructor accepts either a single command without arguments, or a list containing a command as its first item, followed by any number of arguments, each as a separate item in the list. shlex.split can help parse strings into appropriately formatted lists. Popen objects also accept a host of different arguments for process IO management and low-level configuration.
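For instance, a quick sketch of shlex.split turning a command string into such a list:
import shlex
args = shlex.split("sed -e 's/foo/bar/' input.txt")
print(args)  # ['sed', '-e', 's/foo/bar/', 'input.txt']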
To send input and capture output, communicate is almost always the preferred method. As in:
output = subprocess.Popen(["mycmd", "myarg"],
                          stdout=subprocess.PIPE).communicate()[0]
Or
>>> import subprocess
>>> p = subprocess.Popen(['ls', '-a'], stdout=subprocess.PIPE,
... stderr=subprocess.PIPE)
>>> out, err = p.communicate()
>>> print out
.
..
foo
If you set stdin=PIPE, communicate also allows you to pass data to the process via stdin:
>>> cmd = ['awk', 'length($0) > 5']
>>> p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
... stderr=subprocess.PIPE,
... stdin=subprocess.PIPE)
>>> out, err = p.communicate('foo\nfoofoo\n')
>>> print out
foofoo
Note Aaron Hall's answer, which indicates that on some systems, you may need to set stdout, stderr, and stdin all to PIPE (or DEVNULL) to get communicate to work at all.
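As a rough sketch of that advice (note that DEVNULL requires Python 3.3+; the command is illustrative):
p = subprocess.Popen(['ls'], stdout=subprocess.PIPE,
                     stderr=subprocess.DEVNULL,
                     stdin=subprocess.DEVNULL)
out, _ = p.communicate()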
In some rare cases, you may need complex, real-time output capturing. Vartec's answer suggests a way forward, but methods other than communicate are prone to deadlocks if not used carefully.
As with all the above functions, when security is not a concern, you can run more complex shell commands by passing shell=True.
Notes
1. Running shell commands: the shell=True argument
Normally, each call to run, check_output, or the Popen constructor executes a single program. That means no fancy bash-style pipes. If you want to run complex shell commands, you can pass shell=True, which all three functions support. For example:
>>> subprocess.check_output('cat books/* | wc', shell=True, text=True)
' 1299377 17005208 101299376\n'
However, doing this raises security concerns. If you're doing anything more than light scripting, you might be better off calling each process separately, and passing the output from each as an input to the next, via
run(cmd, [stdout=etc...], input=other_output)
Or
Popen(cmd, [stdout=etc...]).communicate(other_output)
The temptation to directly connect pipes is strong; resist it. Otherwise, you'll likely see deadlocks or have to do hacky things like this.
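As a rough sketch of that approach, the captured output of one run call becomes the input of the next (the commands here are illustrative):
first = subprocess.run(['ls', '-l'], stdout=subprocess.PIPE)
second = subprocess.run(['wc'], stdout=subprocess.PIPE, input=first.stdout)
print(second.stdout.decode('utf-8'))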
This is way easier, but it only works on Unix (including Cygwin) and Python 2.7.
import commands
print commands.getstatusoutput('wc -l file')
It returns a tuple with the (return_value, output).
For a solution that works in both Python 2 and Python 3, use the subprocess module instead:
from subprocess import Popen, PIPE

output = Popen(["date"], stdout=PIPE)
response = output.communicate()
print(response)
I had the same problem but figured out a very simple way of doing this:
import subprocess
output = subprocess.getoutput("ls -l")
print(output)
Note: This solution is Python 3 specific, as subprocess.getoutput() doesn't work in Python 2.
Something like that:
def runProcess(exe):
    p = subprocess.Popen(exe, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        # returns None while subprocess is running
        retcode = p.poll()
        line = p.stdout.readline()
        yield line
        if retcode is not None:
            break
Note that I'm redirecting stderr to stdout; it might not be exactly what you want, but I want error messages also.
This function yields lines as they come (normally you'd have to wait for the subprocess to finish to get the output as a whole).
For your case the usage would be:
for line in runProcess('mysqladmin create test -uroot -pmysqladmin12'.split()):
    print line,
This is a tricky but super simple solution which works in many situations:
import os
os.system('sample_cmd > tmp')
print(open('tmp', 'r').read())
A temporary file (here tmp) is created with the output of the command, and you can read your desired output from it.
Extra note from the comments:
You can remove the tmp file afterwards for a one-time job; if you need to do this several times, there is no need to delete it.
os.remove('tmp')
Vartec's answer doesn't read all lines, so I made a version that does:
def run_command(command):
    p = subprocess.Popen(command,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT)
    return iter(p.stdout.readline, b'')
Usage is the same as the accepted answer:
command = 'mysqladmin create test -uroot -pmysqladmin12'.split()
for line in run_command(command):
    print(line)
You can use the following commands to run any shell command. I have used them on Ubuntu.
import os
os.popen('your command here').read()
Note: This has been deprecated since Python 2.6. Now you should use subprocess.Popen. Below is an example:
import subprocess
p = subprocess.Popen("Your command", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()[0]
print p.split("\n")
I had a slightly different flavor of the same problem with the following requirements:
Capture and return STDOUT messages as they accumulate in the STDOUT buffer (i.e. in real time).
@vartec solved this Pythonically with his use of generators and the 'yield' keyword above.
Print all STDOUT lines (even if process exits before STDOUT buffer can be fully read)
Don't waste CPU cycles polling the process at high-frequency
Check the return code of the subprocess
Print STDERR (separate from STDOUT) if we get a non-zero error return code.
I've combined and tweaked previous answers to come up with the following:
import subprocess
from time import sleep
def run_command(command):
    p = subprocess.Popen(command,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE,
                         shell=True)
    # Read stdout from subprocess until the buffer is empty
    for line in iter(p.stdout.readline, b''):
        if line:  # Don't print blank lines
            yield line
    # This ensures the process has completed, AND sets the 'returncode' attr
    while p.poll() is None:
        sleep(.1)  # Don't waste CPU cycles
    # Empty STDERR buffer
    err = p.stderr.read()
    if p.returncode != 0:
        # The run_command() function is responsible for logging STDERR
        print("Error: " + str(err))
This code would be executed the same as previous answers:
for line in run_command(cmd):
    print(line)
Your mileage may vary: I attempted @senderle's spin on Vartec's solution on Windows with Python 2.6.5, but I was getting errors, and no other solutions worked. My error was: WindowsError: [Error 6] The handle is invalid.
I found that I had to assign PIPE to every handle to get it to return the output I expected; the following worked for me.
import subprocess
def run_command(cmd):
    """given shell command, returns communication tuple of stdout and stderr"""
    return subprocess.Popen(cmd,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            stdin=subprocess.PIPE).communicate()
and call it like this ([0] gets the first element of the tuple, stdout):
run_command('tracert 11.1.0.1')[0]
After learning more, I believe I need these pipe arguments because I'm working on a custom system that uses different handles, so I had to directly control all of the standard streams.
To stop console popups (with Windows), do this:
def run_command(cmd):
    """given shell command, returns communication tuple of stdout and stderr"""
    # instantiate a startupinfo obj:
    startupinfo = subprocess.STARTUPINFO()
    # set the use-show-window flag; might make this conditional on being in Windows:
    startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
    # pass as the startupinfo keyword argument:
    return subprocess.Popen(cmd,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            stdin=subprocess.PIPE,
                            startupinfo=startupinfo).communicate()

run_command('tracert 11.1.0.1')
run_command('tracert 11.1.0.1')
On Python 3.7+, use subprocess.run and pass capture_output=True:
import subprocess
result = subprocess.run(['echo', 'hello', 'world'], capture_output=True)
print(repr(result.stdout))
This will return bytes:
b'hello world\n'
If you want to convert the bytes to a string, add text=True:
result = subprocess.run(['echo', 'hello', 'world'], capture_output=True, text=True)
print(repr(result.stdout))
This will read the bytes using your default encoding:
'hello world\n'
If you need to manually specify a different encoding, use encoding="your encoding" instead of text=True:
result = subprocess.run(['echo', 'hello', 'world'], capture_output=True, encoding="utf8")
print(repr(result.stdout))
Splitting the initial command for the subprocess might be tricky and cumbersome.
Use shlex.split() to help yourself out.
Sample command
git log -n 5 --since "5 years ago" --until "2 year ago"
The code
from subprocess import check_output
from shlex import split
res = check_output(split('git log -n 5 --since "5 years ago" --until "2 year ago"'))
print(res)
>>> b'commit 7696ab087a163e084d6870bb4e5e4d4198bdc61a\nAuthor: Artur Barseghyan...'
Without shlex.split() the code would look as follows
res = check_output([
    'git',
    'log',
    '-n',
    '5',
    '--since',
    '5 years ago',
    '--until',
    '2 year ago',
])
print(res)
>>> b'commit 7696ab087a163e084d6870bb4e5e4d4198bdc61a\nAuthor: Artur Barseghyan...'
Here is a solution that works whether you want to print the output while the process is running or not.
I also added the current working directory; it has been useful to me more than once.
Hoping the solution helps someone :).
import subprocess
def run_command(cmd_and_args, print_constantly=False, cwd=None):
    """Runs a system command.

    :param cmd_and_args: the command to run, with or without a pipe (|).
    :param print_constantly: if True, the output is printed continuously until the command ends.
    :param cwd: the current working directory (the directory from which to execute the command)
    :return: a tuple containing the return code, the stdout and the stderr of the command
    """
    output = []
    process = subprocess.Popen(cmd_and_args, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd)
    while True:
        next_line = process.stdout.readline()
        if next_line:
            output.append(str(next_line))
            if print_constantly:
                print(next_line)
        elif process.poll() is not None:  # no more output and the process has exited
            break
    error = process.communicate()[1]
    return process.returncode, '\n'.join(output), error
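A quick usage sketch of the function above (the command is illustrative):
return_code, out, err = run_command('ls -l', print_constantly=True)
print(return_code, out, err)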
For some reason, this one works on Python 2.7 and you only need to import os!
import os
def bash(command):
    output = os.popen(command).read()
    return output

print_me = bash('ls -l')
print(print_me)
If you need to run a shell command on multiple files, this did the trick for me.
import os
import subprocess
# Define a function for running commands and capturing stdout line by line
# (Modified from Vartec's solution because it wasn't printing all lines)
def runProcess(exe):
    p = subprocess.Popen(exe, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    return iter(p.stdout.readline, b'')

# Get all filenames in working directory
for filename in os.listdir('./'):
    # This command will be run on each file
    cmd = 'nm ' + filename
    # Run the command and capture the output line by line.
    for line in runProcess(cmd.split()):
        # Eliminate leading and trailing whitespace
        line = line.strip()
        # Split the output
        output = line.split()
        # Filter the output and print relevant lines
        if len(output) > 2:
            if output[2] == 'set_program_name':
                print filename
                print line
Edit: Just saw Max Persson's solution with J.F. Sebastian's suggestion. Went ahead and incorporated that.
According to @senderle, if you use Python 3.6 like me:
def sh(cmd, input=""):
    rst = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, input=input.encode("utf-8"))
    assert rst.returncode == 0, rst.stderr.decode("utf-8")
    return rst.stdout.decode("utf-8")

sh("ls -a")
It will act exactly as if you ran the command in bash.
Improvement for better logging: for better output, you can use an iterator, as shown below.
from subprocess import Popen, PIPE

def shell_command(cmd):
    result = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
    output = iter(result.stdout.readline, b'')
    error = iter(result.stderr.readline, b'')
    print("##### Output #####")
    for line in output:
        print(line.decode("utf-8"))  # Convert bytes to str
    print("##### Error #####")
    for line in error:
        print(line.decode("utf-8"))  # Convert bytes to str

shell_command("ls")  # this will display all the files & folders in the directory
Another method, using getstatusoutput (easy to understand):
from subprocess import getstatusoutput

status_code, output = getstatusoutput("ls")  # this will list all the files & folders available in the directory
print(output)  # this will give the terminal output
If you use the subprocess Python module, you are able to handle the STDOUT, STDERR and return code of the command separately. You can see an example of a complete command caller implementation below. Of course you can extend it with try..except if you want.
The below function returns the STDOUT, STDERR and Return code so you can handle them in the other script.
import subprocess

def command_caller(command=None):
    sp = subprocess.Popen(command, stderr=subprocess.PIPE, stdout=subprocess.PIPE, shell=False)
    out, err = sp.communicate()
    if sp.returncode:
        print(
            "Return code: %(ret_code)s Error message: %(err_msg)s"
            % {"ret_code": sp.returncode, "err_msg": err}
        )
    return sp.returncode, out, err
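A brief usage sketch of command_caller (the argument list is illustrative):
ret_code, out, err = command_caller(['ls', '-l'])
print(ret_code)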
I would like to suggest simppl as an option for consideration. It is a module available via PyPI (pip install simppl) and runs on Python 3.
simppl allows the user to run shell commands and read the output from the screen.
The developers suggest three types of use cases:
The simplest usage will look like this:
from simppl.simple_pipeline import SimplePipeline
sp = SimplePipeline(start=0, end=100)
sp.print_and_run('<YOUR_FIRST_OS_COMMAND>')
sp.print_and_run('<YOUR_SECOND_OS_COMMAND>')
To run multiple commands concurrently use:
commands = ['<YOUR_FIRST_OS_COMMAND>', '<YOUR_SECOND_OS_COMMAND>']
max_number_of_processes = 4
sp.run_parallel(commands, max_number_of_processes)
Finally, if your project uses the cli module, you can run another command_line_tool directly as part of a pipeline. The other tool will be run from the same process, but it will appear in the logs as another command in the pipeline. This enables smoother debugging and refactoring of tools calling other tools.
from example_module import example_tool
sp.print_and_run_clt(example_tool.run, ['first_number', 'second_number'],
                     {'-key1': 'val1', '-key2': 'val2'},
                     {'--flag'})
Note that printing to STDOUT/STDERR is via Python's logging module.
Here is complete code showing how simppl works:
import logging
from logging.config import dictConfig
logging_config = dict(
    version=1,
    formatters={
        'f': {'format':
              '%(asctime)s %(name)-12s %(levelname)-8s %(message)s'}
    },
    handlers={
        'h': {'class': 'logging.StreamHandler',
              'formatter': 'f',
              'level': logging.DEBUG}
    },
    root={
        'handlers': ['h'],
        'level': logging.DEBUG,
    },
)

dictConfig(logging_config)

from simppl.simple_pipeline import SimplePipeline
sp = SimplePipeline(0, 100)
sp.print_and_run('ls')
Here is a simple and flexible solution that works on a variety of OS versions, and both Python 2 and 3, using IPython in shell mode:
from IPython.terminal.embed import InteractiveShellEmbed
my_shell = InteractiveShellEmbed()
result = my_shell.getoutput("echo hello world")
print(result)
Out: ['hello world']
It has a couple of advantages:
It only requires an IPython install, so you don't really need to worry about your specific Python or OS version when using it; it comes with Jupyter, which has a wide range of support.
It takes a simple string by default - so no need to use shell mode arg or string splitting, making it slightly cleaner IMO
It also makes it cleaner to easily substitute variables or even entire Python commands in the string itself
To demonstrate:
var = "hello world "
result = my_shell.getoutput("echo {var*2}")
print(result)
Out: ['hello world hello world']
Just wanted to give you an extra option, especially if you already have Jupyter installed.
Naturally, if you are in an actual Jupyter notebook as opposed to a .py script, you can also always do:
result = !echo hello world
print(result)
To accomplish the same.
The output can be redirected to a text file and then read back.
import subprocess
import os
import tempfile
def execute_to_file(command):
    """
    This function executes the command
    and passes its output to a tempfile, then reads it back.
    It is useful for processes that deploy child processes.
    """
    temp_file = tempfile.NamedTemporaryFile(delete=False)
    temp_file.close()
    path = temp_file.name
    command = command + " > " + path
    proc = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
    if proc.stderr:
        # if the command failed, return
        os.unlink(path)
        return
    with open(path, 'r') as f:
        data = f.read()
    os.unlink(path)
    return data

if __name__ == "__main__":
    path = "Somepath"
    command = 'ecls.exe /files ' + path
    print(execute_to_file(command))
eg, execute('ls -ahl')
It differentiates three/four possible returns and OS platforms:
no output, but ran successfully
output an empty line, ran successfully
run failed
output something, ran successfully
The function is below:
import os
import platform
import subprocess
from time import sleep

def execute(cmd, output=True, DEBUG_MODE=False):
    """Executes a bash command.
    (cmd, output=True)
    output: whether to print shell output to screen; only affects screen display, does not affect returned values
    return: ...regardless of output=True/False...
        returns shell output as a list in which each element is a line of string (whitespace stripped on both sides) from the output
        could be
        [], ie, len()=0 --> no output;
        [''] --> output an empty line;
        None --> an error occurred, see below
    if an error occurs, returns None (ie, is None) and prints the error message to screen
    """
    if not DEBUG_MODE:
        print "Command: " + cmd

        # https://stackoverflow.com/a/40139101/2292993
        def _execute_cmd(cmd):
            if os.name == 'nt' or platform.system() == 'Windows':
                # set stdin, out, err all to PIPE to get results (other than None) after running the Popen() instance
                p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
            else:
                # Use bash; the default is sh
                p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, executable="/bin/bash")

            # the Popen() instance starts running once instantiated (??)
            # additionally, communicate(), or poll() and wait for the process to terminate
            # communicate() accepts optional input as stdin to the pipe (requires setting stdin=subprocess.PIPE above); returns out, err as a tuple
            # if communicate(), the results are buffered in memory

            # Read stdout from subprocess until the buffer is empty!
            # if an error occurs, stdout is '', which means the loop below is essentially skipped
            # A prefix of 'b' or 'B' is ignored in Python 2;
            # it indicates that the literal should become a bytes literal in Python 3
            # (e.g. when code is automatically converted with 2to3).
            # return iter(p.stdout.readline, b'')
            for line in iter(p.stdout.readline, b''):
                # # Windows has \r\n, Unix has \n, old Mac has \r
                # if line not in ['', '\n', '\r', '\r\n']:  # Don't print blank lines
                yield line
            while p.poll() is None:
                sleep(.1)  # Don't waste CPU cycles
            # Empty STDERR buffer
            err = p.stderr.read()
            if p.returncode != 0:
                # responsible for logging STDERR
                print("Error: " + str(err))
                yield None

        out = []
        for line in _execute_cmd(cmd):
            # error did not occur earlier
            if line is not None:
                # trailing comma to avoid a newline (by print itself) being printed
                if output: print line,
                out.append(line.strip())
            else:
                # error occurred earlier
                out = None
        return out
    else:
        print "Simulation! The command is " + cmd
        print ""

Python how to call another python script using subprocess.call and get the correct return value [duplicate]


How to execute executable file in windows and save its output using Python [duplicate]

I want to write a function that will execute a shell command and return its output as a string, no matter, is it an error or success message. I just want to get the same result that I would have gotten with the command line.
What would be a code example that would do such a thing?
For example:
def run_command(cmd):
# ??????
print run_command('mysqladmin create test -uroot -pmysqladmin12')
# Should output something like:
# mysqladmin: CREATE DATABASE failed; error: 'Can't create database 'test'; database exists'
In all officially maintained versions of Python, the simplest approach is to use the subprocess.check_output function:
>>> subprocess.check_output(['ls', '-l'])
b'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'
check_output runs a single program that takes only arguments as input.1 It returns the result exactly as printed to stdout. If you need to write input to stdin, skip ahead to the run or Popen sections. If you want to execute complex shell commands, see the note on shell=True at the end of this answer.
The check_output function works in all officially maintained versions of Python. But for more recent versions, a more flexible approach is available.
Modern versions of Python (3.5 or higher): run
If you're using Python 3.5+, and do not need backwards compatibility, the new run function is recommended by the official documentation for most tasks. It provides a very general, high-level API for the subprocess module. To capture the output of a program, pass the subprocess.PIPE flag to the stdout keyword argument. Then access the stdout attribute of the returned CompletedProcess object:
>>> import subprocess
>>> result = subprocess.run(['ls', '-l'], stdout=subprocess.PIPE)
>>> result.stdout
b'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'
The return value is a bytes object, so if you want a proper string, you'll need to decode it. Assuming the called process returns a UTF-8-encoded string:
>>> result.stdout.decode('utf-8')
'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'
This can all be compressed to a one-liner if desired:
>>> subprocess.run(['ls', '-l'], stdout=subprocess.PIPE).stdout.decode('utf-8')
'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'
If you want to pass input to the process's stdin, you can pass a bytes object to the input keyword argument:
>>> cmd = ['awk', 'length($0) > 5']
>>> ip = 'foo\nfoofoo\n'.encode('utf-8')
>>> result = subprocess.run(cmd, stdout=subprocess.PIPE, input=ip)
>>> result.stdout.decode('utf-8')
'foofoo\n'
You can capture errors by passing stderr=subprocess.PIPE (capture to result.stderr) or stderr=subprocess.STDOUT (capture to result.stdout along with regular output). If you want run to throw an exception when the process returns a nonzero exit code, you can pass check=True. (Or you can check the returncode attribute of result above.) When security is not a concern, you can also run more complex shell commands by passing shell=True as described at the end of this answer.
Later versions of Python streamline the above further. In Python 3.7+, the above one-liner can be spelled like this:
>>> subprocess.run(['ls', '-l'], capture_output=True, text=True).stdout
'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'
Using run this way adds just a bit of complexity, compared to the old way of doing things. But now you can do almost anything you need to do with the run function alone.
Older versions of Python (3-3.4): more about check_output
If you are using an older version of Python, or need modest backwards compatibility, you can use the check_output function as briefly described above. It has been available since Python 2.7.
subprocess.check_output(*popenargs, **kwargs)
It takes takes the same arguments as Popen (see below), and returns a string containing the program's output. The beginning of this answer has a more detailed usage example. In Python 3.5+, check_output is equivalent to executing run with check=True and stdout=PIPE, and returning just the stdout attribute.
You can pass stderr=subprocess.STDOUT to ensure that error messages are included in the returned output. When security is not a concern, you can also run more complex shell commands by passing shell=True as described at the end of this answer.
If you need to pipe from stderr or pass input to the process, check_output won't be up to the task. See the Popen examples below in that case.
Complex applications and legacy versions of Python (2.6 and below): Popen
If you need deep backwards compatibility, or if you need more sophisticated functionality than check_output or run provide, you'll have to work directly with Popen objects, which encapsulate the low-level API for subprocesses.
The Popen constructor accepts either a single command without arguments, or a list containing a command as its first item, followed by any number of arguments, each as a separate item in the list. shlex.split can help parse strings into appropriately formatted lists. Popen objects also accept a host of different arguments for process IO management and low-level configuration.
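For example, here is a small sketch of shlex.split in action (the grep command and filename are just illustrative). Note that it respects quoting, so the quoted phrase stays a single list item:
import shlex

args = shlex.split('grep -i "hello world" myfile.txt')
print(args)  # ['grep', '-i', 'hello world', 'myfile.txt']
# args can now be passed straight to the Popen constructor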
To send input and capture output, communicate is almost always the preferred method. As in:
output = subprocess.Popen(["mycmd", "myarg"],
stdout=subprocess.PIPE).communicate()[0]
Or
>>> import subprocess
>>> p = subprocess.Popen(['ls', '-a'], stdout=subprocess.PIPE,
... stderr=subprocess.PIPE)
>>> out, err = p.communicate()
>>> print out
.
..
foo
If you set stdin=PIPE, communicate also allows you to pass data to the process via stdin:
>>> cmd = ['awk', 'length($0) > 5']
>>> p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
... stderr=subprocess.PIPE,
... stdin=subprocess.PIPE)
>>> out, err = p.communicate('foo\nfoofoo\n')
>>> print out
foofoo
Note Aaron Hall's answer, which indicates that on some systems, you may need to set stdout, stderr, and stdin all to PIPE (or DEVNULL) to get communicate to work at all.
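As a sketch of that workaround (subprocess.DEVNULL requires Python 3.3+; on Python 2, open(os.devnull, 'wb') serves the same purpose):
import subprocess

p = subprocess.Popen(['ls'],
                     stdout=subprocess.PIPE,
                     stderr=subprocess.DEVNULL,  # explicitly discard stderr
                     stdin=subprocess.DEVNULL)   # explicitly supply no stdin
out, _ = p.communicate()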
In some rare cases, you may need complex, real-time output capturing. Vartec's answer suggests a way forward, but methods other than communicate are prone to deadlocks if not used carefully.
As with all the above functions, when security is not a concern, you can run more complex shell commands by passing shell=True.
Notes
1. Running shell commands: the shell=True argument
Normally, each call to run, check_output, or the Popen constructor executes a single program. That means no fancy bash-style pipes. If you want to run complex shell commands, you can pass shell=True, which all three functions support. For example:
>>> subprocess.check_output('cat books/* | wc', shell=True, text=True)
' 1299377 17005208 101299376\n'
However, doing this raises security concerns. If you're doing anything more than light scripting, you might be better off calling each process separately, and passing the output from each as an input to the next, via
run(cmd, [stdout=etc...], input=other_output)
Or
Popen(cmd, [stdout=etc...]).communicate(other_output)
The temptation to directly connect pipes is strong; resist it. Otherwise, you'll likely see deadlocks, or have to resort to fragile workarounds.
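Here is a minimal sketch of that safer hand-off (ls and wc are just stand-in commands): run the first process to completion, then pass its captured stdout to the second via input:
import subprocess

first = subprocess.run(['ls', '-l'], stdout=subprocess.PIPE)
# Feed the first command's output in as the second command's stdin,
# rather than wiring the two processes' pipes together directly.
second = subprocess.run(['wc', '-l'], stdout=subprocess.PIPE, input=first.stdout)
print(second.stdout.decode('utf-8'))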
This way is easier, but it only works on Unix (including Cygwin) and Python 2.7.
import commands
print commands.getstatusoutput('wc -l file')
It returns a tuple with the (return_value, output).
For a solution that works in both Python2 and Python3, use the subprocess module instead:
from subprocess import Popen, PIPE

output = Popen(["date"], stdout=PIPE)
response = output.communicate()
print(response)
I had the same problem but figured out a very simple way of doing this:
import subprocess
output = subprocess.getoutput("ls -l")
print(output)
Note: This solution is Python 3 specific, as subprocess.getoutput() doesn't exist in Python 2.
Something like that:
def runProcess(exe):
    p = subprocess.Popen(exe, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        # poll() returns None while the subprocess is running
        retcode = p.poll()
        line = p.stdout.readline()
        yield line
        if retcode is not None:
            break
Note that I'm redirecting stderr to stdout; it might not be exactly what you want, but I want error messages too.
This function yields lines as they come (normally you'd have to wait for the subprocess to finish to get the output as a whole).
For your case the usage would be:
for line in runProcess('mysqladmin create test -uroot -pmysqladmin12'.split()):
print line,
This is a tricky but super simple solution which works in many situations:
import os
os.system('sample_cmd > tmp')
print(open('tmp', 'r').read())
A temporary file (here, tmp) is created with the output of the command, and you can read your desired output from it.
Extra note from the comments:
You can remove the tmp file after a one-time job; if you need to do this several times, there is no need to delete it.
os.remove('tmp')
Vartec's answer doesn't read all lines, so I made a version that did:
def run_command(command):
    p = subprocess.Popen(command,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT)
    return iter(p.stdout.readline, b'')
Usage is the same as the accepted answer:
command = 'mysqladmin create test -uroot -pmysqladmin12'.split()
for line in run_command(command):
print(line)
You can use the following commands to run any shell command. I have used them on Ubuntu.
import os
os.popen('your command here').read()
Note: This has been deprecated since Python 2.6. Now you should use subprocess.Popen. Below is an example:
import subprocess
p = subprocess.Popen("Your command", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()[0]
print p.split("\n")
I had a slightly different flavor of the same problem with the following requirements:
Capture and return STDOUT messages as they accumulate in the STDOUT buffer (i.e., in real time). @vartec solved this Pythonically with his use of generators and the 'yield' keyword above.
Print all STDOUT lines (even if the process exits before the STDOUT buffer can be fully read).
Don't waste CPU cycles polling the process at high frequency.
Check the return code of the subprocess.
Print STDERR (separate from STDOUT) if we get a non-zero error return code.
I've combined and tweaked previous answers to come up with the following:
import subprocess
from time import sleep
def run_command(command):
    p = subprocess.Popen(command,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE,
                         shell=True)
    # Read stdout from the subprocess until the buffer is empty
    for line in iter(p.stdout.readline, b''):
        if line:  # Don't print blank lines
            yield line
    # This ensures the process has completed, AND sets the 'returncode' attr
    while p.poll() is None:
        sleep(.1)  # Don't waste CPU cycles
    # Empty the STDERR buffer
    err = p.stderr.read()
    if p.returncode != 0:
        # The run_command() function is responsible for logging STDERR
        print("Error: " + str(err))
This code would be executed the same as previous answers:
for line in run_command(cmd):
print(line)
Your mileage may vary: I attempted @senderle's spin on Vartec's solution on Windows with Python 2.6.5, but I was getting errors, and no other solutions worked. My error was: WindowsError: [Error 6] The handle is invalid.
I found that I had to assign PIPE to every handle to get it to return the output I expected - the following worked for me.
import subprocess
def run_command(cmd):
    """Given a shell command, returns the communication tuple of stdout and stderr"""
    return subprocess.Popen(cmd,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            stdin=subprocess.PIPE).communicate()
and call it like this ([0] gets the first element of the tuple, stdout):
run_command('tracert 11.1.0.1')[0]
After learning more, I believe I need these pipe arguments because I'm working on a custom system that uses different handles, so I had to directly control all the std's.
To stop console popups (with Windows), do this:
def run_command(cmd):
    """Given a shell command, returns the communication tuple of stdout and stderr"""
    # instantiate a startupinfo obj:
    startupinfo = subprocess.STARTUPINFO()
    # set the use-show-window flag (you might make this conditional on being in Windows):
    startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
    # pass as the startupinfo keyword argument:
    return subprocess.Popen(cmd,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            stdin=subprocess.PIPE,
                            startupinfo=startupinfo).communicate()
run_command('tracert 11.1.0.1')
On Python 3.7+, use subprocess.run and pass capture_output=True:
import subprocess
result = subprocess.run(['echo', 'hello', 'world'], capture_output=True)
print(repr(result.stdout))
This will return bytes:
b'hello world\n'
If you want it to convert the bytes to a string, add text=True:
result = subprocess.run(['echo', 'hello', 'world'], capture_output=True, text=True)
print(repr(result.stdout))
This will read the bytes using your default encoding:
'hello world\n'
If you need to manually specify a different encoding, use encoding="your encoding" instead of text=True:
result = subprocess.run(['echo', 'hello', 'world'], capture_output=True, encoding="utf8")
print(repr(result.stdout))
Splitting the initial command for the subprocess might be tricky and cumbersome.
Use shlex.split() to help yourself out.
Sample command
git log -n 5 --since "5 years ago" --until "2 year ago"
The code
from subprocess import check_output
from shlex import split
res = check_output(split('git log -n 5 --since "5 years ago" --until "2 year ago"'))
print(res)
b'commit 7696ab087a163e084d6870bb4e5e4d4198bdc61a\nAuthor: Artur Barseghyan...'
Without shlex.split() the code would look as follows
res = check_output([
'git',
'log',
'-n',
'5',
'--since',
'5 years ago',
'--until',
'2 year ago'
])
print(res)
b'commit 7696ab087a163e084d6870bb4e5e4d4198bdc61a\nAuthor: Artur Barseghyan...'
Here is a solution that works whether you want to print the output while the process is running or not. I also added the current working directory; it has been useful to me more than once.
Hoping the solution will help someone :).
import subprocess
def run_command(cmd_and_args, print_constantly=False, cwd=None):
    """Runs a system command.

    :param cmd_and_args: the command to run, with or without a pipe (|).
    :param print_constantly: if True, the output is logged continuously until the command ends.
    :param cwd: the current working directory (the directory from which you would like to execute the command)
    :return: a tuple containing the return code, the stdout and the stderr of the command
    """
    output = []
    process = subprocess.Popen(cmd_and_args, shell=True, stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE, cwd=cwd)
    while True:
        next_line = process.stdout.readline()
        if next_line:
            output.append(str(next_line))
            if print_constantly:
                print(next_line)
        elif process.poll() is not None:  # no more output and the process has exited
            break
    error = process.communicate()[1]
    return process.returncode, '\n'.join(output), error
For some reason, this one works on Python 2.7, and you only need to import os!
import os
def bash(command):
    output = os.popen(command).read()
    return output
print_me = bash('ls -l')
print(print_me)
If you need to run a shell command on multiple files, this did the trick for me.
import os
import subprocess
# Define a function for running commands and capturing stdout line by line
# (Modified from Vartec's solution because it wasn't printing all lines)
def runProcess(exe):
    p = subprocess.Popen(exe, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    return iter(p.stdout.readline, b'')

# Get all filenames in the working directory
for filename in os.listdir('./'):
    # This command will be run on each file
    cmd = 'nm ' + filename
    # Run the command and capture the output line by line.
    for line in runProcess(cmd.split()):
        # Eliminate leading and trailing whitespace
        line = line.strip()
        # Split the output
        output = line.split()
        # Filter the output and print relevant lines
        if len(output) > 2:
            if output[2] == 'set_program_name':
                print filename
                print line
Edit: Just saw Max Persson's solution with J.F. Sebastian's suggestion. Went ahead and incorporated that.
According to @senderle, if you use Python 3.6 like me:
import subprocess

def sh(cmd, input=""):
    rst = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE, input=input.encode("utf-8"))
    assert rst.returncode == 0, rst.stderr.decode("utf-8")
    return rst.stdout.decode("utf-8")

sh("ls -a")
This will act exactly as if you ran the command in bash.
Improvement for better logging: for better output, you can use an iterator. From below, we get better output.
from subprocess import Popen, PIPE

def shell_command(cmd):
    result = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
    output = iter(result.stdout.readline, b'')
    error = iter(result.stderr.readline, b'')
    print("##### OutPut ###")
    for line in output:
        print(line.decode("utf-8"))
    print("###### Error ########")
    for line in error:
        print(line.decode("utf-8"))  # Convert bytes to str

shell_command("ls")  # this will display all the files & folders in the directory
Another method, using getstatusoutput (easy to understand):
from subprocess import getstatusoutput

status_code, output = getstatusoutput("ls")  # this will list all files & folders in the directory
print(output)  # this will give the terminal output
If you use the subprocess Python module, you can handle the STDOUT, STDERR, and return code of the command separately. You can see an example of the complete command-caller implementation below. Of course you can extend it with try..except if you want.
The below function returns the STDOUT, STDERR, and return code so you can handle them in another script.
import subprocess

def command_caller(command=None):
    sp = subprocess.Popen(command, stderr=subprocess.PIPE, stdout=subprocess.PIPE, shell=False)
    out, err = sp.communicate()
    if sp.returncode:
        print(
            "Return code: %(ret_code)s Error message: %(err_msg)s"
            % {"ret_code": sp.returncode, "err_msg": err}
        )
    return sp.returncode, out, err
I would like to suggest simppl as an option for consideration. It is a module available via PyPI (pip install simppl) that runs on Python 3.
simppl allows the user to run shell commands and read the output from the screen.
The developers suggest three types of use cases:
The simplest usage will look like this:
from simppl.simple_pipeline import SimplePipeline
sp = SimplePipeline(start=0, end=100)
sp.print_and_run('<YOUR_FIRST_OS_COMMAND>')
sp.print_and_run('<YOUR_SECOND_OS_COMMAND>')
To run multiple commands concurrently use:
commands = ['<YOUR_FIRST_OS_COMMAND>', '<YOUR_SECOND_OS_COMMAND>']
max_number_of_processes = 4
sp.run_parallel(commands, max_number_of_processes)
Finally, if your project uses the cli module, you can run another command_line_tool directly as part of a pipeline. The other tool will be run from the same process, but it will appear in the logs as another command in the pipeline. This enables smoother debugging and refactoring of tools calling other tools.
from example_module import example_tool
sp.print_and_run_clt(example_tool.run, ['first_number', 'second_number'],
                     {'-key1': 'val1', '-key2': 'val2'},
                     {'--flag'})
Note that printing to STDOUT/STDERR is via Python's logging module.
Here is a complete code to show how simppl works:
import logging
from logging.config import dictConfig

logging_config = dict(
    version=1,
    formatters={
        'f': {'format': '%(asctime)s %(name)-12s %(levelname)-8s %(message)s'}
    },
    handlers={
        'h': {'class': 'logging.StreamHandler',
              'formatter': 'f',
              'level': logging.DEBUG}
    },
    root={
        'handlers': ['h'],
        'level': logging.DEBUG,
    },
)
dictConfig(logging_config)

from simppl.simple_pipeline import SimplePipeline
sp = SimplePipeline(0, 100)
sp.print_and_run('ls')
Here is a simple and flexible solution that works on a variety of OS versions, and both Python 2 and 3, using IPython in shell mode:
from IPython.terminal.embed import InteractiveShellEmbed
my_shell = InteractiveShellEmbed()
result = my_shell.getoutput("echo hello world")
print(result)
Out: ['hello world']
It has a few advantages:
It only requires an IPython install, so you don't really need to worry about your specific Python or OS version when using it; it comes with Jupyter, which has a wide range of support.
It takes a simple string by default - so no need to use shell mode arg or string splitting, making it slightly cleaner IMO
It also makes it cleaner to easily substitute variables or even entire Python commands in the string itself
To demonstrate:
var = "hello world "
result = my_shell.getoutput("echo {var*2}")
print(result)
Out: ['hello world hello world']
Just wanted to give you an extra option, especially if you already have Jupyter installed.
Naturally, if you are in an actual Jupyter notebook as opposed to a .py script you can also always do:
result = !echo hello world
print(result)
This accomplishes the same thing.
The output can be redirected to a text file and then read back.
import subprocess
import os
import tempfile
def execute_to_file(command):
    """
    This function executes the command
    and passes its output to a tempfile, then reads it back.
    It is useful for processes that deploy child processes.
    """
    temp_file = tempfile.NamedTemporaryFile(delete=False)
    temp_file.close()
    path = temp_file.name
    command = command + " > " + path
    proc = subprocess.run(command, shell=True, stdout=subprocess.PIPE,
                          stderr=subprocess.PIPE, universal_newlines=True)
    if proc.stderr:
        # if the command failed, clean up and return
        os.unlink(path)
        return
    with open(path, 'r') as f:
        data = f.read()
    os.unlink(path)
    return data

if __name__ == "__main__":
    path = "Somepath"
    command = 'ecls.exe /files ' + path
    print(execute_to_file(command))
E.g., execute('ls -ahl'). It differentiates three or four possible return cases and handles both OS platforms:
no output, but runs successfully
outputs an empty line, runs successfully
run failed
outputs something, runs successfully
The function is below:
import os
import platform
import subprocess
from time import sleep

def execute(cmd, output=True, DEBUG_MODE=False):
    """Executes a bash command.

    (cmd, output=True)
    output: whether to print shell output to screen; only affects screen display, does not affect returned values
    return: regardless of output=True/False, returns shell output as a list, with each element
            a line of string (whitespace stripped on both sides) from the output. This could be:
            [], i.e. len()=0 --> no output;
            [''] --> output was an empty line;
            None --> an error occurred, see below
    if an error occurs, returns None (i.e. is None) and prints the error message to screen
    """
    if not DEBUG_MODE:
        print "Command: " + cmd

        # https://stackoverflow.com/a/40139101/2292993
        def _execute_cmd(cmd):
            if os.name == 'nt' or platform.system() == 'Windows':
                # set stdin, out, err all to PIPE to get results (other than None) after running the Popen() instance
                p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
            else:
                # Use bash; the default is sh
                p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, executable="/bin/bash")

            # The Popen() instance starts running once instantiated.
            # communicate() would buffer the results in memory; instead,
            # read stdout from the subprocess until the buffer is empty.
            # If an error occurs, stdout is '', so the loop below is essentially skipped.
            for line in iter(p.stdout.readline, b''):
                yield line
            while p.poll() is None:
                sleep(.1)  # Don't waste CPU cycles
            # Empty the STDERR buffer
            err = p.stderr.read()
            if p.returncode != 0:
                # responsible for logging STDERR
                print("Error: " + str(err))
                yield None

        out = []
        for line in _execute_cmd(cmd):
            if line is not None:
                # error did not occur earlier;
                # trailing comma to avoid a newline (from print itself) being printed
                if output: print line,
                out.append(line.strip())
            else:
                # error occurred earlier
                out = None
        return out
    else:
        print "Simulation! The command is " + cmd
        print ""

No output in terminal running bash script from Python [duplicate]

This question already has answers here:
Store output of subprocess.Popen call in a string [duplicate]
(15 answers)
Closed 4 years ago.
How can I get the output of a process run using subprocess.call()?
Passing a StringIO.StringIO object to stdout gives this error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/subprocess.py", line 444, in call
return Popen(*popenargs, **kwargs).wait()
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/subprocess.py", line 588, in __init__
errread, errwrite) = self._get_handles(stdin, stdout, stderr)
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/subprocess.py", line 945, in _get_handles
c2pwrite = stdout.fileno()
AttributeError: StringIO instance has no attribute 'fileno'
>>>
If you have Python version >= 2.7, you can use subprocess.check_output which basically does exactly what you want (it returns standard output as string).
Simple example (Linux version, see note):
import subprocess
print subprocess.check_output(["ping", "-c", "1", "8.8.8.8"])
Note that the ping command uses Linux notation (-c for count). If you try this on Windows, remember to change it to -n for the same result.
As commented below you can find a more detailed explanation in this other answer.
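If you want one snippet that runs on both platforms, here is a small sketch that picks the flag by platform:
import platform
import subprocess

# '-n' is the count flag on Windows; '-c' everywhere else
count_flag = '-n' if platform.system() == 'Windows' else '-c'
output = subprocess.check_output(['ping', count_flag, '1', '8.8.8.8'])
print(output)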
Output from subprocess.call() should only be redirected to files.
You should use subprocess.Popen() instead. Then you can pass subprocess.PIPE for the stderr, stdout, and/or stdin parameters and read from the pipes by using the communicate() method:
from subprocess import Popen, PIPE
p = Popen(['program', 'arg1'], stdin=PIPE, stdout=PIPE, stderr=PIPE)
output, err = p.communicate(b"input data that is passed to subprocess' stdin")
rc = p.returncode
The reasoning is that the file-like object used by subprocess.call() must have a real file descriptor, and thus implement the fileno() method. Just using any file-like object won't do the trick.
See here for more info.
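To illustrate the file-descriptor requirement, here is a sketch: a real temporary file (which has a working fileno()) is a valid stdout target for subprocess.call, while a StringIO object is not:
import subprocess
import tempfile

# A real temporary file has an OS-level file descriptor, so this works:
with tempfile.TemporaryFile() as f:
    subprocess.call(['ls'], stdout=f)
    f.seek(0)  # rewind before reading the captured output back
    output = f.read()
print(output)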
For Python 3.5+, it is recommended that you use the run function from the subprocess module. This returns a CompletedProcess object, from which you can easily obtain the output as well as the return code.
from subprocess import PIPE, run
command = ['echo', 'hello']
result = run(command, stdout=PIPE, stderr=PIPE, universal_newlines=True)
print(result.returncode, result.stdout, result.stderr)
I have the following solution. It captures the exit code, the stdout, and also the stderr of the executed external command:
import shlex
from subprocess import Popen, PIPE
def get_exitcode_stdout_stderr(cmd):
    """
    Execute the external command and get its exitcode, stdout and stderr.
    """
    args = shlex.split(cmd)
    proc = Popen(args, stdout=PIPE, stderr=PIPE)
    out, err = proc.communicate()
    exitcode = proc.returncode
    return exitcode, out, err
cmd = "..." # arbitrary external command, e.g. "python mytest.py"
exitcode, out, err = get_exitcode_stdout_stderr(cmd)
I also have a blog post on it here.
Edit: the solution was updated to a newer one that doesn't need to write to temp files.
I recently figured out how to do this; here's some example code from a current project of mine:
# Getting the random picture.
# First find all pictures:
import shlex, subprocess
cmd = 'find ../Pictures/ -regex ".*\(JPG\|NEF\|jpg\)" '
# cmd = raw_input("shell:")
args = shlex.split(cmd)
output, error = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()
# Another way to get output
# output = subprocess.Popen(args, stdout=subprocess.PIPE).stdout
ber = raw_input("search complete, display results?")
print output
# ... and on to the selection process ...
You now have the output of the command stored in the variable "output". "stdout = subprocess.PIPE" tells the class to create a file object named 'stdout' from within Popen. The communicate() method, from what I can tell, just acts as a convenient way to return a tuple of the output and the errors from the process you've run. Also, the process is run when instantiating Popen.
The key is to use the function subprocess.check_output
For example, the following function captures stdout and stderr of the process and returns that as well as whether or not the call succeeded. It is Python 2 and 3 compatible:
from subprocess import check_output, CalledProcessError, STDOUT
def system_call(command):
    """
    params:
        command: list of strings, ex. `["ls", "-l"]`
    returns: output, success
    """
    try:
        output = check_output(command, stderr=STDOUT).decode()
        success = True
    except CalledProcessError as e:
        output = e.output.decode()
        success = False
    return output, success
output, success = system_call(["ls", "-l"])
If you want to pass commands as strings rather than arrays, use this version:
from subprocess import check_output, CalledProcessError, STDOUT
import shlex
def system_call(command):
    """
    params:
        command: string, ex. `"ls -l"`
    returns: output, success
    """
    command = shlex.split(command)
    try:
        output = check_output(command, stderr=STDOUT).decode()
        success = True
    except CalledProcessError as e:
        output = e.output.decode()
        success = False
    return output, success
output, success = system_call("ls -l")
In the IPython shell:
In [8]: import subprocess
In [9]: s=subprocess.check_output(["echo", "Hello World!"])
In [10]: s
Out[10]: 'Hello World!\n'
Based on sargue's answer. Credit to sargue.
