I'm running on a machine with Python 2.6 and no, I can't upgrade for now.
I need the subprocess.check_output function, but as I understand it, this is not defined in 2.6.
So I've used a workaround:
try:
    import subprocess
    if "check_output" not in dir(subprocess):  # duck punch it in!
        def check_output(*popenargs, **kwargs):
            r"""Run command with arguments and return its output as a byte string.
            Backported from Python 2.7 as it's implemented as pure python on stdlib.
            >>> check_output(['/usr/bin/python', '--version'])
            Python 2.6.2
            """
            process = subprocess.Popen(stdout=subprocess.PIPE, *popenargs, **kwargs)
            output, unused_err = process.communicate()
            retcode = process.poll()
            if retcode:
                cmd = kwargs.get("args")
                if cmd is None:
                    cmd = popenargs[0]
                error = subprocess.CalledProcessError(retcode, cmd)
                error.output = output
                raise error
            return output
        subprocess.check_output = check_output
    # Git Information
    git_info = {
        "last_tag": subprocess.check_output(['git', 'describe', '--always']),
        "last_commit": subprocess.check_output(['git', 'log', '-1', '--pretty=format:\'%h (%ci)\'', '--abbrev-commit'])
    }
except Exception, e:
    raise e
else:
    data = git_info
    return data
I'm using this in conjunction with Django + WSGI.
The previous piece of code always gives me Command '['git', 'describe', '--always']' returned non-zero exit status 128.
Now, if I run git describe --always manually I get correct output, so I don't think the problem is there.
I have no idea what could cause the problem.
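To see why git fails only under Apache/WSGI, it can help to temporarily capture the command's stderr together with its stdout; this is a purely diagnostic sketch (git usually exits with status 128 on fatal errors, such as not finding a repository in the current working directory):
import subprocess

# Diagnostic only: merge stderr into stdout so git's own error message is visible
process = subprocess.Popen(['git', 'describe', '--always'],
                           stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output, _ = process.communicate()
print 'exit status: %d' % process.returncode
print 'output: %r' % output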
EDIT:
If I use the command subprocess.check_output(['ls', '-l']) or subprocess.check_output(['pwd']), things work, and from that I've understood that the view called from Django is actually running in /var/www, which is the DocumentRoot specified in the Apache config file.
The real file is not located under /var/www; in fact, everything works on my local machine where I use the local Django dev server. So the git command won't work because there is no git repository under /var/www. How can I execute the original subprocess.check_output(['git', 'describe', '--always']) from its original path (where the Python file is actually located)?
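One possible approach (a minimal sketch, assuming the git repository is the directory that contains the Python module itself) is to derive the working directory from __file__ and pass it as cwd:
import os
import subprocess

# Directory containing this .py file; assumed to be inside the git repository
repo_dir = os.path.dirname(os.path.abspath(__file__))
last_tag = subprocess.check_output(['git', 'describe', '--always'], cwd=repo_dir)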
I've solved by passing the cwd argument to check_output as suggested in a comment.
# module-level requirements for this snippet
import logging
import os

log = logging.getLogger(__name__)

def get_git_info():
    git_info = {}
    try:
        import subprocess
        # subprocess.check_output did not exist in 2.6
        if "check_output" not in dir(subprocess):  # duck punch it in!
            # workaround/redefinition for the subprocess.check_output() command
            def check_output(*popenargs, **kwargs):
                """Run command with arguments and return its output as a byte string.
                Backported from Python 2.7 as it's implemented as pure python on stdlib.
                >>> check_output(['/usr/bin/python', '--version'])
                Python 2.6.2
                """
                process = subprocess.Popen(stdout=subprocess.PIPE, *popenargs, **kwargs)
                output, unused_err = process.communicate()
                retcode = process.poll()
                if retcode:
                    cmd = kwargs.get("args")
                    if cmd is None:
                        cmd = popenargs[0]
                    error = subprocess.CalledProcessError(retcode, cmd)
                    error.output = output
                    raise error
                    # In case we want the error in string format:
                    # stderr=subprocess.STDOUT
                    # raise Exception(stderr)
                return output
            subprocess.check_output = check_output
        # Set the dir in which the git command should be invoked
        if os.path.isdir(r'/my/git/dir'):
            cwd = r'/my/git/dir'
        # If using the Django local dev server, invoke the command from the dir where this script runs
        else:
            cwd = None
        # Check that the directory is a git repo:
        # 'git rev-parse' exits with a non-zero status if cwd is not inside a git repo,
        # in which case check_output raises CalledProcessError and we fall through to the except clause
        subprocess.check_output(['git', 'rev-parse'], cwd=cwd)
        # Git information
        git_info = {
            "last_tag": subprocess.check_output(['git', 'describe', '--always'], cwd=cwd),
            "last_commit": subprocess.check_output(['git', 'log', '-1', '--pretty=format:\'%h (%ci)\'', '--abbrev-commit'], cwd=cwd),
        }
    except Exception, e:
        log.exception('Problem getting git information')
    # return the git info or an empty dict (defined above)
    return git_info
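For completeness, a minimal usage sketch showing how this could be wired into a Django view (the view function and template name here are hypothetical, not from the original project):
from django.shortcuts import render

def about(request):
    # git_info is either the populated dict or an empty dict if anything went wrong
    return render(request, 'about.html', {'git_info': get_git_info()})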
Related
I am making a program that adds additional functionality to the standard command shell in Windows. For instance, typing google followed by keywords will open a new tab with Google search for those keywords, etc. Whenever the input doesn't refer to a custom function I've created, it gets processed as a shell command using subprocess.call(rawCommand, shell=True).
Since I'd like to anticipate when my input isn't a valid command and return something like f"Invalid command: {rawCommand}", how should I go about doing that?
So far I've tried subprocess.call(rawCommand), which also prints the standard output as well as returning the exit code. So that looks like this:
>>> from subprocess import call
>>> a, b = call("echo hello!", shell=1), call("xyz arg1 arg2", shell=1)
hello!
'xyz' is not recognized as an internal or external command,
operable program or batch file.
>>> a
0
>>> b
1
I'd like to simply receive that exit code. Any ideas on how I can do this?
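One way to get only the exit code, without the child's output appearing on your console, is to redirect stdout/stderr to DEVNULL; a minimal sketch (assumes Python 3.3+ for subprocess.DEVNULL; silent_call is just an illustrative name):
import subprocess

def silent_call(raw_command):
    # Returns just the exit code; the command's own output is discarded
    return subprocess.call(raw_command, shell=True,
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL)

if silent_call("xyz arg1 arg2") != 0:
    print("Invalid command: xyz arg1 arg2")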
Should you one day want to deal with encoding errors, get back the result of the command you're running, have a timeout, or decide which exit codes other than 0 may not trigger errors (I'm looking at you, Java runtime!), here's a complete function that does the job:
import os
from logging import getLogger
import subprocess

logger = getLogger()

def command_runner(command, valid_exit_codes=None, timeout=300, shell=False, encoding='utf-8',
                   windows_no_window=False, **kwargs):
    """
    Whenever we can, we need to avoid shell=True in order to preserve better security
    Runs system command, returns exit code and stdout/stderr output, and logs output on error
    valid_exit_codes is a list of codes that don't trigger an error
    windows_no_window will hide the command window (works with Microsoft Windows only)
    Accepts subprocess.check_output arguments
    """
    # Set default values for kwargs
    errors = kwargs.pop('errors', 'backslashreplace')  # Don't let encoding issues make you mad
    universal_newlines = kwargs.pop('universal_newlines', False)
    creationflags = kwargs.pop('creationflags', 0)
    if windows_no_window:
        creationflags = creationflags | subprocess.CREATE_NO_WINDOW
    try:
        # universal_newlines=True makes netstat command fail under windows
        # timeout does not work under Python 2.7 with subprocess32 < 3.5
        # decoder may be unicode_escape for dos commands or utf-8 for powershell
        output = subprocess.check_output(command, stderr=subprocess.STDOUT, shell=shell,
                                         timeout=timeout, universal_newlines=universal_newlines,
                                         encoding=encoding, errors=errors,
                                         creationflags=creationflags, **kwargs)
    except subprocess.CalledProcessError as exc:
        exit_code = exc.returncode
        try:
            output = exc.output
        except Exception:
            output = "command_runner: Could not obtain output from command."
        if exit_code in (valid_exit_codes if valid_exit_codes is not None else [0]):
            logger.debug('Command [%s] returned with exit code [%s]. Command output was:' % (command, exit_code))
            if isinstance(output, str):
                logger.debug(output)
            return exc.returncode, output
        else:
            logger.error('Command [%s] failed with exit code [%s]. Command output was:' %
                         (command, exc.returncode))
            logger.error(output)
            return exc.returncode, output
    # OSError if not a valid executable
    except (OSError, IOError) as exc:
        logger.error('Command [%s] failed because of OS [%s].' % (command, exc))
        return None, exc
    except subprocess.TimeoutExpired:
        logger.error('Timeout [%s seconds] expired for command [%s] execution.' % (timeout, command))
        return None, 'Timeout of %s seconds expired.' % timeout
    except Exception as exc:
        logger.error('Command [%s] failed for unknown reasons [%s].' % (command, exc))
        logger.debug('Error:', exc_info=True)
        return None, exc
    else:
        logger.debug('Command [%s] returned with exit code [0]. Command output was:' % command)
        if output:
            logger.debug(output)
        return 0, output
Usage:
exit_code, output = command_runner('whoami', shell=True)
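And, for example, to accept additional exit codes (grep exits with 1 when it simply finds no match) and shorten the timeout, using the valid_exit_codes and timeout parameters defined above:
# Exit codes 0 and 1 are both treated as success here; give up after 60 seconds
exit_code, output = command_runner(['grep', '-r', 'TODO', '.'],
                                   valid_exit_codes=[0, 1], timeout=60)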
Some shells have a syntax-checking mode (e.g., bash -n), but that’s the only form of error that’s separable from “try to execute the commands and see what happens”. Defining a larger class of “immediate” errors is a fraught proposition: if echo hello; ./foo is invalid because foo can’t be found as a command, what about false && ./foo, which will never try to run it, or cp /bin/ls foo; ./foo, which may succeed (or might fail to copy)? What about eval $(configure_shell); foo which might or might not manipulate PATH so as to find foo? What about foo || install_foo, where the failure might be anticipated?
As such, anticipating failure is not possible in any meaningful sense: your only real option is to capture the command’s output/error (as mentioned in the comments) and report them in some useful way.
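Concretely, that usually boils down to something like the following sketch (assumes Python 3.7+ for capture_output; run_and_report is an illustrative name):
import subprocess

def run_and_report(raw_command):
    result = subprocess.run(raw_command, shell=True, capture_output=True, text=True)
    if result.returncode != 0:
        # Report whatever the shell or the command itself wrote to stderr
        print("Command failed ({}): {}".format(result.returncode, result.stderr.strip()))
    return result.returncode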
I am trying to move a folder into another folder, but I get a Permission Denied error when I perform this operation from a Python script, whereas the move works fine when I run it in bash or even in interactive Python.
cmd = ['sudo', 'mv', '/path1/dir', '/path2']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
if p.returncode != 0:
    print(stderr)
I also tried adding shell=True.
p = subprocess.Popen(' '.join(cmd), shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
if p.returncode != 0:
    print(stderr)
In both cases, I am getting the following error:
"mv: cannot move '/path1/dir' to '/path2/dir': Permission denied\n"
I invoke my script in the following manner:
sudo python script.py
I tried executing each command in shell as well as Python interactive mode and I don't get any errors. Any idea what is going on over here?
After wasting hours debugging what was going wrong, I finally figured out what was happening. I was creating /path1 and /path2 using tempfile. Here is a snippet of the code:
class UtilitiesTest(unittest.TestCase):

    @staticmethod
    def createTestFiles():
        dir = tempfile.mkdtemp()
        _, file = tempfile.mkstemp(dir=dir)
        return dir, file

    def test_MoveFileToAnotherLocation(self):
        src_dir, src_file = UtilitiesTest.createTestFiles()
        dest_dir, dest_file = UtilitiesTest.createTestFiles()
        cmd = ['sudo', 'mv', src_dir, dest_dir]
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        stdout, stderr = p.communicate()
        if p.returncode != 0:
            print(stderr)
Like zwer said in the comments, if I am running this script with sudo, I don't need to add sudo to my mv command. Because I kept getting permission denied errors, I kept thinking that sudo would fix my problem. The actual issue here was that tempfile.mkstemp() returns an open file descriptor along with the file path. I didn't pay much attention to that first return value, so when I modified my createTestFiles() as below, everything started working.
    @staticmethod
    def createTestFiles():
        dir = tempfile.mkdtemp()
        fd, file = tempfile.mkstemp(dir=dir)
        os.close(fd)
        return dir, file
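As an aside, since both directories are created by the same script (and owned by the same user), the move itself does not strictly need a subprocess at all; a minimal alternative sketch using the standard library:
import shutil

# Moves src_dir (and everything in it) inside dest_dir; no sudo or subprocess needed
shutil.move(src_dir, dest_dir)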
import subprocess

def run_cmd(args_list):
    print('Running system command: {0}'.format(' '.join(args_list)))
    proc = subprocess.Popen(args_list, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    proc.communicate()
    return proc.returncode

cmd = ['hadoop', 'fs', '-test', '-e', hdfs_file_path]
code = run_cmd(cmd)
if code:
    print 'file not exist'
When I run this command to check whether a file exists in HDFS, it throws the following error:
RuntimeError: Error running command: hadoop fs -test -f /app/tmp/1.json. Return code: 1, Error: b''
How to resolve this issue?
I would use an API instead of calling subprocesses. It is always better to use an API for this, for example snakebite, which was created by Spotify. This example checks whether a file exists in the given folder:
from snakebite.client import Client
client = Client("localhost", 8020, use_trash=False)
return "fileName" in client.ls(['hdfs_path'])
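If pulling in snakebite is not an option, the original subprocess approach also works, as long as the non-zero exit status of hadoop fs -test -e is treated as "file does not exist" rather than as an error; a sketch reusing the run_cmd helper from the question:
# hadoop fs -test -e exits with 0 if the path exists, non-zero otherwise
cmd = ['hadoop', 'fs', '-test', '-e', hdfs_file_path]
if run_cmd(cmd) == 0:
    print 'file exists'
else:
    print 'file does not exist'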
I've been reading the Python documentation about the subprocess module (see here) and it talks about a subprocess.check_output() command which seems to be exactly what I need.
However, when I try and use it I get an error that it doesn't exist, and when I run dir(subprocess) it is not listed.
I am running Python 2.6.5, and the code I have used is below:
import subprocess
subprocess.check_output(["ls", "-l", "/dev/null"])
Does anyone have any idea why this is happening?
It was introduced in 2.7. See the docs.
Use subprocess.Popen if you want the output:
>>> import subprocess
>>> output = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE).communicate()[0]
If it's used heavily in the code you want to run, but that code doesn't have to be maintained long-term (or you need a quick fix irrespective of potential maintenance headaches in the future), then you could duck punch (aka monkey patch) it in wherever subprocess is imported...
Just lift the code from 2.7 and insert it thusly...
import subprocess

if "check_output" not in dir(subprocess):  # duck punch it in!
    def f(*popenargs, **kwargs):
        if 'stdout' in kwargs:
            raise ValueError('stdout argument not allowed, it will be overridden.')
        process = subprocess.Popen(stdout=subprocess.PIPE, *popenargs, **kwargs)
        output, unused_err = process.communicate()
        retcode = process.poll()
        if retcode:
            cmd = kwargs.get("args")
            if cmd is None:
                cmd = popenargs[0]
            raise subprocess.CalledProcessError(retcode, cmd)
        return output
    subprocess.check_output = f
Minor fidgeting may be required.
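Once that patch has run, code written for 2.7 can call subprocess.check_output unchanged on 2.6, for example:
# Works on 2.6 once the duck punch above has been applied
output = subprocess.check_output(["ls", "-l", "/dev/null"])
print output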
Do bear in mind, though, that the onus is on you to maintain dirty little backports like this. If bugs are discovered and corrected in the latest Python then you a) have to notice that and b) update your version if you want to stay secure. Also, overriding and defining internal functions yourself is the next guy's worst nightmare, especially when the next guy is YOU several years down the line and you've forgotten all about the grody hacks you did last time! In summary: it's very rarely a good idea.
Thanks for the monkey patch suggestion (my own attempts failed, because we were also consuming the CalledProcessError output, so that needed to be monkey patched too).
I found a working 2.6 patch here:
http://pydoc.net/Python/pep8radius/0.9.0/pep8radius.shell/
"""Note: We also monkey-patch subprocess for python 2.6 to
give feature parity with later versions.
"""
try:
    from subprocess import STDOUT, check_output, CalledProcessError
except ImportError:  # pragma: no cover
    # python 2.6 doesn't include check_output
    # monkey patch it in!
    import subprocess
    STDOUT = subprocess.STDOUT

    def check_output(*popenargs, **kwargs):
        if 'stdout' in kwargs:  # pragma: no cover
            raise ValueError('stdout argument not allowed, '
                             'it will be overridden.')
        process = subprocess.Popen(stdout=subprocess.PIPE,
                                   *popenargs, **kwargs)
        output, _ = process.communicate()
        retcode = process.poll()
        if retcode:
            cmd = kwargs.get("args")
            if cmd is None:
                cmd = popenargs[0]
            raise subprocess.CalledProcessError(retcode, cmd,
                                                output=output)
        return output

    subprocess.check_output = check_output

    # overwrite CalledProcessError due to `output`
    # keyword not being available (in 2.6)
    class CalledProcessError(Exception):

        def __init__(self, returncode, cmd, output=None):
            self.returncode = returncode
            self.cmd = cmd
            self.output = output

        def __str__(self):
            return "Command '%s' returned non-zero exit status %d" % (
                self.cmd, self.returncode)

    subprocess.CalledProcessError = CalledProcessError
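With this version of the patch, callers can also read the captured output off the exception, which is why CalledProcessError is overridden as well; a small usage sketch:
try:
    out = check_output(['git', 'describe', '--always'])
except CalledProcessError as exc:  # the patched class on 2.6, the stdlib one on 2.7+
    print "command failed with status %d" % exc.returncode
    print "captured output: %r" % exc.output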