I'm a little shaky on the syntax of subprocess.call arguments when those arguments include strings built from variables.
I have 4 variables that are used in the single complete command string that ran successfully using os.system:
userPWD
userName
hostName
path
os.system("sshpass -p %s scp %s#%s:/var/tmp/*Metrics.csv %s" % (userPWD, userName, hostName, path))
Now I'm converting that to subprocess.call, which will give me the output status I need, but the format of some aspects of this command string is losing me, as the subprocess.call documentation usually just shows very simple commands like this:
subprocess.call(['ls', '-l'])
My first effort to convert it looks like this:
subprocess.call(["sshpass", "-p", userPWD, "scp", "userName#hostName:/var/tmp/*Metrics.csv", path"])
but this produces the following error messages in Python 2.7.3:
Traceback (most recent call last):
File "pyprobeConnect.py", line 73, in <module>
get_csvPassFail = subprocess.Popen("sshpass -p %s scp %s#%s:/var/tmp/*Metrics.csv %s" % (userPWD, userName, hostName, path)).read()
File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1249, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
When you include *, you need to pass shell=True, and if you're passing shell=True you need to specify the command as a string, not a list:
subprocess.call("sshpass -p %s scp %s#%s:/var/tmp/*Metrics.csv %s" % (userPWD, userName, hostName, path),shell=True)
In general, I don't think using shell=True in the subprocess family of functions is a good idea. This is a very well-known attack vector (or at least an inconvenience): a malicious (or clueless) user may inject arbitrary shell commands. In your case the password field seems to be under the user's control, so there could be a risk. The reason one refrains from using os.system is to prevent exactly this kind of error.
Here, I presume you're using scp to pull files from the remote to the local host. In that case, shell * globbing on the local side doesn't matter, because the pattern is expanded by the remote shell.
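For completeness, here is a minimal sketch of the list form, reusing the question's variables; no local shell is involved, and the remote shell still expands the glob:

import subprocess

# A sketch reusing the question's variables; the *Metrics.csv glob stays
# literal locally and is expanded by the shell on the remote host.
status = subprocess.call([
    "sshpass", "-p", userPWD,
    "scp",
    "%s@%s:/var/tmp/*Metrics.csv" % (userName, hostName),
    path,
])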
Your stack trace says the problem is that the executable, in your case sshpass, cannot be located. This is likely because the directory containing the executable isn't in your PATH environment variable.
To correct this, you can simply modify PATH temporarily as you call the command, in the following fake Python (you need to fill in your own details):
import os
import subprocess

cur_path = os.environ["PATH"]
if dir_of_your_executable not in cur_path.split(os.pathsep):
    cmd_path = "%s:%s" % (dir_of_your_executable, cur_path)
else:
    cmd_path = cur_path

# dict.update() returns None, so copy first and then modify;
# os.environ.copy().update(...) would leave cmd_env set to None
cmd_env = os.environ.copy()
cmd_env["PATH"] = cmd_path

subprocess.call(["sshpass", "rest", "of", "your", "command"], env=cmd_env)
The code will first check if the directory of sshpass is in the PATH. If not, it is prefixed to the PATH and used for command execution.
Alternatively, just use the absolute path:
subprocess.call(["/path/to/sshpass", "rest", "of", "your", "command"])
Finally, a word of caution: Just say no to sshpass. It's insecure, hence evil. Use public-key based SSH authentication by starting ssh-agent before executing automated commands. Dispense with passwords, especially passwords passed by sshpass -p. They're evil. Just Say No.
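As a hedged illustration, once a public key is installed on the remote host and ssh-agent is holding it, the same transfer needs neither sshpass nor a password on the command line (variables as in the question):

import subprocess

# Hypothetical sketch: public-key auth via ssh-agent; no password anywhere.
status = subprocess.call([
    "scp",
    "%s@%s:/var/tmp/*Metrics.csv" % (userName, hostName),
    path,
])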
Below is example code:
from subprocess import check_output

list1 = ['df', 'df -h']
for x in list1:
    output = check_output([x])
I get the error below for the 'df -h' value in list1:
File "/usr/lib64/python2.7/subprocess.py", line 568, in check_output
process = Popen(stdout=PIPE, *popenargs, **kwargs)
File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
errread, errwrite)
File "/usr/lib64/python2.7/subprocess.py", line 1327, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
What is the best method to read Linux command output in Python 2.7?
You should provide check_output arguments as a list.
This works:
from subprocess import check_output

list1 = ['df', 'df -h']
for x in list1:
    output = check_output(x.split())
I recommend delegator, written by kennethreitz. With his package (https://github.com/kennethreitz/delegator.py) you can simply do the following, and both the API and the output are cleaner:
import delegator

cmds = ['df', 'df -h']
for cmd in cmds:
    p = delegator.run(cmd)
    print(p.out)
There are a few options in this situation, for ways of passing a cmd and args:

# a list broken into individual parts, can be passed with shell=False
['cmd', 'arg1', 'arg2', ...]

# a string with just a cmd, can be passed with shell=False
'cmd'

# a string with a cmd and args
# can only be passed to subprocess functions with shell=True
'cmd arg1 arg2 ...'
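A quick demonstration of the three forms with a harmless stand-in command:

import subprocess

subprocess.call(['df', '-h'])         # list of parts; shell=False (the default)
subprocess.call('df')                 # bare command name as a string; shell=False works
subprocess.call('df -h', shell=True)  # command plus args in one string needs shell=True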
Just to follow up on mariis' answer. The subprocess docs on python.org have more info on why you may want to pick one of these options:
args is required for all calls and should be a string, or a sequence
of program arguments. Providing a sequence of arguments is generally
preferred, as it allows the module to take care of any required
escaping and quoting of arguments (e.g. to permit spaces in file
names). If passing a single string, either shell must be True (see
below) or else the string must simply name the program to be executed
without specifying any arguments.
(emphasis added)
While adding shell=True would be OK for this, it's recommended to avoid it: changing 'df -h' to ['df', '-h'] isn't very difficult, and it's a good habit to only use the shell when you really need to. As the docs also add, against a red background no less:
Warning.
Executing shell commands that incorporate unsanitized input from an
untrusted source makes a program vulnerable to shell injection, a
serious security flaw which can result in arbitrary command execution.
For this reason, the use of shell=True is strongly discouraged in
cases where the command string is constructed from external input.
My end goal is to have a script that can be initially launched by a non-privileged user without using sudo, but will prompt for sudo password and self-elevate to root. I've been doing this with a bash wrapper script but would like something tidier that doesn't need an additional file.
Some googling found this question on StackOverflow, where the accepted answer suggests using os.execlpe to re-launch the script while retaining the same environment. I tried it, but it immediately failed to import a non-built-in module on the second run.
Investigating revealed that the PYTHONPATH variable is not carried over, while almost every other environment variable is (PERL5LIB is also missing, and a couple of others, but I'm not using them so they're not troubling me).
I have a brief little test script that demonstrates the issue:
#!/usr/bin/env python
import os
import sys

print(len(os.environ['PYTHONPATH']))
euid = os.geteuid()
if euid != 0:
    print("Script not started as root. Running with sudo.")
    args = ['sudo', sys.executable] + sys.argv + [os.environ]
    os.execlpe('sudo', *args)
print("Success")
Expected output would be:
6548
Script not started as root. Running with sudo.
[sudo] password for esker:
6548
Success
But instead I'm getting a KeyError:
6548
Script not started as root. Running with sudo.
[sudo] password for esker:
Traceback (most recent call last):
File "/usr/home/esker/execlpe_test.py", line 5, in <module>
print(len(os.environ['PYTHONPATH']))
File "/vol/apps/python/2.7.6/lib/python2.7/UserDict.py", line 23, in __getitem__
raise KeyError(key)
KeyError: 'PYTHONPATH'
What would be the cause of this missing variable, and how can I avoid it disappearing? Alternatively, is there a better way about doing this that won't result in running into the problem?
I found this very weird too, and couldn't find any direct way to pass the environment into the replaced process. But I didn't do a full system debugging either.
What I found to work as a workaround is this:
pypath = os.environ.get('PYTHONPATH', "")
args = ['sudo', f"PYTHONPATH={pypath}", sys.executable] + sys.argv
os.execvpe('sudo', args, os.environ)
I.e. explicitly pass PYTHONPATH= to the new process. Note that I prefer to use os.execvpe(), but it works the same with the other exec*() variants, given the correct call. See this answer for a good overview of the naming scheme.
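As a rough orientation (a sketch, not the full matrix): the l/v letter selects how arguments are passed, a trailing p adds PATH lookup, and a trailing e takes an explicit environment dict:

import os
import sys

# l = args listed individually; v = args in one list
# ...p... = search PATH for the program; ...e = explicit environment dict
# os.execl('/bin/echo', 'echo', 'hi')    # absolute path, args listed individually
# os.execvp('echo', ['echo', 'hi'])      # PATH lookup, args as a list
os.execvpe('sudo', ['sudo', sys.executable] + sys.argv, os.environ)  # as in this thread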
However, PATH and the rest of the environment are still the new process's own environment, as an initial print(os.environ) shows. But PYTHONPATH will be passed along this way.
You're passing the environment as arguments to your script instead of arguments to execlpe. Try this instead:
args = ['sudo', sys.executable] + sys.argv
os.execvpe('sudo', args, os.environ)
If you just want to inherit the environment, you can even use
os.execvp('sudo', args)
I am trying to capture the output when I execute a custom command using Popen:
import subprocess

def exec_command():
    command = "ls -la"  # will be replaced by my custom command
    result = subprocess.Popen(command, stdout=subprocess.PIPE).communicate()[0]
    print(result)

exec_command()
I get an OSError with following stacktrace:
File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
errread, errwrite)
File "/usr/lib64/python2.7/subprocess.py", line 1327, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
Please let me know what I would need to use.
Note: The stacktrace shows the code was executed in Python 2.7, but I got the same error running with Python 2.6
When running without shell=True (which you are doing, correctly; shell=True is dangerous), you should pass your command as a sequence of the command and the arguments, not a single string. Fixed code is:
def exec_command():
    command = ["ls", "-la"]  # list of command and arguments
    ... rest of code unchanged ...
If you had user input involved for some of the arguments, you'd just insert it into the list:
def exec_command(somedirfromuser):
    command = ["ls", "-la", somedirfromuser]
Note: If your commands are sufficiently simple, I'd recommend avoiding subprocess entirely. os.listdir and os.stat (or on Python 3.5+, os.scandir alone) can get you the same info as ls -la in a more programmatically usable form without the need to parse it, and likely faster than launching an external process and communicating with it via a pipe.
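For instance, here is a minimal sketch of collecting ls -la-style details natively (assuming Python 3.5+ for os.scandir; the fields shown are illustrative, not exhaustive):

import os
import stat

def list_dir_details(path='.'):
    # Roughly the information ls -la prints, already parsed into Python objects
    for entry in os.scandir(path):
        info = entry.stat(follow_symlinks=False)
        print(stat.filemode(info.st_mode), info.st_size, entry.name)

list_dir_details()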
I need to run the command date | grep -o -w '"+tz+"'' | wc -w using Python on my localhost. I am using the subprocess module and its check_output method, since I need to capture the output. However, it throws an error:
Traceback (most recent call last):
File "test.py", line 47, in <module>
check_timezone()
File "test.py", line 40, in check_timezone
count = subprocess.check_output(command)
File "/usr/lib/python2.7/subprocess.py", line 537, in check_output
process = Popen(stdout=PIPE, *popenargs, **kwargs)
File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1249, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
You have to add shell=True to execute a shell command. check_output is trying to find an executable called date | grep -o -w '"+tz+"'' | wc -w and it cannot find it. (No idea why you removed the essential information from the error message.)
See the difference between:
>>> subprocess.check_output('date | grep 1')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.4/subprocess.py", line 603, in check_output
with Popen(*popenargs, stdout=PIPE, **kwargs) as process:
File "/usr/lib/python3.4/subprocess.py", line 848, in __init__
restore_signals, start_new_session)
File "/usr/lib/python3.4/subprocess.py", line 1446, in _execute_child
raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2] No such file or directory: 'date | grep 1'
And:
>>> subprocess.check_output('date | grep 1', shell=True)
b'gio 19 giu 2014, 14.15.35, CEST\n'
Read the documentation about the Frequently Used Arguments for more information about the shell argument and how it changes the interpretation of the other arguments.
Note that you should try to avoid using shell=True, since spawning a shell can be a security hazard (even if you do not execute untrusted input, attacks like Shellshock can still be performed!).
The documentation for the subprocess module has a little section about replacing the shell pipeline.
You can do so by spawning the two processes in Python and using subprocess.PIPE:
date_proc = subprocess.Popen(['date'], stdout=subprocess.PIPE)
grep_proc = subprocess.Popen(['grep', '1'], stdin=date_proc.stdout, stdout=subprocess.PIPE)
date_proc.stdout.close()  # let date receive SIGPIPE if grep exits first
output = grep_proc.communicate()[0]
You can write some simple wrapper function to easily define pipelines:
import subprocess
from shlex import split
from collections import namedtuple
from functools import reduce

proc_output = namedtuple('proc_output', 'stdout stderr')

def pipeline(starter_command, *commands):
    if not commands:
        try:
            starter_command, *commands = starter_command.split('|')
        except AttributeError:
            pass
    starter_command = _parse(starter_command)
    starter = subprocess.Popen(starter_command, stdout=subprocess.PIPE)
    last_proc = reduce(_create_pipe, map(_parse, commands), starter)
    return proc_output(*last_proc.communicate())

def _create_pipe(previous, command):
    proc = subprocess.Popen(command, stdin=previous.stdout, stdout=subprocess.PIPE)
    previous.stdout.close()
    return proc

def _parse(cmd):
    try:
        return split(cmd)
    except Exception:
        return cmd
With this in place you can write pipeline('date | grep 1') or pipeline('date', 'grep 1') or pipeline(['date'], ['grep', '1'])
The most common cause of FileNotFound with subprocess, in my experience, is the use of spaces in your command. If you have just a single command (not a pipeline, and no redirection, wildcards, etc), use a list instead.
# Wrong, even with a valid command string
subprocess.run(['grep -o -w "+tz+"'])
# Fixed; notice also how each argument is its own list element
subprocess.run(["grep", "-o", "-w", '"+tz+"'])
This change results in no more FileNotFound errors, and is a nice solution if you got here searching for that exception with a simpler command.
If you need a pipeline or other shell features, the simple fix is to add shell=True:
subprocess.run(
'''date | grep -o -w '"+tz+"'' | wc -w''',
shell=True)
However, if you are using python 3.5 or greater, try using this approach:
import subprocess
a = subprocess.run(["date"], stdout=subprocess.PIPE)
print(a.stdout.decode('utf-8'))
b = subprocess.run(["grep", "-o", "-w", '"+tz+"'],
input=a.stdout, stdout=subprocess.PIPE)
print(b.stdout.decode('utf-8'))
c = subprocess.run(["wc", "-w"],
input=b.stdout, stdout=subprocess.PIPE)
print(c.stdout.decode('utf-8'))
You should see how one command's output becomes another's input, just like using a shell pipe, but you can easily debug each step of the process in Python. Using subprocess.run is recommended for Python 3.5+, but it is not available in prior versions.
The FileNotFoundError happens because - in the absence of shell=True - Python tries to find an executable whose file name is the entire string you are passing in. You need to add shell=True to get the shell to parse and execute the string, or figure out how to rearticulate this command line to avoid requiring a shell.
As an aside, the shell programming here is decidedly weird. On any normal system, date will absolutely never output "+tz+" and so the rest of the processing is moot.
Further, using wc -w to count the number of output words from grep is unusual. The much more common use case (if you can't simply use grep -c to count the number of matching lines) would be to use wc -l to count lines of output from grep.
Anyway, if you can, you want to avoid shell=True; if the intent here is to test the date command, you should probably replace the rest of the shell script with native Python code.
Pros:
The person trying to understand the program only needs to understand Python, not shell script.
The script will have fewer external dependencies (here, date) rather than require a Unix-like platform.
Cons:
Reimplementing standard Unix tools in Python is tiresome and sometimes rather verbose.
With that out of the way, if the intent is simply to count how many times "+tz+" occurs in the output from date, try
p = subprocess.run(['date'],
capture_output=True, text=True,
check=True)
result = len(p.stdout.split('"+tz+"'))-1
The keyword argument text=True requires Python 3.7; for compatibility back to earlier Python versions, try the (misnomer) legacy synonym universal_newlines=True. For really old Python versions, maybe fall back to subprocess.check_output().
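A hedged compatibility sketch of the same call for those older versions (Python 3.5/3.6 have no capture_output or text yet, so spell them out):

p = subprocess.run(['date'],
                   stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                   universal_newlines=True, check=True)
result = len(p.stdout.split('"+tz+"')) - 1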
If you really need the semantics of the -w option of grep, you need to check if the characters adjacent to the match are not alphabetic, and exclude those which are. I'm leaving that as an exercise, and in fact would assume that the original shell script implementation here was not actually correct. (Maybe try re.split(r'(?:^|(?<=\W))"\+tz\+"(?=\W|$)', p.stdout); note that Python's look-behind must be fixed-width, so the ^ alternative has to go outside it.)
In more trivial cases (single command, no pipes, wildcards, redirection, shell builtins, etc) you can use Python's shlex.split() to parse a command into a correctly quoted list of arguments. For example,
>>> import shlex
>>> shlex.split(r'''one "two three" four\ five 'six seven' eight"'"nine'"'ten''')
['one', 'two three', 'four five', 'six seven', 'eight\'nine"ten']
Notice how the regular string split() is completely unsuitable here; it simply splits on every whitespace character, and doesn't support any sort of quoting or escaping. (But notice also how it boneheadedly just returns a list of tokens from the original input:
>>> shlex.split('''date | grep -o -w '"+tz+"' | wc -w''')
['date', '|', 'grep', '-o', '-w', '"+tz+"', '|', 'wc', '-w']
(Even more parenthetically, this isn't exactly the original input, which had a superfluous extra single quote after '"+tz+"').
This is in fact passing | and grep etc as arguments to date, not implementing a shell pipeline! You still have to understand what you are doing.)
The question already has an answer above, but just in case those solutions are not working for you: check the path itself, and check that all the environment variables the process needs to locate the executable are set.
What worked for me on Python 3.8.10 (inspired by @mightypile's solution here: https://stackoverflow.com/a/49986004/12361522) was to remove the splitting of parameters; I also had to enable the shell:
this:
c = subprocess.run(["wc -w"], input=b.stdout, stdout=subprocess.PIPE, shell=True)
instead of:
c = subprocess.run(["wc", "-w"], input=b.stdout, stdout=subprocess.PIPE)
If anyone wants to try my solution (at least on v3.8.10), here it is:
I have a directory with files of at least two types (.jpg and others). I needed to count only the specific file type (.jpg), not all files in the directory, via one pipe:
ls *.jpg | wc -l
So eventually I got it working like this:
import subprocess
proc1 = subprocess.run(["ls *.jpg"], stdout=subprocess.PIPE, shell=True)
proc2 = subprocess.run(['wc -l'], input=proc1.stdout, stdout=subprocess.PIPE, shell=True)
print(proc2.stdout.decode())
It would not work with the arguments split:
["ls", "*.jpg"]: that would make ls ignore the *.jpg constraint
['wc', '-l']: that would return a correct count, but with all three outputs, not just the one I was after
and none of it would work without the shell enabled (shell=True).
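That said, a shell-free sketch of the same count is possible with the glob module, if you can drop the ls | wc pipeline entirely:

import glob

# Count the *.jpg entries directly; no subprocess or shell required.
print(len(glob.glob('*.jpg')))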
I had this error too, and what worked for me was setting the line endings of the .sh file that I was calling with subprocess to Unix (LF) instead of Windows (CRLF).
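A sketch of normalizing line endings from Python ('script.sh' is a placeholder name; most editors can do the same conversion):

# Rewrite the file with CRLF line endings converted to LF.
with open('script.sh', 'rb') as f:
    data = f.read().replace(b'\r\n', b'\n')
with open('script.sh', 'wb') as f:
    f.write(data)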
We have a vendor-supplied Python tool (that's byte-compiled; we don't have the source). Because of this, we're also locked into using the vendor-supplied Python 2.4. The way to run the util is:
source login.sh
oupload [options]
The login.sh just sets a few env variables, and then defines 2 aliases:
odownload () {
    ${PYTHON_CMD} ${OCLIPATH}/ocli/commands/word_download_command.pyc "$@"
}

oupload () {
    ${PYTHON_CMD} ${OCLIPATH}/ocli/commands/word_upload_command.pyc "$@"
}
Now, when I run it their way, it works fine. It will prompt for a username and password, then do its thing.
I'm trying to create a wrapper around the tool to do some extra steps after it's run and provide some sane defaults for the utility. The problem I'm running into is I cannot, for the life of me, figure out how to use subprocess to successfully do this. It seems to realize that the original command isn't running directly from the terminal and bails.
I created a '/usr/local/bin/oupload' and copied from the original login.sh. Only difference is instead of doing an alias at the end, I actually run the command.
Then, in my python script, I try to run my new shell script:
if os.path.exists(options.zipfile):
    try:
        cmd = string.join(cmdargs, ' ')
        p1 = Popen(cmd, shell=True, stdin=PIPE)
But I get:
Enter Opsware Username: Traceback (most recent call last):
File "./command.py", line 31, in main
File "./controller.py", line 51, in handle
File "./controllers/word_upload_controller.py", line 81, in _handle
File "./controller.py", line 66, in _determineNew
File "./lib/util.py", line 83, in determineNew
File "./lib/util.py", line 112, in getAuth
Empty Username not legal
Unknown Error Encountered
SUMMARY:
Name: Empty Username not legal
Description: None
So it seemed like an extra carriage return was getting sent (I tried rstripping all the options; it didn't help).
If I don't set stdin=PIPE, I get:
Enter Opsware Username: Traceback (most recent call last):
File "./command.py", line 31, in main
File "./controller.py", line 51, in handle
File "./controllers/word_upload_controller.py", line 81, in _handle
File "./controller.py", line 66, in _determineNew
File "./lib/util.py", line 83, in determineNew
File "./lib/util.py", line 109, in getAuth
IOError: [Errno 5] Input/output error
Unknown Error Encountered
I've tried other variations of using p1.communicate, p1.stdin.write() along with shell=False and shell=True, but I've had no luck in trying to figure out how to properly send along the username and password. As a last result, I tried looking at the byte code for the utility they provided - it didn't help - once I called the util's main routine with the proper arguments, it ended up core dumping w/ thread errors.
Final thoughts - the utility doesn't want to seem to 'wait' for any input. When run from the shell, it pauses at the 'Username' prompt. When run through python's popen, it just blazes thru and ends, assuming no password was given. I tried to lookup ways of maybe preloading the stdin buffer - thinking maybe the process would read from that if it was available, but couldn't figure out if that was possible.
I'm trying to stay away from using pexpect, mainly because we have to use the vendor's provided python 2.4 because of the precompiled libraries they provide and I'm trying to keep distribution of the script to as minimal a footprint as possible - if I have to, I have to, but I'd rather not use it ( and I honestly have no idea if it works in this situation either ).
Any thoughts on what else I could try would be most appreciated.
UPDATE
So I solved this by diving further into the bytecode and figuring out what I was missing from the compiled command.
However, this presented two problems:
1. The vendor code, when called, was doing an exit when it completed.
2. The vendor code was writing to stdout, which I needed to store and operate on (it contains the ID of the uploaded pkg). I couldn't just redirect stdout, because the vendor code was still asking for the username/password.
Problem 1 was solved easily enough by wrapping their code in a try/except clause.
Problem 2 was solved by doing something similar to: https://stackoverflow.com/a/616672/677373
Instead of a log file, I used cStringIO. I also had to implement a fake 'flush' method, since it seems the vendor code was calling it and complaining that the new object I had provided for stdout didn't supply it. The code ends up looking like:
import os
import sys
from cStringIO import StringIO

class Logger(object):
    def __init__(self):
        self.terminal = sys.stdout
        self.log = StringIO()

    def write(self, message):
        self.terminal.write(message)
        self.log.write(message)

    def flush(self):
        self.terminal.flush()
        self.log.flush()

if os.path.exists(options.zipfile):
    try:
        os.environ['OCLI_CODESET'] = 'ISO-8859-1'
        backup = sys.stdout
        sys.stdout = output = Logger()
        # UploadCommand was the command found in the bytecode
        upload = UploadCommand()
        try:
            upload.main(cmdargs)
        except Exception, rc:
            pass
        sys.stdout = backup
        # now do some fancy stuff with output from output.log
I should note that the only reason I simply do a 'pass' in the except: clause is that the except clause is always called. The 'rc' is actually the return code from the command, so I will probably add handling for non-zero cases.
I tried to lookup ways of maybe preloading the stdin buffer
Do you perhaps want to create a named fifo, fill it with username/password info, then reopen it in read mode and pass it to popen (as in popen(..., stdin=myfilledbuffer))?
You could also just create an ordinary temporary file, write the data to it, and reopen it in read mode, again, passing the reopened handle as stdin. (This is something I'd personally avoid doing, since writing username/passwords to temporary files is often of the bad. OTOH it's easier to test than FIFOs)
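For illustration, a sketch of the temporary-file variant; username, password and cmdargs are assumed from the question's context, as is the oupload wrapper script:

import subprocess
import tempfile

# Hedged sketch only: persisting credentials, even in a short-lived temporary
# file, is usually a bad idea, as noted above.
f = tempfile.TemporaryFile(mode='w+')
f.write('%s\n%s\n' % (username, password))
f.seek(0)  # rewind so the child process reads from the start
p = subprocess.Popen(['oupload'] + list(cmdargs), stdin=f)
p.wait()
f.close()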
As for the underlying cause: I suspect that the offending software is reading from stdin via a non-blocking method. Not sure why that works when connected to a terminal.
AAAANYWAY: no need to use pipes directly via Popen at all, right? I kinda laugh at the hackishness of this, but I'll bet it'll work for you:
# you don't actually seem to need popen here IMO -- call() does better for this application.
statuscode = call('echo "%s\n%s\n" | oupload %s' % (username, password, options) , shell=True)
tested with status = call('echo "foo\nbar\nbar\nbaz" |wc -l', shell = True) (output is '4', naturally.)
The original question was solved by simply avoiding the issue: instead of going through the terminal, I imported the Python code that the shell script was calling and used that directly.
I believe J.F. Sebastian's answer would probably work better for what was originally asked, however, so I'd suggest people looking for an answer to a similar question look down the path of using the pty module.