How to pass multiple arguments within subprocess.check_call()? - python

How can I pass multiple arguments within my subprocess call in python while running my shell script?
import subprocess
subprocess.check_call('./test_bash.sh '+arg1 +arg2, shell=True)
This prints arg1 and arg2 concatenated as one argument. I need to pass three arguments to my shell script.

Of course it concatenates: you aren't inserting a space between the arguments. A quick fix is to use str.format (though this can still fail if an argument contains spaces):
subprocess.check_call('./test_bash.sh {} {}'.format(arg1, arg2), shell=True)
More robust is to drop shell=True and pass an argument list, which needs no quoting and handles spaces in arguments automatically:
check_call(['./test_bash.sh', arg1, arg2])
(Combining an argument list with shell=True does not do what you want on POSIX systems: everything after the first list element becomes a positional parameter of the shell, not an argument to your script.)
Or call the shell explicitly. Note that the script's shebang line is then ignored, and that arguments after the command string become the shell's positional parameters, so the command string must reference them. Giving up shell=True is still worthwhile, since it is liable to code injection among other issues:
check_call(['sh', '-c', './test_bash.sh "$1" "$2"', 'sh', arg1, arg2])
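To see the list form deliver each argument separately, any program that prints its argv will do; here a short Python one-liner stands in for the shell script (the script and argument names are hypothetical):

```python
import subprocess
import sys

# Stand-in for ./test_bash.sh: a Python one-liner that prints argv,
# so we can observe that each list element arrives as its own argument.
args = ["one", "two words", "three"]
out = subprocess.check_output(
    [sys.executable, "-c", "import sys; print(sys.argv[1:])"] + args
)
print(out.decode().strip())  # ['one', 'two words', 'three']
```

Note that "two words" survives as a single argument without any quoting on our part.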

Related

how to avoid shell=True in subprocess

I have subprocess command to check md5 checksum as
subprocess.check_output('md5 Downloads/test.txt', stderr=subprocess.STDOUT, shell=True)
It works fine.
But I have read that shell=True should be avoided. When I run
subprocess.check_output('md5 Downloads/test.txt', stderr=subprocess.STDOUT, shell=False)
I get the error OSError: [Errno 2] No such file or directory.
Can I run the above command with shell=False, or is it OK to keep shell=True?
Just pass the arguments to check_output() as a list:
subprocess.check_output(["md5", "Downloads/test.txt"], stderr=subprocess.STDOUT)
From the docs:
args is required for all calls and should be a string, or a sequence
of program arguments. Providing a sequence of arguments is generally
preferred, as it allows the module to take care of any required
escaping and quoting of arguments (e.g. to permit spaces in file
names). If passing a single string, either shell must be True (see
below) or else the string must simply name the program to be executed
without specifying any arguments.
For complex commands, you can use shlex to split the command string into a list for check_output() or any other subprocess function. From the documentation:
shlex.split() can be useful when determining the correct tokenization for args, especially in complex cases:
https://docs.python.org/3.6/library/subprocess.html#subprocess.check_output
Applied to the example above:
import shlex
inp="md5 Downloads/test.txt"
command=shlex.split(inp)
subprocess.check_output(command, stderr=subprocess.STDOUT)
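The benefit of shlex.split over a plain str.split shows up once quoting is involved; for instance, a filename containing a space (hypothetical here) survives as one token:

```python
import shlex

# shlex.split honors shell-style quoting, so a quoted filename with a
# space stays a single argument instead of splitting on whitespace.
cmd = shlex.split('md5 "My Downloads/test.txt"')
print(cmd)  # ['md5', 'My Downloads/test.txt']
```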

Subprocess call failed to parse argument (kill function) Python [duplicate]

import os
import subprocess
proc = subprocess.Popen(['ls','*.bc'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out,err = proc.communicate()
print out
This script should print all the files with the .bc suffix, but it returns an empty list. If I run ls *.bc manually on the command line, it works. Doing ['ls','test.bc'] inside the script works as well, but for some reason the star symbol doesn't work. Any ideas?
You need to supply shell=True to execute the command through a shell interpreter.
If you do that, however, you can no longer supply a list as the first argument, because the extra list items would be passed to the shell itself rather than to your command. Instead, specify the raw command line as you want it to be passed to the shell:
proc = subprocess.Popen('ls *.bc', shell=True,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
Expanding the * glob is part of the shell, but by default subprocess does not send your commands via a shell, so the command (first argument, ls) is executed, then a literal * is used as an argument.
This is a good thing; see the warning block in the "Frequently Used Arguments" section of the subprocess docs. It mainly discusses security implications, but it can also help avoid silly programming errors, since there are no magic shell characters to worry about.
My main complaint with shell=True is it usually implies there is a better way to go about the problem - with your example, you should use the glob module:
import glob
files = glob.glob("*.bc")
print files # ['file1.bc', 'file2.bc']
This will be quicker (no process startup overhead), more reliable, and cross-platform (it does not depend on the platform having an ls command).
Besides passing shell=True, also make sure your path is not quoted; otherwise it will not be expanded by the shell.
If your path may contain special characters, you will have to escape them manually.
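Sticking with the glob module sidesteps the escaping problem for the most part; glob.escape (Python 3.4+) handles the case where the fixed part of the path itself contains magic characters. A small sketch with a hypothetical directory name:

```python
import glob
import os
import tempfile

# glob.escape neutralizes magic characters in the literal part of a
# pattern, so a directory actually named "a[1]" still matches.
d = tempfile.mkdtemp()
odd = os.path.join(d, "a[1]")
os.mkdir(odd)
open(os.path.join(odd, "x.bc"), "w").close()

files = glob.glob(os.path.join(glob.escape(odd), "*.bc"))
print(files)
```

Without glob.escape, the `[1]` in the directory name would be treated as a character class and the pattern would match nothing.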

How does subprocess.call() work with shell=False?

I am using Python's subprocess module to call some Linux command line functions. The documentation explains the shell=True argument as
If shell is True, the specified command will be executed through the shell
There are two examples, which seem the same to me from a descriptive viewpoint (i.e. both of them call some command-line command), but one of them uses shell=True and the other does not
>>> subprocess.call(["ls", "-l"])
0
>>> subprocess.call("exit 1", shell=True)
1
My question is:
What does running the command with shell=False do, in contrast to shell=True?
I was under the impression that subprocess.call and check_call and check_output all must execute the argument through the shell. In other words, how can it possibly not execute the argument through the shell?
It would also be helpful to get some examples of:
Things that can be done with shell=True that can't be done with shell=False, and why they can't be done.
Vice versa (although it seems that there are no such examples).
Things for which it does not matter whether shell=True or False, and why it doesn't matter.
UNIX programs start each other with the following three calls, or derivatives/equivalents thereto:
fork() - Create a new copy of yourself.
exec() - Replace yourself with a different program (do this if you're the copy!).
wait() - Wait for another process to finish (optional, if not running in background).
Thus, with shell=False, you do just that (as Python-syntax pseudocode below -- exclude the wait() if not a blocking invocation such as subprocess.call()):
pid = fork()
if pid == 0:  # we're the child process, not the parent
    execlp("ls", "ls", "-l", NULL)
else:
    retval = wait(pid)  # we're the parent; wait for the child to exit & get its exit status
whereas with shell=True, you do this:
pid = fork()
if pid == 0:
    execlp("sh", "sh", "-c", "ls -l", NULL)
else:
    retval = wait(pid)
Note that with shell=False, the command we executed was ls, whereas with shell=True, the command we executed was sh.
That is to say:
subprocess.Popen(foo, shell=True)
is exactly the same as:
subprocess.Popen(
    ["sh", "-c"] + ([foo] if isinstance(foo, basestring) else foo),
    shell=False)
That is to say, you execute a copy of /bin/sh, and direct that copy of /bin/sh to parse the string into an argument list and execute ls -l itself.
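This equivalence can be checked directly (assuming a POSIX system with sh on the PATH):

```python
import subprocess

# A shell=True string and an explicit ['sh', '-c', ...] wrapper
# produce the same output for the same command line.
a = subprocess.check_output("echo hello", shell=True)
b = subprocess.check_output(["sh", "-c", "echo hello"])
print(a == b)  # True
```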
So, why would you use shell=True?
You're invoking a shell builtin.
For instance, the exit command is actually part of the shell itself, rather than an external command. That said, this is a fairly small set of commands, and it's rare for them to be useful in the context of a shell instance that only exists for the duration of a single subprocess.call() invocation.
You have some code with shell constructs (ie. redirections) that would be difficult to emulate without it.
If, for instance, your command is cat one two >three, the syntax >three is a redirection: It's not an argument to cat, but an instruction to the shell to set stdout=open('three', 'w') when running the command ['cat', 'one', 'two']. If you don't want to deal with redirections and pipelines yourself, you need a shell to do it.
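That redirection can be sketched without a shell as follows (using temporary files in place of the literal names one, two, three; assumes a POSIX system with cat available):

```python
import os
import subprocess
import tempfile

# The shell command `cat one two >three`, done without a shell:
# the redirection becomes a stdout= file object on the Python side.
d = tempfile.mkdtemp()
for name, text in [("one", "first\n"), ("two", "second\n")]:
    with open(os.path.join(d, name), "w") as f:
        f.write(text)

with open(os.path.join(d, "three"), "w") as dest:
    subprocess.check_call(
        ["cat", os.path.join(d, "one"), os.path.join(d, "two")],
        stdout=dest)

result = open(os.path.join(d, "three")).read()
print(result)  # first\nsecond\n
```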
A slightly trickier case is cat foo bar | baz. To do that without a shell, you need to start both sides of the pipeline yourself: p1 = Popen(['cat', 'foo', 'bar'], stdout=PIPE), p2=Popen(['baz'], stdin=p1.stdout).
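A runnable sketch of that two-Popen pipeline, with echo and tr standing in for cat and baz (assumes a POSIX system with both utilities on the PATH):

```python
from subprocess import Popen, PIPE

# `echo hello | tr a-z A-Z` wired together without a shell:
# p1's stdout feeds p2's stdin through an OS-level pipe.
p1 = Popen(["echo", "hello"], stdout=PIPE)
p2 = Popen(["tr", "a-z", "A-Z"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()  # allow p1 to receive SIGPIPE if p2 exits early
out, _ = p2.communicate()
print(out.decode().strip())  # HELLO
```

Closing p1.stdout in the parent is the commonly recommended extra step, so only p2 holds the read end of the pipe.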
You don't give a damn about security bugs.
...okay, that's a little bit too strong, but not by much. Using shell=True is dangerous. You can't do this: Popen('cat -- %s' % (filename,), shell=True) without a shell injection vulnerability: If your code were ever invoked with a filename containing $(rm -rf ~), you'd have a very bad day. On the other hand, ['cat', '--', filename] is safe with all possible filenames: The filename is purely data, not parsed as source code by a shell or anything else.
It is possible to write safe scripts in shell, but you need to be careful about it. Consider the following:
filenames = ['file1', 'file2']  # these can be user-provided
subprocess.Popen(['cat -- "$@" | baz', '_'] + filenames, shell=True)
That code is safe (well -- as safe as letting a user read any file they want ever is), because it's passing your filenames out-of-band from your script code -- but it's safe only because the string being passed to the shell is fixed and hardcoded, and the parameterized content is external variables (the filenames list). And even then, it's "safe" only to a point -- a bug like Shellshock that triggers on shell initialization would impact it as much as anything else.
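The out-of-band pattern can be demonstrated end to end; here printf stands in for the `cat ... | baz` pipeline (assumes a POSIX sh), and a deliberately hostile "filename" is printed back literally rather than executed:

```python
import subprocess

# The script string is fixed and hardcoded; user data rides in argv
# as the shell's positional parameters, never parsed as code.
user_data = ["$(rm -rf ~)", "two words"]
out = subprocess.check_output(
    ["sh", "-c", 'printf "%s\\n" "$@"', "_"] + user_data
)
print(out.decode())
```

The `$(rm -rf ~)` string comes back verbatim in the output: the shell never treated it as a command substitution.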
I was under the impression that subprocess.call and check_call and check_output all must execute the argument through the shell.
No, subprocess is perfectly capable of starting a program directly (via an operating-system call). It does not need a shell.
Things that can be done with shell=True that can't be done with shell=False
You can use shell=False for any command that simply runs some executable optionally with some specified arguments.
You must use shell=True if your command uses shell features: pipelines (|), redirections (> and <), or compound statements joined with ;, &&, ||, etc.
Thus, one can use shell=False for a command like grep string file. But a command like grep string file | xargs something will, because of the |, require shell=True.
Because the shell has powerful features that Python programmers do not always find intuitive, it is considered better practice to use shell=False unless you truly need a shell feature. For example, pipelines are not strictly necessary, because they can also be built with subprocess' PIPE support.

Python pipeline using GNU Parallel

I'm trying to write a wrapper around GNU Parallel in Python to run a command in parallel, but seem to be misunderstanding either how GNU Parallel works, system pipes and/or python subprocess pipes.
Essentially I am looking to use GNU Parallel to handle splitting up an input file and then running another command in parallel on multiple hosts.
I can investigate some pure python way to do this in the future, but it seems like it should be easily implemented using GNU Parallel.
t.py
#!/usr/bin/env python
import sys
print
print sys.stdin.read()
print
p.py
from subprocess import *
import os
from os.path import *
args = ['--block', '10', '--recstart', '">"', '--sshlogin', '3/:', '--pipe', './t.py']
infile = 'test.fa'
fh = open('test.fa','w')
fh.write('''>M02261:11:000000000-ADWJ7:1:1101:16207:1115 1:N:0:1
CAGCTACTCGGGGAATCCTTGTTGCTGAGCTCTTCCCTTTTCGCTCGCAGCTACTCGGGGAATCCTTGTTGCTGAGCTCTTCCCTTTTCGCTCGCAGCTACTCGGGGAATCCTTGTTGCTGAGCTCTTCCCTTTTCGCTCGCAGCTACTCGGGGAATCCTTGTTGCTGAGCTCTTCCCTTT
>M02261:11:000000000-ADWJ7:1:1101:21410:1136 1:N:0:1
ATAGTAGATAGGGACATAGGGAATCTCGTTAATCCATTCATGCGCGTCACTAATTAGATGACGAGGCATTTGGCTACCTTAAGAGAGTCATAGTTACTCCCGCCGTTTACC
>M02261:11:000000000-ADWJ7:1:1101:13828:1155 1:N:0:1
GGTTTAGAGTCTCTAGTCGATAGATCAATGTAGGTAAGGGAAGTCGGCAAATTAGATCCGTAACTTCGGGATAAGGATTGGCTCTGAAGGCTGGGATGACTCGGGCTCTGGTGCCTTCGCGGGTGCTTTGCCTCAACGCGCGCCGGCCGGCTCGGGTGGTTTGCGCCGCCTGTGGTCGCGTCGGCCGCTGCAGTCATCAATAAACAGCCAATTCAGAACTGGCACGGCTGAGGGAATCCGACGGTCTAATTAAAACAAAGCATTGTGATGGACTCCGCAGGTGTTGACACAATGTGATTTT
>M02261:11:000000000-ADWJ7:1:1101:14120:1159 1:N:0:1
GAGTAGCTGCGAGCGAAAAGGGAAGAGCTCAAGGGGAGGAAAAGAAACTAACAAGGATTCCCCGAGTAGCTGCGAGCGAAAAGGGAAGCGCCCAAGGGGGGCAACAGGAACTAACAAGAATTCGCCGACTAGCTGCGACCTGAAAAGGAAAAACCCAAGGGGAGGAAAAGAAACTAACAAGGATTCCCCGAGTAGCTGCGAGCAGAAAAGGAAAAGCACAAGAGGAGGAAACGACACTAATAAGACTTCCCATACAAGCGGCGAGCAAAACAGCACGAGCCCAACGGCGAGAAAAGCAAAA
>M02261:11:000000000-ADWJ7:1:1101:8638:1172 1:N:0:1
NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN
''')
fh.close()
# Call 1
Popen(['parallel']+args, stdin=open(infile,'rb',0), stdout=open('output','w')).wait()
# Call 2
_cat = Popen(['cat', infile], stdout=PIPE)
Popen(['parallel']+args, stdin=_cat.stdout, stdout=open('output2','w')).wait()
# Call 3
Popen('cat '+infile+' | parallel ' + ' '.join(args), shell=True, stdout=open('output3','w')).wait()
Call 1 and Call 2 produce the same output while Call 3 produces the output I would expect where the input file was split up and contains empty lines between records.
I'm more curious about what the differences are between Call 1,2 and Call 3.
TL;DR Don't quote ">" when shell=False.
If you use shell=True, you can use all the shell's facilities, like globbing, I/O redirection, etc. You will need to quote anything which needs to be escaped from the shell. You can pass the entire command line as a single string, and the shell will parse it.
unsafe = subprocess.Popen('echo `date` "my files" * >output', shell=True)
With shell=False, you have no "secret" side effects behind the scenes, and none of the shell's facilities are available to you, so you need to take care of globbing, redirection, etc. on the Python side. On the plus side, you save a (potentially significant) extra process, you have more control, and you don't need (and indeed mustn't) quote things which had to be quoted when the shell was involved. In summary, this is also safer, because you can see exactly what you are doing.
cmd = ['echo']
cmd.append(datestamp())
cmd.append('my files')  # notice absence of shell quotes around string
cmd.extend(glob('*'))
safer = subprocess.Popen(cmd, shell=False, stdout=open('output', 'w+'))
(This still differs slightly, because with modern shells, echo is a builtin, whereas here we will be executing an external utility, /bin/echo or whichever executable with that name comes first in your PATH.)
Now, returning to your examples: the problem in your args is that you are quoting a literal ">" as the record separator. When a shell is involved, an unquoted > would invoke redirection, so to pass it as a string it has to be escaped or quoted; but when no shell is in the picture, nothing handles (or requires) those quotes, so to pass a literal > argument, simply pass it literally.
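The point is easy to verify: with shell=False, ">" is just bytes in argv and nothing redirects. A Python one-liner stands in for an arbitrary command here:

```python
import subprocess
import sys

# Pass a literal ">" as an argument; with no shell involved,
# it reaches the child program's argv unchanged.
out = subprocess.check_output(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", ">"]
)
print(out.decode().strip())  # >
```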
With that out of the way, your call #1 definitely seems like the way to go. (Though I'm not entirely convinced that it's sane to write a Python wrapper for a shell command implemented in Perl. I suspect that juggling a bunch of parallel child processes in Python directly would not be more complicated.)

Why is my variable not being included in my subprocess.Popen?

I'm simply trying to pass along a variable to my shell script, but it isn't being handed off. I've followed examples from the Python docs, but it's not working. What am I missing?
subprocess.Popen(['./script.sh' + variable] , shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
You shouldn't be using shell=True here at all, unless you want any actual shell syntax in your variable (like >file.log) to be executed.
subprocess.Popen(['./script.sh', variable],
                 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
If you really want shell=True, you have a few options to do so securely. The first is to use pipes.quote() (or, in Python 3, shlex.quote()) to prevent shell escapes:
subprocess.Popen('./script.sh ' + pipes.quote(variable), shell=True,
                 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
The second is to pass the name as a subsequent argument (note the empty string, which becomes $0 in the generated shell):
subprocess.Popen(['./script.sh "$1"', '', variable], shell=True,
                 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
Remember, Bobby Tables isn't just for SQL -- his younger sister
Susan $(rm -rf /) is out there too.
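A quick check that the quoting route really defuses injection, using a hostile value in place of the question's variable (shlex.quote here; pipes.quote on Python 2):

```python
import shlex
import subprocess

# shlex.quote renders a hostile value inert when a shell command
# line must be built as a string.
variable = "harmless; echo INJECTED"
cmd = "echo " + shlex.quote(variable)
out = subprocess.check_output(cmd, shell=True)
print(out.decode().strip())
```

The semicolon and the second echo come back as literal text; without the quoting, the shell would have run them as a second command.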
You're combining two different ways of doing things. And on top of that, you're doing it wrong; but just fixing the "doing it wrong" isn't the answer.
You can put your two arguments in a list, and then launch it without the shell, like ['./script.sh', variable]. This is usually better. Using the shell means you have to deal with quoting, and with accidental or malicious injection, and can interfere with your input and output, and adds a performance cost. So, if you don't need it, don't use it.
Or you can put your two arguments in a string, and then launch it with the shell, like './script.sh ' + variable.
But you can't put your two arguments in a string, and then put that string in a list. In some cases, it will happen to work, but that's not something you can rely on.
In some cases, you can use a list with the shell,* or a string without the shell,** but generally you shouldn't do that unless you know what you're doing, and in any case, you still shouldn't be using a list of one string unless there's a specific reason you need to.***
If you want to use a list of arguments, do this:
subprocess.Popen(['./script.sh', variable], shell=False, …)
Notice that this is a list of two strings, not a list of one joined-up string, and that shell=False.
If you want to use a shell command line, don't put the command line in a list, don't skip the space between the arguments, and quote any non-static arguments, like this:
subprocess.Popen('./script.sh ' + shlex.quote(variable), shell=True, …)
* Using a list with the shell on Windows is never useful; they just get combined up in some unspecified way. But on Unix, subprocess will effectively prepend '/bin/sh' and '-c' to your list, and use that as the arg list for /bin/sh, which can be simpler than trying to quote shell arguments, and at least arguably more concise than explicitly calling /bin/sh with shell=False.
** Using a string without the shell on Unix is never useful; that just tries to find a program whose name is the whole string, which is going to fail (unless you're really unlucky). But on Windows, it can be useful; subprocess tries to combine your arguments into a string to be passed to CreateProcess in such a way that MSVCRT will parse them back to the same list of arguments on the other side, and in some edge cases it's necessary to create that string yourself.
*** Basically, you want to spawn ['/bin/sh', '-c', <command line>] exactly.
Add a space after ./script.sh:
subprocess.Popen(['./script.sh ' + variable] , shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
Or simply add a space after the script name:
subprocess.Popen(['./script.sh ' + variable], shell=True,
                 stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
