Why is my variable not being included in my subprocess.Popen? - python

I'm simply trying to pass a variable along to my shell script, but it isn't being handed off. I've
been following examples from the Python docs, but it's not working. What am I missing?
subprocess.Popen(['./script.sh' + variable] , shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

You shouldn't be using shell=True here at all, unless you want any actual shell syntax in your variable (like >file.log) to be executed.
subprocess.Popen(['./script.sh', variable],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
If you really want shell=True, you have a few options to do so securely. The first is to use pipes.quote() (or, in Python 3, shlex.quote()) to prevent shell escapes:
subprocess.Popen('./script.sh ' + pipes.quote(variable), shell=True,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
The second is to pass the name as a subsequent argument (note the empty string, which becomes $0 in the generated shell):
subprocess.Popen(['./script.sh "$1"', '', variable], shell=True,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
Remember, Bobby Tables isn't just for SQL -- his younger sister
Susan $(rm -rf /) is out there too.

You're combining two different ways of doing things. And, on top of that, you're doing it wrong, but just fixing the "doing it wrong" isn't the answer.
You can put your two arguments in a list, and then launch it without the shell, like ['./script.sh', variable]. This is usually better. Using the shell means you have to deal with quoting, and with accidental or malicious injection, and can interfere with your input and output, and adds a performance cost. So, if you don't need it, don't use it.
Or you can put your two arguments in a string, and then launch it with the shell, like './script.sh ' + variable.
But you can't put your two arguments in a string, and then put that string in a list. In some cases, it will happen to work, but that's not something you can rely on.
In some cases, you can use a list with the shell,* or a string without the shell,** but generally you shouldn't do that unless you know what you're doing, and in any case, you still shouldn't be using a list of one string unless there's a specific reason you need to.***
If you want to use a list of arguments, do this:
subprocess.Popen(['./script.sh', variable], shell=False, …)
Notice that this is a list of two strings, not a list of one joined-up string, and that shell=False.
If you want to use a shell command line, don't put the command line in a list, don't skip the space between the arguments, and quote any non-static arguments, like this:
subprocess.Popen('./script.sh ' + shlex.quote(variable), shell=True, …)
* Using a list with the shell on Windows is never useful; they just get combined up in some unspecified way. But on Unix, subprocess will effectively prepend '/bin/sh' and '-c' to your list, and use that as the arg list for /bin/sh, which can be simpler than trying to quote shell arguments, and at least arguably more concise than explicitly calling /bin/sh with shell=False.
** Using a string without the shell on Unix is never useful; that just tries to find a program whose name is the whole string, which is going to fail (unless you're really unlucky). But on Windows, it can be useful; subprocess tries to combine your arguments into a string to be passed to CreateProcess in such a way that MSVCRT will parse them back to the same list of arguments on the other side, and in some edge cases it's necessary to create that string yourself.
*** Basically, you want to spawn ['/bin/sh', '-c', <command line>] exactly.
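To make footnotes * and *** concrete, here is a minimal sketch (my illustration, not from the answer, assuming a POSIX system and the ./script.sh from the question) showing what the list-with-shell form actually spawns:
import subprocess

variable = "hello world"  # hypothetical value

# shell=True with a list: on Unix, subprocess hands the whole list to /bin/sh...
subprocess.Popen(['./script.sh "$1"', '', variable], shell=True,
                 stdout=subprocess.PIPE, stderr=subprocess.PIPE)

# ...which is effectively the same as spawning the shell yourself:
subprocess.Popen(['/bin/sh', '-c', './script.sh "$1"', '', variable],
                 stdout=subprocess.PIPE, stderr=subprocess.PIPE)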

Add a space after ./script.sh:
subprocess.Popen(['./script.sh ' + variable] , shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

I would just add a space after the script name:
subprocess.Popen(['./script.sh ' + variable], shell=True,
stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

Related

Print Subprocess.Popen

I have a problem with the Popen function. I'm trying to retrieve the output from a command I used.
print(subprocess.Popen("dig -x 156.17.86.3 +short", shell=True, stdout=subprocess.PIPE).communicate()[0].decode('utf-8').strip())
This part works, but when I use the variable inside Popen (for Adres in IP),
print(subprocess.Popen("dig -x ",Adres," +short", shell=True, stdout=subprocess.PIPE).communicate()[0].decode('utf-8').strip())
something like this happens:
raise TypeError("bufsize must be an integer")
I thought it would be a problem with the command, so I used this solution:
command=['dig','-x',str(Adres),'+short']
print(subprocess.Popen(command, shell=True, stdout=subprocess.PIPE).communicate()[0].decode('utf-8').strip())
But now the return value is different from what the console gives:
dig -x 156.17.4.20 +short
vpn.ii.uni.wroc.pl.
How can I print the above name in the script?
Thanks a lot.
The error is that you're not passing a single string, but multiple separate arguments:
subprocess.Popen("dig -x ",Adres," +short", shell=True, stdout=subprocess.PIPE)
If you look at the Popen constructor in the docs, that means you're passing "dig -x" as the args string, passing Adres as the bufsize, and passing "+short" as the executable. That's definitely not what you want.
You could fix this by building a string with concatenation or string formatting:
subprocess.Popen("dig -x " + str(Adres) + " +short", shell=True, stdout=subprocess.PIPE)
subprocess.Popen(f"dig -x {Adres} +short", shell=True, stdout=subprocess.PIPE)
However, a much better fix is to just not use the shell here, and pass the arguments as a list:
subprocess.Popen(['dig', '-x', Adres, '+short'], stdout=subprocess.PIPE)
Notice that if you do this, you have to remove the shell=True, or this won't work. (It may actually work on Windows, but not on *nix, and you shouldn't do it even on Windows.) In the edited version of your question, you're not doing that, so it's still wrong.
While we're at it, you really don't need to create a Popen object and communicate with it if that's literally all you're doing. A simpler solution is:
print(subprocess.run(['dig', '-x', Adres, '+short'], stdout=subprocess.PIPE).stdout.decode('utf-8'))
Also, if you're having problems debugging a complicated expression like yours, it really helps to break it into separate pieces that you can debug separately (with extra prints, or debugger breakpoints):
proc = subprocess.run(['dig', '-x', Adres, '+short'], stdout=subprocess.PIPE)
result = proc.stdout.decode('utf-8')
print(result)
This is essentially the same thing, with nearly the same efficiency, but easier to read and easier to debug.
And when I run this with Adres = '156.17.4.20', I get exactly the output you're looking for:
vpn.ii.uni.wroc.pl.

Subprocess call failed to parse argument (kill function) Python [duplicate]

import os
import subprocess
proc = subprocess.Popen(['ls','*.bc'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out,err = proc.communicate()
print out
This script should print all the files with a .bc suffix; however, it returns an empty list. If I do ls *.bc manually in the command line it works. Doing ['ls','test.bc'] inside the script works as well, but for some reason the star symbol doesn't work. Any ideas?
You need to supply shell=True to execute the command through a shell interpreter.
If you do that, however, you can no longer supply a list as the first argument: the extra list items would be passed as arguments to the shell itself rather than to your command. Instead, specify the raw command line as you want it to be passed to the shell:
proc = subprocess.Popen('ls *.bc', shell=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
Expanding the * glob is part of the shell, but by default subprocess does not send your commands via a shell, so the command (first argument, ls) is executed, then a literal * is used as an argument.
This is a good thing; see the warning block in the "Frequently Used Arguments" section of the subprocess docs. It mainly discusses security implications, but it also helps avoid silly programming errors (as there are no magic shell characters to worry about).
My main complaint with shell=True is it usually implies there is a better way to go about the problem - with your example, you should use the glob module:
import glob
files = glob.glob("*.bc")
print files # ['file1.bc', 'file2.bc']
This will be quicker (no process startup overhead), more reliable, and cross-platform (not dependent on the platform having an ls command).
Besides passing shell=True, also make sure that your path is not quoted; otherwise it will not be expanded by the shell.
If your path may have special characters, you will have to escape them manually.
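For example (a sketch of my own, not part of the original answer; the directory name is hypothetical), you can quote just the variable part of the path with shlex.quote and leave the glob unquoted, so the shell still expands it:
import shlex
import subprocess

directory = 'build output'  # hypothetical path containing a space

# Quote the variable part so the shell treats it literally,
# but leave *.bc unquoted so the shell can still expand the glob.
cmd = 'ls -- ' + shlex.quote(directory) + '/*.bc'
proc = subprocess.Popen(cmd, shell=True,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()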

How does subprocess.call() work with shell=False?

I am using Python's subprocess module to call some Linux command line functions. The documentation explains the shell=True argument as
If shell is True, the specified command will be executed through the shell
There are two examples, which seem the same to me from a descriptive viewpoint (i.e. both of them call some command-line command), but one of them uses shell=True and the other does not
>>> subprocess.call(["ls", "-l"])
0
>>> subprocess.call("exit 1", shell=True)
1
My question is:
What does running the command with shell=False do, in contrast to shell=True?
I was under the impression that subprocess.call and check_call and check_output all must execute the argument through the shell. In other words, how can it possibly not execute the argument through the shell?
It would also be helpful to get some examples of:
Things that can be done with shell=True that can't be done with shell=False, and why they can't be done.
Vice versa (although it seems that there are no such examples).
Things for which it does not matter whether shell=True or False, and why it doesn't matter.
UNIX programs start each other with the following three calls, or derivatives/equivalents thereto:
fork() - Create a new copy of yourself.
exec() - Replace yourself with a different program (do this if you're the copy!).
wait() - Wait for another process to finish (optional, if not running in background).
Thus, with shell=False, you do just that (as Python-syntax pseudocode below -- exclude the wait() if not a blocking invocation such as subprocess.call()):
pid = fork()
if pid == 0:  # we're the child process, not the parent
    execlp("ls", "ls", "-l")
else:
    retval = wait(pid)  # we're the parent; wait for the child to exit & get its exit status
whereas with shell=True, you do this:
pid = fork()
if pid == 0:
    execlp("sh", "sh", "-c", "ls -l")
else:
    retval = wait(pid)
Note that with shell=False, the command we executed was ls, whereas with shell=True, the command we executed was sh.
That is to say:
subprocess.Popen(foo, shell=True)
is exactly the same as:
subprocess.Popen(
["sh", "-c"] + ([foo] if isinstance(foo, basestring) else foo),
shell=False)
That is to say, you execute a copy of /bin/sh, and direct that copy of /bin/sh to parse the string into an argument list and execute ls -l itself.
So, why would you use shell=True?
You're invoking a shell builtin.
For instance, the exit command is actually part of the shell itself, rather than an external command. That said, this is a fairly small set of commands, and it's rare for them to be useful in the context of a shell instance that only exists for the duration of a single subprocess.call() invocation.
You have some code with shell constructs (e.g. redirections) that would be difficult to emulate without it.
If, for instance, your command is cat one two >three, the syntax >three is a redirection: It's not an argument to cat, but an instruction to the shell to set stdout=open('three', 'w') when running the command ['cat', 'one', 'two']. If you don't want to deal with redirections and pipelines yourself, you need a shell to do it.
A slightly trickier case is cat foo bar | baz. To do that without a shell, you need to start both sides of the pipeline yourself: p1 = Popen(['cat', 'foo', 'bar'], stdout=PIPE), p2=Popen(['baz'], stdin=p1.stdout).
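Spelled out as a sketch (mine, not from the original answer; the file and command names are the placeholders used above), both cases look like this without a shell:
import subprocess

# Equivalent of: cat one two >three
# The redirection is done by Python opening the file, not by a shell.
with open('three', 'w') as outfile:
    subprocess.call(['cat', 'one', 'two'], stdout=outfile)

# Equivalent of: cat foo bar | baz
p1 = subprocess.Popen(['cat', 'foo', 'bar'], stdout=subprocess.PIPE)
p2 = subprocess.Popen(['baz'], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # let p1 receive SIGPIPE if p2 exits early
output, _ = p2.communicate()
p1.wait()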
You don't give a damn about security bugs.
...okay, that's a little bit too strong, but not by much. Using shell=True is dangerous. You can't do this: Popen('cat -- %s' % (filename,), shell=True) without a shell injection vulnerability: If your code were ever invoked with a filename containing $(rm -rf ~), you'd have a very bad day. On the other hand, ['cat', '--', filename] is safe with all possible filenames: The filename is purely data, not parsed as source code by a shell or anything else.
It is possible to write safe scripts in shell, but you need to be careful about it. Consider the following:
filenames = ['file1', 'file2'] # these can be user-provided
subprocess.Popen(['cat -- "$@" | baz', '_'] + filenames, shell=True)
That code is safe (well -- as safe as letting a user read any file they want ever is), because it's passing your filenames out-of-band from your script code -- but it's safe only because the string being passed to the shell is fixed and hardcoded, and the parameterized content is external variables (the filenames list). And even then, it's "safe" only to a point -- a bug like Shellshock that triggers on shell initialization would impact it as much as anything else.
I was under the impression that subprocess.call and check_call and check_output all must execute the argument through the shell.
No, subprocess is perfectly capable of starting a program directly (via an operating system call). It does not need a shell.
Things that can be done with shell=True that can't be done with shell=False
You can use shell=False for any command that simply runs some executable optionally with some specified arguments.
You must use shell=True if your command uses shell features. This includes pipelines (|), redirections (> and the like), and compound statements combined with ;, &&, ||, etc.
Thus, one can use shell=False for a command like grep string file. But a command like grep string file | xargs something will, because of the |, require shell=True.
Because the shell has power features that python programmers do not always find intuitive, it is considered better practice to use shell=False unless you really truly need the shell feature. As an example, pipelines are not really truly needed because they can also be done using subprocess' PIPE feature.
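As a sketch of that last point (mine, not from the answer; grep, file, and something are just the placeholder names used above), the pipeline grep string file | xargs something can be built with two shell-free calls:
import subprocess

# Equivalent of: grep string file | xargs something
p1 = subprocess.Popen(['grep', 'string', 'file'], stdout=subprocess.PIPE)
p2 = subprocess.Popen(['xargs', 'something'], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # so p1 gets SIGPIPE if p2 exits first
output, _ = p2.communicate()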

Difference between whole string command and list of strings in popen

I found that most programmers suggest using a list of strings to represent the command in Popen. However, in my own project, I found that a whole string works in more cases.
For example, the following works
subprocess.Popen('pgrep -f "\./run"', stdout=subprocess.PIPE, shell=True).wait()
while
subprocess.Popen(['pgrep', '-f', '"\./run"'], stdout=subprocess.PIPE, shell=True).wait()
does not.
May I know what's the difference between these two ways of implementation and why the second one does not work as expected?
The second should not have a shell=True parameter. Instead, it should be:
subprocess.Popen(['pgrep', '-f', r'\./run'], stdout=subprocess.PIPE).wait()
(Note that the inner double quotes are dropped too: with no shell there is nothing to strip them, so they would otherwise become a literal part of the pattern.)
The shell parameter sets whether or not to execute the command in a separate shell. That is, if a new shell should be spawned just to execute the command, which must be interpreted by the shell before it can be run.
When you provide a list of strings, however, no second shell is spawned, and thus it is (minimally) faster. The list form is also better for processing variable input, because it avoids string interpolation.
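For instance (a sketch of my own, with a hypothetical pattern variable):
import subprocess

pattern = r'\./run'  # hypothetical, possibly user-supplied pattern

# With the list form the pattern is passed through verbatim;
# no shell quoting or string interpolation is needed.
proc = subprocess.Popen(['pgrep', '-f', pattern], stdout=subprocess.PIPE)
out, _ = proc.communicate()
print(out.decode())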
See: https://stackoverflow.com/a/15109975/1730261

Correct incantation of subprocess with shell=True to get output and not hang

Inside a subprocess call, I want to use shell=True so that it does globbing on pathnames (code below); however, this has the annoying side-effect of making subprocess spawn a child process (which must then be communicate()d / poll()ed / wait()ed / terminate()d / kill()ed / whatevah).
(Yes I am aware the globbing can also be done with fnmatch/glob, but please show me the 'correct' use of subprocess on this, i.e. the minimal incantation to both get the stdout and stop the child process.)
This works fine (returns output):
subprocess.check_output(['/usr/bin/wc','-l','[A-Z]*/[A-Z]*.F*'], shell=False)
but this hangs
subprocess.check_output(['/usr/bin/wc','-l','[A-Z]*/[A-Z]*.F*'], shell=True)
(PS: It's seriously aggravating that you can't tell subprocess you want some but not all shell functionality e.g. globbing but not spawning. I think there's a worthy PEP in that, if anyone cares to comment, i.e. pass in a tuple of Boolean, or an or of binary flags)
(PPS: the idiom of whether you pass subprocess...(cmdstring.split() or [...]) is just a trivial idiomatic difference. I say tomato, you say tomay-to. In my case, the motivation is that the command is fixed but I may want to call it more than once with a different filespec.)
First off -- there's very little point to passing an array to:
subprocess.check_output(['/usr/bin/wc','-l','[A-Z]*/[A-Z]*.F*'], shell=True)
...as this simply runs wc with no arguments, in a shell that is also passed -l and [A-Z]*/[A-Z]*.F* as arguments (to the shell, not to wc). Instead, you want:
subprocess.check_output('/usr/bin/wc -l [A-Z]*/[A-Z]*.F*', shell=True)
Your original version hangs because wc, given no arguments, sits reading from stdin. I would suggest ensuring that stdin is passed in closed, rather than passing along your Python program's stdin (as is the default behavior).
An easy way to do this, since you have shell=True:
subprocess.check_output(
'/usr/bin/wc -l [A-Z]*/[A-Z]*.F* </dev/null',
shell=True)
...alternately:
p = subprocess.Popen('/usr/bin/wc -l [A-Z]*/[A-Z]*.F*', shell=True,
stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=None)
(output, _) = p.communicate(input='')
...which will ensure an empty stdin from Python code rather than relying on the shell.
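On Python 3 there is an even shorter route (my addition, not part of the original answer): pass stdin=subprocess.DEVNULL, which points the child's stdin at /dev/null without involving the shell or an extra pipe:
import subprocess

output = subprocess.check_output('/usr/bin/wc -l [A-Z]*/[A-Z]*.F*',
                                 shell=True,
                                 stdin=subprocess.DEVNULL)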
