I have a problem with the Popen function. I'm trying to retrieve the output of a command that I run.
print(subprocess.Popen("dig -x 156.17.86.3 +short", shell=True, stdout=subprocess.PIPE).communicate()[0].decode('utf-8').strip())
This part works, but when I use a variable inside Popen (for adress in IP):
print(subprocess.Popen("dig -x ",Adres," +short", shell=True, stdout=subprocess.PIPE).communicate()[0].decode('utf-8').strip())
something like this happens:
raise TypeError("bufsize must be an integer")
I thought it might be a problem with how I built the command, so I tried this instead:
command=['dig','-x',str(Adres),'+short']
print(subprocess.Popen(command, shell=True, stdout=subprocess.PIPE).communicate()[0].decode('utf-8').strip())
But now the return value is different from what I get in the console:
dig -x 156.17.4.20 +short
vpn.ii.uni.wroc.pl.
How can I print the above name from within the script?
Thanks a lot.
The error is that you're not passing a single string, but multiple separate arguments:
subprocess.Popen("dig -x ",Adres," +short", shell=True, stdout=subprocess.PIPE)
If you look at the Popen constructor in the docs, that means you're passing "dig -x" as the args string, passing Adres as the bufsize, and passing "+short" as the executable. That's definitely not what you want.
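If you want to confirm that mapping, the first few parameter names in Popen's signature are easy to check from a REPL (just a sanity check, not something your script needs):
import inspect
import subprocess

# The first three positional parameters are args, bufsize and executable,
# which is why Adres ends up being treated as bufsize in the broken call.
print(list(inspect.signature(subprocess.Popen).parameters)[:3])
# ['args', 'bufsize', 'executable']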
You could fix this by building a string with concatenation or string formatting:
subprocess.Popen("dig -x " + str(Adres) + " +short", shell=True, stdout=subprocess.PIPE)
subprocess.Popen(f"dig -x {Adres} +short", shell=True, stdout=subprocess.PIPE)
However, a much better fix is to just not use the shell here, and pass the arguments as a list:
subprocess.Popen(['dig', '-x', Adres, '+short'], stdout=subprocess.PIPE)
Notice that if you do this, you have to remove the shell=True, or this won't work. (It may actually work on Windows, but not on *nix, and you shouldn't do it even on Windows.) In the edited version of your question, you're not doing that, so it's still wrong.
While we're at it, you really don't need to create a Popen object and communicate with it if that's literally all you're doing. A simpler solution is:
print(subprocess.run(['dig', '-x', Adres, '+short'], stdout=subprocess.PIPE).stdout.decode('utf-8'))
Also, if you're having problems debugging a complicated expression like yours, it really helps to break it into separate pieces that you can debug separately (with extra prints, or debugger breakpoints):
proc = subprocess.run(['dig', '-x', Adres, '+short'], stdout=subprocess.PIPE)
result = proc.stdout.decode('utf-8')
print(result)
This is essentially the same thing, with nearly the same efficiency, but easier to read and easier to debug.
And when I run this with Adres = '156.17.4.20', I get exactly the output you're looking for:
vpn.ii.uni.wroc.pl.
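And since the question mentions looping over addresses (for adress in IP), the same call drops straight into a loop; IP here stands for whatever iterable of address strings you already have:
for Adres in IP:
    proc = subprocess.run(['dig', '-x', Adres, '+short'], stdout=subprocess.PIPE)
    print(proc.stdout.decode('utf-8').strip())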
Related
I'm simply trying to pass a variable along to my shell script, but it isn't being handed off. I've been following examples from the Python docs, but it's not working. What am I missing?
subprocess.Popen(['./script.sh' + variable] , shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
You shouldn't be using shell=True here at all, unless you want any actual shell syntax in your variable (like >file.log) to be executed.
subprocess.Popen(['./script.sh', variable],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
If you really want shell=True, you have a few options to do so securely. The first is to use pipes.quote() (or, in Python 3, shlex.quote()) to prevent shell escapes:
subprocess.Popen('./script.sh ' + pipes.quote(variable), shell=True,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
The second is to pass the variable as a subsequent argument (note the empty string, which becomes $0 in the generated shell):
subprocess.Popen(['./script.sh "$1"', '', variable], shell=True,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
Remember, Bobby Tables isn't just for SQL -- his younger sister
Susan $(rm -rf /) is out there too.
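If you want to see what the quoting buys you, try it on a hostile value (the value below is made up for illustration):
import shlex

variable = 'foo; rm -rf /'   # hypothetical hostile input
print('./script.sh ' + shlex.quote(variable))
# ./script.sh 'foo; rm -rf /'   <- the shell now sees a single harmless argument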
You're combining two different ways of doing things. And, on top of that, you're doing it wrong, but just fixing the "doing it wrong" isn't the answer.
You can put your two arguments in a list, and then launch it without the shell, like ['./script.sh', variable]. This is usually better. Using the shell means you have to deal with quoting and with accidental or malicious injection, it can interfere with your input and output, and it adds a performance cost. So, if you don't need it, don't use it.
Or you can put your two arguments in a string, and then launch it with the shell, like './script.sh ' + variable.
But you can't put your two arguments in a string, and then put that string in a list. In some cases, it will happen to work, but that's not something you can rely on.
In some cases, you can use a list with the shell,* or a string without the shell,** but generally you shouldn't do that unless you know what you're doing, and in any case, you still shouldn't be using a list of one string unless there's a specific reason you need to.***
If you want to use a list of arguments, do this:
subprocess.Popen(['./script.sh', variable], shell=False, …)
Notice that this is a list of two strings, not a list of one joined-up string, and that shell=False.
If you want to use a shell command line, don't put the command line in a list, don't skip the space between the arguments, and quote any non-static arguments, like this:
subprocess.Popen('./script.sh ' + shlex.quote(variable), shell=True, …)
* Using a list with the shell on Windows is never useful; they just get combined up in some unspecified way. But on Unix, subprocess will effectively prepend '/bin/sh' and '-c' to your list, and use that as the arg list for /bin/sh, which can be simpler than trying to quote shell arguments, and at least arguably more concise than explicitly calling /bin/sh with shell=False.
** Using a string without the shell on Unix is never useful; that just tries to find a program whose name is the whole string, which is going to fail (unless you're really unlucky). But on Windows, it can be useful; subprocess tries to combine your arguments into a string to be passed to CreateProcess in such a way that MSVCRT will parse them back to the same list of arguments on the other side, and in some edge cases it's necessary to create that string yourself.
*** Basically, you want to spawn ['/bin/sh', '-c', <command line>] exactly.
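If you're curious what that looks like in practice, here's a small POSIX-only demonstration of where the extra list elements end up (they become the generated shell's positional parameters):
import subprocess

# Only the first element is treated as the command line; 'zero' and 'one'
# become $0 and $1 of the generated /bin/sh.
subprocess.call(['echo "$0 $1"', 'zero', 'one'], shell=True)
# prints: zero one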
Add a space after ./script.sh:
subprocess.Popen(['./script.sh ' + variable] , shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
I would just add a space after the script name:
subprocess.Popen(['./script.sh ' + variable], shell=True,
stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
I'm trying to run a repo command using subprocess.check_call. I don't see any error, but it's not running.
Here is my code.
def repo(*args):
    return subprocess.check_call(['repo'] + list(args), shell = True, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
repo('forall','-pc','"','git','merge','--strategy=ours','\${REPO_REMOTE}/branch_name','"','>log.log','2>&1')
Am I missing something?
Please help.
Thanks.
I'm going on a hunch: I guess you don't see anything because the error messages are stuck in your stderr pipe. Try this:
import subprocess
def repo(command):
    subprocess.check_call('repo ' + command, shell=True)
repo('forall -pc "git merge --strategy=ours \${REPO_REMOTE}/branch_name" > log.log 2>&1')
Does that look more like what you imagined? Also:
when using shell=True (although I don't recommend that you do) you can just pass a str (as opposed to a list); and
there is no pressing need for the return, because check_call() either raises an exception or returns 0 (see the example below).
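For example, on a system with the standard true and false utilities:
import subprocess

print(subprocess.check_call(['true']))   # prints 0
subprocess.check_call(['false'])         # raises CalledProcessError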
If you have shell=True and the first argument is a sequence, as you do, then the first element of the sequence is passed to the shell as the argument of its -c option, and the rest of the elements become additional arguments to the shell. Example:
subprocess.check_call(['ls', '-l'], shell=True)
means the following is run:
sh -c "ls" -l
Note that ls doesn't get the -l option; it ends up as a positional argument ($0) of the shell sh instead.
So, you should not use shell=True. If you have to, use a string instead of a list as args.
Also, the fine manual warns not to use stdout=PIPE and stderr=PIPE with check_call().
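If you drop shell=True, you also have to handle the > log.log 2>&1 part yourself, since that redirection is shell syntax. A minimal sketch of the no-shell version, assuming repo is on your PATH and that repo forall's -c command is still run through a shell of its own (so ${REPO_REMOTE} expands there):
import subprocess

with open('log.log', 'w') as log:
    subprocess.check_call(
        ['repo', 'forall', '-pc', 'git merge --strategy=ours ${REPO_REMOTE}/branch_name'],
        stdout=log, stderr=subprocess.STDOUT)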
I am trying to use rsync with python. I have read that the preferred way to passing arguments to Popen is using an array.
The code I tried:
p = Popen(["rsync",
"\"{source}\"".format(source=latestPath),
"\"{user}#{host}:{dir}\"".format(user=user, host=host, dir=dir)],
stdout=PIPE, stderr=PIPE)
The result is rsync asking for password, even though I have set up SSH keys to do the authentication.
I think this is a problem with the environment the new process gets executed in. What I tried next is:
p = Popen(["rsync",
"\"{source}\"".format(source=latestPath),
"\"{user}#{host}:{dir}\"".format(user=user, host=host, dir=dir)],
stdout=PIPE, stderr=PIPE, shell=True)
This results in rsync printing the "correct usage", so the arguments are passed to rsync incorrectly. I am not sure if this is even supposed to work (passing an array with shell=True).
If I remove the array altogether like this:
p = Popen("rsync \"{source}\" \"{user}#{host}:{dir}\"".format(
source=latestPath, user=user, host=host, dir=dir),
stdout=PIPE, stderr=PIPE, shell=True)
The program works fine. It really doesn't matter for the sake of this script, but I'd like to know: what's the difference? Why don't the other two (mainly the first one) work?
Is it just that the shell environment is required, and the second one is incorrect?
EDIT: Contents of the variables
latestPath='/home/tomcat/.jenkins/jobs/MC 4thworld/workspace/target/FourthWorld-0.1-SNAPSHOT.jar'
user='mc'
host='192.168.0.32'
dir='/mc/test/plugins/'
I'd like to know what's the difference?
When shell=True, the entire command is passed to the shell. The quotes are there so the shell can correctly pick the command apart again. In particular, passing
foo "bar baz"
to the shell causes it to parse the command as (Python syntax) ['foo', 'bar baz'] so that it can execute the foo command with the argument bar baz.
By contrast, when shell=False, Python will pass the arguments in the list to the program immediately. For example, try the following subprocess commands:
>>> import subprocess
>>> subprocess.call(["echo", '"Hello!"'])
"Hello!"
0
>>> subprocess.call('echo "Hello!"', shell=True)
Hello!
0
and note that in the first, the quotes are echoed back at you by the echo program, while in the second case, the shell has stripped them off prior to executing echo.
In your specific case, rsync gets the quotes but doesn't know how it's supposed to handle them; it's not itself a shell, after all.
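So the fix for the first form is simply to drop the embedded quotes and let Popen hand each list element to rsync verbatim; a sketch using the same variables as in the question:
from subprocess import Popen, PIPE

p = Popen(["rsync",
           latestPath,   # no manual quoting needed, even though the path contains a space
           "{user}@{host}:{dir}".format(user=user, host=host, dir=dir)],
          stdout=PIPE, stderr=PIPE)
out, err = p.communicate()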
Could it have to do with the cwd or env parameters? Maybe in the first syntax it can't find the SSH keys...
Just a suggestion, it might be easier for you to use sh instead of subprocess:
import sh
sh.rsync(latestPath, user+"@"+host+":"+dir)
Inside a subprocess call, I want to use shell=True so that it does globbing on pathnames (code below); however, this has the annoying side-effect of making subprocess spawn a child process (which must then be communicate()d / poll()ed / wait()ed / terminate()d / kill()ed / whatevah).
(Yes I am aware the globbing can also be done with fnmatch/glob, but please show me the 'correct' use of subprocess on this, i.e. the minimal incantation to both get the stdout and stop the child process.)
This works fine (returns output):
subprocess.check_output(['/usr/bin/wc','-l','[A-Z]*/[A-Z]*.F*'], shell=False)
but this hangs
subprocess.check_output(['/usr/bin/wc','-l','[A-Z]*/[A-Z]*.F*'], shell=True)
(PS: It's seriously aggravating that you can't tell subprocess you want some but not all shell functionality e.g. globbing but not spawning. I think there's a worthy PEP in that, if anyone cares to comment, i.e. pass in a tuple of Boolean, or an or of binary flags)
(PPS: the idiom of whether you pass subprocess...(cmdstring.split() or [...]) is just a trivial idiomatic difference. I say tomato, you say tomay-to. In my case, the motivation is that the command is fixed but I may want to call it more than once with a different filespec.)
First off -- there's very little point to passing an array to:
subprocess.check_output(['/usr/bin/wc','-l','[A-Z]*/[A-Z]*.F*'], shell=True)
...as this simply runs wc with no arguments, in a shell that is also passed -l and [A-Z]*/[A-Z]*.F* as arguments (to the shell, not to wc). Instead, you want:
subprocess.check_output('/usr/bin/wc -l [A-Z]*/[A-Z]*.F*', shell=True)
The reason the original call hung is that wc, having been given no arguments, was reading from stdin. I would suggest ensuring that stdin is passed in closed, rather than passing along your Python program's stdin (as is the default behavior).
An easy way to do this, since you have shell=True:
subprocess.check_output(
'/usr/bin/wc -l [A-Z]*/[A-Z]*.F* </dev/null',
shell=True)
...alternately:
p = subprocess.Popen('/usr/bin/wc -l [A-Z]*/[A-Z]*.F*', shell=True,
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=None)
(output, _) = p.communicate(input=b'')  # bytes on Python 3; use '' on Python 2
...which will ensure an empty stdin from Python code rather than relying on the shell.
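On Python 3.3 and later there's also a stdin=subprocess.DEVNULL shortcut that gives you the same empty stdin without either the shell redirect or the manual communicate() call (a newer convenience, not part of the original answer):
import subprocess

output = subprocess.check_output(
    '/usr/bin/wc -l [A-Z]*/[A-Z]*.F*',
    shell=True, stdin=subprocess.DEVNULL)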
I am using a piece of scientific software (called vasp) that works only in bash, and I'm using Python to create a script that will make multiple runs for me. When I use subprocess.check_call to call it normally, it works fine, but when I add the '| tee tee_output' it doesn't work.
subprocess.check_call('vasp') #this works
subprocess.check_call('vasp | tee tee_output') #this doesn't
I am a newbie to Python and programming altogether.
Try this. It executes the command (passed as a string) via a shell, instead of executing the command directly. (It's the equivalent of calling the shell itself with the -c flag, i.e. Popen(['/bin/sh', '-c', args[0], args[1], ...])):
subprocess.check_call('vasp | tee tee_output', shell=True)
But attend to the warning in the docs about this method.
You could do this:
vasp = subprocess.Popen('vasp', stdout=subprocess.PIPE)
subprocess.check_call(('tee', 'tee_output'), stdin=vasp.stdout)
This is generally safer than using shell=True, especially if you can't trust the input.
Note that check_call will check the return code of tee, rather than vasp, to see whether it should raise a CalledProcessError. (The shell=True method will do the same, as this matches the behavior of the shell pipe.) If you want, you can check the return code of vasp yourself by calling vasp.poll(). (The other method won't let you do this.)
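For completeness, here's a sketch of that vasp-side check, still using the same vasp command and tee_output file from the question:
import subprocess

vasp = subprocess.Popen('vasp', stdout=subprocess.PIPE)
subprocess.check_call(('tee', 'tee_output'), stdin=vasp.stdout)
vasp.stdout.close()    # drop our copy of the pipe's read end
if vasp.wait() != 0:   # tee has already exited, so this won't block
    raise subprocess.CalledProcessError(vasp.returncode, 'vasp')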
Don't use shell=True; it has many security holes. Instead, do something like this:
cmd1 = ['vasp']
cmd2 = ['tee', 'tee_output']
runcmd = subprocess.Popen(cmd1, stdout=subprocess.PIPE)
runcmd2 = subprocess.Popen(cmd2, stdin=runcmd.stdout, stdout=subprocess.PIPE)
runcmd2.communicate()
I know it's longer, but it's much safer.
You can find more info in the documentation:
http://docs.python.org/library/pipes.html
(With the pipes module linked above, you build up longer pipelines by appending more commands to the Template object, called t in the docs' examples.)