Python Popen shell script fails - python

I want to execute the bash command
'/bin/echo </verbosegc> >> /tmp/jruby.log'
in Python using Popen. The code does not raise any exception, but no change is made to jruby.log after execution. The Python code is shown below.
>>> command='/bin/echo </verbosegc> >> '+fullpath
>>> command
'/bin/echo </verbosegc> >> /tmp/jruby.log'
>>> process = subprocess.Popen(command.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
>>> output= process.communicate()[0]
>>> output
'</verbosegc> >> /tmp/jruby.log\n'
I also printed process.pid and then checked the pid using ps -ef | grep <pid>. The result shows that the process has already finished.

Just pass a file object if you want to append the output to a file; you cannot redirect to a file unless you set shell=True:
command = ['/bin/echo', '</verbosegc>']
with open('/tmp/jruby.log', "a") as f:
    subprocess.check_call(command, stdout=f, stderr=subprocess.STDOUT)

The first argument to subprocess.Popen is the array ['/bin/echo', '</verbosegc>', '>>', '/tmp/jruby.log']. When the first argument to subprocess.Popen is an array, it does not launch a shell to run the command, and the shell is what's responsible for interpreting >> /tmp/jruby.log to mean "write output to jruby.log".
In order to make the >> redirection work in this command, you'll need to pass command directly to subprocess.Popen() as a single string with shell=True, without splitting it into a list. You'll also need to quote the first argument (or else the shell will interpret the "<" and ">" characters in ways you don't want):
command = '/bin/echo "</verbosegc>" >> /tmp/jruby.log'
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)

Consider the following:
import subprocess
command = ['printf "%s\n" "$1" >>"$2"',  # shell script to execute
           '',                           # $0 in shell
           '</verbosegc>',               # $1
           '/tmp/jruby.log']             # $2
subprocess.Popen(command, shell=True)
The first argument is a shell script referring to $1 and $2, which are in turn passed as separate arguments. Keeping data separate from code, rather than trying to substitute the former into the latter, is a precaution against shell injection (think of this as an analog to SQL injection).
Of course, don't actually do anything like this in Python -- the native primitives for file IO are far more appropriate.
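Still, to make the injection point concrete, here is a minimal sketch; the hostile filename value is hypothetical:
import subprocess
filename = 'jruby.log; rm -rf ~'  # hypothetical hostile input
# UNSAFE: the data is pasted into the shell code itself, so the shell
# would run "rm -rf ~" as a second command:
#   subprocess.Popen('echo "</verbosegc>" >> ' + filename, shell=True)
# SAFER: the data arrives as the positional parameter $2, which the
# shell expands as data, never as code (the >> target is simply an
# oddly named file):
subprocess.Popen(['printf "%s\n" "$1" >>"$2"',
                  '',              # $0 in shell
                  '</verbosegc>',  # $1
                  filename],       # $2
                 shell=True)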

Have you tried without splitting the command and using shell=True? My usual format is:
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
output = process.stdout.read() # or .readlines()

Pass variable to bash command with Python

I have the following code:
from subprocess import Popen, PIPE
p = Popen("C:/cygwin64/bin/bash.exe", stdin=PIPE, stdout=PIPE)
path = "C:/Users/Link/Desktop/folder/"
p.stdin.write(b"cd " + str.encode(path)))
p.stdin.close()
out = p.stdout.read()
print(out)
The output is b''
Is there any way to pass a variable to the bash command, as in p.stdin.write(b"cd " + path)?
I ask because the way it is written above doesn't work. The output is null, as if Cygwin started and nothing else happened.
EDIT
Since the question does not seem clear enough, I'll add this scenario:
I am on Windows and I am using Python 3.6.
I have a bash command that requires Cygwin to be executed. This command may have a variable in its string, which will change after every execution. Imagine a for loop which executes a command.
For example (an ImageMagick command):
convert image.jpg -resize 1024x768 output_file.jpg
How can I execute this command from Python with output_file.jpg as a variable?
Bash doesn't run in interactive mode by default unless it detects that standard input and output are connected to a terminal. You PIPEd them in, therefore they're definitely not connected to a terminal.
Bash does not display any prompts in non-interactive mode, hence you see nothing. You can force it to be interactive with the -i switch.
However, even then, it writes the prompts not to stdout but to stderr; you can pipe stderr to stdout:
from subprocess import Popen, PIPE, STDOUT
p = Popen(["C:/cygwin64/bin/bash.exe", "-i"], stdin=PIPE, stdout=PIPE, stderr=STDOUT)
and you will capture the prompts and such.
Or use your original approach with a command that does produce output, such as pwd, which prints the current working directory:
p.stdin.write(b"cd " + path.encode() + b"\n")
p.stdin.write(b"pwd")
It is tricky to talk to an interactive process like this, though: read too little and you deadlock; write too much and you deadlock. This is why Popen has the .communicate() method for providing all of the input at once and getting the stdout and stderr afterwards.
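For example, a minimal sketch using .communicate() with the question's setup (reusing the path variable defined there):
from subprocess import Popen, PIPE, STDOUT
p = Popen(["C:/cygwin64/bin/bash.exe"], stdin=PIPE, stdout=PIPE, stderr=STDOUT)
out, _ = p.communicate(b"cd " + path.encode() + b"\npwd\n")
print(out)  # all output arrives at once, after bash exits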
Since it seems you are using the Cygwin Python, you should use proper POSIX paths, not Windows-like ones.
Instead of
p = Popen("C:/cygwin64/bin/bash.exe", stdin=PIPE, stdout=PIPE)
use
p = Popen("/bin/bash.exe", stdin=PIPE, stdout=PIPE)

Using back-ticks in Python subprocess

I want to run this git command through a Python script and get the output of it:
git diff --name-only mybranch `git merge-base mybranch develop`
The purpose of the command is to see what changes have been made on mybranch since the last merge with develop.
To achieve this I'm using subprocess.Popen:
output = subprocess.Popen(["git", "diff", "--name-only", "mybranch", "`git merge-base mybranch develop`"], stdout=subprocess.PIPE, shell=True)
However, this does not work. The variable output.communicate()[0] simply gives me a printout of git usage -- essentially telling me the input command is wrong.
I saw that a similar question exists here, but it only told me to use shell=True, which didn't solve my problem.
I also attempted to run the two commands in succession, but that gave me the same output as before. It is possible that I am missing something in this step, though.
Any help or tips are appreciated.
Backticks and subprocess
The backtick being a shell feature, you may not have a choice but to use shell=True; however, pass in a shell command string, not a list of args.
So for your particular command (assuming it works in the first place)
process = subprocess.Popen("git diff --name-only mybranch `git merge-base mybranch develop`", stdout=subprocess.PIPE, shell=True)
Notice that when you call Popen() you get a process object; it shouldn't be called output, IMO.
Here's a simple example that works with backticks
>>> process = subprocess.Popen('echo `pwd`', stdout=subprocess.PIPE, shell=True)
>>> out, err = process.communicate()
>>> out
'/Users/bakkal\n'
Or you can use the $(cmd) syntax
>>> process = subprocess.Popen('echo $(pwd)', stdout=subprocess.PIPE, shell=True)
>>> out, err = process.communicate()
>>> out
'/Users/bakkal\n'
Here's what did NOT work (for backticks)
>>> process = subprocess.Popen(['echo', '`pwd`'], stdout=subprocess.PIPE, shell=True)
>>> out, err = process.communicate()
>>> out
'\n'
>>> process = subprocess.Popen(['echo', '`pwd`'], stdout=subprocess.PIPE, shell=False)
>>> out, err = process.communicate()
>>> out
'`pwd`\n'
On POSIX, the argument list is passed to /bin/sh -c, i.e., only the first argument is recognized as the shell command, so the shell runs git without any arguments; that is why you see the usage info. You should pass the command as a string if you want to use shell=True. From the subprocess docs:
On POSIX with shell=True, the shell defaults to /bin/sh. If args is a
string, the string specifies the command to execute through the shell.
This means that the string must be formatted exactly as it would be
when typed at the shell prompt. This includes, for example, quoting or
backslash escaping filenames with spaces in them. If args is a
sequence, the first item specifies the command string, and any
additional items will be treated as additional arguments to the shell
itself. That is to say, Popen does the equivalent of:
Popen(['/bin/sh', '-c', args[0], args[1], ...])
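You can watch this happen in the REPL (a quick sketch):
>>> import subprocess
>>> # only args[0] is the shell command; the rest become $0, $1, ...
>>> subprocess.call(['echo "cmd=$0 arg=$1"', 'zero', 'one'], shell=True)
cmd=zero arg=one
0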
You don't need shell=True in this case.
#!/usr/bin/env python
from subprocess import check_output
merge_base_output = check_output('git merge-base mybranch develop'.split(),
                                 universal_newlines=True).strip()
diff_output = check_output('git diff --name-only mybranch'.split() +
                           [merge_base_output])

Capturing LIVE output of shell script while running it in python

I am writing a python script to ssh into a linux server and execute a shell script that is already stored on the linux server.
Here is what my code looks like so far:
command = ['ssh into the remote server',
           'cd into the directory of the shell script',
           './running the shell script',
           ]
process = subprocess.Popen(command,
                           shell=True,
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
err, out = process.communicate()
if out:
    print "standard output of subprocess is : "
    print out
if err:
    print "standard error of subprocess is : "
    print err
print "returncode of subprocess: "
print process.returncode
1st question: I can obtain the output of my shell scripts through stderr, but I only obtain it after the entire shell script has finished executing. So if the shell script takes 10 minutes to finish, I only get to see the output of the shell script after 10 minutes.
I want to have the output of my shell scripts return line by line to me just as if I was executing the script manually in the remote server. Can this be done?
2nd question: as you can see, I have three commands in my command list (which is only a small portion of all my commands). If I put all my commands in the list, I only obtain the output of ALL my commands through stdout, and ONLY when all of them have finished executing. If my 1st question cannot be done, is there a way to at least obtain the output of each command after each one has been executed, instead of receiving them all at once only when all the commands have finished executing?
To see the output immediately, don't redirect it:
from subprocess import Popen, PIPE
p = Popen(['ssh', 'user@hostname'], stdin=PIPE)
p.communicate(b"""cd ..
echo 1st command
echo 2nd command
echo ...
""")
If you want both to capture the "live" output in a variable and to display it in the terminal then the solution depends on whether you need to handle stdin/stdout/stderr concurrently.
If input is small and you want to combine stdout/stderr then you could pass all commands at once and read the merged output line-by-line:
from subprocess import Popen, PIPE, STDOUT
p = Popen(['ssh', 'user@hostname'], stdin=PIPE,
          stdout=PIPE, stderr=STDOUT, bufsize=1)
p.stdin.write(b"""cd ..
echo 1st command
echo 2nd command
echo ...
""")
p.stdin.close() # no more input
lines = [] # store output here
for line in iter(p.stdout.readline, b''):  # newline=b'\n'
    lines.append(line)  # capture for later
    print line,         # display now
p.stdout.close()
p.wait()
If you want to capture "live" stdout/stderr separately, see:
Displaying subprocess output to stdout and redirecting it
Subprocess.Popen: cloning stdout and stderr both to terminal and variables
I'm not entirely sure, but maybe you get instant output if you pass the other two commands as arguments to ssh:
command = 'ssh user@example.com \'cd some/path/on/your/server; ./run-the-script.sh\''
The way I understand it, Python first reads and processes all the input and only then returns output. I'm not too familiar with Python, so I might be wrong on this, but if I'm right, this should help.
Don't call .communicate() -- that waits for the process to finish.
Instead, keep reading data from .stdout pipe.
Simple example:
In [1]: import subprocess
In [2]: p = subprocess.Popen(["find", "/"], stdout=subprocess.PIPE)
In [3]: p.stdout
Out[3]: <open file '<fdopen>', mode 'rb' at 0x7f590446dc00>
In [4]: p.stdout.readline()
Out[4]: '/\n'
In [5]: p.stdout.readline()
Out[5]: '/var\n'
In [6]: p.stdout.readline()
Out[6]: '/var/games\n'

Passing arguments to "executable" parameter of the subprocess.Popen() call

subprocess.Popen() lets you pass the shell of your choice via the "executable" parameter.
I have chosen to pass "/bin/tcsh", and I do not want tcsh to read my ~/.cshrc.
The tcsh manual says that I need to pass -f to /bin/tcsh to do that.
How do I ask Popen to execute /bin/tcsh with a -f option?
import subprocess
cmd = ["echo hi"]
print cmd
proc = subprocess.Popen(cmd, shell=False, executable="/bin/tcsh", stderr=subprocess.PIPE, stdout=subprocess.PIPE)
return_code = proc.wait()
for line in proc.stdout:
    print("stdout: " + line.rstrip())
for line in proc.stderr:
    print("stderr: " + line.rstrip())
print return_code
Make your life easier:
subprocess.Popen(['/bin/tcsh', '-f', '-c', 'echo hi'],
                 shell=False, stderr=subprocess.PIPE, stdout=subprocess.PIPE)
I do not understand what the title of your question, "Passing arguments to subprocess executable", has to do with the rest of it, especially "I want tcsh not to read my ~/.cshrc."
However - I do know that you are not using your Popen correctly.
Your cmd should be either a list or a string, not a list containing one string.
So cmd = ["echo hi"] should be either cmd = "echo hi" or cmd = ["echo", "hi"].
Then, depending on whether it is a string or a list, you need to set shell to True or False: True if it is a string, False if it is a list.
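A small sketch of the two (roughly equivalent) forms:
import subprocess
# string + shell=True: the shell parses the command line
subprocess.Popen("/bin/tcsh -f -c 'echo hi'", shell=True)
# list + shell=False: the program is executed directly, no shell involved
subprocess.Popen(['/bin/tcsh', '-f', '-c', 'echo hi'], shell=False)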
"passing" an argument is a term for functions, using Popen, or subprocess module is not the same as a function, though they are functions, you are actually running a command with them, not passing arguments to them in the traditional sense, so if you want to run a process with '-f' you simply add '-f' to the string or list that you want to run the command with.
To put the whole thing together, you should run something like:
proc = subprocess.Popen('/bin/tcsh -f -c "echo hi"', shell=True, stderr=subprocess.PIPE, stdout=subprocess.PIPE)

Running shell command from Python script

I'm trying to run a shell command from within a Python script which needs to do several things:
1. The shell command is 'hspice tran.deck >! tran.lis'
2. The script should wait for the shell command to complete before proceeding
3. I need to check the return code from the command and
4. Capture STDOUT if it completed successfully, else capture STDERR
I went through the subprocess module and tried out a couple of things but couldn't find a way to do all of the above.
- with subprocess.call() I could check the return code but not capture the output.
- with subprocess.check_output() I could capture the output but not the code.
- with subprocess.Popen() and Popen.communicate(), I could capture STDOUT and STDERR but not the return code.
I'm not sure how to use Popen.wait() or the returncode attribute. I also couldn't get Popen to accept '>!' or '|' as arguments.
Can someone please point me in the right direction? I'm using Python 2.7.1
EDIT: Got things working with the following code
process = subprocess.Popen('ls | tee out.txt', shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = process.communicate()
if process.returncode == 0:
    print out
else:
    print err
Also, should I use a process.wait() after the process = line or does it wait by default?
Just use .returncode after .communicate(); communicate() itself waits for the process to finish, so a separate wait() is unnecessary. Also, tell Popen that what you're trying to run is a shell command, rather than a raw command line:
p = subprocess.Popen('ls | tee out.txt', shell=True, ...)
p.communicate()
print p.returncode
From the docs:
Popen.returncode
The child return code, set by poll() and wait() (and indirectly by communicate()). A None value indicates that the process hasn’t terminated yet.
A negative value -N indicates that the child was terminated by signal N (Unix only).
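For instance, a minimal sketch of the negative value (Unix only):
>>> import subprocess
>>> p = subprocess.Popen(['sleep', '60'])
>>> p.terminate()  # sends SIGTERM, i.e. signal 15
>>> p.wait()
-15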
Here is an example of how to interact with a shell:
>>> process = subprocess.Popen(['/bin/bash'], shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
>>> process.stdin.write('echo it works!\n')
>>> process.stdout.readline()
'it works!\n'
>>> process.stdin.write('date\n')
>>> process.stdout.readline()
'wto, 13 mar 2012, 17:25:35 CET\n'
>>>
