How to find string in output of subprocess.Popen.communicate - python

I'm trying to find a string in the output of proc.communicate for subprocess.Popen.
My code looks like this:
proc = subprocess.Popen(["./runCommand.sh" + " -i " + ip + " -c " + cmd], stdout=subprocess.PIPE, shell=True)
output = proc.communicate()
p_status = proc.wait()
if 'someword' in output:
    #dosomething
It seems that I can't find the word I'm looking for in the output.
The output looks like this when printed:
(b'blabla someword\blabla\n', None)
Do I need to convert this in order to find something with "in"?
Edit:
Thanks so far for your answers!
I changed it to output[0], but I still get an error:
TypeError: a bytes-like object is required, not 'str'
What can I do here? Use decode()?

You are getting a two-element tuple; you can use in if you access the first element of the tuple:
>>> 'someword' in (b'blabla someword\blabla\n', None)[0]
True
So you need to replace output with output[0] to make your code work.
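Regarding the edit: the TypeError comes from Python 3, where communicate() returns bytes for piped streams, so a str can't be searched for inside output[0] directly. A minimal sketch of the decode() route the asker guessed at (assuming the output is plain ASCII/UTF-8 text):
output = (b'blabla someword\blabla\n', None)   # the tuple communicate() returned
if 'someword' in output[0].decode():           # decode bytes -> str before the 'in' check
    print('found it')
# or keep everything as bytes and compare against a bytes literal instead:
if b'someword' in output[0]:
    print('found it')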

communicate() gives you a (stdout, stderr) tuple in output, so you need to check if 'someword' in output[0]:
Or better yet:
proc = subprocess.Popen(["./runCommand.sh" + " -i " + ip + " -c " + cmd], stdout=subprocess.PIPE, shell=True)
output, _ = proc.communicate() # or output, err = proc.communicate()
p_status = proc.wait()
if 'someword' in output:
    #dosomething
Always check the docs:
In [7]: subprocess.Popen.communicate?
Signature: subprocess.Popen.communicate(self, input=None)
Docstring:
Interact with process: Send data to stdin. Read data from
stdout and stderr, until end-of-file is reached. Wait for
process to terminate. The optional input argument should be a
string to be sent to the child process, or None, if no data
should be sent to the child.
communicate() returns a tuple (stdout, stderr). <<<---
File: /usr/lib/python2.7/subprocess.py
Type: instancemethod
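On Python 3 there is also the option (not covered in the answer above) of asking Popen for text output up front via universal_newlines=True, so communicate() returns str and the 'in' check works without any decoding. A sketch, with placeholder values for the ip and cmd variables from the question:
import subprocess

ip = '10.0.0.1'      # placeholder; use the real ip from your code
cmd = 'status'       # placeholder; use the real cmd from your code
proc = subprocess.Popen('./runCommand.sh -i ' + ip + ' -c ' + cmd,
                        stdout=subprocess.PIPE, shell=True,
                        universal_newlines=True)  # stdout is decoded to str
output, _ = proc.communicate()  # communicate() already waits; no separate wait() needed
if 'someword' in output:
    pass  # do something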

Related

subprocess.Popen for loop output

Code:
with open('Client.txt', 'r') as Client_Name:
    for Client in Client_Name:
        out = subprocess.Popen(['script.sh', '-byclient', Client], stdout=subprocess.PIPE)
        outputstring = out.communicate()
        print(outputstring)
This code doesn't give the output. I want to pass the client names in the Client.txt file to script.sh. Any suggestions, please?
Your code to pass the name to the script seems correct, but communicate actually returns a tuple, (stdout_data, stderr_data).
You probably also want to pass universal_newlines=True to Popen to decode the output. Try this:
out = subprocess.Popen(['script.sh', '-byclient', Client],
                       stdout=subprocess.PIPE,
                       universal_newlines=True)
stdout_data, stderr_data = out.communicate()  # communicate() waits until the script has finished
print(stdout_data)
Or, if you want to print the output in real time:
out = subprocess.Popen(['script.sh', '-byclient', Client],
                       stdout=subprocess.PIPE,
                       universal_newlines=True)
for line in out.stdout:
    print(line)
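One detail worth adding (my own note, not part of the answer above): iterating over a file yields each line with its trailing newline attached, so Client is passed to script.sh as 'name\n'. A sketch that strips it first, keeping the script.sh call from the question:
import subprocess

with open('Client.txt', 'r') as client_file:
    for client in client_file:
        client = client.strip()    # drop the trailing newline (and surrounding whitespace)
        if not client:
            continue               # skip blank lines
        out = subprocess.Popen(['script.sh', '-byclient', client],
                               stdout=subprocess.PIPE,
                               universal_newlines=True)
        stdout_data, _ = out.communicate()
        print(stdout_data)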

Store lines of text that result from shell command

Here is the code related to this section of the code:
command1 = '/usr/local/GMT5SAR/bin/ALOS_baseline ' + str(master_file) + ' ' + str(master_file)
print command1
p1 = Popen(command1, shell=True, stdout=PIPE)
out,err = p1.communicate()
print out
My command is working properly. Here's a screenshot of my console.
I need to store the lines that say lon_tie_point ..... and lat_tie_point ...... The issue I'm running into is that those lines aren't included in out, which is what I'm printing. How can I go about doing this?
It seems that the lines containing the information you need are being printed on stderr instead of stdout. From the subprocess documentation:
subprocess.STDOUT
Special value that can be used as the stderr argument to Popen and indicates that standard error should go into the same handle as standard output.
Based on this, I think the following might work:
command1 = '/usr/local/GMT5SAR/bin/ALOS_baseline ' + str(master_file) + ' ' + str(master_file)
print command1
p1 = Popen(command1, shell=True, stdout=PIPE, stderr=STDOUT)
out,err = p1.communicate()
print out
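If you would rather keep stderr separate from stdout instead of merging the two, you can pipe stderr on its own and filter the tie-point lines out of it. A sketch under that assumption (master_file is a placeholder here, in the question it is defined earlier, and the lines are assumed to literally start with lon_tie_point / lat_tie_point):
from subprocess import Popen, PIPE

master_file = 'master.PRM'   # placeholder filename; comes from earlier code in the question
command1 = '/usr/local/GMT5SAR/bin/ALOS_baseline ' + str(master_file) + ' ' + str(master_file)
p1 = Popen(command1, shell=True, stdout=PIPE, stderr=PIPE, universal_newlines=True)
out, err = p1.communicate()

# keep only the tie-point lines from stderr
tie_points = [line for line in err.splitlines()
              if line.startswith(('lon_tie_point', 'lat_tie_point'))]
for line in tie_points:
    print(line)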

Added characters to string when exiting subprocess

I am calling a subprocess and returning a string if there is an error.
Code example:
When calling the process:
def read_plan_with_break():
    comand = " python script.py "
    proc = subprocess.Popen(comand.split(), shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    if proc.wait() != 0:
        output, err = proc.communicate()
        print(err)
        return "Error in subprocess"
    return True
when exiting the subprocess:
import sys

def fatal_error():
    print("Some message", file=sys.stderr)
    exit(1)
My problem is that the stderr output is : b'Some message\r\n'
I can erase the \r\n with strip, but I have no idea why there is a b at the beginning and quotes at the start and the end.
Does anyone know why this occurs?
EDIT:
I have tried err.split()[2:-1] to get rid of the b' but it cuts off the start of the Some message
If I get a down-vote, please explain so I can improve and make better questions in the future
err is a bytestring; you should decode it first with err.decode(), which returns the string.
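A minimal sketch of that, using the exact value shown in the question:
err = b'Some message\r\n'       # what communicate() returned for stderr in the question
message = err.decode().strip()  # bytes -> str, then drop the trailing \r\n
print(message)                  # prints: Some message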

Get the output of multiple commands from subprocess.Popen

I am trying to run a command, get its output, then later run another command in the same environment (say, if I set an environment variable in the first command, I want it to be available to the second command). I tried this:
import subprocess
process = subprocess.Popen("/bin/bash", shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE);
process.stdin.write("export MyVar=\"Test\"\n")
process.stdin.write("echo $MyVar\n")
process.stdin.flush()
stdout, stderr = process.communicate()
print "stdout: " + str(stdout)
# Do it again
process.stdin.write("echo $MyVar\n")
process.stdin.flush()
stdout, stderr = process.communicate()
print "stdout: " + str(stdout)
but communicate() reads until the end, so this is not a valid technique. (I get this:)
stdout: Test
Traceback (most recent call last):
File "./MultipleCommands.py", line 15, in <module>
process.stdin.write("echo $MyVar\n")
ValueError: I/O operation on closed file
I have seen this: https://stackoverflow.com/a/15654218/284529 , but it doesn't give a working example of how to do what it proposes. Can anyone demonstrate how to do this?
I have also seen other techniques that involve constantly checking for output in a loop, but this doesn't fit the "get the output of a command" mentality - it is just treating it like a stream.
To get the output of multiple commands, just combine them into a single script:
#!/usr/bin/env python
import subprocess
import sys
output = subprocess.check_output("""
export MyVar="Test"
echo $MyVar
echo ${MyVar/est/ick}
""", shell=True, executable='/bin/bash', universal_newlines=True)
sys.stdout.write(output)
Output
Test
Tick
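For completeness, a Python 3 flavour of the same idea (my own variation, not part of the answer above), using subprocess.run with text output; the commands still run in one bash invocation so the exported variable is visible to the later echo:
import subprocess

result = subprocess.run(
    """
    export MyVar="Test"
    echo $MyVar
    echo ${MyVar/est/ick}
    """,
    shell=True, executable='/bin/bash',
    stdout=subprocess.PIPE, universal_newlines=True)
print(result.stdout, end='')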
communicate() works by detecting that the subprocess has ended, but when there is an intermediate process (bash) in between, you have to signal manually when your sub-subprocess (the command run inside bash) has finished.
As for the rest, the simplest approach is to just emit a marker line. However, I'm sorry to disappoint you here, but polling (i.e. constantly checking in a loop) is actually the only sane option. If you don't like the loop, you could "hide" it away in a function.
import subprocess
import time

def readlines_upto(stream, until="### DONE ###"):
    while True:
        line = stream.readline()
        if line is None:
            time.sleep(0.1)
            continue
        if line.rstrip() == until:
            break
        yield line

process = subprocess.Popen("/bin/bash", shell=True,
                           stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
process.stdin.write("export MyVar=\"Test\"\n")
process.stdin.write("echo $MyVar\n")
process.stdin.write("echo '### DONE ###'\n")
process.stdin.flush()

# Note: I don't read stderr here, so if the subprocess outputs too much there,
# it'll fill the pipe and get stuck. If you don't need stderr data, don't
# redirect it to a pipe at all. If you need it, make readlines_upto read two pipes.
stdout = "".join(line for line in readlines_upto(process.stdout))
print "stdout: " + stdout

# Do it again
process.stdin.write("echo $MyVar\n")
process.stdin.flush()
stdout, stderr = process.communicate()
print "stdout: " + str(stdout)
The communicate and wait methods of Popen objects close the PIPE after the process returns. If you want to stay in communication with the process, try something like this:
import subprocess
process = subprocess.Popen("/bin/bash", shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE);
process.stdin.write("export MyVar=\"Test\"\n")
process.stdin.write("echo $MyVar\n")
process.stdin.flush()
process.stdout.readline()
process.stdin.write("echo $MyVar\n")
process.stdin.flush()
stdout, stderr = process.communicate()
print "stdout: " + str(stdout)
I think you misunderstand communicate...
Take a look at this link:
http://docs.python.org/library/subprocess.html#subprocess.Popen.communicate
communicate sends a string to the other process and then waits for it to finish... (like you said, it waits for the EOF while listening to stdout & stderr)
What you should do instead is:
proc.stdin.write('message')
# ...figure out how long or why you need to wait...
proc.stdin.write('message2')
(and if you need to get the stdout or stderr you'd use proc.stdout or proc.stderr)
As per the manual:
Popen.communicate(input=None)
Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to
terminate. [...]
You need to read from the pipe instead:
import os
stdout = os.read(process.stdout.fileno(), 1024)
print "stdout: " + stdout
If there's no data waiting, it will hang there forever or until data is ready to be read. You should use the select system call to prevent that:
import select
import os
try:
    i, o, e = select.select([process.stdout], [], [], 5) # 5 second timeout
    stdout = os.read(i[0].fileno(), 1024)
except IndexError:
    # nothing was written to the pipe in 5 seconds
    stdout = ""
print "stdout: " + stdout
If you want to fetch multiple writes, to avoid race conditions, you'll have to put it in a loop:
stdout = ""
while True:
try:
i,o,e = select.select([process.stdout], [], [], 5) # 5 second timeout
stdout += os.read(i[0].fileno(), 1024)
except IndexError:
# nothing was written to the pipe in 5 seconds, we're done here
break
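If you need this in several places, the polling loop can be wrapped in a small helper function (a hypothetical name, written for Python 3, where os.read returns bytes):
import os
import select

def read_available(pipe, timeout=5.0, chunk_size=1024):
    """Read whatever the pipe has to offer, giving up after `timeout` seconds of silence."""
    data = b''
    while True:
        ready, _, _ = select.select([pipe], [], [], timeout)
        if not ready:
            break                       # nothing arrived within the timeout
        chunk = os.read(pipe.fileno(), chunk_size)
        if not chunk:
            break                       # EOF: the process closed its end of the pipe
        data += chunk
    return data.decode()

# usage with the bash process from the examples above:
# stdout = read_available(process.stdout)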

python subprocess proc.stderr.read() introduce extra lines?

I want to run some command and grab whatever is output to stderr. I have two versions of function that does this
version 1.
def Getstatusoutput(cmd):
    """Return (status, output) of executing cmd in a shell."""
    import sys
    mswindows = (sys.platform == "win32")
    import os
    if not mswindows:
        cmd = '{ ' + cmd + '; }'
    pipe = os.popen(cmd + ' 2>&1', 'r')
    text = pipe.read()
    sts = pipe.close()
    if sts is None: sts = 0
    if text[-1:] == '\n': text = text[:-1]
    return sts, text
and
version 2
def Getstatusoutput2(cmd):
    proc = subprocess.Popen(cmd, stderr=subprocess.PIPE, stdout=subprocess.PIPE)
    return_code = proc.wait()
    return return_code, proc.stdout.read(), proc.stderr.read()
The first version prints stderr output as I expect. The second version prints one blank line after every line. I suspect this is due to the text[-1:] line in version 1... but I can't seem to do something similar in the second version. Can anybody explain what I need to do to make the second function generate the same output as the first one, without extra lines in between (and at the very end)?
Update: Here's how I am printing the output:
status, output, error = Getstatusoutput2(cmd)
s, oldOutput = Getstatusoutput(cmd)
print "oldOutput = <<%s>>" % (oldOutput)
print "error = <<%s>>" % (error)
You can add .strip():
def Getstatusoutput2(cmd):
    proc = subprocess.Popen(cmd, stderr=subprocess.PIPE, stdout=subprocess.PIPE)
    return_code = proc.wait()
    return return_code, proc.stdout.read().strip(), proc.stderr.read().strip()
Python string Docs:
string.strip(s[, chars])
Return a copy of the string with leading and
trailing characters removed. If chars is omitted or None, whitespace
characters are removed. If given and not None, chars must be a string;
the characters in the string will be stripped from the both ends of
the string this method is called on.
string.whitespace
A string containing all characters that are
considered whitespace. On most systems this includes the characters
space, tab, linefeed, return, formfeed, and vertical tab.
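A quick usage check of the strip-based version (assuming a Unix echo on the PATH; on Python 3 the pipes yield bytes, on Python 2 plain str):
status, out, err = Getstatusoutput2(['echo', 'hello'])
# out is 'hello' (b'hello' on Python 3): the trailing newline is already stripped,
# so printing it no longer produces an extra blank line.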
You could use subprocess.check_output(cmd, stderr=STDOUT) to capture all output.
To capture stdout and stderr separately, you could use .communicate():
stdout, stderr = Popen(cmd, stdout=PIPE, stderr=PIPE).communicate()
To get all lines without a newline character at the end you could call stderr.splitlines().
To avoid printing an additional newline when one is already present, add ',' after the variable in a print statement:
print line,
Or if you use print() function:
print(line, end='')
Note
Your Getstatusoutput2() will block if cmd produces enough output; use the solutions above instead:
>>> len(Getstatusoutput2(['python', '-c',"""print "*"*2**6"""])[1])
65
>>> len(Getstatusoutput2(['python', '-c',"""print "*"*2**16"""])[1])
Popen.wait() documentation:
Wait for child process to terminate. Set and return returncode attribute.
Warning: This will deadlock when using stdout=PIPE and/or stderr=PIPE and the child process generates enough output to a pipe such that it blocks waiting for the OS pipe buffer to accept more data. Use communicate() to avoid that.
Related: Use communicate() rather than stdin.write(), stdout.read() or stderr.read()
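A minimal sketch of the communicate()-based replacement suggested above (my own rewrite with a hypothetical name, not the answerer's code), which avoids both the deadlock and the trailing-newline problem:
import subprocess

def getstatusoutput3(cmd):
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            universal_newlines=True)
    out, err = proc.communicate()   # drains both pipes, then waits: no deadlock
    return proc.returncode, out.rstrip('\n'), err.rstrip('\n')

status, out, err = getstatusoutput3(['python', '-c', 'print("*" * 2**16)'])
print(len(out))   # 65536 -- this much output would make Getstatusoutput2 block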
