python how to use subprocess pipe with linux shell - python

I have a Python script that searches for logs; it continuously outputs the logs it finds, and I want to use a Linux pipe to filter the desired output, for example:
$python logsearch.py | grep timeout
The problem is that downstream commands such as grep, sort, and wc are blocked until logsearch.py finishes, while logsearch.py outputs its results continuously.
sample logsearch.py:
import subprocess

p = subprocess.Popen("ping google.com", shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
for line in p.stdout:
    print(line)
UPDATE:
Figured it out: just change the stdout in subprocess to sys.stdout, and Python will handle the pipe for you.
p = subprocess.Popen("ping -c 5 google.com", shell=True, stdout=sys.stdout)
Thanks for all of your help!
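A minimal, self-contained sketch of why this works: when the child's stdout is not redirected, it inherits the parent's stdout, so a shell pipe applied to the parent (`python logsearch.py | grep timeout`) filters the child's output as it appears. A short Python child stands in for `ping` here so the example runs anywhere.

```python
import subprocess
import sys

# The child inherits the parent's stdout (no stdout=PIPE), so whatever
# the parent's stdout is connected to -- terminal or shell pipe -- the
# child writes to it directly, line by line.
p = subprocess.Popen([sys.executable, "-c", "print('log line')"])
rc = p.wait()
```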

And why use grep? Why not do all the filtering in Python?
from subprocess import Popen, PIPE
# universal_newlines=True so p.stdout yields str rather than bytes on Python 3
p = Popen(['ping', 'google.com'], shell=False, stdin=PIPE, stdout=PIPE,
          universal_newlines=True)
for line in p.stdout:
    if 'timeout' in line.split():
        # Process the error
        print("Timeout error!!")
    else:
        print(line)
UPDATE:
I changed the Popen line as recommended by tripleee. Pros and cons are discussed in "Actual meaning of 'shell=True' in subprocess".

Related

How to execute a Linux utility and answer its reply in python?

When I execute a utility, blab, and it asks yes or no for confirmation, what can I do?
The code is as below:
proc = subprocess.Popen("blab delete {}".format(num), shell=True,
                        stderr=subprocess.STDOUT, stdin=subprocess.STDIN)
stdout_value = proc.communicate()[0]
Popen.communicate() documentation:
If you want to send data to the process's stdin from Python, create the Popen object with stdin=PIPE. Similarly, to get anything other than None in the result tuple, you need to give stdout=PIPE and/or stderr=PIPE too.
from subprocess import PIPE, Popen, STDOUT
process = Popen("blab delete {}".format(num), shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT)
output = process.communicate(input=b'yes\n')[0]  # trailing newline so a line-based prompt returns
output = output.decode('utf-8')

How to use subprocess.Popen with built-in command on Windows

In my old python script, I use the following code to show the result for Windows cmd command:
print(os.popen("dir c:\\").read())
As the Python 2.7 documentation says, os.popen is obsolete and subprocess is recommended. I followed the documentation:
result = subprocess.Popen("dir c:\\").stdout
And I got error message:
WindowsError: [Error 2] The system cannot find the file specified
Can you tell me the correct way to use the subprocess module?
You should call subprocess.Popen with shell=True, as below:
import subprocess
result = subprocess.Popen("dir c:", shell=True,
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, error = result.communicate()
print(output)
More info on subprocess module.
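On Python 3.5+, the same pattern is often written with subprocess.run, which wraps Popen plus communicate in one call. A minimal sketch (using the portable shell builtin echo here so it runs anywhere; the question's "dir c:\\" works the same way on Windows; text=True needs Python 3.7+):

```python
import subprocess

# shell=True is what makes shell built-ins like "dir" (Windows) or
# "echo" work; text=True decodes stdout/stderr to str.
result = subprocess.run("echo hello", shell=True,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        text=True)
print(result.stdout.strip())
```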
This works in Python 3.7:
from subprocess import Popen, PIPE
args = ["echo", "realtime abc"]
p = Popen(args, stdout=PIPE, stderr=PIPE, shell=True, text=True)
for line in p.stdout:
    print("O=:", line)
Output:
O=: "realtime abc"

How to implement a complex process pipe in Python 2.6?

I'd like to have the Python (2.6, sorry!) equivalent of this shell pipe:
$ longrunningprocess | sometextfilter | gzip -c
That is, I have to call a binary longrunningprocess, filter its output through sometextfilter, and gzip the result.
I know how to use subprocess pipes, but I need the output of the pipe chunkwise (probably using yield) and not all at once. E.g. this
https://security.openstack.org/guidelines/dg_avoid-shell-true.html
works only for getting all output at once.
Note, that both longrunningprocess and sometextfilter are external programs, that cannot be replaced with Python functions.
Thanks in advance for any hint or example!
Again, I thought it would be difficult, while Python is (supposed to be) easy. Simply chaining the subprocesses works, it seems:
import subprocess

def get_lines():
    lrp = subprocess.Popen(["longrunningprocess"],
                           stdout=subprocess.PIPE,
                           close_fds=True)
    stf = subprocess.Popen(["sometextfilter"],
                           stdin=lrp.stdout,
                           stdout=subprocess.PIPE,
                           bufsize=1,
                           close_fds=True)
    for l in iter(stf.stdout.readline, ''):
        yield l
    lrp.stdout.close()
    stf.stdout.close()
    stf.stdin.close()
    stf.wait()
    lrp.wait()
[Changes by J.F. Sebastian applied. Thanks!]
Then I can use Pythons gzip for compression.
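A hypothetical sketch of that last step: compressing the yielded lines with the gzip module instead of piping through "gzip -c". A plain list stands in for the get_lines() generator above so the example is self-contained.

```python
import gzip
import io

# Any iterable of text lines works here; get_lines() from above would
# be used in practice. We compress into an in-memory buffer.
lines = ["first line\n", "second line\n"]
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
    for line in lines:
        gz.write(line.encode())
compressed = buf.getvalue()
restored = gzip.decompress(compressed).decode()  # round-trip check
```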
The shell syntax is optimized for one-liners, use it:
#!/usr/bin/env python2
import sys
from subprocess import Popen, PIPE

LINE_BUFFERED = 1
ON_POSIX = 'posix' in sys.builtin_module_names

p = Popen('longrunningprocess | sometextfilter', shell=True,
          stdout=PIPE, bufsize=LINE_BUFFERED, close_fds=ON_POSIX)
with p.stdout:
    for line in iter(p.stdout.readline, ''):
        print line,  # do something with the line
p.wait()
How do I use subprocess.Popen to connect multiple processes by pipes?
Python: read streaming input from subprocess.communicate()
If you want to emulate the pipeline manually:
#!/usr/bin/env python2
import sys
from subprocess import Popen, PIPE

LINE_BUFFERED = 1
ON_POSIX = 'posix' in sys.builtin_module_names

sometextfilter = Popen('sometextfilter', stdin=PIPE, stdout=PIPE,
                       bufsize=LINE_BUFFERED, close_fds=ON_POSIX)
longrunningprocess = Popen('longrunningprocess', stdout=sometextfilter.stdin,
                           close_fds=ON_POSIX)
with sometextfilter.stdin, sometextfilter.stdout as pipe:
    for line in iter(pipe.readline, ''):
        print line,  # do something with the line
sometextfilter.wait()
longrunningprocess.wait()

Subprocess arp -a yielding less results than cmd arp -a

The code below yields fewer IP entries than running arp -a in cmd.
arpA_req = Popen('arp -a', stdin=PIPE, stdout=PIPE, stderr=STDOUT)
line = arpA_req.stdout.readline().decode('ascii').rsplit()
print(line)
Does anyone know why this may be? And if it's a common issue, how can I obtain a fuller ip list?
As wim pointed out, readline() only reads one line.
To read all the output, one way is to call communicate:
import subprocess
PIPE, STDOUT = subprocess.PIPE, subprocess.STDOUT
arpA_req = subprocess.Popen(
    ['arp', '-a'], stdin=PIPE, stdout=PIPE, stderr=STDOUT)
out, err = arpA_req.communicate()
print(out)
Or, to process one line at a time, a standard idiom is to use iter(func, stop_value):
for line in iter(arpA_req.stdout.readline, b''):  # b'' sentinel: stdout yields bytes in Python 3
    print(line)
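A self-contained sketch of the iter(func, sentinel) idiom, with a short Python child standing in for "arp -a" so it runs anywhere. The sentinel must match what readline() returns at EOF: b'' for a bytes stream, '' if you pass universal_newlines=True.

```python
import subprocess
import sys

# The child emits two lines; iter() keeps calling readline() until the
# sentinel b'' (EOF on a bytes stream) is returned.
p = subprocess.Popen([sys.executable, "-c", "print('a'); print('b')"],
                     stdout=subprocess.PIPE)
collected = [line.decode().strip() for line in iter(p.stdout.readline, b'')]
p.wait()
print(collected)
```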

catching stdout in realtime from subprocess

I want to subprocess.Popen() rsync.exe in Windows, and print the stdout in Python.
My code works, but it doesn't catch the progress until a file transfer is done! I want to print the progress for each file in real time.
Using Python 3.1 now since I heard it should be better at handling IO.
import subprocess, time, os, sys
cmd = "rsync.exe -vaz -P source/ dest/"
p, line = True, 'start'

p = subprocess.Popen(cmd,
                     shell=True,
                     bufsize=64,
                     stdin=subprocess.PIPE,
                     stderr=subprocess.PIPE,
                     stdout=subprocess.PIPE)

for line in p.stdout:
    print(">>> " + str(line.rstrip()))
    p.stdout.flush()
Some rules of thumb for subprocess.
Never use shell=True. It needlessly invokes an extra shell process to call your program.
When calling processes, arguments are passed around as lists. sys.argv in python is a list, and so is argv in C. So you pass a list to Popen to call subprocesses, not a string.
Don't redirect stderr to a PIPE when you're not reading it.
Don't redirect stdin when you're not writing to it.
Example:
import subprocess

cmd = ["rsync.exe", "-vaz", "-P", "source/", "dest/"]
p = subprocess.Popen(cmd,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
for line in iter(p.stdout.readline, b''):
    print(">>> " + line.rstrip().decode())  # decode: stdout yields bytes in Python 3
That said, it is probable that rsync buffers its output when it detects that it is connected to a pipe instead of a terminal. This is the default behavior: when connected to a pipe, programs must explicitly flush stdout for real-time results; otherwise the standard C library will buffer.
To test for that, try running this instead:
cmd = [sys.executable, 'test_out.py']
and create a test_out.py file with the contents:
import sys
import time

print("Hello")
sys.stdout.flush()
time.sleep(10)
print("World")
Executing that subprocess should give you "Hello" and wait 10 seconds before giving "World". If that happens with the python code above and not with rsync, that means rsync itself is buffering output, so you are out of luck.
A solution would be to connect directly to a pty, using something like pexpect.
I know this is an old topic, but there is a solution now. Call rsync with the option --outbuf=L. Example:
cmd = ['rsync', '-arzv', '--backup', '--outbuf=L', 'source/', 'dest']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
for line in iter(p.stdout.readline, b''):
    print('>>> {}'.format(line.rstrip().decode()))
Depending on the use case, you might also want to disable the buffering in the subprocess itself.
If the subprocess will be a Python process, you could do this before the call:
os.environ["PYTHONUNBUFFERED"] = "1"
Or alternatively pass this in the env argument to Popen.
Otherwise, if you are on Linux/Unix, you can use the stdbuf tool. E.g. like:
cmd = ["stdbuf", "-oL"] + cmd
See also here about stdbuf or other options.
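A sketch of the env-argument route described above: the child (a hypothetical Python script, simulated with -c here) runs with PYTHONUNBUFFERED=1 in its environment, so its stdout is flushed per write instead of being block-buffered into the pipe.

```python
import os
import subprocess
import sys

# Copy the current environment and add the variable; passing env=
# replaces the child's whole environment, so start from os.environ.
env = dict(os.environ, PYTHONUNBUFFERED="1")
p = subprocess.Popen(
    [sys.executable, "-c", "print('progress 1'); print('progress 2')"],
    stdout=subprocess.PIPE, universal_newlines=True, env=env)
lines = [line.rstrip() for line in p.stdout]
p.wait()
```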
On Linux, I had the same problem of getting rid of the buffering. I finally used "stdbuf -o0" (or unbuffer from expect) to get rid of the PIPE buffering.
proc = Popen(['stdbuf', '-o0'] + cmd, stdout=PIPE, stderr=PIPE)
stdout = proc.stdout
I could then use select.select on stdout.
See also https://unix.stackexchange.com/questions/25372/
for line in p.stdout:
...
always blocks until the next line-feed.
For "real-time" behaviour you have to do something like this:
while True:
    inchar = p.stdout.read(1)
    if inchar:  # neither empty string nor None
        print(str(inchar), end='')  # or end=None to flush immediately
    else:
        print('')  # flush for implicit line-buffering
        break
The while-loop is left when the child process closes its stdout or exits.
read()/read(-1) would block until the child process closed its stdout or exited.
Your problem is:
for line in p.stdout:
print(">>> " + str(line.rstrip()))
p.stdout.flush()
the iterator itself has extra buffering.
Try it like this:
while True:
    line = p.stdout.readline()
    if not line:
        break
    print(line)
You cannot get stdout to print unbuffered to a pipe (unless you can rewrite the program that prints to stdout), so here is my solution:
Redirect stdout to stderr, which is not buffered. '&lt;cmd&gt; 1>&2' should do it. Open the process as follows: myproc = subprocess.Popen('&lt;cmd&gt; 1>&2', shell=True, stderr=subprocess.PIPE) (shell=True is needed for the redirection to be interpreted).
You cannot distinguish from stdout or stderr, but you get all output immediately.
Hope this helps anyone tackling this problem.
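A self-contained sketch of this redirect trick, with a hypothetical Python one-liner standing in for &lt;cmd&gt;: the shell fragment "1>&2" merges the child's stdout into stderr, and the parent reads stderr.

```python
import subprocess
import sys

# shell=True so the "1>&2" redirection is interpreted by the shell;
# the child's stdout then arrives on the stderr pipe.
cmd = '{} -c "print(\'hi\')" 1>&2'.format(sys.executable)
proc = subprocess.Popen(cmd, shell=True, stderr=subprocess.PIPE,
                        universal_newlines=True)
captured = proc.stderr.read()
proc.wait()
```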
To avoid buffering of output you might want to try pexpect:
import pexpect

child = pexpect.spawn(launchcmd, args, timeout=None)
while True:
    try:
        child.expect('\n')
        print(child.before)
    except pexpect.EOF:
        break
PS : I know this question is pretty old, still providing the solution which worked for me.
PPS: got this answer from another question
p = subprocess.Popen(command,
                     bufsize=0,
                     universal_newlines=True)
I am writing a GUI for rsync in Python and had the same problems. The problem troubled me for several days until I found this in the Python documentation:
If universal_newlines is True, the file objects stdout and stderr are opened as text files in universal newlines mode. Lines may be terminated by any of '\n', the Unix end-of-line convention, '\r', the old Macintosh convention or '\r\n', the Windows convention. All of these external representations are seen as '\n' by the Python program.
It seems that rsync outputs '\r' while a transfer is in progress.
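A self-contained demonstration of the quoted behaviour: with universal_newlines=True, the '\r' that rsync uses to rewrite its progress line is seen as a line ending by the Python program. A hypothetical child emitting carriage-return progress updates stands in for rsync here.

```python
import subprocess
import sys

# The child writes '50%\r100%\r\n'; in universal-newlines text mode the
# lone '\r' and the '\r\n' are both translated to '\n' on read.
child = [sys.executable, "-c",
         "import sys; sys.stdout.write('50%\\r100%\\r\\n')"]
p = subprocess.Popen(child, stdout=subprocess.PIPE, universal_newlines=True)
progress = p.stdout.read().splitlines()
p.wait()
```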
If you run something like this in a thread and save ffmpeg_time in an attribute you can access from outside, it works very nicely, for example with threading in a tkinter GUI.
input = 'path/input_file.mp4'
output = 'path/output_file.mp4'
command = "ffmpeg -y -v quiet -stats -i \"" + str(input) + "\" -metadata title=\"#alaa_sanatisharif\" -preset ultrafast -vcodec copy -r 50 -vsync 1 -async 1 \"" + output + "\""
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True, shell=True)
for line in process.stdout:
    reg = re.search(r'\d\d:\d\d:\d\d', line)
    ffmpeg_time = reg.group(0) if reg else ''
    print(ffmpeg_time)
Change the stdout from the rsync process to be unbuffered.
p = subprocess.Popen(cmd,
                     shell=True,
                     bufsize=0,  # 0=unbuffered, 1=line-buffered, else buffer-size
                     stdin=subprocess.PIPE,
                     stderr=subprocess.PIPE,
                     stdout=subprocess.PIPE)
I've noticed that there is no mention of using a temporary file as an intermediary. The following gets around the buffering issues by outputting to a temporary file, and allows you to parse the data coming from rsync without connecting to a pty. I tested the following on a Linux box; the output of rsync tends to differ across platforms, so the regular expressions to parse the output may vary:
import subprocess, time, tempfile, re
pipe_output, file_name = tempfile.mkstemp()  # mkstemp, not TemporaryFile: we need both the fd and the name
cmd = ["rsync", "-vaz", "-P", "/src/", "/dest"]
p = subprocess.Popen(cmd, stdout=pipe_output,
                     stderr=subprocess.STDOUT)
while p.poll() is None:
    # p.poll() returns None while the program is still running;
    # sleep for 1 second
    time.sleep(1)
    last_line = open(file_name).readlines()
    # it's possible that it hasn't output yet, so continue
    if len(last_line) == 0:
        continue
    last_line = last_line[-1]
    # Matching "[bytes downloaded] number% [speed] number:number:number"
    match_it = re.match(".* ([0-9]*)%.* ([0-9]*:[0-9]*:[0-9]*).*", last_line)
    if not match_it:
        continue
    # in this case, the percentage is stored in match_it.group(1),
    # the time in match_it.group(2). We could do something with it here...
In Python 3, here's a solution, which takes a command off the command line and delivers real-time nicely decoded strings as they are received.
Receiver (receiver.py):
import subprocess
import sys
cmd = sys.argv[1:]
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
for line in p.stdout:
print("received: {}".format(line.rstrip().decode("utf-8")))
Example simple program that could generate real-time output (dummy_out.py):
import time
import sys
for i in range(5):
    print("hello {}".format(i))
    sys.stdout.flush()
    time.sleep(1)
Output:
$python receiver.py python dummy_out.py
received: hello 0
received: hello 1
received: hello 2
received: hello 3
received: hello 4
