I'm just learning Python but have about 16 years' experience with Perl and PHP.
I'm trying to get the output of ngrep and write it to a log file using Python while also tailing the log file. I've seen some examples online, but some seem old and outdated, and others use shell=True, which is discouraged.
In Perl I just use something similar to the following:
#!/usr/bin/perl
open(NGFH, "ngrep -iW byline $filter |");   # pipe open so we read ngrep's output
while ($line = <NGFH>) {
    open(LOG, ">> /path/to/file.log");
    # highlighting, filtering, other subroutine calls
    print LOG $line;
}
I've gotten tail to work, but ngrep doesn't. I'd like to be able to run this indefinitely and write the stream from ngrep to the log file after filtering. I couldn't get the output from ngrep to show up on stdout, so that's as far as I've gotten. I was expecting to see the tailed file update as the log file was updated and to see the output from ngrep. For now I was just using bash to run the following:
echo "." >> /path/to/ngrep.log
Thanks!
Here's what I got so far...
Updated
This seems to work now. I wouldn't know how to improve on it though.
import subprocess
import select
import re

# Python 2: open the log unbuffered so tail -F sees writes immediately.
log = open('/path/to/ngrep.log', 'a+', 0)
print log.name

# Start ngrep and register its stdout with a poll object.
n = subprocess.Popen(['ngrep', '-iW', 'byline'],
                     stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
p = select.poll()
p.register(n.stdout)

# Tail the other log file and register its stdout as well.
f = subprocess.Popen(['tail', '-F', '-n', '0', '/path/to/tailme.log'],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p2 = select.poll()
p2.register(f.stdout)

# Compile the pattern once; the dots are escaped so they match literal dots only.
pattern = re.compile(r'(8\.8\.(4\.4|8\.8)|192\.168\.[0-9]{1,3}\.[0-9]{1,3})')

def srtrepl(match):
    if match.group(0) == 'x.x.x.x':
        pass  # do something
    if match.group(0) == 'x.x.y.y':
        pass  # do something else
    return '\033[92m' + match.group(0) + '\033[0m'

while True:
    if p.poll(1):
        line = n.stdout.readline()
        print pattern.sub(srtrepl, line)
        log.write(line)  # write the line we already read, not a second readline()
    if p2.poll(1):
        print f.stdout.readline().rstrip('\n')
To emulate your Perl code in Python:
#!/usr/bin/env python3
from subprocess import Popen, PIPE
with Popen("ngrep -iW byline".split() + [filter_], stdout=PIPE) as process, \
open('/path/to/file.log', 'ab') as log_file:
for line in process.stdout: # read b'\n'-separated lines
# highlighting, filtering, other function calls
log_file.write(line)
It starts an ngrep process, passing it the filter_ variable, and appends the output to the log file while letting you modify it in Python. See Python: read streaming input from subprocess.communicate(). There could be buffering issues: check whether ngrep supports a --line-buffered option like grep does. If you want to tail file.log, pass buffering=1 to open() to enable line buffering (only usable in text mode), or call log_file.flush() after log_file.write(line).
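For example, a minimal sketch of the same loop with an explicit flush after every write, so that a tail -F on file.log picks lines up right away (the paths and filter_ are the same placeholders as above):
#!/usr/bin/env python3
from subprocess import Popen, PIPE

with Popen("ngrep -iW byline".split() + [filter_], stdout=PIPE) as process, \
        open('/path/to/file.log', 'ab') as log_file:
    for line in process.stdout:
        log_file.write(line)
        log_file.flush()  # make each line visible to tail immediately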
You could emulate ngrep in pure Python too.
If you want to read the output from several processes concurrently (ngrep and tail in your case), then you need to be able to read the pipes without blocking, e.g., using threads or asyncio.
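For instance, here is a rough sketch of the threads approach, with one reader thread per pipe; the commands and paths are just the placeholders from above:
import subprocess
import threading

def pump(proc, prefix, log_file):
    # Copy each line from proc.stdout into the shared log, tagged with a prefix.
    for line in proc.stdout:
        log_file.write(prefix + line)
        log_file.flush()

with open('/path/to/ngrep.log', 'ab') as log:
    ngrep = subprocess.Popen(['ngrep', '-iW', 'byline'], stdout=subprocess.PIPE)
    tail = subprocess.Popen(['tail', '-F', '-n', '0', '/path/to/tailme.log'],
                            stdout=subprocess.PIPE)
    threads = [threading.Thread(target=pump, args=(ngrep, b'ngrep: ', log)),
               threading.Thread(target=pump, args=(tail, b'tail:  ', log))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()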
Related
I'm working with a piece of scientific software called Chimera. For some of the code downstream of this question, it requires that I use Python 2.7.
I want to call a process, give that process some input, read its output, give it more input based on that, etc.
I've used Popen to open the process, process.stdin.write to pass standard input, but then I've gotten stuck trying to get output while the process is still running. process.communicate() stops the process, process.stdout.readline() seems to keep me in an infinite loop.
Here's a simplified example of what I'd like to do:
Let's say I have a bash script called exampleInput.sh.
#!/bin/bash
# exampleInput.sh
# Read a number from the input
read -p 'Enter a number: ' num
# Multiply the number by 5
ans1=$( expr $num \* 5 )
# Give the user the multiplied number
echo $ans1
# Ask the user whether they want to keep going
read -p 'Based on the previous output, would you like to continue? ' doContinue
if [ $doContinue == "yes" ]
then
echo "Okay, moving on..."
# [...] more code here [...]
else
exit 0
fi
Interacting with this through the command line, I'd run the script, type in "5" and then, if it returned "25", I'd type "yes" and, if not, I would type "no".
I want to run a python script where I pass exampleInput.sh "5" and, if it gives me "25" back, then I pass "yes"
So far, this is as close as I can get:
#!/home/user/miniconda3/bin/python2
# talk_with_example_input.py
import subprocess
process = subprocess.Popen(["./exampleInput.sh"],
stdin = subprocess.PIPE,
stdout = subprocess.PIPE)
process.stdin.write("5")
answer = process.communicate()[0]
if answer == "25":
process.stdin.write("yes")
## I'd like to print the STDOUT here, but the process is already terminated
But that fails of course, because after process.communicate(), my process isn't running anymore.
(Just in case/FYI): Actual problem
Chimera is usually a GUI-based application for examining protein structure. If you run chimera --nogui, it'll open up a prompt and take input.
I often need to know what Chimera outputs before I run my next command. For example, I will often try to generate a protein surface, and if Chimera can't generate one, it doesn't break; it just says so through STDOUT. So, in my Python script, while I'm looping through many proteins to analyze, I need to check STDOUT to know whether to continue the analysis on that protein.
In other use cases, I'll run lots of commands through Chimera to clean up a protein first, and then I'll want to run lots of separate commands to get different pieces of data, and use that data to decide whether to run other commands. I could get the data, close the subprocess, and then run another process, but that would require re-running all of those cleaning up commands each time.
Anyways, those are some of the real-world reasons why I want to be able to push STDIN to a subprocess, read the STDOUT, and still be able to push more STDIN.
Thanks for your time!
You don't need to use process.communicate() in your example.
Simply read and write using process.stdin.write and process.stdout.read. Also make sure to send a newline, otherwise read won't return. And when you read the process's output, you also have to handle the newlines coming from echo.
Note: process.stdout.read will block until EOF.
# talk_with_example_input.py
import subprocess
process = subprocess.Popen(["./exampleInput.sh"],
stdin = subprocess.PIPE,
stdout = subprocess.PIPE)
process.stdin.write("5\n")
stdout = process.stdout.readline()
print(stdout)
if stdout == "25\n":
process.stdin.write("yes\n")
print(process.stdout.readline())
$ python2 test.py
25
Okay, moving on...
Update
When communicating with a program in that way, you have to pay special attention to what the application is actually writing. It is best to analyze the output as a hex dump:
$ chimera --nogui 2>&1 | hexdump -C
Please note that readline [1] only reads to the next newline (\n). In your case you have to call readline at least four times to get that first block of output.
If you just want to read everything up until the subprocess stops printing, you have to read byte by byte and implement a timeout. Sadly, neither read nor readline provides such a timeout mechanism. This is probably because the underlying read syscall [2] (on Linux) does not provide one either.
On Linux we can write a single-threaded read_with_timeout() using poll / select. For an example see [3].
from select import epoll, EPOLLIN
def read_with_timeout(fd, timeout__s):
    """Reads from fd until there is no new data for at least timeout__s seconds.
    This only works on Linux > 2.5.44.
    """
    buf = []
    e = epoll()
    e.register(fd, EPOLLIN)
    while True:
        ret = e.poll(timeout__s)
        if not ret or ret[0][1] != EPOLLIN:
            break
        buf.append(fd.read(1))
    return ''.join(buf)
In case you need a reliable way to read non blocking under Windows and Linux, this answer might be helpful.
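As a rough illustration only (my own sketch of the usual portable pattern, not code taken from that answer): a reader thread feeds a queue, and the main thread polls the queue with a timeout. ./prog.sh is the example script from [3] below.
import subprocess
from threading import Thread
from Queue import Queue, Empty  # "queue" in Python 3

def enqueue_output(pipe, queue):
    # Runs in a background thread: push each line onto the queue as it arrives.
    for line in iter(pipe.readline, b''):
        queue.put(line)
    pipe.close()

proc = subprocess.Popen(["./prog.sh"], stdout=subprocess.PIPE)
q = Queue()
t = Thread(target=enqueue_output, args=(proc.stdout, q))
t.daemon = True  # don't keep the interpreter alive just for this thread
t.start()

try:
    line = q.get(timeout=1.5)  # a read that gives up after 1.5 seconds
except Empty:
    line = None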
[1] from the python 2 docs:
readline(limit=-1)
Read and return one line from the stream. If limit is specified, at most limit bytes will be read.
The line terminator is always b'\n' for binary files; for text files, the newline argument to open() can be used to select the line terminator(s) recognized.
[2] from man 2 read:
#include <unistd.h>
ssize_t read(int fd, void *buf, size_t count);
[3] example
$ tree
.
├── prog.py
└── prog.sh
prog.sh
#!/usr/bin/env bash
for i in $(seq 3); do
echo "${RANDOM}"
sleep 1
done
sleep 3
echo "${RANDOM}"
prog.py
# talk_with_example_input.py
import subprocess
from select import epoll, EPOLLIN
def read_with_timeout(fd, timeout__s):
    """Reads from fd until there is no new data for at least timeout__s seconds.
    This only works on Linux > 2.5.44.
    """
    buf = []
    e = epoll()
    e.register(fd, EPOLLIN)
    while True:
        ret = e.poll(timeout__s)
        if not ret or ret[0][1] != EPOLLIN:
            break
        buf.append(fd.read(1))
    return ''.join(buf)

process = subprocess.Popen(
    ["./prog.sh"],
    stdin = subprocess.PIPE,
    stdout = subprocess.PIPE
)

print(read_with_timeout(process.stdout, 1.5))
print('-----')
print(read_with_timeout(process.stdout, 3))
$ python2 prog.py
6194
14508
11293
-----
10506
I am still fairly new to the Python world and know this should be an easy question to answer. I have a section of a Python script that calls a Perl script. The Perl script is a SOAP service that fetches data from a web page. Everything works great and outputs what I want, but after a bit of trial and error I am confused about how I can capture the data in a Python variable instead of just printing it to the screen like it does now.
Any pointers appreciated!
Thank you,
Pablo
# SOAP SERVICE
# Fetch the perl script that will request the users email.
# This service will return a name, email, and certificate.
var = "soap.pl"
pipe = subprocess.Popen(["perl", "./soap.pl", var], stdin = subprocess.PIPE)
pipe.stdin.write(var)
print "\n"
pipe.stdin.close()
I am not sure what your code aims to do (with var in particular), but here are the basics.
There is the subprocess.check_output() function for this
import subprocess
out = subprocess.check_output(['ls', '-l'])
print out
If your Python is older than 2.7, use Popen with the communicate() method:
import subprocess
proc = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE)
out, err = proc.communicate()
print out
You can instead iterate proc.stdout but it appears that you want all output in one variable.
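(If you did want to process the output line by line instead of collecting it all, iterating proc.stdout would look roughly like this, still with the ls example:)
import subprocess

proc = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE)
for line in proc.stdout:           # one line at a time instead of one big string
    print line.rstrip('\n')
proc.wait()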
In both cases you provide the program's arguments in the list.
Or add stdin if needed
proc = subprocess.Popen(['perl', 'script.pl', 'arg'],
                        stdin = subprocess.PIPE,
                        stdout = subprocess.PIPE)
The purpose of stdin = subprocess.PIPE is to be able to feed the STDIN of the process that is started, as it runs. Then you would do proc.stdin.write(string) and this writes to the invoked program's STDIN. That program generally waits on its STDIN and after you send a newline it gets everything written to it (since the last newline) and runs relevant processing.
If you simply need to pass parameters/arguments to the script at its invocation then that generally doesn't need nor involve its STDIN.
Since Python 3.5 the recommended method is subprocess.run(), which has a signature and behavior very similar to those of the Popen constructor.
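For example, a minimal sketch for Python 3.5+, reusing the placeholder script name from above:
import subprocess

result = subprocess.run(['perl', 'script.pl', 'arg'],
                        input=b"text for the script's STDIN\n",  # optional
                        stdout=subprocess.PIPE)
print(result.returncode)
print(result.stdout)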
is there a "nice" way to iterate over the output of a shell command?
I'm looking for the python equivalent for something like:
ls | while read file; do
echo $file
done
Note that 'ls' is only an example of a shell command that returns its result on multiple lines, and of course 'echo' just stands for: do something with it.
I know of these alternatives: Calling an external command in Python, but I don't know which one to use or whether there is a "nicer" solution. (In fact, "nicer" is the main focus of this question.)
This is for replacing some bash scripts with Python.
You can open a pipe (see the docs):
import os

with os.popen('ls') as pipe:
    for line in pipe:
        print(line.strip())
As noted in the documentation, this syntax is deprecated and has been replaced by the more verbose subprocess.Popen:
from subprocess import Popen, PIPE

pipe = Popen('ls', shell=True, stdout=PIPE)
for line in pipe.stdout:
    print(line.strip())
check_output will give you back a string you can parse. call will simply reuse your existing stdout/stderr/stdin and return the process exit code.
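For instance, a quick sketch of both, using the same ls example:
import subprocess

out = subprocess.check_output(['ls', '-l'])   # whole output as one string (bytes on Python 3)
for line in out.splitlines():
    print(line)

rc = subprocess.call(['ls', '-l'])            # output goes straight to the console
print(rc)                                     # just the exit code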
Hello, I'm really new to the Python programming language and I have encountered a problem writing one script. I want to save the output from stdout that I obtain when I run a tcpdump command into a variable in a Python script, but I want the tcpdump command to run continuously, because I want to gather the lengths of all the packets that get filtered by tcpdump (with the filter I wrote).
I tried :
fin, fout = os.popen4(comand)
result = fout.read()
return result
But it just hangs.
I'm guessing that it hangs because fout.read() doesn't return until the child process exits (it reads until EOF). You should be using subprocess.Popen instead.
import subprocess
import shlex  # just so you don't have to break "comand" into a list yourself ;)

p = subprocess.Popen(shlex.split(comand), stdout=subprocess.PIPE)
first_line_of_output = p.stdout.readline()
second_line_of_output = p.stdout.readline()
...
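If you want to keep consuming output for as long as tcpdump runs, you can loop over the lines instead of calling readline by hand. A sketch; note the "length N" pattern is an assumption about what your tcpdump filter and options print:
import re
import shlex
import subprocess

p = subprocess.Popen(shlex.split(comand), stdout=subprocess.PIPE)
total_length = 0
for line in iter(p.stdout.readline, ''):
    m = re.search(r'length (\d+)', line)  # assumes tcpdump prints "length N" per packet
    if m:
        total_length += int(m.group(1))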
I want to capture stdout from a long-ish running process started via subprocess.Popen(...) so I'm using stdout=PIPE as an arg.
However, because it's a long running process I also want to send the output to the console (as if I hadn't piped it) to give the user of the script an idea that it's still working.
Is this at all possible?
Cheers.
The buffering your long-running sub-process is probably performing will make your console output jerky and very bad UX. I suggest you consider instead using pexpect (or, on Windows, wexpect) to defeat such buffering and get smooth, regular output from the sub-process. For example (on just about any unix-y system, after installing pexpect):
>>> import sys
>>> import pexpect
>>> child = pexpect.spawn('/bin/bash -c "echo ba; sleep 1; echo bu"', logfile=sys.stdout); x=child.expect(pexpect.EOF); child.close()
ba
bu
>>> child.before
'ba\r\nbu\r\n'
The ba and bu will come with the proper timing (about a second between them). Note the output is not subject to normal terminal processing, so the carriage returns are left in there -- you'll need to post-process the string yourself (just a simple .replace!-) if you need \n as end-of-line markers (the lack of processing is important just in case the sub-process is writing binary data to its stdout -- this ensures all the data's left intact!-).
S. Lott's comment points to Getting realtime output using subprocess and Real-time intercepting of stdout from another process in Python
I'm curious that Alex's answer here is different from his answer 1085071.
My simple little experiments with the answers in the two other referenced questions have given good results...
I went and looked at wexpect as per Alex's answer above, but I have to say that reading the comments in the code did not leave me with a very good feeling about using it.
I guess the meta-question here is when will pexpect/wexpect be one of the Included Batteries?
Can you simply print it as you read it from the pipe?
Inspired by the pty.openpty() suggestion somewhere above; tested on Python 2.6 and Linux. Publishing it since it took a while to get this working properly, without buffering...
import os
import pty
import subprocess
import sys

def call_and_peek_output(cmd, shell=False):
    # Run cmd with its stdout attached to a pty (so the child doesn't block-buffer),
    # echo its output to our stdout as it arrives, and yield it line by line.
    master, slave = pty.openpty()
    p = subprocess.Popen(cmd, shell=shell, stdin=None, stdout=slave, close_fds=True)
    os.close(slave)
    line = ""
    while True:
        try:
            ch = os.read(master, 1)
        except OSError:
            # We get this exception when the spawned process closes all references to
            # the pty descriptor that we passed to it for stdout
            # (typically when it and its children exit).
            break
        line += ch
        sys.stdout.write(ch)
        if ch == '\n':
            yield line
            line = ""
    if line:
        yield line
    ret = p.wait()
    if ret:
        raise subprocess.CalledProcessError(ret, cmd)

for l in call_and_peek_output("ls /", shell=True):
    pass
Alternatively, you can pipe your process into tee and capture only one of the streams.
Something along the lines of sh -c 'process interesting stuff' | tee /dev/stderr.
Of course, this only works on Unix-like systems.
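From Python, that could look roughly like this (a sketch; "process interesting stuff" is just the placeholder command from above, and it only works on Unix-like systems):
import subprocess

# tee duplicates the output to /dev/stderr (the console) while Python captures stdout.
p = subprocess.Popen("process interesting stuff | tee /dev/stderr",
                     shell=True, stdout=subprocess.PIPE)
captured, _ = p.communicate()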