stdin should not wait for "CTRL+D" - python

I have a simple Python script that should read from stdin, so I redirect the stdout of another program to the stdin of my Python script.
But the lines logged by my program only "reach" the Python script when the logging program gets killed.
What I actually want is to handle each line as soon as it is available, not when my program, which should actually run 24/7, quits.
So how can I make this happen? How can I read from stdin without waiting for CTRL+D or EOF?
Example
# accept_stdin.py
import sys
import datetime

for line in sys.stdin:
    print datetime.datetime.now().second, line

# print_data.py
import time

print "1 foo"
time.sleep(3)
print "2 bar"

# bash
python print_data.py | python accept_stdin.py

Like all file objects, the sys.stdin iterator reads input in chunks; even if a line of input is ready, the iterator will try to read up to the chunk size or EOF before outputting anything. You can work around this by using the readline method, which doesn't have this behavior:
while True:
    line = sys.stdin.readline()
    if not line:
        # End of input
        break
    do_whatever_with(line)
You can combine this with the 2-argument form of iter to use a for loop:
for line in iter(sys.stdin.readline, ''):
    do_whatever_with(line)
I recommend leaving a comment in your code explaining why you're not using the regular iterator.
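For example, the loop above with such a comment attached (do_whatever_with is the placeholder from the snippets above):
# sys.stdin's file-object iterator reads ahead in chunks, which delays lines
# arriving over a pipe; readline doesn't, so iterate over it explicitly.
for line in iter(sys.stdin.readline, ''):
    do_whatever_with(line)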

There is also an issue with your producer program, i.e. the one whose stdout you pipe into your Python script.
Because this program only prints and never flushes, the printed data is held in its internal stdout buffer instead of being handed to the system.
Add a sys.stdout.flush() call right after each print statement in print_data.py.
You only see the data when the program quits because stdout is flushed automatically on exit.
See this question for an explanation.
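A sketch of print_data.py with the flushes added:
# print_data.py
import sys
import time

print "1 foo"
sys.stdout.flush()  # push the line into the pipe now instead of at exit
time.sleep(3)
print "2 bar"
sys.stdout.flush()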

As said by @user2357112, you need to use:
for line in iter(sys.stdin.readline, ''):
After that you need to start Python with the -u flag so that stdin and stdout are unbuffered:
python -u print_data.py | python -u accept_stdin.py
You can also specify the flag in the shebang.
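For example (a sketch, assuming the interpreter lives at /usr/bin/python; note that a Linux shebang line can pass at most one argument, so the /usr/bin/env form cannot be combined with -u):
#!/usr/bin/python -u
# print_data.py now runs unbuffered without typing "python -u" on the command line
import time

print "1 foo"
time.sleep(3)
print "2 bar"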

Related

Bypass a command subprocess that asks for user input? (password) [duplicate]

I'm working with a piece of scientific software called Chimera. Some of the code downstream of this question requires that I use Python 2.7.
I want to call a process, give that process some input, read its output, give it more input based on that, etc.
I've used Popen to open the process, process.stdin.write to pass standard input, but then I've gotten stuck trying to get output while the process is still running. process.communicate() stops the process, process.stdout.readline() seems to keep me in an infinite loop.
Here's a simplified example of what I'd like to do:
Let's say I have a bash script called exampleInput.sh.
#!/bin/bash
# exampleInput.sh

# Read a number from the input
read -p 'Enter a number: ' num

# Multiply the number by 5
ans1=$( expr $num \* 5 )

# Give the user the multiplied number
echo $ans1

# Ask the user whether they want to keep going
read -p 'Based on the previous output, would you like to continue? ' doContinue

if [ $doContinue == "yes" ]
then
    echo "Okay, moving on..."
    # [...] more code here [...]
else
    exit 0
fi
Interacting with this through the command line, I'd run the script, type in "5" and then, if it returned "25", I'd type "yes" and, if not, I would type "no".
I want to run a Python script where I pass exampleInput.sh "5" and, if it gives me "25" back, I then pass it "yes".
So far, this is as close as I can get:
#!/home/user/miniconda3/bin/python2
# talk_with_example_input.py
import subprocess

process = subprocess.Popen(["./exampleInput.sh"],
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE)
process.stdin.write("5")
answer = process.communicate()[0]
if answer == "25":
    process.stdin.write("yes")
    ## I'd like to print the STDOUT here, but the process is already terminated
But that fails of course, because after process.communicate(), my process isn't running anymore.
(Just in case/FYI): Actual problem
Chimera is usually a gui-based application to examine protein structure. If you run chimera --nogui, it'll open up a prompt and take input.
I often need to know what chimera outputs before I run my next command. For example, I will often try to generate a protein surface and, if Chimera can't generate a surface, it doesn't break--it just says so through STDOUT. So, in my python script, while I'm looping through many proteins to analyze, I need to check STDOUT to know whether to continue analysis on that protein.
In other use cases, I'll run lots of commands through Chimera to clean up a protein first, and then I'll want to run lots of separate commands to get different pieces of data, and use that data to decide whether to run other commands. I could get the data, close the subprocess, and then run another process, but that would require re-running all of those cleaning up commands each time.
Anyways, those are some of the real-world reasons why I want to be able to push STDIN to a subprocess, read the STDOUT, and still be able to push more STDIN.
Thanks for your time!
You don't need to use process.communicate in your example.
Simply read and write using process.stdin.write and process.stdout.read. Also make sure to send a newline, otherwise read won't return. And when you read from the process's stdout, you also have to handle the newlines coming from echo.
Note: process.stdout.read will block until EOF.
# talk_with_example_input.py
import subprocess

process = subprocess.Popen(["./exampleInput.sh"],
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE)
process.stdin.write("5\n")
stdout = process.stdout.readline()
print(stdout)
if stdout == "25\n":
    process.stdin.write("yes\n")
    print(process.stdout.readline())
$ python2 test.py
25
Okay, moving on...
Update
When communicating with a program in that way, you have to pay special attention to what the application is actually writing. It is best to analyze the output in a hex editor:
$ chimera --nogui 2>&1 | hexdump -C
Please note that readline [1] only reads up to the next newline (\n). In your case you have to call readline at least four times to get that first block of output.
If you just want to read everything up until the subprocess stops printing, you have to read byte by byte and implement a timeout. Sadly, neither read nor readline provides such a timeout mechanism. This is probably because the underlying read syscall [2] (Linux) does not provide one either.
On Linux we can write a single-threaded read_with_timeout() using poll / select. For an example see [3].
from select import epoll, EPOLLIN

def read_with_timeout(fd, timeout__s):
    """Reads from fd until there is no new data for at least timeout__s seconds.

    This only works on linux > 2.5.44.
    """
    buf = []
    e = epoll()
    e.register(fd, EPOLLIN)
    while True:
        ret = e.poll(timeout__s)
        # compare the event mask with !=, not with identity
        if not ret or ret[0][1] != EPOLLIN:
            break
        buf.append(fd.read(1))
    return ''.join(buf)
In case you need a reliable way to do non-blocking reads under both Windows and Linux, this answer might be helpful.
[1] from the python 2 docs:
readline(limit=-1)
Read and return one line from the stream. If limit is specified, at most limit bytes will be read.
The line terminator is always b'\n' for binary files; for text files, the newline argument to open() can be used to select the line terminator(s) recognized.
[2] from man 2 read:
#include <unistd.h>
ssize_t read(int fd, void *buf, size_t count);
[3] example
$ tree
.
├── prog.py
└── prog.sh
prog.sh
#!/usr/bin/env bash
for i in $(seq 3); do
    echo "${RANDOM}"
    sleep 1
done
sleep 3
echo "${RANDOM}"
prog.py
# talk_with_example_input.py
import subprocess
from select import epoll, EPOLLIN

def read_with_timeout(fd, timeout__s):
    """Reads from fd until there is no new data for at least timeout__s seconds.

    This only works on linux > 2.5.44.
    """
    buf = []
    e = epoll()
    e.register(fd, EPOLLIN)
    while True:
        ret = e.poll(timeout__s)
        if not ret or ret[0][1] != EPOLLIN:
            break
        buf.append(fd.read(1))
    return ''.join(buf)

process = subprocess.Popen(
    ["./prog.sh"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE
)

print(read_with_timeout(process.stdout, 1.5))
print('-----')
print(read_with_timeout(process.stdout, 3))
$ python2 prog.py
6194
14508
11293
-----
10506

Iterating over standard in blocks until EOF is read

I have two scripts which are connected by a Unix pipe. The first script writes strings to standard out, and these are consumed by the second script.
Consider the following
# producer.py
import sys
import time

for x in range(10):
    sys.stdout.write("thing number %d\n" % x)
    sys.stdout.flush()
    time.sleep(1)
and
# consumer.py
import sys

for line in sys.stdin:
    print line
Now, when I run: python producer.py | python consumer.py, I expect to see a new line of output each second. Instead, I wait 10 seconds, and I suddenly see all of the output at once.
Why can't I iterate over stdin one-item-at-a-time? Why do I have to wait until the producer gives me an EOF before the loop-body starts executing?
Note that I can get to the correct behavior if I change consumer.py to:
# consumer.py
import sys

def stream_stdin():
    line = sys.stdin.readline()
    while line:
        yield line
        line = sys.stdin.readline()

for line in stream_stdin():
    print line
I'm wondering why I have to explicitly build a generator to stream the items of stdin. Why doesn't this implicitly happen?
According to the python -h help message:
-u     Force stdin, stdout and stderr to be totally unbuffered. On systems where it matters, also put stdin, stdout and stderr in binary mode. Note that there is internal buffering in xreadlines(), readlines() and file-object iterators ("for line in sys.stdin") which is not influenced by this option. To work around this, you will want to use "sys.stdin.readline()" inside a "while 1:" loop.

Reading from and writing to process using subprocess

I have been (unsuccessfully) trying to use Python's subprocess module to interact with an executable program. The program is a very simple command line based script.
It basically just acts in the following way: prompt user with text, wait for numeric input, prompt with more text, wait for next input, etc.
So I set up the subprocess like so
from subprocess import Popen, PIPE
p = Popen('filename.exe', stdin=PIPE, stdout=PIPE)
Then I get the first prompt
print p.stdout.readline()
Properly returns
Enter some value blah blah
Great! Then I try to enter the desired value
p.stdin.write('10.0')
It then completely hangs. I can try grabbing the next prompt
print p.stdout.readline()
but it still hangs no matter what.
What is the proper way to do this one line read/write business? I must be messing up the write line I think.
You are probably forgetting to output a newline:
p.stdin.write('10.0\n')
What happens is that your subprocess is receiving the data, but waiting for more input, until it finds a newline. If you wait for output from the process in this state, you deadlock the system.
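A sketch of the fixed exchange (flushing after each write as well, since the parent's side of the pipe is buffered too):
from subprocess import Popen, PIPE

p = Popen('filename.exe', stdin=PIPE, stdout=PIPE)
print p.stdout.readline()  # "Enter some value blah blah"
p.stdin.write('10.0\n')    # the trailing newline lets the child's read complete
p.stdin.flush()            # push our write through the pipe
print p.stdout.readline()  # next prompt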

How to read stdout output of ongoing process in Python?

Hello, I'm really new to the Python programming language and I have encountered a problem writing one script. I want to save the output from stdout that I obtain when I run a tcpdump command in a variable in a Python script, but I want the tcpdump command to run continuously, because I want to gather the lengths of all the packets that get filtered by tcpdump (with the filter I wrote).
I tried:
fin, fout = os.popen4(comand)
result = fout.read()
return result
But it just hangs.
I'm guessing that it hangs because fout.read() blocks until the child process exits and closes the pipe (os.popen4 itself returns immediately). You should be using subprocess.Popen instead.
import subprocess
import shlex  # just so you don't need to break "comand" into a list yourself ;)

p = subprocess.Popen(shlex.split(comand), stdout=subprocess.PIPE)
first_line_of_output=p.stdout.readline()
second_line_of_output=p.stdout.readline()
...
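To read while tcpdump keeps running, read line by line in a loop (a sketch; the tcpdump arguments and the handle_line helper are illustrative, and tcpdump's -l flag asks it to line-buffer its output):
import shlex
import subprocess

comand = "tcpdump -l -i eth0"  # illustrative command string
p = subprocess.Popen(shlex.split(comand), stdout=subprocess.PIPE)

for line in iter(p.stdout.readline, ''):  # yields each line as tcpdump emits it
    handle_line(line)  # hypothetical helper, e.g. extract the packet length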

Popen does not give output immediately when available

I am trying to read from both stdout and stderr from a Popen and print them out. The command I am running with Popen is the following
#!/bin/bash
i=10
while (( i > 0 )); do
    sleep 1s
    echo heyo-$i
    i="$((i-1))"
done
echo 'to error' >&2
When I run this in the shell, I get one line of output, then a one-second pause, then the next line, and so on. However, I am unable to recreate this using Python. I start two threads, one each to read from stdout and stderr, which put the lines they read into a Queue, and another thread that takes items from this queue and prints them out. But with this, all the output gets printed at once, after the subprocess ends. I want the lines to be printed as and when they are echo'ed.
Here's my python code:
# The `randoms` script is in the $PATH
import subprocess as sp
from queue import Queue, Empty  # on Python 2 this module is named Queue
from threading import Thread

proc = sp.Popen(['randoms'], stdout=sp.PIPE, stderr=sp.PIPE, bufsize=0)
q = Queue()

def stream_watcher(stream, name=None):
    """Take lines from the stream and put them in the q"""
    for line in stream:
        q.put((name, line))
    if not stream.closed:
        stream.close()

Thread(target=stream_watcher, args=(proc.stdout, 'out')).start()
Thread(target=stream_watcher, args=(proc.stderr, 'err')).start()

def displayer():
    """Take lines from the q and add them to the display"""
    while True:
        try:
            name, line = q.get(True, 1)
        except Empty:
            if proc.poll() is not None:
                break
        else:
            # Print the line without its trailing newline character
            print(name.upper(), '->', line[:-1])
            q.task_done()
    print('-*- FINISHED -*-')

Thread(target=displayer).start()
Any ideas? What am I missing here?
Only stderr is unbuffered, not stdout. What you want cannot be done using the shell built-ins alone. The buffering behavior is defined in the stdio(3) C library, which applies line buffering only when the output is to a terminal. When the output is to a pipe, it is pipe-buffered, not line-buffered, and so the data is not transferred to the kernel and thence to the other end of the pipe until the pipe buffer fills.
Moreover, the shell has no access to libc’s buffer-controlling functions, such as setbuf(3) and friends. The only possible solution within the shell is to launch your co-process on a pseudo-tty, and pty management is a complex topic. It is much easier to rewrite the equivalent shell script in a language that does grant access to low-level buffering features for output streams than to arrange to run something over a pty.
However, if you call /bin/echo instead of the shell built-in echo, you may find it more to your liking. This works because now the whole line is flushed when the newly launched /bin/echo process terminates each time. This is hardly an efficient use of system resources, but may be an efficient use of your own.
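Following that advice, a minimal Python rewrite of the script above that flushes each line explicitly, so output crosses the pipe one line per second (a sketch; the file name randoms.py is assumed):
# randoms.py
import sys
import time

for i in range(10, 0, -1):
    time.sleep(1)
    print('heyo-%d' % i)
    sys.stdout.flush()  # hand the line to the pipe immediately
sys.stderr.write('to error\n')  # stderr is unbuffered anyway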
IIRC, setting shell=True on Popen should do it.
