Piping long-running processes - python

I'm fairly new to Linux/Python programming. I tried googling about this but could not find anything useful.
I wrote a simple script that reads lines from a serial port and prints them (as they are read) to stdout. Here's the relevant code:
ser = serial.Serial(args.port)
while True:
    print(ser.readline())
I also wrote a script (this is only for testing purposes) that echoes lines read from stdin to stdout. Here's the code for that:
while True:
    print(args.prefix + input())
I'm using python3, and the scripts are named serial.py and echo.py respectively.
What I would like to do is to pipe the output of serial to the input of echo (echo will later be replaced by a script that writes to a database), and leave those running indefinitely.
I tried both scripts separately and they work fine, but nothing gets printed when I pipe both commands:
./serial.py --port /dev/ttyACM0 | ./echo.py
It does work when I pipe echo to itself:
awer#napalm:~$ ./echo.py --prefix AAA | ./echo.py --prefix BBB
hi!
BBBAAAhi!
What am I doing wrong?
Thanks for any help on this.
Best regards

This could be an issue related to a buffered stdout. Try running serial.py with the '-u' flag of the python3 interpreter, which will force stdout and stderr to be unbuffered, as stated by the docs:
-u     Force the binary I/O layers of stdout and stderr to be unbuffered. stdin is always buffered. The text I/O layer will still be line-buffered.
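If changing how the script is launched is inconvenient, a minimal alternative sketch (assuming Python 3, whose print() accepts a flush argument) is to flush each line from inside serial.py itself; the port value is just the one from your example:
import serial  # pyserial

ser = serial.Serial('/dev/ttyACM0')  # port taken from the question's example
while True:
    # flush=True pushes each line through the pipe immediately
    # instead of waiting for stdout's block buffer to fill
    print(ser.readline(), flush=True)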

Redirection or piping of stdin or stdout is allowed only with -b

I have a simple paramiko script that connects to a server and executes a command:
command = "cd /path/to/command_file; sh command;"
stdin, stdout, stderr = ssh.exec_command(command)
stdin.write('test')
lines = stdout.readlines()
print(lines)
ssh.close()
That command file runs some things I don't understand well (split across lines for readability):
/path/to/hex_file/hex -h 35
-cpterm iso8859-1
-cpstream ibm850
-pf /path/to/databases_list/database_list.pf
-p another_file.p <--- changed to '-b another_file.p' later
When it's executed, it returns:
Redirection or piping of stdin or stdout is allowed only with -b.
And when it's changed to -b, it returns:
Batch-mode X requires a startup procedure.
Any idea where I can start searching for that procedure? Or is there any way to permit piping of stdin?
Batch-mode X requires a startup procedure.
Your question is really about your /path/to/hex_file/hex program, which seems to be proprietary. We cannot help you with that; contact the program vendor.
The only thing I can suggest for your Python code is to enable a pseudo-terminal using the get_pty parameter, which might work around the problem:
stdin, stdout, stderr = ssh.exec_command(command, get_pty=True)
But I do not recommend that, as it can bring you other undesired side effects.
For a similar problem, see Getting "must be run from a terminal" when switching to root user using Paramiko module in Python.
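For context, here is a minimal end-to-end sketch of the get_pty variant; the host, credentials and command are placeholders, not values from the question:
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('example.com', username='user', password='secret')  # placeholders

command = "cd /path/to/command_file; sh command;"
# get_pty=True allocates a pseudo-terminal, so the remote program no longer
# sees its stdin/stdout as redirected pipes
stdin, stdout, stderr = ssh.exec_command(command, get_pty=True)
stdin.write('test\n')  # with a pty, a trailing newline is usually needed
print(stdout.readlines())
ssh.close()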

How to flush STDOUT in python CGI file hosted on PCF?

Due to Apache gateway timeouts, and a desire to display more information to the end user, I'd like to be able to flush STDOUT on a python CGI script hosted on PCF, essentially giving updates on the status of the script.
I have tried enabling the -u flag in python (#!/usr/python -u at the head of my script), the sys.stdout.flush() command, and even using subprocess.call to execute a Perl script that flushes STDOUT and prints some progress text ($| = 1; at the beginning of the Perl script). Furthermore, I've double checked that I'm not using any Apache modules that would require buffering (no mod_deflate, for example). Finally, I'll mention that executing a standard Perl CGI rather than a Python CGI does allow STDOUT flushing, so I figure it must be something with my python/Apache/PCF configuration.
I'm fresh out of ideas here, and would like some advice.
With any of the above methods, I would have thought stdout would flush, but none of them have worked!
Thanks in advance for any assistance.
You can disable buffering using something like this in your Python2 code:
import os
import sys

# set stdout as non-buffered
if hasattr(sys.stdout, 'fileno'):
    fileno = sys.stdout.fileno()
    tmp_fd = os.dup(fileno)
    sys.stdout.close()
    os.dup2(tmp_fd, fileno)
    os.close(tmp_fd)
    sys.stdout = os.fdopen(fileno, "w", 0)
That reopens sys.stdout with no buffering (the 0 as the third argument). After you do that, anything written to sys.stdout should not be buffered.
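On Python 3 the same trick is rejected (text streams cannot be opened fully unbuffered), so a rough equivalent there, offered as an assumption rather than part of the original answer, is to switch to line buffering or flush explicitly:
import sys

# Python 3.7+: make the existing stream flush after every newline
sys.stdout.reconfigure(line_buffering=True)

# or, on any Python 3, flush each write explicitly
print("Content-Type: text/html\n", flush=True)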

Python Popen waiting while it shouldn't (bg and output redirected)

When I run directly in a terminal:
echo "useful"; sleep 10 &> /tmp/out.txt & echo "more";
I get both outputs while sleep goes on in the background. I wanted the same behaviour with Popen (Python 2.7):
p = Popen('echo "useful"; sleep 10 &> /tmp/out.txt & echo "more";', shell = True, stdout = PIPE, stderr = PIPE)
print p.communicate()
It was my understanding that a background process with redirected stdout and stderr would achieve this, but it does not; I have to wait for sleep. Can someone explain?
I need the other output so changing stdout/stderr arguments in Python is not a solution.
EDIT: I understand now that the behaviour I want (get the output, but stop waiting when there is no more output rather than when the process has completed) is not possible from Python.
However, the behaviour appears more or less automatically when using ssh:
ssh 1.2.3.4 "echo \'useful\'; cd ~/dicp/python; nohup sleep 5 &> /tmp/out.txt & echo \'more\';"
(I can ssh to this address without password). So it's not entirely impossible by working around Python; now I need a way to do it without ssh...
That's because the shell process still has to wait for the background process to finish.
You normally don't realize this is happening because you're usually working in the shell where you backgrounded something. You put a process in the background so you can get control of the shell again and continue to work with it.
In other words, the background process is relative to the shell, not your Python process.
As Martijn Pieters points out, this is not how Python behaves (or is meant to behave). However, because the desired behaviour appears when running the command through ssh with nohup, I found this similar trick:
p = Popen('bash -c "echo \'useful\'; cd ~/dicp/python; nohup sleep 5 &> /tmp/out.txt & echo \'more\';"', shell = True, stdout = PIPE, stderr = PIPE)
print p.communicate()
So if I understand correctly, this starts a new shell (bash -c), which then starts a process not attached to it (nohup). The shell terminates as soon as all other processes complete, but the nohup-process keeps running. Desired behaviour achieved!
Maybe not pretty and probably not efficient, but it works.
EDIT: assuming, of course, that you are using bash. A more general answer is welcome.
EDIT2: actually, if my explanation is correct, I'm not sure why nohup does not detach the process even without bash -c... It seems like bash -c should be redundant, since nohup could simply detach the process from the shell started by Popen, but that does not work.
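A rough pure-Python alternative, sketched here as my own assumption rather than something from the thread, is to split the work into two Popen calls so the pipe read by communicate() is never inherited by the slow background command:
import os
from subprocess import Popen, PIPE

# slow part: its own session, output to a file, no pipes inherited
with open('/tmp/out.txt', 'w') as log:
    slow = Popen(['sleep', '10'], stdout=log, stderr=log, preexec_fn=os.setsid)

# quick part: communicate() returns as soon as the echoes finish
quick = Popen('echo "useful"; echo "more"', shell=True, stdout=PIPE, stderr=PIPE)
print(quick.communicate())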

Handling tcpdump output in python

I'm trying to handle tcpdump output in Python.
What I need is to run tcpdump (which captures the packets and gives me information) and read the output and process it.
The problem is that tcpdump keeps running forever and I need to read the packet info as soon as it is output, and keep doing so.
I tried looking into Python's subprocess module and tried calling tcpdump with Popen and piping the stdout, but it doesn't seem to work.
Any directions on how to proceed with this?
import subprocess

def redirect():
    tcpdump = subprocess.Popen("sudo tcpdump...", stdin=subprocess.PIPE,
                               stdout=subprocess.PIPE, shell=True)
    while True:
        s = tcpdump.stdout.readline()
        # do something with s

redirect()
You can make tcpdump line-buffered with "-l". Then you can use subprocess to capture the output as it comes out.
import subprocess as sub

p = sub.Popen(('sudo', 'tcpdump', '-l'), stdout=sub.PIPE)
for row in iter(p.stdout.readline, b''):
    print row.rstrip()  # process here
By default, pipes are block buffered and interactive output is line buffered. It sounds like you need a line buffered pipe - coming from tcpdump in a subprocess.
In the old days, we'd recommend Dan Bernstein's "pty" program for this kind of thing. Today, it appears that pty hasn't been updated in a long time, but there's a new program called "empty" which is more or less the same idea:
http://empty.sourceforge.net/
You might try running tcpdump under empty in your subprocess to make tcpdump line buffered even though it's writing to a pipe.
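Along the same lines, here is a sketch (my own, not from the answer) that uses the pty module from Python's standard library instead of an external helper, so tcpdump believes it is writing to a terminal; the tcpdump arguments are the ones from the answer above:
import os
import pty
import subprocess

master, slave = pty.openpty()
p = subprocess.Popen(['sudo', 'tcpdump', '-l'], stdout=slave, stderr=slave)
os.close(slave)  # the parent keeps only the master end

out = os.fdopen(master, 'rb', 0)
try:
    for line in iter(out.readline, b''):
        print line.rstrip()  # process each captured line here
except OSError:
    pass  # reading the pty raises EIO once tcpdump exits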

Reading from flushed vs unflushed buffers

I've got a script parent.py trying to read stdout from a subprocess sub.py in Python.
The parent parent.py:
#!/usr/bin/python
import subprocess
p = subprocess.Popen("sub.py", stdout=subprocess.PIPE)
print p.stdout.read(1)
And the subprocess, sub.py:
#!/usr/bin/python
print raw_input( "hello world!" )
I would expect running parent.py to print the 'h' from "hello world!". Actually, it hangs. I can only get my expected behaviour by adding -u to sub.py's shebang line.
This confuses me because the -u switch makes no difference when sub.py is run directly from a shell; the shell is somehow privy to the un-flushed output stream, unlike parent.py.
My goal is to run a C program as the subprocess, so I won't be able to control whether or not it flushes stdout. How is it that a shell has better access to a process's stdout than Python running the same thing from subprocess.Popen? Am I going to be able to read such a stdout stream from a C program that doesn't flush its buffers?
EDIT:
Here is an updated example based on korylprince's comment...
## capitalize.sh ##
#!/bin/sh
while [ 1 ]; do
    read s
    echo $s | tr '[:lower:]' '[:upper:]'
done
########################################
## parent.py ##
#!/usr/bin/python
from subprocess import Popen, PIPE
# cmd = [ 'capitalize.sh' ] # This would work
cmd = [ 'script', '-q', '-f', '-c', 'capitalize.sh', '/dev/null']
p = Popen(cmd, stdin=PIPE)
p.stdin.write("some string\n")
p.wait()
When running through script, I get steady printing of newlines (and if this were a Python subprocess, it'd raise an EOFError).
An alternative is
p = subprocess.Popen(["python", "-u", "sub.py"], stdout=subprocess.PIPE)
or the suggestions here.
My experience is that yes, you will be able to read from most C programs without any extra effort.
The Python interpreter takes extra steps to buffer its output which is why it needs the -u switch to disable output buffering. Your typical C program won't do this.
Other than the Python interpreter, I haven't run into any program (C or otherwise) that I expected to work in a subshell and didn't.
The reason the shell can read output immediately, regardless of "-u" is because the program you're launching from the shell has its output connected to a TTY. When the stdout is connected to a TTY, it is unbuffered (because it is up to the TTY to buffer). When you launch the python subprocess from within python, you're connecting stdout to a pipe, which means you're at the mercy of the subprocess to flush its output when it feels like it.
If you're looking to do complicated interactions with a subprocess, look into this tutorial.
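As a concrete illustration, here is a short sketch using the third-party pexpect library against the capitalize.sh script from the question (an assumption on my part; the tutorial linked above is not reproduced here):
import pexpect

# spawn runs the child under a pseudo-terminal, so nothing is block-buffered
child = pexpect.spawn('./capitalize.sh')
child.sendline('some string')
print child.readline().rstrip()  # the pty echoes our input back first
print child.readline().rstrip()  # then the capitalized 'SOME STRING'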
