why does setting stderr=subprocess.STDOUT fix a subprocess.check_output call? - python

I have a python script running on a small server that is called in three different ways: from within another python script, by cron, or by gammu-smsd (an SMS daemon bundled with the wonderful mobile utility gammu). The script is for maintenance and contained the following kludge to measure used space on the system (presumably this is possible from within Python, but this was quick and dirty):
reportdict['Used Space'] = subprocess.check_output(["df / | tail -1 | awk '{ print $5; }'"], shell=True)[0:-1]
Oddly enough this line would only fail when the script was called by a shell script running from gammu-smsd. The line would fail with a CalledProcessError exception saying "returned exit status 2", even though the output attribute of the CalledProcessError object contained the correct output. The only command in the sequence of shell commands that would give such an error status would be awk, with status 2 indicating a fatal error.
If the python script with this line was called by cron, by another python script, or from the command line, the line would work fine. I racked my brains trying to fix the environment for the script, thinking that must be the problem. Finally, though, I put in stderr=subprocess.STDOUT, like so:
reportdict['Used Space'] = subprocess.check_output(["df / | tail -1 | awk '{ print $5; }'"], stderr=subprocess.STDOUT, shell=True)[0:-1]
This was a debug measure to help me figure out if some output was coming on stderr. But after this the script started working, even when called from gammu-smsd! Why might this be the case? I ask for future reference when using subprocess...

Gammu SMSD will call the script with all file descriptors closed (see the documentation); that's probably the reason for the failure.
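A defensive sketch (assuming Python 3.3+ for subprocess.DEVNULL): when a script may be spawned with its standard descriptors closed, as gammu-smsd does, give every stream an explicit, valid target instead of letting the children inherit whatever the parent left behind:
import subprocess

# Sketch: the question's pipeline with explicit descriptors, so it also works
# under daemons (gammu-smsd, cron) that close fds before exec'ing the script.
used = subprocess.check_output(
    "df / | tail -1 | awk '{ print $5; }'",
    shell=True,
    stdin=subprocess.DEVNULL,    # don't inherit a possibly-closed fd 0
    stderr=subprocess.STDOUT,    # give the children a valid fd 2 as well
)
print(used.decode().rstrip("\n"))  # e.g. "42%"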


retrieve value from python shell beginner question

I can use this python file from a library to make a read request of a temperature sensor value via the BACnet protocol by running this command from the terminal:
echo 'read 12345:2 analogInput:2 presentValue' | py -3.9 ReadProperty.py
And I can see in the console the value of 67.02999114990234 is returned as expected.
I apologize if this question seems real silly and entry level, but could I ever call this script and assign a value to the sensor reading? Any tips greatly appreciated.
For example, if I run this:
import subprocess
read = "echo 'read 12345:2 analogInput:2 presentValue' | python ReadProperty.py"
sensor_reading = subprocess.check_output(read, shell=True)
print("sensor reading is: ",sensor_reading)
It will just print 0, but I'm hoping to figure out a way to print the sensor reading of 67.02999114990234. I think what is happening under the hood is that the BACnet library brings up some sort of shell scripting that uses stdin/stdout/flush.
os.system does not return the output from stdout, but the exit code of the executed program/code.
See the docs for more information:
On Unix, the return value is the exit status of the process encoded in the format specified for wait().
On Windows, the return value is that returned by the system shell after running command.
For getting the output from stdout into your program, you have to use the subprocess module. There are plenty of tutorials out there on how to use subprocess, but this is an easy way:
import subprocess
read = "echo 'read 12345:2 analogInput:2 presentValue' | py -3.9 ReadProperty.py"
sensor_reading = subprocess.check_output(read, shell=True)
print(sensor_reading)
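If you want the number itself rather than raw bytes, here is a sketch (assuming Python 3.7+ for capture_output, and that ReadProperty.py prints only the value) that feeds the command over stdin via run()'s input= argument:
import subprocess

# Sketch: replace the echo | ... shell pipeline with run()'s input= argument
# and parse the captured text; the command string is the one from the question.
result = subprocess.run(
    ["py", "-3.9", "ReadProperty.py"],
    input="read 12345:2 analogInput:2 presentValue\n",
    capture_output=True,
    text=True,
)
sensor_reading = float(result.stdout.strip())  # assumes only the value is printed
print("sensor reading is:", sensor_reading)    # e.g. 67.02999114990234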

Python subprocess: in which case does a segfaulting program return -11 or 139?

I'm using Python 3.7 and the subprocess library.
I have a binary my_prog which crashes with a segfault:
$> ./my_prog
[1] 9328 segmentation fault ./my_prog
In my script main.py, I have these lines of code:
import subprocess

try:
    output = subprocess.check_output(['./my_prog'], shell=True, stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as exc:
    print(exc.returncode)
    print(exc.output)
In this case, I get
$> python3 main.py
-11
b''
Ok, subprocess catches the signal SIGSEGV.
Ok, no output. Why not.
But, if I want the same program to read on stdin, I have to modify my line in main.py (the file "text.txt" exists):
output = subprocess.check_output(['./my_prog < text.txt'], shell=True, stderr=subprocess.STDOUT)
And in this case I get:
$> python3 main.py
139
b'/bin/sh: line 1: 17235 Segmentation fault: 11 ./my_prog < text.txt\n'
I know that 139 is 128 + 11, so it means SIGSEGV too.
And now, I have output!
Even though 139 and -11 mean the same thing, why does the returncode change between these two situations? And why is there no output in the first case?
Thanks :)
EDIT:
Added the question about the difference in output.
For efficiency, the shell simply execs the last (or only) command it is running in certain cases. Then the command is the same process as the shell, the direct child of your Python script, and its death by signal is reported in the usual fashion (here, as the -11; the negative number is Python's convention for "killed by signal N").
Redirecting input prevents this, perhaps to avoid issues with prematurely closing file descriptors open on a terminal. Then the segfault from my_prog is reported to the shell, which prints a message (note that /bin/sh appears in it) and converts the death by signal into an exit status of 139 (128 + 11); that status is what Python sees.
The shell could re-report the signal in the cases where it doesn't exec, by killing itself with the same signal (strace does this), but it doesn't bother with that extra step. (And this way you needn't wonder whether the shell itself crashed.) It's unfortunate that shells further restrict the range of usable exit statuses this way, but the convention is long established.
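A small sketch to normalize the two conventions in calling code (terminating_signal is a made-up helper):
import subprocess

def terminating_signal(returncode):
    # Python reports a direct child killed by signal N as -N; a shell in
    # between reports it as 128 + N. (128+N is a heuristic: a program can
    # legitimately exit with such a status too.)
    if returncode < 0:
        return -returncode
    if returncode > 128:
        return returncode - 128
    return None  # normal exit

proc = subprocess.run('./my_prog < text.txt', shell=True)
print(terminating_signal(proc.returncode))  # 11 in both of the cases above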

Linux: cat to named pipe in a python script

I have a Java program that uses video from a framegrabber card. This program is launched through a Python script, launcher.py.
The easiest way I found to read the video stream is to make Java read from a named pipe, and this works perfectly. So my session looks like:
$ mkfifo videopipe
$ cat /dev/video1>videopipe
and in a second terminal (since the cat command is blocking):
$ python launcher.py
I would like to automate this process. Unfortunately, the result is always the same: the Java application starts (confirmed through a print statement in the java program), but then the terminal stalls and nothing more appears, no exception or anything else.
Since this process works manually, I guess I am doing something wrong in the python program. To simplify things, I isolated the piping part:
from subprocess import call, Popen, PIPE, check_call
BASH_SWITCHTO_WINTV = ['v4l2-ctl', '-d /dev/video1', '-i 2', '--set-standard=4']
BASH_CREATE_FIFO_PIPE = ['mkfifo', 'videopipe']
BASH_PIPE_VIDEO = 'cat /dev/video1>videopipe'
def run():
    try:
        print('running bash commands...')
        call(BASH_SWITCHTO_WINTV)
        call(BASH_CREATE_FIFO_PIPE)
        call(BASH_PIPE_VIDEO.split(), shell=True)
    except:
        raise RuntimeError('An error occurred while piping the video')

if __name__ == '__main__':
    run()
which when run, outputs:
running bash commands...
Failed to open /dev/video1: No such file or directory
A little help would be very much appreciated :-)
If you're using shell=True, just pass a string:
BASH_PIPE_VIDEO = 'cat /dev/video1 > videopipe'
Currently, cat is passed to the shell as your script, and /dev/video1>videopipe is passed to that shell as a literal argument -- not parsed as part of the script text at all, and having no effect since the script (just calling cat) doesn't look at its arguments.
Alternately, to avoid needless shell use (and thus shell-related bugs such as shellshock, and potential for injection attacks if you were accepting any argument from a non-hardcoded source):
Popen(['cat', '/dev/video1'], stdout=open('videopipe', 'w'))
On a note unrelated to your "cat to named pipe" question -- be sure you get your spaces correct.
BASH_SWITCHTO_WINTV = ['v4l2-ctl', '-d /dev/video1', ...]
...uses the name <space>/dev/video1, with a leading space, as the input device; it's the same as running v4l2-ctl "-d /dev/video1" in shell, which would cause the same problem.
Be sure that you split your arguments correctly:
BASH_SWITCHTO_WINTV = ['v4l2-ctl', '-d', '/dev/video1', ...]
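Putting both fixes together, a sketch of the question's run() with the same names (error handling omitted; note that open() on a FIFO blocks until a reader opens the other end):
from subprocess import call, Popen

BASH_SWITCHTO_WINTV = ['v4l2-ctl', '-d', '/dev/video1', '-i', '2', '--set-standard=4']
BASH_CREATE_FIFO_PIPE = ['mkfifo', 'videopipe']

def run():
    print('running bash commands...')
    call(BASH_SWITCHTO_WINTV)
    call(BASH_CREATE_FIFO_PIPE)
    # The open() blocks until the Java program opens the FIFO for reading,
    # so start launcher.py promptly after this.
    Popen(['cat', '/dev/video1'], stdout=open('videopipe', 'w'))

if __name__ == '__main__':
    run()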

How to call a series of bash commands in python and store output

I am trying to run the following bash script from Python and store its readlist output. The readlist, which I want stored as a python list, is a list of all files in the current directory ending in *concat_001.fastq.
I know it may be easier to do this in python, i.e.
import os
readlist = [f for f in os.listdir(os.getcwd()) if f.endswith("concat_001.fastq")]
readlist = sorted(readlist)
However, this is problematic, as I need Python to sort the list in EXACTLY the same way as bash, and I was finding that bash and Python sort certain things in different orders (e.g. Python and bash treat capitalised and uncapitalised names differently). When I tried
readlist = np.asarray(sorted(flist, key=str.lower))
I still found that two files starting with ML_ and M_ were sorted in different orders by bash and Python. Hence I am trying to run my exact bash script through Python, and then use the bash-sorted list in my subsequent Python code.
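(Aside: if the mismatch comes from locale collation, the following sketch may reproduce bash's ordering in pure Python, using the flist array from the snippet above:)
import locale

# Sketch: bash expands globs in the locale's collation order, so sort with
# the C library's collation rather than plain codepoint ordering.
locale.setlocale(locale.LC_COLLATE, "")
readlist = sorted(flist, key=locale.strxfrm)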
input_suffix="concat_001.fastq"
ender=`echo $input_suffix | sed "s/concat_001.fastq/\*concat_001.fastq/g" `
readlist="$(echo $ender)"
I have tried
proc = subprocess.call(command1, shell=True, stdout=subprocess.PIPE)
proc = subprocess.call(command2, shell=True, stdout=subprocess.PIPE)
proc = subprocess.Popen(command3, shell=True, stdout=subprocess.PIPE)
But I just get: <subprocess.Popen object at 0x7f31cfcd9190>
Also - I don't understand the difference between subprocess.call and subprocess.Popen. I have tried both.
Thanks,
Ruth
So your question is a little confusing and does not exactly explain what you want. However, I'll try to give some suggestions to help you update it, or, in the attempt, answer it.
I will assume the following: your python script passes 'input_suffix' to the external script on the command line, and you want your python program to receive the contents of 'readlist' when the external script finishes.
To make our lives simpler (and to allow things to grow more complicated later), I would put your commands in the following bash script:
script.sh
#!/bin/bash
input_suffix=$1
ender=`echo $input_suffix | sed "s/concat_001.fastq/\*concat_001.fastq/g"`
readlist="$(echo $ender)"
echo $readlist
You would execute this as script.sh "concat_001.fastq", where $1 takes in the first argument passed on the command line.
To use python to execute external scripts, as you quite rightly found, you can use subprocess (or as noted by another response, os.system - although subprocess is recommended).
The docs tell you that subprocess.call:
"Wait for command to complete, then return the returncode attribute."
and that
"For more advanced use cases when these do not meet your needs, use the underlying Popen interface."
Given you want to pipe the output from the bash script into your python script, let's use Popen as suggested by the docs. As posted in another stackoverflow answer, it could look like the following:
import subprocess
from subprocess import Popen, PIPE
# Execute our script and pipe the output to stdout
process = subprocess.Popen(['script.sh', 'concat_001.fastq'],
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
# Obtain the standard out, and standard error
stdout, stderr = process.communicate()
and then:
>>> print stdout
*concat_001.fastq
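To turn that into a Python list, split the captured text. A further sketch (assuming /bin/bash is available) that skips the helper script and lets the shell expand the glob itself, so the ordering is exactly the shell's:
import subprocess

# Sketch: bash expands the glob in its own collation order; split the
# whitespace-separated expansion into a Python list.
out = subprocess.check_output("echo *concat_001.fastq",
                              shell=True, executable="/bin/bash")
readlist = out.decode().split()  # assumes no whitespace in the filenames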

IOError Input/Output Error When Printing

I have inherited some code which is periodically (randomly) failing due to an Input/Output error being raised during a call to print. I am trying to determine the cause of the exception being raised (or at least, better understand it) and how to handle it correctly.
When executing the following line of Python (in a 2.6.6 interpreter, running on CentOS 5.5):
print >> sys.stderr, 'Unable to do something: %s' % command
The exception is raised (traceback omitted):
IOError: [Errno 5] Input/output error
For context, this is generally what the larger function is trying to do at the time:
from subprocess import Popen, PIPE
import sys

def run_commands(commands):
    for command in commands:
        try:
            out, err = Popen(command, shell=True, stdout=PIPE, stderr=PIPE).communicate()
            print >> sys.stdout, out
            if err:
                raise Exception('ERROR -- an error occurred when executing this command: %s --- err: %s' % (command, err))
        except:
            print >> sys.stderr, 'Unable to do something: %s' % command

run_commands(["ls", "echo foo"])
The >> syntax is not particularly familiar to me; it's not something I use often, and I understand that it is perhaps the least preferred way of writing to stderr. However, I don't believe the alternatives would fix the underlying problem.
From the documentation I have read, IOError 5 is often misused, and somewhat loosely defined, with different operating systems using it to cover different problems. The best I can see in my case is that the python process is no longer attached to the terminal/pty.
As best I can tell nothing is disconnecting the process from the stdout/stderr streams - the terminal is still open for example, and everything 'appears' to be fine. Could it be caused by the child process terminating in an unclean fashion? What else might be a cause of this problem - or what other steps could I introduce to debug it further?
In terms of handling the exception, I can obviously catch it, but I'm assuming this means I won't be able to print to stdout/stderr for the remainder of execution? Can I reattach to these streams somehow, perhaps by resetting sys.stdout to sys.__stdout__, etc.? In this case not being able to write to stdout/stderr is not considered fatal, but if it is an indication of something starting to go wrong I'd rather bail early.
I guess ultimately I'm at a bit of a loss as to where to start debugging this one...
I think it has to do with the terminal the process is attached to. I got this error when I ran a python process in the background and closed the terminal in which I had started it:
$ myprogram.py
Ctrl-Z
$ bg
$ exit
The problem was that I had started a non-daemonized process on a remote server and then logged out (closing the terminal session). A solution was to start a screen/tmux session on the remote server and start the process within that session. Detaching the session and logging out then keeps the terminal associated with the process. This works at least in the *nix world.
I had a very similar problem. I had a program that launched several other programs using the subprocess module. Those subprocesses would then print output to the terminal. What I found was that when I closed the main program, it did not terminate the subprocesses automatically (as I had assumed); they kept running. So if I terminated the main program and then the terminal it had been launched from*, the subprocesses no longer had a terminal attached to their stdout and would throw an IOError. Hope this helps you.
*NB: it must be done in this order. If you just kill the terminal, (for some reason) that would kill both the main program and the subprocesses.
I just got this error because the directory I was writing files to had run out of space. Not sure if this is at all applicable to your situation.
I'm new here, so please forgive if I slip up a bit when it comes to the code detail.
Recently I was able to figure out what causes the I/O error on the print statement when the terminal associated with the run of the python script is closed.
It is because the string to be printed to stdout/stderr is too long. In this case, the "out" string is the culprit.
To fix this problem (without having to keep the terminal open while running the python script), simply read "out" line by line, and print line by line, until we reach the end of "out". Something like:
while True:
    ln = out.readline()
    if not ln:
        break
    print ln.strip("\n")  # strip the newline; print adds its own
The same problem occurs if you print an entire list of strings to the screen: simply print the list one item at a time.
Hope that helps!
The problem is that you've closed the stdout pipe which python is attempting to write to when print() is called.
This can be caused by running a script in the background using & and then closing the terminal session (i.e. closing stdout):
$ python myscript.py &
$ exit
One solution is to set stdout to a file when running in the background
Example
$ python myscript.py > /var/log/myscript.log 2>&1 &
$ exit
No errors on print()
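If redirecting isn't an option, a sketch of a guard inside the script itself (safe_print is a made-up helper) that swallows the IOError instead of letting a lost terminal kill the run:
import sys

def safe_print(msg, stream=sys.stdout):
    # If the terminal behind the stream has gone away, drop the message
    # rather than letting the IOError propagate.
    try:
        stream.write(msg + "\n")
        stream.flush()
    except IOError:
        pass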
It can also happen when your shell crashes while print is trying to write data to it.
In my case the issue was the same OSError Input/Output error, with Odoo. I just restarted the service and the issue disappeared; I don't know why.
