This must have an answer somewhere but I couldn't find it.
I would like my error/exception messages to be the last thing printed to the terminal, but it seems random whether they come out before all the text I have printed, after it, or somewhere in the middle.
I thought a solution would be to use sys.stdout.flush(), so I tried the following:
if __name__ == '__main__':
    import sys
    try:
        main()
    except:
        sys.stdout.flush()
        raise
But this doesn't work for some reason: it is still seemingly random in which order the error message and the text I have printed come out.
Why? And how do I fix this?
EDIT: Here is a minimal reproducible example, which behaves as described above at least on my system:
import sys
import numpy as np

def print_garbage():
    print(''.join(map(chr, np.random.randint(0, 256, 100))))
    raise Exception

try:
    print_garbage()
except:
    sys.stdout.flush()
    raise
EDIT: I am running Python 3.10.0 on a Windows machine, and the terminal I am using is cmd through the PyCharm terminal. My PyCharm version is Community 2022.2.
You can print the traceback to stdout, so that there will be no out-of-sync problem:
import traceback
import sys

try:
    print_garbage()
except:
    traceback.print_exc(file=sys.stdout)
stdout and stderr are separate channels. stdout is a buffered channel and stderr is unbuffered. This is why you are seeing the stderr message before the stdout one. Python is outputting them in your desired order, but the stdout data is being buffered before it is printed.
See here for how to disable buffering.
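To make that concrete, here is a minimal sketch (my own illustration, not from the question's code): flushing stdout before writing to stderr keeps the two streams ordered when both go to the same terminal, and `flush=True` on `print` bypasses the buffer per call.

```python
import sys

# Flush stdout before writing the error so the two streams
# cannot interleave out of order on the same terminal.
def report_error(message):
    sys.stdout.flush()               # push any buffered output first
    sys.stderr.write(message + "\n")
    sys.stderr.flush()

print("normal output", flush=True)   # flush=True bypasses the stdout buffer
report_error("something went wrong")
```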
Summary: I want to start an external process from Python (version 3.6), poll the result without blocking, and kill it after a timeout.
Details: there is an external process with 2 "bad habits":
It prints out the relevant result after an undefined time.
It does not stop after it printed out the result.
Example: perhaps the following simple application mostly resembles the actual program to be called (mytest.py; source code not available):
import random
import time

print('begin')
time.sleep(10*random.random())
print('result=5')
while True: pass
This is how I am trying to call it:
import subprocess, time

myprocess = subprocess.Popen(['python', 'mytest.py'], stdout=subprocess.PIPE)
for i in range(15):
    time.sleep(1)
    # check if something is printed, but do not wait for anything to be printed
    # check if the result is there
    # if the result is there, then break
myprocess.kill()
I want to implement the logic in comment.
Analysis
The following are not appropriate:
Use myprocess.communicate(), as it waits for termination, and the subprocess does not terminate.
Kill the process and then call myprocess.communicate(), because we don't know when exactly the result is printed out.
Use process.stdout.readline(), because that is a blocking call, so it waits until something is printed. But the process here never prints anything at the end.
The type of the myprocess.stdout is io.BufferedReader. So the question practically is: is there a way to check if something is printed to the io.BufferedReader, and if so, read it, but otherwise do not wait?
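A sketch of the usual reader-thread workaround (my own illustration; the inline child script is a stand-in for mytest.py): a daemon thread blocks on readline() and feeds a queue, so the main loop can poll the queue with get_nowait(), which raises queue.Empty instead of blocking when nothing has been printed yet.

```python
import queue
import subprocess
import sys
import threading
import time

# The reader thread blocks on readline() and pushes each line onto a queue.
def enqueue_output(pipe, q):
    for line in iter(pipe.readline, b''):
        q.put(line)
    pipe.close()

# Inline stand-in for mytest.py: prints the result, then never exits.
child_code = ("import time\n"
              "print('begin', flush=True)\n"
              "time.sleep(0.5)\n"
              "print('result=5', flush=True)\n"
              "while True: time.sleep(1)\n")

proc = subprocess.Popen([sys.executable, '-c', child_code],
                        stdout=subprocess.PIPE)
q = queue.Queue()
threading.Thread(target=enqueue_output, args=(proc.stdout, q),
                 daemon=True).start()

result = None
for _ in range(100):                 # poll for up to ~10 seconds
    time.sleep(0.1)
    try:
        line = q.get_nowait()        # non-blocking check of the queue
    except queue.Empty:
        continue                     # nothing printed yet; keep polling
    if b'result=' in line:
        result = line
        break
proc.kill()                          # the child never exits on its own
```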
I think I got the exact package you need.
Meet command_runner, which is a subprocess wrapper and allows:
Live stdout / stderr output
timeouts regardless of execution
killing the whole process tree, including child processes, in case of timeout
stdout / stderr redirection to queues, files or callback functions
Install with pip install command_runner
Usage:
from command_runner import command_runner

def callback(stdout_output):
    # Do whatever you want here with the output
    print(stdout_output)

exit_code, output = command_runner("python mytest.py", timeout=300, stdout=callback, method='poller')
if exit_code == -254:
    print("Oh no, we got a timeout")
print(output)
# Check for good exit_code and full stdout output here
If timeout is reached, you'll get exit_code -254 but still get to have output filled with whatever your subprocess wrote to stdout/stderr.
Disclaimer: I'm the author of command_runner
Additional non blocking examples using queues can be seen on the github page.
So this code in Python that I have currently works in returning my STDOUT in the variable "run":
run = subprocess.check_output(['Rscript','runData.R',meth,expr,norm])
But it still prints to the screen all this ugly text from having to install a package in R, etc, etc. So I would like for that to be ignored and sent into STDERR. Is there any way to do this? This is what I'm currently working on but it doesn't seem to work. Again, I just want it to ignore what it is printing to the screen except the results. So I want to ignore STDERR and keep STDOUT. Thank you!
run = subprocess.Popen(['Rscript','runData.R',meth,expr,norm],shell=False, stdout=subprocess.PIPE,stderr=devnull)
To avoid piping stderr entirely you may redirect it to os.devnull:
os.devnull
The file path of the null device. For example: '/dev/null' for POSIX, 'nul' for Windows. Also available via os.path.
import os
import subprocess

# The null device must be opened for writing to receive stderr.
with open(os.devnull, 'w') as devnull:
    subprocess.Popen(['cmd', 'arg'], stdout=subprocess.PIPE, stderr=devnull)
I actually solved my problem as soon as I posted this! My apologies! This is how it worked:
output = subprocess.Popen(['Rscript','runData.R',meth,expr,norm],shell=False, stdout=subprocess.PIPE,stderr=subprocess.PIPE)
final = output.stdout.read()
This ignored the messy stuff from the command line and saved my results into final.
Thank you for everyone's quick replies!
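For what it's worth, a sketch of a safer variant (my addition, using a stand-in child process instead of the Rscript call): reading only stdout while stderr is also piped can deadlock once the child fills the stderr pipe buffer, whereas communicate() drains both pipes to completion.

```python
import subprocess
import sys

# Stand-in child: real output on stdout, install chatter on stderr.
child = [sys.executable, '-c',
         "import sys; print('results'); print('noise', file=sys.stderr)"]
proc = subprocess.Popen(child, stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
out, err = proc.communicate()        # drains BOTH pipes, no deadlock risk
final = out.decode()                 # same role as output.stdout.read()
```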
I'm writing an alternative terminal window (using PySide), and I'm running the shell (bash) using:
subprocess.Popen(['/bin/bash','-i'],....
while setting the various stdio to subprocess.PIPE
I'm also putting the output streams (out, err) into non-blocking mode using
fcntl(s.fileno(),F_SETFL,os.O_NONBLOCK)
Then I'm using a timer to poll the output io for available data and pull it.
It works fairly well, but I'm getting some strange behavior some of the time. If at a prompt I issue a command (e.g. pwd), I get two distinct possible outputs:
/etc:$ pwd
/etc
/etc:$
And the other is
/etc:$ pwd/etc
/etc:$
As if the newline from the command and the rest of the output get swapped. This happens for basically any command, so for ls, for example, the first file appears right after the ls, and an empty line after the last file.
What bugs me is that it is not consistent.
EDIT: Added full code sample
#!/usr/bin/python
from PySide import QtCore
from PySide import QtGui
import fcntl
import os
import subprocess
import sys

class MyTerminal(QtGui.QDialog):
    def __init__(self, parent=None):
        super(MyTerminal, self).__init__(parent)
        startPath = os.path.expanduser('~')
        self.process = subprocess.Popen(['/bin/bash', '-i'], cwd=startPath, stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.PIPE)
        fcntl.fcntl(self.process.stdout.fileno(), fcntl.F_SETFL, os.O_NONBLOCK)
        fcntl.fcntl(self.process.stderr.fileno(), fcntl.F_SETFL, os.O_NONBLOCK)
        self.timer = QtCore.QTimer(self)
        self.connect(self.timer, QtCore.SIGNAL("timeout()"), self.onTimer)
        self.started = False

    def keyPressEvent(self, event):
        text = event.text()
        if len(text) > 0:
            if not self.started:
                self.timer.start(10)
                self.started = True
            self.sendKeys(text)
            event.accept()

    def sendKeys(self, text):
        self.process.stdin.write(text)

    def output(self, text):
        sys.stdout.write(text)
        sys.stdout.flush()

    def readOutput(self, io):
        try:
            text = io.read()
            if len(text) > 0:
                self.output(text)
        except IOError:
            pass

    def onTimer(self):
        self.readOutput(self.process.stdout)
        self.readOutput(self.process.stderr)

def main():
    app = QtGui.QApplication(sys.argv)
    t = MyTerminal()
    t.show()
    app.exec_()

if __name__ == '__main__':
    main()
After trying to create a small code example to paste (added above), I noticed that the problem arises because of synchronization between the stdout and stderr.
A little bit of searching led me to the following question:
Merging a Python script's subprocess' stdout and stderr while keeping them distinguishable
I tried the first answer there and used the polling method, but this didn't solve things, as I was getting events mixed in the same manner as before.
What solved the problem was the answer by mossman which basically redirected the stderr to the stdout, which in my case is good enough.
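A sketch of that fix with a stand-in child (my illustration, not the PySide code above): stderr=subprocess.STDOUT merges the two streams into one pipe, so lines arrive through a single channel in the order the child produced them and the interleaving problem goes away.

```python
import subprocess
import sys

# Stand-in child that writes to both streams.
child = [sys.executable, '-c',
         "import sys; print('out line'); print('err line', file=sys.stderr)"]
proc = subprocess.Popen(child, stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT)  # merge stderr into stdout
merged = proc.communicate()[0].decode()            # one stream, one ordering
```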
I have inherited some code which is periodically (randomly) failing due to an Input/Output error being raised during a call to print. I am trying to determine the cause of the exception being raised (or at least, better understand it) and how to handle it correctly.
When executing the following line of Python (in a 2.6.6 interpreter, running on CentOS 5.5):
print >> sys.stderr, 'Unable to do something: %s' % command
The exception is raised (traceback omitted):
IOError: [Errno 5] Input/output error
For context, this is generally what the larger function is trying to do at the time:
from subprocess import Popen, PIPE
import sys

def run_commands(commands):
    for command in commands:
        try:
            out, err = Popen(command, shell=True, stdout=PIPE, stderr=PIPE).communicate()
            print >> sys.stdout, out
            if err:
                raise Exception('ERROR -- an error occurred when executing this command: %s --- err: %s' % (command, err))
        except:
            print >> sys.stderr, 'Unable to do something: %s' % command

run_commands(["ls", "echo foo"])
The >> syntax is not particularly familiar to me; it's not something I use often, and I understand that it is perhaps the least preferred way of writing to stderr. However, I don't believe the alternatives would fix the underlying problem.
From the documentation I have read, IOError 5 is often misused, and somewhat loosely defined, with different operating systems using it to cover different problems. The best I can see in my case is that the python process is no longer attached to the terminal/pty.
As best I can tell nothing is disconnecting the process from the stdout/stderr streams - the terminal is still open for example, and everything 'appears' to be fine. Could it be caused by the child process terminating in an unclean fashion? What else might be a cause of this problem - or what other steps could I introduce to debug it further?
In terms of handling the exception, I can obviously catch it, but I'm assuming this means I won't be able to print to stdout/stderr for the remainder of execution? Can I reattach to these streams somehow - perhaps by resetting sys.stdout to sys.__stdout__ etc.? In this case not being able to write to stdout/stderr is not considered fatal, but if it is an indication of something starting to go wrong I'd rather bail early.
I guess ultimately I'm at a bit of a loss as to where to start debugging this one...
I think it has to do with the terminal the process is attached to. I got this error when I ran a Python process in the background and closed the terminal in which I started it:
$ myprogram.py
Ctrl-Z
$ bg
$ exit
The problem was that I started a not daemonized process in a remote server and logged out (closing the terminal session). A solution was to start a screen/tmux session on the remote server and start the process within this session. Then detaching the session+log out keeps the terminal associated with the process. This works at least in the *nix world.
I had a very similar problem. I had a program that was launching several other programs using the subprocess module. Those subprocesses would then print output to the terminal. What I found was that when I closed the main program, it did not terminate the subprocesses automatically (as I had assumed), rather they kept running. So if I terminated both the main program and then the terminal it had been launched from*, the subprocesses no longer had a terminal attached to their stdout, and would throw an IOError. Hope this helps you.
*NB: it must be done in this order. If you just kill the terminal, (for some reason) that would kill both the main program and the subprocesses.
I just got this error because the disk I was writing files to had run out of space. Not sure if this is at all applicable to your situation.
I'm new here, so please forgive if I slip up a bit when it comes to the code detail.
Recently I was able to figure out what caused the I/O error of the print statement when the terminal associated with the run of the Python script is closed.
It is because the string to be printed to stdout/stderr is too long. In this case, the "out" string is the culprit.
To fix this problem (without having to keep the terminal open while running the python script), simply read the "out" string line by line, and print line by line, until we reach the end of the "out" string. Something like:
while True:
    ln = out.readline()
    if not ln:
        break
    print ln.strip("\n")  # strip the trailing newline before printing
The same problem occurs if you print an entire list of strings to the screen. Simply print the list one item at a time.
Hope that helps!
The problem is that you've closed the stdout pipe which Python is attempting to write to when print() is called.
This can be caused by running a script in the background using & and then closing the terminal session (i.e. closing stdout):
$ python myscript.py &
$ exit
One solution is to set stdout to a file when running in the background
Example
$ python myscript.py > /var/log/myscript.log 2>&1 &
$ exit
No errors on print()
It could happen when your shell crashes while the print was trying to write the data into it.
In my case, I just restarted the service and the issue disappeared; I don't know why.
My issue was the same OSError Input/output error, for Odoo.
After I restarted the service, it disappeared.
I am writing a small app and I need to quit the program multiple number of times.
Should I use:
sys.stderr.write('Ok quitting')
sys.exit(1)
Or should I just do a:
print 'Error!'
sys.exit(1)
Which is better and why? Note that I need to do this a lot. The program should completely quit.
sys.exit('Error!')
Note from the docs:
If another type of object is passed, None is equivalent to passing zero, and any other object is printed to sys.stderr and results in an exit code of 1. In particular, sys.exit("some error message") is a quick way to exit a program when an error occurs.
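A quick check of that documented behaviour (my sketch; the exit is run in a child process so it doesn't stop the current script): the string goes to stderr and the exit status is 1.

```python
import subprocess
import sys

# Run sys.exit('...') in a child interpreter and inspect the result.
proc = subprocess.run(
    [sys.executable, '-c', "import sys; sys.exit('some error message')"],
    capture_output=True, text=True)
# proc.returncode is 1 and proc.stderr contains the message.
```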
They're two different ways of showing messages.
print generally goes to sys.stdout and you know where sys.stderr is going. It's worth knowing the difference between stdin, stdout, and stderr.
stdout should be used for normal program output, whereas stderr should be reserved only for error messages (abnormal program execution). There are utilities for splitting these streams, which allows users of your code to differentiate between normal output and errors.
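For example (a sketch in Python 3 syntax, my illustration): results go to stdout and diagnostics go to stderr, so a user can split them with shell redirection like `python prog.py > data.txt 2> errors.txt`.

```python
import sys

def process(values):
    for v in values:
        if v < 0:
            # abnormal condition: report on stderr
            print('skipping negative value %r' % v, file=sys.stderr)
            continue
        print(v * 2)                 # normal program output on stdout

process([1, -2, 3])
```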
print can print on any file-like object, including sys.stderr:
print >> sys.stderr, 'My error message'
The advantages of using sys.stderr for errors instead of sys.stdout are:
If the user redirected stdout to a file, they still see errors on the screen.
It's unbuffered, so if sys.stderr is redirected to a log file there is less chance that the program will crash before the error was logged.
It's worth noting that there's a third way you can provide a closing message:
sys.exit('My error message')
This will send a message to stderr and exit.
If it's an error message, it should normally go to stderr - but whether this is necessary depends on your use case. If you expect users to redirect stdin, stderr and stdout, for example when running your program from a different tool, then you should make sure that status information and error messages are separated cleanly.
If it's just you using the program, you probably don't need to bother. In that case, you might as well just raise an exception, and the program will terminate on its own.
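A sketch of that option (my illustration, again run in a child process): an uncaught exception prints its traceback to stderr and the interpreter exits with a nonzero status, so for a personal script raising is often enough.

```python
import subprocess
import sys

# Let an exception propagate in a child interpreter and inspect the result.
proc = subprocess.run(
    [sys.executable, '-c', "raise RuntimeError('fatal error')"],
    capture_output=True, text=True)
# The traceback lands on stderr and the exit status is nonzero.
```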
By the way, you can do
print >>sys.stderr, "fatal error" # Python 2.x
print("fatal error", file=sys.stderr) # Python 3.x