How to exit a program: sys.stderr.write() or print - python

I am writing a small app and I need to quit the program at multiple points.
Should I use:
sys.stderr.write('Ok quitting')
sys.exit(1)
Or should I just do a:
print 'Error!'
sys.exit(1)
Which is better and why? Note that I need to do this a lot. The program should completely quit.

sys.exit('Error!')
Note from the docs:
If another type of object is passed, None is equivalent to passing zero, and any other object is printed to sys.stderr and results in an exit code of 1. In particular, sys.exit("some error message") is a quick way to exit a program when an error occurs.

They're two different ways of showing messages.
print generally goes to sys.stdout and you know where sys.stderr is going. It's worth knowing the difference between stdin, stdout, and stderr.
stdout should be used for normal program output, whereas stderr should be reserved only for error messages (abnormal program execution). There are utilities for splitting these streams, which allows users of your code to differentiate between normal output and errors.
print can print on any file-like object, including sys.stderr:
print >> sys.stderr, 'My error message'  # Python 2 syntax
The advantages of using sys.stderr for errors instead of sys.stdout are:
If the user redirects stdout to a file, they still see errors on the screen.
sys.stderr is unbuffered, so if it is redirected to a log file there is less chance that the program will crash before the error gets written.
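A quick way to see the difference (Python 3 syntax; demo.py and the redirect targets are just example names):
# demo.py
import sys

print('normal output')                  # goes to stdout
print('error output', file=sys.stderr)  # goes to stderr
Running python demo.py > out.log leaves "error output" on the screen while "normal output" lands in the file; adding 2> err.log would capture the errors separately.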
It's worth noting that there's a third way you can provide a closing message:
sys.exit('My error message')
This will send a message to stderr and exit.

If it's an error message, it should normally go to stderr - but whether this is necessary depends on your use case. If you expect users to redirect stdin, stderr and stdout, for example when running your program from a different tool, then you should make sure that status information and error messages are separated cleanly.
If it's just you using the program, you probably don't need to bother. In that case, you might as well just raise an exception, and the program will terminate on its own.
By the way, you can do
print >>sys.stderr, "fatal error" # Python 2.x
print("fatal error", file=sys.stderr) # Python 3.x

Related

Blocking sys.stdout and stderr does not prevent C code from printing

I am including in my python code a function compiled in c via a cython wrapper. I have to take that function as given and cannot change it. Unfortunately, when I run that function, I see output that is bothering me.
I have tried a lot of tricks that are supposed to get rid of it, all of which play with sys.stdout or sys.stderr, most notably the new contextlib.redirect_stdout. However, nothing I tried managed to silence the output.
At the most basic level, I simply tried setting
sys.stdout = open(os.devnull, 'w')
sys.stderr = open(os.devnull, 'w')
This is not a safe or practical way of doing it, but it should shut the function up. Unfortunately, I can still see the output. What am I missing? Is there perhaps another "output type" besides stdout that this function might be using?
If it helps, I am inside a Pycharm debugging session and see this output in my debugging console.
Updated question to reflect that changing stderr did not help
A C function prints to a file descriptor (1 for stdout, 2 for stderr). If you want to prevent the printing, redirect that FD; this can also be done temporarily. Here is a little demo:
import os
STDOUT = 1
saved_fd = os.dup(STDOUT)
null_fd = os.open(os.devnull, os.O_WRONLY)
os.dup2(null_fd, STDOUT)
os.system('echo TEST 1') # redirected to /dev/null
os.dup2(saved_fd, STDOUT)
os.system('echo TEST 2') # normal
# note: close the null_fd, saved_fd when no longer needed
If the C code opens the terminal device itself, there is very little you can do to prevent it. But that would be very unusual (I would even say a bug) unless there is a specific reason to do so.
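If you need this in several places, the same dup/dup2 dance wraps naturally in a context manager. A sketch, where suppress_fd is a made-up helper rather than a library function:
import os
from contextlib import contextmanager

@contextmanager
def suppress_fd(fd):
    # temporarily point the raw descriptor at /dev/null
    saved = os.dup(fd)
    devnull = os.open(os.devnull, os.O_WRONLY)
    try:
        os.dup2(devnull, fd)
        yield
    finally:
        os.dup2(saved, fd)
        os.close(devnull)
        os.close(saved)

with suppress_fd(1):                     # fd 1 is stdout
    os.system('echo this is swallowed')
os.system('echo this is visible')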
Is there perhaps another "output type" besides stdout that this function might be using?
Yes, there is stderr, which is unaffected by a stdout redirect. As a simple example, let printer.py contain:
import sys
sys.stderr.write("printing to stderr")
then running in a terminal
python printer.py > output.txt
leads to the appearance of
printing to stderr
as > output.txt redirects only stdout.
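To silence or capture that stream as well, redirect stderr too:
python printer.py > output.txt 2> errors.txt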

Logging with warning and error sent to stdout, without BrokenPipeError

I want to use logging, but with stderr redirection to stdout, as in:
import logging
import sys
logging.basicConfig(stream=sys.stdout)
for i in range(1, 100):
    logging.warning("foo")  # this should go to stdout
However, this setup is insufficient: if I pipe this script's output into grep -q foo, for instance, it fails with BrokenPipeError: [Errno 32] Broken pipe.
Even if I wrap the whole for block in a try ... except, the error still happens.
As mentioned in this question, solutions such as sys.stderr.close() are not ideal, since they mask useful errors.
The solution from the question above (wrap a try ... except and then do sys.stdout = None) does not work in the case of the logging setup above. Neither does calling logging.shutdown() in the except block.
This logging-related question about redirecting stdout and stderr to a logger seems to indicate that it is necessary to write a class and several methods. Is it necessary even in my case? One of the answers suggests that using contextlib.redirect_stderr might help, but I tried and the error still happens (it happens inside the TextIOWrapper used by the logger, so it seems I cannot catch it).
Finally, when googling the "exception ignored message" sent by the logger, I find this SO question, but its solution is specific to the az command mentioned in the question.
So, I still couldn't find a workable solution: what's the simplest, correct way to setup a logger which sends warnings and errors to stdout?
Edit: on Windows, it's even worse: the broken pipe error may become an EINVAL (OSError: [Errno 22] Invalid argument). Apparently the only way to prevent it would be to code a custom stream (via TextIOBase) and then use that stream in a StreamHandler. That is, replace sys.stdout with something very similar that allows me to ignore the broken pipe error (possibly quitting execution if needed).
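One possible approach along the lines the question itself sketches: wrap sys.stdout in a small stream object whose write swallows broken pipes, and hand that to a StreamHandler. A minimal, hypothetical sketch (PipeTolerantStream is not a library class, and the interpreter's own final flush of sys.stdout can still complain at exit):
import logging
import sys

class PipeTolerantStream:
    def __init__(self, stream):
        self.stream = stream

    def write(self, msg):
        try:
            self.stream.write(msg)
        except OSError:  # includes BrokenPipeError, and EINVAL on Windows
            pass         # the reader went away; drop the message

    def flush(self):
        try:
            self.stream.flush()
        except OSError:
            pass

handler = logging.StreamHandler(PipeTolerantStream(sys.stdout))
logging.basicConfig(handlers=[handler], level=logging.WARNING)
for i in range(1, 100):
    logging.warning("foo")  # survives `python script.py | grep -q foo`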

Suppress stdout from check_output but write it to a log instead

I have the following code:
try:
    subprocess.check_output(command.split())
except subprocess.CalledProcessError as e:
    count_failure.increment()
    logger.error(e.__dict__)
    return
When check_output() fails, then I would like to suppress that message from stdout, but write it to my logger instead.
Right now the stdout error message messes up my tqdm progress bar:
[hobbes3#hobbes3 bin]$ ./mass_index.py
34%|█████████████████████████████████████████▋ | 13/38 [00:00<00:14, 1.75it/s]
unable to open file: path='/mnt/data/samples/irs_990/foo.xml' error='Permission denied'
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 38/38 [00:02<00:00, 5.96it/s]
Also, the actual message Permission denied isn't stored inside e. My e.__dict__ only says
{'returncode': 22, 'cmd': ['/opt/splunk/bin/splunk', 'add', 'oneshot', '/mnt/data/samples/irs_990/foo.xml', '-index', 'main', '-sourcetype', 'irs_990'], 'output': b'', 'stderr': None}
That is because the command you're running writes its error messages to the standard error stream.
check_output only captures standard output unless you pass an extra parameter. So either:
subprocess.check_output(command.split(), stderr=subprocess.STDOUT)
so errors are also in the captured output, or (Python 3):
subprocess.check_output(command.split(), stderr=subprocess.DEVNULL)
to suppress the error messages completely.
To get a proper exception message with standard error in it, you would have to redirect the error stream to its own pipe, so that e.stderr is no longer None:
subprocess.check_output(command.split(), stderr=subprocess.PIPE)
But that could cause deadlocks between the output and error streams: if the pipes aren't read in a smart way (e.g. with threading), a write on one stream can block because its buffer is full while you're reading the other, empty one.
Maybe in your case you'd be better off with subprocess.Popen and communicate, which handles that case nicely (with threads or whatever works underneath):
p = subprocess.Popen(command.split(), stderr=subprocess.PIPE, stdout=subprocess.PIPE)
output, error = p.communicate()
(and keep the same exception handling)
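Tying that back to the original goal of logging instead of printing, a sketch (the failing command here is just an example; wire in your own count_failure and logger):
import logging
import subprocess

logger = logging.getLogger(__name__)
command = '/bin/ls /nonexistent'  # example command that fails

p = subprocess.Popen(command.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, error = p.communicate()
if p.returncode != 0:
    logger.error({'returncode': p.returncode, 'cmd': command, 'stderr': error})  # stderr is captured, not printed
This keeps the command's error text off stdout (so the tqdm bar stays intact) while still recording it.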

IOError Input/Output Error When Printing

I have inherited some code which is periodically (randomly) failing due to an Input/Output error being raised during a call to print. I am trying to determine the cause of the exception being raised (or at least, better understand it) and how to handle it correctly.
When executing the following line of Python (in a 2.6.6 interpreter, running on CentOS 5.5):
print >> sys.stderr, 'Unable to do something: %s' % command
The exception is raised (traceback omitted):
IOError: [Errno 5] Input/output error
For context, this is generally what the larger function is trying to do at the time:
from subprocess import Popen, PIPE
import sys

def run_commands(commands):
    for command in commands:
        try:
            out, err = Popen(command, shell=True, stdout=PIPE, stderr=PIPE).communicate()
            print >> sys.stdout, out
            if err:
                raise Exception('ERROR -- an error occurred when executing this command: %s --- err: %s' % (command, err))
        except:
            print >> sys.stderr, 'Unable to do something: %s' % command

run_commands(["ls", "echo foo"])
The >> syntax is not particularly familiar to me; it's not something I use often, and I understand that it is perhaps the least preferred way of writing to stderr. However, I don't believe the alternatives would fix the underlying problem.
From the documentation I have read, IOError 5 is often misused, and somewhat loosely defined, with different operating systems using it to cover different problems. The best I can see in my case is that the python process is no longer attached to the terminal/pty.
As best I can tell nothing is disconnecting the process from the stdout/stderr streams - the terminal is still open for example, and everything 'appears' to be fine. Could it be caused by the child process terminating in an unclean fashion? What else might be a cause of this problem - or what other steps could I introduce to debug it further?
In terms of handling the exception, I can obviously catch it, but I'm assuming this means I won't be able to print to stdout/stderr for the remainder of execution? Can I reattach to these streams somehow, perhaps by resetting sys.stdout to sys.__stdout__ etc.? In this case not being able to write to stdout/stderr is not considered fatal, but if it is an indication of something starting to go wrong I'd rather bail early.
I guess ultimately I'm at a bit of a loss as to where to start debugging this one...
I think it has to do with the terminal the process is attached to. I got this error when I ran a Python process in the background and closed the terminal in which I had started it:
$ myprogram.py
Ctrl-Z
$ bg
$ exit
The problem was that I had started a non-daemonized process on a remote server and logged out (closing the terminal session). A solution was to start a screen/tmux session on the remote server and start the process within that session. Detaching the session and then logging out keeps a terminal associated with the process. This works at least in the *nix world.
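For example (screen shown; tmux is analogous):
$ ssh myserver
$ screen
$ python myprogram.py
# detach with Ctrl-A d; it is now safe to log out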
I had a very similar problem. I had a program that was launching several other programs using the subprocess module. Those subprocesses would then print output to the terminal. What I found was that when I closed the main program, it did not terminate the subprocesses automatically (as I had assumed); rather, they kept running. So if I terminated the main program and then the terminal it had been launched from*, the subprocesses no longer had a terminal attached to their stdout, and would throw an IOError. Hope this helps you.
*NB: it must be done in this order. If you just kill the terminal, that (for some reason) kills both the main program and the subprocesses.
I just got this error because the disk I was writing files to ran out of space. Not sure if this is at all applicable to your situation.
I'm new here, so please forgive if I slip up a bit when it comes to the code detail.
Recently I was able to figure out what causes the I/O error from the print statement when the terminal associated with the Python script's run is closed.
It is because the string to be printed to stdout/stderr is too long. In this case, the "out" string is the culprit.
To fix this problem (without having to keep the terminal open while running the Python script), simply read the "out" stream line by line, printing each line, until the end is reached. Something like:
while True:
    ln = out.readline()
    if not ln:
        break
    print ln.rstrip("\n")  # strip the existing newline; print adds its own
The same problem occurs if you print an entire list of strings to the screen. Simply print the list one item at a time.
Hope that helps!
The problem is that you've closed the stdout pipe that Python is attempting to write to when print() is called.
This can be caused by running a script in the background using & and then closing the terminal session (i.e. closing stdout):
$ python myscript.py &
$ exit
One solution is to redirect stdout to a file when running in the background:
Example
$ python myscript.py > /var/log/myscript.log 2>&1 &
$ exit
No errors on print()
It can also happen when your shell crashes while print is trying to write data to it.
In my case (Odoo, with the same OSError: Input/output error), the issue disappeared after I restarted the service; I don't know why.

Why do I get a ValueError when explicitly closing stdout?

Python newbie here. I'm writing a script that can dump some output to either a file or stdout, depending on the arguments passed to it. When interpreting arguments, I assign either an opened file or stdout to a global variable named output_file, which can be used by the rest of the script to write output regardless of what type of stream was selected. At the very end of the script I close output_file. This is proper to do for a file stream, and though it's redundant for stdout, my experience with other programming languages suggests that there's no harm in explicitly closing stdout immediately before the program ends.
However, whenever stdout is used for output (and subsequently closed), I get a "ValueError: 'I/O operation on closed file.'". I know this error is not directly produced by my call to close stdout, but occurs after my script returns. My question is: Why does this happen, and is there a way to manually close stdout without triggering it? (I'm aware that I can easily work around the problem by conditionally closing the stream only when a file was selected, but I want to know if/why this is necessary.)
Very simple demonstrative snippet:
from sys import stdout
stdout.close()
The problem is that on Python 3.2 there's an attempt at shutdown to flush stdout without checking whether it was closed.
issue13444 is about this.
You shouldn't have this problem in Python 2.7 releases after the fixing patch.
Once you've closed stdout in this manner, you have to be absolutely sure that nothing will attempt to print anything to stdout. If something does, you get that exception.
My recommendation would be to close output_file conditionally:
if output_file != sys.stdout:
    output_file.close()
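If you'd rather not special-case stdout at all, contextlib.ExitStack only registers cleanup for streams you actually opened. A sketch, assuming filename comes from your argument handling (None meaning stdout):
import contextlib
import sys

filename = None  # e.g. set from command-line arguments

with contextlib.ExitStack() as stack:
    if filename is not None:
        output_file = stack.enter_context(open(filename, 'w'))  # closed on exit
    else:
        output_file = sys.stdout  # never registered, so never closed
    output_file.write('output goes here\n')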
Edit: here is an example where sys.stdout is closed right at the very end of the script, and that nonetheless produces a ValueError: 'I/O operation on closed file' when run.
import atexit

@atexit.register
def goodbye():
    print "You are now leaving the Python sector."

import sys
sys.stdout.close()
Before closing, you can check the output_file.closed attribute:
if not output_file.closed:
    output_file.close()
And make sure you have no I/O calls to output_file after closing.
Two things seem necessary to avoid this error: (i) reset stdout; (ii) don't close stdout itself, close the file to which it was redirected.
import sys

f = open(filename, 'w')
sys.stdout = f
print("la la-la", file=sys.stdout)
f.close()
sys.stdout = sys.__stdout__
Various solutions to this problem suggest copying the 'original' stdout pointer to a variable before assigning stdout to a file (i.e. original = stdout ... stdout = f) and then copying it back afterwards (stdout = original). But they neglect to mention the final operation in their routine (restoring the saved stdout), which can waste hours of hair-pulling.
Found the solution here.
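For what it's worth, on Python 3.4+ contextlib.redirect_stdout does the save/redirect/restore for you and never touches the real stdout:
import contextlib

with open('out.txt', 'w') as f, contextlib.redirect_stdout(f):
    print("la la-la")  # goes to out.txt
print("back on the console")  # sys.stdout has been restored automatically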
