Why do I get a ValueError when explicitly closing stdout?

Python newbie here. I'm writing a script that can dump some output to either a file or stdout, depending on the arguments passed to it. When interpreting arguments, I assign either an open'ed file or stdout to a global variable named output_file, which can be used by the rest of the script to write output regardless of what type of stream was selected. At the very end of the script I close output_file. This is proper to do for a file stream, and though it's redundant for stdout, my experience with other programming languages suggests that there's no harm in explicitly closing stdout immediately before the program ends.
However, whenever stdout is used for output (and subsequently closed), I get a "ValueError: 'I/O operation on closed file.'". I know this error is not directly produced by my call to close stdout, but occurs after my script returns. My question is: Why does this happen, and is there a way to manually close stdout without triggering it? (I'm aware that I can easily work around the problem by conditionally closing the stream only when a file was selected, but I want to know if/why this is necessary.)
Very simple demonstrative snippet:
from sys import stdout
stdout.close()

The problem is that on Python 3.2 there is an attempt at shutdown to flush stdout without first checking whether it has been closed.
issue13444 tracks this.
You shouldn't have this problem in Python 2.7 releases that include the fix.

Once you've closed stdout in this manner, you have to be absolutely sure that nothing will attempt to print anything to stdout. If something does, you get that exception.
My recommendation would be to close output_file conditionally:
import sys

if output_file is not sys.stdout:
    output_file.close()
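More generally, a small context manager that closes only the stream it opened sidesteps the question entirely. A minimal sketch, assuming the helper name open_output (not from the original answer):
import contextlib
import sys

@contextlib.contextmanager
def open_output(path=None):
    # Yield a freshly opened file when a path is given, else sys.stdout;
    # close only a stream we opened ourselves.
    if path is None:
        yield sys.stdout
    else:
        f = open(path, 'w')
        try:
            yield f
        finally:
            f.close()

# usage: `with open_output(path_or_None) as output_file: ...`
# stdout is never closed; files always are.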
Edit: Here is an example where sys.stdout is closed right at the very end of the script, and that nonetheless produces ValueError: I/O operation on closed file when run.
import atexit

@atexit.register
def goodbye():
    print "You are now leaving the Python sector."

import sys
sys.stdout.close()

Before closing, you can check the output_file.closed attribute:
if not output_file.closed:
    output_file.close()
And make sure you have no I/O calls to output_file after closing.

Two things seem necessary to avoid this error: (i) reset stdout; (ii) don't close stdout; close the file to which it was redirected.
import sys

f = open(filename, 'w')
sys.stdout = f
print("la la-la", file=sys.stdout)
f.close()
sys.stdout = sys.__stdout__
Various solutions to this problem suggest copying the 'original' stdout pointer to a variable before assigning stdout to a file (i.e. original = stdout ... stdout = f) and then copying it back afterwards (stdout = original). But they neglect to mention the final operation in their routine, an omission that cost me hours of pulling my hair out.
Found the solution here.
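For concreteness, a sketch of the save-and-restore routine described above, including the easily forgotten final step (the file name is illustrative):
import sys

original = sys.stdout            # keep a reference to the real stdout
f = open('out.txt', 'w')
sys.stdout = f
print("la la-la")                # lands in out.txt
f.close()
sys.stdout = original            # the final operation people forget
print("back on the terminal")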

Related

Blocking sys.stdout and stderr does not prevent C code from printing

I am including in my Python code a function compiled in C via a Cython wrapper. I have to take that function as given and cannot change it. Unfortunately, when I run that function, I see output that is bothering me.
I have tried a lot of tricks that are supposed to get rid of it, all of which play with sys.stdout or sys.stderr -- most notably, the new contextlib.redirect_stdout. However, nothing I tried managed to silence the output.
At the most basic level, I simply tried setting
sys.stdout = open(os.devnull, 'w')
sys.stderr = open(os.devnull, 'w')
which is not a safe or practical way of doing it, but it should shut the function up. Unfortunately, I can still see the output. What am I missing? Is there perhaps another "output type" besides stdout that this function might be using?
If it helps, I am inside a Pycharm debugging session and see this output in my debugging console.
Updated question to reflect that changing stderr did not help
A C function prints to a file descriptor (1 for stdout, 2 for stderr). If you want to prevent the printing, redirect that FD; this can also be done temporarily. Here is a little demo:
import os

STDOUT = 1
saved_fd = os.dup(STDOUT)            # duplicate the original stdout FD
null_fd = os.open(os.devnull, os.O_WRONLY)
os.dup2(null_fd, STDOUT)
os.system('echo TEST 1')             # redirected to /dev/null
os.dup2(saved_fd, STDOUT)
os.system('echo TEST 2')             # normal
os.close(null_fd)                    # close the helper FDs when no longer needed
os.close(saved_fd)
If the C code opens the terminal device itself, there is very little you can do to prevent it. But that would be very unusual (I would even say a bug) unless there is a specific reason to do so.
Is there perhaps another "output type" besides stdout that this function might be using?
Yes, there exists stderr, which is unaffected by a stdout redirect. A simple example: let printer.py contain
import sys
sys.stderr.write("printing to stderr")
then running in a terminal
python printer.py > output.txt
leads to
printing to stderr
appearing on the terminal, because > output.txt redirects only stdout.
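Putting both answers together, here is a sketch of a context manager that temporarily silences both FD 1 and FD 2, so that even C code writing straight to the descriptors is muted (silence_fds is a made-up name, not a library API):
import os
import sys
from contextlib import contextmanager

@contextmanager
def silence_fds():
    # flush Python-level buffers so pending output isn't swallowed
    sys.stdout.flush()
    sys.stderr.flush()
    saved_out, saved_err = os.dup(1), os.dup(2)
    null_fd = os.open(os.devnull, os.O_WRONLY)
    try:
        os.dup2(null_fd, 1)   # FD 1: stdout
        os.dup2(null_fd, 2)   # FD 2: stderr
        yield
    finally:
        os.dup2(saved_out, 1)
        os.dup2(saved_err, 2)
        os.close(saved_out)
        os.close(saved_err)
        os.close(null_fd)

# usage: `with silence_fds(): noisy_c_function()`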

Selectively silence python output

I've written a python script to process some data. The script accepts two parameters, one for the input file and a second for the output file. I want to extend it to allow writing to stdout so output can be piped to other processes. My problem is that I currently also output progress information during processing, as it can take some time. Ideally, I'd like to suppress the progress information only if the script is configured to output to stdout. One advantage is that all the real output is emitted at once at the end of execution and not interspersed during execution.
The way I see it, I have two options, both of which I've tried (and which work), but I'm not sure they're the most Pythonic. Either I can overload calls to print() or I can redirect stdout to /dev/null during processing when the output file is configured to stdout.
Overload print()
if args.output_file.name == '<stdout>':
    def silent(*args, **kwargs):
        return
    global print
    print = silent
Redirect stdout
import sys
from contextlib import redirect_stdout

if args.output_file.name == '<stdout>':
    out = open('/dev/null', 'w')
else:
    out = sys.stdout
with redirect_stdout(out):
    data = process_data()
write_output(data, args.output_file)
Overloading print() seems the least Pythonic, but at least it only affects calls to print() and not writes to stdout as a file. However, if I wanted to print() to stderr, that would also be suppressed.
Redirecting stdout seems cleaner, except that I end up redirecting stdout to stdout when the script is writing to a regular file.
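One way to avoid that redundant redirect is to build the context conditionally; a sketch assuming Python 3.7+ (for contextlib.nullcontext) and the args, process_data and write_output names from the question:
import os
from contextlib import redirect_stdout, nullcontext

if args.output_file.name == '<stdout>':
    ctx = redirect_stdout(open(os.devnull, 'w'))  # silence print() calls
else:
    ctx = nullcontext()                           # no-op: stdout stays as-is

with ctx:
    data = process_data()
write_output(data, args.output_file)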

How to write to stdout after redirecting FD

Please take a look at the Python code below.
import os
import sys

so = open('/tmp/test.log', 'a+')
os.dup2(so.fileno(), sys.stdout.fileno())
After executing that piece of code, I would still like to have the possibility of printing something to the standard stdout.
I already tried:
print('Foo\n', file=sys.__stdout__)
According to the documentation, that could be the way to go:
sys.__stdin__
sys.__stdout__
sys.__stderr__
These objects contain the original values of stdin, stderr and stdout at the start of the program. They are used during finalization, and could be useful to print to the actual standard stream no matter if the sys.std* object has been redirected.
But that's not the case here. It's still logging to my test.log file.
Python version: 3.4.8
Any help would be greatly appreciated.
The reason sys.__stdout__ did not work is because you replaced the original stdout's file descriptor with os.dup2(), so any Python file object that is based on it would print to the newly-opened file. The old stdout actually gets closed when you do dup2, so there is no restoring it.
If you want to have all Python code that uses sys.stdout to print to /tmp/test.log, open a new file and assign it to sys.stdout.
sys.stdout = open('/tmp/test.log', 'a+')
The original sys.stdout will remain available as sys.__stdout__.
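For example (string contents are illustrative):
import sys

sys.stdout = open('/tmp/test.log', 'a+')
print('recorded in the log')                   # goes to /tmp/test.log
print('shown on screen', file=sys.__stdout__)  # original stdout is untouched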
If you also want to redirect any code that writes directly to sys.stdout.fileno(), including subprocesses that you start with os.system() or subprocess.call(), while keeping access to the original stdout, things get more complex: you would need to dup() stdout first and save that, then use your dup2() call to replace FD 1:
import os
import sys

saved_stdout_fd = os.dup(sys.stdout.fileno())
saved_stdout = os.fdopen(saved_stdout_fd, 'w')
so = open('/tmp/test.log', 'a+')
os.dup2(so.fileno(), sys.stdout.fileno())
Now sys.stdout goes to your /tmp/test.log and saved_stdout will write to the original stdout.
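Usage might then look like this, continuing with the names from the snippet above (string contents are illustrative):
print('this line ends up in /tmp/test.log')              # via the replaced FD 1
print('this line hits the terminal', file=saved_stdout)  # via the saved FD
saved_stdout.flush()                                      # fdopen'ed streams buffer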

Silence loggers and printing to screen

I'm having a problem with my python script.
It's printing massive amounts of data on the screen, and I would like to prevent all sorts of printing to screen.
Edit:
The library I'm using is mechanize, and it's printing a LOT of data on screen.
I have set these to false with no luck!
br.set_debug_redirects(False)
br.set_debug_responses(False)
br.set_debug_http(False)
Any ideas?
Help would be amazing and very much appreciated!
(Based on your 2nd edit)
If you don't want to disable all output, you can try to be specific to mechanize itself. http://wwwsearch.sourceforge.net/mechanize/ provides a snippet, which I've modified (though I'm not sure if it will work):
import logging
logger = logging.getLogger("mechanize")
# only log really bad events
logger.setLevel(logging.ERROR)
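If even ERROR-level messages are unwanted, the logger can be muted entirely with stock logging calls (nothing mechanize-specific here):
import logging

logger = logging.getLogger("mechanize")
logger.addHandler(logging.NullHandler())  # swallow records without output
logger.propagate = False                  # don't hand them to the root logger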
When you print something it goes to the screen through the sys.stdout file. You can change this file to any other file (e.g., a log file you open) so that nothing is printed to the screen:
import sys
# save the old stdout so you can print later (do sys.stdout = OLD_STDOUT)
OLD_STDOUT = sys.stdout
sys.stdout = open("logfile.txt", 'w')
Of course, if you're talking about some library that you're calling, it may be printing to sys.stderr. Luckily, you can do the exact same thing for this one (continuing from above):
OLD_STDERR = sys.stderr
sys.stderr = open("errorLog.txt", 'w')
Now if, for some reason, you want to completely ignore stdout (or stderr) and never see it again, you can define your own file-like class that simply discards whatever is written:
class Discarder(object):
    def write(self, text):
        pass  # do nothing

# now discard everything coming out of stdout
sys.stdout = Discarder()
And, to add to the din of possible solutions, here is a solution that works in Unix shells:
# discards all output (change /dev/null to a file name to keep track of output)
python yourScript.py > /dev/null
You may redirect sys.stdout and sys.stderr to a file or any file-like object of yours, e.g.
class EatLog(object):
    def write(self, text):
        pass

sys.stdout = EatLog()
but I would not recommend that; the simpler option is to use OS-level redirection, e.g.
python myscript.py > out.log
You can use the StringIO module, too, instead of rolling your own stdout stream. Occasionally, stdout needs more than a write method (flush is another common one), which StringIO will handle.
import StringIO
import sys
sys.stdout = StringIO.StringIO()
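On Python 3 the StringIO class lives in the io module instead:
import io
import sys

sys.stdout = io.StringIO()  # Python 3 equivalent of the snippet above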

How do you make Python wait so that you can read the output?

I've always been a heavy user of Notepad2, as it is fast, feature-rich, and supports syntax highlighting. Recently I've been using it for Python.
My problem: when I finish editing a certain Python source file and try to launch it, the window disappears before I can see the output. Is there any way for me to make the results wait so that I can read them, short of using an input() or time-delay function? Otherwise I'd have to use IDLE, which keeps the output on screen for you to read.
(My apologies if this question is a silly one, but I'm very new at Python and programming in general.)
If you don't want to use raw_input() or input() you could log your output (stdout, stderr) to a file or files.
You could either use the logging module, or just redirect sys.stdout and sys.stderr.
I would suggest using a combination of the logging and traceback modules if you want to log errors with their stack trace.
Something like this maybe:
import logging, traceback

logging.basicConfig(filename=r'C:\Temp\log.txt', level=logging.DEBUG)
try:
    # do some stuff
    logging.debug('I did some stuff!')
except SomeException:
    logging.error(traceback.format_exc())
Here's an example of redirecting stdout and stderr:
import sys

if __name__ == '__main__':
    save_out = sys.stdout  # save the original stdout so you can put it back later
    out_file = open(r'C:\Temp\out.txt', 'w')
    sys.stdout = out_file
    save_err = sys.stderr
    err_file = open(r'C:\Temp\err.txt', 'w')
    sys.stderr = err_file
    main()  # call your main function
    sys.stdout = save_out  # set stdout back to its original object
    sys.stderr = save_err
    out_file.close()
    err_file.close()
I'm going to point out that this is not the easiest or most straightforward way to go.
This is a "problem" with Notepad2, not Python itself.
Unless you want to use input()/sleep (or any other blocking function) in your scripts, I think you have to turn to the settings in Notepad2 and see what that has to offer.
You could start it in a command window, e.g.:
c:\tmp\python>main.py
Adding raw_input() (or input() in Python 3) at the end of your script will freeze it until Enter is pressed, but it's not a good thing to do.
You can add a call to raw_input() to the end of your script in order to make it wait until you press Enter.
