Suppressing output of module calling outside library - python

I have an annoying problem when using the machine learning library PyML. PyML uses libsvm to train the SVM classifier. The problem is that libsvm writes some text to standard output, but because that happens outside of Python I cannot intercept it. I tried the methods described in Silence the stdout of a function in Python without trashing sys.stdout and restoring each function call, but none of them help.
Is there any way to do this? Modifying PyML is not an option.

Open /dev/null for writing, use os.dup() to save a copy of stdout, and use os.dup2() to copy your open /dev/null onto stdout. Afterwards, use os.dup2() to copy the saved stdout back to the real stdout:
import os
import sys

devnull = open('/dev/null', 'w')
oldstdout_fno = os.dup(sys.stdout.fileno())  # save the real stdout
os.dup2(devnull.fileno(), 1)                 # point fd 1 at /dev/null
makesomenoise()
os.dup2(oldstdout_fno, 1)                    # restore the real stdout
os.close(oldstdout_fno)
devnull.close()

Dave Smith gave a wonderful answer to that on his blog. Basically, it wraps Ignacio's answer nicely:
@contextmanager
def suppress_stdout():
    with open(os.devnull, "w") as devnull:
        old_stdout = sys.stdout
        sys.stdout = devnull
        try:
            yield
        finally:
            sys.stdout = old_stdout
Now you can surround any call that spews unwanted noise onto stdout like this:
print "You can see this"
with suppress_stdout():
    print "You cannot see this"
print "And you can see this again"
For Python 3 you can use:
from contextlib import contextmanager
import os
import sys

@contextmanager
def suppress_stdout():
    with open(os.devnull, "w") as devnull:
        old_stdout = sys.stdout
        sys.stdout = devnull
        try:
            yield
        finally:
            sys.stdout = old_stdout

I had the same problem and fixed it like this:
import sys
from cStringIO import StringIO

def wrapped_svm_predict(*args):
    """Run :func:`svm_predict` with no *stdout* output."""
    old_stdout, sys.stdout = sys.stdout, StringIO()
    ret = svm_predict(*args)
    sys.stdout = old_stdout
    return ret
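On Python 3, cStringIO is gone; here is a minimal sketch of the same idea with io.StringIO (svm_predict is assumed to be the same libsvm binding as above):
import io
import sys

def wrapped_svm_predict(*args):
    """Run svm_predict with its stdout output suppressed (Python 3)."""
    old_stdout, sys.stdout = sys.stdout, io.StringIO()
    try:
        return svm_predict(*args)
    finally:
        sys.stdout = old_stdout  # always restore, even if svm_predict raises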

I had a similar problem with portaudio/PyAudio initialization. I started with Reid's answer, which worked, though I needed to redirect stderr instead. So here is an updated, cross-platform version that redirects both:
import os
import sys

# hide diagnostic output
with open(os.devnull, 'w') as devnull:
    # suppress stdout
    orig_stdout_fno = os.dup(sys.stdout.fileno())
    os.dup2(devnull.fileno(), 1)
    # suppress stderr
    orig_stderr_fno = os.dup(sys.stderr.fileno())
    os.dup2(devnull.fileno(), 2)

    print('*** stdout should be hidden! ****')
    print('*** stderr should be too! ****', file=sys.stderr)

    os.dup2(orig_stdout_fno, 1)  # restore stdout
    os.dup2(orig_stderr_fno, 2)  # restore stderr

print('done.')
Should be easy to comment out a part you don't need.
Edit: This might help; I don't have time to look into it at the moment:
https://docs.python.org/3/library/contextlib.html#contextlib.redirect_stdout

How do I redirect stdout to an arbitrary file in Python?
When a long-running Python script (e.g., a web application) is started from within an ssh session and backgrounded, and the ssh session is closed, the application will raise IOError and fail the moment it tries to write to stdout. I needed to find a way to make the application and its modules output to a file rather than stdout to prevent failure due to IOError. Currently, I employ nohup to redirect output to a file, and that gets the job done, but I was wondering if there was a way to do it without using nohup, out of curiosity.
I have already tried sys.stdout = open('somefile', 'w'), but this does not seem to prevent some external modules from still outputting to the terminal (or maybe the sys.stdout = ... line did not fire at all). I know it should work from simpler scripts I've tested it on, but I haven't had time to test it on a web application yet.
If you want to do the redirection within the Python script, setting sys.stdout to a file object does the trick:
# for python3
import sys

with open('file', 'w') as sys.stdout:
    print('test')
A far more common method is to use shell redirection when executing (same on Windows and Linux):
$ python3 foo.py > file
There is the contextlib.redirect_stdout() function in Python 3.4+:
from contextlib import redirect_stdout

with open('help.txt', 'w') as f:
    with redirect_stdout(f):
        print('it now prints to `help.txt`')
It is similar to:
import sys
from contextlib import contextmanager

@contextmanager
def redirect_stdout(new_target):
    old_target, sys.stdout = sys.stdout, new_target  # replace sys.stdout
    try:
        yield new_target  # run some code with the replaced stdout
    finally:
        sys.stdout = old_target  # restore to the previous value
that can be used on earlier Python versions. The latter version is not reusable; it can be made reusable if desired (see the sketch below).
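A sketch of one way to make it reusable, mirroring how contextlib.redirect_stdout itself is implemented (the class name is mine): keep a stack of replaced targets instead of a single saved value.
import sys

class reusable_redirect_stdout:
    def __init__(self, new_target):
        self._new_target = new_target
        self._old_targets = []  # a stack, so the manager can be nested and reused

    def __enter__(self):
        self._old_targets.append(sys.stdout)  # save the current stdout
        sys.stdout = self._new_target
        return self._new_target

    def __exit__(self, exc_type, exc_value, traceback):
        sys.stdout = self._old_targets.pop()  # restore the previous stdout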
Note that redirect_stdout() doesn't redirect stdout at the file descriptor level, e.g.:
import os
import sys
from contextlib import redirect_stdout

stdout_fd = sys.stdout.fileno()
with open('output.txt', 'w') as f, redirect_stdout(f):
    print('redirected to a file')
    os.write(stdout_fd, b'not redirected')
    os.system('echo this also is not redirected')
b'not redirected' and 'echo this also is not redirected' are not redirected to the output.txt file.
To redirect at the file descriptor level, os.dup2() could be used:
import os
import sys
from contextlib import contextmanager

def fileno(file_or_fd):
    fd = getattr(file_or_fd, 'fileno', lambda: file_or_fd)()
    if not isinstance(fd, int):
        raise ValueError("Expected a file (`.fileno()`) or a file descriptor")
    return fd

@contextmanager
def stdout_redirected(to=os.devnull, stdout=None):
    if stdout is None:
        stdout = sys.stdout

    stdout_fd = fileno(stdout)
    # copy stdout_fd before it is overwritten
    # NOTE: `copied` is inheritable on Windows when duplicating a standard stream
    with os.fdopen(os.dup(stdout_fd), 'wb') as copied:
        stdout.flush()  # flush library buffers that dup2 knows nothing about
        try:
            os.dup2(fileno(to), stdout_fd)  # $ exec >&to
        except ValueError:  # filename
            with open(to, 'wb') as to_file:
                os.dup2(to_file.fileno(), stdout_fd)  # $ exec > to
        try:
            yield stdout  # allow code to be run with the redirected stdout
        finally:
            # restore stdout to its previous value
            # NOTE: dup2 makes stdout_fd inheritable unconditionally
            stdout.flush()
            os.dup2(copied.fileno(), stdout_fd)  # $ exec >&copied
The same example works now if stdout_redirected() is used instead of redirect_stdout():
import os
import sys

stdout_fd = sys.stdout.fileno()
with open('output.txt', 'w') as f, stdout_redirected(f):
    print('redirected to a file')
    os.write(stdout_fd, b'it is redirected now\n')
    os.system('echo this is also redirected')
print('this goes back to stdout')
The output that was previously printed on stdout now goes to output.txt as long as the stdout_redirected() context manager is active.
Note: stdout.flush() does not flush C stdio buffers on Python 3, where I/O is implemented directly on read()/write() system calls. To flush all open C stdio output streams, you could call libc.fflush(None) explicitly if some C extension uses stdio-based I/O:
try:
    import ctypes
    from ctypes.util import find_library
except ImportError:
    libc = None
else:
    try:
        libc = ctypes.cdll.msvcrt  # Windows
    except OSError:
        libc = ctypes.cdll.LoadLibrary(find_library('c'))

def flush(stream):
    try:
        libc.fflush(None)
        stream.flush()
    except (AttributeError, ValueError, IOError):
        pass  # unsupported
You could use the stdout parameter to redirect other streams, not only sys.stdout, e.g. to merge sys.stderr and sys.stdout:
def merged_stderr_stdout():  # $ exec 2>&1
    return stdout_redirected(to=sys.stdout, stdout=sys.stderr)
Example:
from __future__ import print_function
import sys

with merged_stderr_stdout():
    print('this is printed on stdout')
    print('this is also printed on stdout', file=sys.stderr)
Note: stdout_redirected() mixes buffered I/O (sys.stdout usually) and unbuffered I/O (operations on file descriptors directly). Beware: there could be buffering issues; the snippet below illustrates the pitfall.
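A minimal illustration of that pitfall: sys.stdout is buffered while os.write() is not, so output can appear out of order unless you flush first.
import os
import sys

print('first', end='')                      # sits in Python's internal buffer
os.write(sys.stdout.fileno(), b'second\n')  # bypasses the buffer, may appear first
sys.stdout.flush()                          # only now is 'first' actually written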
To answer your edit: you could use python-daemon to daemonize your script and use the logging module (as @erikb85 suggested) instead of print statements, rather than merely redirecting stdout for the long-running Python script that you currently run using nohup.
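A minimal sketch of that suggestion, using the python-daemon package together with logging ('app.log' is a placeholder):
import logging
import daemon  # pip install python-daemon

with daemon.DaemonContext():
    # the process is now detached from the terminal, so write through
    # logging instead of stdout
    logging.basicConfig(filename='app.log', level=logging.INFO)
    logging.info('long-running work happens here')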
You can also try this; I find it much better:
import sys

class Logger(object):
    def __init__(self, filename="Default.log"):
        self.terminal = sys.stdout
        self.log = open(filename, "a")

    def write(self, message):
        self.terminal.write(message)
        self.log.write(message)

sys.stdout = Logger("yourlogfilename.txt")
print "Hello world !"  # this should be saved in yourlogfilename.txt
The other answers didn't cover the case where you want forked processes to share your new stdout.
To do that:
from os import open, close, dup, O_WRONLY

old = dup(1)
close(1)
open("file", O_WRONLY)  # should open on 1

# ... do stuff, then restore:

close(1)
dup(old)    # should dup to 1
close(old)  # get rid of leftovers
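A quick way to check that child processes really inherit the new fd 1 (a sketch using the os module directly; "file" is the same placeholder name as above, with O_CREAT added so the file need not already exist):
import os

old = os.dup(1)
os.close(1)
os.open("file", os.O_WRONLY | os.O_CREAT)  # lowest free fd, so it lands on 1
os.system("echo a child process writes into file too")  # the child inherits fd 1
os.close(1)
os.dup(old)   # dup back onto fd 1
os.close(old)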
Quoted from PEP 343 -- The "with" Statement (added import statement):
Redirect stdout temporarily:
import sys
from contextlib import contextmanager

@contextmanager
def stdout_redirected(new_stdout):
    save_stdout = sys.stdout
    sys.stdout = new_stdout
    try:
        yield None
    finally:
        sys.stdout = save_stdout
Used as follows:
with open(filename, "w") as f:
    with stdout_redirected(f):
        print "Hello world"
This isn't thread-safe, of course, but neither is doing this same dance manually. In single-threaded programs (for example in scripts) it is a popular way of doing things.
import sys
sys.stdout = open('stdout.txt', 'w')
Here is a variation of Yuda Prawira's answer:
- implement flush() and all the file attributes
- write it as a contextmanager
- capture stderr as well
import contextlib
import sys

@contextlib.contextmanager
def log_print(file):
    # capture all outputs to a log file while still printing them
    class Logger:
        def __init__(self, file):
            self.terminal = sys.stdout
            self.log = file

        def write(self, message):
            self.terminal.write(message)
            self.log.write(message)

        def __getattr__(self, attr):
            return getattr(self.terminal, attr)

    logger = Logger(file)
    _stdout = sys.stdout
    _stderr = sys.stderr
    sys.stdout = logger
    sys.stderr = logger
    try:
        yield logger.log
    finally:
        sys.stdout = _stdout
        sys.stderr = _stderr


with log_print(open('mylogfile.log', 'w')):
    print('hello world')
    print('hello world on stderr', file=sys.stderr)

# you can capture the output to a string with:
# with log_print(io.StringIO()) as log:
#     ....
#     print('[captured output]', log.getvalue())
You need a terminal multiplexer like either tmux or GNU screen
I'm surprised that a small comment by Ryan Amos on the original question is the only mention of a solution far preferable to all the others on offer, no matter how clever the Python trickery may be and how many upvotes it has received. Further to Ryan's comment, tmux is a nice alternative to GNU screen.
But the principle is the same: if you ever find yourself wanting to leave a terminal job running while you log out, head to the cafe for a sandwich, pop to the bathroom, go home (etc.), and then later reconnect to your terminal session from anywhere or any computer as though you'd never been away, terminal multiplexers are the answer. Think of them as VNC or remote desktop for terminal sessions. Anything else is a workaround. As a bonus, when the boss and/or partner comes in and you inadvertently ctrl-w / cmd-w your terminal window instead of your browser window with its dodgy content, you won't have lost the last 18 hours' worth of processing!
Based on this answer: https://stackoverflow.com/a/5916874/1060344, here is another way I figured out, which I use in one of my projects. Whatever you replace sys.stderr or sys.stdout with, you have to make sure that the replacement complies with the file interface, especially if you are doing this because stderr/stdout are used by some other library that is not under your control. That library may be using other methods of the file object.
Check out this way, where I still let everything go to stderr/stdout (or any file for that matter) and also send the message to a log file using Python's logging facility (but you can really do anything with this):
import logging

class FileToLogInterface(file):
    '''
    Interface to make sure that every time anything is written to stderr,
    it is also forwarded to a file.
    '''

    def __init__(self, *args, **kwargs):
        if 'cfg' not in kwargs:
            raise TypeError('argument cfg is required.')
        else:
            if not isinstance(kwargs['cfg'], config.Config):
                raise TypeError(
                    'argument cfg should be a valid '
                    'PostSegmentation configuration object i.e. '
                    'postsegmentation.config.Config')
        self._cfg = kwargs['cfg']
        kwargs.pop('cfg')
        self._logger = logging.getLogger('access_log')
        super(FileToLogInterface, self).__init__(*args, **kwargs)

    def write(self, msg):
        super(FileToLogInterface, self).write(msg)
        self._logger.info(msg)
Programs written in other languages (e.g. C) have to do special magic (called double-forking) expressly to detach from the terminal (and to prevent zombie processes). So, I think the best solution is to emulate them.
A plus of re-executing your program is that you can choose redirections on the command line, e.g. /usr/bin/python mycoolscript.py 2>&1 1>/dev/null
See this post for more info: What is the reason for performing a double fork when creating a daemon?
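A minimal sketch of the classic double fork (UNIX only; error handling is omitted and 'daemon.log' is a placeholder):
import os
import sys

def daemonize():
    if os.fork() > 0:    # first fork: the parent returns to the shell
        sys.exit(0)
    os.setsid()          # become leader of a new session, with no controlling tty
    if os.fork() > 0:    # second fork: the session leader exits, so the
        sys.exit(0)      # grandchild can never reacquire a terminal
    # redirect the standard fds so later writes cannot raise IOError
    with open(os.devnull, 'rb') as devnull:
        os.dup2(devnull.fileno(), 0)
    with open('daemon.log', 'ab') as log:
        os.dup2(log.fileno(), 1)
        os.dup2(log.fileno(), 2)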
I know this question is answered (using python abc.py > output.log 2>&1), but I still have to say:
When writing your program, don't write to stdout. Always use logging to output whatever you want. That will give you a lot of freedom in the future when you want to redirect, filter, or rotate the output files.
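For example, here is a minimal logging setup that can later be pointed at a rotating file without touching any call sites ('output.log' is a placeholder):
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
# swap this handler for StreamHandler, SysLogHandler, etc. without
# changing any of the logger.info(...) call sites
handler = RotatingFileHandler('output.log', maxBytes=1000000, backupCount=3)
logger.addHandler(handler)

logger.info('this goes to output.log instead of stdout')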
As mentioned by @jfs, most solutions will not properly handle some types of stdout output, such as that from C extensions. There is a module on PyPI that takes care of all this, called wurlitzer. You just need its sys_pipes context manager. It's as easy as:
from contextlib import redirect_stdout
import os
from wurlitzer import sys_pipes

log = open("test.log", "a")
with redirect_stdout(log), sys_pipes():
    print("print statement")
    os.system("echo echo call")
Based on previous answers in this post, I wrote this class for myself as a more compact and flexible way of redirecting the output of pieces of code (here, just to a list) and ensuring that stdout is restored afterwards:
class out_to_lt():
    def __init__(self, lt):
        if type(lt) == list:
            self.lt = lt
        else:
            raise Exception("Need to pass a list")

    def __enter__(self):
        import sys
        self._sys = sys
        self._stdout = sys.stdout
        sys.stdout = self
        return self

    def write(self, txt):
        self.lt.append(txt)

    def __exit__(self, type, value, traceback):
        self._sys.stdout = self._stdout
Used as:
lt = []
with out_to_lt(lt) as o:
    print("Test 123\n\n")
    print(help(str))
Updating. Just found a scenario where I had to add two extra methods, but it was easy to adapt:
class out_to_lt():
    ...
    def isatty(self):
        # True: running in a real terminal; False: being piped, redirected, or run from cron
        return True

    def flush(self):
        pass
There are other versions using a context manager, but nothing this simple. I actually just googled to double-check that it would work, and was surprised not to see it, so for other people looking for a quick solution that is safe and affects only the code within the context block, here it is:
import sys

with open('test_file', 'w') as sys.stdout:
    print('Testing 1 2 3')
Tested like so:
$ cat redirect_stdout.py
import sys
with open('test_file', 'w') as sys.stdout:
    print('Testing 1 2 3')
$ python redirect_stdout.py
$ cat test_file
Testing 1 2 3

How to take input from stdin, display something using curses and output to stdout?

I'm trying to make a Python script that takes input from stdin, displays a GUI in the terminal using curses, and then, when the user finishes interacting, outputs the result to stdout. A good example of this behaviour is selecta, but it's written in Ruby.
I can't make curses display anything. This is a minimal example (it only displays one character and waits for one character) of what I have tried so far:
import os, sys
import curses

c = None

old_out = sys.__stdout__
old_in = sys.__stdin__
old_err = sys.__stderr__

sys.__stdout__ = sys.stdout = open('/dev/tty', 'w')
sys.__stdin__ = sys.stdin = open('/dev/tty')
sys.__stderr__ = sys.stderr = open('/dev/tty')

def show_a(s):
    global c
    s.addch(ord('a'))
    c = s.getch()

curses.wrapper(show_a)

sys.stdin.flush()
sys.stdout.flush()
sys.stderr.flush()
sys.stdin.close()
sys.stdout.close()
sys.stderr.close()

sys.__stdout__ = sys.stdout = old_out
sys.__stdin__ = sys.stdin = old_in
sys.__stderr__ = sys.stderr = old_err

print(c)
When I try to use echo $(python3 show_a.py), nothing is displayed, but after pressing any key its number is displayed.
Is something like this even possible using curses, and if so, how do I do it?
It doesn't work because the print statement is writing to the same standard output as curses.wrapper. You can either defer that print until after you have restored sys.stdout, or you could use the file= keyword argument, something like this:
print(s.getch(), file=old_out)
For the other (ordering) problem, it sounds as if you need to amend the code to do a refresh after the getch (to make curses display it), and depending on what version of curses, a flush of the standard output would then be helpful.
Further reading:
Temporarily Redirect stdout/stderr

How do I get sys.stdout to be monitored by GObject.io_add_watch?

code:
GObject.io_add_watch(os.fdopen(self.builder), GObject.IO_IN, self.write_to_buffer)
I tried to get the stdout stream of my main GTK process via os.fdopen(self.builder),
yet it raises the exception:
TypeError: an integer is required
Could you please explain how to properly tell GObject.io_add_watch to watch the sys.stdout stream?
Thanks!
The API will let you pass sys.stdout directly, since it's a file-like object with a fileno() method. It won't work right, though, because you can't read from sys.stdout. If you want to capture all attempts to write to stdout, you can replace it with a pipe:
import gobject
import sys
import os

def ok(stream, condition):
    s = os.fdopen(stream)
    data = s.readline()
    with open("called.txt", "w") as f:
        f.write("got {} ".format(data))
    return False

def do_print():
    print "hey there"
    sys.stdout.flush()

rpipe, wpipe = os.pipe()
wpipe = os.fdopen(wpipe, "w", 0)
sys.stdout = wpipe  # replace sys.stdout

gobject.io_add_watch(rpipe, gobject.IO_IN, ok)
gobject.idle_add(do_print)
gobject.MainLoop().run()
Contents of "called.txt" after running the script:
got hey there

writing the console output to a file in python [duplicate]

How do I redirect stdout to an arbitrary file in Python?

Temporarily Redirect stdout/stderr

Is it possible to temporarily redirect stdout/stderr in Python (i.e. for the duration of a method)?
Edit:
The problem with the current solutions (which I at first remembered but then forgot) is that they don't redirect; rather, they just replace the streams in their entirety. Hence, if a method has a local copy of one of the variables for any reason (e.g. because the stream was passed as a parameter to something), it won't work.
Any solutions?
You can also put the redirection logic in a context manager.
import os
import sys

class RedirectStdStreams(object):
    def __init__(self, stdout=None, stderr=None):
        self._stdout = stdout or sys.stdout
        self._stderr = stderr or sys.stderr

    def __enter__(self):
        self.old_stdout, self.old_stderr = sys.stdout, sys.stderr
        self.old_stdout.flush(); self.old_stderr.flush()
        sys.stdout, sys.stderr = self._stdout, self._stderr

    def __exit__(self, exc_type, exc_value, traceback):
        self._stdout.flush(); self._stderr.flush()
        sys.stdout = self.old_stdout
        sys.stderr = self.old_stderr

if __name__ == '__main__':
    devnull = open(os.devnull, 'w')
    print('Fubar')

    with RedirectStdStreams(stdout=devnull, stderr=devnull):
        print("You'll never see me")

    print("I'm back!")
Starting from Python 3.4 there is the context manager contextlib.redirect_stdout:
from contextlib import redirect_stdout

with open('yourfile.txt', 'w') as f:
    with redirect_stdout(f):
        ...  # do stuff
To completely silence stdout, this works:
from contextlib import redirect_stdout

with redirect_stdout(None):
    ...  # do stuff
To solve the issue that some function might have cached the sys.stdout stream as a local variable (and therefore replacing the global sys.stdout won't work inside that function), you could redirect at the file descriptor level (sys.stdout.fileno()), e.g.:
from __future__ import print_function
import os
import sys

def some_function_with_cached_sys_stdout(stdout=sys.stdout):
    print('cached stdout', file=stdout)

with stdout_redirected(to=os.devnull), merged_stderr_stdout():
    print('stdout goes to devnull')
    some_function_with_cached_sys_stdout()
    print('stderr also goes to stdout that goes to devnull', file=sys.stderr)

print('stdout is back')
some_function_with_cached_sys_stdout()
print('stderr is back', file=sys.stderr)
stdout_redirected() redirects all output for sys.stdout.fileno() to a given filename, file object, or file descriptor (os.devnull in the example).
stdout_redirected() and merged_stderr_stdout() are defined here.
I am not sure what temporary redirection means here, but you can reassign the streams like this and reset them back:
temp = sys.stdout
sys.stdout = sys.stderr
sys.stderr = temp
You can also write to sys.stderr within print statements like this:
print >> sys.stderr, "Error in atexit._run_exitfuncs:"
A regular print will go to stdout.
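On Python 3, the same thing is spelled with the file= keyword:
import sys

print("Error in atexit._run_exitfuncs:", file=sys.stderr)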
It's possible with a decorator such as the following:
import sys

def redirect_stderr_stdout(stderr=sys.stderr, stdout=sys.stdout):
    def wrap(f):
        def newf(*args, **kwargs):
            old_stderr, old_stdout = sys.stderr, sys.stdout
            sys.stderr = stderr
            sys.stdout = stdout
            try:
                return f(*args, **kwargs)
            finally:
                sys.stderr, sys.stdout = old_stderr, old_stdout
        return newf
    return wrap
Use as:
@redirect_stderr_stdout(some_logging_stream, the_console)
def fun(...):
    # whatever
or, if you don't want to modify the source for fun, call it directly as
redirect_stderr_stdout(some_logging_stream, the_console)(fun)
But note that this is not thread-safe.
Here's a context manager that I found useful. The nice things about this are that you can use it with the with statement and it also handles redirecting for child processes.
import contextlib
import os

@contextlib.contextmanager
def stdchannel_redirected(stdchannel, dest_filename):
    """
    A context manager to temporarily redirect stdout or stderr

    e.g.:

    with stdchannel_redirected(sys.stderr, os.devnull):
        ...
    """
    oldstdchannel = dest_file = None
    try:
        oldstdchannel = os.dup(stdchannel.fileno())
        dest_file = open(dest_filename, 'w')
        os.dup2(dest_file.fileno(), stdchannel.fileno())
        yield
    finally:
        if oldstdchannel is not None:
            os.dup2(oldstdchannel, stdchannel.fileno())
        if dest_file is not None:
            dest_file.close()
The context for why I created this is at this blog post.
Raymond Hettinger shows us a better way [1]:
import sys

with open(filepath + filename, "w") as f:  # replace filepath & filename
    with f as sys.stdout:
        print("print this to file")  # will be written to the given path
After the with block, sys.stdout will be reset.
[1]: http://www.youtube.com/watch?v=OSGv2VnC0go&list=PLQZM27HgcgT-6D0w6arhnGdSHDcSmQ8r3
Look at contextlib.redirect_stdout(new_target) and contextlib.redirect_stderr(new_target). redirect_stderr is new in Python 3.5.
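A short example of the stderr counterpart, capturing into a string:
import io
import sys
from contextlib import redirect_stderr

buf = io.StringIO()
with redirect_stderr(buf):
    print('warning: something happened', file=sys.stderr)
print('captured:', buf.getvalue(), end='')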
We'll mimic the PHP-style ob_start and ob_get_contents functions in Python 3, redirecting the output into a file.
The output is stored in a file, but any type of stream could be used as well.
from functools import partial

output_buffer = None
print_orig = print

def ob_start(fname="print.txt"):
    global print
    global output_buffer
    output_buffer = open(fname, 'w')  # open first, so partial binds a real file
    print = partial(print_orig, file=output_buffer)

def ob_end():
    global print
    global output_buffer
    output_buffer.close()
    print = print_orig

def ob_get_contents(fname="print.txt"):
    return open(fname, 'r').read()
Usage:
print ("Hi John")
ob_start()
print ("Hi John")
ob_end()
print (ob_get_contents().replace("Hi", "Bye"))
Would print
Hi John
Bye John
