I use an internal library that prints a lot (a single script can print 40,000 lines in total), and I suspect this has a bad impact on performance. The library is developed by another team in my company and does a lot of calculations; they print to debug errors (I know this is not a good habit, but it's too late to change, since about 100 scripts are already in production), and I'm developing a script that uses those 100 scripts to produce its result.
How can I turn all this printing off?
I'm not asking how to print these lines to a file, but how to omit them completely.
Replace sys.stdout with an object that swallows the output:

import sys

class Null:
    def write(self, text):
        pass

    def flush(self):
        pass

print("One")  # This gets output OK
old_stdout = sys.stdout
sys.stdout = Null()
print("Two")  # This disappears
sys.stdout = old_stdout
print("Three")  # Output, back to normal
The best way is simply to remove the print statements, as that has no overhead whatsoever.
Alternatively, you can redirect the output to /dev/null, which effectively removes the I/O overhead but does not remove the syscall.
To spare the syscall as well, you can replace sys.stdout with a writer that does nothing. For example:
import sys

class NullWriter:
    def write(self, s):
        pass

sys.stdout = NullWriter()
In case you're using Python 3, you can shadow the built-in print function, as seen in this answer:

def print(*args, **kwargs):
    pass
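The /dev/null redirection mentioned above can also be done from inside Python, scoped to a single block, using contextlib.redirect_stdout (available since Python 3.4). A minimal sketch:

```python
import contextlib
import os

# Discard stdout only while the noisy code runs.
with open(os.devnull, "w") as devnull:
    with contextlib.redirect_stdout(devnull):
        print("this line is discarded")

print("this line appears normally")
```

Note that this still performs the write syscall against /dev/null; the NullWriter approach above avoids even that.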
Related
When I run the following code, no error messages are printed, and it seems to fail silently. I expected the string "BLAH" to be printed to the console.
from contextlib import redirect_stdout
import io

def run_solver():
    print("solver is running")

with io.StringIO() as fake_std_out:
    with redirect_stdout(fake_std_out):
        print("BLAH")  # THIS IS NEVER PRINTED
        run_solver()
        data = fake_std_out.getvalue()
        print(data)
print("THE END")
The output I expect is:
BLAH
solver is running
THE END
Instead, we have:
THE END
Edits
I realize now that I wanted to copy standard output, not re-direct it.
Using print to display the contents of the string stream won't work, because while the redirection is active the destination of print is the string stream itself rather than the system console; after calling getvalue() inside the with block, printing the result just writes it back into the stream.
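To copy standard output rather than redirect it, one option is a small tee-style writer that forwards each write to both the real stdout and a capture buffer. This Tee class is a sketch, not a standard-library facility:

```python
import io
import sys

class Tee:
    """Forwards writes to the real stdout and to a capture buffer."""
    def __init__(self, buffer):
        self.buffer = buffer

    def write(self, text):
        sys.__stdout__.write(text)  # still reaches the console
        self.buffer.write(text)     # also kept for later inspection

    def flush(self):
        sys.__stdout__.flush()

capture = io.StringIO()
old_stdout = sys.stdout
sys.stdout = Tee(capture)
print("BLAH")            # appears on the console AND in the buffer
sys.stdout = old_stdout
assert capture.getvalue() == "BLAH\n"
```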
The code is working fine. print writes to standard output, not to the console directly. You can inspect the values written to fake_std_out yourself after the inner with statement:
from contextlib import redirect_stdout
import io

def run_solver():
    print("solver is running")

with io.StringIO() as fake_std_out:
    with redirect_stdout(fake_std_out):
        print("BLAH")
        run_solver()
    assert fake_std_out.getvalue() == 'BLAH\nsolver is running\n'
print("THE END")
Long story short, I am writing Python code that occasionally causes an underlying module to print complaints to the terminal, and I want my code to respond to them. My question is whether there is some way I can capture all terminal output as a string while the program is running, so that I can parse it and execute some handler code. These are not errors that crash the program entirely, and it's not a situation where I can simply do a try/except. Thanks for any help!
Edit: Running on Linux
There are several solutions to your need. The easiest would be to use a shared buffer of sorts and send all your package's output there instead of to stdout (with regular print), thus keeping your personal streams under your package's control.
Since you probably already have code using print, or you want it to work with minimal change, I suggest using contextlib.redirect_stdout as a context manager.
Give it a shared io.StringIO instance and wrap all your methods with it.
You can even create a decorator to do it automatically.
Something like:

# decorator
from contextlib import redirect_stdout
import io
import functools

SHARED_BUFFER = io.StringIO()

def std_redirecter(func):
    @functools.wraps(func)
    def inner(*args, **kwargs):
        with redirect_stdout(SHARED_BUFFER):
            print('foo')
            print('bar')
            return func(*args, **kwargs)
    return inner

# your files
@std_redirecter
def writing_to_stdout_func():
    print('baz')

# invocation
writing_to_stdout_func()
string = SHARED_BUFFER.getvalue()  # 'foo\nbar\nbaz\n'
I am using a module that, when it hits an error, just prints that error and continues the script. I would like to put that error into a variable, but because of this behaviour I can't just do except Exception as e, so I'm looking for a way to put the previously printed line into a variable.
Note: I tried looking in the module for where it prints this, but couldn't find it.
Well, it ain't pretty, but you could try to hijack sys.stdout.write() (or stderr, depending on where your script writes to), hoping the write does not do anything funny.
This could be one way to do so, as ugly as it may be:
import sys

class Wrap:
    def __init__(self):
        self.last_line = None
        self._next_line = ''

    def write(self, text, *args, **kwargs):
        sys.__stdout__.write(text, *args, **kwargs)
        self._next_line += text
        try:
            self.last_line = self._next_line.split('\n')[-2]
            self._next_line = self._next_line.split('\n')[-1]
        except IndexError:
            # We did not have a \n yet, so _next_line did not split
            # into at least two items
            pass

save_stdout = sys.stdout
sys.stdout = Wrap()
print('xxx\nzzz')  # This was that function you wanted to call
last_line = sys.stdout.last_line
sys.stdout = save_stdout
print(last_line)
This will give you zzz as output, i.e. the last complete line printed (without its trailing newline) while sys.stdout was our wrapper.
You can of course just write a function wrapper and use that to formalize the hack a bit.
I'm working on an open source python library that uses a verbose_print command to log outputs in the console. Currently it looks like this:
import sys

def sys_write_flush(s):
    """ Writes and flushes a text to the console without delay """
    sys.stdout.write(s)
    sys.stdout.flush()

def verbose_print(verbose, s):
    """ Only prints s (with sys_write_flush) if verbose is True."""
    if verbose:
        sys_write_flush(s)
I proposed a change that looks like this:
def verbose_print(verbose, *args):
    """ Prints everything passed except the first argument if verbose is True."""
    if verbose:
        print(*args)
Apart from the fact that it fails on Python 2 (bonus points for fixing this!), I thought this would be better and more idiomatic. The advantage is that you can treat verbose_print exactly like print, except that its first argument has to be True or False.
The repo owner replied with this message:
I should have documented this one, but basically the issue was that in
some consoles (and in the IPython notebook, at least at the time),
"print" commands get delayed, while stdout.flush are instantaneous, so
my method was better at providing feedback.
I would be against changing it to print unless it solves some known
issues.
Is this still a valid concern? Would print() followed by sys.stdout.flush() avoid the delay? Are there any better ways to write this?
Quote from the docs:
print evaluates each expression in turn and writes the resulting
object to standard output.
Standard output is defined as the file object named stdout in the
built-in module sys. If no such object exists, or if it does not
have a write() method, a RuntimeError exception is raised.
According to this, print writes to sys.stdout, so yes: calling sys.stdout.flush() after a print has the same effect as flushing after a sys.stdout.write().
The syntax print(*a) fails in Python 2 because print isn't a function there but a statement, and the fun(*stuff) construct is only applicable to functions.
In Python 3, print(*a) passes whatever a contains to the print function as separate arguments, which is equivalent to printing one joined string:

separator = ' '
print separator.join(map(str, iterable))
So, your code could look like this:

import sys

def verbose_print(verbose, *args):
    """ Prints everything passed except the first argument if verbose is True."""
    if verbose:
        print " ".join(map(str, args))
        sys.stdout.flush()
Although I don't see why this can be faster or more readable than the original.
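One thing worth noting: since Python 3.3, print itself accepts a flush keyword argument, so on Python 3 the manual flush can be dropped entirely. A sketch:

```python
def verbose_print(verbose, *args):
    """Prints everything except the first argument if verbose is True."""
    if verbose:
        print(*args, flush=True)  # flush=True forces the stream to flush immediately

verbose_print(True, "progress:", 42)  # printed and flushed right away
verbose_print(False, "hidden")        # suppressed
```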
I'm trying to replace an ad-hoc logging system with Python's logging module. I'm using the logging system to output progress information for a long task on a single line so you can tail the log or watch it in a console. I've done this by having a flag on my logging function which suppresses the newline for that log message and build the line piece by piece.
All the logging is done from a single thread so there's no serialisation issues.
Is it possible to do this with Python's logging module? Is it a good idea?
If you want to do this, you can change the logging handler's terminator. I'm using Python 3.4; the attribute was introduced in Python 3.2, as stated by Ninjakannon.
handler = logging.StreamHandler()
handler.terminator = ""
When the StreamHandler writes it writes the terminator last.
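A minimal end-to-end sketch of this (the logger name and messages are illustrative):

```python
import logging
import sys

logger = logging.getLogger("progress")  # illustrative name
handler = logging.StreamHandler(sys.stdout)
handler.terminator = ""                 # suppress the trailing newline
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("working")
logger.info(".")
logger.info(".")
logger.info(" done\n")
# stdout now shows "working.. done" on a single line
```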
Let's start with your last question: No, I do not believe it's a good idea.
IMO, it hurts the readability of the logfile in the long run.
I suggest sticking with the logging module and using the -f option on your tail command to watch the output from the console. You will probably end up using the FileHandler. Note that the default for the delay argument is False, meaning the log file is opened immediately rather than deferred until the first write.
If you really needed to suppress newlines, I would recommend creating your own Handler.
The new line, \n, is inserted inside the StreamHandler class.
If you're really set on fixing this behaviour, then here's an example of how I solved this by monkey patching the emit(self, record) method inside the logging.StreamHandler class.
A monkey patch is a way to extend or modify the run-time code of dynamic languages without altering the original source code. This process has also been termed duck punching.
Here is the custom implementation of emit() that omits line breaks:
import types

def customEmit(self, record):
    # Monkey patched emit function to avoid new lines between records
    try:
        msg = self.format(record)
        if not hasattr(types, "UnicodeType"):  # if no unicode support...
            self.stream.write(msg)
        else:
            try:
                if getattr(self.stream, 'encoding', None) is not None:
                    self.stream.write(msg.encode(self.stream.encoding))
                else:
                    self.stream.write(msg)
            except UnicodeError:
                self.stream.write(msg.encode("UTF-8"))
        self.flush()
    except (KeyboardInterrupt, SystemExit):
        raise
    except:
        self.handleError(record)
Then you would make a custom logging class (in this case, subclassing TimedRotatingFileHandler):

from logging import StreamHandler
from logging.handlers import TimedRotatingFileHandler

class SniffLogHandler(TimedRotatingFileHandler):
    def __init__(self, filename, when, interval, backupCount=0,
                 encoding=None, delay=0, utc=0):
        # Monkey patch the 'emit' method
        setattr(StreamHandler, StreamHandler.emit.__name__, customEmit)
        TimedRotatingFileHandler.__init__(self, filename, when, interval,
                                          backupCount, encoding, delay, utc)
Some people might argue that this type of solution is not Pythonic, or whatever. That may be so, so be careful.
Also, be aware that this globally patches StreamHandler.emit(...), so if you are using multiple logging classes, the patch will affect the other logging classes as well!
Check out these for further reading:
What is monkey-patching?
Is monkeypatching considered good programming practice?
Monkeypatching For Humans
Hope that helps.
Python 3.5.9

import logging

class MFileHandler(logging.FileHandler):
    """Handler that controls the writing of the newline character"""

    special_code = '[!n]'

    def emit(self, record) -> None:
        if self.special_code in record.msg:
            record.msg = record.msg.replace(self.special_code, '')
            self.terminator = ''
        else:
            self.terminator = '\n'
        return super().emit(record)
Then
fHandler = MFileHandler(...)
Example:
# without \n
log.info( 'waiting...[!n]' )
...
log.info( 'OK' )
# with \n
log.info( 'waiting...' )
...
log.info( 'OK' )
log.txt:
waiting...OK
waiting...
OK
I encountered a need to log a certain section on a single line as I iterated through a tuple, but wanted to retain the overall logger.
I collected the output into a single string first and sent it to the logger once I was out of the section. An example of the concept:

debugLine = ''
for fld in obj._fields:
    strX = ' {} --> {} '.format(fld, formattingFunction(getattr(obj, fld)))
    debugLine += strX
logger.debug(debugLine)