How to implement the --verbose or -v option into a script? - python

I know the --verbose or -v from several tools and I'd like to implement this into some of my own scripts and tools.
I thought of placing:
if verbose:
    print ...
through my source code, so that if a user passes the -v option, the variable verbose will be set to True and the text will be printed.
Is this the right approach or is there a more common way?
Addition: I am not asking for a way to implement the parsing of arguments. I know how that is done. I am only interested specifically in the verbose option.

My suggestion is to use a function. But rather than putting the if in the function, which you might be tempted to do, do it like this:
if verbose:
    def verboseprint(*args):
        # Print each argument separately so caller doesn't need to
        # stuff everything to be printed into a single string
        for arg in args:
            print arg,
        print
else:
    verboseprint = lambda *a: None      # do-nothing function
(Yes, you can define a function in an if statement, and it'll only get defined if the condition is true!)
If you're using Python 3, where print is already a function (or if you're willing to use print as a function in 2.x using from __future__ import print_function) it's even simpler:
verboseprint = print if verbose else lambda *a, **k: None
This way, the function is defined as a do-nothing if verbose mode is off (using a lambda), instead of constantly testing the verbose flag.
If the user could change the verbosity mode during the run of your program, this would be the wrong approach (you'd need the if in the function), but since you're setting it with a command-line flag, you only need to make the decision once.
You then use e.g. verboseprint("look at all my verbosity!", object(), 3) whenever you want to print a "verbose" message.
If you are willing and able to use the Python -O flag to turn verbosity on and off when launching the "production" version of the script (or set the PYTHONOPTIMIZE environment variable) then the better way is to test the __debug__ flag everywhere you want to print a verbose output:
if __debug__: print("Verbosity enabled")
When run without optimization, these statements are executed (and the if is stripped out, leaving only the body of the statement). When run with optimization, Python actually strips those statements out entirely. They have no performance impact whatsoever! See my blog post on __debug__ and -O for a more in-depth discussion.
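For illustration, a minimal sketch of that behaviour (the file name is just an example):

# verbose_demo.py
if __debug__:
    print("Verbosity enabled")
print("Always printed")

Running python verbose_demo.py prints both lines; running python -O verbose_demo.py prints only the second, because the __debug__ block is stripped when the bytecode is compiled with optimization.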

Use the logging module:
import logging as log
…
args = p.parse_args()
if args.verbose:
    log.basicConfig(format="%(levelname)s: %(message)s", level=log.DEBUG)
    log.info("Verbose output.")
else:
    log.basicConfig(format="%(levelname)s: %(message)s")

log.info("This should be verbose.")
log.warning("This is a warning.")
log.error("This is an error.")
All of these automatically go to stderr:
% python myprogram.py
WARNING: This is a warning.
ERROR: This is an error.
% python myprogram.py -v
INFO: Verbose output.
INFO: This should be verbose.
WARNING: This is a warning.
ERROR: This is an error.
For more info, see the Python Docs and the tutorials.
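For reference, here is a self-contained sketch of the same pattern; the answer elides the parser setup, so the argparse lines below are only one reasonable way to fill it in:

import argparse
import logging as log

p = argparse.ArgumentParser()
p.add_argument("-v", "--verbose", action="store_true", help="enable verbose output")
args = p.parse_args()

if args.verbose:
    log.basicConfig(format="%(levelname)s: %(message)s", level=log.DEBUG)
    log.info("Verbose output.")
else:
    log.basicConfig(format="%(levelname)s: %(message)s")

log.info("This should be verbose.")
log.warning("This is a warning.")
log.error("This is an error.")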

Building on and simplifying @kindall's answer, here's what I typically use:
import argparse

v_print = None

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('-v', '--verbosity', action="count",
                        help="increase output verbosity (e.g., -vv is more than -v)")
    args = parser.parse_args()
    if args.verbosity:
        def _v_print(*verb_args):
            if verb_args[0] > (3 - args.verbosity):
                print verb_args[1]
    else:
        _v_print = lambda *a: None  # do-nothing function

    global v_print
    v_print = _v_print

if __name__ == '__main__':
    main()
This then provides the following usage throughout your script:
v_print(1, "INFO message")
v_print(2, "WARN message")
v_print(3, "ERROR message")
And your script can be called like this:
% python verbose-tester.py -v
ERROR message
% python verbose-tester.py -vv
WARN message
ERROR message
% python verbose-tester.py -vvv
INFO message
WARN message
ERROR message
A couple notes:
Your first argument is your error level, and the second is your message. It has the magic number of 3 that sets the upper bound for your logging, but I accept that as a compromise for simplicity.
If you want v_print to work throughout your program, you have to do the junk with the global. It's no fun, but I challenge somebody to find a better way.

What I do in my scripts is check at runtime if the 'verbose' option is set, and then set my logging level to debug. If it's not set, I set it to info. This way you don't have 'if verbose' checks all over your code.
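A minimal sketch of that approach, assuming an argparse -v flag:

import argparse
import logging

parser = argparse.ArgumentParser()
parser.add_argument("-v", "--verbose", action="store_true")
args = parser.parse_args()

# Decide the level once at startup; no 'if verbose' checks later
logging.basicConfig(level=logging.DEBUG if args.verbose else logging.INFO)

logging.debug("only shown with -v")
logging.info("always shown")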

I stole the logging code from virtualenv for a project of mine. Look in main() of virtualenv.py to see how it's initialized. The code is sprinkled with logger.notify(), logger.info(), logger.warn(), and the like. Which methods actually emit output is determined by whether virtualenv was invoked with -v, -vv, -vvv, or -q.

It might be cleaner if you have a function, say called vprint, that checks the verbose flag for you. Then you just call your own vprint function any place you want optional verbosity.
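A minimal sketch of such a vprint, assuming a module-level verbose flag set during argument parsing:

verbose = False  # set to True from your -v option

def vprint(*args, **kwargs):
    # Checks the flag for the caller, so call sites stay clean
    if verbose:
        print(*args, **kwargs)

vprint("loaded", 42, "records")  # prints only when verbose is True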

@kindall's solution does not work with my Python version 3.5. @styles correctly states in his comment that the reason is the additional optional keyword arguments. Hence my slightly refined version for Python 3 looks like this:
if VERBOSE:
    def verboseprint(*args, **kwargs):
        print(*args, **kwargs)
else:
    verboseprint = lambda *a, **k: None  # do-nothing function

There could be a global variable, likely set with argparse from sys.argv, that indicates whether the program should be verbose or not.
Then a decorator could be written such that, if verbosity is off, standard output is redirected into the null device for as long as the function runs:
import os
from contextlib import redirect_stdout

verbose = False

def louder(f):
    def loud_f(*args, **kwargs):
        if not verbose:
            with open(os.devnull, 'w') as void:
                with redirect_stdout(void):
                    return f(*args, **kwargs)
        return f(*args, **kwargs)
    return loud_f

@louder
def foo(s):
    print(s*3)

foo("bar")
This answer is inspired by this code; actually, I was going to just use it as a module in my program, but I got errors I couldn't understand, so I adapted a portion of it.
The downside of this solution is that verbosity is binary, unlike with logging, which allows for finer-tuning of how verbose the program can be.
Also, all print calls are diverted, which might be unwanted.

What I need is a function which prints an object (obj), but only if global variable verbose is true, else it does nothing.
I want to be able to change the global parameter "verbose" at any time. Simplicity and readability to me are of paramount importance. So I would proceed as the following lines indicate:
ak#HP2000:~$ python3
Python 3.4.3 (default, Oct 14 2015, 20:28:29)
[GCC 4.8.4] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> verbose = True
>>> def vprint(obj):
...     if verbose:
...         print(obj)
...     return
...
>>> vprint('Norm and I')
Norm and I
>>> verbose = False
>>> vprint('I and Norm')
>>>
Global variable "verbose" can be set from the parameter list, too.
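For example, a minimal sketch of setting it from the command line (the option names here are just the usual convention):

import sys

verbose = "-v" in sys.argv or "--verbose" in sys.argv
vprint("only shown when -v or --verbose was passed")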

Related

Disable pytest warnings capture in a single test

I generally like the pytest warnings capture hook, as I can use it to force my test suite to not have any warnings triggered. However, I have one test that requires the warnings to print to stderr correctly to work.
How can I disable the warnings capture for just the one test?
For instance, something like
import sys
import warnings
from io import StringIO

def test_warning():
    mystderr = StringIO()
    sys.stderr = mystderr
    warnings.warn('warning')
    assert 'UserWarning: warning' in mystderr.getvalue()
(I know I can use capsys, I just want to show the basic idea)
Thanks to the narrowing down in this discussion, I think the question might better be titled "In pytest, how to capture warnings and their standard error output in a single test?". Given that suggested rewording, I think the answer is "it can't, you need a separate test".
If there were no standard error capture requirement, you should be able to use the @pytest.mark.filterwarnings decorator for this.
@pytest.mark.filterwarnings("ignore")
def test_one():
    assert api_v1() == 1
From:
https://docs.pytest.org/en/latest/warnings.html#pytest-mark-filterwarnings
@wim points out in a comment that this will not capture the warning, though, and the answer he lays out captures and asserts on the warnings in a standard way.
If there were stderr output but not Python warnings thrown, capsys would be the technique, as you say
https://docs.pytest.org/en/latest/capture.html
I don't think it's meaningful to do both in a pytest test, because of the nature of the pytest implementation.
As previously noted pytest redirects stderr etc to an internal recorder. Secondly, it defines its own warnings handler
https://github.com/pytest-dev/pytest/blob/master/src/_pytest/warnings.py#L59
It is similar in idea to the answer to this question:
https://stackoverflow.com/a/5645133/5729872
I had a little poke around with redefining warnings.showwarning(), which worked fine from vanilla python, but pytest deliberately reinitializes that as well.
The following works in straight Python, but won't work under pytest:

import sys
import warnings

def func(x):
    warnings.warn('wwarn')
    print(warnings.showwarning.__doc__)
    # print('ewarn', file=sys.stderr)
    return x + 1

sworig = warnings.showwarning

def showwarning_wrapper(message, category, filename, lineno, file=None, line=None):
    """Local override for showwarning()"""
    print('swwrapper({})'.format(file))
    sworig(message, category, filename, lineno, file, line)

warnings.showwarning = showwarning_wrapper
You could probably put a warnings handler in your test case that re-outputs to stderr ... but that doesn't prove much about the code under test at that point.
It is your system at the end of the day. If, after considering @wim's point that testing stderr as such may not prove much, you decide you still need it, I suggest separating the testing of the Python warning object (the Python caller layer) from testing the contents of stderr (the calling shell layer). The first test would look at Python warning objects only. The new second test case would call the library under test as a script, through popen() or similar, and assert on the resulting standard error and output.
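A sketch of that second test case (Python 3.7+); the module name and expected text are placeholders for your own code:

import subprocess
import sys

def test_warning_reaches_stderr():
    # Run the code under test in a fresh interpreter so pytest's
    # capture and warning hooks are out of the picture.
    result = subprocess.run(
        [sys.executable, "-m", "mymodule"],   # placeholder module name
        capture_output=True,
        text=True,
    )
    assert "UserWarning: warning" in result.stderr   # placeholder message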
I'll encourage you to think about this problem in a different way.
When you want to assert that some of your code triggers warnings, you should be using a pytest.warns context for that. Check the warning message by using the match keyword, and avoid the extra complications of trying to capture it from stderr.
import re
import warnings
import pytest
def test_warning():
    expected_warning_message = "my warning"
    match = re.escape(expected_warning_message)
    with pytest.warns(UserWarning, match=match):
        warnings.warn("my warning", UserWarning)
That should be the edge of your testing responsibility. It is not your responsibility to test that the warnings module itself prints some output to stderr, because that behavior is coming from standard library code and it should be tested by Python itself.

Using signal module without pylint's warnings (W0621 & W0613)

I am discovering python's signal module and I wrote this script for my first implementation:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
""" First implementation of signal module """
import time
import signal
import os
import sys


def cls():
    """ console clearing """
    os.system('clear')
    return


def handler(signal, frame):
    """ Catch <ctrl+c> signal for clean stop"""
    print("{}, script stops".format(time.strftime('%H:%M:%S')))
    sys.exit(0)


signal.signal(signal.SIGINT, handler)

START_TIME = time.strftime('%H:%M:%S')
PROGRESSION = str()

while True:
    time.sleep(2)
    PROGRESSION += "."
    cls()
    print("{}, script starts\n{}".format(START_TIME, PROGRESSION))
Except for the annoying ^C string appearing after the interrupt, the script works as expected:
14:38:01, script starts
......
^C14:38:14, script stops
However pylint3 checking gives me this return:
testsignal.py:16: [W0621(redefined-outer-name), handler] Redefining name 'signal' from outer scope (line 5)
testsignal.py:16: [W0613(unused-argument), handler] Unused argument 'signal'
testsignal.py:16: [W0613(unused-argument), handler] Unused argument 'frame'
According to the signal documentation I did it right.
If I change line 16 to use a trailing underscore in the signal argument (as mentioned in PEP 8), I solve warning W0621.
Is it a side effect of pylint or did I miss something?
By the way, if someone knows how to avoid the ^C string, I'll be glad too.
pylint3 --version
pylint3 1.5.2,
astroid 1.4.4
Python 3.4.2 (default, Oct 8 2014, 10:45:20)
[GCC 4.9.1]
pylint warns you that you have a function with two parameters you are not using inside the function, which is true and is a smell for common errors.
It also warns about using a local function name that is equal to an outer scope name, which can sometimes lead to errors, because you might be inadvertently hiding the outer name. Sometimes you do it on purpose, so pylint just annoys a bit, but you can also rename the local, as you did and get rid of the danger.
Those are just warnings not errors. Usually it is good to be warned about possible problems, even if they don't exist.
The static checker does not know how your handler will be called by the signal library, but the warning has nothing to do with that. The static tool just noticed that you claim to receive two parameters yet do not use them in the handler body. Usually when you receive a parameter you want to use it, right? Except that, since you are registering a handler in a library for a callback, you have to respect the library's "protocol", or you'll get a runtime error when the callback is made. The static tool does not know that you don't care about the received signal info and are just printing something else; it is just saying to you: that looks strange, are you sure?
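A sketch of one way to satisfy both warnings: rename the first parameter so it no longer shadows the signal module, and prefix both parameters with an underscore, which pylint by default treats as intentionally unused (the ignored-argument-names setting):

import signal
import sys
import time

def handler(_signum, _frame):
    """ Catch <ctrl+c> signal for clean stop """
    print("{}, script stops".format(time.strftime('%H:%M:%S')))
    sys.exit(0)

signal.signal(signal.SIGINT, handler)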

custom sys.excepthook doesn't work with pytest

I wanted to put results of pytest aserts into log.
First I tried this solution
def logged_assert(self, testval, msg=None):
    if not testval:
        if msg is None:
            try:
                assert testval
            except AssertionError as e:
                self.logger.exception(e)
                raise e
        self.logger.error(msg)
        assert testval, msg
It works fine, but I need to use my own msg for every assert instead of the built-in one. The problem is that testval is evaluated when it is passed into the function, so the error message is
AssertionError: False
I found an excellent way to solve the problem http://code.activestate.com/recipes/577074-logging-asserts/ here in first comment.
And I wrote this function in my logger wrapper module
def logged_excepthook(er_type, value, trace):
    print('HOOK!')
    if isinstance(er_type, AssertionError):
        current = sys.modules[sys._getframe(1).f_globals['__name__']]
        if 'logger' in sys.modules[current]:
            sys.__excepthook__(er_type, value, trace)
            sys.modules[current].error(exc_info=(er_type, value, trace))
        else:
            sys.__excepthook__(er_type, value, trace)
    else:
        sys.__excepthook__(er_type, value, trace)
and then
sys.excepthook = logged_excepthook
In the test module, where I have asserts, the output of
import sys
print(sys.excepthook, sys.__excepthook__, logged_excepthook)
is
<function logged_excepthook at 0x02D672B8> <built-in function excepthook> <function logged_excepthook at 0x02D672B8>
But there is no 'HOOK!' message in my output, and also no ERROR message in my log files. Everything works as if the builtin sys.excepthook were still in place.
I looked through the pytest sources, but sys.excepthook isn't changed there.
However, if I interrupt my code execution with Ctrl-C, I do get the 'HOOK!' message in stdout.
The main question is why the builtin sys.excepthook is called instead of my custom function, and how I can fix that.
But it is also interesting to me whether another way to log assert errors exists.
I am using Python 3.2 (32-bit) on 64-bit Windows 8.1.
excepthook is only triggered if there is an unhandled exception, i.e. the one that normally terminates your program. Any exceptions in a test are handled by the test framework.
See Asserting with the assert statement - pytest documentation on how the feature is intended to be used. A custom message is specified the standard way: assert condition, failure_message.
If you're not satisfied with the way pytest handles asserts, you need to either
use a wrapper, or
hook the assert statement.
pytest uses an assert hook as well. Its logic is located in Lib\site-packages\_pytest\assertion (a stock plugin). It's probably enough to wrap/replace a few functions in there. To avoid patching the pytest code base, you may be able to do this with your own plugin: either patch the assertion plugin at runtime, or disable it and reuse its functionality yourself instead.

How can I strip Python logging calls without commenting them out?

Today I was thinking about a Python project I wrote about a year back where I used logging pretty extensively. I remember having to comment out a lot of logging calls in inner-loop-like scenarios (the 90% code) because of the overhead (hotshot indicated it was one of my biggest bottlenecks).
I wonder now if there's some canonical way to programmatically strip out logging calls in Python applications without commenting and uncommenting all the time. I'd think you could use inspection/recompilation or bytecode manipulation to do something like this and target only the code objects that are causing bottlenecks. This way, you could add a manipulator as a post-compilation step and use a centralized configuration file, like so:
[Leave ERROR and above]
my_module.SomeClass.method_with_lots_of_warn_calls
[Leave WARN and above]
my_module.SomeOtherClass.method_with_lots_of_info_calls
[Leave INFO and above]
my_module.SomeWeirdClass.method_with_lots_of_debug_calls
Of course, you'd want to use it sparingly and probably with per-function granularity -- only for code objects that have shown logging to be a bottleneck. Anybody know of anything like this?
Note: There are a few things that make this more difficult to do in a performant manner because of dynamic typing and late binding. For example, any calls to a method named debug may have to be wrapped with an if not isinstance(log, Logger). In any case, I'm assuming all of the minor details can be overcome, either by a gentleman's agreement or some run-time checking. :-)
What about using logging.disable?
I've also found I had to use logging.isEnabledFor if the logging message is expensive to create.
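A minimal sketch of both suggestions; expensive_summary() is a placeholder for whatever costly formatting you want to avoid:

import logging

log = logging.getLogger(__name__)

# Process-wide: suppress everything at DEBUG level and below
logging.disable(logging.DEBUG)

# Per-call: skip building an expensive message unless DEBUG would actually be emitted
if log.isEnabledFor(logging.DEBUG):
    log.debug("state dump: %s", expensive_summary())  # placeholder helper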
Use pypreprocessor
Which can also be found on PYPI (Python Package Index) and be fetched using pip.
Here's a basic usage example:
from pypreprocessor import pypreprocessor
pypreprocessor.parse()
#define nologging
#ifdef nologging
...logging code you'd usually comment out manually...
#endif
Essentially, the preprocessor comments out code the way you were doing it manually before. It just does it on the fly conditionally depending on what you define.
You can also remove all of the preprocessor directives and commented-out code from the postprocessed output by adding pypreprocessor.removeMeta = True between the import and the parse() statement.
The bytecode output (.pyc) file will contain the optimized output.
SideNote: pypreprocessor is compatible with python2x and python3k.
Disclaimer: I'm the author of pypreprocessor.
I've also seen assert used in this fashion.
assert logging.warn('disable me with the -O option') is None
(I'm guessing that warn always returns None; if not, you'll get an AssertionError.)
But really that's just a funny way of doing this:
if __debug__: logging.warn('disable me with the -O option')
When you run a script with that line in it with the -O option, the line will be removed from the optimized .pyo code. If, instead, you had your own variable, like in the following, you will have a conditional that is always executed (no matter what value the variable is), although a conditional should execute quicker than a function call:
my_debug = True
...
if my_debug: logging.warn('disable me by setting my_debug = False')
so if my understanding of __debug__ is correct, it seems like a nice way to get rid of unnecessary logging calls. The flipside is that it also disables all of your asserts, so it is a problem if you need the asserts.
As an imperfect shortcut, how about mocking out logging in specific modules using something like MiniMock?
For example, if my_module.py was:
import logging

class C(object):
    def __init__(self, *args, **kw):
        logging.info("Instantiating")
You would replace your use of my_module with:
from minimock import Mock
import my_module
my_module.logging = Mock('logging')
c = my_module.C()
You'd only have to do this once, before the initial import of the module.
Getting the level specific behaviour would be simple enough by mocking specific methods, or having logging.getLogger return a mock object with some methods impotent and others delegating to the real logging module.
In practice, you'd probably want to replace MiniMock with something simpler and faster; at the very least something which doesn't print usage to stdout! Of course, this doesn't handle the problem of module A importing logging from module B (and hence A also importing the log granularity of B)...
This will never be as fast as not running the log statements at all, but should be much faster than going all the way into the depths of the logging module only to discover this record shouldn't be logged after all.
You could try something like this:
# Create something that accepts anything
class Fake(object):
    def __getattr__(self, key):
        return self
    def __call__(self, *args, **kwargs):
        return True

# Replace the logging module
import sys
sys.modules["logging"] = Fake()
It essentially replaces (or initially fills in) the space for the logging module with an instance of Fake which simply takes in anything. You must run the above code (just once!) before the logging module is attempted to be used anywhere. Here is a test:
import logging
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(levelname)-8s %(message)s',
                    datefmt='%a, %d %b %Y %H:%M:%S',
                    filename='/temp/myapp.log',
                    filemode='w')
logging.debug('A debug message')
logging.info('Some information')
logging.warning('A shot across the bows')
With the above, nothing at all was logged, as was to be expected.
I'd use some fancy logging decorator, or a bunch of them:
def doLogging(logThreshold):
    def logFunction(aFunc):
        def innerFunc(*args, **kwargs):
            if LOGLEVEL >= logThreshold:
                print ">>Called %s at %s" % (aFunc.__name__, time.strftime("%H:%M:%S"))
                print ">>Parameters: ", args, kwargs if kwargs else ""
                try:
                    return aFunc(*args, **kwargs)
                finally:
                    print ">>%s took %s" % (aFunc.__name__, time.strftime("%H:%M:%S"))
            else:
                # Below the threshold: call through without any logging
                return aFunc(*args, **kwargs)
        return innerFunc
    return logFunction
All you need is to declare LOGLEVEL constant in each module (or just globally and just import it in all modules) and then you can use it like this:
@doLogging(2.5)
def myPreciousFunction(one, two, three=4):
    print "I'm doing some fancy computations :-)"
    return
And if LOGLEVEL is no less than 2.5 you'll get output like this:
>>Called myPreciousFunction at 18:49:13
>>Parameters: (1, 2)
I'm doing some fancy computations :-)
>>myPreciousFunction took 18:49:13
As you can see, some work is needed for better handling of kwargs, so the default values will be printed if they are present, but that's another question.
You should probably use some logger module instead of raw print statements, but I wanted to focus on the decorator idea and avoid making code too long.
Anyway - with such decorator you get function-level logging, arbitrarily many log levels, ease of application to new function, and to disable logging you only need to set LOGLEVEL. And you can define different output streams/files for each function if you wish. You can write doLogging as:
def doLogging(logThreshold, outStream=sys.stdout):
    .....
    print >>outStream, ">>Called %s at %s" etc.
And utilize log files defined on a per-function basis.
This is an issue in my project as well--logging ends up on profiler reports pretty consistently.
I've used the _ast module before in a fork of PyFlakes (http://github.com/kevinw/pyflakes) ... and it is definitely possible to do what you suggest in your question--to inspect and inject guards before calls to logging methods (with your acknowledged caveat that you'd have to do some runtime type checking). See http://pyside.blogspot.com/2008/03/ast-compilation-from-python.html for a simple example.
Edit: I just noticed MetaPython on my planetpython.org feed--the example use case is removing log statements at import time.
Maybe the best solution would be for someone to reimplement logging as a C module, but I wouldn't be the first to jump at such an...opportunity :p
:-) We used to call that a preprocessor and although C's preprocessor had some of those capabilities, the "king of the hill" was the preprocessor for IBM mainframe PL/I. It provided extensive language support in the preprocessor (full assignments, conditionals, looping, etc.) and it was possible to write "programs that wrote programs" using just the PL/I PP.
I wrote many applications with full-blown sophisticated program and data tracing (we didn't have a decent debugger for a back-end process at that time) for use in development and testing which then, when compiled with the appropriate "runtime flag" simply stripped all the tracing code out cleanly without any performance impact.
I think the decorator idea is a good one. You can write a decorator to wrap the functions that need logging. Then, for runtime distribution, the decorator is turned into a "no-op" which eliminates the debugging statements.
Jon R
I am doing a project currently that uses extensive logging for testing logic and execution times for a data analysis API using the Pandas library.
I found this thread with a similar concern - e.g., what is the overhead of logging.debug statements even if the logging.basicConfig level is set to level=logging.WARNING?
I have resorted to writing the following script to comment out or uncomment the debug logging prior to deployment:
import os
import fileinput

comment = True

# exclude files or directories matching string
fil_dir_exclude = ["__", "_archive", ".pyc"]

if comment:
    ## Variables to comment
    source_str = 'logging.debug'
    replace_str = '#logging.debug'
else:
    ## Variables to uncomment
    source_str = '#logging.debug'
    replace_str = 'logging.debug'

# walk through directories
for root, dirs, files in os.walk('root/directory'):
    # where files exist
    if files:
        # for each file
        for file_single in files:
            # build full file name
            file_name = os.path.join(root, file_single)
            # exclude files with matching string
            if not any(exclude_str in file_name for exclude_str in fil_dir_exclude):
                # replace string in line
                for line in fileinput.input(file_name, inplace=True):
                    print "%s" % (line.replace(source_str, replace_str)),
This is a file recursion that excludes files based on a list of criteria and performs an in place replace based on an answer found here: Search and replace a line in a file in Python
I like the 'if __debug__' solution except that putting it in front of every call is a bit distracting and ugly. I had this same problem and overcame it by writing a script which automatically parses your source files and replaces logging statements with pass statements (and commented out copies of the logging statements). It can also undo this conversion.
I use it when I deploy new code to a production environment when there are lots of logging statements which I don't need in a production setting and they are affecting performance.
You can find the script here: http://dound.com/2010/02/python-logging-performance/

What is the best way to toggle python prints?

I'm running Python 2.4 in a game engine and I want to be able to turn off all prints if needed. For example I'd like to have the prints on for a debug build, and then turned off for a release build.
It's also imperative that it's as transparent as possible.
My solution to this in the C code of the engine is having the printf function inside a vararg macro, and defining that to do nothing in a release build.
This is my current solution:
DebugPrints = True

def PRINT(*args):
    global DebugPrints
    if DebugPrints:
        string = ""
        for arg in args:
            string += " " + str(arg)
        print string
It makes it easy to toggle print outs, but there is possibly a better way to format the string. My main issue is that this is actually adding a lot more function calls to the program.
I'm wondering if there is anything you can do to how the print keyword works?
yes, you can assign sys.stdout to whatever you want. Create a little class with a write method that does nothing:
class DevNull(object):
    def write(self, arg):
        pass

import sys
sys.stdout = DevNull()
print "this goes to nirvana!"
With the same technique you can also have your prints logged to a file by setting sys.stdout to an opened file object.
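For instance, a minimal sketch of the file variant (the path is just an example):

import sys
sys.stdout = open("debug.log", "w")   # example path
print "this line ends up in debug.log"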
I know an answer has already been marked as correct, but Python has a debug flag that provides a cleaner solution. You use it like this:
if __debug__:
    print "whoa"
If you invoke Python with -O or -OO (as you normally would for a release build), __debug__ is set to False. What's even better is that __debug__ is a special case for the interpreter; it will actually strip out that code when it writes the pyc/pyo files, making the resulting code smaller/faster. Note that you can't assign values to __debug__, so it's entirely based off those command-line arguments.
The logging module is the "best" way.. although I quite often just use something simple like..
class MyLogger:
    def _displayMessage(self, message, level=None):
        # This can be modified easily
        if level is not None:
            print "[%s] %s" % (level, message)
        else:
            print "[default] %s" % (message)

    def debug(self, message):
        self._displayMessage(message, level="debug")

    def info(self, message):
        self._displayMessage(message, level="info")

log = MyLogger()
log.info("test")
Don't use print, but make a console class which handles all printing. Simply make calls to the console and the console can decide whether or not to actually print them. A console class is also useful for things like error and warning messages or redirecting where the output goes.
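A minimal sketch of such a console class, assuming a simple enabled flag:

import sys

class Console(object):
    """Route all program output through one place so it can be toggled or redirected."""
    def __init__(self, enabled=True, stream=sys.stdout):
        self.enabled = enabled
        self.stream = stream

    def out(self, *args):
        if self.enabled:
            self.stream.write(" ".join(str(a) for a in args) + "\n")

console = Console(enabled=True)   # use enabled=False for a release build
console.out("loading level", 3)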
If you really want to toggle printing:
>>> import sys
>>> import os
>>> print 'foo'
foo
>>> origstdout = sys.stdout
>>> sys.stdout = open(os.devnull, 'w')
>>> print 'foo'
>>> sys.stdout = origstdout
>>> print 'foo'
foo
However, I recommend you only use print for throwaway code. Use logging for real apps like the original question describes. It allows for a sliding scale of verbosity, so you can have only the important logging, or more verbose logging of usually less important details.
>>> import logging
>>> logging.basicConfig(level=logging.DEBUG)
>>> logging.debug('foo')
DEBUG:root:foo
For example, usage might be to put debugging logs throughout your code. Then to silence them, change your level to a higher level, one of INFO, WARN or WARNING, ERROR, or CRITICAL or FATAL
>>> logging.root.setLevel(logging.INFO)
>>> logging.debug('foo')
>>>
In a script you'll want to just set this in the basicConfig like this:
import logging
logging.basicConfig(level=logging.INFO)
logging.debug('foo') # this will be silent
More sophisticated usage of logging can be found in the docs.
I've answered this question on a different post, but the question was marked as a duplicate:
anyhow, here's what I would do:
from __future__ import print_function

DebugPrints = 0

def print(*args, **kwargs):
    if DebugPrints:
        return __builtins__.print(*args, **kwargs)

print('foo')  # doesn't get printed
DebugPrints = 1
print('bar')  # gets printed
Sadly you can't keep the Python 2 print syntax print 'foo'.
