Being pythonic with errors [duplicate] - python

A parser I created reads recorded chess games from a file. The API is used like this:
import chess.pgn
pgn_file = open("games.pgn")
first_game = chess.pgn.read_game(pgn_file)
second_game = chess.pgn.read_game(pgn_file)
# ...
Sometimes illegal moves (or other problems) are encountered. What is a good Pythonic way to handle them?
1. Raising exceptions as soon as the error is encountered. However, this makes every problem fatal, in that execution stops. Often there is still useful data that has been parsed and could be returned. Also, you cannot simply continue parsing the next data set, because we are still in the middle of some half-read data.
2. Accumulating exceptions and raising them at the end of the game. This makes the error fatal again, but at least you can catch it and continue parsing the next game.
3. Introducing an optional argument like this:
game = chess.pgn.read_game(pgn_file, parser_info)
if parser_info.error:
    # This appears to be quite verbose.
    # Now you can at least make the best of the successfully parsed parts.
    # ...
Are some of these or other methods used in the wild?

The most Pythonic way is the logging module. It has been mentioned in comments, but unfortunately without being stressed hard enough. There are many reasons it's preferable to warnings:
The warnings module is intended to report warnings about potential code issues, not bad user data.
The first reason alone is actually enough. :-)
The logging module provides adjustable message severity: not only warnings, but anything from debug messages to critical errors can be reported.
You can fully control the logging module's output. Messages can be filtered by their source, contents and severity, formatted in any way you wish, and sent to different output targets (console, pipes, files, memory, etc.).
The logging module separates error/warning/message reporting from output: your code generates messages of the appropriate type and doesn't have to care how they're presented to the end user.
The logging module is the de facto standard for Python code. Everyone everywhere is using it, so if your code uses it too, combining it with third-party code (which likely also uses logging) will be a breeze. Well, maybe something stronger than a breeze, but definitely not a category 5 hurricane. :-)
A basic use case for the logging module would look like:
import logging
logger = logging.getLogger(__name__) # module-level logger
# (tons of code)
logger.warning('illegal move: %s in file %s', move, file_name)
# (more tons of code)
This will print messages like:
WARNING:chess_parser:illegal move: a2-b7 in file parties.pgn
(assuming your module is named chess_parser.py)
The most important thing is that you don't need to do anything else in your parser module. You declare that you're using the logging system, you use a logger with a specific name (the same as your parser module's name in this example), and you send warning-level messages to it. Your module doesn't have to know how these messages are processed, formatted and reported to the user, or whether they're reported at all. For example, you can configure the logging module (usually at the very start of your program) to use a different format and dump the messages to a file:
logging.basicConfig(filename = 'parser.log', format = '%(name)s [%(levelname)s] %(message)s')
And suddenly, without any changes to your module code, your warning messages are saved to a file with a different format instead of being printed to screen:
chess_parser [WARNING] illegal move: a2-b7 in file parties.pgn
Or you can suppress warnings if you wish:
logging.basicConfig(level = logging.ERROR)
And your module's warnings will be ignored completely, while any ERROR or higher-level messages from your module will still be processed.
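To make the separation concrete, here is a small application-side sketch (my addition, not part of the answer): the consuming program can tune just the parser's logger by name, assuming the logger is called chess_parser as in the example output above.
import logging

# application-level setup: pick a format for everything...
logging.basicConfig(format='%(name)s [%(levelname)s] %(message)s')

# ...but silence only the parser's warnings; ERROR and above still get through
logging.getLogger('chess_parser').setLevel(logging.ERROR)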

Actually, those are fatal errors -- at least, as far as being able to reproduce a correct game; on the other hand, maybe the player actually did make the illegal move and nobody noticed at the time (which would make it a warning, not a fatal error).
Given the possibility of both fatal errors (the file is corrupted) and warnings (an illegal move was made, but subsequent moves are consistent with it; in other words, a user error that nobody caught at the time), I recommend a combination of the first and second options:
raise an exception when continued parsing isn't an option
collect any errors/warnings that don't preclude further parsing until the end
If you don't encounter a fatal error then you can return the game, plus any warnings/non-fatal errors, at the end:
return game, warnings, errors
But what if you do hit a fatal error?
No problem: create a custom exception to which you can attach the usable portion of the game and any other warnings/non-fatal errors:
raise ParsingError(
    'error explanation here',
    game=game,
    warnings=warnings,
    errors=errors,
)
then when you catch the error you can access the recoverable portion of the game, along with the warnings and errors.
The custom error might be:
class ParsingError(Exception):
    def __init__(self, msg, game, warnings, errors):
        super().__init__(msg)
        self.game = game
        self.warnings = warnings
        self.errors = errors
and in use:
try:
    first_game, warnings, errors = chess.pgn.read_game(pgn_file)
except chess.pgn.ParsingError as err:
    first_game = err.game
    warnings = err.warnings
    errors = err.errors
    # whatever else you want to do to handle the exception
This is similar to how the subprocess module handles errors.
For the ability to retrieve and parse subsequent games after a fatal error in one game, I would suggest a change in your API:
have a game iterator that simply returns the raw data for each game (it only has to know how to tell when one game ends and the next begins)
have the parser take that raw game data and parse it (so it's no longer in charge of where in the file you happen to be)
This way if you have a five-game file and game two dies, you can still attempt to parse games 3, 4, and 5.
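A rough sketch of that split (the function names and the blank-line game-boundary heuristic are my own assumptions, not part of the answer):
def iter_raw_games(pgn_file):
    """Yield the raw text of one game at a time; only detects game boundaries."""
    chunk = []
    for line in pgn_file:
        chunk.append(line)
        # assume a game ends at a blank line once some movetext (a non-header
        # line) has appeared -- a simplification of real PGN structure
        if not line.strip() and any(l.strip() and not l.startswith("[") for l in chunk):
            yield "".join(chunk)
            chunk = []
    if chunk:
        yield "".join(chunk)

def parse_game(raw_game):
    """Parse a single game's text; may raise ParsingError as described above."""
    ...

games = []
with open("games.pgn") as pgn_file:
    for raw in iter_raw_games(pgn_file):
        try:
            games.append(parse_game(raw))
        except ParsingError as err:
            games.append(err.game)  # keep the recoverable portion and move on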

I offered the bounty because I'd like to know if this is really the best way to do it. However, I'm also writing a parser and so I need this functionality, and this is what I've come up with.
The warnings module is exactly what you want.
What turned me away from it at first was that every example warning used in the docs looks like this:
Traceback (most recent call last):
  File "warnings_warn_raise.py", line 15, in <module>
    warnings.warn('This is a warning message')
UserWarning: This is a warning message
...which is undesirable because I don't want it to be a UserWarning; I want my own custom warning name.
Here's the solution to that:
import warnings

class AmbiguousStatementWarning(Warning):
    pass

def x():
    warnings.warn("unable to parse statement syntax",
                  AmbiguousStatementWarning, stacklevel=3)
    print("after warning")

def x_caller():
    x()

x_caller()
which gives:
$ python3 warntest.py
warntest.py:12: AmbiguousStatementWarning: unable to parse statement syntax
x_caller()
after warning
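A nice side effect (my addition, not part of the answer): because the parser emits its own warning class, callers can filter those warnings without touching the parser code.
import warnings

# suppress just this parser's ambiguity warnings...
warnings.simplefilter("ignore", AmbiguousStatementWarning)

# ...or turn them into hard errors instead:
# warnings.simplefilter("error", AmbiguousStatementWarning)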

I'm not sure if the solution is pythonic or not, but I use it rather often with slight modifications: a parser does its job within a generator and yields results and a status code. The receiving code decides what to do with failed items:
def process_items(items):
    for item in items:
        try:
            # process item here, producing processed_item
            yield processed_item, None
        except Exception as err:
            yield None, (SOME_ERROR_CODE, str(err), item)

for processed, err in process_items(items):
    if err:
        # process and log err, collect failed items, etc.
        continue
    # further process `processed`
A more general approach is to apply design patterns. A simplified version of Observer (where you register callbacks for specific errors) or a kind of Visitor (where the visitor has methods for processing specific errors; see SAX parsers for inspiration) might be a clear and well-understood solution.
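For example, a bare-bones Observer-style registry might look like this sketch (illustrative names only, not an existing API):
class ErrorBus:
    """Callers register callbacks per error kind; the parser notifies them."""

    def __init__(self):
        self._handlers = {}

    def on(self, error_kind, callback):
        self._handlers.setdefault(error_kind, []).append(callback)

    def emit(self, error_kind, **details):
        for callback in self._handlers.get(error_kind, []):
            callback(**details)

bus = ErrorBus()
bus.on("illegal_move", lambda move, **_: print("skipping illegal move:", move))
# inside the parser: bus.emit("illegal_move", move="a2-b7", line_no=42)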

Without libraries, it is difficult to do this cleanly, but still possible.
There are different methods of handling this, depending on the situation.
Method 1:
Put the contents of your while loop inside a try/except like the following:
while 1:
    try:
        ...  # code goes here
    except Exception as detail:
        print(detail)
Method 2:
Same as Method 1, except with multiple try/except blocks, so it doesn't skip too much code and you know the exact location of the error.
Sorry, in a rush, hope this helps!

Related

How can I throw an error in ply-yacc inside a special parse function?

I am working with the ply library to make a parser for commands. The code is:
def p_expr(v):
    'expr : token'
    ...  # action with token, set expr value and return
    ??? raise error ???  # yacc must stop parse and call p_error

def p_error(p):
    print('Error!')
How do I raise the error?
Note that Ply does not "stop the parse" when a syntax error is encountered. It tries to do error recovery, if the grammar was built with error recovery, and as a last resort it restarts the parse as though the input had been deleted up to that point. To see this, try typing 1+2 error 3+4 into the calculator example in the Ply distribution. It will print an error message for error (because it's expecting an operator) and then restart the parse, so that it prints 7.
If you want the parser to stop when it finds an error, you'll need to raise an exception from p_error. It's generally best to define your own exception class for this sort of thing. Avoid the use of SyntaxError because Ply handles that specially in certain cases, as we'll see below.
Usually, just stopping the parser is not really what you want to do -- if you don't want to try error recovery, you should normally at least do something like throw away the rest of the input line before restarting the parse.
If you do just want to stop the parser and you raise an exception in p_error to accomplish that, then the best way to signal your own error from a parser action is to call p_error with the current token.
If you want to trigger Ply's normal error recovery procedure, you can raise SyntaxError from a parser action. However, as the manual indicates, this does not call p_error, so you'll have to do that yourself before raising the error.
If you do panic-mode recovery in p_error, then you might be out of luck with custom error detection. Raising SyntaxError bypasses the call to p_error; while you are free to call p_error yourself, there is no mechanism to pass a new token back to the parser. (Panic-mode recovery is not an optimal error recovery technique, and it is not always necessary to return a replacement token, so this paragraph might not apply.)
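Putting that together, a sketch of the two approaches (my own illustration; ParserAbort and token_is_valid are hypothetical names, and the usual Ply lexer and grammar setup is assumed to exist elsewhere):
# Approach 1: stop the whole parse on any syntax error.
class ParserAbort(Exception):
    """Custom exception raised from p_error to abort parsing."""

def p_error(p):
    # p is the offending token, or None at end of input
    raise ParserAbort("syntax error at %r" % (p,))

# Approach 2: signal an error from inside a rule and use Ply's normal error
# recovery (only sensible if p_error does not raise, unlike the version above).
def p_expr(p):
    'expr : token'
    if not token_is_valid(p[1]):   # hypothetical semantic check
        p_error(p.slice[1])        # SyntaxError won't call p_error, so call it yourself
        raise SyntaxError          # then let Ply run its error recovery
    p[0] = p[1]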


Ignore exceptions thrown and caught inside a library

The Python standard library and other libraries I use (e.g. PyQt) sometimes use exceptions for non-error conditions. Look at the following excerpt from the function os.get_exec_path(). It uses multiple try statements to catch exceptions that are thrown while trying to find some environment data.
try:
    path_list = env.get('PATH')
except TypeError:
    path_list = None

if supports_bytes_environ:
    try:
        path_listb = env[b'PATH']
    except (KeyError, TypeError):
        pass
    else:
        if path_list is not None:
            raise ValueError(
                "env cannot contain 'PATH' and b'PATH' keys")
        path_list = path_listb

    if path_list is not None and isinstance(path_list, bytes):
        path_list = fsdecode(path_list)
These exceptions do not signify an error and are thrown under normal conditions. When using exception breakpoints for one of these exceptions, the debugger will also break in these library functions.
Is there a way in PyCharm or in Python in general to have the debugger not break on exceptions that are thrown and caught inside a library without any involvement of my code?
In PyCharm, go to Run --> View Breakpoints, and check "On raise" and "Ignore library files".
The first option makes the debugger stop whenever an exception is raised, instead of just when the program terminates, and the second option gives PyCharm the policy to ignore library files, thus searching mainly in your code.
The solution was found thanks to CrazyCoder's link to the feature request, which has since been added.
For a while I had a complicated scheme which involved something like the following:
try( Closeable ignore = Debugger.newBreakSuppression() )
{
    ... library call which may throw ...
} <-- exception looks like it is thrown here
This allowed me to never be bothered by exceptions that were thrown and swallowed within library calls. If an exception was thrown by a library call and was not caught, then it would appear as if it occurred at the closing curly bracket.
The way it worked was as follows:
Closeable is an interface which extends AutoCloseable without declaring any checked exceptions.
ignore is just a name that tells IntelliJ IDEA to not complain about the unused variable, and it is necessary because silly java does not support try( Debugger.newBreakSuppression() ).
Debugger is my own class with debugging-related helper methods.
newBreakSuppression() was a method which would create a thread-local instance of some BreakSuppression class which would take note of the fact that we want break-on-exception to be temporarily suspended.
Then I had an exception breakpoint with a break condition that would invoke my Debugger class to ask whether it is okay to break, and the Debugger class would respond with a "no" if any BreakSuppression objects were instantiated.
That was extremely complicated, because the VM throws exceptions before my code has loaded, so the filter could not be evaluated during program startup, and the debugger would pop up a dialog complaining about that instead of ignoring it. (I am not complaining about that; I hate silent errors.) So I had to have a terrible, horrible, do-not-try-this-at-home hack where the break condition would look like this: java.lang.System.err.equals( this ). Normally this would never return true, because System.err is not equal to a thrown exception, so the debugger would never break. However, when my Debugger class got initialized, it would replace System.err with a class of its own, which provided an implementation of equals(Object) and returned true if the debugger should break. So, essentially, I was using System.err as an eternal global variable.
Eventually I ditched this whole scheme because it is overly complicated and it performs very badly: exceptions apparently get thrown very often in the Java software ecosystem, so evaluating an expression every time an exception is thrown slows everything down tremendously.
This feature is not implemented yet; you can vote for it:
add ability to break (add breakpoint) on exceptions only for my files
There is another SO answer with a solution:
Debugging with pycharm, how to step into project, without entering django libraries
It is working for me, except I still go into the "_pydev_execfile.py" file, but I haven't stepped into other files after adding them to the exclusion in the linked answer.

How can I strip Python logging calls without commenting them out?

Today I was thinking about a Python project I wrote about a year back where I used logging pretty extensively. I remember having to comment out a lot of logging calls in inner-loop-like scenarios (the 90% code) because of the overhead (hotshot indicated it was one of my biggest bottlenecks).
I wonder now if there's some canonical way to programmatically strip out logging calls in Python applications without commenting and uncommenting all the time. I'd think you could use inspection/recompilation or bytecode manipulation to do something like this and target only the code objects that are causing bottlenecks. This way, you could add a manipulator as a post-compilation step and use a centralized configuration file, like so:
[Leave ERROR and above]
my_module.SomeClass.method_with_lots_of_warn_calls
[Leave WARN and above]
my_module.SomeOtherClass.method_with_lots_of_info_calls
[Leave INFO and above]
my_module.SomeWeirdClass.method_with_lots_of_debug_calls
Of course, you'd want to use it sparingly and probably with per-function granularity -- only for code objects that have shown logging to be a bottleneck. Anybody know of anything like this?
Note: There are a few things that make this more difficult to do in a performant manner because of dynamic typing and late binding. For example, any calls to a method named debug may have to be wrapped with an if not isinstance(log, Logger). In any case, I'm assuming all of the minor details can be overcome, either by a gentleman's agreement or some run-time checking. :-)
What about using logging.disable?
I've also found I had to use logging.isEnabledFor if the logging message is expensive to create.
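For instance, a small sketch of both ideas together (expensive_report is a hypothetical helper standing in for a costly message):
import logging

logging.disable(logging.DEBUG)  # drop DEBUG-level records process-wide

logger = logging.getLogger(__name__)
if logger.isEnabledFor(logging.DEBUG):
    # only build the costly message when it will actually be emitted
    logger.debug("state dump: %s", expensive_report())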
Use pypreprocessor, which can also be found on PyPI (the Python Package Index) and can be fetched using pip.
Here's a basic usage example:
from pypreprocessor import pypreprocessor
pypreprocessor.parse()
#define nologging
#ifdef nologging
...logging code you'd usually comment out manually...
#endif
Essentially, the preprocessor comments out code the way you were doing it manually before. It just does it on the fly conditionally depending on what you define.
You can also remove all of the preprocessor directives and commented-out code from the postprocessed code by adding 'pypreprocessor.removeMeta = True' between the import and parse() statements.
The bytecode output (.pyc) file will contain the optimized output.
SideNote: pypreprocessor is compatible with python2x and python3k.
Disclaimer: I'm the author of pypreprocessor.
I've also seen assert used in this fashion.
assert logging.warn('disable me with the -O option') is None
(I'm guessing that warn always returns None; if not, you'll get an AssertionError.)
But really that's just a funny way of doing this:
if __debug__: logging.warn('disable me with the -O option')
When you run a script with that line in it with the -O option, the line will be removed from the optimized .pyo code. If, instead, you had your own variable, like in the following, you will have a conditional that is always executed (no matter what value the variable is), although a conditional should execute quicker than a function call:
my_debug = True
...
if my_debug: logging.warn('disable me by setting my_debug = False')
so if my understanding of __debug__ is correct, it seems like a nice way to get rid of unnecessary logging calls. The flip side is that it also disables all of your asserts, so it is a problem if you need the asserts.
As an imperfect shortcut, how about mocking out logging in specific modules using something like MiniMock?
For example, if my_module.py was:
import logging

class C(object):
    def __init__(self, *args, **kw):
        logging.info("Instantiating")
You would replace your use of my_module with:
from minimock import Mock
import my_module
my_module.logging = Mock('logging')
c = my_module.C()
You'd only have to do this once, before the initial import of the module.
Getting the level specific behaviour would be simple enough by mocking specific methods, or having logging.getLogger return a mock object with some methods impotent and others delegating to the real logging module.
In practice, you'd probably want to replace MiniMock with something simpler and faster; at the very least something which doesn't print usage to stdout! Of course, this doesn't handle the problem of module A importing logging from module B (and hence A also importing the log granularity of B)...
This will never be as fast as not running the log statements at all, but should be much faster than going all the way into the depths of the logging module only to discover this record shouldn't be logged after all.
You could try something like this:
# Create something that accepts anything
class Fake(object):
    def __getattr__(self, key):
        return self

    def __call__(self, *args, **kwargs):
        return True

# Replace the logging module
import sys
sys.modules["logging"] = Fake()
It essentially replaces (or initially fills in) the space for the logging module with an instance of Fake which simply takes in anything. You must run the above code (just once!) before the logging module is attempted to be used anywhere. Here is a test:
import logging

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(levelname)-8s %(message)s',
                    datefmt='%a, %d %b %Y %H:%M:%S',
                    filename='/temp/myapp.log',
                    filemode='w')
logging.debug('A debug message')
logging.info('Some information')
logging.warning('A shot across the bows')
With the above, nothing at all was logged, as was to be expected.
I'd use some fancy logging decorator, or a bunch of them:
import time

def doLogging(logTreshold):
    def logFunction(aFunc):
        def innerFunc(*args, **kwargs):
            if LOGLEVEL >= logTreshold:
                print ">>Called %s at %s" % (aFunc.__name__, time.strftime("%H:%M:%S"))
                print ">>Parameters: ", args, kwargs if kwargs else ""
            try:
                return aFunc(*args, **kwargs)
            finally:
                print ">>%s took %s" % (aFunc.__name__, time.strftime("%H:%M:%S"))
        return innerFunc
    return logFunction
All you need is to declare LOGLEVEL constant in each module (or just globally and just import it in all modules) and then you can use it like this:
@doLogging(2.5)
def myPreciousFunction(one, two, three=4):
    print "I'm doing some fancy computations :-)"
    return
And if LOGLEVEL is no less than 2.5 you'll get output like this:
>>Called myPreciousFunction at 18:49:13
>>Parameters: (1, 2)
I'm doing some fancy computations :-)
>>myPreciousFunction took 18:49:13
As you can see, some work is needed for better handling of kwargs, so the default values will be printed if they are present, but that's another question.
You should probably use some logger module instead of raw print statements, but I wanted to focus on the decorator idea and avoid making code too long.
Anyway, with such a decorator you get function-level logging, arbitrarily many log levels, and easy application to new functions; to disable logging you only need to set LOGLEVEL. And you can define different output streams/files for each function if you wish. You can write doLogging as:
def doLogging(logThreshold, outStream=sys.stdout):
    .....
    print >>outStream, ">>Called %s at %s" etc.
And utilize log files defined on a per-function basis.
This is an issue in my project as well--logging ends up on profiler reports pretty consistently.
I've used the _ast module before in a fork of PyFlakes (http://github.com/kevinw/pyflakes) ... and it is definitely possible to do what you suggest in your question--to inspect and inject guards before calls to logging methods (with your acknowledged caveat that you'd have to do some runtime type checking). See http://pyside.blogspot.com/2008/03/ast-compilation-from-python.html for a simple example.
Edit: I just noticed MetaPython on my planetpython.org feed--the example use case is removing log statements at import time.
Maybe the best solution would be for someone to reimplement logging as a C module, but I wouldn't be the first to jump at such an...opportunity :p
:-) We used to call that a preprocessor and although C's preprocessor had some of those capabilities, the "king of the hill" was the preprocessor for IBM mainframe PL/I. It provided extensive language support in the preprocessor (full assignments, conditionals, looping, etc.) and it was possible to write "programs that wrote programs" using just the PL/I PP.
I wrote many applications with full-blown sophisticated program and data tracing (we didn't have a decent debugger for a back-end process at that time) for use in development and testing which then, when compiled with the appropriate "runtime flag" simply stripped all the tracing code out cleanly without any performance impact.
I think the decorator idea is a good one. You can write a decorator to wrap the functions that need logging. Then, for runtime distribution, the decorator is turned into a "no-op" which eliminates the debugging statements.
Jon R
I am doing a project currently that uses extensive logging for testing logic and execution times for a data analysis API using the Pandas library.
I found this thread with a similar concern, e.g. what is the overhead of logging.debug statements even if the logging.basicConfig level is set to level=logging.WARNING.
I have resorted to writing the following script to comment out or uncomment the debug logging prior to deployment:
import os
import fileinput

comment = True

# exclude files or directories matching string
fil_dir_exclude = ["__", "_archive", ".pyc"]

if comment:
    ## Variables to comment
    source_str = 'logging.debug'
    replace_str = '#logging.debug'
else:
    ## Variables to uncomment
    source_str = '#logging.debug'
    replace_str = 'logging.debug'

# walk through directories
for root, dirs, files in os.walk('root/directory'):
    # where files exist
    if files:
        # for each file
        for file_single in files:
            # build full file name
            file_name = os.path.join(root, file_single)
            # exclude files with matching string
            if not any(exclude_str in file_name for exclude_str in fil_dir_exclude):
                # replace string in line
                for line in fileinput.input(file_name, inplace=True):
                    print "%s" % (line.replace(source_str, replace_str)),
This is a file recursion that excludes files based on a list of criteria and performs an in place replace based on an answer found here: Search and replace a line in a file in Python
I like the 'if __debug__' solution except that putting it in front of every call is a bit distracting and ugly. I had this same problem and overcame it by writing a script which automatically parses your source files and replaces logging statements with pass statements (and commented out copies of the logging statements). It can also undo this conversion.
I use it when I deploy new code to a production environment when there are lots of logging statements which I don't need in a production setting and they are affecting performance.
You can find the script here: http://dound.com/2010/02/python-logging-performance/

Python: Warnings and logging verbose limit

I want to unify the whole logging facility of my app. Any warning raises an exception; I then catch it and pass it to the logger. But the question is: does logging have any mute facility? Sometimes the logger becomes too verbose. And since warnings can be too noisy as well, is there any way to limit their verbosity?
http://docs.python.org/library/logging.html
http://docs.python.org/library/warnings.html
Not only are there log levels, but there is a really flexible way of configuring them. If you are using named logger objects (e.g., logger = logging.getLogger(...)) then you can configure them appropriately. That will let you configure verbosity on a subsystem-by-subsystem basis where a subsystem is defined by the logging hierarchy.
The other option is to use logging.Filter and Warning filters to limit the output. I haven't used this method before but it looks like it might be a better fit for your needs.
Give PEP-282 a read for a good prose description of the Python logging package. I think that it describes the functionality much better than the module documentation does.
Edit after Clarification
You might be able to handle the logging portion of this using a custom class based on logging.Logger and registered with logging.setLoggerClass(). It really sounds like you want something similar to syslog's "Last message repeated 9 times". Unfortunately I don't know of an implementation of this anywhere. You might want to see if twisted.python.log supports this functionality.
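As a rough illustration of the filtering idea (my own sketch, not an existing implementation of the "last message repeated N times" behaviour), a logging.Filter can at least drop consecutive duplicates:
import logging

class DedupFilter(logging.Filter):
    """Drop a record if it repeats the immediately preceding one."""

    def __init__(self):
        super().__init__()
        self._last = None

    def filter(self, record):
        key = (record.levelno, record.getMessage())
        if key == self._last:
            return False  # suppress the consecutive duplicate
        self._last = key
        return True

noisy_logger = logging.getLogger("noisy.subsystem")  # hypothetical logger name
noisy_logger.addFilter(DedupFilter())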
From the very source you mentioned: there are the log levels; use them wisely ;-)
LEVELS = {'debug': logging.DEBUG,
          'info': logging.INFO,
          'warning': logging.WARNING,
          'error': logging.ERROR,
          'critical': logging.CRITICAL}
This will be a problem if you plan to make all logging calls from some blind error handler that doesn't know anything about the code that raised the error, which is what your question sounds like. How will you decide which logging calls get made and which don't?
The more standard practice is to use such blocks to recover if possible, and log an error (really, if it is an error that you weren't specifically prepared for, you want to know about it; use a high level). But don't rely on these blocks for all your state/debug information. Better to sprinkle your code with logging calls before it gets to the error-handler. That way, you can observe useful run-time information about a system when it is NOT failing and you can make logging calls of different severity. For example:
import logging
from traceback import format_exc

logger = logging.getLogger()  # Gives the root logger. Change this for better organization
# Add your appenders or what have you

def handle_error(e):
    logger.error("Unexpected error found")
    logger.warning(format_exc())  # put the traceback in the log at a lower level
    ...  # Your recovery code

def do_stuff():
    logger.info("Program started")
    ...  # Your main code
    logger.info("Stuff done")

if __name__ == "__main__":
    try:
        do_stuff()
    except Exception as e:
        handle_error(e)
