I'm trying to broadcast my error log with a SocketHandler. It works great if I put this in the main.py of my App Engine project, but if I write a simple standalone Python script and run it, it doesn't seem to broadcast the logs successfully.
import logging
import logging.handlers
sh = logging.handlers.SocketHandler('localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT)
rootLogger = logging.getLogger('')
rootLogger.addHandler(sh)
logging.info(['Test', 'This', 'List'])
logging.info(dict(Test = 'Blah', Test2 = 'Blah2', Test3 = dict(Test4 = 'Blah4')))
I'm at a loss as to how to go about debugging this. Any ideas?
The default level for logging is WARNING, so INFO messages will not be seen by default. You need a rootLogger.setLevel(logging.INFO) to see INFO messages, for example, or set the level to logging.DEBUG to see all messages.
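For example, adding this line to the script above should let the INFO records through to the SocketHandler:
rootLogger.setLevel(logging.INFO)  # the root logger defaults to WARNING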
Update: It's hard to see how logging.error() would fail to output where logging.info() does, unless there is a filter involved or an error during the logging.error() call. Please post a runnable script which demonstrates the issue when run with the logging socket server described in the docs.
Further update: logging doesn't automatically capture exceptions. To log an exception, the usual thing for the developer to do is to use a try: except: block and to call logging.exception(...) in the exception handling code.
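A minimal sketch of that pattern:
import logging

try:
    1 / 0
except ZeroDivisionError:
    # logging.exception() logs at ERROR level and appends the traceback
    logging.exception('Calculation failed')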
As recommended by Sentry's docs [1][2] for their new unified Python SDK (sentry_sdk), I've configured it with my Django application to capture events on all exceptions or "error"-level logs:
import sentry_sdk
import logging
from sentry_sdk.integrations.django import DjangoIntegration
from sentry_sdk.integrations.logging import LoggingIntegration
sentry_logging = LoggingIntegration(
    level=logging.DEBUG,        # capture DEBUG and above as breadcrumbs
    event_level=logging.ERROR   # send records at ERROR and above as events
)
sentry_sdk.init(
    dsn="{{sentry_dsn}}",
    integrations=[DjangoIntegration(), sentry_logging]
)
However, since this hooks directly into Python's logging module and internal exception handling, anything that uses this Django environment will ship events to Sentry. There are some tasks (such as interactive manage.py commands, or work in the REPL) that need the Django environment, but for which I don't want events created in Sentry.
Is there a way to indicate to sentry that I'd like it to not capture events from exceptions or logging calls for the current task? Or a way to temporarily disable it after it's been globally configured?
You can run sentry_sdk.init() (notably without a DSN) again to disable the SDK.
The accepted answer to call init with no args did not work for me.
I had to explicitly pass an empty DSN:
import sentry_sdk
sentry_sdk.init(dsn="")
Tested with sentry_sdk 0.19.1:
>>> import sentry_sdk
>>> sentry_sdk.Hub.current.client.dsn
"https://...redacted...#sentry.io/..."
>>> sentry_sdk.init(dsn="")
>>> sentry_sdk.Hub.current.client.dsn
''
By contrast, after calling init() with no arguments, the DSN remained unchanged.
Maybe there’s a better way, but in any file you can import logging and disable it like so: logging.disable(logging.CRITICAL). This disables logging at levels at or below the given parameter (and since CRITICAL is the highest level, it disables all logging).
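For example:
import logging

logging.disable(logging.CRITICAL)  # silence CRITICAL and everything below it
logging.error('this is swallowed')
logging.disable(logging.NOTSET)    # lift the restriction again
logging.error('this is emitted')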
According to the .NET docs (https://getsentry.github.io/sentry-dotnet/api/Sentry.SentrySdk.html#Sentry_SentrySdk_Init_System_String_), "An empty string is interpreted as a disabled SDK". Yes, I know this is a Python question, but their API is generally consistent across languages.
Very late to the party, but for anyone who comes to this from April 2021 onwards, you can simply:
import sentry_sdk
sentry_sdk.Hub.current.stop_auto_session_tracking()
The call to stop_auto_session_tracking() will stop Sentry events as long as you call it (1) after sentry_sdk.init() and (2) before any errors occur in your application.
I seem to be running into a problem when I am logging data after invoking another module in an application I am working on. I'd like assistance in understanding what may be happening here.
To replicate the issue, I have developed the following script...
#!/usr/bin/python
import sys
import logging
from oletools.olevba import VBA_Parser, VBA_Scanner
from cloghandler import ConcurrentRotatingFileHandler
# set up logger for application
dbg_h = logging.getLogger('dbg_log')
dbglog = 'dbg.log'
dbg_rotateHandler = ConcurrentRotatingFileHandler(dbglog, "a")
dbg_h.addHandler(dbg_rotateHandler)
dbg_h.setLevel(logging.ERROR)
# read some document as a buffer
buff = sys.stdin.read()
# generate issue
dbg_h.error('Before call to module....')
vba = VBA_Parser('None', data=buff)
dbg_h.error('After call to module....')
When I run this, I get the following...
cat somedocument.doc | ./replicate.py
ERROR:dbg_log:After call to module....
For some reason, my last dbg_h write attempt is being output to the console as well as written to my dbg.log file. This only appears to happen AFTER the call to VBA_Parser.
cat dbg.log
Before call to module....
After call to module....
Anyone have any idea as to why this might be happening? I reviewed the source code of olevba and did not see anything that stuck out to me specifically.
Could this be a problem I should raise with the module author? Or am I doing something wrong with how I am using the cloghandler?
The oletools codebase is littered with calls to the root logger through calls to logging.debug(...), logging.error(...), and so on. Since the author didn't configure the root logger, the default behavior is to dump to sys.stderr. And since sys.stderr defaults to the console when running from the command line, you get what you're seeing.
You should contact the author of oletools, since they're not using the logging system effectively. Ideally they would use a named logger and push their messages to that logger. As a workaround to suppress the messages, you could configure the root logger to use your handler.
# Attach your handler to the root logger so oletools' output goes there too
logging.root.addHandler(dbg_rotateHandler)
Be aware that this may lead to duplicated log messages.
import traceback

try:
    print blah
except KeyError:
    traceback.print_exc()
I used to debug like this, printing to the console. Now I want to log everything instead of printing, since Apache doesn't allow printing. So how do I log this entire traceback?
You can use Python's logging mechanism:
import logging
...
logger = logging.getLogger("blabla")
...
try:
    print blah  # You can use logger.debug("blah") instead of print
except KeyError:
    logger.exception("An error occurred")
This will print the stack trace and will work with apache.
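Note that a handler still has to be attached somewhere for the record to end up in a file; a minimal sketch, assuming a hypothetical /tmp/myapp.log path:
import logging

logging.basicConfig(filename='/tmp/myapp.log', level=logging.DEBUG)
logger = logging.getLogger("blabla")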
If you're running Django's trunk version (or 1.3 when it's released), there are a number of default logging configurations built in which are integrated with Python's standard logging module. For that, all you need to do is import logging, call logger = logging.getLogger(__name__) and then call logger.exception(msg) and you'll get both your message and a stack trace. Docs for Django's logging functionality and Python's logger.exception method would be handy references.
If you don't want to use Python's logging module, you can also import sys and write to sys.stderr rather than using print. On the command line this goes to the screen, when running under Apache it'll go into your error logs.
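A rough sketch of that approach, using traceback.format_exc() to get the stack trace as a string (the failing name is made up):
import sys
import traceback

try:
    blah  # hypothetical undefined name
except Exception:
    # under Apache this ends up in the server's error log
    sys.stderr.write(traceback.format_exc())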
I am currently writing an API and an application which uses the API. I have gotten suggestions from people stating that I should perform logging using handlers in the application and use a "logger" object for logging from the API.
In light of the advice I received above, is the following implementation correct?
class test:
    def __init__(self, verbose):
        self.logger = logging.getLogger("test")
        self.logger.setLevel(verbose)

    def do_something(self):
        # do something
        self.logger.log("something")
        # by doing this I get the error message "No handlers could be found for logger "test""
The implementation I had in mind was as follows:
#!/usr/bin/python
"""
....
....
create a logger with a handler
....
....
"""
myobject = test()
try:
    myobject.do_something()
except SomeError:
    logger.log("cant do something")
I'd like to get my basics right, so I'd be grateful for any help, and for any code you might recommend I look at.
Thanks!
It's not very clear whether your question is about the specifics of how to use logging or about logging exceptions, but if the latter, I would agree with Adam Crossland that log-and-swallow is a pattern to be avoided.
In terms of the mechanics of logging, I would make the following observations:
You don't need to have a logger as an instance member. It's more natural to declare loggers at module level using logger = logging.getLogger(__name__), and this will also work as expected in sub-packages.
Your call logger.log("message") would likely fail anyway, because the log method takes a level as its first argument rather than a message.
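For example:
logger.log(logging.INFO, "something")  # or, equivalently, logger.info("something")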
You should declare handlers, and if your usage scenario is fairly simple you can do this in your main method or if __name__ == '__main__': clause by adding for example
logging.basicConfig(filename='/tmp/myapp.log', level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(name)s %(message)s')
and then elsewhere in your code, just do for example
import logging
logger = logging.getLogger(__name__)
once at the top of each module where you want to use logging, and then
logger.debug('message with %s', 'arguments') # or .info, .warning, .error etc.
in your code wherever needed.
The danger with the pattern that you are thinking about is that you may end up effectively hiding exceptions by putting them in a log. Many exceptions really should crash your program because they represent a problem that needs to be fixed. Generally, it is more useful to be able to step into code with a debugger to find out what caused the exception.
If there are cases that an exception represents an expected condition that does not affect the stability of the app or the correctness of its behavior, doing nothing but writing a notation to the log is OK. But be very, very careful about how you use this.
I usually do the following:
import logging
import logging.config
logging.config.fileConfig('log.config')
# for one line log records
G_LOG = logging.getLogger(__name__)
# for records with stacktraces
ST_LOG = logging.getLogger('stacktrace.' + __name__)
try:
    # some code
    G_LOG.info('some message %s %s', param1, param2)
except (StandardError,):
    message = 'some message'
    G_LOG.error(message)
    # exc_info appends the stacktrace to the log message
    ST_LOG.error(message, exc_info=True)
The format of the config file is described in the Python manual.
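For reference, a minimal sketch of such a file (the handler and format choices here are just examples):
[loggers]
keys=root

[handlers]
keys=console

[formatters]
keys=simple

[logger_root]
level=DEBUG
handlers=console

[handler_console]
class=StreamHandler
level=DEBUG
formatter=simple
args=(sys.stderr,)

[formatter_simple]
format=%(asctime)s %(levelname)s %(name)s %(message)s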
I want to unify the whole logging facility of my app. Any warning raises an exception, which I then catch and pass to the logger. But here is my question: does logging have any mute facility? The logger sometimes becomes too verbose. Also, since warnings can be too noisy, is there any verbosity limit in the warnings module?
http://docs.python.org/library/logging.html
http://docs.python.org/library/warnings.html
Not only are there log levels, but there is a really flexible way of configuring them. If you are using named logger objects (e.g., logger = logging.getLogger(...)) then you can configure them appropriately. That will let you configure verbosity on a subsystem-by-subsystem basis where a subsystem is defined by the logging hierarchy.
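For example (the subsystem names here are made up):
import logging

logging.basicConfig(level=logging.DEBUG)
# quieten one noisy subsystem without touching the rest of the hierarchy
logging.getLogger('myapp.network').setLevel(logging.WARNING)
logging.getLogger('myapp.db').setLevel(logging.DEBUG)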
The other option is to use logging.Filter objects and warnings filters to limit the output. I haven't used this method before, but it looks like it might be a better fit for your needs.
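A minimal logging.Filter sketch that drops records containing a given substring (the substring is invented for illustration):
import logging

class NoiseFilter(logging.Filter):
    def filter(self, record):
        # returning False drops the record
        return 'retrying connection' not in record.getMessage()

handler = logging.StreamHandler()
handler.addFilter(NoiseFilter())
logging.getLogger().addHandler(handler)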
Give PEP-282 a read for a good prose description of the Python logging package. I think that it describes the functionality much better than the module documentation does.
Edit after Clarification
You might be able to handle the logging portion of this using a custom class based on logging.Logger and registered with logging.setLoggerClass(). It really sounds like you want something similar to syslog's "Last message repeated 9 times". Unfortunately I don't know of an implementation of this anywhere. You might want to see if twisted.python.log supports this functionality.
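As a rough sketch of the idea (without the repeat count that syslog prints), a filter that suppresses consecutive duplicate messages might look like this:
import logging

class DedupFilter(logging.Filter):
    """Drop a record whose message is identical to the previous one."""
    def __init__(self):
        logging.Filter.__init__(self)
        self.last = None

    def filter(self, record):
        current = record.getMessage()
        if current == self.last:
            return False  # suppress the consecutive repeat
        self.last = current
        return True

# attach to whichever logger is noisy; the name here is hypothetical
logging.getLogger('chatty.subsystem').addFilter(DedupFilter())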
From the very source you mentioned: there are the log levels, use them wisely ;-)
LEVELS = {'debug': logging.DEBUG,
'info': logging.INFO,
'warning': logging.WARNING,
'error': logging.ERROR,
'critical': logging.CRITICAL}
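For example, these map naturally onto a command-line argument (building on the LEVELS dict above):
import logging
import sys

# e.g. invoked as: python app.py debug
level_name = sys.argv[1] if len(sys.argv) > 1 else 'warning'
logging.basicConfig(level=LEVELS.get(level_name, logging.WARNING))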
This will be a problem if you plan to make all logging calls from some blind error handler that doesn't know anything about the code that raised the error, which is what your question sounds like. How will you decide which logging calls get made and which don't?
The more standard practice is to use such blocks to recover if possible, and log an error (really, if it is an error that you weren't specifically prepared for, you want to know about it; use a high level). But don't rely on these blocks for all your state/debug information. Better to sprinkle your code with logging calls before it gets to the error-handler. That way, you can observe useful run-time information about a system when it is NOT failing and you can make logging calls of different severity. For example:
import logging
from traceback import format_exc
logger = logging.getLogger() # Gives the root logger. Change this for better organization
# Add your appenders or what have you
def handle_error(e):
    logger.error("Unexpected error found")
    logger.warning(format_exc())  # put the traceback in the log at a lower level
    ...  # Your recovery code

def do_stuff():
    logger.info("Program started")
    ...  # Your main code
    logger.info("Stuff done")

if __name__ == "__main__":
    try:
        do_stuff()
    except Exception as e:
        handle_error(e)