import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
logger.info('good')
logger.warning('bad')
It only prints 'bad', not 'good'. What's the issue with setLevel(logging.INFO)?
You never configured handlers, so the logging system is using the "last resort" handler, which defaults to a level of WARNING. Even if the logger levels say to log a message, it won't be handled unless there's a handler configured to handle it.
Run logging.basicConfig with the settings you want, to perform basic handler configuration.
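For example, a minimal sketch:
import logging
logging.basicConfig(level=logging.INFO)  # attaches a StreamHandler to the root logger
logger = logging.getLogger(__name__)
logger.info('good')     # now handled and printed
logger.warning('bad')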
From my perspective, I would initialize a logger to log messages for my app:
import logging
import sys

logger = logging.getLogger('my_app')
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter(
    '%(asctime)s %(levelname)s %(name)s %(message)s'))
logger.addHandler(handler)
When errors happen:
try:
    blah()
except Exception as e:
    logger.warning(e)
But I'm using third-party modules like sqlalchemy. sqlalchemy may log warnings when an error happens (e.g., a varchar is too long and gets truncated), and it uses a separate logger (as do some other modules, like requests):
sqlalchemy/log.py
This may lead to some hard-to-track issues.
Since I'm using lots of third-party modules, how can I log all third-party messages to a separate file to help with troubleshooting?
You can set up a handler to log to a file and attach it to the root logger, but that will log all messages, even ones from your code. To leave your messages out of that file, use a logging.Filter subclass that rejects all messages coming from one of your code's top-level package namespaces. Attach that filter to your handler.
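A minimal sketch, assuming your code lives under a top-level package called my_app (the class name and file name are illustrative):
import logging

class ExcludeOwnCodeFilter(logging.Filter):
    # Reject records originating from our own package ('my_app' is a
    # placeholder) so only third-party messages end up in the file.
    def filter(self, record):
        return not (record.name == 'my_app' or record.name.startswith('my_app.'))

third_party_handler = logging.FileHandler('third_party.log')
third_party_handler.addFilter(ExcludeOwnCodeFilter())
logging.getLogger().addHandler(third_party_handler)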
I am starting to explore Python 3.5's logging package. I set logging up with these two commands in the main file
import logging
logging.basicConfig(filename=r'fractal.log', level=logging.DEBUG)
In addition I inserted
import logging
in another module.
All my desired log entries showed up correctly in the log file, but in addition to those I got these two, which, AFAIK, were not generated by my logging code:
DEBUG:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): ichart.finance.yahoo.com
DEBUG:requests.packages.urllib3.connectionpool:http://ichart.finance.yahoo.com:80 "GET /table.csv?d=11&b=1&e=6&s=%5EIRX&g=d&c=1900&a=0&f=2016&ignore=.csv HTTP/1.1" 200 None
These log entries were clearly not generated in the main module. Also, they do not show up when I set the logging level to INFO.
Does anyone know why I am getting these log entries? Are they evidence of problems with my code?
This is because you are setting the level of the root logger, and the requests module delegates all logging to it. You can fix this by explicitly raising the level of the requests logger:
logging.getLogger('requests').setLevel(logging.ERROR)
This will silence all the DEBUG-level messages from requests.
Alternatively, you could use your own logger instead of relying on the root logger. There are various ways to do this (e.g. logging.config.dictConfig), but you can do it manually, along the lines of:
import logging
logger = logging.getLogger(__name__)
logger.addHandler(logging.FileHandler('fractal.log'))
logger.setLevel(logging.DEBUG)
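For instance, a minimal dictConfig sketch of the same setup (the logger name '__main__' is whatever your module's __name__ resolves to):
import logging.config

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'file': {
            'class': 'logging.FileHandler',
            'filename': 'fractal.log',
            'level': 'DEBUG',
        },
    },
    'loggers': {
        '__main__': {  # replace with your module's name
            'handlers': ['file'],
            'level': 'DEBUG',
        },
    },
})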
I am new to this logging module.
logging.basicConfig(level=logging.DEBUG)
logging.disable = True
As per my understanding this should disable the debug logs, but when it is executed it prints the debug logs anyway.
I only have debug logs to print; I don't have critical or info logs. So how can I disable these debug logs?
logging.disable is a function, not an attribute you can assign to.
You can disable logging with:
https://docs.python.org/2/library/logging.html#logging.disable
To disable all, call:
logging.disable(logging.DEBUG)
This will disable all logs of level DEBUG and below.
To enable all logging, do logging.disable(logging.NOTSET) as it is the lowest level.
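For example, a short sketch of the effect:
import logging
logging.basicConfig(level=logging.DEBUG)
logging.disable(logging.DEBUG)   # suppresses DEBUG and below
logging.debug('hidden')          # not printed
logging.info('still printed')
logging.disable(logging.NOTSET)  # re-enables everything
logging.debug('printed again')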
The level argument in logging.basicConfig, which you've set to logging.DEBUG, is the lowest level of logging that will be displayed.
The order of the logging levels is documented here.
If you don't want to display DEBUG, you can either set logging.basicConfig(level=logging.INFO) or disable the level via logging.disable(logging.DEBUG).
You can also change to level=logging.CRITICAL to receive only critical logs.
I have a Pylons/TurboGears app. I would like to log the same logger (as specified by the qualname property) to use two different log handlers, each with their own log level.
The Sentry / Raven logger should receive only WARN+ level SQLAlchemy messages, and the console logger should receive INFO+ level SQLAlchemy messages.
Here's my abbreviated ini file:
[loggers]
keys = root, sqlalchemy_console, sqlalchemy_sentry
[handlers]
keys = console, sentry
[formatters]
keys = generic
[logger_root]
level = INFO
handlers = console, sentry
[logger_sqlalchemy_console]
level = INFO
handlers = console
qualname = sqlalchemy.engine
propagate = 0
[logger_sqlalchemy_sentry]
level = WARN
handlers = sentry
qualname = sqlalchemy.engine
propagate = 0
However, the logger_sqlalchemy_sentry seems to override logger_sqlalchemy_console and steal its messages. This occurs regardless of the order of loggers in the ini file.
Is it possible using Pylons to log the same logger/qualname to multiple places with different levels?
If so, is it possible for Sentry/Raven to be one of those loggers? Is there something wrong with my ini file, or is there a bug in Raven?
The problem you're having is that you're configuring the sqlalchemy.engine logger twice. The logger sections correspond to instances of logging.Logger, the objects returned by logging.getLogger(qualname). Only one object can be returned by that call; you can't possibly set up more than one of them with the same qualname.
What you need is multiple handlers for that logger, in the same way that you gave your root logger multiple handlers. You can then specify the desired log level on the individual handlers.
Unfortunately, fileConfig() doesn't give you an easy way to configure the same handler with different log levels depending on the logger that originated the record; you'll need to set up duplicate handler sections for both the root and sqlalchemy.engine loggers in order to give them different log levels.
You're getting loggers and handlers mixed up - as TokenMacGuy says, you need two handlers for the logger named sqlalchemy.engine. Configure the StreamHandler (console) with level INFO and a SentryHandler (sentry) with level WARNING, and the sqlalchemy.engine logger with a level of DEBUG or INFO. Then you should get the desired result. (Even when DEBUG messages are logged, the levels on the handlers will prevent them from emitting records below their own level.)
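An abbreviated sketch of the corrected ini (the Sentry handler class path is based on the raven library, and the DSN argument is a placeholder):
[loggers]
keys = root, sqlalchemy

[logger_sqlalchemy]
level = INFO
handlers = console, sentry
qualname = sqlalchemy.engine
propagate = 0

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = INFO
formatter = generic

[handler_sentry]
class = raven.handlers.logging.SentryHandler
; placeholder DSN
args = ('YOUR_SENTRY_DSN',)
level = WARN
formatter = generic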
Unless you're limited to Python 2.6 or earlier, it's worth considering using the logging.config.dictConfig API in preference to logging.config.fileConfig, if you can. The dictConfig API allows better control over logging configuration than the older fileConfig API, which will not be further developed.
I'm looking at how to log to syslog from within my Python app, and I found there are two ways of doing it:
Using syslog.syslog() routines
Using the logger module SysLogHandler
Which is the best option to use, advantages/disadvantages of each one, etc, because I really don't know which one should I use.
syslog.syslog() can only be used to send messages to the local syslogd. SysLogHandler can be used as part of a comprehensive, configurable logging subsystem, and can log to remote machines.
The logging module is a more comprehensive solution that can potentially handle all of your log messages, and it is very flexible. For instance, you can set up multiple handlers for your logger, and each can be set to log at a different level. You can have a SysLogHandler for sending errors to syslog, a FileHandler for debugging logs, and an SMTPHandler to email the really critical messages to ops. You can also define a hierarchy of loggers within your modules, each with its own level, so you can enable/disable messages from specific modules, such as:
import logging
logger = logging.getLogger('package.stable_module')
logger.setLevel(logging.WARNING)
And in another module:
import logging
logger = logging.getLogger('package.buggy_module')
logger.setLevel(logging.DEBUG)
The log messages in both of these modules will be sent, depending on their level, to the 'package' logger and ultimately to the handlers you've defined. You can also add handlers directly to the module loggers, and so on. If you've followed along this far and are still interested, I recommend jumping to the logging tutorial for more details.
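For instance, a minimal sketch of the multi-handler setup described above (the syslog socket path, file name, mail host, and addresses are placeholders):
import logging
import logging.handlers

logger = logging.getLogger('package')
logger.setLevel(logging.DEBUG)

# errors and above go to the local syslog daemon ('/dev/log' is Linux-specific)
syslog_handler = logging.handlers.SysLogHandler(address='/dev/log')
syslog_handler.setLevel(logging.ERROR)
logger.addHandler(syslog_handler)

# everything from DEBUG up goes to a file
file_handler = logging.FileHandler('debug.log')
file_handler.setLevel(logging.DEBUG)
logger.addHandler(file_handler)

# critical messages are emailed to ops (placeholder host and addresses)
smtp_handler = logging.handlers.SMTPHandler(
    'localhost', 'app@example.com', ['ops@example.com'], 'Critical error')
smtp_handler.setLevel(logging.CRITICAL)
logger.addHandler(smtp_handler)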
So far, one disadvantage of logging.handlers.SysLogHandler has not been mentioned: you can't set options like LOG_ODELAY, LOG_NOWAIT, or LOG_PID. On the other hand, LOG_CONS and LOG_PERROR can be achieved by adding more handlers, and LOG_NDELAY is effectively already set, because the connection opens when the handler is instantiated.
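For comparison, a minimal sketch of setting those options with the syslog module directly ('my_app' is a placeholder ident):
import syslog

# syslog.openlog() exposes the options SysLogHandler does not
syslog.openlog(ident='my_app', logoption=syslog.LOG_PID | syslog.LOG_NDELAY,
               facility=syslog.LOG_USER)
syslog.syslog(syslog.LOG_WARNING, 'something happened')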