I don't know why it can't log that message; I think everything is set correctly, and logging.DEBUG is defined in the logging module.
import logging
import sys
logger = logging.getLogger('collega_GUI')
handler = logging.StreamHandler(sys.stdout)
handler.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s %(levelname)s --file: %(module)s --riga: %(lineno)d, %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.debug('def __init__')
But if I run this one instead, it works:
logger.warning('def __init__')
Where is the problem with this level variable?
The problem is that the debug-level message was filtered out by the logger before it ever reached the handler. The fix is to change handler.setLevel(logging.DEBUG) to logger.setLevel(logging.DEBUG).
You can filter by log level in several different places as a log message is passed down the chain. By default, loggers only pass WARNING and above, and handlers accept everything. Allowing handlers to use different log levels is useful if you want different levels of logging to go to different places. For example, you could set your logger to DEBUG and then create one handler that logs to the screen at WARNING and above, and another handler that logs to a file at DEBUG and above. The user gets a little info and the log file is chatty.
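A minimal sketch of that split-level setup (the logger name and log file name here are just illustrative):

import logging
import sys

logger = logging.getLogger('collega_GUI')
logger.setLevel(logging.DEBUG)             # the logger passes everything from DEBUG up

# console handler: only WARNING and above reach the screen
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setLevel(logging.WARNING)
logger.addHandler(console_handler)

# file handler: the full DEBUG-level detail goes to the file
file_handler = logging.FileHandler('collega_GUI.log')
file_handler.setLevel(logging.DEBUG)
logger.addHandler(file_handler)

logger.debug('only in the file')
logger.warning('in the file and on the screen')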
I've been using a logger to log messages to a file and also output them to the console. I don't believe I did anything wrong, but the log file hasn't been created for a few of the runs that I did.
The code I have is:
log_msg_format = '[%(asctime)s - %(levelname)s - %(filename)s: %(lineno)d] %(message)s'
handlers = [logging.FileHandler(filename=args.log_filename), logging.StreamHandler()]
logging.basicConfig(format=log_msg_format,
                    level=logging.INFO,
                    handlers=handlers)
When I check the logging level for each of the handlers, they are both NOTSET. I thought that this might be the problem, but the script still outputs log messages to the console, so that shouldn't be the issue.
What might be going wrong?
I just started looking into the logging module and created a dummy program to understand the logger, handler, and formatter. Here is the code:
# logging_example.py
import logging
from datetime import datetime
import os
extension = datetime.now().strftime("%d-%b-%Y_%H_%M_%S_%p")
logfile = os.path.join("logs", f"demo_logging_{extension}.txt")
logger = logging.getLogger(__name__)
ch = logging.StreamHandler()
fh = logging.FileHandler(logfile)
ch.setLevel(logging.DEBUG)
fh.setLevel(logging.DEBUG)
formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
ch.setFormatter(formatter)
fh.setFormatter(formatter)
logger.addHandler(ch)
logger.addHandler(fh)
logger.info("Hello World")
When I execute the program, the logs directory has the files, but their content is empty and nothing gets printed on the screen either. I am pretty sure I am missing something basic, but I am not able to catch it. :(
I would appreciate any help.
Thank you
You have set the log level on the handlers but not on the logger. That means the handlers would have logged the message had the logger passed it on, but since the logger's threshold is higher, the message got dropped.
Set the log level on the logger as well.
After adding the handlers:
logger.setLevel(logging.DEBUG)
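Putting it together, a minimal corrected sketch of the example above (only the logger threshold is new):

import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)   # without this, the inherited WARNING threshold drops INFO and DEBUG

ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
ch.setFormatter(logging.Formatter("%(asctime)s - %(levelname)s - %(message)s"))
logger.addHandler(ch)

logger.info("Hello World")       # now reaches the handler and is printed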
I'm using Python's Tornado library for my web service and I want every log created by my code, as well as by Tornado, to be JSON formatted. I've tried setting the formatter on the root logger and on all the other loggers. This is the hack I'm currently trying to get working; it seems like it should work, yet when I run the app, all of the logs from Tornado are still in their standard format.
import logging
from tornado.log import access_log, app_log, gen_log
import logmatic
loggers = [
    logging.getLogger(),
    logging.getLogger('tornado.access'),
    logging.getLogger('tornado.application'),
    logging.getLogger('tornado.general'),
    access_log,
    gen_log,
    app_log
]
json_formatter = logmatic.JsonFormatter()
for logger in loggers:
    for hand in logger.handlers:
        hand.setFormatter(json_formatter)
logging.getLogger('tornado.access').warning('All the things')
# WARNING:tornado.access (172.26.0.6) 0.47ms
# NOT JSON???
NOTE: When I include the loggers for my service, logging.getLogger('myservice'), in the loggers list and run this, they do get the updated formatter and emit JSON. That rules out problems with the logmatic formatter; I just can't get it to work for the Tornado loggers.
Tornado's loggers don't have any handlers before loop.start() is called, so setting formatters on their (not yet existing) handlers has no effect. Instead, add a handler with the desired formatting to those loggers yourself:
formatter = logging.Formatter(...)
handler = logging.StreamHandler()
handler.setFormatter(formatter)
for l in loggers:
    l.setLevel(logging.WARNING)
    l.addHandler(handler)
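If the goal is the JSON output from the question, the same approach applies with the logmatic formatter on the new handler instead of a plain Formatter (a sketch, assuming the logmatic package from the question):

import logging
import logmatic  # third-party formatter used in the question

json_handler = logging.StreamHandler()
json_handler.setFormatter(logmatic.JsonFormatter())

for l in loggers:                 # the Tornado loggers collected earlier
    l.setLevel(logging.WARNING)
    l.addHandler(json_handler)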
I am trying to use logging in Python to write some logs, but strangely only the error is logged; the info is ignored no matter which level I set.
code:
import logging
import logging.handlers
if __name__ == "__main__":
    logger = logging.getLogger()
    fh = logging.handlers.RotatingFileHandler('./logtest.log', maxBytes=10240, backupCount=5)
    fh.setLevel(logging.DEBUG)  # no matter what level I set here
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    logger.addHandler(fh)
    logger.info('INFO')
    logger.error('ERROR')
The result is:
2014-01-14 11:47:38,990 - root - ERROR - ERROR
According to http://docs.python.org/2/library/logging.html#logging-levels, the INFO message should be logged too.
The problem is that the logger's level is still set to the default. So the logger discards the message before it even gets to the handlers. The fact that the handler would have accepted the message if it received it doesn't matter, because it never receives it.
So, just add this:
logger.setLevel(logging.INFO)
As the docs explain, the logger's default level is NOTSET, which means it checks with its parent, which is the root, which has a default of WARNING.
And you can probably leave the handler at its default of NOTSET, which means it defers to the logger's filtering.
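Applied to the code in the question, that is a one-line addition right after creating the logger (a sketch; the rest of the setup stays as it was):

logger = logging.getLogger()
logger.setLevel(logging.INFO)   # the logger, not just the handler, must let INFO through

# ... RotatingFileHandler setup as before ...
logger.info('INFO')             # now written to logtest.log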
I think you might have to set the correct threshold.
logger.setLevel(logging.INFO)
I got a logger using logging.getLogger(__name__). I tried setting the log level to logging.INFO as mentioned in other answers, but that didn't work.
A quick check on both the created logger and its parent (the root logger) showed that neither had any handlers (using hasHandlers()). The documentation states that a handler should have been created upon the first call to any of the logging functions debug(), info(), etc.:
The functions debug(), info(), warning(), error() and critical() will call basicConfig() automatically if no handlers are defined for the root logger.
But it did not. All I had to do was call basicConfig() manually.
Solution:
import logging
logging.basicConfig() # Add logging level here if you plan on using logging.info() instead of my_logger as below.
my_logger = logging.getLogger(__name__)
my_logger.setLevel(logging.INFO)
my_logger.info("Hi")
INFO:__main__:Hi
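If you prefer the module-level logging.info() calls instead of a named logger, the level can be passed to basicConfig() directly, since the root logger also defaults to WARNING (a small sketch):

import logging

logging.basicConfig(level=logging.INFO)  # configures the root logger's handler and threshold
logging.info("Hi")                       # INFO:root:Hi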
I have used the following code to get warnings to be logged:
import logging
logging.captureWarnings(True)
formatter = logging.Formatter('%(asctime)s\t%(levelname)s\t%(message)s')
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.DEBUG)
console_handler.setFormatter(formatter)
This works; however, my logging formatter is not applied, and the warnings come out looking like this:
WARNING:py.warnings:/home/joakim/.virtualenvs/masterload/local/lib/python2.7/site-packages/MySQL_python-1.2.3c1-py2.7-linux-x86_64.egg/MySQLdb/cursors.py:100: Warning:
InnoDB: ROW_FORMAT=DYNAMIC requires innodb_file_per_table.
instead of the expected format:
2012-11-12 18:19:44,421 INFO START updating products
How can I apply my normal formatting to captured warning messages?
You created a handler, but never configured the logging module to use it:
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.DEBUG)
console_handler.setFormatter(formatter)
You need to add this handler to a logger; the root logger for example:
logging.getLogger().addHandler(console_handler)
Alternatively, you can add the handler to the warnings logger only; the captureWarnings() documentation states that it uses py.warnings for captured warnings:
logging.getLogger('py.warnings').addHandler(console_handler)
Instead of creating a handler and formatter explicitly, you can also just call basicConfig() to configure the root logger:
logging.basicConfig(format='%(asctime)s\t%(levelname)s\t%(message)s', level=logging.DEBUG)
The above basic configuration is the moral equivalent of the handler configuration you set up.
logging.captureWarnings logs to a logger named py.warnings, so you need to add your handler to that logger:
import logging
logging.captureWarnings(True)
formatter = logging.Formatter('%(asctime)s\t%(levelname)s\t%(message)s')
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.DEBUG)
console_handler.setFormatter(formatter)
py_warnings_logger = logging.getLogger('py.warnings')
py_warnings_logger.addHandler(console_handler)
The documentation says: "If capture is True, warnings issued by the warnings module will be redirected to the logging system. Specifically, a warning will be formatted using warnings.formatwarning() and the resulting string logged to a logger named 'py.warnings' with a severity of WARNING."
Therefore I would try the following:
# get the 'py.warnings' logger
log = logging.getLogger('py.warnings')
# assign the handler to it
log.addHandler(console_handler)
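As a quick check, issuing a warning after this setup should come out through the py.warnings logger with your format applied (a sketch; the warning text and path in the output are illustrative):

import warnings

warnings.warn("something looks off")
# 2012-11-12 18:19:44,421	WARNING	.../yourscript.py:3: UserWarning: something looks off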