I just started looking into the logging module and created a dummy program to understand the logger, handler and formatter. Here is the code:
# logging_example.py
import logging
from datetime import datetime
import os
extension = datetime.now().strftime("%d-%b-%Y_%H_%M_%S_%p")
logfile = os.path.join("logs", f"demo_logging_{extension}.txt")
logger = logging.getLogger(__name__)
ch = logging.StreamHandler()
fh = logging.FileHandler(logfile)
ch.setLevel(logging.DEBUG)
fh.setLevel(logging.DEBUG)
formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
ch.setFormatter(formatter)
fh.setFormatter(formatter)
logger.addHandler(ch)
logger.addHandler(fh)
logger.info("Hello World")
When I execute the program, the logs directory has the files, but their content is empty and nothing gets printed on the screen either. I am pretty sure I am missing something basic, but I am not able to catch it :(.
I would appreciate any help.
Thank you
You have set a log level on the handlers but not on the logger. The handlers would have logged the message had the logger passed it on, but since the logger's threshold is higher (the root default of WARNING), the record was dropped before it ever reached them.
Set a log level on the logger as well. After adding the handlers:
logger.setLevel(logging.DEBUG)
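Putting it together, a minimal corrected sketch of the script above (same handlers and format, with the logger's own level set, and making sure the logs directory exists) would be:

import logging
import os
from datetime import datetime

extension = datetime.now().strftime("%d-%b-%Y_%H_%M_%S_%p")
os.makedirs("logs", exist_ok=True)          # make sure the directory exists
logfile = os.path.join("logs", f"demo_logging_{extension}.txt")

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)              # the missing piece: the logger's own threshold

ch = logging.StreamHandler()
fh = logging.FileHandler(logfile)
ch.setLevel(logging.DEBUG)
fh.setLevel(logging.DEBUG)

formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
ch.setFormatter(formatter)
fh.setFormatter(formatter)

logger.addHandler(ch)
logger.addHandler(fh)

logger.info("Hello World")                  # now reaches both the console and the file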
I have a script.py file in which I import the requests module. I have noticed that by using the root logger at the DEBUG level, I can log all GET requests, which is nice.
But I would like to have another log file for other messages at the INFO level. So I did this:
# root logger to log all GET requests
import logging
logging.basicConfig(filename="allrequests.log", level=logging.DEBUG,
                    format='%(asctime)s: %(levelname)s: %(message)s')
# second logger to log only INFO
formatter = logging.Formatter('%(asctime)s: %(levelname)s: %(message)s')
handler = logging.FileHandler('onlyinfo.log')
info_logger = logging.getLogger('second_logger')
info_logger.setLevel(logging.INFO)
info_logger.addHandler(handler)
The problem now is that whenever I use:
info_logger.info('my message')
it gets logged twice in the first log file and once in the second file.
It would be fine if it only logged INFO messages once in each file.
In other words, I would like to log all messages (DEBUG and INFO level) in one log file and only INFO messages in another log file. How can I do this?
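For what it's worth, one common way to get that routing (a sketch, not from the original thread; the filenames and format are reused from above) is to attach two file handlers with different levels to the root logger, so each record is handled once per handler and filtered by the handler's own level:

import logging

root = logging.getLogger()
root.setLevel(logging.DEBUG)                 # let everything through at the logger stage

fmt = logging.Formatter('%(asctime)s: %(levelname)s: %(message)s')

# handler that captures everything (DEBUG and above) into allrequests.log
all_handler = logging.FileHandler('allrequests.log')
all_handler.setLevel(logging.DEBUG)
all_handler.setFormatter(fmt)
root.addHandler(all_handler)

# handler that captures only INFO and above into onlyinfo.log
info_handler = logging.FileHandler('onlyinfo.log')
info_handler.setLevel(logging.INFO)
info_handler.setFormatter(fmt)
root.addHandler(info_handler)

root.info('my message')    # goes to both files
root.debug('details')      # goes only to allrequests.log

With this setup each record reaches the root logger once and each handler decides independently whether to write it, so nothing is duplicated. Note that the INFO handler also passes WARNING and ERROR records; keeping strictly INFO-only output would need an extra handler filter.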
I am using the logging module to record what happens on my Bokeh server, and I want to save the output to a file with a .log extension. But when I run the Bokeh server, the file is not created and I cannot save the operations to the .log file.
Part of the code I wrote is below.
Could it be that I am making a mistake in the code, or does the Bokeh server not work with logging?
import logging
LOG_FORMAT = "%(levelname)s %(asctime)s - %(message)s"
logging.basicConfig(filename="test.log",
                    level=logging.DEBUG,
                    format=LOG_FORMAT,
                    filemode="w")
logger = logging.getLogger()
When you use bokeh serve %some_python_file%, the Bokeh server starts right away, but your code is executed only when you actually open the URL that points to the Bokeh document that your code fills in.
bokeh serve configures logging by using logging.basicConfig as well, and calling this function again does not override anything - that's just how logging.basicConfig works.
Instead of using logging directly, you should just create and configure your own logger:
import logging

LOG_FORMAT = "%(levelname)s %(asctime)s - %(message)s"
file_handler = logging.FileHandler(filename='test.log', mode='w')
file_handler.setFormatter(logging.Formatter(LOG_FORMAT))
logger = logging.getLogger(__name__)
logger.addHandler(file_handler)
logger.setLevel(logging.DEBUG)
logger.info('Hello there')
Eugene's answer is correct: calling logging.basicConfig() a second time does not have any effect. However, if you are using Python >= 3.8, you can pass force=True, which removes all existing handlers attached to the root logger and sets up new ones. This practically means that your own logging.basicConfig() call will just work:
logging.basicConfig(..., force=True)
See the logging.basicConfig documentation for details on the force argument.
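A minimal sketch, assuming Python >= 3.8 and reusing the settings from the question:

import logging

# force=True removes the handlers that bokeh serve already installed via basicConfig
logging.basicConfig(filename="test.log",
                    level=logging.DEBUG,
                    format="%(levelname)s %(asctime)s - %(message)s",
                    filemode="w",
                    force=True)

logging.info("Hello there")    # now lands in test.log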
I don't know why it can't log that message; I think everything is set correctly.
And logging.DEBUG is defined in the logging module.
import logging
import sys
logger = logging.getLogger('collega_GUI')
handler = logging.StreamHandler(sys.stdout)
handler.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s %(levelname)s --file: %(module)s --riga: %(lineno)d, %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.debug('def __init__')
But if I try to run this one, it works:
logger.warning('def __init__')
Where is the problem with this level variable?
The problem is that the DEBUG message was filtered out by the logger before it ever got to the handler. It is fixed by changing handler.setLevel(logging.DEBUG) to logger.setLevel(logging.DEBUG).
You can filter by log level in several different places as a log message is passed down the chain. By default, a logger defers to the root logger, whose level is WARNING, so only WARNING and above get through, while handlers accept everything. Allowing handlers to use different log levels is useful if you want different levels of logging to go to different places. For example, you could set your logger to DEBUG and then create one handler that logs to the screen at WARNING and above, and another handler that logs to a file at DEBUG and above. The user gets a little info and the log file is chatty.
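A minimal sketch of that arrangement (the logger name and file name here are just examples):

import logging
import sys

logger = logging.getLogger("app")
logger.setLevel(logging.DEBUG)                 # the logger lets everything through

# terse console output: WARNING and above only
console = logging.StreamHandler(sys.stdout)
console.setLevel(logging.WARNING)
logger.addHandler(console)

# chatty log file: DEBUG and above
file_handler = logging.FileHandler("app.log")
file_handler.setLevel(logging.DEBUG)
logger.addHandler(file_handler)

logger.debug("only in the file")
logger.warning("on the screen and in the file")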
I am trying to use logging in Python to write some logs, but strangely, only the error gets logged and the info is ignored no matter which level I set.
Code:
import logging
import logging.handlers
if __name__ == "__main__":
    logger = logging.getLogger()
    fh = logging.handlers.RotatingFileHandler('./logtest.log', maxBytes=10240, backupCount=5)
    fh.setLevel(logging.DEBUG)  # no matter what level I set here
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    logger.addHandler(fh)
    logger.info('INFO')
    logger.error('ERROR')
The result is:
2014-01-14 11:47:38,990 - root - ERROR - ERROR
According to http://docs.python.org/2/library/logging.html#logging-levels, the INFO message should be logged too.
The problem is that the logger's level is still set to the default. So the logger discards the message before it even gets to the handlers. The fact that the handler would have accepted the message if it received it doesn't matter, because it never receives it.
So, just add this:
logger.setLevel(logging.INFO)
As the docs explain, the logger's default level is NOTSET, which means it checks with its parent, which is the root, which has a default of WARNING.
And you can probably leave the handler at its default of NOTSET, which means it defers to the logger's filtering.
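You can verify those defaults yourself; a small sketch (the logger name is arbitrary):

import logging

fresh = logging.getLogger("fresh_example")
print(fresh.level)                   # 0 == logging.NOTSET
print(fresh.getEffectiveLevel())     # 30 == logging.WARNING, inherited from the root logger
fresh.setLevel(logging.INFO)
print(fresh.getEffectiveLevel())     # 20 == logging.INFO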
I think you might have to set the correct threshold.
logger.setLevel(logging.INFO)
I got a logger using logging.getLogger(__name__). I tried setting the log level to logging.INFO as mentioned in other answers, but that didn't work.
A quick check on both the created logger and its parent (root) logger showed that neither had any handlers (using hasHandlers()). The documentation states that a handler should have been created upon the first call to any of the logging functions debug(), info(), etc.:
The functions debug(), info(), warning(), error() and critical() will
call basicConfig() automatically if no handlers are defined for the
root logger.
But it did not. All I had to do was call basicConfig() manually.
Solution:
import logging
logging.basicConfig() # Add logging level here if you plan on using logging.info() instead of my_logger as below.
my_logger = logging.getLogger(__name__)
my_logger.setLevel(logging.INFO)
my_logger.info("Hi")
Output:
INFO:__main__:Hi
I have used the following code to get warnings to be logged:
import logging
logging.captureWarnings(True)
formatter = logging.Formatter('%(asctime)s\t%(levelname)s\t%(message)s')
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.DEBUG)
console_handler.setFormatter(formatter)
This works; however, my logging formatter is not applied, and the warnings come out looking like this:
WARNING:py.warnings:/home/joakim/.virtualenvs/masterload/local/lib/python2.7/site-packages/MySQL_python-1.2.3c1-py2.7-linux-x86_64.egg/MySQLdb/cursors.py:100: Warning:
InnoDB: ROW_FORMAT=DYNAMIC requires innodb_file_per_table.
instead of the expected format:
2012-11-12 18:19:44,421 INFO START updating products
How can I apply my normal formatting to captured warning messages?
You created a handler, but never configured the logging module to use it:
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.DEBUG)
console_handler.setFormatter(formatter)
You need to add this handler to a logger; the root logger for example:
logging.getLogger().addHandler(console_handler)
Alternatively, you can add the handler to the warnings logger only; the captureWarnings() documentation states that it uses py.warnings for captured warnings:
logging.getLogger('py.warnings').addHandler(console_handler)
Instead of creating a handler and formatter explicitly, you can also just call basicConfig() to configure the root logger:
logging.basicConfig(format='%(asctime)s\t%(levelname)s\t%(message)s', level=logging.DEBUG)
The above basic configuration is the moral equivalent of the handler configuration you set up.
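For instance, a minimal end-to-end sketch of the basicConfig route (the warning text is made up):

import logging
import warnings

logging.basicConfig(format='%(asctime)s\t%(levelname)s\t%(message)s',
                    level=logging.DEBUG)
logging.captureWarnings(True)

# the warning is routed to the 'py.warnings' logger, which propagates to the
# root logger configured above, so it comes out with the timestamped format
warnings.warn("example warning message")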
logging.captureWarnings logs to a logger named py.warnings, so you need to add your handler to that logger:
import logging
logging.captureWarnings(True)
formatter = logging.Formatter('%(asctime)s\t%(levelname)s\t%(message)s')
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.DEBUG)
console_handler.setFormatter(formatter)
py_warnings_logger = logging.getLogger('py.warnings')
py_warnings_logger.addHandler(console_handler)
The documentation says that if capture is True, warnings issued by the warnings module will be redirected to the logging system; specifically, a warning will be formatted using warnings.formatwarning() and the resulting string logged to a logger named 'py.warnings' with a severity of WARNING.
Therefore I would try to:
# get the 'py.warnings' logger
log = logging.getLogger('py.warnings')
# assign the handler to it
log.addHandler(console_handler)