python logging: logger setLevel() is not enforced?

fmtter = logging.Formatter('%(asctime)s,%(msecs)05.1f (%(funcName)s) %(message)s', '%H:%M:%S')
rock_log = '%s/rock.log' % Build.path
hdlr = logging.FileHandler(rock_log, mode='w')
hdlr.setFormatter(fmtter)
hdlr.setLevel(logging.DEBUG)
rock_logger = logging.getLogger('rock')
rock_logger.addHandler(hdlr)
I have the above logger.
rock_logger.info("hi") doesn't print anything to the log, BUT
rock_logger.error("hi") DOES print to the log.
I think it has something to do with the level, but I specifically set it to logging.DEBUG.
Does anyone know what I am doing wrong?

In your case rock_logger is the logger and hdlr is the handler. When you try to log something, the logger first checks whether it should handle the message at all (depending on its own log level). Only then does it pass the message to the handler, which checks the level against its own log level and decides whether to write it to the file or not.
Your rock_logger never had its level set, so it falls back on the root logger's default of WARNING; that is why error() gets through while info() and debug() are dropped before they ever reach the handler.
rock_logger.setLevel(logging.DEBUG)
That should fix the issue.
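Putting the question's setup and the fix together, a minimal runnable sketch (the log path is simplified to the current directory for the demo):

import logging

fmtter = logging.Formatter('%(asctime)s,%(msecs)05.1f (%(funcName)s) %(message)s', '%H:%M:%S')
hdlr = logging.FileHandler('rock.log', mode='w')  # simplified path for the demo
hdlr.setFormatter(fmtter)
hdlr.setLevel(logging.DEBUG)         # the handler's threshold

rock_logger = logging.getLogger('rock')
rock_logger.setLevel(logging.DEBUG)  # the logger's threshold; without it, WARNING applies
rock_logger.addHandler(hdlr)

rock_logger.info("hi")               # now written to rock.log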

Related

Using the same logger for separate file outputs with python logging

I use the following code to write info logs to info.log and error logs to error.log:
import logging
logger_info = logging.getLogger('info')
logger_err = logging.getLogger('err')
logger_info.setLevel(logging.INFO)
logger_err.setLevel(logging.WARNING)
info_file_handler = logging.FileHandler('info.log')
error_file_handler = logging.FileHandler('error.log')
logger_info.addHandler(info_file_handler)
logger_err.addHandler(error_file_handler)
logger_info.info('info test')
logger_err.error('error test')
Right now I am using 2 loggers: logger_err and logger_info.
Can I merge those 2 loggers into 1 logger, so that logger_info.info() writes to info.log and logger_info.error() writes to error.log?
It is an uncommon setup, because logging normally processes every message at or above a threshold, but it is possible by using 2 handlers and a custom filter:
you attach a handler to the logger with a level of ERROR and make it write to the error.log file
you attach a second handler to the same logger with a level of INFO and make it write to the info.log file
you add a custom filter to that second handler to reject messages with a level higher than INFO
Demo:
import logging

class RevFilter:
    """A filter to reject messages ABOVE a maximum level."""
    def __init__(self, maxLev):
        self.maxLev = maxLev
    def filter(self, record):
        return record.levelno <= self.maxLev

hinf = logging.FileHandler('/path/to/info.log')
herr = logging.FileHandler('/path/to/error.log')
herr.setLevel(logging.ERROR)
hinf.setLevel(logging.INFO)
hinf.addFilter(RevFilter(logging.INFO))

logger = logging.getLogger(__name__)  # any logger name works here
logger.addHandler(hinf)
logger.addHandler(herr)
logger.setLevel(logging.INFO)  # or lower of course...
From that point on, error.log will receive messages sent by logger at a level of ERROR or above, and info.log will only receive messages at exactly INFO level, neither higher nor lower.
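A quick check of that behaviour under the demo above (assuming the handlers and filter are attached as shown):

logger.debug('dropped: below the logger level')    # never reaches the handlers
logger.info('written to info.log only')            # passes hinf; below herr's ERROR
logger.warning('dropped by both handlers')         # above the filter cap, below ERROR
logger.error('written to error.log only')          # rejected by the filter; passes herr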
I'm not sure I understand what you're looking to achieve, but if you simply want to log both types of messages to the same file, you can just specify the same output file when you create the two FileHandlers:
import logging
logger_info = logging.getLogger('info')
logger_err = logging.getLogger('err')
logger_info.setLevel(logging.INFO)
logger_err.setLevel(logging.WARNING)
info_file_handler = logging.FileHandler('combined.log')
error_file_handler = logging.FileHandler('combined.log')
logger_info.addHandler(info_file_handler)
logger_err.addHandler(error_file_handler)
logger_info.info('info test')
logger_err.error('error test')
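If both record streams really should land in one file, a single logger with a single handler does the same with only one open file handle; a minimal sketch, not taken from the answer above:

import logging

logger = logging.getLogger('combined')
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler('combined.log'))

logger.info('info test')    # both records end up in combined.log
logger.error('error test')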

Prevent Python logger from printing to console

I'm getting mad at Python's logging module, because I really have no idea anymore why the logger is printing the log messages to the console (at DEBUG level, even though I set my FileHandler to INFO). The log file is produced correctly.
But I don't want any logger information on the console.
Here is my configuration for the logger:
template_name = "testing"
fh = logging.FileHandler(filename="testing.log")
fr = logging.Formatter("%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s")
fh.setFormatter(fr)
fh.setLevel(logging.INFO)
logger = logging.getLogger(template_name)
# logger.propagate = False # this helps nothing
logger.addHandler(fh)
Would be nice if anybody could help me out :)
I found this question as I encountered a similar issue after I had removed logging.basicConfig(); the logger started printing all logs to the console.
In my case, I needed to change the filename on every run, to save the log file to a different directory each time. Adding the basic config on top (even if the initial filename is never used) solved the issue for me. (I can't explain why, sorry.)
What helped me was adding the basic configuration in the beginning:
logging.basicConfig(filename=filename,
                    format='%(levelname)s - %(asctime)s - %(name)s - %(message)s',
                    filemode='w',
                    level=logging.INFO)
Then changing the filename by adding a handler in each run:
file_handler = logging.FileHandler(path_log + f'/log_run_{c_run}.log')
formatter = logging.Formatter('%(asctime)s : %(levelname)s : %(name)s : %(message)s')
file_handler.setFormatter(formatter)
logger_TS.addHandler(file_handler)
Also a curious side note: if I don't set the formatter (file_handler.setFormatter(formatter)) before adding the handler with the new filename, the initially configured formatting (levelname, time, etc.) is missing from the log files.
So the key is to call logging.basicConfig first, then add the handler, as @StressedBoi69420 indicated above.
Hope that helps a bit.
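Pieced together, that approach might look like the sketch below; filename, path_log, c_run and logger_TS come from the snippets above, and the concrete values here are assumptions for the demo:

import logging

# 1) basicConfig first: it installs a file handler and a level on the root logger
logging.basicConfig(filename='initial.log',  # possibly never used, but must come first
                    format='%(levelname)s - %(asctime)s - %(name)s - %(message)s',
                    filemode='w',
                    level=logging.INFO)

logger_TS = logging.getLogger('TS')  # assumed name for the demo

# 2) then attach a per-run file handler with its own formatter
c_run = 1  # stand-in for the run counter
file_handler = logging.FileHandler(f'log_run_{c_run}.log')
file_handler.setFormatter(logging.Formatter('%(asctime)s : %(levelname)s : %(name)s : %(message)s'))
logger_TS.addHandler(file_handler)

logger_TS.info('recorded in both initial.log and log_run_1.log')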
You should be able to add a StreamHandler to handle stdout and set the handler's log level to a level above 50 (the standard log levels are all 50 and below).
Example of how I'd do it...
import logging
import sys
console_log_level = 100
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s",
                    filename="testing.log",
                    filemode="w")
console = logging.StreamHandler(sys.stdout)
console.setLevel(console_log_level)
root_logger = logging.getLogger("")
root_logger.addHandler(console)
logging.debug("debug log message")
logging.info("info log message")
logging.warning("warning log message")
logging.error("error log message")
logging.critical("critical log message")
Contents of testing.log...
2019-11-21 12:53:02,426,426 root INFO info log message
2019-11-21 12:53:02,426,426 root WARNING warning log message
2019-11-21 12:53:02,426,426 root ERROR error log message
2019-11-21 12:53:02,426,426 root CRITICAL critical log message
Note: The only reason I have the console_log_level variable is because I pulled most of this code from a default function that I use that will set the console log level based on an argument value. That way if I want to make the script "quiet", I can change the log level based on a command line arg to the script.
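As a sketch of that pattern, the console level can be derived from a command-line flag; the --quiet flag and the chosen levels are assumptions, not part of the answer above:

import argparse
import logging
import sys

parser = argparse.ArgumentParser()
parser.add_argument('--quiet', action='store_true', help='suppress console output')
args = parser.parse_args()

# 100 is above CRITICAL (50), so no record ever reaches the console when quiet
console_log_level = 100 if args.quiet else logging.INFO

console = logging.StreamHandler(sys.stdout)
console.setLevel(console_log_level)
logging.getLogger("").addHandler(console)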

Flask + sqlalchemy advanced logging

I found a few other posts on this, but none that worked for me yet, so I wanted to reach out and see if anyone could explain how to properly get / redirect / set handlers on some of the loggers present in Flask / Werkzeug / SQLAlchemy.
Prior research that could not answer my question:
https://github.com/pallets/flask/issues/1359
http://flask.pocoo.org/docs/dev/logging/
https://gist.github.com/ibeex/3257877
My configurations:
main.py
...
def init_app():
    """ Runs prior to app launching, contains initialization code """
    # set logging level
    if not os.path.exists(settings.LOG_DIR):
        os.makedirs(settings.LOG_DIR)
    # default level
    log_level = logging.CRITICAL
    if settings.ENV == 'DEV':
        log_level = logging.DEBUG
    elif settings.ENV == 'TEST':
        log_level = logging.WARNING
    elif settings.ENV == 'PROD':
        log_level = logging.ERROR
    log_formatter = logging.Formatter("[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s")
    api_logger = logging.getLogger()
    api_handler = TimedRotatingFileHandler(
        settings.API_LOG_FILE,
        when='midnight',
        backupCount=10
    )
    api_handler.setLevel(log_level)
    api_handler.setFormatter(log_formatter)
    api_logger.addHandler(api_handler)
    logging.getLogger('werkzeug').addHandler(api_handler)
    db_logger = logging.getLogger('sqlalchemy')
    db_handler = TimedRotatingFileHandler(
        settings.DB_LOG_FILE,
        when='midnight',
        backupCount=10
    )
    db_handler.setLevel(log_level)
    db_handler.setFormatter(log_formatter)
    db_logger.addHandler(db_handler)
    logging.getLogger('sqlalchemy.engine').addHandler(db_handler)
    logging.getLogger('sqlalchemy.dialects').addHandler(db_handler)
    logging.getLogger('sqlalchemy.pool').addHandler(db_handler)
    logging.getLogger('sqlalchemy.orm').addHandler(db_handler)
    # add endpoints
    ...

if __name__ == '__main__':
    init_app()
    app.run(host='0.0.0.0', port=7777)
I tried grabbing and changing settings on the loggers a few different ways, but I still end up with the werkzeug debug output going to the console and not to my logs. I can see the log files are being created, but it doesn't look like the loggers are actually writing to them:
api.log (formatter wrote to it)
[2018-02-15 12:03:03,944] {/usr/local/lib/python3.5/dist-packages/werkzeug/_internal.py:88} WARNING - * Debugger is active!
db.log (empty)
Any insight on this would be much appreciated!
UPDATE
I was able to get the werkzeug logger working using the long-hand version; it seems the shorthand function calls shown were returning null objects. The sqlalchemy logger is still outputting to the console, though. Could the engine configuration be overriding my file handler?
main.py
...
# close current file handlers
for handler in copy(logging.getLogger().handlers):
    logging.getLogger().removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('werkzeug').handlers):
    logging.getLogger('werkzeug').removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('sqlalchemy.engine').handlers):
    logging.getLogger('sqlalchemy.engine').removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('sqlalchemy.dialects').handlers):
    logging.getLogger('sqlalchemy.dialects').removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('sqlalchemy.pool').handlers):
    logging.getLogger('sqlalchemy.pool').removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('sqlalchemy.orm').handlers):
    logging.getLogger('sqlalchemy.orm').removeHandler(handler)
    handler.close()

# create our own custom handlers
log_formatter = logging.Formatter("[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s")
api_handler = TimedRotatingFileHandler(
    settings.API_LOG_FILE,
    when='midnight',
    backupCount=10
)
api_handler.setLevel(log_level)
api_handler.setFormatter(log_formatter)
logging.getLogger().setLevel(log_level)
logging.getLogger().addHandler(api_handler)
logging.getLogger('werkzeug').setLevel(log_level)
logging.getLogger('werkzeug').addHandler(api_handler)
db_handler = TimedRotatingFileHandler(
    settings.DB_LOG_FILE,
    when='midnight',
    backupCount=10
)
db_handler.setLevel(log_level)
db_handler.setFormatter(log_formatter)
logging.getLogger('sqlalchemy.engine').addHandler(db_handler)
logging.getLogger('sqlalchemy.engine').setLevel(log_level)
logging.getLogger('sqlalchemy.dialects').addHandler(db_handler)
logging.getLogger('sqlalchemy.dialects').setLevel(log_level)
logging.getLogger('sqlalchemy.pool').addHandler(db_handler)
logging.getLogger('sqlalchemy.pool').setLevel(log_level)
logging.getLogger('sqlalchemy.orm').addHandler(db_handler)
logging.getLogger('sqlalchemy.orm').setLevel(log_level)
database.py
...
engine = create_engine(getDBURI(), echo="debug", echo_pool=True, pool_recycle=10)
ANSWER
For future reference, if anyone runs into this issue: the sqlalchemy engine configuration echo=True|'debug' will OVERRIDE your loggers. I fixed the issue by changing my engine configuration to:
engine = create_engine(getDBURI(), echo_pool=True, pool_recycle=10)
And then everything worked like a charm. Cheers! :D
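For context, SQLAlchemy documents the echo flags as shorthand for configuring the sqlalchemy loggers, so the same output can be requested through the logging hierarchy instead, where it goes through whatever handlers you attached; a sketch reusing getDBURI from the question:

import logging
from sqlalchemy import create_engine

# roughly the logging side of echo='debug', minus SQLAlchemy's own default
# StreamHandler, so the records flow to the db_handler configured earlier
logging.getLogger('sqlalchemy.engine').setLevel(logging.DEBUG)

engine = create_engine(getDBURI(), echo_pool=True, pool_recycle=10)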
As I understand it, your file-based log configuration for werkzeug is actually working => it outputs into api.log.
The db log handler is also working (the file gets created etc.), but there is no output.
This is probably because those loggers default to an effective level of WARNING, so nothing below that is recorded. You need to set them manually to a lower level, like this:
logging.getLogger('sqlalchemy.engine').setLevel(logging.DEBUG)
logging.getLogger('sqlalchemy.dialects').setLevel(logging.DEBUG)
logging.getLogger('sqlalchemy.pool').setLevel(logging.DEBUG)
logging.getLogger('sqlalchemy.orm').setLevel(logging.DEBUG)
That werkzeug is still outputting to the console is probably because there is always a root logger defined. Before you add your new handlers, you should do the following to remove all existing log handlers:
for handler in copy(logging.getLogger().handlers):
    logging.getLogger().removeHandler(handler)
    handler.close()  # clean up used file handles
Then you can also assign your app log handler as the root log handler with
logging.getLogger().addHandler(api_handler)
If it's not the root logger but just the werkzeug logger that has a default console handler defined, you can also just remove all handlers from the werkzeug logger before adding yours, like this:
for handler in copy(logging.getLogger('werkzeug').handlers):
    logging.getLogger('werkzeug').removeHandler(handler)
    handler.close()  # clean up used file handles
logging.getLogger('werkzeug').addHandler(api_handler)

strange behavior of logger

I encountered a problem I couldn't solve. I use Python's logger to log info, with the logger level set to logging.DEBUG, and gunicorn logs at the same time. Normally, error messages go to Python's logger, and the link messages and other messages written by logger.info or logger.debug go to gunicorn's log file. However, with one application it doesn't behave that way: the messages output by logger.info also go to Python's logger. The problem is, I only want to see error messages in Python's logger; all the other messages should go to gunicorn's logger. Can anyone give me a clue where I might have gone wrong in this situation?
Thanks in advance,
alex
The following is my config:
LOGGER_LEVEL = logging.DEBUG
LOGGER_ROOT_NAME = "root"
LOGGER_ROOT_HANDLERS = [logging.StreamHandler, logging.FileHandler]
LOGGER_ROOT_LEVEL = LOGGER_LEVEL
LOGGER_ROOT_FORMAT = "[%(asctime)s %(levelname)s %(name)s %(funcName)s:%(lineno)d] %(message)s"
LOGGER_LEVEL = logging.ERROR
LOGGER_FILE_PATH = "/data/log/web/"
Code:
def config_root_logger(self):
    formatter = logging.Formatter(self.config.LOGGER_ROOT_FORMAT)
    logger = logging.getLogger()
    logger.setLevel(self.config.LOGGER_ROOT_LEVEL)
    filename = os.path.join(self.config.LOGGER_FILE_PATH, "secondordersrv.log")
    handler = logging.FileHandler(filename)
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    # in the test environment, additionally log to the console
    self._add_test_handler(logger, formatter)

def _add_test_handler(self, logger, formatter):
    # in the test environment, additionally log to the console
    if self.config.RUN_MODE == 'test':
        handler = logging.StreamHandler()
        handler.setFormatter(formatter)
        logger.addHandler(handler)
My gunicorn config looks like this:
errorlog = '/data/log/web/%s.log' % APP_NAME
loglevel = 'info'
accesslog = '-'
You did not set the level of your handler.
After handler.setFormatter(formatter), add the following line:
handler.setLevel(self.config.LOGGER_LEVEL)
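In context, the corrected _add_test_handler might read as follows; a sketch based on the config above, where LOGGER_LEVEL ends up as logging.ERROR, so the console handler only lets errors through while the file handler keeps receiving everything:

def _add_test_handler(self, logger, formatter):
    # in the test environment, additionally log to the console
    if self.config.RUN_MODE == 'test':
        handler = logging.StreamHandler()
        handler.setFormatter(formatter)
        handler.setLevel(self.config.LOGGER_LEVEL)  # logging.ERROR: console shows errors only
        logger.addHandler(handler)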

Python logging caches?

I have a function in my python package, which returns a logger:
import logging

def get_logger(logger_name, log_level='DEBUG'):
    """Setup and return a logger."""
    log = logging.getLogger(logger_name)
    log.setLevel(log_level)
    formatter = logging.Formatter('[%(levelname)s] %(message)s')
    handler = logging.StreamHandler()
    handler.setFormatter(formatter)
    log.addHandler(handler)
    return log
I use this logger in my modules and submodules:
from tgtk.logger import get_logger
log = get_logger(__name__, 'DEBUG')
so I can use it via
log.debug("Debug message")
log.info("Info message")
etc. This has worked perfectly so far, but today I encountered a weird problem: on one machine there is no output at all. Every log.xxx call is simply "ignored", whatever level is set. I had some similar issues in the past, and I remember that it somehow started working again after I renamed the logger, but this time that does not help.
Is there any caching or what is going on? The scripts are exactly the same (synced over SVN).
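One fact worth knowing here: logging.getLogger() does cache loggers by name for the lifetime of the process, so repeated get_logger() calls with the same name return the same object and stack an extra StreamHandler onto it each time. A small sketch of that behaviour (the logger name is made up):

import logging

a = logging.getLogger('tgtk.example')
b = logging.getLogger('tgtk.example')
print(a is b)           # True: the same cached object is returned for the same name

a.addHandler(logging.StreamHandler())
a.addHandler(logging.StreamHandler())
print(len(a.handlers))  # 2: handlers accumulate, so every record is emitted twice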
