strange behavior of logger - python

I've run into a problem I can't solve. I use Python's logger to log messages, with the logger level set to logging.DEBUG, and I log from gunicorn at the same time. Normally, error messages go to Python's logger, while access messages and everything written via logger.info or logger.debug go to gunicorn's log file. With one application, however, it doesn't behave that way: messages written by logger.info also end up in Python's logger. The problem is, I only want to see error messages in Python's logger; all the other messages should appear in gunicorn's log. Can anyone give me a clue where I might have gone wrong?
thx in advance,
alex
The following is my config:
LOGGER_LEVEL = logging.DEBUG
LOGGER_ROOT_NAME = "root"
LOGGER_ROOT_HANLDERS = [logging.StreamHandler, logging.FileHandler]
LOGGER_ROOT_LEVEL = LOGGER_LEVEL
LOGGER_ROOT_FORMAT = "[%(asctime)s %(levelname)s %(name)s %(funcName)s:%(lineno)d] %(message)s"
LOGGER_LEVEL = logging.ERROR
LOGGER_FILE_PATH = "/data/log/web/"
Code:
def config_root_logger(self):
    formatter = logging.Formatter(self.config.LOGGER_ROOT_FORMAT)
    logger = logging.getLogger()
    logger.setLevel(self.config.LOGGER_ROOT_LEVEL)
    filename = os.path.join(self.config.LOGGER_FILE_PATH, "secondordersrv.log")
    handler = logging.FileHandler(filename)
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    # In the test environment, also log to the console
    self._add_test_handler(logger, formatter)

def _add_test_handler(self, logger, formatter):
    # In the test environment, also log to the console
    if self.config.RUN_MODE == 'test':
        handler = logging.StreamHandler()
        handler.setFormatter(formatter)
        logger.addHandler(handler)
My gunicorn config looks like this:
errorlog = '/data/log/web/%s.log' % APP_NAME
loglevel = 'info'
accesslog = '-'

You did not set the level of your handler.
After handler.setFormatter(formatter), add the following line:
handler.setLevel(self.config.LOGGER_LEVEL)
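A handler created without an explicit level defaults to NOTSET (0), so it emits every record its logger passes along. A minimal, self-contained sketch (independent of the config above) showing how a handler-level threshold keeps INFO out of one handler while another still receives it:

```python
import logging

logger = logging.getLogger("handler_level_demo")
logger.setLevel(logging.DEBUG)

# No explicit level: defaults to NOTSET (0), emits everything the logger passes
console = logging.StreamHandler()

# Restricted to ERROR and above: INFO/DEBUG records are dropped by this handler
errors_only = logging.StreamHandler()
errors_only.setLevel(logging.ERROR)

logger.addHandler(console)
logger.addHandler(errors_only)

logger.info("printed once (console handler only)")
logger.error("printed twice (both handlers)")
```

This is why setting the logger level alone is not enough: the level check happens twice, once on the logger and once on each handler.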

Related

How to use multi logger in Python

I'm trying to use two different loggers to handle different log levels. For example, I want INFO messages stored in a file without logging ERROR messages there, while error messages from certain special functions are emailed to the user.
I wrote a simple program to test the logging module.
Code:
import logging
def_logger = logging.getLogger("debuglogger")
def_logger.setLevel(logging.DEBUG)
maillogger = logging.getLogger("mail")
maillogger.setLevel(logging.ERROR)
mailhandler = logging.StreamHandler()
mailhandler.setLevel(logging.ERROR)
mailhandler.setFormatter(logging.Formatter('Error: %(asctime)s - %(name)s - %(levelname)s - %(message)s'))
maillogger.addHandler(mailhandler)
print(def_logger.getEffectiveLevel())
print(maillogger.getEffectiveLevel())
def_logger.info("info 1")
maillogger.info("info 2")
def_logger.error("error 1")
maillogger.error("error 2")
Output:
(screenshot of the console output: the printed levels are correct, but only the two error messages appear)
I can see that their levels are correct, but both of them act as if the level were ERROR.
How can I correctly configure them?
Answer: Based on blues's advice, I added a handler and that solved my problem.
Here is the modified code:
import logging
def_logger = logging.getLogger("debuglogger")
def_logger.setLevel(logging.DEBUG)
def_logger.addHandler(logging.StreamHandler()) #added a handler here
maillogger = logging.getLogger("mail")
maillogger.setLevel(logging.ERROR)
mailhandler = logging.StreamHandler()
mailhandler.setLevel(logging.ERROR)
mailhandler.setFormatter(logging.Formatter('Error: %(asctime)s - %(name)s - %(levelname)s - %(message)s'))
maillogger.addHandler(mailhandler)
print(def_logger.getEffectiveLevel())
print(maillogger.getEffectiveLevel())
def_logger.info("info 1")
maillogger.info("info 2")
def_logger.error("error 1")
maillogger.error("error 2")
Neither def_logger nor any of its parents has a handler attached. So what happens is that the logging module falls back to logging.lastResort, which by default is a StreamHandler with level WARNING. That is why the info message doesn't appear while the error does. To solve your problem, attach a handler to def_logger.
Note: In your scenario the only parent both loggers have is the root logger.
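The lastResort fallback described above can be observed directly; a small sketch illustrating the behavior:

```python
import logging

# lastResort is a stderr handler with level WARNING, used only when a record
# reaches a logger hierarchy that has no handlers at all
print(logging.lastResort.level == logging.WARNING)  # True

orphan = logging.getLogger("no_handler_demo")
orphan.setLevel(logging.DEBUG)

orphan.info("swallowed: below lastResort's WARNING threshold")
orphan.error("printed to stderr by lastResort")
```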
You could add a filter to each logger or handler to process only the records of interest:
import logging

def filter_info(record):
    return record.levelno == logging.INFO

def filter_error(record):
    return record.levelno >= logging.ERROR

# define debug logger
def_logger = logging.getLogger("debuglogger")
def_logger.setLevel(logging.DEBUG)
def_logger.addFilter(filter_info)  # add filter directly to this logger since you didn't define any handler

# define handler for mail logger
mail_handler = logging.StreamHandler()
mail_handler.setLevel(logging.ERROR)
mail_handler.setFormatter(logging.Formatter('Error: %(asctime)s - %(name)s - %(levelname)s - %(message)s'))
mail_handler.addFilter(filter_error)  # add filter to handler
mail_logger = logging.getLogger("mail")
mail_logger.setLevel(logging.ERROR)
mail_logger.addHandler(mail_handler)

# test
def_logger.info("info 1")
mail_logger.info("info 2")
def_logger.error("error 1")
mail_logger.error("error 2")
If the filter returns True, the log record is processed; otherwise it is skipped.
Note:
A filter attached to a logger is not called for records generated by descendant loggers. For example, if you add a filter to logger A, it won't be called for records generated by logger A.B or A.B.C.
A filter attached to a handler is consulted before an event is emitted by that handler.
This means you only need one logger, with two handlers that have different filters attached:
import logging

def filter_info(record):
    return record.levelno == logging.INFO

def filter_error(record):
    return record.levelno >= logging.ERROR

# define your logger
logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)

# define handler for file
file_handler = logging.FileHandler('path_to_log.txt')
file_handler.setLevel(logging.INFO)
file_handler.addFilter(filter_info)  # add filter to handler

# define handler for mail
mail_handler = logging.StreamHandler()
mail_handler.setLevel(logging.ERROR)
mail_handler.setFormatter(logging.Formatter('Error: %(asctime)s - %(name)s - %(levelname)s - %(message)s'))
mail_handler.addFilter(filter_error)  # add filter to handler

logger.addHandler(file_handler)
logger.addHandler(mail_handler)

# test
logger.info("info 1")
logger.info("info 2")
logger.error("error 1")
logger.error("error 2")
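The note about logger-level filters not applying to descendant loggers is easy to verify; a small sketch (with a hypothetical tracing filter) that records which logger names the filter actually sees:

```python
import logging

calls = []

def tracing_filter(record):
    # Remember every logger name this filter is consulted for
    calls.append(record.name)
    return True

parent = logging.getLogger("A")
parent.addFilter(tracing_filter)
parent.addHandler(logging.NullHandler())

child = logging.getLogger("A.B")

parent.warning("the filter sees this record")
child.warning("the filter does NOT see this, even though it propagates to A's handlers")

print(calls)  # ['A'] — only the record logged directly on A
```

To intercept propagated records as well, attach the filter to the parent's handler instead of the parent logger.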

Prevent Python logger from printing to console

I'm getting mad at the logging module from Python, because I really have no idea anymore why the logger is printing the log messages to the console (at DEBUG level, even though I set my FileHandler to INFO). The log file is produced correctly, but I don't want any logger output on the console.
Here is my configuration for the logger:
import logging

template_name = "testing"
fh = logging.FileHandler(filename="testing.log")
fr = logging.Formatter("%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s")
fh.setFormatter(fr)
fh.setLevel(logging.INFO)
logger = logging.getLogger(template_name)
# logger.propagate = False  # this doesn't help
logger.addHandler(fh)
Would be nice if anybody could help me out :)
I found this question because I ran into a similar issue after removing my logging.basicConfig() call: the logger started printing all logs to the console.
In my case, I needed to change the filename on every run so the log file would be saved to a different directory each time. Adding the basic config at the top (even though the initial filename is never used) solved the issue for me (I can't explain why, sorry).
What helped me was adding the basic configuration at the beginning:
logging.basicConfig(filename=filename,
                    format='%(levelname)s - %(asctime)s - %(name)s - %(message)s',
                    filemode='w',
                    level=logging.INFO)
Then changing the filename by adding a handler in each run:
file_handler = logging.FileHandler(path_log + f'/log_run_{c_run}.log')
formatter = logging.Formatter('%(asctime)s : %(levelname)s : %(name)s : %(message)s')
file_handler.setFormatter(formatter)
logger_TS.addHandler(file_handler)
Also, a curious side note: if I don't set the formatter (file_handler.setFormatter(formatter)) before adding the handler with the new filename, the originally formatted fields (levelname, time, etc.) are missing in the log files.
So the key is to call logging.basicConfig first, then add the handler, as @StressedBoi69420 indicated above.
Hope that helps a bit.
You should be able to add a StreamHandler to handle stdout and set the handlers log level to a level above 50. (Standard log levels are 50 and below.)
Example of how I'd do it...
import logging
import sys

console_log_level = 100
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s",
                    filename="testing.log",
                    filemode="w")
console = logging.StreamHandler(sys.stdout)
console.setLevel(console_log_level)
root_logger = logging.getLogger("")
root_logger.addHandler(console)
logging.debug("debug log message")
logging.info("info log message")
logging.warning("warning log message")
logging.error("error log message")
logging.critical("critical log message")
Contents of testing.log...
2019-11-21 12:53:02,426,426 root INFO info log message
2019-11-21 12:53:02,426,426 root WARNING warning log message
2019-11-21 12:53:02,426,426 root ERROR error log message
2019-11-21 12:53:02,426,426 root CRITICAL critical log message
Note: The only reason I have the console_log_level variable is because I pulled most of this code from a default function that I use that will set the console log level based on an argument value. That way if I want to make the script "quiet", I can change the log level based on a command line arg to the script.
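Another way to keep records off the console, sketched under the assumption that the named logger gets its own file handler: disable propagation so the record never reaches the root logger's stream handlers. (Note that propagate = False only helps if the logger itself has at least one handler; otherwise logging.lastResort writes to stderr anyway, which may be why it appeared to "help nothing" in the question.)

```python
import logging

logger = logging.getLogger("testing")
logger.setLevel(logging.INFO)
logger.propagate = False  # records stop here and never reach root's console handlers

fh = logging.FileHandler("testing.log")
fh.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))
logger.addHandler(fh)

logger.info("goes to testing.log only")
```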

Flask + sqlalchemy advanced logging

I found a few other posts on this, but none of them worked for me, so I wanted to reach out and see if anyone could explain how to properly get / redirect / set handlers on some of the loggers present in Flask / Werkzeug / SQLAlchemy.
Research prior that could not answer my question:
https://github.com/pallets/flask/issues/1359
http://flask.pocoo.org/docs/dev/logging/
https://gist.github.com/ibeex/3257877
My configurations:
main.py
...
def init_app():
    """ Runs prior to app launching, contains initialization code """
    # set logging level
    if not os.path.exists(settings.LOG_DIR):
        os.makedirs(settings.LOG_DIR)
    # default level
    log_level = logging.CRITICAL
    if settings.ENV == 'DEV':
        log_level = logging.DEBUG
    elif settings.ENV == 'TEST':
        log_level = logging.WARNING
    elif settings.ENV == 'PROD':
        log_level = logging.ERROR
    log_formatter = logging.Formatter("[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s")
    api_logger = logging.getLogger()
    api_handler = TimedRotatingFileHandler(
        settings.API_LOG_FILE,
        when='midnight',
        backupCount=10
    )
    api_handler.setLevel(log_level)
    api_handler.setFormatter(log_formatter)
    api_logger.addHandler(api_handler)
    logging.getLogger('werkzeug').addHandler(api_handler)
    db_logger = logging.getLogger('sqlalchemy')
    db_handler = TimedRotatingFileHandler(
        settings.DB_LOG_FILE,
        when='midnight',
        backupCount=10
    )
    db_handler.setLevel(log_level)
    db_handler.setFormatter(log_formatter)
    db_logger.addHandler(db_handler)
    logging.getLogger('sqlalchemy.engine').addHandler(db_handler)
    logging.getLogger('sqlalchemy.dialects').addHandler(db_handler)
    logging.getLogger('sqlalchemy.pool').addHandler(db_handler)
    logging.getLogger('sqlalchemy.orm').addHandler(db_handler)
    # add endpoints
    ...

if __name__ == '__main__':
    init_app()
    app.run(host='0.0.0.0', port=7777)
I tried grabbing and changing settings on these loggers a few different ways, but I still end up with the werkzeug debug output going to the console instead of my logs. I can see the log files are being created, but it doesn't look like the loggers are actually writing to them:
api.log (formatter wrote to it)
2018-02-15 12:03:03,944] {/usr/local/lib/python3.5/dist-packages/werkzeug/_internal.py:88} WARNING - * Debugger is active!
db.log (empty)
Any insight on this would be much appreciated!
UPDATE
I was able to get the werkzeug logger working using the long-hand version; it seems the shorthand function calls shown above were returning null objects. The sqlalchemy logger is still outputting to the console, though. Could the engine configuration be overriding my file handler?
main.py
...
# close current file handlers
for handler in copy(logging.getLogger().handlers):
    logging.getLogger().removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('werkzeug').handlers):
    logging.getLogger('werkzeug').removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('sqlalchemy.engine').handlers):
    logging.getLogger('sqlalchemy.engine').removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('sqlalchemy.dialects').handlers):
    logging.getLogger('sqlalchemy.dialects').removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('sqlalchemy.pool').handlers):
    logging.getLogger('sqlalchemy.pool').removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('sqlalchemy.orm').handlers):
    logging.getLogger('sqlalchemy.orm').removeHandler(handler)
    handler.close()

# create our own custom handlers
log_formatter = logging.Formatter("[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s")
api_handler = TimedRotatingFileHandler(
    settings.API_LOG_FILE,
    when='midnight',
    backupCount=10
)
api_handler.setLevel(log_level)
api_handler.setFormatter(log_formatter)
logging.getLogger().setLevel(log_level)
logging.getLogger().addHandler(api_handler)
logging.getLogger('werkzeug').setLevel(log_level)
logging.getLogger('werkzeug').addHandler(api_handler)
db_handler = TimedRotatingFileHandler(
    settings.DB_LOG_FILE,
    when='midnight',
    backupCount=10
)
db_handler.setLevel(log_level)
db_handler.setFormatter(log_formatter)
logging.getLogger('sqlalchemy.engine').addHandler(db_handler)
logging.getLogger('sqlalchemy.engine').setLevel(log_level)
logging.getLogger('sqlalchemy.dialects').addHandler(db_handler)
logging.getLogger('sqlalchemy.dialects').setLevel(log_level)
logging.getLogger('sqlalchemy.pool').addHandler(db_handler)
logging.getLogger('sqlalchemy.pool').setLevel(log_level)
logging.getLogger('sqlalchemy.orm').addHandler(db_handler)
logging.getLogger('sqlalchemy.orm').setLevel(log_level)
database.py
...
engine = create_engine(getDBURI(), echo="debug", echo_pool=True, pool_recycle=10)
ANSWER
For future reference, if anyone runs into this issue: the SQLAlchemy engine configuration echo=True|'debug' will OVERRIDE your loggers. I fixed the issue by changing my engine configuration to:
engine = create_engine(getDBURI(), echo_pool=True, pool_recycle=10)
And then everything worked like a charm. Cheers! :D
As I understand it, your file-based log configuration for werkzeug is actually working => it outputs into api.log.
The db log handler is also working (the file gets created etc.), but there is no output.
This is probably due to the log level of those loggers being ERROR by default. You need to set them manually to a lower level, like this:
logging.getLogger('sqlalchemy.engine').setLevel(logging.DEBUG)
logging.getLogger('sqlalchemy.dialects').setLevel(logging.DEBUG)
logging.getLogger('sqlalchemy.pool').setLevel(logging.DEBUG)
logging.getLogger('sqlalchemy.orm').setLevel(logging.DEBUG)
That werkzeug is still outputting to the console is probably because there is always a root logger defined. Before you add your new handlers, you should do the following to remove all existing log handlers:
for handler in copy(logging.getLogger().handlers):
    logging.getLogger().removeHandler(handler)
    handler.close()  # clean up used file handles
Then you can also assign your app log handler as the root log handler with:
logging.getLogger().addHandler(api_handler)
If it's not the root logger but just the werkzeug logger that has a default console handler defined, you can also just remove all handlers from the werkzeug logger before adding yours, like this:
for handler in copy(logging.getLogger('werkzeug').handlers):
    logging.getLogger('werkzeug').removeHandler(handler)
    handler.close()  # clean up used file handles
logging.getLogger('werkzeug').addHandler(api_handler)

Python logging setLevel() not taking effect

I have a python program that utilizes multiprocessing to increase efficiency, and a function that creates a logger for each process. The logger function looks like this:
import logging
import os

def create_logger(app_name):
    """Create a logging interface"""
    # create a logger
    if logging in os.environ:
        logging_string = os.environ["logging"]
        if logging_string == "DEBUG":
            logging_level = loggin.DEBUG
        else if logging_string == "INFO":
            logging_level = logging.INFO
        else if logging_string == "WARNING":
            logging_level = logging.WARNING
        else if logging_string == "ERROR":
            logging_level = logging.ERROR
        else if logging_string == "CRITICAL":
            logging_level = logging.CRITICAL
    else:
        logging_level = logging.INFO
    logger = logging.getLogger(app_name)
    logger.setLevel(logging_level)
    # Console handler for error output
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging_level)
    # Formatter to make everything look nice
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    console_handler.setFormatter(formatter)
    # Add the handlers to the logger
    logger.addHandler(console_handler)
    return logger
And my processing functions look like this:
import custom_logging

def do_capture(data_dict_access):
    """Process data"""
    # Custom logging
    LOGGER = custom_logging.create_logger("processor")
    LOGGER.debug("Doing stuff...")
However, no matter what the logging environment variable is set to, I still receive debug log messages in the console. Why is my logging level not taking effect? Surely the calls to setLevel() should stop the debug messages from being logged?
Here is an easy way to create a logger object:
import logging
import os

def create_logger(app_name):
    """Create a logging interface"""
    logging_level = os.getenv('logging', logging.INFO)
    logging.basicConfig(
        level=logging_level,
        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    logger = logging.getLogger(app_name)
    return logger
Discussion
There is no need to convert from "DEBUG" to logging.DEBUG; the logging module understands these strings.
Use basicConfig to ease the pain of setting up a logger. You don't need to create a handler, set the format, set the level, and so on. This should work for most cases.
Update
I found out why your code does not work, besides the else if (which in Python must be elif). Consider your line:
if logging in os.environ:
On this line, logging without quotes refers to the logging library package. What you want is:
if 'logging' in os.environ:
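Putting both fixes together (quoting 'logging' and replacing else if with elif), here is a corrected sketch of the original create_logger; a dict lookup stands in for the chain of comparisons:

```python
import logging
import os

def create_logger(app_name):
    """Create a logging interface, reading the level from the environment."""
    levels = {
        "DEBUG": logging.DEBUG,
        "INFO": logging.INFO,
        "WARNING": logging.WARNING,
        "ERROR": logging.ERROR,
        "CRITICAL": logging.CRITICAL,
    }
    # 'logging' must be quoted: we look the *string* up in os.environ
    logging_level = levels.get(os.environ.get("logging", ""), logging.INFO)

    logger = logging.getLogger(app_name)
    logger.setLevel(logging_level)

    # Console handler for output
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging_level)
    console_handler.setFormatter(
        logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
    logger.addHandler(console_handler)
    return logger
```

Note that calling this repeatedly with the same app_name (as the multiprocessing setup in the question might) adds a new handler each time, so messages can appear duplicated.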

Python logging caches?

I have a function in my python package, which returns a logger:
import logging

def get_logger(logger_name, log_level='DEBUG'):
    """Setup and return a logger."""
    log = logging.getLogger(logger_name)
    log.setLevel(log_level)
    formatter = logging.Formatter('[%(levelname)s] %(message)s')
    handler = logging.StreamHandler()
    handler.setFormatter(formatter)
    log.addHandler(handler)
    return log
I use this logger in my modules and submodules:
from tgtk.logger import get_logger
log = get_logger(__name__, 'DEBUG')
so I can use it via
log.debug("Debug message")
log.info("Info message")
etc. This has worked perfectly so far, but today I encountered a weird problem: on one machine there is no output at all. Every log.xxx call is simply ignored, whatever level is set. I had similar issues in the past and remember that it somehow started working again after I renamed the logger, but this time that doesn't help.
Is there any caching going on, or what is happening? The scripts are exactly the same (synced over SVN).
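To the caching part of the question: yes, logging.getLogger caches loggers by name for the lifetime of the process, so every get_logger call returns the same object and stacks another handler onto it. That by itself duplicates output rather than silencing it; total silence usually points to something else disabling the logger (for example logging.disable(), logger.disabled = True, or a fileConfig call elsewhere with disable_existing_loggers left at its default). A quick sketch of the caching behavior:

```python
import logging

a = logging.getLogger("tgtk.demo")
b = logging.getLogger("tgtk.demo")
print(a is b)  # True: getLogger returns the cached instance per name

a.addHandler(logging.NullHandler())
a.addHandler(logging.NullHandler())
print(len(b.handlers))  # 2: repeated setup calls accumulate handlers on the same logger
```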
