Python logging caches?

I have a function in my python package, which returns a logger:
import logging

def get_logger(logger_name, log_level='DEBUG'):
    """Setup and return a logger."""
    log = logging.getLogger(logger_name)
    log.setLevel(log_level)
    formatter = logging.Formatter('[%(levelname)s] %(message)s')
    handler = logging.StreamHandler()
    handler.setFormatter(formatter)
    log.addHandler(handler)
    return log
I use this logger in my modules and submodules:
from tgtk.logger import get_logger
log = get_logger(__name__, 'DEBUG')
so I can use it via
log.debug("Debug message")
log.info("Info message")
etc. This has worked perfectly so far, but today I encountered a weird problem: on one machine there is no output at all. Every log.xxx call is simply "ignored", whatever level is set. I have had similar issues in the past, and I remember that it somehow started working again after I renamed the logger, but this time that does not help.
Is there any caching or what is going on? The scripts are exactly the same (synced over SVN).
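For what it's worth, logging.getLogger does cache: it returns the same logger object for a given name, process-wide, so every extra call to a setup function like get_logger above stacks one more handler onto that shared instance. A minimal sketch (the name tgtk.example is made up for illustration):

import logging

a = logging.getLogger("tgtk.example")
b = logging.getLogger("tgtk.example")
print(a is b)           # True - loggers are cached per name for the whole process

a.addHandler(logging.StreamHandler())
print(len(b.handlers))  # 1 - the handler added via "a" is visible through "b"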

Related

Creating a custom slack log handler doesn't work

I want to create a custom Python logging handler to send messages via Slack.
I found this package, but it is no longer being updated, so I created a very bare-bones version of it. However, it doesn't seem to work: I added a print call for debugging purposes, and emit is not being invoked.
import logging
# import slacker

class SlackerLogHandler(logging.Handler):
    def __init__(self, api_key, channel):
        super().__init__()
        self.channel = channel
        # self.slacker = slacker.Slacker(api_key)

    def emit(self, record):
        message = self.format(record)
        print('works')
        # self.slacker.chat.post_message(text=message, channel=self.channel)

slack_handler = SlackerLogHandler('token', 'channel')
slack_handler.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
slack_handler.setFormatter(formatter)

logger = logging.getLogger('my_app')
logger.addHandler(slack_handler)
logger.info('info_try')  # 'works' is not printed and no Slack message is sent
I saw this answer and tried to also inherit from StreamHandler but to no avail.
I think I am missing something very basic.
Edited to remove the Slack logic for ease of reproduction.
After some additional searching, I found out that the logging level set for the handler is separate from the level set for the logger, meaning that adding:
logger.setLevel(logging.INFO)
fixes the problem.
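A minimal sketch of the complete fix, assuming the SlackerLogHandler class from the question is defined:

import logging

slack_handler = SlackerLogHandler('token', 'channel')
slack_handler.setLevel(logging.INFO)

logger = logging.getLogger('my_app')
logger.setLevel(logging.INFO)   # without this, the logger's default effective
                                # level (WARNING, inherited from the root) drops
                                # INFO records before any handler sees them
logger.addHandler(slack_handler)
logger.info('info_try')         # now 'works' is printed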

Python logging.getLogger not working in AWS Glue python shell job

I am trying to set up a logger for my AWS Glue job using Python's logging module. I have a Glue job with the type as "Python Shell" using Python version 3.
Logging works fine if I instantiate the logger without any name, but if I give my logger a name, it no longer works, and I get an error which says: Log stream not found.
I have the following code in an example Glue job:
import sys
import logging

# Version 1 - this works fine
logger = logging.getLogger()
log_format = "[%(asctime)s %(levelname)-8s %(message)s"

# Version 2 - this fails
logger = logging.getLogger(name="foobar")
log_format = "[%(name)s] %(asctime)s %(levelname)-8s %(message)s"

date_format = "%a, %d %b %Y %H:%M:%S %Z"
log_stream = sys.stdout

if logger.handlers:
    for handler in logger.handlers:
        logger.removeHandler(handler)

logging.basicConfig(level=logging.INFO, format=log_format,
                    stream=log_stream, datefmt=date_format)

logger.info("This is a test.")
Note that I'm removing the handlers based on this post.
If I instantiate the logger using Version 1 of the code, it runs successfully and I am able to view the logs, as well as query them in CloudWatch.
If I run Version 2, giving the logger a name, the Glue job still succeeds. However, if I try to view the logs, I get the following error message:
Log stream not found
The log stream jr_f137743545d3d242618ac95d859b9146fd15d15a0aadce64d8f3ba991ffed012 could not be found. Check if it was correctly created and retry.
And I am also not able to query these logs in CloudWatch.
I have tried running this code locally using Python 3.6.0, and both versions work. Additionally, both versions of this logging code work inside of a Lambda function. They only fail in Glue.
This code worked for me:
import sys
import logging

root = logging.getLogger()
root.setLevel(logging.DEBUG)

handler = logging.StreamHandler(sys.stdout)
handler.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
root.addHandler(handler)
root.info("check")
I had a similar issue but fixed it with a combination of correct roles and looking in the right place in CloudWatch. Make sure you're using the GlueServiceRole. Both Steve's logging code and yours are fine, but the place you are taken to in CloudWatch when you click the "Logs" button in Glue isn't the correct logging folder.
Go back into log groups, then go into /aws-glue/python-jobs/error: that is where the logger writes to, while stdout writes to the folder /aws-glue/python-jobs/output. It's not a very intuitive setup, writing the logs to the error logs folder, but hey ho, I'm sure there is a way of configuring it to write where expected.
You should be able to name the log stream by using the following (replace "logger-name-here" with your desired log stream name):
import logging
MSG_FORMAT = '%(asctime)s %(levelname)s %(name)s: %(message)s'
DATETIME_FORMAT = '%Y-%m-%d %H:%M:%S'
logging.basicConfig(format=MSG_FORMAT, datefmt=DATETIME_FORMAT)
logger = logging.getLogger(<logger-name-here>)
logger.setLevel(logging.INFO)
logger.info("Test log message")

Prevent Python logger from printing to console

I'm getting mad at Python's logging module, because I really have no idea anymore why the logger is printing the log messages to the console (at DEBUG level, even though I set my FileHandler to INFO). The log file is produced correctly.
But I don't want any logger information on the console.
Here is my configuration for the logger:
import logging

template_name = "testing"
fh = logging.FileHandler(filename="testing.log")
fr = logging.Formatter("%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s")
fh.setFormatter(fr)
fh.setLevel(logging.INFO)
logger = logging.getLogger(template_name)
# logger.propagate = False # this helps nothing
logger.addHandler(fh)
Would be nice if anybody could help me out :)
I found this question as I encountered a similar issue after I had removed logging.basicConfig(): it started printing all logs to the console.
In my case, I needed to change the filename on every run to save the log files to different directories. Adding the basic config at the top (even if the initial filename is never used) solved the issue for me. (I can't explain why, sorry.)
What helped me was adding the basic configuration in the beginning:
logging.basicConfig(filename=filename,
                    format='%(levelname)s - %(asctime)s - %(name)s - %(message)s',
                    filemode='w',
                    level=logging.INFO)
Then changing the filename by adding a handler in each run:
file_handler = logging.FileHandler(path_log + f'/log_run_{c_run}.log')
formatter = logging.Formatter('%(asctime)s : %(levelname)s : %(name)s : %(message)s')
file_handler.setFormatter(formatter)
logger_TS.addHandler(file_handler)
Also, a curious side note: if I don't set the formatter (file_handler.setFormatter(formatter)) before adding the handler with the new filename, the formatting configured initially (levelname, time, etc.) is missing from the log files.
So the key is to call logging.basicConfig first, then add the handler, as @StressedBoi69420 indicated above.
Hope that helps a bit.
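One likely explanation for the console output discussed here (hedging, since the thread itself doesn't say): when a record reaches a logger that has no handler anywhere on its path to the root, Python 3 falls back to logging.lastResort, a stderr handler fixed at WARNING level. A minimal sketch:

import logging

logger = logging.getLogger("no_handlers_anywhere")
logger.warning("printed to stderr via logging.lastResort")

logging.lastResort = None  # disable the fallback entirely
logger.warning("dropped (a one-time 'no handlers' notice may appear instead)")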
You should be able to add a StreamHandler for stdout and set the handler's log level to a level above 50. (Standard log levels are 50 and below.)
Example of how I'd do it...
import logging
import sys

console_log_level = 100

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s",
                    filename="testing.log",
                    filemode="w")

console = logging.StreamHandler(sys.stdout)
console.setLevel(console_log_level)
root_logger = logging.getLogger("")
root_logger.addHandler(console)

logging.debug("debug log message")
logging.info("info log message")
logging.warning("warning log message")
logging.error("error log message")
logging.critical("critical log message")
Contents of testing.log...
2019-11-21 12:53:02,426,426 root INFO info log message
2019-11-21 12:53:02,426,426 root WARNING warning log message
2019-11-21 12:53:02,426,426 root ERROR error log message
2019-11-21 12:53:02,426,426 root CRITICAL critical log message
Note: the only reason I have the console_log_level variable is that I pulled most of this code from a default function I use, which sets the console log level based on an argument value. That way, if I want to make the script "quiet", I can change the console log level with a command-line argument; a sketch of that pattern follows.
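A minimal sketch of that pattern (the --quiet flag and the level value of 100 are my own illustration, not from the answer above):

import argparse
import logging
import sys

parser = argparse.ArgumentParser()
parser.add_argument("--quiet", action="store_true", help="suppress console output")
args = parser.parse_args()

logging.basicConfig(level=logging.INFO, filename="testing.log", filemode="w")

console = logging.StreamHandler(sys.stdout)
console.setLevel(100 if args.quiet else logging.INFO)  # 100 is above CRITICAL (50)
logging.getLogger("").addHandler(console)

logging.info("goes to the file, and to the console unless --quiet was passed")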

Flask + sqlalchemy advanced logging

I found a few other posts on this, but none that worked for me yet, so I wanted to reach out and see if anyone could explain how to properly get / redirect / set handlers on some of the loggers present in Flask / Werkzeug / SQLAlchemy.
Prior research that could not answer my question:
https://github.com/pallets/flask/issues/1359
http://flask.pocoo.org/docs/dev/logging/
https://gist.github.com/ibeex/3257877
My configurations:
main.py
...
def init_app():
    """ Runs prior to app launching, contains initialization code """
    # set logging level
    if not os.path.exists(settings.LOG_DIR):
        os.makedirs(settings.LOG_DIR)

    # default level
    log_level = logging.CRITICAL
    if settings.ENV == 'DEV':
        log_level = logging.DEBUG
    elif settings.ENV == 'TEST':
        log_level = logging.WARNING
    elif settings.ENV == 'PROD':
        log_level = logging.ERROR

    log_formatter = logging.Formatter("[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s")

    api_logger = logging.getLogger()
    api_handler = TimedRotatingFileHandler(
        settings.API_LOG_FILE,
        when='midnight',
        backupCount=10
    )
    api_handler.setLevel(log_level)
    api_handler.setFormatter(log_formatter)
    api_logger.addHandler(api_handler)
    logging.getLogger('werkzeug').addHandler(api_handler)

    db_logger = logging.getLogger('sqlalchemy')
    db_handler = TimedRotatingFileHandler(
        settings.DB_LOG_FILE,
        when='midnight',
        backupCount=10
    )
    db_handler.setLevel(log_level)
    db_handler.setFormatter(log_formatter)
    db_logger.addHandler(db_handler)
    logging.getLogger('sqlalchemy.engine').addHandler(db_handler)
    logging.getLogger('sqlalchemy.dialects').addHandler(db_handler)
    logging.getLogger('sqlalchemy.pool').addHandler(db_handler)
    logging.getLogger('sqlalchemy.orm').addHandler(db_handler)

    # add endpoints
    ...

if __name__ == '__main__':
    init_app()
    app.run(host='0.0.0.0', port=7777)
I tried grabbing and changing settings on the loggers a few different ways, but I still end up with the werkzeug debug output going to the console and not to my logs. I can see the log files are being created, but it doesn't look like the loggers are actually writing to them:
api.log (formatter wrote to it)
[2018-02-15 12:03:03,944] {/usr/local/lib/python3.5/dist-packages/werkzeug/_internal.py:88} WARNING - * Debugger is active!
db.log (empty)
Any insight on this would be much appreciated!
UPDATE
I was able to get the werkzeug logger working using the longhand version; it seems the shorthand function calls shown were returning null objects. The sqlalchemy logger is still outputting to the console, though. Could the engine configuration be overriding my file handler?
main.py
...
from copy import copy  # needed for the copy() calls below

# close current file handlers
for handler in copy(logging.getLogger().handlers):
    logging.getLogger().removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('werkzeug').handlers):
    logging.getLogger('werkzeug').removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('sqlalchemy.engine').handlers):
    logging.getLogger('sqlalchemy.engine').removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('sqlalchemy.dialects').handlers):
    logging.getLogger('sqlalchemy.dialects').removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('sqlalchemy.pool').handlers):
    logging.getLogger('sqlalchemy.pool').removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('sqlalchemy.orm').handlers):
    logging.getLogger('sqlalchemy.orm').removeHandler(handler)
    handler.close()

# create our own custom handlers
log_formatter = logging.Formatter("[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s")

api_handler = TimedRotatingFileHandler(
    settings.API_LOG_FILE,
    when='midnight',
    backupCount=10
)
api_handler.setLevel(log_level)
api_handler.setFormatter(log_formatter)

logging.getLogger().setLevel(log_level)
logging.getLogger().addHandler(api_handler)
logging.getLogger('werkzeug').setLevel(log_level)
logging.getLogger('werkzeug').addHandler(api_handler)

db_handler = TimedRotatingFileHandler(
    settings.DB_LOG_FILE,
    when='midnight',
    backupCount=10
)
db_handler.setLevel(log_level)
db_handler.setFormatter(log_formatter)

logging.getLogger('sqlalchemy.engine').addHandler(db_handler)
logging.getLogger('sqlalchemy.engine').setLevel(log_level)
logging.getLogger('sqlalchemy.dialects').addHandler(db_handler)
logging.getLogger('sqlalchemy.dialects').setLevel(log_level)
logging.getLogger('sqlalchemy.pool').addHandler(db_handler)
logging.getLogger('sqlalchemy.pool').setLevel(log_level)
logging.getLogger('sqlalchemy.orm').addHandler(db_handler)
logging.getLogger('sqlalchemy.orm').setLevel(log_level)
database.py
...
engine = create_engine(getDBURI(), echo="debug", echo_pool=True, pool_recycle=10)
ANSWER
For future reference, if anyone runs into this issue: the SQLAlchemy engine configuration echo=True|'debug' will OVERRIDE your loggers. I fixed the issue by changing my engine configuration to:
engine = create_engine(getDBURI(), echo_pool=True, pool_recycle=10)
And then everything worked like a charm. Cheers! :D
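For anyone who wants to see the override in action, a small sketch (it assumes SQLAlchemy is installed and uses an in-memory SQLite database; the exact child logger that echo touches varies by SQLAlchemy version):

import logging
from sqlalchemy import create_engine

engine = create_engine('sqlite:///:memory:', echo='debug')

for name in ('sqlalchemy.engine', 'sqlalchemy.engine.Engine'):
    lg = logging.getLogger(name)
    print(name, lg.handlers)  # with echo set, SQLAlchemy attaches a default
                              # stdout StreamHandler when the engine's own
                              # logger has none, bypassing handlers configured
                              # on the parent loggers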
As I understand it, your file-based log configuration for werkzeug is actually working => it outputs into api.log.
The db log handler is also working (the file gets created etc.), but there is no output.
This is probably due to the log level of those loggers being ERROR by default. You need to set them manually to a lower level, like this:
logging.getLogger('sqlalchemy.engine').setLevel(logging.DEBUG)
logging.getLogger('sqlalchemy.dialects').setLevel(logging.DEBUG)
logging.getLogger('sqlalchemy.pool').setLevel(logging.DEBUG)
logging.getLogger('sqlalchemy.orm').setLevel(logging.DEBUG)
That werkzeug is still outputting to the console is probably because there is always a root logger defined. Before you add your new handlers, you should do the following to remove all log handlers:
for handler in copy(logging.getLogger().handlers):
    logging.getLogger().removeHandler(handler)
    handler.close()  # clean up used file handles
Then you can also assign your app log handler as the root log handler with
logging.getLogger().addHandler(api_handler)
If it's not the root logger but just the werkzeug logger that has a default console handler defined, you can also just remove all handlers from the werkzeug logger before adding yours, like this:
for handler in copy(logging.getLogger('werkzeug').handlers):
    logging.getLogger('werkzeug').removeHandler(handler)
    handler.close()  # clean up used file handles
logging.getLogger('werkzeug').addHandler(api_handler)

Strange behavior of logger

I encountered a problem that I couldn't solve. I use Python's logger to log info, with the logger level set to logging.DEBUG. I use gunicorn to log info at the same time. Normally, the error messages go to Python's logger, and the link messages and other messages written by logger.info or logger.debug go to gunicorn's log file. With one application, however, it doesn't behave this way: the messages output by logger.info also go to Python's logger. The problem is, I only want to see error messages in Python's logger; all the other messages should be seen in gunicorn's logger. Can anyone give me a clue where I might have gone wrong in this situation?
Thanks in advance,
alex
The following is my config:
LOGGER_LEVEL = logging.DEBUG
LOGGER_ROOT_NAME = "root"
LOGGER_ROOT_HANDLERS = [logging.StreamHandler, logging.FileHandler]
LOGGER_ROOT_LEVEL = LOGGER_LEVEL
LOGGER_ROOT_FORMAT = "[%(asctime)s %(levelname)s %(name)s %(funcName)s:%(lineno)d] %(message)s"
LOGGER_LEVEL = logging.ERROR
LOGGER_FILE_PATH = "/data/log/web/"
Code:
def config_root_logger(self):
    formatter = logging.Formatter(self.config.LOGGER_ROOT_FORMAT)
    logger = logging.getLogger()
    logger.setLevel(self.config.LOGGER_ROOT_LEVEL)
    filename = os.path.join(self.config.LOGGER_FILE_PATH, "secondordersrv.log")
    handler = logging.FileHandler(filename)
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    # in the test environment, also log to the console
    self._add_test_handler(logger, formatter)

def _add_test_handler(self, logger, formatter):
    # in the test environment, also log to the console
    if self.config.RUN_MODE == 'test':
        handler = logging.StreamHandler()
        handler.setFormatter(formatter)
        logger.addHandler(handler)
My gunicorn config looks like this:
errorlog = '/data/log/web/%s.log' % APP_NAME
loglevel = 'info'
accesslog = '-'
You did not set the level of your handler.
After handler.setFormatter(formatter), add the following line:
handler.setLevel(self.config.LOGGER_LEVEL)
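A minimal sketch of the resulting behavior, with the question's config values inlined (LOGGER_ROOT_LEVEL is DEBUG and LOGGER_LEVEL is ERROR, as in the config above):

import logging

formatter = logging.Formatter("[%(asctime)s %(levelname)s %(name)s %(funcName)s:%(lineno)d] %(message)s")
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)   # the logger lets everything through...

handler = logging.FileHandler("secondordersrv.log")
handler.setFormatter(formatter)
handler.setLevel(logging.ERROR)  # ...but this handler keeps only errors
logger.addHandler(handler)

logger.info("skipped by the file handler; gunicorn's own handlers can take it")
logger.error("written to secondordersrv.log")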
