Python Flask logging: two different formats, but want one format

I am working on a REST API using Flask.
The API is currently divided across two Python 3.6 modules: update.py and vmware_exporters_support.py.
update.py logs the way I want. vmware_exporters_support.py does not log the way I want. I want vmware_exporters_support.py to use update.py's logging format without logging things twice.
In update.py, the logging is set up with:
from flask.logging import create_logger
app = Flask('collector_api')
logger = create_logger(app)
import vmware_exporters_support
And create_logger, which is part of Flask, is at
https://github.com/pallets/flask/blob/1.1.x/src/flask/logging.py
Then in vmware_exporters_support.py I'm setting up logging with:
logger = logging.getLogger()
It seems like this should just pick up the root logger from update.py, but I'm not sure it really does, given how differently it behaves.
An illustrative log snippet looks like:
[2021-01-21 12:12:29,810] INFO in update: Writing container yaml /data/vmware-exporter/vmware_exporter_1
2021-01-21.12:12:29 INFO Writing container yaml /data/vmware-exporter/vmware_exporter_1
The [2021-01-21 12:12:29,810] (with the square brackets) is coming from update.py, and the 2021-01-21.12:12:29 (without the square brackets) is coming from vmware_exporters_support.py
What do I need to do to get vmware_exporters_support.py to use the same logging format as update.py?
BTW, update.py is the __main__, not vmware_exporters_support.py.
And I'm using Flask 1.1.2.
Thanks in advance!

The main reason is that you have two different loggers.
create_logger() uses app.name as the logger name (collector_api), but in vmware_exporters_support.py you created the root logger (logger = logging.getLogger()). Therefore the two loggers can have different handlers and formatters. Here is an example of how it works in your case:
import logging
logger1 = logging.getLogger('collector_api')
logger1.propagate = False
ch = logging.StreamHandler()
ch.setLevel(logging.INFO)
fm = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(fm)
logger1.addHandler(ch)
# logger2 without handler and formatter...
logger2 = logging.getLogger()
logger2.setLevel(logging.INFO)
# uncomment later:
# for h in logger1.handlers:
#     logger2.addHandler(h)
logger1.info('first')
logger2.info('second')
Run the script. You will see only one message:
2021-01-25 11:11:19,241 - collector_api - INFO - first
Now uncomment the for h in logger1.handlers: ... block. You'll see two messages:
2021-01-25 11:13:53,593 - collector_api - INFO - first
2021-01-25 11:13:53,593 - root - INFO - second
So you either need to use a single logger, or attach the same handlers and formatters to both loggers.
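For example, here is a minimal sketch of the second option in update.py, assuming Flask 1.1's create_logger() attached flask.logging.default_handler (the handler that produces the bracketed format you like). Putting that handler on the root logger and taking it off the app logger gives every module the same format without double logging:

import logging

from flask import Flask
from flask.logging import create_logger, default_handler

app = Flask('collector_api')
logger = create_logger(app)

# Give the root logger Flask's default handler, i.e. the
# '[%(asctime)s] %(levelname)s in %(module)s: %(message)s' format, so that
# logging.getLogger() in vmware_exporters_support.py uses it too.
root = logging.getLogger()
root.setLevel(logging.INFO)
root.addHandler(default_handler)

# 'collector_api' propagates to the root logger, so drop its own copy of the
# handler to avoid emitting every record twice.
logger.removeHandler(default_handler)

import vmware_exporters_support  # its root logger calls now share the format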

Related

Python logging module: logger is repeating the message more and more as I run the script

So I have been having this weird problem in my Spyder IDE (and only in Spyder).
I initialize a logger using the logging module, and each time I run the script the messages are printed more and more times (once on the first run, twice on the second, three times on the third, and so on). Any ideas why?
The code:
import logging
logger = logging.getLogger(__name__)
handle = logging.StreamHandler()
logger.addHandler(handle)
handle.setLevel(logging.DEBUG)
formatter = logging.Formatter(fmt='%(asctime)s - %(levelname)s - %(message)s')
handle.setFormatter(formatter)
logger.warning('Testing')
Thanks in advance!
EDIT: This is what I see in my console after two tries:
logger.warning('Testing')
2020-09-30 15:34:34,763 - WARNING - Testing
logger.warning('Testing')
2020-09-30 15:34:38,476 - WARNING - Testing
2020-09-30 15:34:38,476 - WARNING - Testing
The logging module keeps logger objects alive for the duration of your Python session, and Spyder by default reuses the same interpreter session between runs. So every time you run your script, addHandler adds yet another handler to logger.handlers. You can add a line to your script that prints logger.handlers and check.
...
logger.addHandler(handle)
print(logger.handlers)
...
And every time you call logger.warning the message will be printed as many times as the number of handlers you have.
To fix this you need to remove the handler:
...
logger.warning('Testing')
logger.removeHandler(handle)
Another fix is to check if you already have a handler before adding a new one. This way you will only have one handler, and you won't need to remove it.
...
if not logger.handlers:
    handle = logging.StreamHandler()
    formatter = logging.Formatter(fmt='%(asctime)s - %(levelname)s - %(message)s')
    handle.setFormatter(formatter)
    handle.setLevel(logging.DEBUG)
    logger.addHandler(handle)
...
You could also look into removing all handlers first.
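For instance, a minimal sketch of the clear-everything approach (safe to re-run in the same session):

import logging

logger = logging.getLogger(__name__)

# Drop any handlers left over from a previous run in this session.
for old_handler in list(logger.handlers):
    logger.removeHandler(old_handler)

handle = logging.StreamHandler()
handle.setLevel(logging.DEBUG)
handle.setFormatter(logging.Formatter(fmt='%(asctime)s - %(levelname)s - %(message)s'))
logger.addHandler(handle)

logger.warning('Testing')  # printed exactly once, however often you re-run the script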

Python Logging - Only for own imported modules

referring to this question here: LINK
How can I set up a config that will only log my root script and my own sub-scripts? The linked question asked about disabling logging for all imported modules, but that is not my intention.
My root setup:
import logging
from exchangehandler import send_mail
log_wp = logging.getLogger(__name__)
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s - %(levelname)s [%(filename)s]: %(name)s %(funcName)20s - Message: %(message)s',
                    datefmt='%d.%m.%Y %H:%M:%S',
                    filename='C:/log/myapp.log',
                    filemode='a')
handler = logging.StreamHandler()
log_wp.addHandler(handler)
log_wp.debug('This is from root')
send_mail('address#eg.com', 'Request', 'Hi there')
My sub-module exchangehandler.py:
import logging
log_wp = logging.getLogger(__name__)
def send_mail(mail_to, mail_subject, mail_body, mail_attachment=None):
    log_wp.debug('Hey this is from exchangehandler.py!')
    m.send_and_save()
myapp.log:
16.07.2018 10:27:40 - DEBUG [test_script.py]: __main__ <module> - Message: This is from root
16.07.2018 10:28:02 - DEBUG [exchangehandler.py]: exchangehandler send_mail - Message: Hey this is from exchangehandler.py!
16.07.2018 10:28:02 - DEBUG [folders.py]: exchangelib.folders get_default_folder - Message: Testing default <class 'exchangelib.folders.SentItems'> folder with GetFolder
16.07.2018 10:28:02 - DEBUG [services.py]: exchangelib.services get_payload - Message: Getting folder ArchiveDeletedItems (archivedeleteditems)
16.07.2018 10:28:02 - DEBUG [services.py]: exchangelib.services get_payload - Message: Getting folder ArchiveInbox (archiveinbox)
My problem is that the log file also contains a lot of information from the exchangelib module, which is imported in exchangehandler.py. Either the imported exchangelib module is configured incorrectly, or I have made a mistake. How can I reduce the log output to only my own logging messages?
EDIT:
An extract from folders.py of the exchangelib module (this is not code that I have written):
import logging
log = logging.getLogger(__name__)
def get_default_folder(self, folder_cls):
    try:
        # Get the default folder
        log.debug('Testing default %s folder with GetFolder', folder_cls)
        # Use cached instance if available
        for f in self._folders_map.values():
            if isinstance(f, folder_cls) and f.has_distinguished_name:
                return f
        return folder_cls.get_distinguished(account=self.account)
The imported exchangelib module is not configured at all when it comes to logging. You are configuring it implicitly by calling logging.basicConfig() in your main module.
exchangelib does create loggers and logs to them, but by default these loggers have no handlers and formatters attached, so they don't produce any visible output. What they do is propagate up to the root logger, which by default also has no handlers and formatters attached.
By calling logging.basicConfig() in your main module, you attach a handler to the root logger. Your own loggers propagate to the root logger, so their messages are written by that handler, but from that point onwards the same is true for the exchangelib loggers.
You have at least two options here. You can explicitly configure "your" named logger(s):
main module
import logging
log_wp = logging.getLogger(__name__) # or pass an explicit name here, e.g. "mylogger"
hdlr = logging.StreamHandler()
fhdlr = logging.FileHandler("myapp.log")
log_wp.addHandler(hdlr)
log_wp.addHandler(fhdlr)
log_wp.setLevel(logging.DEBUG)
The above is very simplified. To explicitly configure multiple named loggers, refer to the logging.config HowTo.
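For illustration only, a minimal dictConfig sketch along those lines (the logger name mylogger and the file name myapp.log are just example choices, not anything from your project):

import logging
import logging.config

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'detailed': {
            'format': '%(asctime)s - %(levelname)s [%(filename)s]: %(name)s %(funcName)20s - Message: %(message)s',
            'datefmt': '%d.%m.%Y %H:%M:%S',
        },
    },
    'handlers': {
        'file': {
            'class': 'logging.FileHandler',
            'filename': 'myapp.log',
            'formatter': 'detailed',
        },
    },
    'loggers': {
        # Only this logger (and its children) get the file handler;
        # the exchangelib loggers keep no handlers of their own.
        'mylogger': {
            'level': 'DEBUG',
            'handlers': ['file'],
        },
    },
})

log_wp = logging.getLogger('mylogger')
log_wp.debug('This goes to myapp.log; exchangelib output does not')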
If you rather want to stick to just using the root logger (configured via basicConfig()), you can also explicitly disable the undesired loggers after exchangelib has been imported and these loggers have been created:
logging.getLogger("exchangelib.folders").disabled = True
logging.getLogger("exchangelib.services").disabled = True
If you don't know the names of the loggers to disable, logging has a dictionary holding all the known loggers. So you could temporarily do this to see all the loggers your program creates:
# e.g. after the line 'log_wp.addHandler(handler)'
print([k for k in logging.Logger.manager.loggerDict])
Using the dict would also allow you to do something like this (iterating over the names rather than the values avoids tripping over the PlaceHolder entries the manager also keeps in that dict):
for name in logging.Logger.manager.loggerDict:
    if name.startswith('exchangelib'):
        logging.getLogger(name).disabled = True

Python logger double output

I'm simply trying to have a python logger with a specific format that outputs log messages only to the console. I've tried many different things but I keep getting 2 lines of console output per log call.
Here is my code:
import logging

logger = logging.getLogger('my_logger')
logger.setLevel(logging.INFO)
# Create console handler
stream_handler = logging.StreamHandler()
formatter = logging.Formatter('%(levelname)s - %(asctime)s - %(name)s - %(message)s')
stream_handler.setFormatter(formatter)
logger.addHandler(stream_handler)
logger.info('TEST LOG info')
With output:
INFO - 2017-08-21 14:30:00,751 - my_logger - TEST LOG info
INFO:my_logger:TEST LOG info
I did exactly this and it didn't work: Disable output of root logger
Any idea what is going on? I don't care whether I use the root logger or not; I just want one line of output.
The above code should actually work correctly on its own. Although my script was very lean, it was importing a third-party library which, somewhere down the line, configured logging in a way that affected my output.
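If you hit the same situation and cannot easily change the offending import, one common workaround (just a sketch, independent of whichever library caused it) is to stop your logger from propagating records up to the root logger that the import configured:

import logging

logger = logging.getLogger('my_logger')
logger.setLevel(logging.INFO)

stream_handler = logging.StreamHandler()
stream_handler.setFormatter(
    logging.Formatter('%(levelname)s - %(asctime)s - %(name)s - %(message)s'))
logger.addHandler(stream_handler)

# Keep records out of the root logger, so whatever handler the imported
# library attached there no longer prints the second line.
logger.propagate = False

logger.info('TEST LOG info')  # printed once, in your own format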

How do I set a different name to an individual log handler?

I have two log handlers in my code: a StreamHandler to write INFO level logs from the same module to stdout, and a FileHandler to write more verbose, DEBUG logs to a file. This is my code:
import sys
import logging
log = logging.getLogger('mymodule')
log.setLevel(logging.DEBUG)
logf = logging.FileHandler('file.log')
logf.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
log.addHandler(logf)
logs = logging.StreamHandler(sys.stdout)
logs.setLevel(logging.INFO)
logs.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
log.addHandler(logs)
However, I want the FileHandler to also write DEBUG info from other modules. I can achieve this if I remove the name from logging.getLogger(), but that would also affect my StreamHandler, which should only print output from my own module.
So is there a way to have either of the handlers use a different name?
There's a name attribute available on the base Handler class. I took a look at the Python source, and it doesn't look like it's used internally, so you could manually set each handler to a different name.
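For example, a minimal sketch building on the code above (the handler names file_handler and console_handler are arbitrary labels chosen here):

import sys
import logging

log = logging.getLogger('mymodule')
log.setLevel(logging.DEBUG)

logf = logging.FileHandler('file.log')
logf.set_name('file_handler')  # equivalently: logf.name = 'file_handler'
logf.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
log.addHandler(logf)

logs = logging.StreamHandler(sys.stdout)
logs.set_name('console_handler')
logs.setLevel(logging.INFO)
logs.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
log.addHandler(logs)

# The names let you pick out a specific handler later, e.g. to adjust it:
for handler in log.handlers:
    if handler.get_name() == 'console_handler':
        handler.setLevel(logging.WARNING)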

Logging over multiple modules

How can I log everything using Python 'logging' to 1 text file, over multiple modules?
Main.py:
import logging
import logging.handlers
logging.basicConfig(format='localhost - - [%(asctime)s] %(message)s', level=logging.DEBUG)
log_handler = logging.handlers.RotatingFileHandler('debug.out', maxBytes=2048576)
log = logging.getLogger('logger')
log.addHandler(log_handler)
import test
Test.py:
import logging
log = logging.getLogger('logger')
log.error('test')
debug.out stays empty. I'm not sure what to try next, even after reading the logging documentation.
Edit: Fixed with the code above.
Set an appropriate logging level (ERROR or lower, so that messages of level ERROR and above get through) and add a handler that writes the messages to a file. For more details, have a look at https://docs.python.org/2/howto/logging-cookbook.html.
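As a rough sketch of that setup, keeping the file names from the question (configure the root logger once in Main.py; every other module just asks for its own named logger and propagates to the root):

# Main.py
import logging
import logging.handlers

handler = logging.handlers.RotatingFileHandler('debug.out', maxBytes=2048576)
handler.setFormatter(logging.Formatter('localhost - - [%(asctime)s] %(message)s'))

root = logging.getLogger()
root.setLevel(logging.DEBUG)   # let everything through; handlers can filter further
root.addHandler(handler)

import test  # Test.py logs through the root logger configured above

# Test.py
import logging

log = logging.getLogger(__name__)
log.error('test')  # propagates to the root logger and ends up in debug.out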
