I have a question about Python's standard logging mechanism. If I use logging.config.fileConfig to load my configuration file, then create loggers for some modules using logging.getLogger and test them right after creation, they work. Now, if I call logging.config.fileConfig again with the same configuration file and create loggers for some other module, would the previous ones still work? Basically, for the following logic:
import logging
import logging.config

logging.config.fileConfig(config_file)
logger1 = logging.getLogger(module1)
logger2 = logging.getLogger(module2)
logging.config.fileConfig(config_file)
logger3 = logging.getLogger(module3)
config_file is the same in both calls. Should logger1 and logger2 be functional? What if config_file is different in the two calls? Currently my logger1 and logger2 stop working after I load a new config_file. So the first step is to check whether this is normal behaviour. If so, is it possible to make this work without merging the two config files into one big one?
Regards,
Bogdan
Config files are intended to completely replace the existing configuration with whatever is in the configuration file - any loggers which are not named in the configuration, or children thereof, are disabled by fileConfig(), as documented here. You can prevent this disabling, but only on recent Python versions. It's not generally good practice to call fileConfig() multiple times in a program, unless you have a specific need to do so. It's not forbidden, but it's not usual.
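For example, on Python 2.6 and later you can pass disable_existing_loggers=False so that loggers created before the second call keep working; a minimal sketch, assuming config_file points at a valid INI-style logging config:

import logging
import logging.config

logging.config.fileConfig(config_file)
logger1 = logging.getLogger("module1")

# Re-apply the configuration without disabling already-existing
# loggers that are not named in the config file.
logging.config.fileConfig(config_file, disable_existing_loggers=False)

logger1.info("logger1 still emits records after the second call")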
A common usage involves configuring handlers on the root logger and perhaps one or two top-level loggers; does this apply to you?
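If it does, the config file only has to name those loggers; a minimal sketch of such an INI file (the handler, formatter and logger names below are purely illustrative):

[loggers]
keys=root,module1

[handlers]
keys=console

[formatters]
keys=simple

[logger_root]
level=WARNING
handlers=console

[logger_module1]
level=DEBUG
handlers=console
qualname=module1
propagate=0

[handler_console]
class=StreamHandler
level=DEBUG
formatter=simple
args=(sys.stderr,)

[formatter_simple]
format=%(asctime)s %(name)s %(levelname)s %(message)s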
Related
I want to initialise the logging config using a config file (JSON or YAML) only once, when I call my main module.
Is there a concept of context in Python, like we have in Java, where I can take the logger from the config whenever I need it?
Something like this in the main module -
logging.config.fileConfig('log-conf.json')
I want to use the loaded config in my entire application without having to load the config in each module.
Also, should I do log = logging.getLogger(__name__) at module level or method level? What is the advantage of doing it at method level?
This blog post of mine covers most of the answers to your question (it uses YAML).
http://glenfant.github.io/the-zen-of-logging-and-yaml.html
You might also find inspiration in this recipe, if you prefer a pure Python config file.
http://glenfant.github.io/simple-customizable-configuration.html
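If you go the YAML route from the blog post, the usual pattern is to load the file once in your main module and hand the resulting dict to dictConfig; a minimal sketch, assuming PyYAML is installed and logging.yaml is your config file:

# main.py - configure logging exactly once, at startup
import logging.config
import yaml  # PyYAML

with open("logging.yaml") as f:
    logging.config.dictConfig(yaml.safe_load(f))

# any other module - no configuration here, just grab a logger
import logging
log = logging.getLogger(__name__)

def do_work():
    log.info("uses whatever handlers/levels main.py configured")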
I want to improve my understanding of how to use logging correctly in Python. I want to use an .ini file to configure it, and what I want to do is:
define basic logger config through .fileConfig(...) in some .py file
import logging, call logger = logging.getLogger(__name__) across the app, and be sure that it uses the config file that was loaded earlier in a different .py file
I read a few resources on the Internet, of course, but they describe tricks for how to configure it, etc. What I want to understand is whether .fileConfig works across the whole app, or only for the file/module where it was called.
Looks like I missed some small tip or something like that.
It works across the whole app. Be sure to configure the correct loggers in the config. logger = logging.getLogger(__name__) works well if you know how to handle having a different logger in every module, otherwise you might be happier just calling logger = logging.getLogger("mylogger") which always gives you the same logger. If you only configure the root logger you might even skip that and simply use logging.info("message") directly.
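A quick sketch of those options (the file and logger names here are just illustrative):

import logging
import logging.config

logging.config.fileConfig("logging.ini")  # run once, e.g. in your entry point

# Option 1: a logger per module - names follow the package hierarchy
log = logging.getLogger(__name__)   # e.g. "mypkg.db" inside mypkg/db.py
log.info("per-module logger")

# Option 2: one shared, explicitly named logger used everywhere
log = logging.getLogger("mylogger")
log.info("the same logger object from every module")

# Option 3: root logger only - fine if that is all you configured
logging.info("goes straight to the root logger")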
I have a problem with logging in my Python script. I run the same script multiple times (to run several simulations), using Pool for increased performance. In my script I'm using a logger with a MemoryHandler, defined as below:
import logging
import logging.handlers

capacity = 5000000000
filehandler_name = SOME_NAME  # placeholder: path of this simulation's log file

logger = logging.getLogger(logger_name)  # logger_name is set elsewhere
logger.setLevel(logging.DEBUG)
filehandler = logging.FileHandler(filehandler_name)
memoryhandler = logging.handlers.MemoryHandler(
    capacity=capacity,
    flushLevel=logging.ERROR,
    target=filehandler,
)
logger.addHandler(memoryhandler)
and I log information using logger.info(...). However, I noticed that the logging is not always working. When I check the different log files (I have one log file per simulation), some log files contain data and others are empty. There is no particular pattern to which are empty and which are not; it usually happens at random. I have tried many things, but it seems like I'm missing something. Does anyone have any suggestion on why the Python logger might not always work correctly?
Without a fuller code snippet I would guess it is caused by the multiprocessing you mention:
using Pool for increased performance.
You can check the official documentation on how to use the logging module with multiprocessing.
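One pattern from that documentation, adapted loosely to the Pool case: have each worker log through a QueueHandler into a shared queue, and let the parent process drain the queue into the real FileHandler via a QueueListener. A rough sketch, not tested against the simulation code above (run_simulation and the file name are placeholders, and this funnels everything into one file rather than one file per simulation):

import logging
import logging.handlers
import multiprocessing

def worker_init(queue):
    # Each worker process sends its records to the shared queue
    # instead of writing to the log file directly.
    root = logging.getLogger()
    root.setLevel(logging.DEBUG)
    root.handlers[:] = [logging.handlers.QueueHandler(queue)]

def run_simulation(n):
    logging.getLogger(__name__).info("simulation %s running", n)

if __name__ == "__main__":
    queue = multiprocessing.Manager().Queue(-1)
    filehandler = logging.FileHandler("simulations.log")
    listener = logging.handlers.QueueListener(queue, filehandler)
    listener.start()
    with multiprocessing.Pool(4, initializer=worker_init, initargs=(queue,)) as pool:
        pool.map(run_simulation, range(10))
    listener.stop()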
I'm writing a large hardware simulation library in Python3. For logging, I use the Python3 Logging module.
For controlling debug messages with method-level granularity, I learned "on the street" (ok, here at StackOverflow) to create sub-loggers within each method I wanted to log from:
import logging

logger = logging.getLogger(__name__)
sub_logger = logger.getChild("new_sublogger_name")  # getChild is a Logger method
sub_logger.setLevel(logging.DEBUG)

# Sample debug message
sub_logger.debug("This is a debug message...")
By changing the call to setLevel(), the user is able to enable/disable debugging messages on a per-method basis.
Now the Boss Man doesn't like this approach. He's advocating a single point at which all logging messages in the library can be enabled/disabled with the same method-level granularity. (This was to be accomplished by writing our own Python logging library, BTW.)
Not wanting to re-invent the logging wheel, I proposed to continue using the Python logging library, but to use Filters to allow single-point control of logging messages.
I haven't used Python logging Filters very often, so: is there a consensus on using Filters vs. sub-logger setLevel() for this application? What are the pros/cons of each method?
I'm quite used to setLevel() after using it for a while, but that may be coloring my objectivity. I DO NOT, however, wish to waste everyone's time writing another Python logging library.
I think the existing logging module does what you want. The trick is to separate the place where you call setLevel() (a configuration operation) from the places where you call getChild() (ongoing logging operations).
import logging

logger = logging.getLogger('mod1')

def fctn1():
    logger.getChild('fctn1').debug('I am chatty')
    # do stuff (notice, no setLevel)

def fctn2():
    logger.getChild('fctn2').debug('I am even more chatty')
    # do stuff (notice, no setLevel)
Notice there was no setLevel() there, which makes sense. Why call setLevel() every time, and since when does a method know what logging level the user wants?
You set your logging levels in a configuration step at the beginning of the program. You can do it with the dictionary-based configuration, a Python module that does a bunch of setLevel() calls, or even something you cook up with .ini files or whatever. But basically it boils down to:
def config_logger():
    logging.getLogger('abc.def').setLevel(logging.INFO)
    logging.getLogger('mod1').setLevel(logging.WARN)
    logging.getLogger('mod1.fctn1').setLevel(logging.DEBUG)
    # etc...
Now, if you want to get fancy with filters, you can use them to inspect the stack frame and pull the method name out for you. But that gets more complicated.
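If you do end up experimenting with filters, here is a sketch of what per-method control could look like: a Filter attached to a handler that only passes records from an allow-list of function names (MethodFilter and allowed_functions are illustrative, not a standard API):

import logging

class MethodFilter(logging.Filter):
    """Only pass records emitted from the given function names."""
    def __init__(self, allowed_functions):
        super().__init__()
        self.allowed_functions = set(allowed_functions)

    def filter(self, record):
        # record.funcName is already filled in by the logging machinery,
        # so no manual stack-frame inspection is needed for this simple case.
        return record.funcName in self.allowed_functions

handler = logging.StreamHandler()
handler.addFilter(MethodFilter({"fctn1"}))
logging.getLogger("mod1").addHandler(handler)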
I am setting up my Python logging in main.py via reading in a file and using the fileConfig option. I want to be able to switch between testing and live logging configurations, so I want to read in a separate config file first and extract the logging config file path from there.
The problem here is that other files that I import from main.py grab their own logger via log = getLogger(__name__), and this happens at import time. These loggers then get broken when the new configuration is loaded, and those modules end up without logging working the way I expect.
I can't easily delay the importing of these modules without a lot of refactoring, so is there any other way of being able to keep this method of setting up loggers by module name while still loading in the log configuration later?
I'm not sure from your question exactly how things are breaking, but here's how I see it. The various modules which do log = logging.getLogger(__name__) will have valid names for their loggers (the logger name is the module's dotted name), unless you were to somehow actually move the modules to some other package location.
At import time, the logging configuration may or may not have been set, and there shouldn't be any actual logging calls made as a side-effect of the import (if there are, the messages may have nowhere to go).
Loading a new configuration using fileConfig typically just sets handlers, formatters and levels on loggers.
When you subsequently call code in the imported modules, they log via their loggers, which have handlers attached by your previous configuration call - so they will output according to the configuration.
You should be aware that on older versions of Python (<= 2.5), calls to fileConfig would unavoidably disable existing loggers which weren't named in the configuration - in more recent versions of Python (>= 2.6), this is configurable using a disable_existing_loggers=False keyword argument passed to fileConfig. You may want to check this, as it sometimes leads to unexpected behaviour (the default for that parameter is True, for compatibility with behaviour under the older Python versions).
If you post more details about what seems broken, I might be able to provide a better diagnosis of what's going on.