I have found a lot of documentation and tutorials, such as the official logging config docs, the official logging cookbook, and this nice tutorial by Fang.
Each of them has gotten me near to an answer, but not quite. My question is this:
When using Config Files, how can I use a logger with 2 separate handlers at 2 separate levels?
To clarify, here is an example of my YAML file:
---
version: 1
handlers:
  debug_console:
    class: logging.StreamHandler
    level: DEBUG
    .
    .
    .
  info_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: INFO
    .
    .
    .
loggers:
  dev:
    handlers: [debug_console, info_file_handler]
  test:
    handlers: [info_file_handler]
root:
  handlers: [info_file_handler]
I want to have two ways to run the logger, where one way (dev) is more verbose than the other. Moreover, when running the dev logger, I want it to have two different levels for the two different handlers.
This is a snippet of the code to try to launch the logger:
import logging.config
import yaml

with open('logging.yaml', 'r') as f:
    log_cfg = yaml.safe_load(f.read())
logging.config.dictConfig(log_cfg)
my_logger = logging.getLogger('dev')
The dictConfig line above works correctly. I say this because when I get to the code which asks to log to the console, I will see dev as the name when the log prints out. (I have edited the yaml, but it contains %(name)s in the format.)
But there is something wrong with my_logger. Even though it is tied to the name of dev, none of the rest of the attributes seem to have been set. Specifically, I see:
>>> my_logger
<Logger dev_model (WARNING)>
I don't know the logging module well enough to understand where the problem is. What I want is:
When I activate the 'dev' logger, I want to launch 2 handlers, one which is at the DEBUG level and writes to console, the other which is at the INFO level and writes to a file.
How can this be done?
If I understand the question correctly, the problem is caused by the fact that the logger itself has a log level, not just the handlers. A logger's level defaults to WARNING, which seems to be what is set on your logger. If a message has a lower priority than the logger's level, it never even makes it to the handlers.
So try setting the logger's level to DEBUG. info_file_handler will still ignore any messages more verbose than its own level.
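In the YAML above, that would mean giving the dev logger an explicit level (a minimal sketch; only the level line is new, everything else is from your config):

loggers:
  dev:
    level: DEBUG
    handlers: [debug_console, info_file_handler]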
As for this part:
none of the rest of the attributes seem to have been set.
What happens there is that the logger's repr() method is called to convert the Logger into a string representation for display, and repr() is not guaranteed to show all the attributes of the object.
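A quick way to convince yourself of that (a sketch using a hypothetical 'example' logger; the exact repr strings vary slightly between Python versions):

import logging

lg = logging.getLogger('example')
lg.addHandler(logging.StreamHandler())
print(repr(lg))                # <Logger example (WARNING)> - only name and effective level
print(lg.handlers)             # e.g. [<StreamHandler <stderr> (NOTSET)>] - the handler is there
print(lg.getEffectiveLevel())  # 30, i.e. WARNING inherited from the root logger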
Such a long question... Too long for me to understand well.
But I think you misunderstand how handlers work. The logger itself doesn't actually output anything; the handlers do.
So if you set DEBUG on the dev logger, it will pass records >= DEBUG to all of its handlers. The debug_console handler will then process records >= DEBUG, but info_file_handler will only process records >= INFO. Setting DEBUG on the dev logger won't make info_file_handler output records < INFO. So you can indeed have two separate levels: one >= DEBUG that goes to the console, and another >= INFO that goes to the file.
I am presuming I understand you correctly...
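Here is the same idea in plain Python without the config file (a minimal sketch; the file name and the plain FileHandler stand in for your RotatingFileHandler):

import logging

dev_logger = logging.getLogger('dev')
dev_logger.setLevel(logging.DEBUG)       # the logger must let DEBUG records through

console = logging.StreamHandler()
console.setLevel(logging.DEBUG)          # console shows everything >= DEBUG
dev_logger.addHandler(console)

file_handler = logging.FileHandler('dev.log')
file_handler.setLevel(logging.INFO)      # the file only gets >= INFO
dev_logger.addHandler(file_handler)

dev_logger.debug('console only')
dev_logger.info('console and file')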
Related
I have a basic config for the logging module with debug level - now I want to create another logger with error level only. How can I do that?
The problem is that the root handler is called in addition to the error-handler - this is something I want to avoid.
import logging
fmt = '%(asctime)s:%(funcName)s:%(lineno)d:%(levelname)s:%(name)s:%(message)s'
logging.basicConfig(level=logging.DEBUG, format=fmt)
logger = logging.getLogger('Temp')
logger.setLevel(logging.ERROR)
handler = logging.StreamHandler()
handler.setLevel(logging.ERROR)
logger.addHandler(handler)
logger.error('boo')
The above code prints boo twice, while I expect it only once, and I have no idea what to do with this annoying issue...
In [4]: logger.error('boo')
boo
2021-04-26 18:54:24,329:<module>:1:ERROR:Temp:boo
In [5]: logger.handlers
Out[5]: [<StreamHandler stderr (ERROR)>]
Some basics about the logging module
logger: a person who receives the log string, sorts it by a predefined level, uses his own handlers (if any) to process the log and, by default, passes the log up to his superior.
root logger: the superior of superiors; he does everything a normal logger does but doesn't pass the received log to anyone else.
handler: a private contractor of a logger, who actually does something with the log, e.g. formats it, writes it to a file or to the console, or sends it over tcp/udp.
formatter: a theme, a design that the handler applies to the log.
basicConfig: a shortcut to configure the root logger. This is useful when you want him to do all the work and have all his lower-rank loggers simply pass the log up to him.
With no arguments, basicConfig sets the root logger's level to WARNING and adds a StreamHandler that outputs the log to stderr.
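To see those defaults in action (a minimal sketch):

import logging

logging.basicConfig()                  # root at WARNING, StreamHandler to stderr
logging.warning('this is printed')     # WARNING >= WARNING, so it goes through
logging.info('this is filtered out')   # INFO < WARNING, dropped by the root logger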
What you did
1. You created a format and used the basicConfig shortcut to configure the root logger. You want the root logger to do all the actual logging.
2. You created a new low-rank logger, Temp.
3. You want it to accept logs at ERROR level and above only.
4. You created another StreamHandler, which outputs to stderr by default.
5. You want it to handle only ERROR level and above.
6. Oh, you assigned it to the Temp logger, which makes 5 redundant since the level is already set in 3.
7. Oh wait, I thought you just wanted the root logger to do the job, as per 1!
8. You logged an ERROR with your logger.
What happened
Your Temp logger accepted the string boo at ERROR level, then told its handler to process the string. Since this handler didn't have any formatter assigned, it output the string as-is to stderr: boo
After that, the Temp logger passed the string boo up to its superior, the root logger.
The root logger accepted the log, since the log level is ERROR > WARNING.
The root logger then told its handler to process the string boo.
That handler applied the format string to boo: it added the timestamp, the location, the name of the logger that passed the log, etc.
Finally it output the result to stderr: 2021-04-26 18:54:24,329:<module>:1:ERROR:Temp:boo
Make it right
Since your code does exactly what you tell it to do, you have to tell it in as much detail as possible.
Only use basicConfig when you are lazy. By removing the basicConfig line, your problem is solved.
Use logger = logging.getLogger(__name__) (no quotes) so that the logger gets the name of the module. You can then look at a log line and know exactly which import path it came from.
Decide whether a logger should keep the log to itself or pass it up the chain, using the propagate property. In your case, logger.propagate = False also solves the problem.
Use a dictConfig file so you don't clutter your code with logging configuration.
In practice, you should not add handlers to your own loggers; just let each logger pass the log all the way up to the root and let the root do the actual logging (see the sketch after this list). Why?
Someone else who uses your code as a module can then control the logging: for example, output not to stdout but over tcp/udp, output with a different format, etc.
You can turn off the logging from a specific logger entirely, by setting propagate = False.
If you only add handlers and formatters to the root logger, you know exactly which ones exist in the code. You have centralized control over the logging.
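Applied to your code, that advice boils down to this (a minimal sketch; basicConfig stays, but the extra handler goes away):

import logging

fmt = '%(asctime)s:%(funcName)s:%(lineno)d:%(levelname)s:%(name)s:%(message)s'
logging.basicConfig(level=logging.DEBUG, format=fmt)  # configure the root once

logger = logging.getLogger(__name__)  # no handlers of its own
logger.setLevel(logging.ERROR)        # only ERROR and above leave this logger
logger.error('boo')                   # printed once, formatted by the root's handler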
To set up logging in Python without basicConfig we would go through the steps:
Set up a file handler.
Set the logging level of the file handler.
Set up a formatter.
Point the file handler to the formatter.
Get the logger object.
Set the logging level of the logger object.
Add the file handler as a handler to the logger object.
Use the .info(), .warning(), etc method on the logger.
These steps are executed by the following code:
import logging
file_handler = logging.FileHandler('./out.log', 'a')
file_handler.setLevel(logging.DEBUG)
format_string = '%(asctime)s\t%(levelname)s: %(message)s'
formatter = logging.Formatter(format_string)
file_handler.setFormatter(formatter)
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logger.addHandler(file_handler)
logger.info('visible info')
logger.debug('invisible debug')
What is the difference between setting the logging level for the file handler and setting the logging level for the logger?
Okay, so here is a small bit of code to work out:
import logging

# Declare a function to log all 5 levels with different information
def log_all_levels(logger):
    logger.debug("Debug from logger {}".format(logger.name))
    logger.info("Info from logger {}".format(logger.name))
    logger.warning("Warning from logger {}".format(logger.name))
    logger.error("Error from logger {}".format(logger.name))
    logger.critical("Fatal from logger {}".format(logger.name))

# This file handler will track errors from all loggers
all_errors_handler = logging.FileHandler('errors.log')
all_errors_handler.setLevel(logging.ERROR)

# This file handler will only be used in a specific region of code
foo_info_handler = logging.FileHandler('foo_info.log')
foo_info_handler.setLevel(logging.INFO)
foo_info_handler.addFilter(lambda r: r.levelno == logging.INFO)

# The following loggers will be used in the main execution
foo_logger = logging.getLogger("Foo")
nameless_logger = logging.getLogger("nameless")
foo_logger.setLevel(logging.INFO)
nameless_logger.setLevel(logging.DEBUG)
loggers = (foo_logger, nameless_logger)

# Set each logger up to use the file handlers
# Each logger can have many handlers, each handler can be used by many loggers
for logger in loggers:
    logger.addHandler(all_errors_handler)
    debug_file_handler = logging.FileHandler('{}.log'.format(logger.name))
    debug_file_handler.setLevel(logging.DEBUG)
    logger.addHandler(debug_file_handler)
    if logger.name == "Foo":
        logger.addHandler(foo_info_handler)

# Let's run some logging operations
for logger in loggers:
    log_all_levels(logger)
There are 2 loggers - foo_logger set to the INFO level and nameless_logger set to the DEBUG level. Both of them use the errors and debug handlers; however, only foo_logger uses foo_info_handler. There are now loggers and file handlers with different levels, connected together in a many-to-many relationship.
As you can find out:
errors.log will contain the errors from both loggers. Quite self-explanatory for a real-life scenario - reading through logs containing just the errors helps when debugging the code.
Foo.log and nameless.log will contain everything possible about those loggers, respecting their levels. So the former will contain INFO and greater, whereas the latter will track DEBUG and greater. Logging per object can create a lot of files, but it may be crucial when trying to track down object-specific errors.
foo_info.log is a very special file: its handler only allows the INFO level from the associated logger. Such files can be a lifesaver when you enter a potentially unsafe or untested area of code and would like to see exactly what is happening within that block, without having to browse through the whole program log.
There are many other things you can do with logging - set up your own logging rules, make a logging hierarchy, create a logger factory - possibilities are endless. Logging should allow flexibility - for example by allowing logger objects and file handlers to have different and separate logging levels, and letting the programmer combine them together as needed.
I hope the small code exercise, alongside my explanations, clears up any remaining doubts - but I do recommend having a look at the Logging Cookbook or the docs if you still need more examples.
I set up a basic Python logger that writes to a log file and to stdout. When I run my program locally, log messages from logging.info appear as expected in the file and in the console. However, when I run the same program remotely via ssh -n user@server python main.py, neither the console nor the file shows any logging.info messages.
This is the code used to set up the logger:
import logging
import time

def setup_logging(model_type, dataset):
    file_name = dataset + "_" + model_type + time.strftime("_%Y%m%d_%H%M")
    logging.basicConfig(
        level=logging.INFO,
        format="[%(levelname)-5.5s %(asctime)s] %(message)s",
        datefmt='%H:%M:%S',
        handlers=[
            logging.FileHandler("log/{0}.log".format(file_name)),
            logging.StreamHandler()
        ])
I already tried the following things:
Sending a message to logging.warning: those appear as expected on the root logger. However, even without setting up the logger and falling back to the default, logging.info messages do not show up.
The file and folder permissions seem to be alright, and an empty file is created on disk.
Using print works as usual as well.
If you look into the source code of the basicConfig function, you will see that the configuration is applied only when there are no handlers on the root logger:
_acquireLock()
try:
    force = kwargs.pop('force', False)
    if force:
        for h in root.handlers[:]:
            root.removeHandler(h)
            h.close()
    if len(root.handlers) == 0:
        handlers = kwargs.pop("handlers", None)
        if handlers is None:
            ...
I think one of the libraries you use configures logging on import. And as you can see from the snippet above, one of the solutions is to use the force=True argument.
A possible disadvantage is that several popular data-science libraries keep a reference to the loggers they configure, so when you reconfigure logging yourself, their old loggers with the old handlers are still there and do not see your changes. In that case you will need to clean out the handlers on those loggers as well.
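For example, applied to the setup_logging code above (a minimal sketch; force requires Python 3.8+, and the file name here is made up):

import logging

# force=True removes any handlers that an imported library has already
# attached to the root logger before applying this configuration.
logging.basicConfig(
    level=logging.INFO,
    format="[%(levelname)-5.5s %(asctime)s] %(message)s",
    datefmt='%H:%M:%S',
    handlers=[
        logging.FileHandler("log/run.log"),
        logging.StreamHandler()
    ],
    force=True)

logging.info("this now shows up even if a library configured logging first")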
I'm writing a program with some fairly complicated configuration settings (with options from the environment, from the command line, and from a couple of possible configuration files). I'd like to enable very verbose "debug" level logging to memory during all this configuration reading and initialization --- and then I'd like to selectively dump some, possibly all, of that into their final logging destinations based on some of those configuration settings, command line switches and environment values.
Does anyone here know of a good open source example I could look at where someone's already done this sort of thing?
It looks like logging.handlers.MemoryHandler should be able to do this... initially with target=None, then calling the .setTarget() method after the configuration is parsed. But the question is: can I set the MemoryHandler's log level to DEBUG, then set a target with a different (usually less verbose) log level, and then simply .flush() the MemoryHandler into this other handler (effectively throwing away all of the overly verbose entries in the process)?
That won't quite work, because MemoryHandler's flush() method doesn't check levels before sending them to the target - all buffered records are sent. However, you could use a filter on the target handler, as in this example:
import logging, logging.handlers

class WarningFilter(logging.Filter):
    def filter(self, record):
        return record.levelno >= logging.WARNING

logger = logging.getLogger('foo')
mh = logging.handlers.MemoryHandler(1000)
logger.setLevel(logging.DEBUG)
logger.addHandler(mh)
logger.debug('bar')
logger.warning('baz')
sh = logging.StreamHandler()
sh.setLevel(logging.WARNING)
sh.addFilter(WarningFilter())
mh.setTarget(sh)
mh.flush()
When run, you should just see baz printed.
I'm logging events in my Python code using the logging module. I have 2 log files I wish to log to, one to contain user information and the other a more detailed log file for devs. I've set the two log files to the levels I want (usr.log = INFO and dev.log = ERROR), but I can't work out how to restrict the logging to the usr.log file so that only INFO level logs are written to it, as opposed to INFO plus everything above it, e.g. INFO, WARNING, ERROR and CRITICAL.
This is basically my code:-
import logging
logger1 = logging.getLogger('')
logger1.addHandler(logging.FileHandler('/home/tmp/usr.log'))
logger1.setLevel(logging.INFO)
logger2 = logging.getLogger('')
logger2.addHandler(logging.FileHandler('/home/tmp/dev.log'))
logger2.setLevel(logging.ERROR)
logging.critical('this to be logged in dev.log only')
logging.info('this to be logged to usr.log and dev.log')
logging.warning('this to be logged to dev.log only')
Any help would be great thank you.
I am in general agreement with David, but I think more needs to be said. To paraphrase The Princess Bride - I do not think this code means what you think it means. Your code has:
logger1 = logging.getLogger('')
...
logger2 = logging.getLogger('')
which means that logger1 and logger2 are the same logger, so when you set the level of logger2 to ERROR you actually end up setting the level of logger1 at the same time. In order to get two different loggers, you would need to supply two different logger names. For example:
logger1 = logging.getLogger('user')
...
logger2 = logging.getLogger('dev')
Worse still, you are calling the logging module's critical(), info() and warning() methods and expecting that both loggers will get the messages. This only works because you used the empty string as the name for both logger1 and logger2 and thus they are not only the same logger, they are also the root logger. If you use different names for the two loggers as I have suggested, then you'll need to call the critical(), info() and warning() methods on each logger individually (i.e. you'll need two calls rather than just one).
What I think you really want is to have two different handlers on a single logger. For example:
import logging
mylogger = logging.getLogger('mylogger')
handler1 = logging.FileHandler('usr.log')
handler1.setLevel(logging.INFO)
mylogger.addHandler(handler1)
handler2 = logging.FileHandler('dev.log')
handler2.setLevel(logging.ERROR)
mylogger.addHandler(handler2)
mylogger.setLevel(logging.INFO)
mylogger.critical('A critical message')
mylogger.info('An info message')
Once you've made this change, then you can use filters as David has already mentioned. Here's a quick sample filter:
class MyFilter(object):
    def __init__(self, level):
        self.__level = level

    def filter(self, logRecord):
        return logRecord.levelno <= self.__level
You can apply the filter to each of the two handlers like this:
handler1.addFilter(MyFilter(logging.INFO))
...
handler2.addFilter(MyFilter(logging.ERROR))
This will restrict each handler to only write out log messages at the level specified.
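As a quick check of where messages land with those filters in place (continuing the snippets above, so mylogger, handler1 and handler2 are assumed to already exist):

# usr.log: handler1 level INFO, filter <= INFO   -> exactly INFO records
# dev.log: handler2 level ERROR, filter <= ERROR -> exactly ERROR records
mylogger.info('written to usr.log only')
mylogger.error('written to dev.log only')
# Note: CRITICAL records would be dropped by both filters with these thresholds.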
First: this is a rather odd thing to want to do, and strikes me as a slight misuse of the logging system. I can't imagine any situation in which it makes sense to notify the user about the normal operation of the program but not about things that are more important. The logging levels should be used to indicate importance; if you have messages that are only of interest to developers, you should be using some other mechanism to distinguish them (such as which logger you send them to).
That being said, you can implement arbitrary filtering of log records by creating a Filter subclass whose filter method implements your desired criteria, and installing it on the appropriate handler.