Python logging creating extra log files

I am trying to use logging to create log files for a program. I'm doing something like this:
if not os.path.exists(r'.\logs'):
    os.mkdir(r'.\logs')

logging.basicConfig(filename=rf'.\logs\log_{time.ctime().replace(":", "-").replace(" ", "_")}.log',
                    format='%(asctime)s %(name)s %(levelname)s %(message)s',
                    level=logging.DEBUG)

def foo():
    # do stuff ...
    logging.debug('Done some stuff')
    # do extra stuff ...
    logging.debug('Did extra stuff')
    # some parallel map that does NOT use logging in the mapping function
    logging.debug('Done mapping')

if __name__ == '__main__':
    foo()
All goes well and the log is created with the correct information in it:
logs
log_Wed_Feb_14_09-23-32_2018.log
Except that at the end, for some reason, it also creates 2 additional log files and leaves them empty:
logs
log_Wed_Feb_14_09-23-32_2018.log
log_Wed_Feb_14_09-23-35_2018.log
log_Wed_Feb_14_09-23-39_2018.log
The timestamps are only a few seconds apart, but all of the logging still only goes in the first log file as it should.
Why is it doing this? And is there a way to stop it from creating the extra empty files, aside from just deleting any empty logs at the end of the program?

Solved. Kind of.
The behaviour using basicConfig kept happening, so I tried to make a custom logger class:
class Logger:
    """Class used to encapsulate logging logic."""

    __slots__ = ['dir', 'level', 'formatter', 'handler', 'logger']

    def __init__(self,
                 name: str = '',
                 logdir: str = r'.\logs',
                 lvl: int = logging.INFO,
                 fmt: str = '%(asctime)s %(name)s %(levelname)s %(message)s',
                 hdl: str = rf'.\logs\log_{time.ctime().replace(":", "-").replace(" ", "_")}.log'):
        print('construct')
        if not os.path.exists(logdir):
            os.mkdir(logdir)
        self.dir = logdir
        self.level = lvl
        self.formatter = logging.Formatter(fmt=fmt)
        self.handler = logging.FileHandler(filename=hdl)
        self.handler.setFormatter(self.formatter)
        self.logger = logging.getLogger(name)
        self.logger.setLevel(self.level)
        self.logger.addHandler(self.handler)

    def log(self, msg: str):
        """Logs the given message at the logger's configured level."""
        self.logger.log(self.level, msg)

    def cleanup(self):
        """Iterates through the root level of the log folder, removing all log files that have a size of 0."""
        for log_file in (rf'{self.dir}\{log}' for log in next(os.walk(self.dir))[2]
                         if log.endswith('.log') and os.path.getsize(rf'{self.dir}\{log}') == 0):
            os.remove(log_file)

    def shutdown(self):
        """Prepares and executes the shutdown and cleanup actions."""
        logging.shutdown()
        self.handler.close()
        self.cleanup()
And tried to pass it as a parameter to functions like this:
def foo(logger=Logger('foo_logger')):
But this approach made it construct a whole new logger each time I called the log method, which again led to multiple files. By using one instance of Logger and defaulting the arguments to None, I solved the problem of multiple files for this case.
However, the initial basicConfig situation remains a mystery.
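One plausible explanation (an assumption on my part, since the parallel map isn't shown): if the map uses multiprocessing on Windows, each worker process is started by re-importing the main module, which re-runs the module-level basicConfig call a few seconds later and creates a fresh, never-written log file per worker. The if __name__ == '__main__' guard protects foo(), but not module-level code. A minimal sketch of that effect (the pool size of 2 is hypothetical):

import logging
import multiprocessing
import time

# Module level: re-executed in EVERY spawned worker process, so each worker
# opens its own (never written to) log file a few seconds later.
logging.basicConfig(filename=f'log_{time.time()}.log', level=logging.DEBUG)

def square(x):  # does not log, so the workers' files stay empty
    return x * x

if __name__ == '__main__':
    with multiprocessing.Pool(2) as pool:  # 2 workers -> 2 extra empty files
        pool.map(square, range(10))
    logging.debug('Done mapping')  # goes to the parent process's file only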


Python log time when code kicks off and ends

I am writing a logger module to capture a few details of my code, and one of the things I want to capture is the time the main code started and the time it ended. I want to incorporate this into the logger module so that I don't have to write those lines in each of my scripts. That way, after I initialize my logger object, I can just call the get_time function at the beginning and end of main. I am practicing OOP, which I am new to. Is incorporating a function within the class CustomLogger (as in def get_time() below) the right way to do it, or am I doing it wrong?
def __init__(self, loglevel=logging.INFO):
    '''log config

    Parameters:
        loglevel (str): The default level to log. Valid values are 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'. Defaults to 'INFO'.
    '''
    self.loglevel = loglevel

def get_time(self):
    return datetime.now().strftime("%d/%m/%Y %H:%M:%S")

def custom_logger(self):
    # inspect.stack()[1][3] gets the name of the module that calls the logger,
    # in case we want multiple log calls from the ETL
    logger = logging.getLogger(inspect.stack()[1][3])
    logger.setLevel(self.loglevel)
    workstation_logs = Path('path_to_write/workstation_logger.log')
    log_message_format = logging.Formatter('%(asctime)s %(filename)s %(name)s - %(levelname)s: %(message)s',
                                           datefmt='%Y-%m-%d %H:%M:%S')
    file_handler = logging.FileHandler(workstation_logs)
    file_handler.setFormatter(log_message_format)
    logger.addHandler(file_handler)
    return logger
# end logger initiation
So my main code will look like this:
log = CustomLogger()
start = log.get_time()
# does a bunch of things
end = log.get_time()
log.custom_logger().info(f'started_at: {start}\t ended_at: {end}')
Expected output (workstation_log):
2022-03-21 09:00:52 1608643300.py - INFO: Started at 3/21/2022 9:00:00 Ended at 3/21/2022 9:02:56
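Since no answer is recorded here, a hedged sketch of one common alternative: wrap the timing in a context manager so each main script doesn't need explicit start/end bookkeeping (timed_run is an illustrative name, not from the post):

import logging
from contextlib import contextmanager
from datetime import datetime

@contextmanager
def timed_run(logger: logging.Logger):
    """Log the start and end time of the wrapped block in one message."""
    started_at = datetime.now()
    try:
        yield
    finally:
        ended_at = datetime.now()
        logger.info(f'started_at: {started_at:%d/%m/%Y %H:%M:%S}\t'
                    f'ended_at: {ended_at:%d/%m/%Y %H:%M:%S}')

# Usage with the CustomLogger from the question:
# with timed_run(CustomLogger().custom_logger()):
#     ...  # does a bunch of things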

Why does logger persist even when instance spawning it has been deleted?

There was a weird issue happening yesterday. I got it fixed but still don't understand why things were happening the way they did.
So I have a class with an instance of a logger getting initialised in __init__:
class Foo:
    def __init__(self, some_value):
        self.logger = self._initialise_logger()
        self.logger.info(f'Initialising an instance of {self.__class__.__name__}')
        self.value = some_value

    def _initialise_logger(self):
        # Setting up logger
        return logger
The class is meant to perform some calculations and at one stage I had to run it in a loop:
my_list = []

for m in my_list:
    f = Foo(m)
    f.calculate()
When this loop was running I started getting strange messages in the output. On the first run the messages would be normal, but on the second run they would be duplicated, then on the next run every logging message would appear three times and so on.
So I figured that somehow the instance of the class that spawned the logger might be persisting and the logger kept printing messages, so I decided I could just manually delete the instance once the calculations were completed and the issue would be gone:
my_list = []

for m in my_list:
    f = Foo(m)
    f.calculate()
    del f
That didn't work. In the end I fixed it by initialising an instance only once and then changing the value of the instance variable inside the loop:
my_list = []
f = Foo()

for m in my_list:
    f.value = m
    f.calculate()
This fixed the problem, but I still don't understand: how can a logger persist even when the instance that spawned it has been deleted?
EDIT:
def _initialise_logger(self):
    log_file = self._get_logging_filename()
    logger = logging.getLogger(__name__ + "." + self.__class__.__name__)
    logger.propagate = False
    logger.setLevel(logging.DEBUG)
    file_handler = logging.FileHandler(log_file)
    file_handler.setLevel(logging.DEBUG)
    file_formatter = logging.Formatter(fmt='%(asctime)s.%(msecs)04d,%(name)s,%(levelname)s,%(message)s',
                                       datefmt='%Y-%m-%d %H:%M:%S')
    screen_handler = logging.StreamHandler()
    screen_handler.setLevel(logging.DEBUG)
    screen_formatter = logging.Formatter(fmt='%(asctime)s.%(msecs)02d,%(levelname)s: -> %(message)s',
                                         datefmt='%Y-%m-%d %H:%M:%S')
    screen_handler.setFormatter(screen_formatter)
    file_handler.setFormatter(file_formatter)
    logger.addHandler(file_handler)
    logger.addHandler(screen_handler)
    logger.info(f'\n\n\n\n\nInitiating a {self.__class__.__name__} instance.')
    return logger
There are a lot of things wrong with your code, but for starters, let's take a look at the example below to get a better understanding of the logging module:
import logging

def make_logger(name):
    logger = logging.getLogger(name)
    sh = logging.StreamHandler()
    sh.setFormatter(logging.Formatter(fmt="%(message)s"))
    logger.addHandler(sh)
    return logger

logger_A = make_logger("A")
logger_A.warning("hi")
>>> hi

logger_B = make_logger("B")
logger_B.warning("bye")
>>> bye

logger_C1 = make_logger("C")
logger_C2 = make_logger("C")
logger_C1.warning("bonjour")
>>> bonjour
>>> bonjour
Notice we only get repeats when we use the same name for multiple loggers. This is because there can only be one instance of a logger with a given name, so if you call getLogger(name) with a name that already exists, it will just return the already existing logger object with that name. So, when we call our function make_logger twice with the same name, we're basically adding two different Stream Handlers to the same logger, which is why we see the double logging.
In your code, you construct the logger name using __name__ + "." + self.__class__.__name__. This produces a string that is exactly the same for every instance of your class. You could change that line of code to give a unique string for every instance, but this isn't really how you should be using the logging module.
I highly recommend reviewing this article to learn more about logging in Python.
Why not just use one logger declared globally in your module/file? If you need to include identifying information for specific instance, you can always just include it in the log message itself:
import logging

logger = # set up your logger here, after your imports, before your code

class Foo:
    def __init__(self, some_value):
        self.value = some_value
        logger.info(f'Initiating a {self.__class__.__name__} instance.')
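To make the "why does it persist" part concrete, here is a minimal sketch of my own (not from the answer above) showing that the logger lives in the logging module's process-wide registry, so deleting the instance that created it changes nothing:

import logging

class Foo:
    def __init__(self):
        # getLogger registers the logger in a registry keyed by name; its
        # lifetime is tied to the logging module, not to this object
        self.logger = logging.getLogger('demo')
        self.logger.addHandler(logging.StreamHandler())

f = Foo()
del f  # destroys the instance, but not the registered logger

# The same logger object, handler still attached, is handed back again:
print(len(logging.getLogger('demo').handlers))  # 1

f2 = Foo()  # adds a second handler to the SAME logger
print(len(logging.getLogger('demo').handlers))  # 2 -> duplicated output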

Change log-level via mocking

I want to change the log-level temporarily.
My current strategy is to use mocking.
with mock.patch(...):
    my_method_which_does_log()
All logging.info() calls inside the method should get ignored and not logged to the console.
How should I implement the ... so that logs of level INFO get ignored?
The code is single-process and single-thread and executed during testing only.
I want to change the log-level temporarily.
A way to do this without mocking is logging.disable
class TestSomething(unittest.TestCase):
    def setUp(self):
        logging.disable(logging.WARNING)

    def tearDown(self):
        logging.disable(logging.NOTSET)
This example suppresses messages of level WARNING and below for each test in the TestSomething class, so only ERROR and CRITICAL messages get through. (disable is called at the start of each test and reset at the end; this seems a bit cleaner than patching.)
To unset this temporary throttling, call logging.disable(logging.NOTSET):
If logging.disable(logging.NOTSET) is called, it effectively removes this overriding level, so that logging output again depends on the effective levels of individual loggers.
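A quick illustration of those semantics:

import logging
logging.basicConfig(format='%(levelname)s %(message)s', level=logging.DEBUG)

logging.disable(logging.WARNING)  # suppress WARNING and below
logging.warning('hidden')
logging.error('shown')            # ERROR is above WARNING, still gets through

logging.disable(logging.NOTSET)   # lift the override
logging.warning('shown again')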
I don't think mocking is going to do what you want. The loggers are presumably already instantiated in this scenario, and level is an instance variable for each of the loggers (and also any of the handlers that each logger has).
You can create a custom context manager. That would look something like this:
Context Manager
import logging

class override_logging_level():
    """A context manager for temporarily setting the logging level."""

    def __init__(self, level, process_handlers=True):
        self.saved_level = {}
        self.level = level
        self.process_handlers = process_handlers

    def __enter__(self):
        # Save the root logger
        self.save_logger('', logging.getLogger())
        # Iterate over the other loggers
        for name, logger in logging.Logger.manager.loggerDict.items():
            self.save_logger(name, logger)

    def __exit__(self, exception_type, exception_value, traceback):
        # Restore the root logger
        self.restore_logger('', logging.getLogger())
        # Iterate over the loggers
        for name, logger in logging.Logger.manager.loggerDict.items():
            self.restore_logger(name, logger)

    def save_logger(self, name, logger):
        # loggerDict may contain PlaceHolder entries, which carry no level
        if isinstance(logger, logging.PlaceHolder):
            return
        # Save off the level
        self.saved_level[name] = logger.level
        # Override the level
        logger.setLevel(self.level)
        if not self.process_handlers:
            return
        # Iterate over the handlers for this logger
        for handler in logger.handlers:
            # No reliable name. Just use the id of the object
            self.saved_level[id(handler)] = handler.level
            # Override the handler's level as well
            handler.setLevel(self.level)

    def restore_logger(self, name, logger):
        # It's possible that some intervening code added one or more loggers...
        if name not in self.saved_level:
            return
        # Restore the level for the logger
        logger.setLevel(self.saved_level[name])
        if not self.process_handlers:
            return
        # Iterate over the handlers for this logger
        for handler in logger.handlers:
            # Reconstruct the key for this handler
            key = id(handler)
            # Again, we could have possibly added more handlers
            if key not in self.saved_level:
                continue
            # Restore the level for the handler
            handler.setLevel(self.saved_level[key])
Test Code
# Setup for basic logging
logging.basicConfig(level=logging.ERROR)

# Create some loggers - the root logger and a couple of others
lr = logging.getLogger()
l1 = logging.getLogger('L1')
l2 = logging.getLogger('L2')

# Won't see these messages due to the level
lr.info("lr - msg 1")
l1.info("l1 - msg 1")
l2.info("l2 - msg 1")

# Temporarily override the level
with override_logging_level(logging.INFO):
    # Will see
    lr.info("lr - msg 2")
    l1.info("l1 - msg 2")
    l2.info("l2 - msg 2")

# Won't see, again...
lr.info("lr - msg 3")
l1.info("l1 - msg 3")
l2.info("l2 - msg 3")
Results
$ python ./main.py
INFO:root:lr - msg 2
INFO:L1:l1 - msg 2
INFO:L2:l2 - msg 2
Notes
The code would need to be enhanced to support multithreading; for example, logging.Logger.manager.loggerDict is a shared variable that's guarded by locks in the logging code.
Building on @cryptoplex's context-manager approach, here's the official version from the logging cookbook:
import logging
import sys

class LoggingContext(object):
    def __init__(self, logger, level=None, handler=None, close=True):
        self.logger = logger
        self.level = level
        self.handler = handler
        self.close = close

    def __enter__(self):
        if self.level is not None:
            self.old_level = self.logger.level
            self.logger.setLevel(self.level)
        if self.handler:
            self.logger.addHandler(self.handler)

    def __exit__(self, et, ev, tb):
        if self.level is not None:
            self.logger.setLevel(self.old_level)
        if self.handler:
            self.logger.removeHandler(self.handler)
        if self.handler and self.close:
            self.handler.close()
        # implicit return of None => don't swallow exceptions
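A usage sketch (my example, assuming a root logger left at its default WARNING level by basicConfig):

import logging

logging.basicConfig(format='%(levelname)s %(message)s')
logger = logging.getLogger('demo')

logger.info('dropped: INFO is below the effective WARNING level')
with LoggingContext(logger, level=logging.INFO):
    logger.info('shown: level lowered inside the context')
logger.info('dropped again: old level restored on exit')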
You could use dependency injection to pass the logger instance to the method you are testing. It is a bit more invasive, though, since you are changing your method's signature a little, but it gives you more flexibility.
Add the logger parameter to your method signature, something along the lines of:
def my_method(your_other_params, logger):
    pass
In your unit test file:
if __name__ == "__main__":
    # define the logger you want to use:
    logging.basicConfig(stream=sys.stderr)
    logging.getLogger("MyTests.test_my_method").setLevel(logging.DEBUG)
    ...

def test_my_method(self):
    test_logger = logging.getLogger("MyTests.test_my_method")
    # pass your logger to your method
    my_method(your_normal_parameters, test_logger)
python logger docs: https://docs.python.org/3/library/logging.html
I use this pattern to write all logs to a list. It ignores logs of level INFO and below.
import logging
from unittest import mock

logs = []

def my_log(logger_self, level, *args, **kwargs):
    if level > logging.INFO:
        logs.append((args, kwargs))

with mock.patch('logging.Logger._log', my_log):
    my_method_which_does_log()
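A hedged usage sketch of the same pattern inside a unittest test case (the logger name 'x' is illustrative):

import logging
import unittest
from unittest import mock

class TestLogCapture(unittest.TestCase):
    def test_info_ignored_warning_captured(self):
        logs = []

        def fake_log(logger_self, level, *args, **kwargs):
            if level > logging.INFO:
                logs.append((level, args))

        # Every Logger instance routes through Logger._log, so patching it
        # intercepts all loggers at once
        with mock.patch('logging.Logger._log', fake_log):
            logging.getLogger('x').info('ignored')
            logging.getLogger('x').warning('captured')

        self.assertEqual(len(logs), 1)

if __name__ == '__main__':
    unittest.main()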

Two writes using python logging

I have two files of classes with essentially the same set up of logging:
"""code - of 1 class the parent with mods from Reut's answer"""
logger = None
def __init__(self, verboseLevel=4):
'''
Constructor
'''
loggingLevels={1: logging.DEBUG,
2: logging.INFO,
3: logging.WARNING,
4: logging.ERROR,
5: logging.CRITICAL}
#debug(), info(), warning(), error(), critical()
if not tdoa.logger:
tdoa.logger=logging.getLogger('TDOA')
if (verboseLevel in range(1,6)):
logging.basicConfig(format='%(message)s',level=loggingLevels[verboseLevel])
else:
logging.basicConfig(format='%(levelname)s:%(message)s',level=logging.DEBUG)
tdoa.logger.critical("Incorrect logging level specified!")
self.logger = tdoa.logger
self.logger.debug("TDOA calculator using Newton's method.")
self.verboseLevel = verboseLevel
"""code of second "subclass" (with Reut's changes) (who's function is printing twice):"""
def __init__(self, verboseLevel=1, numberOfBytes=2, filename='myfile.log', ipaddr='127.0.0.1', getelset=True):
    # debug(), info(), warning(), error(), critical()
    # go through all this to know that only one logger is instantiated per class
    # Set debug level
    # set up various handlers (remove the std_err one for deployment unless you want them going to screen)
    # create console handler with a higher log level
    if not capture.logger:
        capture.logger = logging.getLogger('SatGeo')
    console = logging.StreamHandler()
    if verboseLevel in range(1, 6):
        console.setLevel(self.loggingLevels[verboseLevel])
        logging.basicConfig(format='%(message)s', level=self.loggingLevels[verboseLevel],
                            filename=filename, filemode='a')  # format='%(levelname)s:%(message)s'
    else:
        logging.basicConfig(format='%(message)s', level=logging.DEBUG,
                            filename=filename, filemode='a')
        console.setLevel(logging.DEBUG)
        capture.logger.critical("Incorrect logging level specified!")
    # create formatter and add it to the handlers
    # formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    # console.setFormatter(formatter)
    # add the handlers to the logger
    handlers = capture.logger.handlers
    if console not in handlers:
        capture.logger.addHandler(console)
    else:
        capture.logger.critical("not adding handler")
    self.logger = capture.logger
I have a function in the "called class (satgeo)" that 'writes' to the logger:
def printMyself(self, rowDict):
    ii = 1
    for res in rowDict:
        self.logger.critical('{0}************************************'.format(ii))
        ii += 1
        for key, value in res.items():
            self.logger.critical('    Name: {0}\t\t Value:{1}'.format(key, value))
When I call it by itself I get one output per self.logger call; but when I call it from the tdoa class it writes TWICE:
for example:
Name: actualLat Value:36.455444
Name: actualLat Value:36.455444
Any idea of how to fix this?
You are adding a handler to the parent class each time you construct a class instance using this line:
self.logger.addHandler(console)
So if you do something like:
for _ in range(x):
    SubClass1()

some_operation_with_logging()
You should be seeing x messages, since you just added x handlers to the logger by doing x calls to the parent's __init__.
You don't want to be doing that; make sure you add a handler only once!
You can access a logger's list of handlers using: logger.handlers.
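A minimal sketch of that add-only-once guard using logger.handlers (my own example; the logger name is illustrative):

import logging

def get_console_logger(name='TDOA'):
    logger = logging.getLogger(name)
    if not logger.handlers:  # only attach a handler the first time
        console = logging.StreamHandler()
        console.setFormatter(logging.Formatter('%(message)s'))
        logger.addHandler(console)
    return logger

# Repeated construction no longer multiplies output: each critical() call
# below prints exactly once instead of gaining a copy per iteration
for _ in range(3):
    get_console_logger().critical('printed once per call')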
Also, if you're using the same logger in both classes (named "TDOA") by using this line in both:
self.logger = logging.getLogger('TDOA')
Make sure you either synchronize the logger instantiation, or use separate loggers.
What I use:
Instead of having a private logger for each instance, you probably want a logger for all of them, or to be more precise - for the class itself:
class ClassWithLogger(object):
    logger = None

    def __init__(self):
        if not ClassWithLogger.logger:
            ClassWithLogger.logger = logging.getLogger("ClassWithLogger")
            ClassWithLogger.logger.addHandler(logging.StreamHandler())
        # log using ClassWithLogger.logger ...
        # convenience:
        self.logger = ClassWithLogger.logger
And now you know logger is instantiated once per class (instead of once per instance), and all instances of a certain class use the same logger.
I have since found a few links suggesting that submodules should take a logger as an input during the init:
def __init__(self, pattern=None, action=None, logger=None):
    # Set up logging for the class
    self.log = logger or logging.getLogger(__name__)
    self.log.addHandler(logging.NullHandler())
Note: the NullHandler is added to avoid a "no handlers could be found" warning if the user decides not to provide a logger.
Then, if you want to debug your submodule:
if __name__ == "__main__":
    log_level = logging.INFO
    log = logging.getLogger('cmdparser')
    log.setLevel(log_level)
    fh = logging.FileHandler('cmdparser.log')
    fh.setLevel(log_level)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(log_level)
    # create formatter and add it to the handlers
    # formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    formatter = logging.Formatter('%(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    ch.setFormatter(formatter)
    # add the handlers to the logger
    log.addHandler(fh)
    log.addHandler(ch)
    <myfunction>(pattern, action, log)
Then provide the log to the module at instantiation.
I hope this helps.

logger configuration to log to file and print to stdout

I'm using Python's logging module to log some debug strings to a file, which works pretty well. Now, in addition, I'd like to use this module to also print the strings out to stdout. How do I do this? To log my strings to a file I use the following code:
import logging
import logging.handlers

logger = logging.getLogger("")
logger.setLevel(logging.DEBUG)
handler = logging.handlers.RotatingFileHandler(
    LOGFILE, maxBytes=(1048576*5), backupCount=7
)
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
handler.setFormatter(formatter)
logger.addHandler(handler)
and then call a logger function like
logger.debug("I am written to the file")
Thank you for some help here!
Just get a handle to the root logger and add the StreamHandler. The StreamHandler writes to stderr. Not sure if you really need stdout over stderr, but this is what I use when I set up the Python logger, and I also add a FileHandler. Then all my logs go to both places (which is what it sounds like you want).
import logging
logging.getLogger().addHandler(logging.StreamHandler())
If you want to output to stdout instead of stderr, you just need to specify it to the StreamHandler constructor.
import sys
# ...
logging.getLogger().addHandler(logging.StreamHandler(sys.stdout))
You could also add a Formatter to it so all your log lines have a common header.
i.e.:
import logging

logFormatter = logging.Formatter("%(asctime)s [%(threadName)-12.12s] [%(levelname)-5.5s] %(message)s")
rootLogger = logging.getLogger()

fileHandler = logging.FileHandler("{0}/{1}.log".format(logPath, fileName))
fileHandler.setFormatter(logFormatter)
rootLogger.addHandler(fileHandler)

consoleHandler = logging.StreamHandler()
consoleHandler.setFormatter(logFormatter)
rootLogger.addHandler(consoleHandler)
This prints in the format:
2012-12-05 16:58:26,618 [MainThread ] [INFO ] my message
logging.basicConfig() can take a keyword argument handlers since Python 3.3, which simplifies logging setup a lot, especially when setting up multiple handlers with the same formatter:
handlers – If specified, this should be an iterable of already created handlers to add to the root logger. Any handlers which don’t already have a formatter set will be assigned the default formatter created in this function.
The whole setup can therefore be done with a single call like this:
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s",
    handlers=[
        logging.FileHandler("debug.log"),
        logging.StreamHandler()
    ]
)
(Or with import sys + StreamHandler(sys.stdout) per original question's requirements – the default for StreamHandler is to write to stderr. Look at LogRecord attributes if you want to customize the log format and add things like filename/line, thread info etc.)
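For reference, the same one-call setup with the console stream pointed at stdout instead of the default stderr:

import logging
import sys

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s",
    handlers=[
        logging.FileHandler("debug.log"),
        logging.StreamHandler(sys.stdout)  # stdout instead of default stderr
    ]
)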
The setup above needs to be done only once near the beginning of the script. You can use the logging from all other places in the codebase later like this:
logging.info('Useful message')
logging.error('Something bad happened')
...
Note: If it doesn't work, someone else has probably already initialized the logging system differently. Comments suggest doing logging.root.handlers = [] before the call to basicConfig().
Adding a StreamHandler without arguments goes to stderr instead of stdout. If some other process has a dependency on the stdout dump (i.e. when writing an NRPE plugin), then make sure to specify stdout explicitly or you might run into some unexpected troubles.
Here's a quick example reusing the assumed values and LOGFILE from the question:
import logging
import sys
from logging import handlers

log = logging.getLogger('')
log.setLevel(logging.DEBUG)
format = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")

ch = logging.StreamHandler(sys.stdout)
ch.setFormatter(format)
log.addHandler(ch)

fh = handlers.RotatingFileHandler(LOGFILE, maxBytes=(1048576*5), backupCount=7)
fh.setFormatter(format)
log.addHandler(fh)
Here is a complete, nicely wrapped solution based on Waterboy's answer and various other sources. It supports logging to both console and log file, allows for different log level settings, provides colorized output and is easily configurable (also available as Gist):
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

# ------------------------------------------------------------------------------
#  Python dual-logging setup (console and log file),
#  supporting different log levels and colorized output
#
#  Created by Fonic <https://github.com/fonic>
#  Date: 04/05/20 - 02/07/23
#
#  Based on:
#  https://stackoverflow.com/a/13733863/1976617
#  https://uran198.github.io/en/python/2016/07/12/colorful-python-logging.html
#  https://en.wikipedia.org/wiki/ANSI_escape_code#Colors
# ------------------------------------------------------------------------------

# Imports
import os
import sys
import logging

# Logging formatter supporting colorized output
class LogFormatter(logging.Formatter):

    COLOR_CODES = {
        logging.CRITICAL: "\033[1;35m",  # bright/bold magenta
        logging.ERROR:    "\033[1;31m",  # bright/bold red
        logging.WARNING:  "\033[1;33m",  # bright/bold yellow
        logging.INFO:     "\033[0;37m",  # white / light gray
        logging.DEBUG:    "\033[1;30m"   # bright/bold black / dark gray
    }

    RESET_CODE = "\033[0m"

    def __init__(self, color, *args, **kwargs):
        super(LogFormatter, self).__init__(*args, **kwargs)
        self.color = color

    def format(self, record, *args, **kwargs):
        if (self.color == True and record.levelno in self.COLOR_CODES):
            record.color_on = self.COLOR_CODES[record.levelno]
            record.color_off = self.RESET_CODE
        else:
            record.color_on = ""
            record.color_off = ""
        return super(LogFormatter, self).format(record, *args, **kwargs)

# Set up logging
def set_up_logging(console_log_output, console_log_level, console_log_color,
                   logfile_file, logfile_log_level, logfile_log_color, log_line_template):

    # Create logger
    # For simplicity, we use the root logger, i.e. call 'logging.getLogger()'
    # without a name argument. This way we can simply use module methods for
    # logging throughout the script. An alternative would be exporting
    # the logger, i.e. 'global logger; logger = logging.getLogger("<name>")'
    logger = logging.getLogger()

    # Set global log level to 'debug' (required for handler levels to work)
    logger.setLevel(logging.DEBUG)

    # Create console handler
    console_log_output = console_log_output.lower()
    if (console_log_output == "stdout"):
        console_log_output = sys.stdout
    elif (console_log_output == "stderr"):
        console_log_output = sys.stderr
    else:
        print("Failed to set console output: invalid output: '%s'" % console_log_output)
        return False
    console_handler = logging.StreamHandler(console_log_output)

    # Set console log level
    try:
        console_handler.setLevel(console_log_level.upper())  # only accepts uppercase level names
    except:
        print("Failed to set console log level: invalid level: '%s'" % console_log_level)
        return False

    # Create and set formatter, add console handler to logger
    console_formatter = LogFormatter(fmt=log_line_template, color=console_log_color)
    console_handler.setFormatter(console_formatter)
    logger.addHandler(console_handler)

    # Create log file handler
    try:
        logfile_handler = logging.FileHandler(logfile_file)
    except Exception as exception:
        print("Failed to set up log file: %s" % str(exception))
        return False

    # Set log file log level
    try:
        logfile_handler.setLevel(logfile_log_level.upper())  # only accepts uppercase level names
    except:
        print("Failed to set log file log level: invalid level: '%s'" % logfile_log_level)
        return False

    # Create and set formatter, add log file handler to logger
    logfile_formatter = LogFormatter(fmt=log_line_template, color=logfile_log_color)
    logfile_handler.setFormatter(logfile_formatter)
    logger.addHandler(logfile_handler)

    # Success
    return True

# Main function
def main():

    # Set up logging
    script_name = os.path.splitext(os.path.basename(sys.argv[0]))[0]
    if (not set_up_logging(console_log_output="stdout", console_log_level="warning", console_log_color=True,
                           logfile_file=script_name + ".log", logfile_log_level="debug", logfile_log_color=False,
                           log_line_template="%(color_on)s[%(created)d] [%(threadName)s] [%(levelname)-8s] %(message)s%(color_off)s")):
        print("Failed to set up logging, aborting.")
        return 1

    # Log some messages
    logging.debug("Debug message")
    logging.info("Info message")
    logging.warning("Warning message")
    logging.error("Error message")
    logging.critical("Critical message")

# Call main function
if (__name__ == "__main__"):
    sys.exit(main())
NOTE regarding Microsoft Windows:
For colorized output to work within the classic Command Prompt of Microsoft Windows, some additional code is necessary. This does NOT apply to the newer Terminal app which supports colorized output out of the box.
There are two options:
1) Use Python package colorama (filters output sent to stdout and stderr and translates escape sequences to native Windows API calls; works on Windows XP and later):
import colorama
colorama.init()
2) Enable ANSI terminal mode using the following function (enables terminal to interpret escape sequences by setting flag ENABLE_VIRTUAL_TERMINAL_PROCESSING; more info on this here, here, here and here; works on Windows 10 and later):
# Imports
import sys
import ctypes

# Enable ANSI terminal mode for Command Prompt on Microsoft Windows
def windows_enable_ansi_terminal_mode():
    if (sys.platform != "win32"):
        return None
    try:
        kernel32 = ctypes.windll.kernel32
        result = kernel32.SetConsoleMode(kernel32.GetStdHandle(-11), 7)
        if (result == 0): raise Exception
        return True
    except:
        return False
Either run basicConfig with stream=sys.stdout as the argument prior to setting up any other handlers or logging any messages, or manually add a StreamHandler that pushes messages to stdout to the root logger (or any other logger you want, for that matter).
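A minimal illustration of the first option (stream is a documented basicConfig argument; note it cannot be combined with filename in the same call):

import logging
import sys

# The root handler created by basicConfig now writes to stdout
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
logging.debug('goes to stdout')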
Logging to stdout and rotating file with different levels and formats:
import logging
import logging.handlers
import sys

if __name__ == "__main__":
    # Change root logger level from WARNING (default) to NOTSET in order for all messages to be delegated.
    logging.getLogger().setLevel(logging.NOTSET)

    # Add stdout handler, with level INFO
    console = logging.StreamHandler(sys.stdout)
    console.setLevel(logging.INFO)
    formatter = logging.Formatter('%(name)-13s: %(levelname)-8s %(message)s')
    console.setFormatter(formatter)
    logging.getLogger().addHandler(console)

    # Add file rotating handler, with level DEBUG
    rotatingHandler = logging.handlers.RotatingFileHandler(filename='rotating.log', maxBytes=1000, backupCount=5)
    rotatingHandler.setLevel(logging.DEBUG)
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    rotatingHandler.setFormatter(formatter)
    logging.getLogger().addHandler(rotatingHandler)

    log = logging.getLogger("app." + __name__)
    log.debug('Debug message, should only appear in the file.')
    log.info('Info message, should appear in file and stdout.')
    log.warning('Warning message, should appear in file and stdout.')
    log.error('Error message, should appear in file and stdout.')
After having used Waterboy's code over and over in multiple Python packages, I finally cast it into a tiny standalone Python package, which you can find here:
https://github.com/acschaefer/duallog
The code is well documented and easy to use. Simply download the .py file and include it in your project, or install the whole package via pip install duallog.
I have handled redirecting logs and prints to a file on disk, stdout, and stderr simultaneously, via the following module (the Gist is also available here):
import logging
import pathlib
import sys

from ml.common.const import LOG_DIR_PATH, ML_DIR


def create_log_file_path(file_path, root_dir=ML_DIR, log_dir=LOG_DIR_PATH):
    path_parts = list(pathlib.Path(file_path).parts)
    relative_path_parts = path_parts[path_parts.index(root_dir) + 1:]
    log_file_path = pathlib.Path(log_dir, *relative_path_parts)
    log_file_path = log_file_path.with_suffix('.log')
    # Create the directories and the file itself
    log_file_path.parent.mkdir(parents=True, exist_ok=True)
    log_file_path.touch(exist_ok=True)
    return log_file_path


def set_up_logs(file_path, mode='a', level=logging.INFO):
    log_file_path = create_log_file_path(file_path)
    logging_handlers = [logging.FileHandler(log_file_path, mode=mode),
                        logging.StreamHandler(sys.stdout)]
    logging.basicConfig(
        handlers=logging_handlers,
        format='%(asctime)s %(name)s %(levelname)s %(message)s',
        level=level
    )


class OpenedFileHandler(logging.FileHandler):
    def __init__(self, file_handle, filename, mode):
        self.file_handle = file_handle
        super(OpenedFileHandler, self).__init__(filename, mode)

    def _open(self):
        return self.file_handle


class StandardError:
    def __init__(self, buffer_stderr, buffer_file):
        self.buffer_stderr = buffer_stderr
        self.buffer_file = buffer_file

    def write(self, message):
        self.buffer_stderr.write(message)
        self.buffer_file.write(message)


class StandardOutput:
    def __init__(self, buffer_stdout, buffer_file):
        self.buffer_stdout = buffer_stdout
        self.buffer_file = buffer_file

    def write(self, message):
        self.buffer_stdout.write(message)
        self.buffer_file.write(message)


class Logger:
    def __init__(self, file_path, mode='a', level=logging.INFO):
        self.stdout_ = sys.stdout
        self.stderr_ = sys.stderr

        log_file_path = create_log_file_path(file_path)
        self.file_ = open(log_file_path, mode=mode)
        logging_handlers = [OpenedFileHandler(self.file_, log_file_path, mode=mode),
                            logging.StreamHandler(sys.stdout)]
        logging.basicConfig(
            handlers=logging_handlers,
            format='%(asctime)s %(name)s %(levelname)s %(message)s',
            level=level
        )

    # Overrides write() method of stdout and stderr buffers
    def write(self, message):
        self.stdout_.write(message)
        self.stderr_.write(message)
        self.file_.write(message)

    def flush(self):
        pass

    def __enter__(self):
        sys.stdout = StandardOutput(self.stdout_, self.file_)
        sys.stderr = StandardError(self.stderr_, self.file_)

    def __exit__(self, exc_type, exc_val, exc_tb):
        sys.stdout = self.stdout_
        sys.stderr = self.stderr_
        self.file_.close()
Since Logger is written as a context manager, you can add the functionality to your Python script with a single extra line:
from logger import Logger

...

if __name__ == '__main__':
    with Logger(__file__):
        main()
Although the question specifically asks for a logger configuration, there is an alternative approach that does not require any changes to the logging configuration, nor does it require redirecting stdout.
A bit simplistic, perhaps, but it works:
def log_and_print(message: str, level: int, logger: logging.Logger):
    logger.log(level=level, msg=message)  # log as normal
    print(message)  # prints to stdout by default
Instead of e.g. logger.debug('something') we now call log_and_print(message='something', level=logging.DEBUG, logger=logger).
We can also expand this a little, so it prints to stdout only when necessary:
def log_print(message: str, level: int, logger: logging.Logger):
    # log the message normally
    logger.log(level=level, msg=message)

    # only print to stdout if the message is not already logged to stdout
    msg_logged_to_stdout = False
    current_logger = logger
    while current_logger and not msg_logged_to_stdout:
        is_enabled = current_logger.isEnabledFor(level)
        logs_to_stdout = any(
            getattr(handler, 'stream', None) == sys.stdout
            for handler in current_logger.handlers
        )
        msg_logged_to_stdout = is_enabled and logs_to_stdout
        if not current_logger.propagate:
            current_logger = None
        else:
            current_logger = current_logger.parent

    if not msg_logged_to_stdout:
        print(message)
This checks the logger and its parents for any handlers that stream to stdout, and checks if the logger is enabled for the specified level.
Note this has not been optimized for performance.
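A short usage sketch (my example; the logger setup is illustrative):

import logging
import sys

logger = logging.getLogger('app')
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(sys.stdout))

log_print('hello', logging.INFO, logger)          # logged to stdout; no duplicate print
log_print('debug detail', logging.DEBUG, logger)  # below INFO: not logged, so printed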
For 2.7, try the following:
fh = logging.handlers.RotatingFileHandler(LOGFILE, maxBytes=(1048576*5), backupCount=7)
