How to log for each script using logging (Python)

I'm having an issue where I use two logging configurations, one per script. I even used filename="Test1/Logs/hello.txt" in the first script and Test2 in the second, so the file paths are separated. My problem is that whenever I run a script, it only ever saves to one folder, in my case Test1. It never logs into the Test2 folder, even when I run the test2.py script, which should save its logs to Test2.
What I have done is:
# -------------------------------------------------------------------------
# CLASS A
# -------------------------------------------------------------------------
import sys
import logging
from datetime import datetime

logging.basicConfig(
    filename='{}{}.txt'.format('/Test1/Logs/', datetime.now().strftime("%a %d %B %Y %H_%M_%S")),
    filemode='a',
    level=logging.INFO,
    format='[%(asctime)s]:%(levelname)s:%(funcName)s - %(message)s',
    datefmt=datetime.now().strftime("%H:%M:%S.%f")[:-3]
)
loggingError = logging.getLogger()
logging.warning("Starting: {}".format(datetime.now().strftime("%a %d-%B-%Y %H:%M:%S")))

class A:
    def __init__(self):
        self.logging = loggingError

    def aW1wb3J0RmlsZXM(self):
        self.logging.error(sys.exc_info(), exc_info=True)
# -------------------------------------------------------------------------
# CLASS B
# -------------------------------------------------------------------------
import sys
import logging
from datetime import datetime

logging.basicConfig(
    filename='{}{}.txt'.format('/Test2/Logs/', datetime.now().strftime("%a %d %B %Y %H_%M_%S")),
    filemode='a',
    level=logging.INFO,
    format='[%(asctime)s]:%(levelname)s:%(funcName)s - %(message)s',
    datefmt=datetime.now().strftime("%H:%M:%S.%f")[:-3]
)
loggingError = logging.getLogger()
logging.warning("Starting: {}".format(datetime.now().strftime("%a %d-%B-%Y %H:%M:%S")))

class B:
    def __init__(self):
        self.logging = loggingError

    def aW1wb3J0RmlsZXM(self):
        self.logging.error(sys.exc_info(), exc_info=True)
import A
import B

def main():
    if True:
        B.main()
    if False:
        A.main()

if __name__ == "__main__":
    try:
        main()
    except Exception as e:
        print(e)
As you can see, the only difference between the two logging configurations is the folder in the filename, and yet even if I run class B it still saves to the class A folder. What is the reason for that?
UPDATE, applied to each script:
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
file_handler = logging.FileHandler('{}{}.txt'.format(logsPath, datetime.now().strftime("%a %d %B %Y %H_%M_%S")))
formatter = logging.Formatter('[%(asctime)s]:%(levelname)s:%(funcName)s - %(message)s', "%d-%m-%Y %H:%M:%S")
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
logger.warning("Starting: {}".format(datetime.now().strftime("%a %d-%B-%Y %H:%M:%S")))

This happens because when you import A, all of the module-level code in that file runs, including the call to basicConfig. basicConfig only has an effect if the root logger has not been configured already; otherwise it does nothing. So when you first import A, logging gets configured, and the second call to basicConfig in B has no effect.
This is documented well in the official Python docs here.
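A minimal sketch of that behaviour (the file names here are made up for illustration):
import logging

logging.basicConfig(filename='first.log', level=logging.INFO)
logging.basicConfig(filename='second.log', level=logging.INFO)  # silently ignored: root already has a handler

logging.warning("this ends up in first.log only")
print(logging.getLogger().handlers)  # a single FileHandler pointing at first.log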
Edit:
Per-module file logging is more easily done with something like this in each of the files:
import logging
local_logger = logging.getLogger(__name__)
local_logger.addHandler(logging.FileHandler('filename.log', 'a'))
local_logger.setLevel(logging.INFO)
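For instance, a sketch with two hypothetical modules a.py and b.py (file names are made up): each module asks for its own named logger and attaches its own FileHandler, so importing both no longer funnels everything into one file.
# a.py
import logging
logger = logging.getLogger(__name__)  # logger named "a"
logger.addHandler(logging.FileHandler('a.log', 'a'))
logger.setLevel(logging.INFO)
logger.info("written to a.log")

# b.py
import logging
logger = logging.getLogger(__name__)  # logger named "b"
logger.addHandler(logging.FileHandler('b.log', 'a'))
logger.setLevel(logging.INFO)
logger.info("written to b.log")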

Related

Is it possible/advisable to put a logger into a single package and use it across all packages in a project?

What I want to do:
To create a Python package, and in that package, create a class that instantiates a logger. I'd create an instance of this class in the main function of my project and whenever I instantiate any package or class, I'd either import the logger class that I created or I'd pass a reference of the instance I created in the main function.
Why I want to do this:
1. Referential integrity of the logger filename, and not having to write the same lines of code everywhere to instantiate a logger.
2. To not have the logger as a global.
3. To be able to perform file operations before instantiating the logger file. For example, to create a log folder to store the log files before starting the logger.
4. To be able to load program parameters from a config file (my config file manager will be a separate class in a separate package) and to be able to load logger parameters from the config file.
The code I implemented as of now (which I want to change):
The main file:
import logging
import os
from logging.handlers import RotatingFileHandler
from operatingSystemFunctions import operatingSystemFunc
from diskOperations import fileAndFolderOperations

logFileName = os.path.join('logs', 'logs.log')
logging.basicConfig(level=logging.INFO, format='%(levelname)s %(asctime)s - %(message)s', datefmt='%d-%b-%y %H:%M:%S')
log = logging.getLogger(__name__)
handler = RotatingFileHandler(logFileName, maxBytes=2000, backupCount=5)  # TODO: Shift to config file
handler.formatter = logging.Formatter(fmt='%(levelname)s %(asctime)s - %(message)s', datefmt='%d-%b-%y %H:%M:%S')
log.addHandler(handler)
log.setLevel(logging.INFO)

if __name__ == '__main__':
    while True:
        log.info("Running")
        # do something
Similarly, each of the custom packages I've imported has its own global instantiation of the logger.
operatingSystemFunc, for example, is:
import subprocess
import os
from sys import platform  # to check which OS the program is running on
from abc import ABC, abstractmethod
import logging
from logging.handlers import RotatingFileHandler

logFileName = os.path.join('logs', 'logs.log')
logging.basicConfig(level=logging.INFO, format='%(levelname)s %(asctime)s - %(message)s', datefmt='%d-%b-%y %H:%M:%S')
log = logging.getLogger(__name__)
handler = RotatingFileHandler(logFileName, maxBytes=2000, backupCount=5)  # TODO: Shift to config file
handler.formatter = logging.Formatter(fmt='%(levelname)s %(asctime)s - %(message)s', datefmt='%d-%b-%y %H:%M:%S')  # setting this was necessary to get it to write the format to file. Without this, only the message part of the log would get written to the file
log.addHandler(handler)
log.setLevel(logging.INFO)

class AudioNotifier(ABC):  # Abstract parent class
    #abstractmethod
    def __init__(self):
        #self.id = "someUniqueNotifierName"  # has to be implemented in the child class
        pass
Right now the code throws an error FileNotFoundError: [Errno 2] No such file or directory: '/home/nav/prog/logs/logs.log' because I haven't yet created the logs folder.
One concern I have: if I implement the logger in a separate package and use it by importing that package into all other packages, or by passing a reference to an instance of the class to all other classes, would that be bad practice?
Update:
Learning from Pablo's answer, I changed the code to ensure that the RotatingFileHandler was simultaneously able to write to a file. So this is the resulting code:
import logging
from logging.handlers import RotatingFileHandler
from operatingSystemFunctions import operatingSystemFunc
from diskOperations import fileAndFolderOperations

logFileName = 'logs.log'
logging.basicConfig(level=logging.INFO, format='%(levelname)s %(asctime)s - %(message)s', datefmt='%d-%b-%y %H:%M:%S')
#log = logging.getLogger(__name__)
handler = RotatingFileHandler(logFileName, maxBytes=2000, backupCount=2)  # TODO: Shift to config file
handler.formatter = logging.Formatter(fmt='%(levelname)s %(asctime)s - %(message)s', datefmt='%d-%b-%y %H:%M:%S')  # setting this was necessary to get it to write the format to file. Without this, only the message part of the log would get written to the file
logging.getLogger().addHandler(handler)
logging.getLogger().setLevel(logging.INFO)

if __name__ == '__main__':
    logging.info("\n\n---------------------------------")
    logging.info("program started")
    operatingSystemCheck = operatingSystemFunc.OperatingSystemFunc()
    logging.info("Monitoring ...")
    while True:
        pass
and it's used in the packages like this:
import logging

class OperatingSystemFunc:
    def __init__(self):
        logging.info("Some message")
A few considerations: Python loggers are effectively singletons, so only one instance of a given logger is ever made. When you call logging.getLogger in different places, you create the logger the first time and get that same instance back again and again afterwards. So the handlers defined in the first place are accessible everywhere, without passing them around or importing them.
If you create the logger once in your main function under a chosen name, later you only have to call logging.getLogger("your chosen name"), so in most cases there is no need to create a class inside a package, etc. You can use the handlers you defined elsewhere that way, without importing them directly.
So I would say that what you are proposing is not good practice, because it is not necessary, with perhaps a few exceptions. If you really want to save writing log = logging.getLogger('...'), which is the only thing you would save, then you can certainly write that line in one package and import log instead of the logging module, and that will be fine. But then I advise against putting the logger inside a class; or, if it must be in a class, make the instance in the same file and import the instance, not the class.
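A sketch of that approach (the name "myapp" is arbitrary, chosen here for illustration):
# main.py -- configure the named logger once
import logging

log = logging.getLogger("myapp")
log.setLevel(logging.INFO)
log.addHandler(logging.FileHandler("myapp.log"))

# anywhere else in the project -- no import from main needed
import logging

log = logging.getLogger("myapp")  # same object, same handlers
log.info("visible in myapp.log")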
This is the logger file I use in my projects:
import logging

class CustomFormatter(logging.Formatter):
    grey = "\x1b[38;21m"
    green = "\x1b[0;30;42m"
    yellow = "\x1b[1;30;43m"
    red = "\x1b[0;30;41m"
    bold_red = "\x1b[1;30;1m"
    reset = "\x1b[0m"

    FORMATS = {
        logging.DEBUG: f"%(asctime)s - {grey}%(levelname)s{reset} - %(message)s",
        logging.INFO: f"%(asctime)s - {green}%(levelname)s{reset} - %(message)s",
        logging.WARNING: f"%(asctime)s - {yellow}%(levelname)s{reset} - %(message)s",
        logging.ERROR: f"%(asctime)s - {red}%(levelname)s{reset} - %(message)s",
        logging.CRITICAL: f"%(asctime)s - {bold_red}%(levelname)s{reset} - %(message)s"
    }

    def format(self, record):
        log_fmt = self.FORMATS.get(record.levelno)
        formatter = logging.Formatter(log_fmt)
        return formatter.format(record)

logger = logging.getLogger("core")
logger.setLevel(logging.DEBUG)

ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
ch.setFormatter(CustomFormatter())
logger.addHandler(ch)
and put this in __init__.py:
from .logger import logger as LOGGER
I also made an ini file, log.ini:
[loggers]
keys=root
[handlers]
keys=logfile,logconsole
[formatters]
keys=logformatter
[logger_root]
level=INFO
handlers=logfile, logconsole
[formatter_logformatter]
format=[%(asctime)s.%(msecs)03d] %(levelname)s [%(thread)d] - %(message)s
[handler_logfile]
class=handlers.RotatingFileHandler
level=INFO
args=('logs.log','a')
formatter=logformatter
[handler_logconsole]
class=StreamHandler
level=INFO
args=()
formatter=logformatter
These have helped me a lot; I hope they help you too.
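For completeness, an ini file like that is loaded once at startup with logging.config.fileConfig; a minimal sketch, assuming the file above is saved as log.ini next to the script:
import logging
import logging.config

logging.config.fileConfig("log.ini", disable_existing_loggers=False)
logging.getLogger("core").info("configured from log.ini")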

Python-threading: separate logging of individual threads

I would like an individual .log file for each thread. Unfortunately, after using logging.basicConfig, many different log files are created, but in the end all logs land in the last declared file.
What should threads do to have independent log files?
import logging
import threading
import time
from datetime import datetime

def test_printing(name):
    logging.basicConfig(
        format="%(asctime)s, %(levelname)-8s | %(filename)-23s:%(lineno)-4s | %(threadName)15s: %(message)s",  # noqa
        datefmt="%Y-%m-%d:%H:%M:%S",
        level=logging.INFO,
        force=True,
        handlers=[
            logging.FileHandler(f"{name}.log"),
            logging.StreamHandler()])
    logging.info(f"Test {name}")
    time.sleep(20)
    logging.info(f"Test {name} after 20s")

def function_thread():
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    thread = threading.Thread(
        target=test_printing,
        kwargs={"name": timestamp}
    )
    thread.start()

for i in range(5):
    time.sleep(1)
    function_thread()
From https://docs.python.org/3/library/logging.html#logger-objects
Note that Loggers should NEVER be instantiated directly, but always through the module-level function logging.getLogger(name).
So you have to create and configure a new logger inside each thread:
logger = logging.getLogger(name)
# ...then attach handlers and a level to this logger
# (Logger objects have no basicConfig() method)
more info at: https://docs.python.org/3/howto/logging.html#logging-from-multiple-modules
Edit: use the already defined name as the logger identifier, instead of __name__.
Edit:
You cannot use logging.basicConfig, instead you need to configure each thread logger on its own.
Full code provided and tested:
import logging
import threading
import time
from datetime import datetime

def test_printing(name):
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    formatter = logging.Formatter(
        fmt="%(asctime)s, %(levelname)-8s | %(filename)-23s:%(lineno)-4s | %(threadName)15s: %(message)s",
        datefmt="%Y-%m-%d:%H:%M:%S")
    sh = logging.StreamHandler()
    fh = logging.FileHandler(f"{name}.log")
    sh.setFormatter(formatter)
    fh.setFormatter(formatter)
    logger.addHandler(sh)
    logger.addHandler(fh)
    logger.info(f"Test {name}")
    time.sleep(20)
    logger.info(f"Test {name} after 20s")

def function_thread():
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    thread = threading.Thread(
        target=test_printing,
        kwargs={"name": timestamp}
    )
    thread.start()

for i in range(5):
    time.sleep(1)
    function_thread()
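One caveat worth adding (my note, not part of the tested code above): getLogger(name) returns the same object for the same name, so if a name ever repeats, handlers pile up and every record gets written twice. A small guard sketch, assuming the setup above:
logger = logging.getLogger(name)
if not logger.handlers:  # only configure this logger on first use of the name
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.FileHandler(f"{name}.log"))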

Logger not outputting INFO messages after specifying filename in basicConfig

I'm using a logging object to display the process of my Python script. Things were working fine until I specified a log file.
Working Code
from datetime import timedelta
import logging
import time

def main():
    start_time = time.monotonic()
    config = get_args()

    # Logger configuration.
    msg_format = '[%(asctime)s - %(levelname)s - %(filename)s: %(lineno)d (%(funcName)s)] %(message)s'
    logging.basicConfig(format=msg_format, level=logging.INFO)
    logger = logging.getLogger()
    logger.info(msg='Starting process...')

    # End of process.
    end_time = time.monotonic()
    process_time = str(timedelta(seconds=(end_time - start_time))).split('.')[0]  # Get rid of trailing microseconds.
    logger.info(msg=('End. Entire process took approximately %s' % process_time))

if __name__ == '__main__':
    main()
Not Working Code
from datetime import datetime, timedelta
import logging
import os
import time

def main():
    start_time = time.monotonic()
    config = get_args()

    # Logger configuration.
    timestamp = datetime.now().strftime(format='%Y%m%d-%H%M')
    log_filename = '_'.join(['log', timestamp])
    log_file_path = os.path.join(config.log_dir, log_filename)
    msg_format = '[%(asctime)s - %(levelname)s - %(filename)s: %(lineno)d (%(funcName)s)] %(message)s'
    logging.basicConfig(filename=log_file_path, format=msg_format, level=logging.INFO)
    logger = logging.getLogger()
    logger.info(msg='Starting process...')

    # End of process.
    end_time = time.monotonic()
    process_time = str(timedelta(seconds=(end_time - start_time))).split('.')[0]  # Get rid of trailing microseconds.
    logger.info(msg=('End. Entire process took approximately %s' % process_time))

if __name__ == '__main__':
    main()
Would anyone know what might be the problem? I'm suspecting that specifying the log filename in the logging.basicConfig may have messed up the handlers or something, but I'm not 100% certain. Thanks!
The answer was relatively obvious after reading the documentation a bit more carefully. I wasn't aware that I hadn't included any handlers in my configuration. To fix the issue, all I had to do was change:
logging.basicConfig(filename=log_file_path, format=msg_format, level=logging.INFO)
to this:
logging.basicConfig(format=msg_format, level=logging.INFO,
                    handlers=[logging.FileHandler(filename=log_file_path), logging.StreamHandler()])
Hope this helps anyone else running into the same mistake.
This answer helped me: logger configuration to log to file and print to stdout

selectively setting the console logging level

I am trying to use an FTP server stub during tests. I don't want the console output, but I would like to capture the logging to a file.
I want the FTP server to run in a different process, so I use multiprocessing.
My code as follows sets all logging to level WARNING:
from pyftpdlib.authorizers import DummyAuthorizer
from pyftpdlib.handlers import FTPHandler
from pyftpdlib.servers import FTPServer
import pyftpdlib.log as pyftpdliblog
import os
import logging
import multiprocessing as mp

authorizer = DummyAuthorizer()
authorizer.add_user('user', '12345', '.', perm='elradfmwM')
handler = FTPHandler
handler.authorizer = authorizer

pyftpdliblog.LEVEL = logging.WARNING
logging.basicConfig(filename='pyftpd.log', level=logging.INFO)

server = FTPServer(('', 2121), handler)

def main():
    p = mp.Process(target=server.serve_forever)
    p.start()

if __name__ == '__main__':
    main()
How do I set only the console logging to level WARNING, or even better, completely shutdown without giving up the file logging?
So, after digging inside the code, I found out the following hint:
# This is a method of FTPServer and it is called before
# server.serve_forever
def _log_start(self):
    if not logging.getLogger('pyftpdlib').handlers:
        # If we get to this point it means the user hasn't
        # configured logger. We want to log by default so
        # we configure logging ourselves so that it will
        # print to stderr.
        from pyftpdlib.ioloop import _config_logging
        _config_logging()
So, all I had to do was define my own appropriate handlers:
logger = logging.getLogger('pyftpdlib')
logger.setLevel(logging.INFO)
hdlr = logging.FileHandler('pyftpd.log')
logger.addHandler(hdlr)
Now, there is file logging, but console logging will not start.
Something like this:
import logging
date_format = "%Y/%m/%d %H:%M:%S"
log_file_path = "my_file.txt"
root_logger = logging.getLogger()
root_logger.setLevel(logging.DEBUG)
# own_module_logger = logging.getLogger(__name__)
pyftpdlib_logger = logging.getLogger("pyftpdlib")
# Setup logging to file (Only pyftpdlib)
filehandler = logging.FileHandler(filename = log_file_path)
filehandler.setLevel(logging.DEBUG)
fileformatter = logging.Formatter(fmt = "%(asctime)s - %(levelname)-8s - %(name)s.%(funcName)s - %(message)s",
datefmt = date_format)
filehandler.setFormatter(fileformatter)
pyftpdlib_logger.addHandler(filehandler)
pyftpdlib_logger.propagate = False
# Setup logging to console (All other)
console = logging.StreamHandler()
console.setLevel(logging.INFO)
consoleformatter = logging.Formatter(fmt = "%(asctime)s - %(levelname)-8s - %(name)s.%(funcName)s - %(message)s",
datefmt = date_format)
console.setFormatter(consoleformatter)
root_logger.addHandler(console)
# Do your loggings
a = logging.getLogger()
a.info('root I')
a.debug('root D')
b = logging.getLogger("pyftpdlib")
b.info('P I')
b.debug('P D')
logging.shutdown()
So the pyftpdlib logging goes to the file, and everything from your own module goes to the console. One of the key things here is propagate!
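A stripped-down sketch of the propagate mechanics, independent of pyftpdlib (the logger names here are arbitrary):
import logging
import sys

root = logging.getLogger()
root.setLevel(logging.INFO)
root.addHandler(logging.StreamHandler(sys.stdout))  # console handler for everything that propagates

quiet = logging.getLogger("quiet")
quiet.addHandler(logging.FileHandler("quiet.log"))  # file handler only
quiet.propagate = False  # stop records from bubbling up to the root console handler

logging.getLogger("normal").info("reaches the console via the root logger")
quiet.info("goes to quiet.log only")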

logger configuration to log to file and print to stdout

I'm using Python's logging module to log some debug strings to a file which works pretty well. Now in addition, I'd like to use this module to also print the strings out to stdout. How do I do this? In order to log my strings to a file I use following code:
import logging
import logging.handlers
logger = logging.getLogger("")
logger.setLevel(logging.DEBUG)
handler = logging.handlers.RotatingFileHandler(
    LOGFILE, maxBytes=(1048576*5), backupCount=7
)
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
handler.setFormatter(formatter)
logger.addHandler(handler)
and then call a logger function like
logger.debug("I am written to the file")
Thank you for some help here!
Just get a handle to the root logger and add the StreamHandler. The StreamHandler writes to stderr. Not sure if you really need stdout over stderr, but this is what I use when I setup the Python logger and I also add the FileHandler as well. Then all my logs go to both places (which is what it sounds like you want).
import logging
logging.getLogger().addHandler(logging.StreamHandler())
If you want to output to stdout instead of stderr, you just need to specify it to the StreamHandler constructor.
import sys
# ...
logging.getLogger().addHandler(logging.StreamHandler(sys.stdout))
You could also add a Formatter to it so all your log lines have a common header.
i.e.:
import logging
logFormatter = logging.Formatter("%(asctime)s [%(threadName)-12.12s] [%(levelname)-5.5s] %(message)s")
rootLogger = logging.getLogger()
fileHandler = logging.FileHandler("{0}/{1}.log".format(logPath, fileName))
fileHandler.setFormatter(logFormatter)
rootLogger.addHandler(fileHandler)
consoleHandler = logging.StreamHandler()
consoleHandler.setFormatter(logFormatter)
rootLogger.addHandler(consoleHandler)
Prints to the format of:
2012-12-05 16:58:26,618 [MainThread ] [INFO ] my message
logging.basicConfig() can take a keyword argument handlers since Python 3.3, which simplifies logging setup a lot, especially when setting up multiple handlers with the same formatter:
handlers – If specified, this should be an iterable of already created handlers to add to the root logger. Any handlers which don’t already have a formatter set will be assigned the default formatter created in this function.
The whole setup can therefore be done with a single call like this:
import logging
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s",
    handlers=[
        logging.FileHandler("debug.log"),
        logging.StreamHandler()
    ]
)
(Or with import sys + StreamHandler(sys.stdout) per original question's requirements – the default for StreamHandler is to write to stderr. Look at LogRecord attributes if you want to customize the log format and add things like filename/line, thread info etc.)
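That stdout variant, spelled out (the same setup, with the stream made explicit):
import logging
import sys

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s",
    handlers=[
        logging.FileHandler("debug.log"),
        logging.StreamHandler(sys.stdout)
    ]
)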
The setup above needs to be done only once near the beginning of the script. You can use the logging from all other places in the codebase later like this:
logging.info('Useful message')
logging.error('Something bad happened')
...
Note: If it doesn't work, someone else has probably already initialized the logging system differently. Comments suggest doing logging.root.handlers = [] before the call to basicConfig().
Adding a StreamHandler without arguments goes to stderr instead of stdout. If some other process has a dependency on the stdout dump (i.e. when writing an NRPE plugin), then make sure to specify stdout explicitly or you might run into some unexpected troubles.
Here's a quick example reusing the assumed values and LOGFILE from the question:
import logging
from logging import handlers
import sys

log = logging.getLogger('')
log.setLevel(logging.DEBUG)
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")

ch = logging.StreamHandler(sys.stdout)
ch.setFormatter(formatter)
log.addHandler(ch)

fh = handlers.RotatingFileHandler(LOGFILE, maxBytes=(1048576*5), backupCount=7)
fh.setFormatter(formatter)
log.addHandler(fh)
Here is a complete, nicely wrapped solution based on Waterboy's answer and various other sources. It supports logging to both console and log file, allows for different log level settings, provides colorized output and is easily configurable (also available as Gist):
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

# -------------------------------------------------------------------------------
# -
# Python dual-logging setup (console and log file), -
# supporting different log levels and colorized output -
# -
# Created by Fonic <https://github.com/fonic> -
# Date: 04/05/20 - 02/07/23 -
# -
# Based on: -
# https://stackoverflow.com/a/13733863/1976617 -
# https://uran198.github.io/en/python/2016/07/12/colorful-python-logging.html -
# https://en.wikipedia.org/wiki/ANSI_escape_code#Colors -
# -
# -------------------------------------------------------------------------------

# Imports
import os
import sys
import logging

# Logging formatter supporting colorized output
class LogFormatter(logging.Formatter):

    COLOR_CODES = {
        logging.CRITICAL: "\033[1;35m",  # bright/bold magenta
        logging.ERROR:    "\033[1;31m",  # bright/bold red
        logging.WARNING:  "\033[1;33m",  # bright/bold yellow
        logging.INFO:     "\033[0;37m",  # white / light gray
        logging.DEBUG:    "\033[1;30m"   # bright/bold black / dark gray
    }

    RESET_CODE = "\033[0m"

    def __init__(self, color, *args, **kwargs):
        super(LogFormatter, self).__init__(*args, **kwargs)
        self.color = color

    def format(self, record, *args, **kwargs):
        if (self.color == True and record.levelno in self.COLOR_CODES):
            record.color_on = self.COLOR_CODES[record.levelno]
            record.color_off = self.RESET_CODE
        else:
            record.color_on = ""
            record.color_off = ""
        return super(LogFormatter, self).format(record, *args, **kwargs)

# Set up logging
def set_up_logging(console_log_output, console_log_level, console_log_color, logfile_file, logfile_log_level, logfile_log_color, log_line_template):

    # Create logger
    # For simplicity, we use the root logger, i.e. call 'logging.getLogger()'
    # without name argument. This way we can simply use module methods for
    # logging throughout the script. An alternative would be exporting
    # the logger, i.e. 'global logger; logger = logging.getLogger("<name>")'
    logger = logging.getLogger()

    # Set global log level to 'debug' (required for handler levels to work)
    logger.setLevel(logging.DEBUG)

    # Create console handler
    console_log_output = console_log_output.lower()
    if (console_log_output == "stdout"):
        console_log_output = sys.stdout
    elif (console_log_output == "stderr"):
        console_log_output = sys.stderr
    else:
        print("Failed to set console output: invalid output: '%s'" % console_log_output)
        return False
    console_handler = logging.StreamHandler(console_log_output)

    # Set console log level
    try:
        console_handler.setLevel(console_log_level.upper())  # only accepts uppercase level names
    except:
        print("Failed to set console log level: invalid level: '%s'" % console_log_level)
        return False

    # Create and set formatter, add console handler to logger
    console_formatter = LogFormatter(fmt=log_line_template, color=console_log_color)
    console_handler.setFormatter(console_formatter)
    logger.addHandler(console_handler)

    # Create log file handler
    try:
        logfile_handler = logging.FileHandler(logfile_file)
    except Exception as exception:
        print("Failed to set up log file: %s" % str(exception))
        return False

    # Set log file log level
    try:
        logfile_handler.setLevel(logfile_log_level.upper())  # only accepts uppercase level names
    except:
        print("Failed to set log file log level: invalid level: '%s'" % logfile_log_level)
        return False

    # Create and set formatter, add log file handler to logger
    logfile_formatter = LogFormatter(fmt=log_line_template, color=logfile_log_color)
    logfile_handler.setFormatter(logfile_formatter)
    logger.addHandler(logfile_handler)

    # Success
    return True

# Main function
def main():

    # Set up logging
    script_name = os.path.splitext(os.path.basename(sys.argv[0]))[0]
    if (not set_up_logging(console_log_output="stdout", console_log_level="warning", console_log_color=True,
                           logfile_file=script_name + ".log", logfile_log_level="debug", logfile_log_color=False,
                           log_line_template="%(color_on)s[%(created)d] [%(threadName)s] [%(levelname)-8s] %(message)s%(color_off)s")):
        print("Failed to set up logging, aborting.")
        return 1

    # Log some messages
    logging.debug("Debug message")
    logging.info("Info message")
    logging.warning("Warning message")
    logging.error("Error message")
    logging.critical("Critical message")

# Call main function
if (__name__ == "__main__"):
    sys.exit(main())
NOTE regarding Microsoft Windows:
For colorized output to work within the classic Command Prompt of Microsoft Windows, some additional code is necessary. This does NOT apply to the newer Terminal app which supports colorized output out of the box.
There are two options:
1) Use Python package colorama (filters output sent to stdout and stderr and translates escape sequences to native Windows API calls; works on Windows XP and later):
import colorama
colorama.init()
2) Enable ANSI terminal mode using the following function (enables terminal to interpret escape sequences by setting flag ENABLE_VIRTUAL_TERMINAL_PROCESSING; more info on this here, here, here and here; works on Windows 10 and later):
# Imports
import sys
import ctypes

# Enable ANSI terminal mode for Command Prompt on Microsoft Windows
def windows_enable_ansi_terminal_mode():
    if (sys.platform != "win32"):
        return None
    try:
        kernel32 = ctypes.windll.kernel32
        result = kernel32.SetConsoleMode(kernel32.GetStdHandle(-11), 7)
        if (result == 0): raise Exception
        return True
    except:
        return False
Either run basicConfig with stream=sys.stdout as the argument prior to setting up any other handlers or logging any messages, or manually add a StreamHandler that pushes messages to stdout to the root logger (or any other logger you want, for that matter).
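For instance, a minimal sketch of the first option:
import logging
import sys

logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.info("this goes to stdout rather than the default stderr")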
Logging to stdout and rotating file with different levels and formats:
import logging
import logging.handlers
import sys

if __name__ == "__main__":
    # Change root logger level from WARNING (default) to NOTSET in order for all messages to be delegated.
    logging.getLogger().setLevel(logging.NOTSET)

    # Add stdout handler, with level INFO
    console = logging.StreamHandler(sys.stdout)
    console.setLevel(logging.INFO)
    formatter = logging.Formatter('%(name)-13s: %(levelname)-8s %(message)s')
    console.setFormatter(formatter)
    logging.getLogger().addHandler(console)

    # Add file rotating handler, with level DEBUG
    rotatingHandler = logging.handlers.RotatingFileHandler(filename='rotating.log', maxBytes=1000, backupCount=5)
    rotatingHandler.setLevel(logging.DEBUG)
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    rotatingHandler.setFormatter(formatter)
    logging.getLogger().addHandler(rotatingHandler)

    log = logging.getLogger("app." + __name__)
    log.debug('Debug message, should only appear in the file.')
    log.info('Info message, should appear in file and stdout.')
    log.warning('Warning message, should appear in file and stdout.')
    log.error('Error message, should appear in file and stdout.')
After having used Waterboy's code over and over in multiple Python packages, I finally cast it into a tiny standalone Python package, which you can find here:
https://github.com/acschaefer/duallog
The code is well documented and easy to use. Simply download the .py file and include it in your project, or install the whole package via pip install duallog.
I have handled redirecting logs and prints to a file on disk, stdout, and stderr simultaneously, via the following module (the Gist is also available here):
import logging
import pathlib
import sys

from ml.common.const import LOG_DIR_PATH, ML_DIR


def create_log_file_path(file_path, root_dir=ML_DIR, log_dir=LOG_DIR_PATH):
    path_parts = list(pathlib.Path(file_path).parts)
    relative_path_parts = path_parts[path_parts.index(root_dir) + 1:]
    log_file_path = pathlib.Path(log_dir, *relative_path_parts)
    log_file_path = log_file_path.with_suffix('.log')
    # Create the directories and the file itself
    log_file_path.parent.mkdir(parents=True, exist_ok=True)
    log_file_path.touch(exist_ok=True)
    return log_file_path


def set_up_logs(file_path, mode='a', level=logging.INFO):
    log_file_path = create_log_file_path(file_path)
    logging_handlers = [logging.FileHandler(log_file_path, mode=mode),
                        logging.StreamHandler(sys.stdout)]
    logging.basicConfig(
        handlers=logging_handlers,
        format='%(asctime)s %(name)s %(levelname)s %(message)s',
        level=level
    )


class OpenedFileHandler(logging.FileHandler):

    def __init__(self, file_handle, filename, mode):
        self.file_handle = file_handle
        super(OpenedFileHandler, self).__init__(filename, mode)

    def _open(self):
        return self.file_handle


class StandardError:
    def __init__(self, buffer_stderr, buffer_file):
        self.buffer_stderr = buffer_stderr
        self.buffer_file = buffer_file

    def write(self, message):
        self.buffer_stderr.write(message)
        self.buffer_file.write(message)


class StandardOutput:
    def __init__(self, buffer_stdout, buffer_file):
        self.buffer_stdout = buffer_stdout
        self.buffer_file = buffer_file

    def write(self, message):
        self.buffer_stdout.write(message)
        self.buffer_file.write(message)


class Logger:
    def __init__(self, file_path, mode='a', level=logging.INFO):
        self.stdout_ = sys.stdout
        self.stderr_ = sys.stderr

        log_file_path = create_log_file_path(file_path)
        self.file_ = open(log_file_path, mode=mode)

        logging_handlers = [OpenedFileHandler(self.file_, log_file_path,
                                              mode=mode),
                            logging.StreamHandler(sys.stdout)]
        logging.basicConfig(
            handlers=logging_handlers,
            format='%(asctime)s %(name)s %(levelname)s %(message)s',
            level=level
        )

    # Overrides write() method of stdout and stderr buffers
    def write(self, message):
        self.stdout_.write(message)
        self.stderr_.write(message)
        self.file_.write(message)

    def flush(self):
        pass

    def __enter__(self):
        sys.stdout = StandardOutput(self.stdout_, self.file_)
        sys.stderr = StandardError(self.stderr_, self.file_)

    def __exit__(self, exc_type, exc_val, exc_tb):
        sys.stdout = self.stdout_
        sys.stderr = self.stderr_
        self.file_.close()
Since it is written as a context manager, you can add the functionality to your Python script with one extra line:
from logger import Logger
...

if __name__ == '__main__':
    with Logger(__file__):
        main()
Although the question specifically asks for a logger configuration, there is an alternative approach that does not require any changes to the logging configuration, nor does it require redirecting stdout.
A bit simplistic, perhaps, but it works:
import logging

def log_and_print(message: str, level: int, logger: logging.Logger):
    logger.log(level=level, msg=message)  # log as normal
    print(message)  # prints to stdout by default
Instead of e.g. logger.debug('something') we now call log_and_print(message='something', level=logging.DEBUG, logger=logger).
We can also expand this a little, so it prints to stdout only when necessary:
import logging
import sys

def log_print(message: str, level: int, logger: logging.Logger):
    # log the message normally
    logger.log(level=level, msg=message)

    # only print to stdout if the message is not logged to stdout
    msg_logged_to_stdout = False
    current_logger = logger
    while current_logger and not msg_logged_to_stdout:
        is_enabled = current_logger.isEnabledFor(level)
        logs_to_stdout = any(
            getattr(handler, 'stream', None) == sys.stdout
            for handler in current_logger.handlers
        )
        msg_logged_to_stdout = is_enabled and logs_to_stdout
        if not current_logger.propagate:
            current_logger = None
        else:
            current_logger = current_logger.parent

    if not msg_logged_to_stdout:
        print(message)
This checks the logger and its parents for any handlers that stream to stdout, and checks if the logger is enabled for the specified level.
Note this has not been optimized for performance.
For 2.7, try the following:
fh = logging.handlers.RotatingFileHandler(LOGFILE, maxBytes=(1048576*5), backupCount=7)
