import logging
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(levelname)s %(message)s', datefmt='%H:%M:%S')
logging.info('hello')
logging.warning('\n new hello')
11:15:01 INFO hello
11:16:49 WARNING
new hello
Because the log is crowded, I want to explicitly insert a newline before asctime and levelname. Is this possible without modifying format?
I looked into the logging module and googled a bit and could not find a viable way.
I have two solutions: the first is very easy, but its output is not very clean. The second method produces exactly the output you want, but it is a little more involved.
Method 1
To produce a blank line, just log an empty string with a new line:
import logging
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(levelname)s %(message)s', datefmt='%H:%M:%S')
logging.info('hello')
logging.info('\n')
logging.warning('new hello')
The output will have an empty info line, which is not very clean:
16:07:26 INFO hello
16:07:26 INFO
16:07:26 WARNING new hello
Method 2
In this method, I created two different handlers. The console_handler which I use most of the time. When I need a new line, I switch to a second handler, blank_handler.
import logging
import types

def log_newline(self, how_many_lines=1):
    # Switch handler, output a blank line
    self.removeHandler(self.console_handler)
    self.addHandler(self.blank_handler)
    for i in range(how_many_lines):
        self.info('')

    # Switch back
    self.removeHandler(self.blank_handler)
    self.addHandler(self.console_handler)

def create_logger():
    # Create a handler
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.DEBUG)
    console_handler.setFormatter(logging.Formatter(fmt="%(name)s %(levelname)-8s: %(message)s"))

    # Create a "blank line" handler
    blank_handler = logging.StreamHandler()
    blank_handler.setLevel(logging.DEBUG)
    blank_handler.setFormatter(logging.Formatter(fmt=''))

    # Create a logger, with the previously-defined handler
    logger = logging.getLogger('logging_test')
    logger.setLevel(logging.DEBUG)
    logger.addHandler(console_handler)

    # Save some data and add a method to logger object
    logger.console_handler = console_handler
    logger.blank_handler = blank_handler
    logger.newline = types.MethodType(log_newline, logger)

    return logger

if __name__ == '__main__':
    logger = create_logger()
    logger.info('Start reading database')
    logger.info('Updating records ...')
    logger.newline()
    logger.info('Finish updating records')
The output is what you want to see:
logging_test INFO : Start reading database
logging_test INFO : Updating records ...

logging_test INFO : Finish updating records
Discussion
If you can put up with the less-than-perfect output, method 1 is the way to go. It has the advantage of being simple and requiring the least amount of effort.
The second method does the job correctly, but it is a little more involved. It creates two different handlers and switches between them to achieve your goal.
Another disadvantage of using method 2 is that you have to change your code by searching for logging and replacing it with logger. You must take care to replace only the relevant parts and leave text such as logging.DEBUG intact.
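One mitigation for that last point, as a side note: named loggers are global to the process, so other modules do not have to import the logger object at all. A minimal sketch (assuming create_logger() from Method 2 has already been called once at start-up):
# other_module.py -- sketch only; assumes create_logger() above already ran once
import logging

logger = logging.getLogger('logging_test')   # same logger object, same handlers, same newline()

def update_records():
    logger.info('Updating records ...')
    logger.newline()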
Could you not add the newline after the first hello? i.e.
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(levelname)s %(message)s', datefmt='%H:%M:%S')
logging.info('hello\n')
logging.info('new hello')
Which will output
11:37:24 INFO hello

11:37:24 INFO new hello
Easiest way to insert newlines that I figured out:
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(levelname)s\n\r%(message)s', datefmt='%H:%M:%S')
logging.info('hello')
logging.info('new hello')
11:50:32 INFO
hello
11:50:32 INFO
new hello
Use a custom Formatter which uses different format strings at different times. You can't do this using basicConfig() - you'll have to use other parts of the logging API.
class MyFormatter(logging.Formatter):
    def format(self, record):
        # set self._fmt to value with or without newline,
        # as per your decision criteria
        # self._fmt = ...
        return super(MyFormatter, self).format(record)
Or, you can call the super method, then modify the string to insert a newline before returning it (in case it's dependent on line length, say).
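For instance, here is a minimal sketch of the second variant; the decision criterion used here (an extra newline flag passed via extra) is only an illustration, not something prescribed by the answer:
import logging

class NewlineFormatter(logging.Formatter):
    def format(self, record):
        s = super(NewlineFormatter, self).format(record)
        # prepend a blank line when the caller asks for one via extra={'newline': True}
        if getattr(record, 'newline', False):
            s = '\n' + s
        return s

handler = logging.StreamHandler()
handler.setFormatter(NewlineFormatter('%(asctime)s %(levelname)s %(message)s', datefmt='%H:%M:%S'))
root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.DEBUG)

logging.info('hello')
logging.warning('new hello', extra={'newline': True})   # a blank line precedes this record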
As an alternative to Hai Vu's Method 2 you could as well reset the handler's Formatter every time you want to log a new line:
import logging
import types

def log_newline(self, how_many_lines=1):
    # Switch formatter, output a blank line
    self.handler.setFormatter(self.blank_formatter)
    for i in range(how_many_lines):
        self.info('')

    # Switch back
    self.handler.setFormatter(self.formatter)

def create_logger():
    # Create a handler
    handler = logging.StreamHandler()
    handler.setLevel(logging.DEBUG)
    formatter = logging.Formatter(fmt="%(name)s %(levelname)-8s: %(message)s")
    blank_formatter = logging.Formatter(fmt="")
    handler.setFormatter(formatter)

    # Create a logger, with the previously-defined handler
    logger = logging.getLogger('logging_test')
    logger.setLevel(logging.DEBUG)
    logger.addHandler(handler)

    # Save some data and add a method to logger object
    logger.handler = handler
    logger.formatter = formatter
    logger.blank_formatter = blank_formatter
    logger.newline = types.MethodType(log_newline, logger)

    return logger

if __name__ == '__main__':
    logger = create_logger()
    logger.info('Start reading database')
    logger.info('Updating records ...')
    logger.newline()
    logger.info('Finish updating records')
Output
logging_test INFO : Start reading database
logging_test INFO : Updating records ...

logging_test INFO : Finish updating records
The advantage of this is that you have a single handler. For example, you can set a FileHandler's mode attribute to 'w' if you want to truncate your log file on every new run of your program.
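As an illustrative sketch of that point (the filename is just an example), the single handler could be a FileHandler opened with mode='w', and the formatter-swapping newline() trick above works with it unchanged:
import logging

# sketch only: mode='w' truncates 'run.log' on every run of the program
handler = logging.FileHandler('run.log', mode='w')
handler.setFormatter(logging.Formatter(fmt="%(name)s %(levelname)-8s: %(message)s"))

logger = logging.getLogger('logging_test')
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)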
If you are just looking to output some debug information during development then you may not want to spend time on this. The 5-second fix is this:
msg = "\n\n\n"
logging.getLogger().debug(msg)
where logging is the standard Python logging module.
Something like this: add \n into your logging.basicConfig format, between asctime and levelname:
>>> logging.basicConfig(level=logging.DEBUG, format='%(asctime)s\n %(levelname)s %(message)s',datefmt='%H:%M:%S')
What about writing to the log file, without the logging service?
fn_log = 'test.log'
logging.basicConfig(filename=fn_log, level=logging.INFO, format='%(asctime)s %(levelname)s %(message)s', datefmt='%H:%M:%S')
logging.info('hello')
logging.warning('no empty line')
def empty_line(fn_log):
    new_empty_line = open(fn_log, 'a+')
    new_empty_line.write('\n')
    new_empty_line.close()
empty_line(fn_log)
logging.warning('hello')
Output:
09:26:00 INFO hello
11:51:05 INFO hello
11:51:05 WARNING no empty line

11:51:05 WARNING hello
Following up on Vinay Sajip's helpful answer (above), I did it this way (I'm using the Python 3 superclass convention, but super(MyFormatter, self) works just as well) ...
class MyFormatter(logging.Formatter):
    def format(self, record):
        return super().format(record).replace(r'\n', '\n')
Then, I can embed newlines as follows:
logging.info('Message\\n\\n\\n\\nOther stuff')
or
logging.info(r'Message\n\n\n\nOther stuff')
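For completeness, here is one way that formatter might be wired up; the handler setup below is my own sketch, not part of the original answer:
import logging

class MyFormatter(logging.Formatter):                     # class from the answer above
    def format(self, record):
        return super().format(record).replace(r'\n', '\n')

handler = logging.StreamHandler()
handler.setFormatter(MyFormatter('%(asctime)s %(levelname)s %(message)s', datefmt='%H:%M:%S'))
root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.INFO)

logging.info(r'Message\n\nOther stuff')   # the literal \n sequences become real newlines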
If you use FileHandler or descendants thereof, these two functions may help. An added benefit is that all FileHandler type handlers attached to the logger should get the newline.
def getAllLoggerFilenames(logger):
    """ Returns array of all log filenames attached to the logger. """
    logFiles = []
    parent = logger.parent
    if parent.__class__.__name__ == 'RootLogger':
        for h in logger.handlers:
            # only FileHandler-type handlers have a baseFilename
            if hasattr(h, 'baseFilename'):
                logFiles.append(h.baseFilename)
    else:
        logFiles = getAllLoggerFilenames(parent)
    return logFiles

def logBlankLine(logger):
    """ This utility method writes a blank line to the log. """
    logNames = getAllLoggerFilenames(logger)
    for fn in logNames:
        with open(fn, 'a') as fh:
            fh.write("\n")
Usage:
# We use YAML for logging config files, YMMV:
with open(logConfig, 'rt') as f:
    logging.config.dictConfig(yaml.safe_load(f.read()))

logger = logging.getLogger("test.test")
logger.info("line 1")
logBlankLine(logger)
logger.info("line 2")
Output:
2019/12/22 16:33:59.152: INFO : test.test : line 1

2019/12/22 16:33:59.152: INFO : test.test : line 2
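The YAML file itself is not shown in the answer; as a rough idea of what it might contain, an assumed, roughly equivalent configuration expressed directly as a dictConfig dictionary (the filename and format string are guesses based on the output above) could look like this:
import logging.config

LOGGING = {
    'version': 1,
    'formatters': {
        'detailed': {
            'format': '%(asctime)s.%(msecs)03d: %(levelname)s : %(name)s : %(message)s',
            'datefmt': '%Y/%m/%d %H:%M:%S',
        },
    },
    'handlers': {
        'file': {
            'class': 'logging.FileHandler',
            'filename': 'test.log',            # illustrative filename
            'formatter': 'detailed',
        },
    },
    'loggers': {
        'test.test': {'handlers': ['file'], 'level': 'INFO'},
    },
}

logging.config.dictConfig(LOGGING)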
The easiest solution is to use f-strings if you are using Python 3:
logging.info(f'hello\n')
You can try the following solution. It's simple and straightforward.
logging.debug("\b" * 20) # output blank line using escape character
logging.debug("debug message")
Related
I have a file named helper.py
import logging
import os
from json import load

def get_config(value):
    with open('config.json', 'r') as f:
        result = load(f)[value]
    return result

def get_logger(name, level):
    logpath = get_config("log_path")
    if not os.path.exists(logpath):
        os.mkdir(logpath)
    logger = logging.getLogger(name)
    if not bool(logger.handlers):
        formatter = logging.Formatter('%(asctime)s.%(msecs)03d - %(name)s - %(levelname)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
        fh = logging.FileHandler(os.path.join(logpath, f'{get_config("log_file_name")}.log'), mode="w", encoding='utf-8')
        fh.setFormatter(formatter)
        logger.addHandler(fh)
        ch = logging.StreamHandler()
        ch.setFormatter(formatter)
        ch.setLevel(level)
        logger.addHandler(ch)
    return logger

LOGGER = get_logger("MyLogger", logging.INFO)
This is config.json:
{
    "save_path" : "results/",
    "log_path" : "logs/",
    "log_file_name" : "MyLog"
}
Let's say I am using LOGGER from helper in x.py:
from helper import LOGGER

logger = LOGGER

def div(x, y):
    try:
        logger.info("inside div")
        return x / y
    except Exception as e:
        logger.error(f"div failed due to {e.message if 'message' in dir(e) else e}")
I am using this LOGGER in other files by importing helper.LOGGER for logging purposes, but it's not printing anything to the console nor writing to a log file.
My attempt:
I tried adding sys.stdout to StreamHandler(), but it didn't work.
Then I tried setting the level of fh, but nothing worked.
I tried using basicConfig() instead of FileHandler(), but then, when also printing to the console with print(), the log output does not come out in the correct order.
Kindly let me know where I went wrong.
Any help is appreciated :)
Thanks :)
You are not setting the level on the LOGGER, so it falls back to the default WARNING level. This is why your info-level log is not appearing. The Python documentation has a flow chart illustrating when a log record gets logged: https://docs.python.org/3/howto/logging.html#logging-flow
The first thing it shows is that, before a logger sends a record to its handlers, it checks whether that record's level is enabled for the logger. You should add logger.setLevel(level) in your get_logger().
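Concretely, a sketch of where the missing call goes in get_logger() (the handler setup is elided; it stays exactly as in the question):
def get_logger(name, level):
    logpath = get_config("log_path")
    if not os.path.exists(logpath):
        os.mkdir(logpath)
    logger = logging.getLogger(name)
    logger.setLevel(level)   # <-- the missing line; without it the logger stays at the default WARNING
    if not bool(logger.handlers):
        ...                  # formatter/FileHandler/StreamHandler setup as in the question
    return logger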
I am trying to have two different handlers, where one handler will print the logs on the console and the other handler will write the logs to a file. The console handler is provided by the modbus-tk library and I have written my own file handler.
LOG = utils.create_logger(name="console", record_format="%(message)s")   # ---> This is from the modbus-tk library
LOG = utils.create_logger("console", level=logging.INFO)
logging.basicConfig(filename="log", level=logging.DEBUG)
log = logging.getLogger("simulator")
handler = RotatingFileHandler("log",maxBytes=5000,backupCount=1)
log.addHandler(handler)
What I need:
LOG.info("This will print message on console")
log.info("This will print message in file")
But the problem is that both logs are getting printed on the console and both are going to the file. I want only LOG to be printed on the console and log to be written to the file.
edited:
Adding a snippet from utils.create_logger:
def create_logger(name="dummy", level=logging.DEBUG, record_format=None):
    """Create a logger according to the given settings"""
    if record_format is None:
        record_format = "%(asctime)s\t%(levelname)s\t%(module)s.%(funcName)s\t%(threadName)s\t%(message)s"

    logger = logging.getLogger("modbus_tk")
    logger.setLevel(level)
    formatter = logging.Formatter(record_format)
    if name == "udp":
        log_handler = LogitHandler(("127.0.0.1", 1975))
    elif name == "console":
        log_handler = ConsoleHandler()
    elif name == "dummy":
        log_handler = DummyHandler()
    else:
        raise Exception("Unknown handler %s" % name)
    log_handler.setFormatter(formatter)
    logger.addHandler(log_handler)
    return logger
I have my own customized logging module. I have modified it a little and I think it can now solve your problem. It is fully configurable and it can handle multiple different handlers.
If you want to combine console and file logging, you only need to remove the return statement (this is the way I use it).
I have written comments in the code to make it more understandable, and you can find a test section in the if __name__ == "__main__": ... statement.
Code:
import logging
import os

# Custom logger class with multiple destinations
class CustomLogger(logging.Logger):
    """
    Customized Logger class from the original logging.Logger class.
    """

    # Format for console log
    FORMAT = (
        "[%(name)-30s][%(levelname)-19s] | %(message)-100s "
        "| (%(filename)s:%(lineno)d)"
    )

    # Format for log file
    LOG_FILE_FORMAT = "[%(name)s][%(levelname)s] | %(message)s " "| %(filename)s:%(lineno)d)"

    def __init__(
        self,
        name,
        log_file_path=None,
        console_level=logging.INFO,
        log_file_level=logging.DEBUG,
        log_file_open_format="w",
    ):
        logging.Logger.__init__(self, name)

        consol_color_formatter = logging.Formatter(self.FORMAT)

        # If the "log_file_path" parameter is provided,
        # the logs will be visible only in the log file.
        if log_file_path:
            fh_formatter = logging.Formatter(self.LOG_FILE_FORMAT)
            file_handler = logging.FileHandler(log_file_path, mode=log_file_open_format)
            file_handler.setLevel(log_file_level)
            file_handler.setFormatter(fh_formatter)
            self.addHandler(file_handler)
            return

        # If the "log_file_path" parameter is not provided,
        # the logs will be visible only in the console.
        console = logging.StreamHandler()
        console.setLevel(console_level)
        console.setFormatter(consol_color_formatter)
        self.addHandler(console)


if __name__ == "__main__":  # pragma: no cover
    current_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), "test_log.log")
    console_logger = CustomLogger(__file__, console_level=logging.INFO)
    file_logger = CustomLogger(__file__, log_file_path=current_dir, log_file_level=logging.DEBUG)
    console_logger.info("test_to_console")
    file_logger.info("test_to_file")
Console output:
>>> python3 test.py
[test.py][INFO ] | test_to_console | (test.py:55)
Content of test_log.log file:
[test.py][INFO] | test_to_file | test.py:56)
If something is not clear or you have a question/remark, let me know and I will try to help.
EDIT:
If you change getLogger to Logger in your implementation, it will work.
Code:
import logging

def create_logger(name="dummy", level=logging.DEBUG, record_format=None):
    """Create a logger according to the given settings"""
    if record_format is None:
        record_format = "%(asctime)s\t%(levelname)s\t%(module)s.%(funcName)s\t%(threadName)s\t%(message)s"

    logger = logging.Logger("modbus_tk")
    logger.setLevel(level)
    formatter = logging.Formatter(record_format)
    if name == "console":
        log_handler = logging.StreamHandler()
    else:
        raise Exception("Wrong type of handler")
    log_handler.setFormatter(formatter)
    logger.addHandler(log_handler)
    return logger

console_logger = create_logger(name="console")

# logging.basicConfig(filename="log", level=logging.DEBUG)
file_logger = logging.Logger("simulator")
handler = logging.FileHandler("log", "w")
file_logger.addHandler(handler)

console_logger.info("info to console")
file_logger.info("info to file")
Console output:
>>> python3 test.py
2019-12-16 13:10:45,963 INFO test.<module> MainThread info to console
Content of log file:
info to file
There are a few problems in your code, and without seeing the whole configuration it is hard to tell exactly what causes this, but most likely the log records are being propagated.
First of all, when you call basicConfig you are configuring the root logger and telling it to create a FileHandler with the filename log, but just two lines after that you create a RotatingFileHandler that uses the same file. Both loggers are now writing to the same file.
I find it always helps to understand the flow of how logging works in python: https://docs.python.org/3/howto/logging.html#logging-flow
And if you don't want all records to be sent to the root logger as well, you should set LOG.propagate = False. That stops this logger from propagating its records.
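Putting both points together, a minimal sketch of the separation (a plain StreamHandler stands in here for modbus-tk's ConsoleHandler, and the logger names are taken from the snippets above):
import logging
from logging.handlers import RotatingFileHandler

# console logger -- in the real code this is the "modbus_tk" logger configured by utils.create_logger
LOG = logging.getLogger("modbus_tk")
LOG.setLevel(logging.INFO)
LOG.addHandler(logging.StreamHandler())
LOG.propagate = False                  # keep these records away from the root logger

# file logger -- no basicConfig(), so the root logger gets no handler pointing at the same file
log = logging.getLogger("simulator")
log.setLevel(logging.DEBUG)
log.propagate = False
log.addHandler(RotatingFileHandler("log", maxBytes=5000, backupCount=1))

LOG.info("This will print message on console")   # console only
log.info("This will print message in file")      # file only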
I would like to generate a new log file on each iteration of a loop in Python using the logging module. I am analysing data in a for loop, where each iteration of the loop contains information on a new object. I would like to generate a log file per object.
I looked at the docs for the logging module and there is capability to change log file on time intervals or when the log file fills up, but I cannot see how to iteratively generate a new log file with a new name. I know ahead of time how many objects are in the loop.
My imagined pseudo code would be:
import logging

for target in targets:
    logfile_name = f"{target}.log"
    logging.basicConfig(format='%(asctime)s - %(levelname)s : %(message)s',
                        datefmt='%Y-%m/%dT%H:%M:%S',
                        filename=logfile_name,
                        level=logging.DEBUG)

    # analyse target information
    logging.info('log target info...')
However, the logging information is always appended to the first log file, the one for target 1.
Is there a way to force a new log file at the beginning of each loop?
Rather than using logging directly, you need to use logger objects. Go through the docs here.
Create a new logger object as a first statement in the loop. The below is a working solution.
import logging
import sys

def my_custom_logger(logger_name, level=logging.DEBUG):
    """
    Method to return a custom logger with the given name and level
    """
    logger = logging.getLogger(logger_name)
    logger.setLevel(level)
    format_string = ("%(asctime)s — %(name)s — %(levelname)s — %(funcName)s:"
                     "%(lineno)d — %(message)s")
    log_format = logging.Formatter(format_string)

    # Creating and adding the console handler
    console_handler = logging.StreamHandler(sys.stdout)
    console_handler.setFormatter(log_format)
    logger.addHandler(console_handler)

    # Creating and adding the file handler
    file_handler = logging.FileHandler(logger_name, mode='a')
    file_handler.setFormatter(log_format)
    logger.addHandler(file_handler)
    return logger


if __name__ == "__main__":
    for item in range(10):
        logger = my_custom_logger(f"Logger{item}")
        logger.debug(item)
This writes to a different log file for each iteration.
This might not be the best solution, but it will create a new log file for each iteration. What it does is add a new file handler in each iteration.
import logging

targets = ["a", "b", "c"]

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

for target in targets:
    log_file = "{}.log".format(target)
    log_format = "|%(levelname)s| : [%(filename)s]--[%(funcName)s] : %(message)s"
    formatter = logging.Formatter(log_format)

    # create file handler and set the formatter
    file_handler = logging.FileHandler(log_file)
    file_handler.setFormatter(formatter)

    # add handler to the logger
    logger.addHandler(file_handler)

    # sample message
    logger.info("Log file: {}".format(target))
This is not necessarily the best answer, but it worked for my case, and I just wanted to put it here for future reference. I created a function that looks as follows:
def logger(filename, level=None, format=None):
    """A wrapper to the logging python module

    This module is useful for cases where we need to log in a for loop
    different files. It also will allow more flexibility later on how the
    logging format could evolve.

    Parameters
    ----------
    filename : str
        Name of logfile.
    level : str, optional
        Level of logging messages, by default 'info'. Supported are: 'info'
        and 'debug'.
    format : str, optional
        Format of logging messages, by default '%(message)s'.

    Returns
    -------
    logger
        A logger object.
    """
    levels = {"info": logging.INFO, "debug": logging.DEBUG}
    if level is None:
        level = levels["info"]
    else:
        level = levels[level.lower()]

    if format is None:
        format = "%(message)s"

    # https://stackoverflow.com/a/12158233/1995261
    for handler in logging.root.handlers[:]:
        logging.root.removeHandler(handler)

    # basicConfig() itself returns None, so configure the root logger and return it explicitly
    logging.basicConfig(filename=filename, level=level, format=format)
    return logging.getLogger()
As you can see (you might need to scroll down in the code above to see the return statement), I am using logging.basicConfig(). All modules I have in my package that log stuff have the following at the beginning of the files:
import logging
import other stuff

logger = logging.getLogger()

class SomeClass(object):
    def some_method(self):
        logger.info("Whatever")
        .... stuff
When doing a loop, I call things this way:
if __name__ == "__main__":
    for i in range(1, 11, 1):
        directory = "_{}".format(i)
        if not os.path.exists(directory):
            os.makedirs(directory)
        filename = directory + "/training.log"
        logger(filename=filename)
I hope this is helpful.
I'd like to slightly modify #0Nicholas's method. The direction is right, but the first FileHandler will keep logging information into the first log file as long as the function is running. Therefore, we want to pop the handler out of the logger's handlers list:
import logging

targets = ["a", "b", "c"]

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
log_format = "|%(levelname)s| : [%(filename)s]--[%(funcName)s] : %(message)s"
formatter = logging.Formatter(log_format)

for target in targets:
    log_file = f"{target}.log"

    # create file handler and set the formatter
    file_handler = logging.FileHandler(log_file)
    file_handler.setFormatter(formatter)

    # add handler to the logger
    logger.addHandler(file_handler)

    # sample message
    logger.info(f"Log file: {target}")

    # close the log file
    file_handler.close()

    # remove the handler from the logger. The default behavior is to pop out
    # the last added one, which is the file_handler we just added in the
    # beginning of this iteration.
    logger.handlers.pop()
Here is a working version for this problem. I was only able to get it to work if the targets already have the .log extension before going into the loop, so you may want to add one more loop before iterating over targets to give every target the .log extension.
import logging

targets = ["a.log", "b.log", "c.log"]

for target in targets:
    log = logging.getLogger(target)
    formatter = logging.Formatter('%(asctime)s - %(levelname)s : %(message)s', datefmt='%Y-%m/%dT%H:%M:%S')
    fileHandler = logging.FileHandler(target, mode='a')
    fileHandler.setFormatter(formatter)
    streamHandler = logging.StreamHandler()
    streamHandler.setFormatter(formatter)

    log.addHandler(fileHandler)
    log.addHandler(streamHandler)
    log.info('log target info...')
I have lots of code on a project with print statements and wanted to make a quick and dirty logger of these print statements, so I decided to go the custom route.
Here is my logging class.
from datetime import datetime

class Logger(object):
    def __init__(self, stream):
        self.terminal = stream
        self.log = open("test.log", 'a')

    def write(self, message):
        self.terminal.flush()
        self.terminal.write(self.stamp() + message)
        self.log.write(self.stamp() + message)

    def stamp(self):
        d = datetime.today()
        string = d.strftime("[%H:%M:%S] ")
        return string
Notice the stamp method that I then attempt to use in the write method.
When running the following two lines I get an unexpected output:
sys.stdout = Logger(sys.stdout)
print("Hello World!")
Output:
[11:10:47] Hello World![11:10:47]
This is what the output also looks like in the log file; however, I see no reason why the string that I am adding is appended to the end. Can someone help me here?
UPDATE
See the answer below. However, for quicker reference: the issue is using print() in general; replace it with sys.stdout.write after assigning the variable.
Also, use logging for long-term/larger projects right off the bat.
It calls the .write() method of your stream twice because, in CPython, print calls the stream's .write() method twice: the first time with the object, and the second time to write a newline character. For example, look at line 138 in the pprint module in CPython v3.5.2:
def pprint(self, object):
    self._format(object, self._stream, 0, 0, {}, 0)
    self._stream.write("\n") # <- write() called again!
You can test this out:
>>> from my_logger import Logger # my_logger.py has your Logger class
>>> import sys
>>> sys.stdout = Logger(stream=sys.stdout)
>>> sys.stdout.write('hi\n')
[14:05:32] hi
You can replace print(<blah>) everywhere in your code using sed.
$ for mymodule in *.py; do
> sed -i -E "s/print\((.+)\)/LOGGER.debug(\1)/" $mymodule
> done
Check out Python's built-in logging module. It has pretty comprehensive logging, including the inclusion of a timestamp in the format of all messages.
import logging
FORMAT = '%(asctime)-15s %(message)s'
DATEFMT = '%Y-%m-%d %H:%M:%S'
logging.basicConfig(format=FORMAT, datefmt=DATEFMT)
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logger.debug('message: %s', 'message')
This outputs 2016-07-29 11:44:20 message: message to the console (stderr, by default). There are also handlers to send output to files. There is a basic tutorial, an advanced tutorial and a cookbook of common logging recipes.
There is an example of using simultaneous file and console loggers in the cookbook.
import logging
LOGGER = logging.getLogger(__name__) # get logger named for this module
LOGGER.setLevel(logging.DEBUG) # set logger level to debug
# create formatter
LOG_DATEFMT = '%Y-%m-%d %H:%M:%S'
LOG_FORMAT = ('\n[%(levelname)s/%(name)s:%(lineno)d] %(asctime)s ' +
              '(%(processName)s/%(threadName)s)\n> %(message)s')
FORMATTER = logging.Formatter(LOG_FORMAT, datefmt=LOG_DATEFMT)
CH = logging.StreamHandler() # create console handler
CH.setLevel(logging.DEBUG) # set handler level to debug
CH.setFormatter(FORMATTER) # add formatter to ch
LOGGER.addHandler(CH) # add console handler to logger
FH = logging.FileHandler('myapp.log') # create file handler
FH.setLevel(logging.DEBUG) # set handler level to debug
FH.setFormatter(FORMATTER) # add formatter to fh
LOGGER.addHandler(FH) # add file handler to logger
LOGGER.debug('test: %s', 'hi')
This outputs:
[DEBUG/__main__:22] 2016-07-29 12:20:45 (MainProcess/MainThread)
> test: hi
to both console and file myapp.log simultaneously.
You probably need to use a newline character.
class Logger(object):
    def __init__(self, stream):
        self.terminal = stream
        self.log = open("test.log", 'a')

    def write(self, message):
        self.terminal.flush()
        self.terminal.write(self.stamp() + message + "\n")
        self.log.write(self.stamp() + message + "\n")

    def stamp(self):
        d = datetime.today()
        string = d.strftime("[%H:%M:%S] ")
        return string
Anyway, using the built-in logging module will be better.
In my code I get a logger from my client, then I do stuff and log my analysis to the logger.
I want to add my own prefix to the logger but I don't want to create my own formatter, just to add my prefix to the existing one.
In addition I want to remove my prefix once my code is done.
From looking at the documentation, I could only find ways to create a new formatter, but not to modify an existing one. Is there a way to do so?
You are correct. As per the Python 3 and Python 2 documentation, there is no way to reset the format on an existing formatter object, and you do need to create a new logging.Formatter object. However, looking at the object at runtime, there is a _fmt attribute holding the existing format string, and it seems tweaking it works. I tried it in 2.7 and it works. Below is the example.
Example code for Python 2.7:
import logging
logger = logging.getLogger('something')
myFormatter = logging.Formatter('%(asctime)s - %(message)s')
handler = logging.StreamHandler()
handler.setFormatter(myFormatter)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
logger.info("log statement here")
#Tweak the formatter
myFormatter._fmt = "My PREFIX -- " + myFormatter._fmt
logger.info("another log statement here")
Output:
2015-03-11 12:51:36,605 - log statement here
My PREFIX -- 2015-03-11 12:51:36,605 - another log statement here
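Since the question also asks to remove the prefix once the code is done, the same tweak can be reversed by saving the original format string first. A small sketch continuing the example above (note, as an aside, that on Python 3 the formatter delegates to an internal _style object, so that attribute may need the same tweak):
# save, prefix, then restore the format string
original_fmt = myFormatter._fmt
myFormatter._fmt = "My PREFIX -- " + original_fmt
# on Python 3 you may also need: myFormatter._style._fmt = myFormatter._fmt

logger.info("prefixed log statement here")

myFormatter._fmt = original_fmt        # prefix removed again
# and on Python 3: myFormatter._style._fmt = original_fmt
logger.info("back to the original format")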
This can be achieved with logging.LoggerAdapter
import logging

class CustomAdapter(logging.LoggerAdapter):
    def process(self, msg, kwargs):
        return f"[my prefix] {msg}", kwargs

logger = CustomAdapter(logging.getLogger(__name__))
Please note that only the message will be affected. But this technique can be used for more complicated cases.
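A short usage sketch (logger names and messages are illustrative; the extra argument is passed explicitly for compatibility with Python versions where it is required). Because the adapter wraps the logger rather than changing it, dropping the prefix later just means logging through the plain logger again:
import logging

logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(message)s')

class CustomAdapter(logging.LoggerAdapter):
    def process(self, msg, kwargs):
        return f"[my prefix] {msg}", kwargs

plain = logging.getLogger(__name__)
prefixed = CustomAdapter(plain, {})

prefixed.info("log statement here")        # ... - [my prefix] log statement here
plain.info("another log statement here")   # ... - another log statement here (no prefix)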
You can actually set the format through basicConfig; it is mentioned in the Python documentation: https://docs.python.org/2/howto/logging-cookbook.html#context-info
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
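Note that %(ip)s and %(user)s are not standard LogRecord attributes; as the linked cookbook section explains, they have to be supplied per call through extra (the values below are just examples), and every record formatted with this string needs to supply them:
import logging

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
logger = logging.getLogger('tcpserver')

# the extra dict fills in the custom ip/user fields referenced by the format string
logger.warning('Protocol problem: %s', 'connection reset',
               extra={'ip': '192.168.0.1', 'user': 'fbloggs'})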