I'm an admitted noob to Python. I've written a little logger that takes data from the serial port and writes it to a log file. I've got a small procedure that opens the file for append, writes, then closes. I suspect this might not be the best way to do it, but it's what I've figured out so far.
I'd like to be able to have it automagically perform a log-rotate at 00 UTC, but so far, my attempts to do this with RotatingFileHandler have failed.
Here's what the code looks like:
import time, serial, logging, logging.handlers, os, sys
from datetime import *
CT12 = serial.Serial()
CT12.port = "/dev/ct12k"
CT12.baudrate = 2400
CT12.parity = 'E'
CT12.bytesize = 7
CT12.stopbits = 1
CT12.timeout = 3
logStart = datetime.now()
dtg = datetime.strftime(logStart, '%Y-%m-%d %H:%M:%S ')
ctlA = unichr(1)
bom = unichr(2)
eom = unichr(3)
bel = unichr(7)
CT12Name = [ctlA, 'CT12-NWC-test']
CT12Header = ['-Ceilometer Logfile \r\n', '-File created: ', dtg, '\r\n']
def write_ceilo(text):
    f = open('/data/CT12.log', 'a')
    f.write(text)
    f.close()
write_ceilo(''.join(CT12Header))
CT12.open()
discard = CT12.readlines()
#print (discard)
while CT12.isOpen():
    response = CT12.readline()
    if len(response) >= 3:
        if response[0] == '\x02':
            now = datetime.now()
            dtg = datetime.strftime(now, '-%Y-%m-%d %H:%M:%S\r\n')
            write_ceilo(dtg)
            write_ceilo(''.join(CT12Name))
            write_ceilo(response)
What can I do to make this rotate automatically, affixing either a date of rotation, or a serial number, for identification. I'm not looking to rotate any of these out, just keep a daily log file of the data. (or maybe an hourly file?)
For anyone arriving via Google: please don't move the log file out from under the logger while it is in use, e.g. by calling the system copy or move commands.
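If an external tool such as logrotate must move the file anyway, the standard library provides logging.handlers.WatchedFileHandler for exactly that case. A minimal sketch, using a placeholder path:

```python
import logging
from logging.handlers import WatchedFileHandler

# WatchedFileHandler reopens the log file if it detects that the file
# has been moved or deleted (it checks the device/inode on each emit),
# so an external tool such as logrotate can rotate it safely. Unix only.
# '/tmp/watched_demo.log' is a placeholder path.
handler = WatchedFileHandler('/tmp/watched_demo.log')
handler.setFormatter(logging.Formatter('%(asctime)s %(message)s'))

logger = logging.getLogger('watched_demo')
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info('still writing to the right file')
```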
What you are looking for is a TimedRotatingFileHandler:
import time
import logging
from logging.handlers import TimedRotatingFileHandler
# format the log entries
formatter = logging.Formatter('%(asctime)s %(name)s %(levelname)s %(message)s')
handler = TimedRotatingFileHandler('/path/to/logfile.log',
                                   when='midnight',
                                   backupCount=10)
handler.setFormatter(formatter)
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
# generate example messages
for i in range(10000):
    time.sleep(1)
    logger.debug('debug message')
    logger.info('informational message')
    logger.warning('warning')
    logger.error('error message')
    logger.critical('critical failure')
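The original question asked for rotation at 00 UTC. when='midnight' rolls over at local midnight by default; passing utc=True makes both the rollover time and the date suffix use UTC instead. A sketch with a placeholder path:

```python
import logging
from logging.handlers import TimedRotatingFileHandler

# when='midnight' rolls over at local midnight by default;
# utc=True makes the rollover (and the date suffix) use UTC,
# which matches the goal of rotating at 00 UTC.
# '/tmp/ceilo_demo.log' is a placeholder path.
handler = TimedRotatingFileHandler('/tmp/ceilo_demo.log',
                                   when='midnight',
                                   backupCount=30,
                                   utc=True)
handler.setFormatter(logging.Formatter('%(asctime)s %(message)s'))

logger = logging.getLogger('ceilo_demo')
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info('rotates at 00 UTC')
```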
You can simply do this:
import os
import time
date1 = time.strftime('%Y%m%d%H%M%S')
cmd1 = "cp logfile logfile{0}".format(date1)
cmd2 = "cat /dev/null > logfile"
os.system(cmd1)
os.system(cmd2)
'logfile' is the name of the file. I copied the old log to a new log file with a name based on the date and time, and then emptied the original file. If you want to rotate it every hour, run this script from cron.
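Roughly the same thing can be done in pure Python without shelling out, using shutil. A sketch (with the same caveat as the earlier warning: don't do this to a file that an active logging handler holds open):

```python
import shutil
import time

def rotate_copy(logfile):
    """Copy the current log to a timestamped name, then truncate it.

    Pure-Python equivalent of the cp/cat shell commands above.
    Note: as cautioned earlier, don't do this to a file that a
    logging handler currently holds open.
    """
    stamp = time.strftime('%Y%m%d%H%M%S')
    shutil.copy2(logfile, '{}{}'.format(logfile, stamp))
    open(logfile, 'w').close()  # truncate in place

# Example with a throwaway file:
with open('/tmp/rotate_demo.log', 'w') as f:
    f.write('some log data\n')
rotate_copy('/tmp/rotate_demo.log')
```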
For anyone who does not like the idea of rotating files, but simply wants to use a file handler that immediately writes to a file with a specific date in its name: it is not hard to write your own handler. Here is an example:
from datetime import datetime
from logging import FileHandler, LogRecord

class FileHandlerWithOneFilePerPeriod(FileHandler):
    """A handler which writes formatted logging records to files, one file per period."""

    def __init__(self, filename_pattern, mode='a', encoding=None, delay=False):
        """
        Constructs the file handler.

        :param filename_pattern: the filename. Use strftime() directives to specify the format of the period.
            For example, %Y%m%d can be used to generate one log file per day.
        :param mode: the mode to open the file before writing. Common values are 'w' for writing (truncating the
            file if it already exists), 'x' for creating and writing to a new file, and 'a' for appending (which
            on some Unix systems means that all writes append to the end of the file regardless of the current
            seek position).
        :param encoding: the name of the encoding used to decode or encode the file. This should only be used in
            text mode.
        :param delay: True if the file is opened when the first log message is emitted; False if the file is
            opened now by the constructor.
        """
        self.filename_pattern = filename_pattern
        filename = datetime.now().strftime(self.filename_pattern)
        super().__init__(filename, mode, encoding, delay)

    def emit(self, record: LogRecord):
        new_filename = datetime.fromtimestamp(record.created).strftime(self.filename_pattern)
        if self.stream is None:
            self.set_new_filename(new_filename)
        elif self.differs_from_current_filename(new_filename):
            self.close()
            self.set_new_filename(new_filename)
        super().emit(record)

    def set_new_filename(self, new_filename):
        self.baseFilename = new_filename

    def differs_from_current_filename(self, filename: str) -> bool:
        return filename != self.baseFilename
To use this handler, configure it with the following values in a dictionary (using logging.config.dictConfig()):
version: 1
formatters:
  simple:
    format: '%(asctime)s %(name)s %(levelname)s %(message)s'
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
    stream: ext://sys.stdout
  file:
    class: my_package.my_module.FileHandlerWithOneFilePerPeriod
    level: DEBUG
    formatter: simple
    filename_pattern: my_logging-%Y%m%d.log
root:
  level: DEBUG
  handlers: [console, file]
This will log to console and to file. One file per day is used. Change my_package and my_module to match the module where you put the handler. Change my_logging to a more appropriate name.
By changing the date pattern in filename_pattern you control when new files are created: whenever formatting a record's timestamp with the pattern yields a filename different from the previous record's, a new file is created.
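For a runnable illustration, the handler can also be attached programmatically instead of via dictConfig. Below is a condensed copy of the class above, attached directly to a logger; the /tmp path is an assumption:

```python
import logging
from datetime import datetime
from logging import FileHandler, LogRecord

class FileHandlerWithOneFilePerPeriod(FileHandler):
    """Condensed copy of the handler defined above, for a runnable demo."""
    def __init__(self, filename_pattern, mode='a', encoding=None, delay=False):
        self.filename_pattern = filename_pattern
        filename = datetime.now().strftime(self.filename_pattern)
        super().__init__(filename, mode, encoding, delay)

    def emit(self, record: LogRecord):
        new_filename = datetime.fromtimestamp(record.created).strftime(self.filename_pattern)
        if self.stream is None:
            self.baseFilename = new_filename
        elif new_filename != self.baseFilename:
            self.close()
            self.baseFilename = new_filename
        super().emit(record)

# Programmatic setup, equivalent to the dictConfig file handler above:
logger = logging.getLogger('period_demo')
logger.setLevel(logging.DEBUG)
logger.addHandler(FileHandlerWithOneFilePerPeriod('/tmp/my_logging-%Y%m%d.log'))
logger.debug('one file per day')
```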
I set up TimedRotatingFileHandler like this:
import logging as logging
from logging.handlers import TimedRotatingFileHandler
import os
import time
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
# new file every minute
rotation_logging_handler = TimedRotatingFileHandler('logs/log',
                                                    when='m',
                                                    interval=1,
                                                    backupCount=5)
rotation_logging_handler.setLevel(logging.DEBUG)
format = u'%(asctime)s\t%(levelname)s\t%(filename)s:%(lineno)d\t%(message)s'
rotation_logging_handler.setFormatter(logging.Formatter(format))
rotation_logging_handler.suffix = '%Y-%m-%d'
logger.addHandler(rotation_logging_handler)
Usage:
logger.info('Service started at port %s', config.get_param('port'))

while True:
    time.sleep(21)
    logger.info('Now time is {}'.format(time.time()))
I expected that every minute the new messages from logs/log would be appended to the existing log file for the current date. Instead, every minute the messages from logs/log overwrote the existing log file for the current date.
What should I do to reach that behaviour?
PS: After a little research I found that TimedRotatingFileHandler, in its doRollover method, deletes the existing log file and creates a new one. So a first solution is to create a new handler derived from TimedRotatingFileHandler which creates a new file (with some index, for example) instead of deleting the existing log file.
It is very much possible to change the filenames when they are rotated to anything you want, by overriding the rotation_filename method of the BaseRotatingHandler class and setting it to an appropriate callable.
Here is a very trivial example of doing the same, but you can tweak it to suit your needs.
import logging
from logging.handlers import TimedRotatingFileHandler
import datetime as dt
def filer(default_name):
    # receives the default rotated name; we ignore it and build our own
    now = dt.datetime.now()
    return 'file.txt' + now.strftime("%Y-%m-%d_%H:%M:%S")
logger = logging.getLogger()
rotating_file_handler = TimedRotatingFileHandler(filename="/Users/rbhanot/file.txt",
                                                 when='S',
                                                 interval=2,
                                                 backupCount=5)
rotating_file_handler.rotation_filename = filer
formatter = logging.Formatter(
    '%(asctime)s %(name)s:%(levelname)s - %(message)s')
rotating_file_handler.setFormatter(formatter)
logger.addHandler(rotating_file_handler)
logger.setLevel(logging.DEBUG)
logger.info("hello")
Here is the output of
❯ ls file*
file.txt file.txt2020-10-06_13:12:13 file.txt2020-10-06_13:12:15 file.txt2020-10-06_13:13:45
After a little more research I found the BaseRotatingHandler.namer attribute, which is used in the BaseRotatingHandler.rotation_filename method:
The default implementation calls the 'namer' attribute of the handler, if it's callable, passing the default name to it. If the attribute isn't callable (the default is None), the name is returned unchanged.
So as a solution I implemented my own namer function that takes the default filename and returns a new filename matching my template:
20181231.log
20181231.0.log
20181231.1.log
etc.
Full example:
import logging as logging
from logging.handlers import TimedRotatingFileHandler
import os
import time
def get_filename(filename):
    # Get logs directory
    log_directory = os.path.split(filename)[0]

    # Get the file extension (it is also the suffix's value, e.g. ".20181231") without the dot
    date = os.path.splitext(filename)[1][1:]

    # Create new file name
    filename = os.path.join(log_directory, date)

    # I don't want to add an index if only one log file exists for the date
    if not os.path.exists('{}.log'.format(filename)):
        return '{}.log'.format(filename)

    # Create new file name with index
    index = 0
    f = '{}.{}.log'.format(filename, index)
    while os.path.exists(f):
        index += 1
        f = '{}.{}.log'.format(filename, index)
    return f
format = u'%(asctime)s\t%(levelname)s\t%(filename)s:%(lineno)d\t%(message)s'
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
# new file every minute
rotation_logging_handler = TimedRotatingFileHandler('logs/log',
                                                    when='m',
                                                    interval=1,
                                                    backupCount=5)
rotation_logging_handler.setLevel(logging.DEBUG)
rotation_logging_handler.setFormatter(logging.Formatter(format))
rotation_logging_handler.suffix = '%Y%m%d'
rotation_logging_handler.namer = get_filename
logger.addHandler(rotation_logging_handler)
According to the documentation of TimedRotatingFileHandler:
The system will save old log files by appending extensions to the
filename. The extensions are date-and-time based, using the strftime
format %Y-%m-%d_%H-%M-%S or a leading portion thereof, depending on
the rollover interval.
In other words: by modifying suffix you are breaking the rollover. Just leave it at the default and Python will create files called:
logs/log2018-02-02-01-30
logs/log2018-02-02-01-31
logs/log2018-02-02-01-32
logs/log2018-02-02-01-33
logs/log2018-02-02-01-34
And after this (if backupCount=5) it will delete the -30 one and create the -35.
If you instead want to have names like :
logs/log2018-02-02-01.0
logs/log2018-02-02-01.1
logs/log2018-02-02-01.2
logs/log2018-02-02-01.3
logs/log2018-02-02-01.4
Where 0 is the newest one and .4 is the oldest one, then yes, that handler has not been designed to do that.
I have a Python program that runs daily. I'm using the logging module with FileHandler to write logs to a file. I would like each run's logs to be in its own file with a timestamp. However, I want to delete old files (say > 3 months) to avoid filling the disk.
I've looked at the RotatingFileHandler and TimedRotatingFileHandler but I don't want a single run's logs to be split across multiple files, even if a single run were to take days. Is there a built-in method for that?
The logging module has a built in TimedRotatingFileHandler:
# import modules
import logging
from logging.handlers import TimedRotatingFileHandler
from logging import Formatter
# get named logger
logger = logging.getLogger(__name__)
# create handler
handler = TimedRotatingFileHandler(filename='runtime.log', when='D', interval=1, backupCount=90, encoding='utf-8', delay=False)
# create formatter and add to handler
formatter = Formatter(fmt='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
# add the handler to named logger
logger.addHandler(handler)
# set the logging level
logger.setLevel(logging.INFO)
# --------------------------------------
# log something
logger.info("test")
Old logs automatically get a timestamp appended.
Every day a new backup will be created.
If more than 91 (current+backups) files exist the oldest will be deleted.
import logging
import time
from logging.handlers import RotatingFileHandler
logFile = 'test-' + time.strftime("%Y%m%d-%H%M%S")+ '.log'
logger = logging.getLogger('my_logger')
handler = RotatingFileHandler(logFile, mode='a', maxBytes=50*1024*1024,
                              backupCount=5, encoding=None, delay=False)
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)
for _ in range(10000):
    logger.debug("Hello, world!")
As suggested by @MartijnPieters in this question, you could easily extend the FileHandler class to handle your own deletion logic.
For example, my class will hold only the last "backup_count" files.
import os
import re
import datetime
import logging
from itertools import islice
class TimedPatternFileHandler(logging.FileHandler):
    """File handler that uses the current time for the log filename,
    by formatting the current datetime, according to filename_pattern, using
    the strftime function.

    If backup_count is non-zero, then older filenames that match the base
    filename are deleted to only leave the backup_count most recent copies,
    whenever opening a new log file with a different name.
    """
    def __init__(self, filename_pattern, mode, backup_count):
        self.filename_pattern = os.path.abspath(filename_pattern)
        self.backup_count = backup_count
        self.filename = datetime.datetime.now().strftime(self.filename_pattern)

        delete = islice(self._matching_files(), self.backup_count, None)
        for entry in delete:
            # print(entry.path)
            os.remove(entry.path)

        super().__init__(filename=self.filename, mode=mode)

    @property
    def filename(self):
        """Generate the 'current' filename to open"""
        # use the start of *this* interval, not the next
        return datetime.datetime.now().strftime(self.filename_pattern)

    @filename.setter
    def filename(self, _):
        pass

    def _matching_files(self):
        """Generate DirEntry entries that match the filename pattern.

        The files are ordered by their last modification time, most recent
        files first.
        """
        matches = []
        basename = os.path.basename(self.filename_pattern)
        pattern = re.compile(re.sub('%[a-zA-Z]', '.*', basename))

        for entry in os.scandir(os.path.dirname(self.filename_pattern)):
            if not entry.is_file():
                continue
            entry_basename = os.path.basename(entry.path)
            if re.match(pattern, entry_basename):
                matches.append(entry)
        matches.sort(key=lambda e: e.stat().st_mtime, reverse=True)
        return iter(matches)

def create_timed_rotating_log(path):
    logger = logging.getLogger("Rotating Log")
    logger.setLevel(logging.INFO)

    handler = TimedPatternFileHandler('{}_%H-%M-%S.log'.format(path), mode='a', backup_count=5)
    logger.addHandler(handler)
    logger.info("This is a test!")
Get the date/time (see this answer on how to get the timestamp). If the file is more than 3 months older than the current date, then delete it with:
import os
os.remove("filename.extension")
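The age check and os.remove step above could be sketched like this; 90 days stands in for "3 months", and the directory path is a placeholder:

```python
import os
import time

def purge_old_logs(log_dir, max_age_days=90):
    """Remove files in log_dir whose modification time is older than
    max_age_days (roughly 3 months). log_dir is a placeholder path."""
    cutoff = time.time() - max_age_days * 24 * 60 * 60
    for name in os.listdir(log_dir):
        path = os.path.join(log_dir, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)

# Example with a throwaway directory:
os.makedirs('/tmp/purge_demo', exist_ok=True)
old = '/tmp/purge_demo/old.log'
new = '/tmp/purge_demo/new.log'
open(old, 'w').close()
open(new, 'w').close()
os.utime(old, (time.time() - 100 * 86400,) * 2)  # backdate by 100 days
purge_old_logs('/tmp/purge_demo')
```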
Package this script with py2exe, then just use any task scheduler to run the job at startup.
Windows: open the run command and enter shell:startup, then place your exe in here.
On OSX: the old way used to be to create a cron job; in my experience this no longer works in many cases, but it is still worth trying. The newer way recommended by Apple is CreatingLaunchdJobs. You can also refer to this topic for a more detailed explanation.
I want to change the way that the rotating file handler names files.
For example, if I use RotatingFileHandler, it separates the log file when it reaches a specific file size naming "log file name + extension numbering", like below.
filename.log #first log file
filename.log.1 #rotating log file1
filename.log.2 #rotating log file2
However, I want the log handler to name each file after the time it was created.
For example.
09-01-12-20.log #first log file
09-01-12-43.log #rotating log file1
09-01-15-00.log #rotating log file2
How can I do this?
Edit:
I am not asking how to create and name a file.
I want to do this within the Python logging package, by inheriting from and overriding its handlers.
I inherit and override RotatingFileHandler of python logging handler.
RotatingFileHandler has a self.baseFilename attribute, which the handler uses to create the log file (both when it first creates the file and when a rollover happens).
Its shouldRollover() method checks whether the handler should roll over the log file: it returns 1 if a rollover should happen, and 0 otherwise.
By overriding these, I define when this handler rolls over and which name is used for the new log file created by the rollover.
Edit:
I post the example code.
import datetime
import os
from logging import handlers

class DailyRotatingFileHandler(handlers.RotatingFileHandler):
    def __init__(self, alias, basedir, mode='a', maxBytes=0, backupCount=0, encoding=None, delay=0):
        """
        @summary:
        Set self.baseFilename to the date string of today.
        The handler creates a logFile named self.baseFilename
        """
        self.basedir_ = basedir
        self.alias_ = alias

        self.baseFilename = self.getBaseFilename()

        handlers.RotatingFileHandler.__init__(self, self.baseFilename, mode, maxBytes, backupCount, encoding, delay)

    def getBaseFilename(self):
        """
        @summary: Return logFile name string formatted to "today.log.alias"
        """
        self.today_ = datetime.date.today()
        basename_ = self.today_.strftime("%Y-%m-%d") + ".log" + '.' + self.alias_
        return os.path.join(self.basedir_, basename_)

    def shouldRollover(self, record):
        """
        @summary:
        Rollover happens
        1. When the logFile size gets over maxBytes.
        2. When the date changes.
        @see: BaseRotatingHandler.emit
        """
        if self.stream is None:
            self.stream = self._open()

        if self.maxBytes > 0:
            msg = "%s\n" % self.format(record)
            self.stream.seek(0, 2)
            if self.stream.tell() + len(msg) >= self.maxBytes:
                return 1

        if self.today_ != datetime.date.today():
            self.baseFilename = self.getBaseFilename()
            return 1

        return 0
This DailyRotatingFileHandler will create logfile like
2016-10-05.log.alias
2016-10-05.log.alias.1
2016-10-05.log.alias.2
2016-10-06.log.alias
2016-10-06.log.alias.1
2016-10-07.log.alias.1
Check the following code and see if it helps. As far as I understand your question, if your issue is generating a filename based on a timestamp, then this should work for you.
import datetime, time

# This returns the epoch timestamp
epochTime = time.time()

# Generate the timestamp in the needed format
timeStamp = datetime.datetime \
    .fromtimestamp(epochTime) \
    .strftime('%Y-%m-%d-%H-%M')

# Create a log file, using timeStamp as the filename
fo = open(timeStamp + ".log", "w")  # "w", not "wb": we are writing text
fo.write("Log data")
fo.close()
I have lots of code on a project with print statements and wanted to make a quick and dirty logger of these print statements, so I decided to go the custom route. I managed to put together a logger that prints both to the terminal and to a file (with the help of this site), but now I want to add a simple timestamp to each statement and I am running into a weird issue.
Here is my logging class.
from datetime import datetime

class Logger(object):
    def __init__(self, stream):
        self.terminal = stream
        self.log = open("test.log", 'a')

    def write(self, message):
        self.terminal.flush()
        self.terminal.write(self.stamp() + message)
        self.log.write(self.stamp() + message)

    def stamp(self):
        d = datetime.today()
        string = d.strftime("[%H:%M:%S] ")
        return string
Notice the stamp method that I then attempt to use in the write method.
When running the following two lines I get an unexpected output:
sys.stdout = Logger(sys.stdout)
print("Hello World!")
Output:
[11:10:47] Hello World![11:10:47]
The output looks the same in the log file. I see no reason why the string I am adding is also appended to the end. Can someone help me here?
UPDATE
See the answer below. However, for quicker reference: the issue is using print() in general; replace it with sys.stdout.write after assigning the variable.
Also, use the logging module for long-term/larger projects right off the bat.
It calls the .write() method of your stream twice because in CPython print calls the stream's .write() method twice: the first time with the object, and the second time with a newline character. For example, look at line 138 in the pprint module in CPython v3.5.2:
def pprint(self, object):
    self._format(object, self._stream, 0, 0, {}, 0)
    self._stream.write("\n")  # <- write() called again!
You can test this out:
>>> from my_logger import Logger # my_logger.py has your Logger class
>>> import sys
>>> sys.stdout = Logger(stream=sys.stdout)
>>> sys.stdout.write('hi\n')
[14:05:32] hi
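Given that print() makes two write() calls, one possible fix is to prepend the stamp only at the start of a line. A sketch building on the Logger class above (the at_line_start flag is an addition, not part of the original):

```python
import sys
from datetime import datetime

class Logger(object):
    """Variant of the Logger above that only prepends the timestamp at
    the start of a line, so print()'s separate '\n' write does not get
    its own stamp."""
    def __init__(self, stream):
        self.terminal = stream
        self.log = open("test.log", 'a')
        self.at_line_start = True

    def write(self, message):
        # only stamp non-empty writes that begin a new line
        if self.at_line_start and message.strip():
            message = self.stamp() + message
        self.at_line_start = message.endswith('\n')
        self.terminal.write(message)
        self.log.write(message)

    def flush(self):
        self.terminal.flush()
        self.log.flush()

    def stamp(self):
        return datetime.today().strftime("[%H:%M:%S] ")

log_capture = Logger(sys.stdout)
sys.stdout = log_capture
print("Hello World!")              # stamped once; the trailing newline is left alone
sys.stdout = log_capture.terminal  # restore stdout
log_capture.log.close()            # flush what we wrote to test.log
```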
You can replace print(<blah>) everywhere in your code using sed.
$ for mymodule in *.py; do
> sed -i -E "s/print\((.+)\)/LOGGER.debug(\1)/" $mymodule
> done
Check out Python's built-in logging module. It has pretty comprehensive logging, including the inclusion of a timestamp in every message's format.
import logging
FORMAT = '%(asctime)-15s %(message)s'
DATEFMT = '%Y-%m-%d %H:%M:%S'
logging.basicConfig(format=FORMAT, datefmt=DATEFMT)
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logger.debug('message: %s', 'message')
This outputs 2016-07-29 11:44:20 message: message to stdout. There are also handlers to send output to files. There is a basic tutorial, an advanced tutorial and a cookbook of common logging recipes.
There is an example of using simultaneous file and console loggers in the cookbook.
import logging
LOGGER = logging.getLogger(__name__) # get logger named for this module
LOGGER.setLevel(logging.DEBUG) # set logger level to debug
# create formatter
LOG_DATEFMT = '%Y-%m-%d %H:%M:%S'
LOG_FORMAT = ('\n[%(levelname)s/%(name)s:%(lineno)d] %(asctime)s ' +
              '(%(processName)s/%(threadName)s)\n> %(message)s')
FORMATTER = logging.Formatter(LOG_FORMAT, datefmt=LOG_DATEFMT)
CH = logging.StreamHandler() # create console handler
CH.setLevel(logging.DEBUG) # set handler level to debug
CH.setFormatter(FORMATTER) # add formatter to ch
LOGGER.addHandler(CH) # add console handler to logger
FH = logging.FileHandler('myapp.log') # create file handler
FH.setLevel(logging.DEBUG) # set handler level to debug
FH.setFormatter(FORMATTER) # add formatter to fh
LOGGER.addHandler(FH) # add file handler to logger
LOGGER.debug('test: %s', 'hi')
This outputs:
[DEBUG/__main__:22] 2016-07-29 12:20:45 (MainProcess/MainThread)
> test: hi
to both console and file myapp.log simultaneously.
You probably need to add a newline character:
class Logger(object):
    def __init__(self, stream):
        self.terminal = stream
        self.log = open("test.log", 'a')

    def write(self, message):
        self.terminal.flush()
        self.terminal.write(self.stamp() + message + "\n")
        self.log.write(self.stamp() + message + "\n")

    def stamp(self):
        d = datetime.today()
        string = d.strftime("[%H:%M:%S] ")
        return string
Anyway, using the built-in logging module would be better.
import logging
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(levelname)s %(message)s', datefmt='%H:%M:%S')
logging.info('hello')
logging.warning('\n new hello')
11:15:01 INFO hello
11:16:49 WARNING
new hello
Because the log is crowded, I want to explicitly insert a newline before asctime and levelname. Is this possible without modifying format?
I looked into logging module and googled a bit and could not find a viable way.
I have two solutions, the first is very easy, but the output is not very clean. The second method will produce the exact output you want, but it is a little more involved.
Method 1
To produce a blank line, just log an empty string with a new line:
import logging
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(levelname)s %(message)s', datefmt='%H:%M:%S')
logging.info('hello')
logging.info('\n')
logging.warning('new hello')
The output will have an empty info line, which is not very clean:
16:07:26 INFO hello
16:07:26 INFO
16:07:26 WARNING new hello
Method 2
In this method, I create two different handlers: console_handler, which I use most of the time, and blank_handler, which I switch to when I need a new line.
import logging
import types
def log_newline(self, how_many_lines=1):
    # Switch handler, output a blank line
    self.removeHandler(self.console_handler)
    self.addHandler(self.blank_handler)
    for i in range(how_many_lines):
        self.info('')

    # Switch back
    self.removeHandler(self.blank_handler)
    self.addHandler(self.console_handler)
def create_logger():
    # Create a handler
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.DEBUG)
    console_handler.setFormatter(logging.Formatter(fmt="%(name)s %(levelname)-8s: %(message)s"))

    # Create a "blank line" handler
    blank_handler = logging.StreamHandler()
    blank_handler.setLevel(logging.DEBUG)
    blank_handler.setFormatter(logging.Formatter(fmt=''))

    # Create a logger, with the previously-defined handler
    logger = logging.getLogger('logging_test')
    logger.setLevel(logging.DEBUG)
    logger.addHandler(console_handler)

    # Save some data and add a method to logger object
    logger.console_handler = console_handler
    logger.blank_handler = blank_handler
    logger.newline = types.MethodType(log_newline, logger)

    return logger

if __name__ == '__main__':
    logger = create_logger()
    logger.info('Start reading database')
    logger.info('Updating records ...')
    logger.newline()
    logger.info('Finish updating records')
The output is what you want to see:
logging_test INFO : Start reading database
logging_test INFO : Updating records ...
logging_test INFO : Finish updating records
Discussion
If you can put up with the less-than-perfect output, method 1 is the way to go. It has the advantage of being simple and requiring the least effort.
The second method does the job correctly, but it is a little more involved: it creates two different handlers and switches between them to achieve your goal.
Another disadvantage of method 2 is that you have to change your code, searching for logging and replacing it with logger. Take care to replace only the relevant parts and leave text such as logging.DEBUG intact.
Could you not add the newline after the first hello? i.e.
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(levelname)s %(message)s', datefmt='%H:%M:%S')
logging.info('hello\n')
logging.info('new hello')
Which will output
2014-08-06 11:37:24,061 INFO : hello
2014-08-06 11:37:24,061 INFO : new hello
Easiest way to insert newlines that I figured out:
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(levelname)s\n\r%(message)s', datefmt='%H:%M:%S')
logging.info('hello')
logging.info('new hello')
11:50:32 INFO
hello
11:50:32 INFO
new hello
Use a custom Formatter which uses different format strings at different times. You can't do this using basicConfig() - you'll have to use other parts of the logging API.
class MyFormatter(logging.Formatter):
    def format(self, record):
        # set self._fmt to a value with or without the newline,
        # as per your decision criteria:
        # self._fmt = ...
        return super(MyFormatter, self).format(record)
Or, you can call the super method, then modify the string to insert a newline before returning it (in case it's dependent on line length, say).
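Here is a hedged sketch of that second variant: calling the super method, then modifying the returned string. The length threshold is an arbitrary example criterion, not part of the original suggestion:

```python
import logging

class NewlineFormatter(logging.Formatter):
    """Sketch of the second variant: format via the parent class, then
    insert a newline before the header when the message is long."""
    def format(self, record):
        formatted = super(NewlineFormatter, self).format(record)
        if len(record.getMessage()) > 40:  # arbitrary example threshold
            formatted = '\n' + formatted
        return formatted

handler = logging.StreamHandler()
handler.setFormatter(NewlineFormatter('%(asctime)s %(levelname)s %(message)s', datefmt='%H:%M:%S'))
logger = logging.getLogger('newline_demo')
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)
logger.info('short message')
logger.info('a much longer message that spills past the threshold')
```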
As an alternative to Hai Vu's Method 2 you could as well reset the handler's Formatter every time you want to log a new line:
import logging
import types
def log_newline(self, how_many_lines=1):
    # Switch formatter, output a blank line
    self.handler.setFormatter(self.blank_formatter)
    for i in range(how_many_lines):
        self.info('')

    # Switch back
    self.handler.setFormatter(self.formatter)

def create_logger():
    # Create a handler
    handler = logging.StreamHandler()
    handler.setLevel(logging.DEBUG)
    formatter = logging.Formatter(fmt="%(name)s %(levelname)-8s: %(message)s")
    blank_formatter = logging.Formatter(fmt="")
    handler.setFormatter(formatter)

    # Create a logger, with the previously-defined handler
    logger = logging.getLogger('logging_test')
    logger.setLevel(logging.DEBUG)
    logger.addHandler(handler)

    # Save some data and add a method to logger object
    logger.handler = handler
    logger.formatter = formatter
    logger.blank_formatter = blank_formatter
    logger.newline = types.MethodType(log_newline, logger)

    return logger

if __name__ == '__main__':
    logger = create_logger()
    logger.info('Start reading database')
    logger.info('Updating records ...')
    logger.newline()
    logger.info('Finish updating records')
Output
logging_test INFO : Start reading database
logging_test INFO : Updating records ...
logging_test INFO : Finish updating records
The advantage of this is that you have a single handler. For example, you can set a FileHandler's mode attribute to 'w' if you want to clear the log file on every new run of your program.
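For illustration, that mode argument could look like this (a minimal sketch with a placeholder path):

```python
import logging

# mode='w' truncates the log file each time the program starts,
# so every run begins with a clean file.
# '/tmp/run_demo.log' is a placeholder path.
handler = logging.FileHandler('/tmp/run_demo.log', mode='w')
handler.setFormatter(logging.Formatter('%(name)s %(levelname)-8s: %(message)s'))

logger = logging.getLogger('run_demo')
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)
logger.info('fresh file on every run')
```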
If you are just looking to output some debug messages during development, you may not want to spend time on this. The 5-second fix is:
msg = "\n\n\n"  # use a name other than 'str' to avoid shadowing the built-in
logging.getLogger().debug(msg)
where the logger is the standard Python logger.
Something like this: add \n into your logging.basicConfig format between asctime and levelname.
>>> logging.basicConfig(level=logging.DEBUG, format='%(asctime)s\n %(levelname)s %(message)s',datefmt='%H:%M:%S')
What about writing to the log file, without the logging service?
fn_log = 'test.log'
logging.basicConfig(filename=fn_log, level=logging.INFO, format='%(asctime)s %(levelname)s %(message)s', datefmt='%H:%M:%S')
logging.info('hello')
logging.warning('no empty line')
def empty_line(fn_log):
    new_empty_line = open(fn_log, 'a+')
    new_empty_line.write('\n')
    new_empty_line.close()

empty_line(fn_log)
logging.warning('hello')
Output:
09:26:00 INFO hello
11:51:05 INFO hello
11:51:05 WARNING no empty line
11:51:05 WARNING hello
Following up on Vinay Sajip's helpful answer, I did it this way (I'm using the Python 3 super() convention, but super(MyFormatter, self) works just as well) ...
class MyFormatter(logging.Formatter):
    def format(self, record):
        return super().format(record).replace(r'\n', '\n')
Then, I can embed newlines as follows:
logging.info('Message\\n\\n\\n\\nOther stuff')
or
logging.info(r'Message\n\n\n\nOther stuff')
If you use FileHandler or descendants thereof, these two functions may help. An added benefit is that all FileHandler type handlers attached to the logger should get the newline.
def getAllLoggerFilenames(logger):
    """ Returns an array of all log filenames attached to the logger. """
    logFiles = []
    parent = logger.__dict__['parent']
    if parent.__class__.__name__ == 'RootLogger':
        for h in logger.__dict__['handlers']:
            if h.baseFilename:
                logFiles.append(h.baseFilename)
    else:
        logFiles = getAllLoggerFilenames(parent)
    return logFiles

def logBlankLine(logger):
    """ This utility method writes a blank line to the log. """
    logNames = getAllLoggerFilenames(logger)
    for fn in logNames:
        with open(fn, 'a') as fh:
            fh.write("\n")
Usage:
# We use YAML for logging config files, YMMV:
with open(logConfig, 'rt') as f:
    logging.config.dictConfig(yaml.safe_load(f.read()))

logger = logging.getLogger("test.test")
logger.info("line 1")
logBlankLine(logger)
logger.info("line 2")
Output:
2019/12/22 16:33:59.152: INFO : test.test : line 1
2019/12/22 16:33:59.152: INFO : test.test : line 2
The easiest solution is to use f-strings if you are using Python 3:
logging.info( f'hello\n' )
You can try the following solution. It's simple and straightforward.
logging.debug("\b" * 20) # output blank line using escape character
logging.debug("debug message")