Why does TimedRotatingFileHandler not delete old files? - python

I am using TimedRotatingFileHandler to create my logs.
I want my log files to be created every minute, keep at most 2 log files and delete older ones. Here is the sample code:
import logging
import logging.handlers
import datetime

logger = logging.getLogger('MyLogger')
logger.setLevel(logging.DEBUG)

handler = logging.handlers.TimedRotatingFileHandler(
    "logs/{:%H-%M}.log".format(datetime.datetime.now()),
    when="M",
    backupCount=2)
logger.addHandler(handler)

logger.debug("PLEASE DELETE PREVIOUS FILES")
If I run this code multiple times (with a minute interval) I get multiple files in my logs directory like so:
21-01.log
21-02.log
21-03.log
...
This seems strange to me, since I set backupCount=2, which should mean that at most 2 files are kept and older files are deleted. Yet even when I start my application with 2 or more files already in the log folder, the old files are not deleted.
Why doesn't TimedRotatingFileHandler delete old files?
Is there any way I can configure TimedRotatingFileHandler to delete older files?

As you can see in the TimedRotatingFileHandler documentation, the log filename must stay the same for the rotation system to work properly.
In your case, because YOU are appending the dateTime information yourself, the log filename is different each time, hence the result you observe.
So, in your source code, you just need to adapt the log filename:

handler = logging.handlers.TimedRotatingFileHandler(
    "logs/MyLog",
    when="M",
    backupCount=2)

If you want to verify it, you can change the when to "S" (seconds) and check that the rotation works.
For instance, it will produce such files automagically:
> MyLog
> MyLog.2019-07-08_11-36-53
> MyLog.2019-07-08_11-36-58
Don't hesitate if you need additional information.
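For instance, here is a quick, self-contained way to check that behaviour (the temporary directory, logger name, and the when="S" frequency are arbitrary choices for the demonstration, so it completes in a few seconds):

```python
import logging
import logging.handlers
import os
import tempfile
import time

# log to a stable base name; the handler appends the timestamp
# suffix itself when it rotates
logdir = tempfile.mkdtemp()
base = os.path.join(logdir, "MyLog")

handler = logging.handlers.TimedRotatingFileHandler(
    base, when="S", interval=1, backupCount=2)
logger = logging.getLogger("rotation-demo")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

# write across four rotation intervals
for i in range(4):
    logger.debug("message %d", i)
    time.sleep(1.1)

handler.close()
files = sorted(os.listdir(logdir))
print(files)  # 'MyLog' plus at most backupCount timestamped backups
```

After the loop, the directory holds the live MyLog file plus at most two MyLog.&lt;timestamp&gt; backups; the oldest backup has been deleted.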

You can't use TimedRotatingFileHandler, as designed, for your use case. The handler expects the "current" log file name to remain stable, and defines rotating as moving existing log files to a backup by renaming them. These backups are what is kept or deleted. The rotation backups are named after the base filename plus a suffix carrying the rotation timestamp. So the implementation distinguishes between the log file (stored in baseFilename) and rotation files (generated in the doRollover() method). Note that backups are only deleted when rotation takes place, so after the handler has been in use for at least one full interval.
You instead want the base filename itself to carry the time information, and so are varying the log file name itself. There are no 'backups' in this scenario; you simply open a new file at rotation moments. Moreover, you appear to be running short-lived Python code, so you want older files removed immediately, not just when explicitly rotating, a point which may never be reached.
This is why TimedRotatingFileHandler won't delete any files: it never gets to create backup files, and no backups means there are no backups to remove. To rotate files, the current implementation of the handler expects to be in charge of filename generation, and can't be expected to know about filenames it would not itself generate. When you configure it with the "M" per-minute rotation frequency, it rotates files to backup files with the pattern {baseFilename}.{now:%Y-%m-%d_%H-%M}, and so will only ever delete rotated backup files that match that pattern. See the documentation:
The system will save old log files by appending extensions to the filename. The extensions are date-and-time based, using the strftime format %Y-%m-%d_%H-%M-%S or a leading portion thereof, depending on the rollover interval.
Instead, what you want is a base filename that itself carries the timestamp, and for older log files (not backup files) to be deleted whenever a new log file with a different name is opened. For this you'd have to create a custom handler.
Luckily, the class hierarchy is specifically designed for easy customisation. You can subclass BaseRotatingHandler here, and provide your own deletion logic:
import os
import time
from itertools import islice
from logging.handlers import BaseRotatingHandler, TimedRotatingFileHandler

# rotation intervals in seconds
_intervals = {
    "S": 1,
    "M": 60,
    "H": 60 * 60,
    "D": 60 * 60 * 24,
    "MIDNIGHT": 60 * 60 * 24,
    "W": 60 * 60 * 24 * 7,
}

class TimedPatternFileHandler(BaseRotatingHandler):
    """File handler that uses the current time in the log filename.

    The time is quantised to a configured interval. See
    TimedRotatingFileHandler for the meaning of the when, interval, utc and
    atTime arguments.

    If backupCount is non-zero, then older filenames that match the base
    filename are deleted to only leave the backupCount most recent copies,
    whenever opening a new log file with a different name.
    """

    def __init__(
        self,
        filenamePattern,
        when="h",
        interval=1,
        backupCount=0,
        encoding=None,
        delay=False,
        utc=False,
        atTime=None,
    ):
        self.when = when.upper()
        self.backupCount = backupCount
        self.utc = utc
        self.atTime = atTime

        try:
            key = "W" if self.when.startswith("W") else self.when
            self.interval = _intervals[key]
        except KeyError:
            raise ValueError(
                f"Invalid rollover interval specified: {self.when}"
            ) from None
        if self.when.startswith("W"):
            if len(self.when) != 2:
                raise ValueError(
                    "You must specify a day for weekly rollover from 0 to 6 "
                    f"(0 is Monday): {self.when}"
                )
            if not "0" <= self.when[1] <= "6":
                raise ValueError(
                    f"Invalid day specified for weekly rollover: {self.when}"
                )
            self.dayOfWeek = int(self.when[1])

        self.interval = self.interval * interval
        self.pattern = os.path.abspath(os.fspath(filenamePattern))

        # determine the best time to base our rollover times on;
        # prefer the creation time of the most recently created log file.
        t = now = time.time()
        entry = next(self._matching_files(), None)
        if entry is not None:
            t = entry.stat().st_ctime
            while t + self.interval < now:
                t += self.interval
        self.rolloverAt = self.computeRollover(t)

        # delete older files on startup, when not delaying
        if not delay and backupCount > 0:
            keep = backupCount
            if os.path.exists(self.baseFilename):
                keep += 1
            delete = islice(self._matching_files(), keep, None)
            for entry in delete:
                os.remove(entry.path)

        # Will set self.baseFilename indirectly, and then may use
        # self.baseFilename to open. So by this point self.rolloverAt and
        # self.interval must be known.
        super().__init__(filenamePattern, "a", encoding, delay)

    @property
    def baseFilename(self):
        """Generate the 'current' filename to open"""
        # use the start of *this* interval, not the next
        t = self.rolloverAt - self.interval
        if self.utc:
            time_tuple = time.gmtime(t)
        else:
            time_tuple = time.localtime(t)
            dst = time.localtime(self.rolloverAt)[-1]
            if dst != time_tuple[-1] and self.interval > 3600:
                # DST switches between t and self.rolloverAt, adjust
                addend = 3600 if dst else -3600
                time_tuple = time.localtime(t + addend)
        return time.strftime(self.pattern, time_tuple)

    @baseFilename.setter
    def baseFilename(self, _):
        # assigned to by FileHandler; just ignore this, as we use
        # self.pattern instead
        pass

    def _matching_files(self):
        """Generate DirEntry entries that match the filename pattern.

        The files are ordered by their last modification time, most recent
        files first.
        """
        matches = []
        pattern = self.pattern
        for entry in os.scandir(os.path.dirname(pattern)):
            if not entry.is_file():
                continue
            try:
                time.strptime(entry.path, pattern)
                matches.append(entry)
            except ValueError:
                continue
        matches.sort(key=lambda e: e.stat().st_mtime, reverse=True)
        return iter(matches)

    def doRollover(self):
        """Do a roll-over; this basically needs to open a new generated
        filename.
        """
        if self.stream:
            self.stream.close()
            self.stream = None

        if self.backupCount > 0:
            delete = islice(self._matching_files(), self.backupCount, None)
            for entry in delete:
                os.remove(entry.path)

        now = int(time.time())
        rollover = self.computeRollover(now)
        while rollover <= now:
            rollover += self.interval
        if not self.utc:
            # If DST changes and midnight or weekly rollover, adjust for this.
            if self.when == "MIDNIGHT" or self.when.startswith("W"):
                dst = time.localtime(now)[-1]
                if dst != time.localtime(rollover)[-1]:
                    rollover += 3600 if dst else -3600
        self.rolloverAt = rollover

        if not self.delay:
            self.stream = self._open()

    # borrow *some* TimedRotatingFileHandler methods
    computeRollover = TimedRotatingFileHandler.computeRollover
    shouldRollover = TimedRotatingFileHandler.shouldRollover
Use this with time.strftime() placeholders in the log filename, and those will be filled in for you:
handler = TimedPatternFileHandler("logs/%H-%M.log", when="M", backupCount=2)
Note that this cleans up older files when you create the instance.

I solved the issue; the problem in my case was precisely the way I named the log file.
Instead of using "example.log", I used "example" without the extension, like this:
logHandler = handlers.TimedRotatingFileHandler('example', when='M', interval=1, backupCount=2)
In the handlers.py file of the logging lib, specifically in the getFilesToDelete() method, I noticed this tiny comment which says exactly what not to do:
# See bpo-44753: Don't use the extension when computing the prefix.
Though the issue seems to have been solved already, I'm posting this answer to help people like me who are searching for a fix for this case.
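You can see the pruning logic pick up correctly named backups by calling getFilesToDelete() directly; this is the method doRollover() uses to decide what to remove. A small sketch (the temporary directory and the fake timestamped files are made up for illustration):

```python
import logging.handlers
import os
import tempfile

logdir = tempfile.mkdtemp()
base = os.path.join(logdir, "example")  # no ".log" extension

# fake three existing backups, named exactly as the handler itself
# would name them for when="S" (suffix %Y-%m-%d_%H-%M-%S)
for stamp in ("2020-01-01_00-00-00",
              "2020-01-01_00-00-01",
              "2020-01-01_00-00-02"):
    open("{}.{}".format(base, stamp), "w").close()

handler = logging.handlers.TimedRotatingFileHandler(
    base, when="S", backupCount=2)

# with backupCount=2, only the oldest backup is reported for deletion
doomed = handler.getFilesToDelete()
print([os.path.basename(p) for p in doomed])  # ['example.2020-01-01_00-00-00']
handler.close()
```

If the fake files did not match the handler's expected suffix pattern, getFilesToDelete() would simply ignore them, which is exactly why self-named, differently patterned files never get cleaned up.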

As other people have already indicated, backupCount will only work if you always log to a file with the same filename and then rotate it every now and then. Then you will have log files like @Bsquare indicated.
However, in my case I needed to rotate every day and have my log files named as follows: 2019-07-06.log, 2019-07-07.log, 2019-07-08.log, ...
I found out that this is not possible with the current implementation of TimedRotatingFileHandler.
So I ended up creating my own deletion functionality, suiting my needs, on top of FileHandler.
This is a simple example of a logger class that uses FileHandler and makes sure old log files are deleted every time you create an instance of this class:
import os
import datetime
import logging
import re
import pathlib

class Logger:
    # Maximum number of logs to store
    LOGS_COUNT = 3
    # Directory to log to
    LOGS_DIRECTORY = "logs"

    def __init__(self):
        # Make sure the logs directory is created
        self.__create_directory(Logger.LOGS_DIRECTORY)
        # Clean old logs every time you create a logger
        self.__clean_old_logs()

        self.logger = logging.getLogger("Logger")
        # This condition makes sure the logger handlers are initialised
        # only once, when this object is first created
        if not self.logger.handlers:
            self.logger.setLevel(logging.INFO)
            formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
            file_handler = logging.FileHandler("logs/{:%Y-%m-%d}.log".format(datetime.datetime.now()))
            file_handler.setFormatter(formatter)
            self.logger.addHandler(file_handler)

    def log_info(self, message):
        self.logger.info(message)

    def log_error(self, message):
        self.logger.error(message)

    def __clean_old_logs(self):
        for name in self.__get_old_logs():
            path = os.path.join(Logger.LOGS_DIRECTORY, name)
            self.__delete_file(path)

    def __get_old_logs(self):
        logs = [name for name in self.__get_file_names(Logger.LOGS_DIRECTORY)
                if re.match(r"([12]\d{3}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01]))\.log", name)]
        logs.sort(reverse=True)
        return logs[Logger.LOGS_COUNT:]

    def __get_file_names(self, path):
        return [item.name for item in pathlib.Path(path).glob("*") if item.is_file()]

    def __delete_file(self, path):
        os.remove(path)

    def __create_directory(self, directory):
        if not os.path.exists(directory):
            os.makedirs(directory)
And then you would use it like so:
logger = Logger()
logger.log_info("This is a log message")

Related

Python TimedRotatingFileHandler overwrites logs

I set up TimedRotatingFileHandler like this:

import logging
from logging.handlers import TimedRotatingFileHandler
import os
import time

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

# new file every minute
rotation_logging_handler = TimedRotatingFileHandler('logs/log',
                                                    when='m',
                                                    interval=1,
                                                    backupCount=5)
rotation_logging_handler.setLevel(logging.DEBUG)

format = u'%(asctime)s\t%(levelname)s\t%(filename)s:%(lineno)d\t%(message)s'
rotation_logging_handler.setFormatter(logging.Formatter(format))
rotation_logging_handler.suffix = '%Y-%m-%d'

logger.addHandler(rotation_logging_handler)
Usage:

logger.info('Service started at port %s', config.get_param('port'))
while True:
    time.sleep(21)
    logger.info('Now time is {}'.format(time.time()))

I expected that every minute new messages would be appended to the existing log file for the current date. Instead, every minute the messages overwrote the existing log file for the current date.
What should I do to achieve that behaviour?
PS: After some research I found that TimedRotatingFileHandler deletes the existing log file in its doRollover method and creates a new file. So a first solution is to create a new handler derived from TimedRotatingFileHandler
which creates a new file (with some index, for example) instead of deleting the existing log file.
It is very much possible to change the rotated filenames to anything you want, by overriding the rotation_filename method of the BaseRotatingHandler class and setting it to an appropriate callable.
Here is a very trivial example of doing just that, but you can tweak it to suit your needs.
import logging
from logging.handlers import TimedRotatingFileHandler
import datetime as dt

def filer(default_name):
    # receives the default rotated name and returns the name to use instead
    now = dt.datetime.now()
    return 'file.txt' + now.strftime("%Y-%m-%d_%H:%M:%S")

logger = logging.getLogger()
rotating_file_handler = TimedRotatingFileHandler(filename="/Users/rbhanot/file.txt",
                                                 when='S',
                                                 interval=2,
                                                 backupCount=5)
rotating_file_handler.rotation_filename = filer
formatter = logging.Formatter(
    '%(asctime)s %(name)s:%(levelname)s - %(message)s')
rotating_file_handler.setFormatter(formatter)
logger.addHandler(rotating_file_handler)
logger.setLevel(logging.DEBUG)
logger.info("hello")
Here is the output:
❯ ls file*
file.txt file.txt2020-10-06_13:12:13 file.txt2020-10-06_13:12:15 file.txt2020-10-06_13:13:45
After a little more research I found the BaseRotatingHandler.namer attribute, used in the BaseRotatingHandler.rotation_filename method:
The default implementation calls the 'namer' attribute of the handler, if it's callable, passing the default name to it. If the attribute isn't callable (the default is None), the name is returned unchanged.
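A minimal illustration of that hook, before the full solution below (the temporary directory, the logger name, and the ".txt" suffix added by the namer are arbitrary choices for the demonstration):

```python
import logging
import logging.handlers
import os
import tempfile
import time

logdir = tempfile.mkdtemp()
handler = logging.handlers.TimedRotatingFileHandler(
    os.path.join(logdir, "log"), when="S", backupCount=5)

# the namer receives the default rotated name (e.g. ".../log.2019-01-01_00-00-00")
# and returns the name actually used for the backup file
handler.namer = lambda default_name: default_name + ".txt"

logger = logging.getLogger("namer-demo")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

logger.debug("first")
time.sleep(1.1)
logger.debug("second")  # triggers a rollover

handler.close()
files = sorted(os.listdir(logdir))
print(files)  # 'log' plus one 'log.<timestamp>.txt' backup
```

The live file is still named by baseFilename; only the rotated backup goes through the namer.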
So as a solution I implemented my own namer function that receives the filename and returns a new filename following my template:
20181231.log
20181231.0.log
20181231.1.log
etc.
Full example:
import logging
from logging.handlers import TimedRotatingFileHandler
import os
import time

def get_filename(filename):
    # Get logs directory
    log_directory = os.path.split(filename)[0]
    # Get the file extension (also the suffix's value, i.e. ".20181231") without the dot
    date = os.path.splitext(filename)[1][1:]
    # Create new file name
    filename = os.path.join(log_directory, date)
    # I don't want to add an index if only one log file exists for the date
    if not os.path.exists('{}.log'.format(filename)):
        return '{}.log'.format(filename)
    # Create new file name with index
    index = 0
    f = '{}.{}.log'.format(filename, index)
    while os.path.exists(f):
        index += 1
        f = '{}.{}.log'.format(filename, index)
    return f

format = u'%(asctime)s\t%(levelname)s\t%(filename)s:%(lineno)d\t%(message)s'
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

# new file every minute
rotation_logging_handler = TimedRotatingFileHandler('logs/log',
                                                    when='m',
                                                    interval=1,
                                                    backupCount=5)
rotation_logging_handler.setLevel(logging.DEBUG)
rotation_logging_handler.setFormatter(logging.Formatter(format))
rotation_logging_handler.suffix = '%Y%m%d'
rotation_logging_handler.namer = get_filename
logger.addHandler(rotation_logging_handler)
According to the documentation of TimedRotatingFileHandler:
The system will save old log files by appending extensions to the
filename. The extensions are date-and-time based, using the strftime
format %Y-%m-%d_%H-%M-%S or a leading portion thereof, depending on
the rollover interval.
In other words: by modifying suffix you are breaking the rollover. Just leave it at its default and Python will create files named:
logs/log2018-02-02-01-30
logs/log2018-02-02-01-31
logs/log2018-02-02-01-32
logs/log2018-02-02-01-33
logs/log2018-02-02-01-34
And after this (if backupCount=5) it will delete the -30 one and create the -35.
If you instead want to have names like :
logs/log2018-02-02-01.0
logs/log2018-02-02-01.1
logs/log2018-02-02-01.2
logs/log2018-02-02-01.3
logs/log2018-02-02-01.4
where .0 is the newest one and .4 is the oldest one, then indeed, that handler has not been designed to do that.

Automatically delete old Python log files

I have a Python program that runs daily. I'm using the logging module with FileHandler to write logs to a file. I would like each run's logs to be in its own file with a timestamp. However, I want to delete old files (say > 3 months) to avoid filling the disk.
I've looked at the RotatingFileHandler and TimedRotatingFileHandler but I don't want a single run's logs to be split across multiple files, even if a single run were to take days. Is there a built-in method for that?
The logging module has a built in TimedRotatingFileHandler:
# import modules
import logging
from logging.handlers import TimedRotatingFileHandler
from logging import Formatter

# get named logger
logger = logging.getLogger(__name__)

# create handler
handler = TimedRotatingFileHandler(filename='runtime.log', when='D', interval=1, backupCount=90, encoding='utf-8', delay=False)

# create formatter and add to handler
formatter = Formatter(fmt='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)

# add the handler to the named logger
logger.addHandler(handler)

# set the logging level
logger.setLevel(logging.INFO)

# --------------------------------------
# log something
logger.info("test")
Old logs automatically get a timestamp appended.
Every day a new backup will be created.
If more than 91 (current+backups) files exist the oldest will be deleted.
import logging
import time
from logging.handlers import RotatingFileHandler

logFile = 'test-' + time.strftime("%Y%m%d-%H%M%S") + '.log'

logger = logging.getLogger('my_logger')
handler = RotatingFileHandler(logFile, mode='a', maxBytes=50*1024*1024,
                              backupCount=5, encoding=None, delay=False)
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

for _ in range(10000):
    logger.debug("Hello, world!")
As suggested by @MartijnPieters in this question, you could easily extend the FileHandler class to handle your own deletion logic.
For example, my class will hold only the last backup_count files.
import os
import re
import datetime
import logging
from itertools import islice

class TimedPatternFileHandler(logging.FileHandler):
    """File handler that uses the current time for the log filename,
    by formatting the current datetime, according to filename_pattern, using
    the strftime function.

    If backup_count is non-zero, then older filenames that match the base
    filename are deleted to only leave the backup_count most recent copies,
    whenever opening a new log file with a different name.
    """

    def __init__(self, filename_pattern, mode, backup_count):
        self.filename_pattern = os.path.abspath(filename_pattern)
        self.backup_count = backup_count
        self.filename = datetime.datetime.now().strftime(self.filename_pattern)

        delete = islice(self._matching_files(), self.backup_count, None)
        for entry in delete:
            # print(entry.path)
            os.remove(entry.path)

        super().__init__(filename=self.filename, mode=mode)

    @property
    def filename(self):
        """Generate the 'current' filename to open"""
        return datetime.datetime.now().strftime(self.filename_pattern)

    @filename.setter
    def filename(self, _):
        pass

    def _matching_files(self):
        """Generate DirEntry entries that match the filename pattern.

        The files are ordered by their last modification time, most recent
        files first.
        """
        matches = []
        basename = os.path.basename(self.filename_pattern)
        pattern = re.compile(re.sub('%[a-zA-Z]', '.*', basename))

        for entry in os.scandir(os.path.dirname(self.filename_pattern)):
            if not entry.is_file():
                continue
            entry_basename = os.path.basename(entry.path)
            if re.match(pattern, entry_basename):
                matches.append(entry)

        matches.sort(key=lambda e: e.stat().st_mtime, reverse=True)
        return iter(matches)

def create_timed_rotating_log(path):
    logger = logging.getLogger("Rotating Log")
    logger.setLevel(logging.INFO)

    handler = TimedPatternFileHandler('{}_%H-%M-%S.log'.format(path), mode='a', backup_count=5)
    logger.addHandler(handler)
    logger.info("This is a test!")
Get the date/time (see this answer on how to get the timestamp), and if the file is older than the current date by 3 months, delete it with:

import os
os.remove("filename.extension")

Package this script with py2exe, then just use any task scheduler to run the job at startup.
Windows: open the run command and enter shell:startup, then place your exe in there.
On OSX: the old way used to be to create a cron job; in my experience this no longer works in many cases, but it is still worth trying. The new way recommended by Apple is CreatingLaunchdJobs. You can also refer to this topic for a more detailed explanation.
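A minimal sketch of that age-based deletion step (the logs directory name, the ".log" extension filter, and the roughly-90-day cutoff are all assumptions to make the idea concrete):

```python
import os
import time

LOG_DIR = "logs"           # assumed log directory
MAX_AGE = 90 * 24 * 3600   # roughly three months, in seconds

def delete_old_logs(directory=LOG_DIR, max_age=MAX_AGE):
    """Remove .log files whose modification time is older than max_age."""
    cutoff = time.time() - max_age
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if name.endswith(".log") and os.path.isfile(path):
            if os.path.getmtime(path) < cutoff:
                os.remove(path)
```

Run this once per scheduled invocation; it only ever touches files matching the extension filter inside the given directory.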

In the logging module's RotatingFileHandler, how to set the backupCount to a practically infinite number

I'm writing a program which backs up a database using Python's RotatingFileHandler. This has two parameters, maxBytes and backupCount: the former is the maximum size of each log file, and the latter the maximum number of log files.
I would like to effectively never delete data, but still have each log file a certain size (say, 2 kB for the purpose of illustration). So I tried to set the backupCount parameter to sys.maxint:
import msgpack
import json
from faker import Faker
import logging
from logging.handlers import RotatingFileHandler
import os, glob
import itertools
import sys

fake = Faker()
fake.seed(0)

data_file = "my_log.log"

logger = logging.getLogger('my_logger')
logger.setLevel(logging.DEBUG)
handler = RotatingFileHandler(data_file, maxBytes=2000, backupCount=sys.maxint)
logger.addHandler(handler)

fake_dicts = [{'name': fake.name(), 'email': fake.email()} for _ in range(100)]

def dump(item, mode='json'):
    if mode == 'json':
        return json.dumps(item)
    elif mode == 'msgpack':
        return msgpack.packb(item)

mode = 'json'

# Generate the archive log
for item in fake_dicts:
    dump_string = dump(item, mode=mode)
    logger.debug(dump_string)
However, this leads to several MemoryErrors which look like this:
Traceback (most recent call last):
  File "/usr/lib/python2.7/logging/handlers.py", line 77, in emit
    self.doRollover()
  File "/usr/lib/python2.7/logging/handlers.py", line 129, in doRollover
    for i in range(self.backupCount - 1, 0, -1):
MemoryError
Logged from file json_logger.py, line 37
It seems like making this parameter large causes the system to use lots of memory, which is not desirable. Is there any way around this trade-off?
An improvement to the solution suggested by @Asiel:
Instead of using itertools and os.path.exists to determine what the nextName should be in doRollover, the solution below simply remembers the number of the last backup done and increments it to get the nextName.
from logging.handlers import RotatingFileHandler
import os

class RollingFileHandler(RotatingFileHandler):

    def __init__(self, filename, mode='a', maxBytes=0, backupCount=0, encoding=None, delay=False):
        self.last_backup_cnt = 0
        super(RollingFileHandler, self).__init__(filename=filename,
                                                 mode=mode,
                                                 maxBytes=maxBytes,
                                                 backupCount=backupCount,
                                                 encoding=encoding,
                                                 delay=delay)

    # override
    def doRollover(self):
        if self.stream:
            self.stream.close()
            self.stream = None
        # my code starts here
        self.last_backup_cnt += 1
        nextName = "%s.%d" % (self.baseFilename, self.last_backup_cnt)
        self.rotate(self.baseFilename, nextName)
        # my code ends here
        if not self.delay:
            self.stream = self._open()
This class will still save your backups in ascending order (e.g. the first backup will end with ".1", the second one with ".2", and so on). Modifying this to also gzip the backups is straightforward.
The problem here is that RotatingFileHandler is intended to... well, rotate; and if you set its backupCount to a big number, the RotatingFileHandler.doRollover method will loop over a backward range from backupCount - 1 to zero trying to find the last created backup. The bigger the backupCount, the slower this will be (when you have a small number of backups).
Also, the RotatingFileHandler will keep renaming your backups, which isn't necessary for what you want and is actually overhead: instead of simply writing your latest backup with the next ".n+1" extension, it renames all your backups and writes the latest backup with the extension ".1" (it shifts all backup names).
Solution:
You could code the following class (probably with a better name):

from logging.handlers import RotatingFileHandler
import itertools
import os

class RollingFileHandler(RotatingFileHandler):

    # override
    def doRollover(self):
        if self.stream:
            self.stream.close()
            self.stream = None
        # my code starts here
        for i in itertools.count(1):
            nextName = "%s.%d" % (self.baseFilename, i)
            if not os.path.exists(nextName):
                self.rotate(self.baseFilename, nextName)
                break
        # my code ends here
        if not self.delay:
            self.stream = self._open()
This class will save your backups in ascending order (e.g. the first backup will end with ".1", the second one with ".2", and so on).
Since RollingFileHandler extends RotatingFileHandler you can simply replace RotatingFileHandler with RollingFileHandler in your code; you don't need to provide the backupCount argument since it is ignored by this new class.
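For example, a self-contained sketch of the swap (the class body is repeated here so the snippet runs on its own; the temporary directory and the tiny maxBytes value are just for the demonstration):

```python
import logging
import os
import tempfile
from itertools import count
from logging.handlers import RotatingFileHandler

class RollingFileHandler(RotatingFileHandler):
    """RotatingFileHandler variant that never deletes backups: each
    rollover moves the current file to the next unused .N suffix."""
    def doRollover(self):
        if self.stream:
            self.stream.close()
            self.stream = None
        for i in count(1):
            nextName = "%s.%d" % (self.baseFilename, i)
            if not os.path.exists(nextName):
                self.rotate(self.baseFilename, nextName)
                break
        if not self.delay:
            self.stream = self._open()

# drop-in replacement for RotatingFileHandler; backupCount is ignored
logdir = tempfile.mkdtemp()
logfile = os.path.join(logdir, "app.log")
handler = RollingFileHandler(logfile, maxBytes=50)
logger = logging.getLogger("rolling-demo")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

for i in range(10):
    logger.debug("a fairly long log message number %d", i)

handler.close()
print(sorted(os.listdir(logdir)))  # app.log, app.log.1, app.log.2, ...
```

Every message that would push the file past maxBytes triggers a rollover, so the directory fills with numbered backups and nothing is ever deleted.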
Bonus solution: (compressing backups)
Since you will have an ever-growing number of log backups, you may want to compress them to save disk space. So you could create a class similar to RollingFileHandler:
from logging.handlers import RotatingFileHandler
import gzip
import itertools
import os
import shutil

class RollingGzipFileHandler(RotatingFileHandler):

    # override
    def doRollover(self):
        if self.stream:
            self.stream.close()
            self.stream = None
        # my code starts here
        for i in itertools.count(1):
            nextName = "%s.%d.gz" % (self.baseFilename, i)
            if not os.path.exists(nextName):
                with open(self.baseFilename, 'rb') as original_log:
                    with gzip.open(nextName, 'wb') as gzipped_log:
                        shutil.copyfileobj(original_log, gzipped_log)
                os.remove(self.baseFilename)
                break
        # my code ends here
        if not self.delay:
            self.stream = self._open()
This class will save your compressed backups with extensions ".1.gz", ".2.gz", etc. There are also other compression algorithms available in the standard library if you don't want to use gzip.
This is an old question, but I hope this helps.
Try having a different base filename every time, like the following, and keep backupCount as 0.
import datetime
timestamp = datetime.datetime.now().strftime("%Y%m%d%H%M%S%f")
data_file = "my_log_%s.log" % timestamp

How do I change the way that the RotatingFileHandler names files in Python?

I want to change the way that the rotating file handler names files.
For example, if I use RotatingFileHandler, it splits the log file when it reaches a specific file size, naming the files "log file name + extension number", like below.
filename.log #first log file
filename.log.1 #rotating log file1
filename.log.2 #rotating log file2
However, I want the log handler to name each file after the time it is created.
For example:
09-01-12-20.log #first log file
09-01-12-43.log #rotating log file1
09-01-15-00.log #rotating log file2
How can I do this?
Edit:
I am not asking how to create and name a file.
I want to do this with the python logging package, by inheriting from and overriding its handlers.
I inherit from and override RotatingFileHandler of the python logging handlers.
RotatingFileHandler has a self.baseFilename value; the handler uses self.baseFilename to create the log file (when it first creates the file, or when a rollover happens).
Its shouldRollover() method checks whether the handler should roll over the logfile or not:
it returns 1 if a rollover should happen, and 0 otherwise.
By overriding them, I define when this handler rolls over and which name is used for the new log file created by the rollover.
-----------------------------------------Edit-----------------------------------------
I post the example code.
import datetime
import os
from logging import handlers

class DailyRotatingFileHandler(handlers.RotatingFileHandler):

    def __init__(self, alias, basedir, mode='a', maxBytes=0, backupCount=0, encoding=None, delay=0):
        """
        @summary:
        Set self.baseFilename to the date string of today.
        The handler creates a logFile named self.baseFilename
        """
        self.basedir_ = basedir
        self.alias_ = alias

        self.baseFilename = self.getBaseFilename()

        handlers.RotatingFileHandler.__init__(self, self.baseFilename, mode, maxBytes, backupCount, encoding, delay)

    def getBaseFilename(self):
        """
        @summary: Return the logFile name string formatted to "today.log.alias"
        """
        self.today_ = datetime.date.today()
        basename_ = self.today_.strftime("%Y-%m-%d") + ".log" + '.' + self.alias_
        return os.path.join(self.basedir_, basename_)

    def shouldRollover(self, record):
        """
        @summary:
        Rollover happens
        1. When the logFile size gets over maxBytes.
        2. When the date changes.
        @see: BaseRotatingHandler.emit
        """
        if self.stream is None:
            self.stream = self._open()

        if self.maxBytes > 0:
            msg = "%s\n" % self.format(record)
            self.stream.seek(0, 2)
            if self.stream.tell() + len(msg) >= self.maxBytes:
                return 1

        if self.today_ != datetime.date.today():
            self.baseFilename = self.getBaseFilename()
            return 1

        return 0
This DailyRotatingFileHandler will create logfiles like
2016-10-05.log.alias
2016-10-05.log.alias.1
2016-10-05.log.alias.2
2016-10-06.log.alias
2016-10-06.log.alias.1
2016-10-07.log.alias.1
Check the following code and see if it helps. As far as I can understand from your question, if your issue is in generating the filename based on a timestamp, then this shall work for you.
import datetime, time

# This returns the epoch timestamp
epochTime = time.time()

# We generate the timestamp
# as per the need
timeStamp = datetime.datetime\
    .fromtimestamp(epochTime)\
    .strftime('%Y-%m-%d-%H-%M')

# Create a log file,
# using timeStamp as the filename
fo = open(timeStamp + ".log", "w")
fo.write("Log data")
fo.close()

Need to do a daily log rotation (0utc) using Python

I'm an admitted noob to Python. I've written a little logger that takes data from the serial port and writes it to a log file. I've got a small procedure that opens the file for append, writes, then closes. I suspect this might not be the best way to do it, but it's what I've figured out so far.
I'd like to be able to have it automagically perform a log-rotate at 00 UTC, but so far, my attempts to do this with RotatingFileHandler have failed.
Here's what the code looks like:
import time, serial, logging, logging.handlers, os, sys
from datetime import *

CT12 = serial.Serial()
CT12.port = "/dev/ct12k"
CT12.baudrate = 2400
CT12.parity = 'E'
CT12.bytesize = 7
CT12.stopbits = 1
CT12.timeout = 3

logStart = datetime.now()
dtg = datetime.strftime(logStart, '%Y-%m-%d %H:%M:%S ')
ctlA = unichr(1)
bom = unichr(2)
eom = unichr(3)
bel = unichr(7)
CT12Name = [ctlA, 'CT12-NWC-test']
CT12Header = ['-Ceilometer Logfile \r\n', '-File created: ', dtg, '\r\n']

def write_ceilo(text):
    f = open('/data/CT12.log', 'a')
    f.write(text)
    f.close()

write_ceilo(''.join(CT12Header))

CT12.open()
discard = CT12.readlines()
#print (discard)

while CT12.isOpen():
    response = CT12.readline()
    if len(response) >= 3:
        if response[0] == '\x02':
            now = datetime.now()
            dtg = datetime.strftime(now, '-%Y-%m-%d %H:%M:%S\r\n')
            write_ceilo(dtg)
            write_ceilo(''.join(CT12Name))
            write_ceilo(response)
What can I do to make this rotate automatically, affixing either the date of rotation or a serial number for identification? I'm not looking to rotate any of these out; I just want to keep a daily log file of the data (or maybe an hourly file?).
For anyone arriving via Google: please don't move the log file out from under the logger while it is in use, by calling the system copy or move commands, etc.
What you are looking for is a TimedRotatingFileHandler:
import time
import logging
from logging.handlers import TimedRotatingFileHandler

# format the log entries
formatter = logging.Formatter('%(asctime)s %(name)s %(levelname)s %(message)s')

handler = TimedRotatingFileHandler('/path/to/logfile.log',
                                   when='midnight',
                                   backupCount=10)
handler.setFormatter(formatter)
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

# generate example messages
for i in range(10000):
    time.sleep(1)
    logger.debug('debug message')
    logger.info('informational message')
    logger.warning('warning')
    logger.error('error message')
    logger.critical('critical failure')
You can simply do this:
import os
import time

date1 = time.strftime('%Y%m%d%H%M%S')

cmd1 = "cp logfile logfile{0}".format(date1)
cmd2 = "cat /dev/null > logfile"

os.system(cmd1)
os.system(cmd2)

'logfile' is the name of the file. I copy the old log to a new log file with a name based on the time and date, and then empty the original file. If you want to rotate it every hour, put this script in cron.
For anyone who does not like the idea of rotating files, but simply wants to use a file handler that immediately writes to a file with a specific date in its name: it is not hard to write your own handler. Here is an example:
from datetime import datetime
from logging import FileHandler, LogRecord

class FileHandlerWithOneFilePerPeriod(FileHandler):
    """A handler which writes formatted logging records to files, one file per period."""

    def __init__(self, filename_pattern, mode='a', encoding=None, delay=False):
        """
        Constructs the file handler.

        :param filename_pattern: the filename. Use strftime() directives to specify the format of the period.
        For example, %Y%m%d can be used to generate one log file per day.
        :param mode: the mode to open the file before writing. Common values are 'w' for writing (truncating the file
        if it already exists), 'x' for creating and writing to a new file, and 'a' for appending (which on some Unix
        systems, means that all writes append to the end of the file regardless of the current seek position).
        :param encoding: encoding is the name of the encoding used to decode or encode the file. This should only be
        used in text mode.
        :param delay: True if the file is opened when the first log message is emitted; False if the file is opened now
        by the constructor.
        """
        self.filename_pattern = filename_pattern
        filename = datetime.now().strftime(self.filename_pattern)
        super().__init__(filename, mode, encoding, delay)

    def emit(self, record: LogRecord):
        new_filename = datetime.fromtimestamp(record.created).strftime(self.filename_pattern)
        if self.stream is None:
            self.set_new_filename(new_filename)
        elif self.differs_from_current_filename(new_filename):
            self.close()
            self.set_new_filename(new_filename)
        super().emit(record)

    def set_new_filename(self, new_filename):
        self.baseFilename = new_filename

    def differs_from_current_filename(self, filename: str) -> bool:
        return filename != self.baseFilename
To use this handler, configure it with the following values in a dictionary (using logging.config.dictConfig()):
version: 1
formatters:
  simple:
    format: '%(asctime)s %(name)s %(levelname)s %(message)s'
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
    stream: ext://sys.stdout
  file:
    class: my_package.my_module.FileHandlerWithOneFilePerPeriod
    level: DEBUG
    formatter: simple
    filename_pattern: my_logging-%Y%m%d.log
root:
  level: DEBUG
  handlers: [console, file]
This will log to console and to file. One file per day is used. Change my_package and my_module to match the module where you put the handler. Change my_logging to a more appropriate name.
By changing the date pattern in filename_pattern you control when new files are created: each time the pattern applied to the datetime of a new log message differs from the previously applied result, a new file will be created.
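The YAML above is typically loaded with yaml.safe_load() and passed to logging.config.dictConfig(). A minimal pure-Python equivalent (with the custom file handler entry left out so this sketch runs standalone; in real use you would add it back, pointing at your module):

```python
import logging
import logging.config

config = {
    "version": 1,
    "formatters": {
        "simple": {"format": "%(asctime)s %(name)s %(levelname)s %(message)s"},
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "level": "DEBUG",
            "formatter": "simple",
            "stream": "ext://sys.stdout",
        },
        # a "file" entry would reference
        # "my_package.my_module.FileHandlerWithOneFilePerPeriod" here
    },
    "root": {"level": "DEBUG", "handlers": ["console"]},
}

logging.config.dictConfig(config)
logging.getLogger("demo").info("configured via dictConfig")
```

After dictConfig() runs, the root logger is at DEBUG level with the configured handler attached, and any module-level logger inherits that setup.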
