Python TimedRotatingFileHandler overwrites logs

I set up TimedRotatingFileHandler like this:
import logging
from logging.handlers import TimedRotatingFileHandler
import os
import time

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

# new file every minute
rotation_logging_handler = TimedRotatingFileHandler('logs/log',
                                                    when='m',
                                                    interval=1,
                                                    backupCount=5)
rotation_logging_handler.setLevel(logging.DEBUG)

format = u'%(asctime)s\t%(levelname)s\t%(filename)s:%(lineno)d\t%(message)s'
rotation_logging_handler.setFormatter(logging.Formatter(format))
rotation_logging_handler.suffix = '%Y-%m-%d'

logger.addHandler(rotation_logging_handler)
Usage:
logger.info('Service started at port %s', config.get_param('port'))

while True:
    time.sleep(21)
    logger.info('Now time is {}'.format(time.time()))
I expected that every minute the new messages from logs/log would be appended to the existing log file for the current date. Instead, every minute the messages from logs/log overwrote the existing log file for the current date.
What should I do to achieve that behaviour?
PS: After a little research I found that TimedRotatingFileHandler deletes the existing log file in its doRollover method and creates a new one. So a first solution would be to derive a new handler from TimedRotatingFileHandler
that creates a new file (with some index, for example) instead of deleting the existing log file.

It is very much possible to change the filenames of rotated files to anything you want, by overriding the rotation_filename method of the BaseRotatingHandler class, i.e. setting it to an appropriate callable.
Here is a very trivial example of doing the same, but you can tweak it to suit your needs.
import logging
from logging.handlers import TimedRotatingFileHandler
import datetime as dt

def filer(default_name):
    # receives the default rotated filename; we ignore it and build our own
    now = dt.datetime.now()
    return 'file.txt' + now.strftime("%Y-%m-%d_%H:%M:%S")

logger = logging.getLogger()
rotating_file_handler = TimedRotatingFileHandler(filename="/Users/rbhanot/file.txt",
                                                 when='S',
                                                 interval=2,
                                                 backupCount=5)
rotating_file_handler.rotation_filename = filer
formatter = logging.Formatter(
    '%(asctime)s %(name)s:%(levelname)s - %(message)s')
rotating_file_handler.setFormatter(formatter)
logger.addHandler(rotating_file_handler)
logger.setLevel(logging.DEBUG)
logger.info("hello")
Here is the output:
❯ ls file*
file.txt file.txt2020-10-06_13:12:13 file.txt2020-10-06_13:12:15 file.txt2020-10-06_13:13:45

After a little more research I found the BaseRotatingHandler.namer attribute, which is used in the BaseRotatingHandler.rotation_filename method:
The default implementation calls the 'namer' attribute of the handler, if it's callable, passing the default name to it. If the attribute isn't callable (the default is None), the name is returned unchanged.
So as a solution I implemented my own namer function that takes the filename and returns a new filename following my template:
20181231.log
20181231.0.log
20181231.1.log
etc.
Full example:
import logging
from logging.handlers import TimedRotatingFileHandler
import os
import time

def get_filename(filename):
    # Get logs directory
    log_directory = os.path.split(filename)[0]

    # Get the file extension (which is also the suffix value, i.e. ".20181231") without the dot
    date = os.path.splitext(filename)[1][1:]

    # Create new file name
    filename = os.path.join(log_directory, date)

    # I don't want to add an index if only one log file exists for the date
    if not os.path.exists('{}.log'.format(filename)):
        return '{}.log'.format(filename)

    # Create new file name with index
    index = 0
    f = '{}.{}.log'.format(filename, index)
    while os.path.exists(f):
        index += 1
        f = '{}.{}.log'.format(filename, index)
    return f
format = u'%(asctime)s\t%(levelname)s\t%(filename)s:%(lineno)d\t%(message)s'

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

# new file every minute
rotation_logging_handler = TimedRotatingFileHandler('logs/log',
                                                    when='m',
                                                    interval=1,
                                                    backupCount=5)
rotation_logging_handler.setLevel(logging.DEBUG)
rotation_logging_handler.setFormatter(logging.Formatter(format))
rotation_logging_handler.suffix = '%Y%m%d'
rotation_logging_handler.namer = get_filename

logger.addHandler(rotation_logging_handler)

According to the documentation of TimedRotatingFileHandler:
The system will save old log files by appending extensions to the
filename. The extensions are date-and-time based, using the strftime
format %Y-%m-%d_%H-%M-%S or a leading portion thereof, depending on
the rollover interval.
In other words: by modifying suffix you are breaking the rollover. Just leave it at the default and Python will create files named:
logs/log.2018-02-02_01-30
logs/log.2018-02-02_01-31
logs/log.2018-02-02_01-32
logs/log.2018-02-02_01-33
logs/log.2018-02-02_01-34
And after this (if backupCount=5) it will delete the _01-30 one and create the _01-35.
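For reference, a minimal sketch of that default setup (the path is illustrative, not taken from the question) would be:

import logging
from logging.handlers import TimedRotatingFileHandler

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

# leave handler.suffix untouched; rotated names are generated automatically
handler = TimedRotatingFileHandler('logs/log', when='m', interval=1, backupCount=5)
logger.addHandler(handler)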
If you instead want to have names like:
logs/log2018-02-02-01.0
logs/log2018-02-02-01.1
logs/log2018-02-02-01.2
logs/log2018-02-02-01.3
logs/log2018-02-02-01.4
where .0 is the newest one and .4 is the oldest one, then indeed, that handler has not been designed to do that.

Related

why TimedRotatingFileHandler does not delete old files?

I am using TimedRotatingFileHandler to create my logs.
I want my log files to be created every minute, keep at most 2 log files and delete older ones. Here is the sample code:
import logging
import logging.handlers
import datetime

logger = logging.getLogger('MyLogger')
logger.setLevel(logging.DEBUG)

handler = logging.handlers.TimedRotatingFileHandler(
    "logs/{:%H-%M}.log".format(datetime.datetime.now()),
    when="M",
    backupCount=2)

logger.addHandler(handler)

logger.debug("PLEASE DELETE PREVIOUS FILES")
If I run this code multiple times (with a minute interval) I get multiple files in my logs directory like so:
21-01.log
21-02.log
21-03.log
...
This seems strange to me, since I set backupCount=2, which indicates that at most 2 files should be saved and older files should be deleted. However, when I start my application with 2 or more files in the log folder, old files are not deleted.
Why does TimedRotatingFileHandler not delete old files?
Is there any way I can set TimedRotatingFileHandler to delete older files?
As you can see in the TimedRotatingFileHandler documentation, your log filename should stay the same for the rotating system to work properly.
In your case, because you are appending the datetime information yourself, the log filename is different each time, which is why you observe this result.
So, in your source code, you just need to adapt the log filename:
handler = logging.handlers.TimedRotatingFileHandler(
    "logs/MyLog",
    when="M",
    backupCount=2)
If you want to test it, you can change when to "S" (seconds) and check that the rotation works.
For instance, it will produce such files automagically:
> MyLog
> MyLog.2019-07-08_11-36-53
> MyLog.2019-07-08_11-36-58
Don't hesitate if you need additional information.
You can't use TimedRotatingFileHandler, as designed, for your use case. The handler expects the "current" log file name to remain stable and defines rotating as moving existing log files to a backup by renaming. These are the backups that are kept or deleted. The rotation backups are created from the base filename plus a suffix with the rotation timestamp, so the implementation distinguishes between the log file (stored in baseFilename) and rotation files (generated in the doRollover() method). Note that backups are only deleted when rotation takes place, so after the handler has been in use for at least one full interval.
You instead want the base filename itself to carry the time information, and so are varying the log file name itself. There are no 'backups' in this scenario, you simply open a new file at rotation moments. Moreover, you appear to be running short-lived Python code, so you want older files removed immediately, not just when explicitly rotating, which may never be reached.
This is why TimedRotatingFileHandler won't delete any files: it never gets to create backup files. No backups means there are no backups to remove. To rotate files, the current implementation of the handler expects to be in charge of filename generation, and can't be expected to know about filenames it would not itself generate. When you configure it with the "M" per-minute rotation frequency, it is configured to rotate files to backup files with the pattern {baseFilename}.{now:%Y-%m-%d_%H-%M}, and so will only ever delete rotated backup files that match that pattern. See the documentation:
The system will save old log files by appending extensions to the filename. The extensions are date-and-time based, using the strftime format %Y-%m-%d_%H-%M-%S or a leading portion thereof, depending on the rollover interval.
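You can see this mapping for yourself; a quick sketch (the path is illustrative) is to inspect the handler's suffix and extMatch attributes for a given when value:

from logging.handlers import TimedRotatingFileHandler

# illustrative path; the handler only needs a writable location
h = TimedRotatingFileHandler('logs/app.log', when='M', interval=1, backupCount=3)
print(h.suffix)            # strftime pattern appended to rotated backups
print(h.extMatch.pattern)  # regex the handler uses to recognise its own backups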
Instead, what you want is a base filename that itself carries the timestamp, and for older log files (not backup files) to be deleted whenever a new log file with a different name is opened. For this you'd have to create a custom handler.
Luckily, the class hierarchy is specifically designed for easy customisation. You can subclass BaseRotatingHandler here, and provide your own deletion logic:
import os
import time
from itertools import islice
from logging.handlers import BaseRotatingHandler, TimedRotatingFileHandler

# rotation intervals in seconds
_intervals = {
    "S": 1,
    "M": 60,
    "H": 60 * 60,
    "D": 60 * 60 * 24,
    "MIDNIGHT": 60 * 60 * 24,
    "W": 60 * 60 * 24 * 7,
}

class TimedPatternFileHandler(BaseRotatingHandler):
    """File handler that uses the current time in the log filename.

    The time is quantised to a configured interval. See
    TimedRotatingFileHandler for the meaning of the when, interval, utc and
    atTime arguments.

    If backupCount is non-zero, then older filenames that match the base
    filename are deleted to only leave the backupCount most recent copies,
    whenever opening a new log file with a different name.

    """

    def __init__(
        self,
        filenamePattern,
        when="h",
        interval=1,
        backupCount=0,
        encoding=None,
        delay=False,
        utc=False,
        atTime=None,
    ):
        self.when = when.upper()
        self.backupCount = backupCount
        self.utc = utc
        self.atTime = atTime

        try:
            key = "W" if self.when.startswith("W") else self.when
            self.interval = _intervals[key]
        except KeyError:
            raise ValueError(
                f"Invalid rollover interval specified: {self.when}"
            ) from None
        if self.when.startswith("W"):
            if len(self.when) != 2:
                raise ValueError(
                    "You must specify a day for weekly rollover from 0 to 6 "
                    f"(0 is Monday): {self.when}"
                )
            if not "0" <= self.when[1] <= "6":
                raise ValueError(
                    f"Invalid day specified for weekly rollover: {self.when}"
                )
            self.dayOfWeek = int(self.when[1])

        self.interval = self.interval * interval
        self.pattern = os.path.abspath(os.fspath(filenamePattern))

        # determine best time to base our rollover times on
        # prefer the creation time of the most recently created log file.
        t = now = time.time()
        entry = next(self._matching_files(), None)
        if entry is not None:
            t = entry.stat().st_ctime
            while t + self.interval < now:
                t += self.interval
        self.rolloverAt = self.computeRollover(t)

        # delete older files on startup and not delaying
        if not delay and backupCount > 0:
            keep = backupCount
            if os.path.exists(self.baseFilename):
                keep += 1
            delete = islice(self._matching_files(), keep, None)
            for entry in delete:
                os.remove(entry.path)

        # Will set self.baseFilename indirectly, and then may use
        # self.baseFilename to open. So by this point self.rolloverAt and
        # self.interval must be known.
        super().__init__(filenamePattern, "a", encoding, delay)

    @property
    def baseFilename(self):
        """Generate the 'current' filename to open"""
        # use the start of *this* interval, not the next
        t = self.rolloverAt - self.interval
        if self.utc:
            time_tuple = time.gmtime(t)
        else:
            time_tuple = time.localtime(t)
            dst = time.localtime(self.rolloverAt)[-1]
            if dst != time_tuple[-1] and self.interval > 3600:
                # DST switches between t and self.rolloverAt, adjust
                addend = 3600 if dst else -3600
                time_tuple = time.localtime(t + addend)

        return time.strftime(self.pattern, time_tuple)

    @baseFilename.setter
    def baseFilename(self, _):
        # assigned to by FileHandler, just ignore this as we use self.pattern
        # instead
        pass

    def _matching_files(self):
        """Generate DirEntry entries that match the filename pattern.

        The files are ordered by their last modification time, most recent
        files first.

        """
        matches = []
        pattern = self.pattern
        for entry in os.scandir(os.path.dirname(pattern)):
            if not entry.is_file():
                continue
            try:
                time.strptime(entry.path, pattern)
                matches.append(entry)
            except ValueError:
                continue

        matches.sort(key=lambda e: e.stat().st_mtime, reverse=True)
        return iter(matches)

    def doRollover(self):
        """Do a roll-over. This basically needs to open a new generated filename.
        """
        if self.stream:
            self.stream.close()
            self.stream = None

        if self.backupCount > 0:
            delete = islice(self._matching_files(), self.backupCount, None)
            for entry in delete:
                os.remove(entry.path)

        now = int(time.time())
        rollover = self.computeRollover(now)
        while rollover <= now:
            rollover += self.interval

        if not self.utc:
            # If DST changes and midnight or weekly rollover, adjust for this.
            if self.when == "MIDNIGHT" or self.when.startswith("W"):
                dst = time.localtime(now)[-1]
                if dst != time.localtime(rollover)[-1]:
                    rollover += 3600 if dst else -3600

        self.rolloverAt = rollover

        if not self.delay:
            self.stream = self._open()

    # borrow *some* TimedRotatingFileHandler methods
    computeRollover = TimedRotatingFileHandler.computeRollover
    shouldRollover = TimedRotatingFileHandler.shouldRollover
Use this with time.strftime() placeholders in the log filename, and those will be filled in for you:
handler = TimedPatternFileHandler("logs/%H-%M.log", when="M", backupCount=2)
Note that this cleans up older files when you create the instance.
I solved the issue; the problem in my case was in the way I named the log file.
Instead of using "example.log" I used "example" without the extension, like this:
logHandler = handlers.TimedRotatingFileHandler('example', when='M', interval=1, backupCount=2)
In the handlers.py file of the logging lib, specifically in the getFilesToDelete() method, I noticed a tiny comment which says exactly what not to do:
# See bpo-44753: Don't use the extension when computing the prefix.
Though the issue seems to have been fixed since, this answer is for people like me who are still searching for it in their case.
As other people have already indicated, backupCount will only work if you always log to a file with the same filename and rotate it every now and then. Then you will have log files like @Bsquare indicated.
However, in my case I needed to rotate every day and have my log files carry the following names: 2019-07-06.log, 2019-07-07.log, 2019-07-08.log, ...
I found out that this is not possible with the current implementation of TimedRotatingFileHandler.
So I ended up creating my own deletion functionality that suits my needs on top of FileHandler.
This is a simple example of a logger class that uses FileHandler and will make sure old logs files are deleted every time you create an instance of this class:
import os
import datetime
import logging
import re
import pathlib

class Logger:
    # Maximum number of logs to store
    LOGS_COUNT = 3
    # Directory to log to
    LOGS_DIRECTORY = "logs"

    def __init__(self):
        # Make sure logs directory is created
        self.__create_directory(Logger.LOGS_DIRECTORY)
        # Clean old logs every time you create a logger
        self.__clean_old_logs()

        self.logger = logging.getLogger("Logger")
        # This condition makes sure logger handlers are initialized only once when this object is created
        if not self.logger.handlers:
            self.logger.setLevel(logging.INFO)
            formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
            file_handler = logging.FileHandler("logs/{:%Y-%m-%d}.log".format(datetime.datetime.now()))
            file_handler.setFormatter(formatter)
            self.logger.addHandler(file_handler)

    def log_info(self, message):
        self.logger.info(message)

    def log_error(self, message):
        self.logger.error(message)

    def __clean_old_logs(self):
        for name in self.__get_old_logs():
            path = os.path.join(Logger.LOGS_DIRECTORY, name)
            self.__delete_file(path)

    def __get_old_logs(self):
        logs = [name for name in self.__get_file_names(Logger.LOGS_DIRECTORY)
                if re.match(r"([12]\d{3}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01]))\.log", name)]
        logs.sort(reverse=True)
        return logs[Logger.LOGS_COUNT:]

    def __get_file_names(self, path):
        return [item.name for item in pathlib.Path(path).glob("*") if item.is_file()]

    def __delete_file(self, path):
        os.remove(path)

    def __create_directory(self, directory):
        if not os.path.exists(directory):
            os.makedirs(directory)
And then you would use it like so:
logger = Logger()
logger.log_info("This is a log message")

how to create a log file by current date with logging.handlers.TimedRotatingFileHandler

I run a list of tasks once an hour, but I want the log file to be named by the current date.
def get_mylogger():
    # get logger
    fmt = '%(asctime)-15s %(levelname)-4s %(message)s'
    datefmt = '%Y-%m-%d %H:%M:%S'
    mylogger = logging.getLogger()
    mylogger.setLevel(logging.INFO)

    # log_path = "/opt/spark/logs/pyspark/"
    log_path = r"H:\upupw\www\spark\logs\pyspark"
    if not os.path.exists(log_path):
        os.makedirs(log_path)
    log_file_name = 'spark.log'
    log_name = os.path.join(log_path, log_file_name)

    # TimedRotatingFileHandler
    timer = TimedRotatingFileHandler(log_name, when='D')
    formatter = logging.Formatter(fmt, datefmt=datefmt)
    timer.setFormatter(formatter)
    mylogger.addHandler(timer)
    return mylogger
If I create the first log file 'spark.log' at 10:00:00, it won't create a new file until 10:00:00 tomorrow. What I want is for it to create a new file tomorrow at 0:00!
According to the logging documentation for TimedRotatingFileHandler you might want to use the additional parameter atTime within your TimedRotatingFileHandler function call like this:
timer = TimedRotatingFileHandler(log_name, when='D', atTime=datetime.time(0, 0, 0))
As described in the docs:
If atTime is not None, it must be a datetime.time instance which specifies the time of day when rollover occurs, for the cases where rollover is set to happen “at midnight” or “on a particular weekday”. Note that in these cases, the atTime value is effectively used to compute the initial rollover, and subsequent rollovers would be calculated via the normal interval calculation.
...you need to provide the rollover time as an instance of datetime.time. This is done by passing the wanted rollover time as arguments to the datetime.time class: pass the hours as the first, minutes as the second, and seconds as the third argument. The above example sets the rollover time to 00:00:00.
Note: be sure to
import datetime
at the beginning of your code.

Automatically delete old Python log files

I have a Python program that runs daily. I'm using the logging module with FileHandler to write logs to a file. I would like each run's logs to be in its own file with a timestamp. However, I want to delete old files (say > 3 months) to avoid filling the disk.
I've looked at the RotatingFileHandler and TimedRotatingFileHandler but I don't want a single run's logs to be split across multiple files, even if a single run were to take days. Is there a built-in method for that?
The logging module has a built-in TimedRotatingFileHandler:

# import modules
import logging
from logging.handlers import TimedRotatingFileHandler
from logging import Formatter

# get named logger
logger = logging.getLogger(__name__)

# create handler
handler = TimedRotatingFileHandler(filename='runtime.log', when='D', interval=1, backupCount=90, encoding='utf-8', delay=False)

# create formatter and add to handler
formatter = Formatter(fmt='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)

# add the handler to named logger
logger.addHandler(handler)

# set the logging level
logger.setLevel(logging.INFO)

# --------------------------------------
# log something
logger.info("test")
Old logs automatically get a timestamp appended.
Every day a new backup will be created.
If more than 91 (current + backups) files exist, the oldest will be deleted.
import logging
import time
from logging.handlers import RotatingFileHandler

logFile = 'test-' + time.strftime("%Y%m%d-%H%M%S") + '.log'

logger = logging.getLogger('my_logger')
handler = RotatingFileHandler(logFile, mode='a', maxBytes=50*1024*1024,
                              backupCount=5, encoding=None, delay=False)
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

for _ in range(10000):
    logger.debug("Hello, world!")
As suggested by @MartijnPieters in this question, you could easily extend the FileHandler class in order to handle your own deletion logic.
For example, my class will keep only the last backup_count files.
import os
import re
import datetime
import logging
from itertools import islice

class TimedPatternFileHandler(logging.FileHandler):
    """File handler that uses the current time for the log filename,
    by formatting the current datetime, according to filename_pattern, using
    the strftime function.

    If backup_count is non-zero, then older filenames that match the base
    filename are deleted to only leave the backup_count most recent copies,
    whenever opening a new log file with a different name.

    """

    def __init__(self, filename_pattern, mode, backup_count):
        self.filename_pattern = os.path.abspath(filename_pattern)
        self.backup_count = backup_count
        self.filename = datetime.datetime.now().strftime(self.filename_pattern)

        delete = islice(self._matching_files(), self.backup_count, None)
        for entry in delete:
            # print(entry.path)
            os.remove(entry.path)
        super().__init__(filename=self.filename, mode=mode)

    @property
    def filename(self):
        """Generate the 'current' filename to open"""
        # use the start of *this* interval, not the next
        return datetime.datetime.now().strftime(self.filename_pattern)

    @filename.setter
    def filename(self, _):
        pass

    def _matching_files(self):
        """Generate DirEntry entries that match the filename pattern.

        The files are ordered by their last modification time, most recent
        files first.

        """
        matches = []
        basename = os.path.basename(self.filename_pattern)
        pattern = re.compile(re.sub('%[a-zA-Z]', '.*', basename))

        for entry in os.scandir(os.path.dirname(self.filename_pattern)):
            if not entry.is_file():
                continue
            entry_basename = os.path.basename(entry.path)
            if re.match(pattern, entry_basename):
                matches.append(entry)

        matches.sort(key=lambda e: e.stat().st_mtime, reverse=True)
        return iter(matches)

def create_timed_rotating_log(path):
    """"""
    logger = logging.getLogger("Rotating Log")
    logger.setLevel(logging.INFO)

    handler = TimedPatternFileHandler('{}_%H-%M-%S.log'.format(path), mode='a', backup_count=5)
    logger.addHandler(handler)
    logger.info("This is a test!")
Get the date/time (see this answer on how to get the timestamp). If the file is older than the current date by 3 months, then delete it with:
import os
os.remove("filename.extension")
Save this file with py2exe, then just use any task scheduler to run this job at startup.
Windows: open the run command and enter shell:startup, then place your exe there.
On OSX: the old way used to be to create a cron job; in my experience this no longer works in many cases, but it is still worth trying. The new way recommended by Apple is CreatingLaunchdJobs. You can also refer to this topic for a more detailed explanation.
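A minimal sketch of that cleanup step (the directory name and the 90-day cutoff are assumptions, not from the answer above) could look like this:

import os
import time

LOG_DIR = "logs"          # assumed log directory
MAX_AGE = 90 * 24 * 3600  # roughly three months, in seconds

now = time.time()
for name in os.listdir(LOG_DIR):
    path = os.path.join(LOG_DIR, name)
    # delete regular files whose last modification is older than the cutoff
    if os.path.isfile(path) and now - os.path.getmtime(path) > MAX_AGE:
        os.remove(path)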

How to make log file copy in python?

I want to make a log file in Python similar to log4j,
meaning: as soon as the logger.log file gets to a size of 1K, make a copy of this file and call it logger(1).log. In case logger(1).log already exists, create logger(2).log, and of course delete logger.log so that the next run starts with a clean log.
This is my code, but it only works for the first creation of the logger file backup:
import os
import shutil

b = os.path.getsize('logger.log')
print(b)
if b >= 1000:
    shutil.copy2('logger.log', 'logger(1).log')
This is my log.py file so it can be used globally:
import os
import logging
from logging.config import fileConfig
from logging import handlers

def setup_custom_logger():
    configFolder = os.getcwd() + os.sep + 'Conf'
    fileConfig(configFolder + os.sep + 'logging_config.ini')
    logger = logging.getLogger()

    # create a file handler
    handler = logging.handlers.RotatingFileHandler('logger.log', maxBytes=1024, encoding="UTF-8")
    handler.doRollover()

    # create a logging format
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    return logger
You need to set up a RotatingFileHandler:
import logging
from logging import handlers
logger = logging.getLogger(__name__)
handler = handlers.RotatingFileHandler('logger.log', maxBytes=1000, backupCount=10, encoding="UTF-8")
handler.doRollover()
logger.addHandler(handler)
From the documentation:
You can use the maxBytes and backupCount values to allow the file to
rollover at a predetermined size. When the size is about to be
exceeded, the file is closed and a new file is silently opened for
output. Rollover occurs whenever the current log file is nearly
maxBytes in length.
You can use a RotatingFileHandler.
Such a handler can be added by doing something like this:
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger(__name__)
logger.addHandler(RotatingFileHandler(filename, maxBytes=1024, backupCount=10))
Once the log file reaches this size, a rollover will be done and the old log file will be saved with the names filename.log.1, filename.log.2, etc., up to filename.log.10.
Try using the Python logging module with the TimedRotatingFileHandler handler.
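A minimal sketch of that suggestion (the filename and retention count are placeholders, not from the question) might be:

import logging
from logging.handlers import TimedRotatingFileHandler

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# rotate at midnight, keep the ten most recent backups
handler = TimedRotatingFileHandler('logger.log', when='midnight', backupCount=10)
logger.addHandler(handler)
logger.info("rotating log configured")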

Need to do a daily log rotation (0utc) using Python

I'm an admitted noob to Python. I've written a little logger that takes data from the serial port and writes it to a log file. I've got a small procedure that opens the file for append, writes, then closes. I suspect this might not be the best way to do it, but it's what I've figured out so far.
I'd like to be able to have it automagically perform a log-rotate at 00 UTC, but so far, my attempts to do this with RotatingFileHandler have failed.
Here's what the code looks like:
import time, serial, logging, logging.handlers, os, sys
from datetime import *

CT12 = serial.Serial()
CT12.port = "/dev/ct12k"
CT12.baudrate = 2400
CT12.parity = 'E'
CT12.bytesize = 7
CT12.stopbits = 1
CT12.timeout = 3

logStart = datetime.now()
dtg = datetime.strftime(logStart, '%Y-%m-%d %H:%M:%S ')

ctlA = unichr(1)
bom = unichr(2)
eom = unichr(3)
bel = unichr(7)

CT12Name = [ctlA, 'CT12-NWC-test']
CT12Header = ['-Ceilometer Logfile \r\n', '-File created: ', dtg, '\r\n']

def write_ceilo(text):
    f = open('/data/CT12.log', 'a')
    f.write(text)
    f.close()

write_ceilo(''.join(CT12Header))

CT12.open()
discard = CT12.readlines()
#print (discard)

while CT12.isOpen():
    response = CT12.readline()
    if len(response) >= 3:
        if response[0] == '\x02':
            now = datetime.now()
            dtg = datetime.strftime(now, '-%Y-%m-%d %H:%M:%S\r\n')
            write_ceilo(dtg)
            write_ceilo(''.join(CT12Name))
            write_ceilo(response)
What can I do to make this rotate automatically, affixing either a date of rotation or a serial number for identification? I'm not looking to rotate any of these out, just keep a daily log file of the data (or maybe an hourly file?).
For anyone arriving via Google: please don't move the log file out from under the logger while it is in use by calling the system copy or move commands, etc.
What you are looking for is a TimedRotatingFileHandler:
import time
import logging
from logging.handlers import TimedRotatingFileHandler

# format the log entries
formatter = logging.Formatter('%(asctime)s %(name)s %(levelname)s %(message)s')

handler = TimedRotatingFileHandler('/path/to/logfile.log',
                                   when='midnight',
                                   backupCount=10)
handler.setFormatter(formatter)
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

# generate example messages
for i in range(10000):
    time.sleep(1)
    logger.debug('debug message')
    logger.info('informational message')
    logger.warn('warning')
    logger.error('error message')
    logger.critical('critical failure')
You can simply do this:
import os
import time
date1 = time.strftime('%Y%m%d%H%M%S')
cmd1= "cp logfile logfile{0}".format(date1)
cmd2= "cat /dev/null > logfile"
os.system(cmd1)
os.system(cmd2)
'logfile' is the name of the file. I have copied the older log to a new log file with a name based on the time and date, and then emptied the original file. If you want to rotate it every hour, put this script in cron.
For anyone who does not like the idea of rotating files, but simply wants to use a file handler that immediately writes to a file with a specific date in its name: it is not hard to write your own handler. Here is an example:
from datetime import datetime
from logging import FileHandler, LogRecord

class FileHandlerWithOneFilePerPeriod(FileHandler):
    """A handler which writes formatted logging records to files, one file per period."""

    def __init__(self, filename_pattern, mode='a', encoding=None, delay=False):
        """
        Constructs the file handler.

        :param filename_pattern: the filename. Use strftime() directives to specify the format of the period.
        For example, %Y%m%d can be used to generate one log file per day.
        :param mode: the mode to open the file before writing. Common values are 'w' for writing (truncating the file
        if it already exists), 'x' for creating and writing to a new file, and 'a' for appending (which on some Unix
        systems, means that all writes append to the end of the file regardless of the current seek position).
        :param encoding: encoding is the name of the encoding used to decode or encode the file. This should only be
        used in text mode.
        :param delay: True if the file is opened when the first log message is emitted; False if the file is opened now
        by the constructor.
        """
        self.filename_pattern = filename_pattern
        filename = datetime.now().strftime(self.filename_pattern)
        super().__init__(filename, mode, encoding, delay)

    def emit(self, record: LogRecord):
        new_filename = datetime.fromtimestamp(record.created).strftime(self.filename_pattern)
        if self.stream is None:
            self.set_new_filename(new_filename)
        elif self.differs_from_current_filename(new_filename):
            self.close()
            self.set_new_filename(new_filename)
        super().emit(record)

    def set_new_filename(self, new_filename):
        self.baseFilename = new_filename

    def differs_from_current_filename(self, filename: str) -> bool:
        return filename != self.baseFilename
To use this handler, configure it with the following values (shown here in YAML form) passed as a dictionary to logging.config.dictConfig():
version: 1

formatters:
  simple:
    format: '%(asctime)s %(name)s %(levelname)s %(message)s'

handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
    stream: ext://sys.stdout
  file:
    class: my_package.my_module.FileHandlerWithOneFilePerPeriod
    level: DEBUG
    formatter: simple
    filename_pattern: my_logging-%Y%m%d.log

root:
  level: DEBUG
  handlers: [console, file]
This will log to console and to file. One file per day is used. Change my_package and my_module to match the module where you put the handler. Change my_logging to a more appropriate name.
By changing the date pattern in filename_pattern you control when new files are created. Each time the pattern applied to the datetime of a new log message differs from the previously applied result, a new file is created.
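If you prefer to wire the handler up in code rather than via dictConfig, a minimal sketch (the filename pattern is only an example) would be:

import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

# one log file per day, e.g. my_logging-20240101.log
handler = FileHandlerWithOneFilePerPeriod('my_logging-%Y%m%d.log')
handler.setFormatter(logging.Formatter('%(asctime)s %(name)s %(levelname)s %(message)s'))
logger.addHandler(handler)

logger.info("this goes to today's file")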
