I'm taking my first steps in Python and have reached the point where I need a logging module. The reasons I didn't opt for a rotating file handler are:
I wanted a new folder to be created each time the code is run, hosting the new log files.
Using conventional filenames (myName_X.log, and not the default myName.log.X).
Limit log files by number of lines, and not file size (as done by the rotating file handler).
I've written such a module, using Python's built-in logging module, but I have two problems:
The new folder and file are created, and the logging data is printed into the file. However, when main() is run for the second time (see code below), the newly created file is locked, and cannot be deleted from the file explorer unless I close the IDE or release the lock through Process Explorer.
The IPython interpreter freezes the second time I run main(). If I try the pdb module, it freezes as well.
I'm using WinPython 3.3.5 (with Spyder 2.3.0beta). I spent hours trying to find a solution to this problem. I don't know if it is a problem in my code or rather a bug with Spyder.
General coding remarks are always welcome.
main_example.py:
import myLogging

def main():
    try:
        myLoggerInstance = myLogging.MyLogger()

        # Do stuff...

        # logging example
        for i in range(0, 3):
            msg = 'Jose Halapeno on a stick {0}'.format(i)
            myLoggerInstance.WriteLog('DEBUG', msg)
        print('end of prints...')
    finally:
        myLoggerInstance._closeFileHandler()
        print('closed file handle...')

if __name__ == "__main__":
    main()
myLogging.py:
import logging
import time
import os


class MyLogger:
    _linesCounter = 0
    _nNumOfLinesPerFile = 100000
    _fileCounter = 0
    _dirnameBase = os.path.dirname(os.path.abspath(__file__))
    _dirname = ''
    _filenameBase = 'logfile_{0}.log'
    _logger = logging.getLogger('innerGnnLogger')
    _severityDict = {'CRITICAL': logging.CRITICAL, 'ERROR': logging.ERROR,
                     'WARNING': logging.WARNING, 'INFO': logging.INFO,
                     'DEBUG': logging.DEBUG}

    def __init__(self):
        # remove file handle
        MyLogger._closeFileHandler()

        # create folder for session
        MyLogger._dirname = MyLogger._dirnameBase + \
            time.strftime("\\logs_%Y_%m_%d-%H_%M_%S\\")
        MyLogger._dirname = MyLogger._dirname.replace('\\', '/')
        if not os.path.exists(MyLogger._dirname):
            os.makedirs(MyLogger._dirname)

        # set logger
        MyLogger._logger.setLevel(logging.DEBUG)

        # create console handler and set level to debug
        MyLogger._hConsole = logging.StreamHandler()
        MyLogger._hFile = logging.FileHandler(MyLogger._dirname +
                              MyLogger._filenameBase.format(MyLogger._fileCounter))
        MyLogger._hConsole.setLevel(logging.WARNING)
        MyLogger._hFile.setLevel(logging.DEBUG)

        # create formatter
        MyLogger._formatter = logging.Formatter('%(asctime)s %(filename)s, %(funcName)s, %(lineno)s, %(levelname)s: %(message)s')

        # add formatter to handlers
        MyLogger._hConsole.setFormatter(MyLogger._formatter)
        MyLogger._hFile.setFormatter(MyLogger._formatter)

        # add handlers to logger
        MyLogger._logger.addHandler(MyLogger._hConsole)
        MyLogger._logger.addHandler(MyLogger._hFile)

    @staticmethod
    def _StartNewFileHandler():
        MyLogger._closeFileHandler()

        # create new file handler
        MyLogger._fileCounter += 1
        MyLogger._hFile = logging.FileHandler(MyLogger._dirname +
                              MyLogger._filenameBase.format(MyLogger._fileCounter))
        MyLogger._hFile.setLevel(logging.DEBUG)
        MyLogger._hFile.setFormatter(MyLogger._formatter)
        MyLogger._logger.addHandler(MyLogger._hFile)

    @staticmethod
    def WriteLog(severity, message):
        if len(MyLogger._logger.handlers) < 2:
            MyLogger._StartNewFileHandler()
        MyLogger._linesCounter += 1
        MyLogger._logger.log(MyLogger._severityDict[severity], message)
        if MyLogger._linesCounter >= MyLogger._nNumOfLinesPerFile:
            MyLogger._logger.info('Last line in file')
            MyLogger._StartNewFileHandler()
            MyLogger._linesCounter = 0

    @staticmethod
    def _closeFileHandler():
        if len(MyLogger._logger.handlers) > 1:
            MyLogger._logger.info('Last line in file')
            MyLogger._logger.handlers[1].stream.close()
            MyLogger._logger.removeHandler(MyLogger._logger.handlers[1])
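One likely culprit for the lock: the code assumes the file handler always sits at handlers[1]. On a re-run in the same interpreter (as Spyder does), getLogger('innerGnnLogger') returns the same logger object, __init__ adds a second console handler, and handlers[1] is then a console handler, so the FileHandler is never closed and keeps the file locked. A minimal cleanup sketch that looks handlers up by type instead of by index (the function name is mine, not from the question):

import logging

def close_file_handlers(logger):
    """Close and detach every FileHandler attached to `logger`.

    Handler.close() closes the underlying stream and also removes the
    handler from logging's internal shutdown list, so prefer it over
    closing handler.stream directly.
    """
    for handler in list(logger.handlers):
        if isinstance(handler, logging.FileHandler):
            handler.close()
            logger.removeHandler(handler)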
Related
I am trying to have two different handlers, where one handler prints the logs to the console and the other writes them to a file. The console handler comes from the modbus-tk library and I have written my own file handler.
LOG = utils.create_logger(name="console", record_format="%(message)s")  # from the modbus-tk library
LOG = utils.create_logger("console", level=logging.INFO)

logging.basicConfig(filename="log", level=logging.DEBUG)
log = logging.getLogger("simulator")
handler = RotatingFileHandler("log", maxBytes=5000, backupCount=1)
log.addHandler(handler)
What I need:
LOG.info("This will print message on console")
log.info("This will print message in file")
But the problem is that both logs are printed to the console and both go to the file. I want only LOG to be printed on the console and only log to be written to the file.
EDIT:
Adding the snippet from utils.create_logger:
def create_logger(name="dummy", level=logging.DEBUG, record_format=None):
    """Create a logger according to the given settings"""
    if record_format is None:
        record_format = "%(asctime)s\t%(levelname)s\t%(module)s.%(funcName)s\t%(threadName)s\t%(message)s"

    logger = logging.getLogger("modbus_tk")
    logger.setLevel(level)
    formatter = logging.Formatter(record_format)
    if name == "udp":
        log_handler = LogitHandler(("127.0.0.1", 1975))
    elif name == "console":
        log_handler = ConsoleHandler()
    elif name == "dummy":
        log_handler = DummyHandler()
    else:
        raise Exception("Unknown handler %s" % name)
    log_handler.setFormatter(formatter)
    logger.addHandler(log_handler)
    return logger
I have my own customized logging module. I have modified it a little and I think it now fits your problem. It is fully configurable and can handle multiple different handlers.
If you want to combine console and file logging, you only need to remove the return statement (this is how I use it).
I have added comments to the code to make it easier to follow, and you can find a test section in the if __name__ == "__main__": ... statement.
Code:
import logging
import os


# Custom logger class with multiple destinations
class CustomLogger(logging.Logger):
    """
    Customized Logger class from the original logging.Logger class.
    """

    # Format for console log
    FORMAT = (
        "[%(name)-30s][%(levelname)-19s] | %(message)-100s "
        "| (%(filename)s:%(lineno)d)"
    )

    # Format for log file
    LOG_FILE_FORMAT = "[%(name)s][%(levelname)s] | %(message)s " "| %(filename)s:%(lineno)d)"

    def __init__(
        self,
        name,
        log_file_path=None,
        console_level=logging.INFO,
        log_file_level=logging.DEBUG,
        log_file_open_format="w",
    ):
        logging.Logger.__init__(self, name)

        consol_color_formatter = logging.Formatter(self.FORMAT)

        # If the "log_file_path" parameter is provided,
        # the logs will be visible only in the log file.
        if log_file_path:
            fh_formatter = logging.Formatter(self.LOG_FILE_FORMAT)
            file_handler = logging.FileHandler(log_file_path, mode=log_file_open_format)
            file_handler.setLevel(log_file_level)
            file_handler.setFormatter(fh_formatter)
            self.addHandler(file_handler)
            return

        # If the "log_file_path" parameter is not provided,
        # the logs will be visible only in the console.
        console = logging.StreamHandler()
        console.setLevel(console_level)
        console.setFormatter(consol_color_formatter)
        self.addHandler(console)


if __name__ == "__main__":  # pragma: no cover
    current_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), "test_log.log")
    console_logger = CustomLogger(__file__, console_level=logging.INFO)
    file_logger = CustomLogger(__file__, log_file_path=current_dir, log_file_level=logging.DEBUG)
    console_logger.info("test_to_console")
    file_logger.info("test_to_file")
Console output:
>>> python3 test.py
[test.py][INFO ] | test_to_console | (test.py:55)
Content of test_log.log file:
[test.py][INFO] | test_to_file | test.py:56)
If something is not clear or you have a question/remark, let me know and I will try to help.
EDIT:
If you change getLogger to Logger in your implementation, it will work.
Code:
import logging
import logging


def create_logger(name="dummy", level=logging.DEBUG, record_format=None):
    """Create a logger according to the given settings"""
    if record_format is None:
        record_format = "%(asctime)s\t%(levelname)s\t%(module)s.%(funcName)s\t%(threadName)s\t%(message)s"

    logger = logging.Logger("modbus_tk")
    logger.setLevel(level)
    formatter = logging.Formatter(record_format)
    if name == "console":
        log_handler = logging.StreamHandler()
    else:
        raise Exception("Wrong type of handler")
    log_handler.setFormatter(formatter)
    logger.addHandler(log_handler)
    return logger


console_logger = create_logger(name="console")

# logging.basicConfig(filename="log", level=logging.DEBUG)
file_logger = logging.Logger("simulator")
handler = logging.FileHandler("log", "w")
file_logger.addHandler(handler)

console_logger.info("info to console")
file_logger.info("info to file")
Console output:
>>> python3 test.py
2019-12-16 13:10:45,963 INFO test.<module> MainThread info to console
Content of log file:
info to file
There are a few problems in your code, and without seeing the whole configuration it is hard to tell what exactly causes this, but most likely the logs are being propagated.
First of all, when you call basicConfig you configure the root logger, telling it to create a FileHandler with the filename log; just two lines later you create a RotatingFileHandler that uses the same file. Both loggers are now writing to the same file.
I find it always helps to understand the flow of how logging works in python: https://docs.python.org/3/howto/logging.html#logging-flow
And if you don't want all logs to be sent to the root logger too, you should set LOG.propagate = False. That stops this logger from propagating its logs.
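A minimal sketch of that fix (the filenames and logger names here are mine, not from the question):

import logging

logging.basicConfig(filename="app.log", level=logging.DEBUG)  # root logger -> file

LOG = logging.getLogger("console_only")
LOG.addHandler(logging.StreamHandler())
LOG.propagate = False            # records stop bubbling up to the root handlers

LOG.info("console only")         # goes to stderr, not to app.log
logging.getLogger("simulator").info("file only")  # handled by the root FileHandler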
I have a Python program that runs daily. I'm using the logging module with FileHandler to write logs to a file. I would like each run's logs to be in its own file with a timestamp. However, I want to delete old files (say > 3 months) to avoid filling the disk.
I've looked at the RotatingFileHandler and TimedRotatingFileHandler but I don't want a single run's logs to be split across multiple files, even if a single run were to take days. Is there a built-in method for that?
The logging module has a built-in TimedRotatingFileHandler:
# import modules
import logging
from logging.handlers import TimedRotatingFileHandler
from logging import Formatter

# get named logger
logger = logging.getLogger(__name__)

# create handler
handler = TimedRotatingFileHandler(filename='runtime.log', when='D', interval=1, backupCount=90, encoding='utf-8', delay=False)

# create formatter and add to handler
formatter = Formatter(fmt='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)

# add the handler to named logger
logger.addHandler(handler)

# set the logging level
logger.setLevel(logging.INFO)

# --------------------------------------
# log something
logger.info("test")
Old logs automatically get a timestamp appended.
Every day a new backup will be created.
If more than 91 files (current + backups) exist, the oldest will be deleted.
import logging
import time
from logging.handlers import RotatingFileHandler

logFile = 'test-' + time.strftime("%Y%m%d-%H%M%S") + '.log'

logger = logging.getLogger('my_logger')
handler = RotatingFileHandler(logFile, mode='a', maxBytes=50*1024*1024,
                              backupCount=5, encoding=None, delay=False)
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

for _ in range(10000):
    logger.debug("Hello, world!")
As suggested by @MartijnPieters in this question, you could easily extend the FileHandler class to handle your own deletion logic.
For example, my class will keep only the most recent backup_count files.
import os
import re
import datetime
import logging
from itertools import islice


class TimedPatternFileHandler(logging.FileHandler):
    """File handler that uses the current time for the log filename,
    by formatting the current datetime, according to filename_pattern, using
    the strftime function.

    If backup_count is non-zero, then older filenames that match the base
    filename are deleted to only leave the backup_count most recent copies,
    whenever opening a new log file with a different name.
    """

    def __init__(self, filename_pattern, mode, backup_count):
        self.filename_pattern = os.path.abspath(filename_pattern)
        self.backup_count = backup_count
        self.filename = datetime.datetime.now().strftime(self.filename_pattern)

        delete = islice(self._matching_files(), self.backup_count, None)
        for entry in delete:
            # print(entry.path)
            os.remove(entry.path)
        super().__init__(filename=self.filename, mode=mode)

    @property
    def filename(self):
        """Generate the 'current' filename to open"""
        # use the start of *this* interval, not the next
        return datetime.datetime.now().strftime(self.filename_pattern)

    @filename.setter
    def filename(self, _):
        pass

    def _matching_files(self):
        """Generate DirEntry entries that match the filename pattern.

        The files are ordered by their last modification time, most recent
        files first.
        """
        matches = []
        basename = os.path.basename(self.filename_pattern)
        pattern = re.compile(re.sub('%[a-zA-Z]', '.*', basename))

        for entry in os.scandir(os.path.dirname(self.filename_pattern)):
            if not entry.is_file():
                continue
            entry_basename = os.path.basename(entry.path)
            if re.match(pattern, entry_basename):
                matches.append(entry)
        matches.sort(key=lambda e: e.stat().st_mtime, reverse=True)
        return iter(matches)


def create_timed_rotating_log(path):
    """"""
    logger = logging.getLogger("Rotating Log")
    logger.setLevel(logging.INFO)

    handler = TimedPatternFileHandler('{}_%H-%M-%S.log'.format(path), mode='a', backup_count=5)
    logger.addHandler(handler)
    logger.info("This is a test!")
Get the date/time (see this answer on how to get the timestamp). If the file is older than the current date by 3 months, delete it with:

import os
os.remove("filename.extension")
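Putting those steps together, a minimal sketch (the directory name and the 90-day cutoff are assumptions, not from the answer):

import os
import time

LOG_DIR = "logs"                        # hypothetical log directory
cutoff = time.time() - 90 * 24 * 3600   # roughly 3 months ago, in seconds

for name in os.listdir(LOG_DIR):
    path = os.path.join(LOG_DIR, name)
    # compare the file's last-modification time against the cutoff
    if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
        os.remove(path)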
Save this script as an executable (e.g. with py2exe), then use any task scheduler to run the job at startup.
Windows: open the Run dialog, enter shell:startup, and place your exe there.
On OSX: the old way used to be to create a cron job; in my experience this no longer works in many cases, but it is still worth trying. The new way recommended by Apple is CreatingLaunchdJobs. You can also refer to this topic for a more detailed explanation.
I'm an admitted noob to Python. I've written a little logger that takes data from the serial port and writes it to a log file. I've got a small procedure that opens the file for append, writes, then closes. I suspect this might not be the best way to do it, but it's what I've figured out so far.
I'd like to be able to have it automagically perform a log-rotate at 00 UTC, but so far, my attempts to do this with RotatingFileHandler have failed.
Here's what the code looks like:
import time, serial, logging, logging.handlers, os, sys
from datetime import *

CT12 = serial.Serial()
CT12.port = "/dev/ct12k"
CT12.baudrate = 2400
CT12.parity = 'E'
CT12.bytesize = 7
CT12.stopbits = 1
CT12.timeout = 3

logStart = datetime.now()
dtg = datetime.strftime(logStart, '%Y-%m-%d %H:%M:%S ')

ctlA = unichr(1)
bom = unichr(2)
eom = unichr(3)
bel = unichr(7)
CT12Name = [ctlA, 'CT12-NWC-test']
CT12Header = ['-Ceilometer Logfile \r\n', '-File created: ', dtg, '\r\n']

def write_ceilo(text):
    f = open('/data/CT12.log', 'a')
    f.write(text)
    f.close()

write_ceilo(''.join(CT12Header))

CT12.open()
discard = CT12.readlines()
# print(discard)

while CT12.isOpen():
    response = CT12.readline()
    if len(response) >= 3:
        if response[0] == '\x02':
            now = datetime.now()
            dtg = datetime.strftime(now, '-%Y-%m-%d %H:%M:%S\r\n')
            write_ceilo(dtg)
            write_ceilo(''.join(CT12Name))
            write_ceilo(response)
What can I do to make this rotate automatically, affixing either a date of rotation or a serial number for identification? I'm not looking to rotate any of these out; I just want to keep a daily log file of the data (or maybe an hourly file?).
For anyone arriving via Google: please don't move the log file out from under the logger while it is in use, e.g. by calling the system copy or move commands.
What you are looking for is a TimedRotatingFileHandler:
import time
import logging
from logging.handlers import TimedRotatingFileHandler

# format the log entries
formatter = logging.Formatter('%(asctime)s %(name)s %(levelname)s %(message)s')

handler = TimedRotatingFileHandler('/path/to/logfile.log',
                                   when='midnight',
                                   backupCount=10)
handler.setFormatter(formatter)
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

# generate example messages
for i in range(10000):
    time.sleep(1)
    logger.debug('debug message')
    logger.info('informational message')
    logger.warning('warning')
    logger.error('error message')
    logger.critical('critical failure')
You can simply do this:
import os
import time

date1 = time.strftime('%Y%m%d%H%M%S')

cmd1 = "cp logfile logfile{0}".format(date1)
cmd2 = "cat /dev/null > logfile"

os.system(cmd1)
os.system(cmd2)
'logfile' is the name of the file. I copy the old log to a new log file named with the date and time, and then empty the original file. If you want to rotate it every hour, run this script from cron.
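The same idea can be written portably without shelling out; a sketch (the filename 'logfile' is taken from the answer above):

import shutil
import time

stamp = time.strftime('%Y%m%d%H%M%S')
shutil.copy("logfile", "logfile{0}".format(stamp))  # keep a timestamped copy
open("logfile", "w").close()                        # truncate the original in place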
For anyone who does not like the idea of rotating files, but simply wants to use a file handler that immediately writes to a file with a specific date in its name: it is not hard to write your own handler. Here is an example:
from datetime import datetime
from logging import FileHandler, LogRecord


class FileHandlerWithOneFilePerPeriod(FileHandler):
    """A handler which writes formatted logging records to files, one file per period."""

    def __init__(self, filename_pattern, mode='a', encoding=None, delay=False):
        """
        Constructs the file handler.

        :param filename_pattern: the filename. Use strftime() directives to specify the format of the period.
        For example, %Y%m%d can be used to generate one log file per day.
        :param mode: the mode to open the file before writing. Common values are 'w' for writing (truncating the file
        if it already exists), 'x' for creating and writing to a new file, and 'a' for appending (which on some Unix
        systems, means that all writes append to the end of the file regardless of the current seek position).
        :param encoding: encoding is the name of the encoding used to decode or encode the file. This should only be
        used in text mode.
        :param delay: True if the file is opened when the first log message is emitted; False if the file is opened now
        by the constructor.
        """
        self.filename_pattern = filename_pattern
        filename = datetime.now().strftime(self.filename_pattern)
        super().__init__(filename, mode, encoding, delay)

    def emit(self, record: LogRecord):
        new_filename = datetime.fromtimestamp(record.created).strftime(self.filename_pattern)
        if self.stream is None:
            self.set_new_filename(new_filename)
        elif self.differs_from_current_filename(new_filename):
            self.close()
            self.set_new_filename(new_filename)
        super().emit(record)

    def set_new_filename(self, new_filename):
        self.baseFilename = new_filename

    def differs_from_current_filename(self, filename: str) -> bool:
        return filename != self.baseFilename
To use this handler, configure it with the following values (for example from YAML, loaded into a dictionary and passed to logging.config.dictConfig()):
version: 1
formatters:
  simple:
    format: '%(asctime)s %(name)s %(levelname)s %(message)s'
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
    stream: ext://sys.stdout
  file:
    class: my_package.my_module.FileHandlerWithOneFilePerPeriod
    level: DEBUG
    formatter: simple
    filename_pattern: my_logging-%Y%m%d.log
root:
  level: DEBUG
  handlers: [console, file]
This will log to the console and to a file, with one file per day. Change my_package and my_module to match the module where you put the handler, and change my_logging to a more appropriate name.
By changing the date pattern in filename_pattern you control when new files are created: each time the pattern applied to the record's datetime differs from the previous result, a new file is created.
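For completeness, a sketch of applying that configuration (the filename logging.yaml and the use of PyYAML are assumptions):

import logging
import logging.config

import yaml  # PyYAML, assumed installed

with open("logging.yaml") as f:
    logging.config.dictConfig(yaml.safe_load(f))

logging.getLogger(__name__).info("goes to the console and to the daily file")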
I have a python script that I run to populate my database. I usually run the script inside shell_plus because of the dependencies required. Is there a way to load the script into shell_plus and run everything from my linux command line without actually opening the shell_plus interface?
"Standalone Django scripts"
You bet!
I don't even recommend using shell_plus. I tend to store my utility scripts in my app's utility folder, then simply call them from a cron job or manually as needed. Here is the framework script I base these on. (Somewhat simplified)
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import os
import sys
import logging
import time
import optparse

# DO NOT IMPORT DJANGO MODELS HERE - THIS NEEDS TO HAPPEN BELOW!!
# This needs to be able to be run when django isn't in the picture (cron) so we need
# to be able to add in the django paths when needed.

def getArgs():
    """
    Simply get the options for running update people.
    """
    p = optparse.OptionParser()

    help = "The Python path to a settings module, e.g. "
    help += "\"myproject.settings.main\". If this isn't provided, the "
    help += "DJANGO_SETTINGS_MODULE environment variable will be used."
    p.add_option("-s", "--settings", action="store", type="string",
                 dest="settings", help=help)

    help = "A directory to add to the Python path, e.g."
    help += " \"/home/djangoprojects/myproject\"."
    p.add_option("-y", "--pythonpath", action="store", type="string",
                 dest="pythonpath", help=help)

    p.add_option("-v", "--verbose", action="count", dest="verbose",
                 help="Turn on verbose debugging")

    # referenced below via opt.multiple
    p.add_option("-m", "--multiple", action="store_true", dest="multiple",
                 help="Allow multiple instances to run")

    p.set_defaults(settings=os.environ.get("DJANGO_SETTINGS_MODULE",
                                           "settings"),
                   pythonpath="", verbose=0, multiple=False,
                   )
    return p.parse_args()

def update(opt, loglevel=None):
    """
    This is the main script used for updating people
    """
    start = time.time()

    # This ensures that our sys.path is ready.
    for path in opt.pythonpath.split(":"):
        if os.path.abspath(path) not in sys.path and os.path.isdir(path):
            sys.path.append(os.path.abspath(path))

    os.environ['DJANGO_SETTINGS_MODULE'] = opt.settings
    from django.conf import settings

    try:
        if settings.SITE_ROOT not in sys.path: pass
    except ImportError:
        return("Your setting file cannot be imported - not in sys.path??")

    # IMPORT YOUR CODE MODELS HERE
    from apps.core.utility.ExampleExtractor import ExampleExtractor

    # YOUR DJANGO STUFF GOES HERE..
    example = ExampleExtractor(loglevel=loglevel, singleton=not(opt.multiple))
    raw = example.get_raw()
    results = example.update_django(raw)
    log.info("Time to update %s entries : %s" % (len(results), time.time() - start))

    return results

if __name__ == '__main__':
    logging.basicConfig(format="%(asctime)s %(levelname)-8s %(module)s \
%(funcName)s %(message)s", datefmt="%H:%M:%S", stream=sys.stderr)
    log = logging.getLogger("")
    log.setLevel(logging.DEBUG)

    opts, args = getArgs()
    sys.exit(update(opts))
HTH!
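On current Django versions the same bootstrapping can be condensed; a minimal sketch (the settings path, app, and model are hypothetical):

import os

# point Django at your settings before importing anything from your apps
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

import django
django.setup()  # required since Django 1.7 before touching the ORM

from myapp.models import Person  # hypothetical app and model

def populate():
    Person.objects.get_or_create(name="example")

if __name__ == "__main__":
    populate()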
Is there a way to do this? If logging.config.fileConfig('some.log') is the setter, what's the getter? Just curious if this exists.
For my basic usage of a single file log, this worked
logging.getLoggerClass().root.handlers[0].baseFilename
I needed to do something similar in a very simple logging environment; the following routine did the trick:
def _find_logger_basefilename(self, logger):
    """Finds the logger base filename(s) currently there is only one
    """
    log_file = None
    parent = logger.__dict__['parent']
    if parent.__class__.__name__ == 'RootLogger':
        # this is where the file name lives
        for h in logger.__dict__['handlers']:
            if h.__class__.__name__ == 'TimedRotatingFileHandler':
                log_file = h.baseFilename
    else:
        log_file = self._find_logger_basefilename(parent)

    return log_file
I was looking for the file used by the TimedRotatingFileHandler; you might need to change the type of handler you search for, probably FileHandler.
Not sure how it would go in any sort of complex logging environment.
Below is simple logic for a single file handler:
>>> import logging
>>> logger = logging.getLogger("test")
>>> handler = logging.FileHandler("testlog.log")
>>> logger.addHandler(handler)
>>> print logger.handlers[0].baseFilename
/home/nav/testlog.log
>>>
logging.config.fileConfig('some.log') is going to try to read logging configuration from some.log.
I don't believe there is a general way to retrieve the destination file -- it isn't always guaranteed to even be going to a file. (It may go to syslog, over the network, etc.)
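That said, when file-based handlers are in play, a best-effort lookup is possible; a hedged sketch (the helper name is mine):

import logging

def file_handler_paths(logger):
    """Best-effort: collect paths from any FileHandler-derived handlers.

    Returns an empty list for purely network/syslog setups.
    """
    return [h.baseFilename for h in logger.handlers
            if isinstance(h, logging.FileHandler)]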
In my case, I initialize a single logger (in my main script) and use it in all my packages by doing locallogger = logging.getLogger(__name__). In this setup, to get the logging file path I had to modify @John's answer as follows:
def find_rootlogger_basefilename():
    """Finds the root logger base filename
    """
    log_file = None
    rootlogger = logging.getLogger('')
    for h in rootlogger.__dict__['handlers']:
        if h.__class__.__name__ == 'FileHandler':
            log_file = h.baseFilename
            break
        elif h.__class__.__name__ == 'TimedRotatingFileHandler':
            log_file = h.baseFilename
            break

    return log_file