Is there a way to retrieve the path of the file a logger is writing to? If logging.config.fileConfig('some.log') is the setter, what's the getter? Just curious if this exists.
For my basic usage of a single log file, this worked:
logging.getLoggerClass().root.handlers[0].baseFilename
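For example (a minimal sketch, assuming the root logger was configured with a single FileHandler via basicConfig):

import logging

logging.basicConfig(filename='app.log')
# baseFilename holds the absolute path of app.log
print(logging.getLoggerClass().root.handlers[0].baseFilename)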
I needed to do something similar in a very simple logging environment; the following routine did the trick:
def _find_logger_basefilename(self, logger):
    """Find the logger's base filename(s); currently there is only one."""
    log_file = None
    parent = logger.__dict__['parent']
    if parent.__class__.__name__ == 'RootLogger':
        # this is where the file name lives
        for h in logger.__dict__['handlers']:
            if h.__class__.__name__ == 'TimedRotatingFileHandler':
                log_file = h.baseFilename
    else:
        log_file = self._find_logger_basefilename(parent)
    return log_file
I was looking for the file used by a TimedRotatingFileHandler; you might need to change the type of handler you search for, probably to FileHandler.
Not sure how it would go in any sort of complex logging environment.
Below is simple logic for a single file handler:
>>> import logging
>>> logger = logging.getLogger("test")
>>> handler = logging.FileHandler("testlog.log")
>>> logger.addHandler(handler)
>>> print(logger.handlers[0].baseFilename)
/home/nav/testlog.log
>>>
logging.config.fileConfig('some.log') is going to try to read logging configuration from some.log.
I don't believe there is a general way to retrieve the destination file -- it isn't always guaranteed to even be going to a file. (It may go to syslog, over the network, etc.)
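That said, a minimal sketch of collecting whatever file destinations do exist (an illustration, assuming you only care about FileHandler and its subclasses) could walk a logger's ancestry:

import logging

def find_log_files(logger):
    """Collect baseFilename from every FileHandler reachable from logger."""
    files = []
    while logger is not None:
        for handler in logger.handlers:
            if isinstance(handler, logging.FileHandler):
                files.append(handler.baseFilename)
        # stop climbing if the logger does not propagate to its ancestors
        logger = logger.parent if logger.propagate else None
    return files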
In my case, I used to initialize a single logger (in my main script) and use it in all my packages by doing locallogger = logging.getLogger(__name__). In this setup, to get the logging file path I had to modify @John's answer as follows:
def find_rootlogger_basefilename():
    """Finds the root logger base filename"""
    log_file = None
    rootlogger = logging.getLogger('')
    for h in rootlogger.__dict__['handlers']:
        if h.__class__.__name__ == 'FileHandler':
            log_file = h.baseFilename
            break
        elif h.__class__.__name__ == 'TimedRotatingFileHandler':
            log_file = h.baseFilename
            break
    return log_file
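Hypothetical usage, assuming the root logger was set up with basicConfig(filename=...):

import logging

logging.basicConfig(filename='app.log')
print(find_rootlogger_basefilename())  # e.g. /path/to/app.log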
Related
I monitor my script with the logging module of the Python Standard Library, and I send the log records both to the console with a StreamHandler and to a file with a FileHandler.
I would like to have the option to disable a handler for a LogRecord independently of its severity. For example, for a specific LogRecord I would like the option not to send it to the file destination or to the console (by passing a parameter).
I have found that the library has the Filter class for that reason (described as a finer-grained way to filter records), but I haven't figured out how to do it.
Any ideas how to do this in a consistent way?
Finally, it is quite easy. I used a function as a Handler.filter, as suggested in the comments.
This is a working example:
from pathlib import Path
import logging
from logging import LogRecord

def build_handler_filters(handler: str):
    def handler_filter(record: LogRecord):
        if hasattr(record, 'block'):
            if record.block == handler:
                return False
        return True
    return handler_filter

ch = logging.StreamHandler()
ch.addFilter(build_handler_filters('console'))
fh = logging.FileHandler(Path('/tmp/test.log'))
fh.addFilter(build_handler_filters('file'))

mylogger = logging.getLogger(__name__)
mylogger.setLevel(logging.DEBUG)
mylogger.addHandler(ch)
mylogger.addHandler(fh)
When the logger is called, the message is sent to both the console and the file, i.e.
mylogger.info('msg')
To block, for example, the file destination, call the logger with an extra argument like this:
mylogger.info('msg only to console', extra={'block': 'file'})
Disabling the console is analogous.
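That is, mirroring the call above:

mylogger.info('msg only to file', extra={'block': 'console'})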
I use the logging module for logging inside an AWS Lambda function with the Python 3.7 runtime.
I would like to perform certain manipulations on log statements before they are flushed to stdout, e.g. wrap the message as JSON and add tracing data, so that they are parseable by a Kibana parser.
I don't want to write my own decorator for that because that won't work for underlying dependencies.
Ideally, it should be something like a configured callback for the logger
so that it would do the following work for me:
import json
import sys

log_statement = {
    'message': 'this is the message',
    'X-B3-TraceId': '76b85f5e32ce7b46',
    'level': 'INFO',
}
sys.stdout.write(json.dumps(log_statement) + '\n')
while still being able to call logger.info('this is the message').
How can I do that?
Answering my own question:
I had to use LoggerAdapter, which is quite a good fit for the purpose of pre-processing log statements:
import logging

class CustomAdapter(logging.LoggerAdapter):
    def process(self, msg, kwargs):
        log_statement = '{"X-B3-TraceId":"%s", "message":"%s"}' % (self.extra['X-B3-TraceId'], msg) + '\n'
        return log_statement, kwargs
See: https://docs.python.org/3/howto/logging-cookbook.html#using-loggeradapters-to-impart-contextual-information
In general, the next step would be just plugging in the adapter like:
import logging
...
logging.basicConfig(format='%(message)s')
logger = logging.getLogger()
logger.setLevel(LOG_LEVEL)
custom_logger = CustomAdapter(logger, {'X-B3-TraceId': "test"})
...
custom_logger.info("test")
Note: I had to set the format to the message only, because I need to get the whole statement as a JSON string. Unfortunately, this way I lost some predefined log statement parts, e.g. aws_request_id. This is a limitation of LoggerAdapter#process, as it handles only the message part. If anyone has a better approach here, please suggest.
It appears that the AWS Lambda Python runtime somehow interferes with the logging facility, and changing the format as above did not work. So I additionally had to do this:
FORMAT = "%(message)s"
logger = logging.getLogger()
for h in logger.handlers:
    h.setFormatter(logging.Formatter(FORMAT))
See: https://gist.github.com/niranjv/fb95e716151642e8ca553b0e38dd152e
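A possible alternative that keeps access to the predefined record attributes (just a sketch, not from the gist above) is a custom Formatter that serializes the whole record to JSON:

import json
import logging

class JsonFormatter(logging.Formatter):
    """Sketch: emit each record as a JSON object."""
    def format(self, record):
        payload = {
            'message': record.getMessage(),
            'level': record.levelname,
            'logger': record.name,
            # present only when the Lambda runtime injects it
            'aws_request_id': getattr(record, 'aws_request_id', None),
        }
        return json.dumps(payload)

for h in logging.getLogger().handlers:
    h.setFormatter(JsonFormatter())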
I have a logger.py file which initialises logging.
import logging

logger = logging.getLogger(__name__)

def logger_init():
    import os
    import inspect
    global logger

    logger.setLevel(logging.DEBUG)

    ch = logging.StreamHandler()
    ch.setLevel(logging.DEBUG)
    logger.addHandler(ch)

    fh = logging.FileHandler(os.getcwd() + os.path.basename(__file__) + ".log")
    fh.setLevel(level=logging.DEBUG)
    logger.addHandler(fh)

    return None

logger_init()
I have another script caller.py that calls the logger.
from logger import *
logger.info("test log")
What happens is a log file called logger.log will be created containing the logged messages.
What I want is the name of this log file to be named after the caller script filename. So, in this case, the created log file should have the name caller.log instead.
I am using Python 3.7.
It is immensely helpful to consolidate logging to one location. I learned this the hard way. It is easier to debug when events are sorted by time and it is thread-safe to log to the same file. There are solutions for multiprocessing logging.
The log format can, then, contain the module name, function name and even line number from where the log call was made. This is invaluable. You can find a list of attributes you can include automatically in a log message here.
Example format:
format='[%(asctime)s] [%(module)s.%(funcName)s] [%(levelname)s] %(message)s'
Example log message
[2019-04-03 12:29:48,351] [caller.work_func] [INFO] Completed task 1.
You can get the filename of the main script from the first item in sys.argv, but if you want to get the caller module not the main script, check the answers on this question.
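For example, a minimal sketch that names the log file after the main script (assuming caller.py is run directly, so sys.argv[0] ends in caller.py):

import logging
import os
import sys

# strip the directory and the .py extension from the main script's path
script_name = os.path.splitext(os.path.basename(sys.argv[0]))[0]
logging.basicConfig(
    filename=script_name + '.log',
    format='[%(asctime)s] [%(module)s.%(funcName)s] [%(levelname)s] %(message)s',
    level=logging.DEBUG,
)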
I have a Python program that runs daily. I'm using the logging module with FileHandler to write logs to a file. I would like each run's logs to be in its own file with a timestamp. However, I want to delete old files (say > 3 months) to avoid filling the disk.
I've looked at the RotatingFileHandler and TimedRotatingFileHandler but I don't want a single run's logs to be split across multiple files, even if a single run were to take days. Is there a built-in method for that?
The logging module has a built-in TimedRotatingFileHandler:
# import modules
import logging
from logging import Formatter
from logging.handlers import TimedRotatingFileHandler

# get named logger
logger = logging.getLogger(__name__)

# create handler
handler = TimedRotatingFileHandler(filename='runtime.log', when='D', interval=1, backupCount=90, encoding='utf-8', delay=False)

# create formatter and add to handler
formatter = Formatter(fmt='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)

# add the handler to named logger
logger.addHandler(handler)

# set the logging level
logger.setLevel(logging.INFO)

# --------------------------------------
# log something
logger.info("test")
Old logs automatically get a timestamp appended.
Every day a new backup will be created.
If more than 91 files (current + backups) exist, the oldest will be deleted.
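Another approach timestamps the filename once per run, so each run gets its own file (RotatingFileHandler then only caps runaway file sizes within a run):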
import logging
import time
from logging.handlers import RotatingFileHandler

# the filename is computed once at startup, so each run gets its own file
logFile = 'test-' + time.strftime("%Y%m%d-%H%M%S") + '.log'

logger = logging.getLogger('my_logger')
handler = RotatingFileHandler(logFile, mode='a', maxBytes=50*1024*1024,
                              backupCount=5, encoding=None, delay=False)
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

for _ in range(10000):
    logger.debug("Hello, world!")
As suggested by @MartijnPieters in this question, you could easily extend the FileHandler class to handle your own deletion logic.
For example, my class will keep only the last backup_count files.
import os
import re
import datetime
import logging
from itertools import islice


class TimedPatternFileHandler(logging.FileHandler):
    """File handler that uses the current time for the log filename,
    by formatting the current datetime, according to filename_pattern,
    using the strftime function.

    If backup_count is non-zero, then older filenames that match the base
    filename are deleted to only leave the backup_count most recent copies,
    whenever opening a new log file with a different name.
    """

    def __init__(self, filename_pattern, mode, backup_count):
        self.filename_pattern = os.path.abspath(filename_pattern)
        self.backup_count = backup_count
        # this assignment goes through the no-op setter below; the property
        # getter always derives the current name from the pattern
        self.filename = datetime.datetime.now().strftime(self.filename_pattern)

        # drop everything beyond the backup_count most recent files
        delete = islice(self._matching_files(), self.backup_count, None)
        for entry in delete:
            os.remove(entry.path)
        super().__init__(filename=self.filename, mode=mode)

    @property
    def filename(self):
        """Generate the 'current' filename to open"""
        # use the start of *this* interval, not the next
        return datetime.datetime.now().strftime(self.filename_pattern)

    @filename.setter
    def filename(self, _):
        pass

    def _matching_files(self):
        """Generate DirEntry entries that match the filename pattern.

        The files are ordered by their last modification time, most recent
        files first.
        """
        matches = []
        basename = os.path.basename(self.filename_pattern)
        # replace strftime placeholders like %H with a wildcard
        pattern = re.compile(re.sub('%[a-zA-Z]', '.*', basename))

        for entry in os.scandir(os.path.dirname(self.filename_pattern)):
            if not entry.is_file():
                continue
            entry_basename = os.path.basename(entry.path)
            if re.match(pattern, entry_basename):
                matches.append(entry)
        matches.sort(key=lambda e: e.stat().st_mtime, reverse=True)
        return iter(matches)


def create_timed_rotating_log(path):
    logger = logging.getLogger("Rotating Log")
    logger.setLevel(logging.INFO)

    handler = TimedPatternFileHandler('{}_%H-%M-%S.log'.format(path), mode='a', backup_count=5)
    logger.addHandler(handler)
    logger.info("This is a test!")
Get the date/time (see this answer on how to get a file's timestamp). If the file is more than 3 months older than the current date, delete it with:
import os
os.remove("filename.extension")
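A minimal sketch of that cleanup (an illustration; the logs/ directory name and the 90-day cutoff are assumptions):

import os
import time

LOG_DIR = 'logs'          # hypothetical log directory
MAX_AGE = 90 * 24 * 3600  # roughly 3 months, in seconds

now = time.time()
for name in os.listdir(LOG_DIR):
    path = os.path.join(LOG_DIR, name)
    # delete plain files whose modification time is older than the cutoff
    if os.path.isfile(path) and now - os.path.getmtime(path) > MAX_AGE:
        os.remove(path)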
Package this script with py2exe, then use any task scheduler to run the job at startup.
Windows: open the Run command, enter shell:startup, and place your exe there.
On OSX: the old way used to be to create a cron job; in my experience this no longer works in many cases, but it is still worth trying. The way now recommended by Apple is CreatingLaunchdJobs. You can also refer to this topic for a more detailed explanation.
What I want:
To use the logging library instead of print statements, everywhere. Sometimes it is nice not to terminate with a newline. Consider this simplified example:
for file in files:
    print('Loading {}'.format(file), end='', flush=True)
    try:
        data = load(file)
        print('\rLoaded {}'.format(file))
    except:
        print('\rFailed loading {}'.format(file))
The obvious way would be to use:
handler = logging.StreamHandler()
handler.terminator = ""
However, I do not want to add a handler to my library, and I do want the default behaviour of my main logger to be to terminate with a new line. Terminating with "" feels like it should be the exception, rather than the rule.
Is there a way that I could do something like:
logger.info(msg, terminator="")
without having to create a lot of subclasses to the logging module?
Is my take on the problem reasonable, or is there a better way of handling this?
I had a similar issue, and this is what I use to get the results I wanted; it seems similar to what you are trying to achieve:
import logging

def getLogger(name, fmt="[%(asctime)s]%(name)s<%(levelname)s>%(message)s",
              terminator='\n'):
    logger = logging.getLogger(name)
    cHandle = logging.StreamHandler()
    cHandle.terminator = terminator
    cHandle.setFormatter(logging.Formatter(fmt=fmt, datefmt="%H:%M:%S"))
    logger.addHandler(cHandle)
    return logger

logger = getLogger(r'\n', terminator='\n')
rlogger = getLogger(r'\r', terminator='\r')
logger.setLevel(logging.DEBUG)
rlogger.setLevel(logging.DEBUG)

logger.info('test0')
logger.info('test1')
logger.info('-----------------------\n')
rlogger.info('test2')
rlogger.info('test3\n\n')
for i in range(100000):
    rlogger.info("%d/%d", i + 1, 100000)
rlogger.info('\n')
Results:
[14:48:00]\n<INFO>test0
[14:48:00]\n<INFO>test1
[14:48:00]\n<INFO>-----------------------
[14:48:00]\r<INFO>test3
[14:48:04]\r<INFO>100000/100000
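A lighter-weight variant (just a sketch, not part of the answer above) temporarily swaps a single handler's terminator for one call instead of keeping two loggers:

import logging

logger = logging.getLogger('progress')
handler = logging.StreamHandler()
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_with_terminator(msg, terminator='\r'):
    # StreamHandler.terminator exists since Python 3.2
    old = handler.terminator
    handler.terminator = terminator
    try:
        logger.info(msg)
    finally:
        handler.terminator = old

log_with_terminator('Loading data...')
logger.info('Loaded data')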