Python log time when code kicks off and ends

I am writing a logger module to capture a few details of my scripts, and one of the things I want to capture is the time the main code started and the time it ended. I want to incorporate this into the logging module so that I don't have to write those lines in each of my scripts; that way, after I initialize my logger object, I can just call the get_time function at the beginning and end of main. I am practicing OOP, which I am new to. Is incorporating a function within the class CustomLogger (as in def get_time() below) the right way to do it, or am I doing it wrong?
import inspect
import logging
from datetime import datetime
from pathlib import Path

class CustomLogger:
    def __init__(self, loglevel=logging.INFO):
        '''Log config.

        Parameters:
            loglevel (str): The default level to log. Valid values are 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'. Defaults to 'INFO'.
        '''
        self.loglevel = loglevel

    def get_time(self):
        return datetime.now().strftime("%d/%m/%Y %H:%M:%S")

    def custom_logger(self):
        # inspect.stack()[1][3] gets the name of the module that calls the logger,
        # in case we want multiple log calls from the ETL
        logger = logging.getLogger(inspect.stack()[1][3])
        logger.setLevel(self.loglevel)
        workstation_logs = Path('path_to_write/workstation_logger.log')
        log_message_format = logging.Formatter('%(asctime)s %(filename)s %(name)s - %(levelname)s: %(message)s',
                                               datefmt='%Y-%m-%d %H:%M:%S')
        file_handler = logging.FileHandler(workstation_logs)
        file_handler.setFormatter(log_message_format)
        logger.addHandler(file_handler)
        return logger
    # end logger initiation
#end logger initiation
So my main code will look like this:
log = CustomLogger()
start = log.get_time()
# does a bunch of things
end = log.get_time()
log.custom_logger().info(f'started_at: {start}\t ended_at: {end}')
Expected output in workstation_logger.log:
2022-03-21 09:00:52 1608643300.py - INFO: Started at 3/21/2022 9:00:00 Ended at 3/21/2022 9:02:56
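A minimal sketch of one way to take this idea further, so the start/end calls don't have to be repeated in every script (the subclass and method names below are illustrative, not from the original post):

from contextlib import contextmanager

class TimedCustomLogger(CustomLogger):
    @contextmanager
    def timed(self):
        '''Log the start and end times around a block of code.'''
        start = self.get_time()
        try:
            yield
        finally:
            end = self.get_time()
            # note: inspect.stack() in custom_logger() will now see 'timed' as the caller
            self.custom_logger().info(f'started_at: {start}\t ended_at: {end}')

Usage then shrinks to:

log = TimedCustomLogger()
with log.timed():
    pass  # does a bunch of things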

Related

Dynamically change level of python logging

In my project, I have set up multiple pipelines (~20). I want to implement logging for each of these pipelines and redirect each one to a different file.
I have created a class GenericLogger as below:
import logging
import logging.handlers

class GenericLogger(object):
    def __init__(self, pipeline):
        self.name = pipeline

    def get_logger(self):
        logger = logging.getLogger(self.name)
        log_file = "{0}.log".format(self.name)
        console_handler = logging.StreamHandler()
        # LOGS_FILE_SIZE is a constant defined elsewhere in the project
        file_handler = logging.handlers.RotatingFileHandler(log_file, maxBytes=LOGS_FILE_SIZE, backupCount=3)
        file_format = logging.Formatter('%(asctime)s: %(levelname)s: %(name)s: %(message)s', datefmt="%Y-%m-%d %H:%M:%S")
        console_format = logging.Formatter('%(asctime)s: %(levelname)s: %(name)s: %(message)s', datefmt="%Y-%m-%d %H:%M:%S")
        console_handler.setFormatter(console_format)
        file_handler.setFormatter(file_format)
        logger.addHandler(file_handler)
        logger.addHandler(console_handler)
        logger.setLevel(logging.INFO)
        return logger
I am importing this class in my pipeline, getting the logger, and using it as below:
logger_helper = GenericLogger('pipeline_name')
logger = logger_helper.get_logger()
logger.warning("Something happened")
Flow of pipeline:
Once triggered, the pipelines run continuously at an interval of T minutes. Currently, to avoid handlers piling up on the logger after each complete execution, I am using logger.handlers = [] and then creating a new logger instance on the next iteration.
Questions:
1) How can I dynamically change the log level for each pipeline separately? If I am using logging.ini, is creating static handlers/formatters for each pipeline necessary, or is there something I can do dynamically? I don't know much about this.
2) Is the above implementation of the logger correct, or is creating a class for the logger something that should not be done?
To answer your two points:
You could have a mapping between pipeline name and level, and after creating the logger for a pipeline, you can set its level appropriately.
You don't need multiple console handlers, do you? I'd just create one console handler and attach it to the root logger. Likewise, you don't need to create multiple identical file and console formatters - just make one of each. In fact, since they are all apparently identical format strings, you just need one formatter instance. Avoid creating a class like GenericLogger.
Thus, something like:
formatter = logging.Formatter(...)
console_handler = logging.StreamHandler()
console_handler.setFormatter(formatter)
logging.getLogger().addHandler(console_handler)

for name, level in name_to_level.items():
    logger = logging.getLogger(name)
    logger.setLevel(level)
    file_handler = logging.handlers.RotatingFileHandler(...)
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
should do what you need.
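For example, the mapping could be as simple as a dict from pipeline name to level (the names and levels here are illustrative):

name_to_level = {
    'pipeline_a': logging.DEBUG,
    'pipeline_b': logging.INFO,
    'pipeline_c': logging.WARNING,
}

Each pipeline then fetches its logger with logging.getLogger('pipeline_a') (or its own name) wherever it needs to log - no wrapper class required.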

Two loggers for two separate python files

I have two files, entrypoint.py and op_helper.py, and I am trying to send each script's logs to a different log file (webhook.log & op.log). I set up my logger.py file with two different log classes.
import logging
from datetime import datetime
from logging.handlers import TimedRotatingFileHandler

class Logger:
    def create_timed_rotating_log(self, path):
        logger = logging.getLogger("Rotating Log")
        logger.setLevel(logging.INFO)
        handler = TimedRotatingFileHandler(path,
                                           when="d",
                                           interval=1,
                                           backupCount=7)
        formatter = logging.Formatter(fmt='%(asctime)s %(levelname)-8s %(message)s',
                                      datefmt='%Y-%m-%d %H:%M:%S')
        handler.setFormatter(formatter)
        logger.addHandler(handler)
        return logger

class WebhookLogger:
    def create_timed_rotating_log(self, path):
        logger = logging.getLogger("Rotating Log")
        logger.setLevel(logging.INFO)
        handler = TimedRotatingFileHandler(path,
                                           when="d",
                                           interval=1,
                                           backupCount=7)
        formatter = logging.Formatter(fmt='%(asctime)s %(levelname)-8s %(message)s',
                                      datefmt='%Y-%m-%d %H:%M:%S')
        handler.setFormatter(formatter)
        logger.addHandler(handler)
        return logger

today = datetime.today()
month = today.strftime("%B")
logger = Logger().create_timed_rotating_log(f'./{month + str(today.year)}Logger.log')
webhook_logger = WebhookLogger().create_timed_rotating_log(f'./{month + str(today.year)}WebhookLogger.log')
In my entrypoint.py script:
from logger import webhook_logger
webhook_logger.info("Something to log")
And in my op_helper.py script:
from logger import logger
logger.info("Something else to log")
But when I run the script, both log statements are logged to both log files.
2021-10-15 14:17:51 INFO Something to log
2021-10-15 14:17:51 INFO Something else to log
Can anyone explain to me what's going on here, and possibly, what I'm doing incorrectly?
Thank you in advance!
Here is an excerpt from the documentation for logging (note especially the second paragraph):
logging.getLogger(name=None)
Return a logger with the specified name or, if name is None, return a logger which is the root logger of the hierarchy. If specified, the name is typically a dot-separated hierarchical name like ‘a’, ‘a.b’ or ‘a.b.c.d’. Choice of these names is entirely up to the developer who is using logging.
All calls to this function with a given name return the same logger instance. This means that logger instances never need to be passed between different parts of an application.
...
The solution, therefore, is to assign a different name to your second logger.
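For example, logger.py could route both loggers through one helper that takes a distinct name per logger (a sketch; the names "op" and "webhook" are illustrative):

import logging
from logging.handlers import TimedRotatingFileHandler

def create_timed_rotating_log(name, path):
    logger = logging.getLogger(name)  # distinct names yield distinct logger instances
    logger.setLevel(logging.INFO)
    handler = TimedRotatingFileHandler(path, when="d", interval=1, backupCount=7)
    handler.setFormatter(logging.Formatter(fmt='%(asctime)s %(levelname)-8s %(message)s',
                                           datefmt='%Y-%m-%d %H:%M:%S'))
    logger.addHandler(handler)
    return logger

logger = create_timed_rotating_log("op", "./op.log")                     # for op_helper.py
webhook_logger = create_timed_rotating_log("webhook", "./webhook.log")   # for entrypoint.py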
EDIT:
Keep in mind, however, that calling getLogger either creates a new instance (if one under the given name doesn't exist) or returns an already existing instance, so every subsequent instruction only modifies an existing logger. If your intention is to use your classes to create multiple instances of one logger type, that approach will not work. Right now, both classes do exactly the same thing, so there's not really a need for two separate classes either. As you can see, logging doesn't lend itself well to an object-oriented approach, because the objects are already instantiated elsewhere and can be accessed as "global" objects. But this is all just a side note.

Can I delay the rotation of the logfile when using the Twisted logger?

When using the logging module to create rotating log files, I can tell the logger to delay the rotation until there is actual data to be logged by using the delay=True argument of the TimedRotatingFileHandler class like this:
import time
import logging
from logging.handlers import TimedRotatingFileHandler

if __name__ == '__main__':
    handler = TimedRotatingFileHandler('logfile.log', when='midnight', delay=True)
    out_fmt = '[%(asctime)s.%(msecs)03dZ] [%(levelname)s] %(message)s'
    dt_fmt = '%Y-%m-%d %H:%M:%S'
    logging.Formatter.converter = time.gmtime
    formatter = logging.Formatter(out_fmt, dt_fmt)
    handler.setFormatter(formatter)
    root = logging.getLogger()
    root.setLevel(logging.DEBUG)
    root.addHandler(handler)
This is useful when new information is rarely written to the log - for instance, if a whole day passes without anything being logged, you don't want to create an empty log file for that day.
Is it possible to achieve the same effect when using the Twisted logger (twisted.python.logfile.DailyLogFile)?
You can achieve the desired behavior simply by overriding the shouldRotate method of the DailyLogFile class.
Something like below should do the trick:
from twisted.python.logfile import DailyLogFile, LogFile

class CustomDailyLogFile(LogFile, DailyLogFile):
    def shouldRotate(self):
        # rotateLength and size come from LogFile; toDate() and lastDate from DailyLogFile
        return (self.toDate() > self.lastDate
                and self.rotateLength
                and self.size >= self.rotateLength)
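One caveat with the multiple inheritance above, based on the implementation of twisted.python.logfile: both parents initialise state inside _openFile (LogFile sets size, DailyLogFile sets lastDate), and with this MRO only LogFile's version runs, so lastDate may never be set. A defensive sketch that mirrors DailyLogFile's own _openFile:

import os

class CustomDailyLogFile(LogFile, DailyLogFile):
    def _openFile(self):
        LogFile._openFile(self)  # opens the file and sets self.size
        # mirror DailyLogFile._openFile, which records the file's mtime as lastDate
        self.lastDate = self.toDate(os.stat(self.path)[8])

    def shouldRotate(self):
        return (self.toDate() > self.lastDate
                and self.rotateLength
                and self.size >= self.rotateLength)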

Python logging creating extra log files

I am trying to use logging to create log files for a program. I'm doing something like this:
import logging
import os
import time

if not os.path.exists(r'.\logs'):
    os.mkdir(r'.\logs')

logging.basicConfig(filename = rf'.\logs\log_{time.ctime().replace(":", "-").replace(" ", "_")}.log',
                    format = '%(asctime)s %(name)s %(levelname)s %(message)s',
                    level = logging.DEBUG)

def foo():
    # do stuff ...
    logging.debug('Done some stuff')
    # do extra stuff ...
    logging.debug('Did extra stuff')
    # some parallel map that does NOT use logging in the mapping function
    logging.debug('Done mapping')

if __name__ == '__main__':
    foo()
All goes well and the log is created with the correct information in it:
logs
log_Wed_Feb_14_09-23-32_2018.log
Except that, for some reason, it also creates two additional log files and leaves them empty:
logs
log_Wed_Feb_14_09-23-32_2018.log
log_Wed_Feb_14_09-23-35_2018.log
log_Wed_Feb_14_09-23-39_2018.log
The timestamps are only a few seconds apart, but all of the logging still only goes in the first log file as it should.
Why is it doing this? Also, is there a way to stop it from creating the extra empty files, aside from just deleting any empty logs at the end of the program?
Solved. Kind of.
The behaviour using basicConfig kept happening, so I tried to make a custom logger class:
import logging
import os
import time

class Logger:
    """Class used to encapsulate logging logic."""

    __slots__ = ['dir',
                 'level',
                 'formatter',
                 'handler',
                 'logger']

    def __init__(self,
                 name: str = '',
                 logdir: str = r'.\logs',
                 lvl: int = logging.INFO,
                 fmt: str = '%(asctime)s %(name)s %(levelname)s %(message)s',
                 hdl: str = rf'.\logs\log_{time.ctime().replace(":", "-").replace(" ", "_")}.log'):
        print('construct')
        if not os.path.exists(logdir):
            os.mkdir(logdir)
        self.dir = logdir
        self.level = lvl
        self.formatter = logging.Formatter(fmt = fmt)
        self.handler = logging.FileHandler(filename = hdl)
        self.handler.setFormatter(self.formatter)
        self.logger = logging.getLogger(name)
        self.logger.setLevel(self.level)
        self.logger.addHandler(self.handler)

    def log(self, msg: str):
        """Logs the given message to the set level of the logger."""
        self.logger.log(self.level, msg)

    def cleanup(self):
        """Iterates through the root level of the log folder, removing all log files that have a size of 0."""
        for log_file in (rf'{self.dir}\{log}' for log in next(os.walk(self.dir))[2]
                         if log.endswith('.log') and os.path.getsize(rf'{self.dir}\{log}') == 0):
            os.remove(log_file)

    def shutdown(self):
        """Prepares and executes the shutdown and cleanup actions."""
        logging.shutdown()
        self.handler.close()
        self.cleanup()
And I tried to pass it as a parameter to functions like this:
def foo(logger = Logger('foo_logger')):
But this approach made it construct a whole new logger each time I called the log method, which again led to multiple files. By using one instance of Logger and defaulting the arguments to None, I solved the problem of multiple files for this case.
However, the initial basicConfig situation remains a mystery.
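A minimal sketch of that fix (the names are illustrative): build the Logger once at module level and default the parameter to None, so a new logger is never constructed as part of the function signature.

_default_logger = Logger('foo_logger')

def foo(logger=None):
    if logger is None:
        logger = _default_logger
    logger.log('Done some stuff')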

Suspend the formatting of the logger, then go back to it

I have a logging configuration where I log to a file and to the console:
logging.basicConfig(filename=logfile, filemode='w',
                    level=numlevel,
                    format='%(asctime)s - %(levelname)s - %(name)s:%(funcName)s - %(message)s')

# add console messages
console = logging.StreamHandler()
console.setLevel(logging.INFO)
consoleformatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
console.setFormatter(consoleformatter)
logging.getLogger('').addHandler(console)
At some point in my script, I need to interact with the user by printing a summary and asking for confirmation. The summary is currently produced by prints in a loop. I would like to suspend the current format of the console logs so that I can print out one big block of text with a question at the end and wait for user input. But I still want all of this to be logged to file!
The function that does this is in a module, where I tried the following:

logger = logging.getLogger(__name__)

def summaryfunc():
    logger.info('normal logging business')
    clearformatter = logging.Formatter('%(message)s')
    logger.setFormatter(clearformatter)
    logger.info('\n##########################################')
    logger.info('Summary starts here')
Which yields the error: AttributeError: 'Logger' object has no attribute 'setFormatter'
I understand that a logger is a logger, not a handler, but I'm not sure how to get things to work...
EDIT:
Following the answers, my problem turned into: how can I suspend logging to the console while interacting with the user, while still being able to log to file, i.e. suspend only the StreamHandler? Since this is happening in a module, the specifics of the handlers are defined elsewhere, so here is how I did it:
logger.debug('Normal logging to file and console')
root_logger = logging.getLogger()
stream_handler = root_logger.handlers[1]
root_logger.removeHandler(stream_handler)
print('User interaction')
logger.info('Logging to file only')
root_logger.addHandler(stream_handler)
logger.info('Back to logging to both file and console')
This relies on the StreamHandler always being second in the list returned by handlers, but I believe this is the case because the list reflects the order in which I added the handlers to the root logger...
I agree with Vinay that you should use print for normal program output and only use logging for logging purposes. However, if you still want to switch formatting in the middle, then switch back, here is how to do it:
import logging

def summarize():
    console_handler.setFormatter(logging.Formatter('%(message)s'))
    logger.info('Here is my report')
    console_handler.setFormatter(console_formatter)

numlevel = logging.DEBUG
logfile = 's2.log'

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

console_handler = logging.StreamHandler()
console_formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
console_handler.setFormatter(console_formatter)
logger.addHandler(console_handler)

file_handler = logging.FileHandler(filename=logfile, mode='w')
file_formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(name)s:%(funcName)s - %(message)s')
file_handler.setFormatter(file_formatter)
logger.addHandler(file_handler)

logger.info('Before summary')
summarize()
logger.info('After summary')
Discussion
The script creates a logger object and assigns two handlers to it: one for the console and one for the file.
In the function summarize(), I switch in a new formatter for the console handler, do some logging, then switch back.
Again, let me remind you that you should not use logging to display normal program output.
Update
If you want to suppress console logging and then turn it back on, here is a suggestion:
def interact():
    # Remove the console handler
    for handler in logger.handlers:
        if not isinstance(handler, logging.FileHandler):
            saved_handler = handler
            logger.removeHandler(handler)
            break
    # Interact
    logger.info('to file only')
    # Add the console handler back
    logger.addHandler(saved_handler)
Note that I did not test the handlers against logging.StreamHandler since a logging.FileHandler is derived from logging.StreamHandler. Therefore, I removed those handlers that are not FileHandler. Before removing, I saved that handler for later restoration.
Update 2: logger names
In the main script, if you have:
logger = logging.getLogger(__name__) # __name__ == '__main__'
Then in a module, you do:
logger = logging.getLogger(__name__) # __name__ == module's name, not '__main__'
The problem is, in the script __name__ == '__main__', while in the module __name__ == <the module's name>, not '__main__'. In order to achieve consistency, you will need to make up a name and use it in both places:
logger = logging.getLogger('MyScript')
Logging shouldn't be used to provide the actual output of your program - a program is supposed to run the same way if logging is turned off completely. So I would suggest that it's better to do what you were doing before, i.e. prints in a loop.
