I am trying to set up loggers for my Python code, and I want to read the log level from a configuration file, but I have not been able to do it. The code is given below; notice the line logger.setLevel(logging.INFO). I don't want to hardcode logging.INFO there. Is it possible to get this value from the config file?
import logging
from logging.config import fileConfig
from datetime import date

class Log:

    @staticmethod
    def trace():
        today = date.today()
        # dd/mm/YY
        d1 = today.strftime("%d_%m_%Y")
        # Gets or creates a logger
        logger = logging.getLogger(__name__)
        # set log level
        logger.setLevel(logging.INFO)
        # define file handler and set formatter
        file_handler = logging.FileHandler('log/' + d1 + '_logfile.log')
        formatter = logging.Formatter('%(asctime)s : %(levelname)s : %(name)s : %(message)s')
        file_handler.setFormatter(formatter)
        # add file handler to logger
        logger.addHandler(file_handler)
        console_handler = logging.StreamHandler()
        console_handler.setFormatter(formatter)
        logger.addHandler(console_handler)
        return logger
You can always use Python's built-in configuration file parser (configparser).
Keep the log level in a config file and read that value. Since the value comes in as a string, you can define a dictionary in your code that maps it to the logging constants. See below for an example.
import configparser
import logging

config = configparser.ConfigParser()
config.read('configfile')

log_level_info = {'logging.DEBUG': logging.DEBUG,
                  'logging.INFO': logging.INFO,
                  'logging.WARNING': logging.WARNING,
                  'logging.ERROR': logging.ERROR,
                  }

print(config['DEFAULT']['LOG_LEVEL'])
my_log_level_from_config = config['DEFAULT']['LOG_LEVEL']
my_log_level = log_level_info.get(my_log_level_from_config, logging.ERROR)
# 'logger' is the logger object created in your Log class
logger.setLevel(my_log_level)
Your config file would look like this:
user@Inspiron:~/code/advanced_python$ cat configfile
[DEFAULT]
LOG_LEVEL = logging.INFO
user@Inspiron:~/code/advanced_python$
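As a side note, a minimal sketch assuming Python 3.2+ and a config file that stores the bare level name (e.g. LOG_LEVEL = INFO rather than logging.INFO): setLevel() also accepts the level name as a string, so the mapping dictionary can be dropped entirely.

import configparser
import logging

config = configparser.ConfigParser()
config.read('configfile')   # assumed to contain LOG_LEVEL = INFO

logger = logging.getLogger(__name__)
# setLevel() accepts the level name directly ('DEBUG', 'INFO', ...);
# it raises ValueError for names it does not recognise.
logger.setLevel(config['DEFAULT']['LOG_LEVEL'].upper())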
If I understood correctly, you need a way to set your logging level at runtime instead of using a hard-coded value. I would say you have two options.
The first is to parse your configuration file and set the logging level accordingly. If you don't want to parse it every time the Log class is invoked, you can read it once in your main and pass the resulting value to the Log class.
The second, which I would suggest, is to configure handlers and levels through logging.config: https://docs.python.org/3/library/logging.config.html
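For the second option, a minimal sketch of what that could look like (the file name logging.ini and all section contents here are illustrative assumptions, not taken from the question):

; logging.ini
[loggers]
keys=root

[handlers]
keys=fileHandler

[formatters]
keys=simple

[logger_root]
level=INFO
handlers=fileHandler

[handler_fileHandler]
class=FileHandler
level=INFO
formatter=simple
args=('app.log',)

[formatter_simple]
format=%(asctime)s : %(levelname)s : %(name)s : %(message)s

With that file in place, the level no longer appears anywhere in the code:

import logging
from logging.config import fileConfig

fileConfig('logging.ini')
logger = logging.getLogger(__name__)
logger.info('level and handlers now come from logging.ini')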
Logging levels such as logging.INFO are just integer values, so you can also store the numbers in your config file and pass them to setLevel:
print(logging.INFO)
print(logging.WARN)
print(logging.DEBUG)
print(logging.ERROR)
20
30
10
40
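A minimal sketch of that idea, assuming a configparser-style file whose DEFAULT section contains a numeric entry such as LOG_LEVEL = 20 (the file layout is my assumption, not from the answer):

import configparser
import logging

config = configparser.ConfigParser()
config.read('configfile')   # assumed to contain LOG_LEVEL = 20

logger = logging.getLogger(__name__)
# getint() converts the stored string '20' to the integer 20, i.e. logging.INFO
logger.setLevel(config['DEFAULT'].getint('LOG_LEVEL'))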
Related
I am trying to configure two loggers: one logger for the INFO level and another for the DEBUG level. I would like DEBUG content to go only to my log file, and INFO content to go to both a log file and the console. Please see the code below. Nothing is being written to my files and nothing is displayed in the console.
import os
import logging
import logging.handlers
from logging import Formatter

logFileDir = os.path.join(os.getcwd(), '.logs')
if not os.path.exists(logFileDir):
    os.mkdir(logFileDir)

infoLogFileDir = os.path.join(logFileDir, 'INFO')
if not os.path.exists(infoLogFileDir):
    os.mkdir(infoLogFileDir)

debugLogFileDir = os.path.join(logFileDir, 'DEBUG')
if not os.path.exists(debugLogFileDir):
    os.mkdir(debugLogFileDir)

LOG_FORMAT = ("%(asctime)s [%(levelname)s]: %(message)s in %(pathname)s:%(lineno)d")

# DEBUG LOGGER
debugLogFileName = os.path.join(debugLogFileDir, 'EFDebugLog.log')
debugLogger = logging.getLogger("debugLogger")
debugLogger.setLevel(logging.DEBUG)
debugHandler = logging.handlers.RotatingFileHandler(filename=debugLogFileName, maxBytes=5000000, backupCount=100)
debugHandler.setLevel(logging.DEBUG)
debugHandler.setFormatter(Formatter(LOG_FORMAT))
debugLogger.addHandler(debugHandler)

# INFO LOGGER
infoLogFileName = os.path.join(infoLogFileDir, 'EFInfoLog.log')
infoLogger = logging.getLogger("infoLogger")
infoLogger.setLevel(logging.INFO)
infoHandler = logging.handlers.RotatingFileHandler(filename=infoLogFileName, maxBytes=5000000, backupCount=100)
infoHandler.setLevel(logging.INFO)
infoHandler.setFormatter(Formatter(LOG_FORMAT))
infoLogger.addHandler(infoHandler)
infoLogger.addHandler(logging.StreamHandler())
The logging.* functions that you are calling log to the root logger. That's why you don't see any output; you haven't configured any handlers for the root logger. You have only configured handlers for your own loggers, which you are not using.
If you want to use the logging.* functions, you need to first configure the root logger, which you can get by calling getLogger without any arguments. So the code might look like:
import logging
import logging.handlers
root_logger = logging.getLogger()
info_handler = logging.handlers.RotatingFileHandler(filename='infolog.txt')
info_handler.setLevel(logging.INFO)
stream_handler = logging.StreamHandler()
stream_handler.setLevel(logging.INFO)
debug_handler = logging.handlers.RotatingFileHandler(filename='debuglog.txt')
debug_handler.setLevel(logging.DEBUG)
root_logger.addHandler(stream_handler)
root_logger.addHandler(debug_handler)
root_logger.addHandler(info_handler)
# this is needed, since the severity is WARNING by default,
# i.e. it would not log any debug messages
root_logger.setLevel(logging.DEBUG)
root_logger.debug('this is a debug message')
root_logger.info('this is an info message')
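Once the root logger is configured this way, the module-level logging.* calls mentioned above go through the same handlers. A small usage sketch (not part of the original answer, and assuming the configuration code above has already run in the same process):

import logging

logging.debug('goes to debuglog.txt only, since the other handlers filter at INFO')
logging.info('goes to the console and to both files')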
I would like to generate a new log file on each iteration of a loop in Python using the logging module. I am analysing data in a for loop, where each iteration of the loop contains information on a new object. I would like to generate a log file per object.
I looked at the docs for the logging module: there is the capability to rotate the log file on time intervals or when it fills up, but I cannot see how to iteratively generate a new log file with a new name. I know ahead of time how many objects are in the loop.
My imagined pseudo code would be:
import logging

for target in targets:
    logfile_name = f"{target}.log"
    logging.basicConfig(format='%(asctime)s - %(levelname)s : %(message)s',
                        datefmt='%Y-%m/%dT%H:%M:%S',
                        filename=logfile_name,
                        level=logging.DEBUG)
    # analyse target information
    logging.info('log target info...')
However, the logging information is always appended to the first log file, the one for target 1.
Is there a way to force a new log file at the beginning of each loop?
Rather than using logging directly, you need to use logger objects. Go through the docs here.
Create a new logger object as the first statement in the loop. Below is a working solution.
import logging
import sys

def my_custom_logger(logger_name, level=logging.DEBUG):
    """
    Method to return a custom logger with the given name and level
    """
    logger = logging.getLogger(logger_name)
    logger.setLevel(level)
    format_string = ("%(asctime)s — %(name)s — %(levelname)s — %(funcName)s:"
                     "%(lineno)d — %(message)s")
    log_format = logging.Formatter(format_string)
    # Creating and adding the console handler
    console_handler = logging.StreamHandler(sys.stdout)
    console_handler.setFormatter(log_format)
    logger.addHandler(console_handler)
    # Creating and adding the file handler
    file_handler = logging.FileHandler(logger_name, mode='a')
    file_handler.setFormatter(log_format)
    logger.addHandler(file_handler)
    return logger


if __name__ == "__main__":
    for item in range(10):
        logger = my_custom_logger(f"Logger{item}")
        logger.debug(item)
This writes to a different log file for each iteration.
This might not be the best solution, but it will create a new log file for each iteration by adding a new file handler on every pass through the loop.
import logging

targets = ["a", "b", "c"]

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

for target in targets:
    log_file = "{}.log".format(target)
    log_format = "|%(levelname)s| : [%(filename)s]--[%(funcName)s] : %(message)s"
    formatter = logging.Formatter(log_format)

    # create file handler and set the formatter
    file_handler = logging.FileHandler(log_file)
    file_handler.setFormatter(formatter)

    # add handler to the logger
    logger.addHandler(file_handler)

    # sample message
    logger.info("Log file: {}".format(target))
This is not necessarily the best answer, but it worked for my case, and I just wanted to put it here for future reference. I created a function that looks as follows:
import logging

def logger(filename, level=None, format=None):
    """A wrapper to the logging python module

    This module is useful for cases where we need to log in a for loop
    different files. It also will allow more flexibility later on how the
    logging format could evolve.

    Parameters
    ----------
    filename : str
        Name of logfile.
    level : str, optional
        Level of logging messages, by default 'info'. Supported are: 'info'
        and 'debug'.
    format : str, optional
        Format of logging messages, by default '%(message)s'.

    Returns
    -------
    logger
        A logger object.
    """
    levels = {"info": logging.INFO, "debug": logging.DEBUG}
    if level is None:
        level = levels["info"]
    else:
        level = levels[level.lower()]

    if format is None:
        format = "%(message)s"

    # https://stackoverflow.com/a/12158233/1995261
    for handler in logging.root.handlers[:]:
        logging.root.removeHandler(handler)

    # basicConfig() itself returns None, so return the (re)configured root logger
    logging.basicConfig(filename=filename, level=level, format=format)
    return logging.getLogger()
As you can see (you might need to scroll down the code above to see the return statement), I am using logging.basicConfig(). All modules in my package that log anything have the following at the beginning of the file:
import logging
# ... other imports ...

logger = logging.getLogger()


class SomeClass(object):

    def some_method(self):
        logger.info("Whatever")
        # ... more stuff ...
When running a loop, I call things this way:
if __name__ == "__main__":
    for i in range(1, 11, 1):
        directory = "_{}".format(i)
        if not os.path.exists(directory):
            os.makedirs(directory)
        filename = directory + "/training.log"
        logger(filename=filename)
I hope this is helpful.
I'd like to slightly modify @0Nicholas's method. The direction is right, but the first FileHandler will keep logging to the first log file for as long as the function runs. Therefore, we want to pop the handler off the logger's handlers list at the end of each iteration:
import logging

targets = ["a", "b", "c"]

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
log_format = "|%(levelname)s| : [%(filename)s]--[%(funcName)s] : %(message)s"
formatter = logging.Formatter(log_format)

for target in targets:
    log_file = f"{target}.log"

    # create file handler and set the formatter
    file_handler = logging.FileHandler(log_file)
    file_handler.setFormatter(formatter)

    # add handler to the logger
    logger.addHandler(file_handler)

    # sample message
    logger.info(f"Log file: {target}")

    # close the log file
    file_handler.close()

    # remove the handler from the logger. The default behavior is to pop out
    # the last added one, which is the file_handler we just added in the
    # beginning of this iteration.
    logger.handlers.pop()
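A small variation on the same idea (my own sketch, not part of the answer above): logger.removeHandler() detaches exactly the handler you added, instead of relying on pop() removing the last one.

import logging

targets = ["a", "b", "c"]
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
formatter = logging.Formatter("|%(levelname)s| : [%(filename)s]--[%(funcName)s] : %(message)s")

for target in targets:
    file_handler = logging.FileHandler(f"{target}.log")
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)

    logger.info(f"Log file: {target}")

    # detach and close this iteration's handler explicitly
    logger.removeHandler(file_handler)
    file_handler.close()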
Here is a working version for this problem. I was only able to get it to work if the targets already have a .log extension before entering the loop, so you may want to add an extra pass beforehand that renames every target to have the .log extension.
import logging

targets = ["a.log", "b.log", "c.log"]

for target in targets:
    log = logging.getLogger(target)
    # without this the default WARNING level would drop the info() call below
    log.setLevel(logging.INFO)
    formatter = logging.Formatter('%(asctime)s - %(levelname)s : %(message)s', datefmt='%Y-%m/%dT%H:%M:%S')
    fileHandler = logging.FileHandler(target, mode='a')
    fileHandler.setFormatter(formatter)
    streamHandler = logging.StreamHandler()
    streamHandler.setFormatter(formatter)

    log.addHandler(fileHandler)
    log.addHandler(streamHandler)
    log.info('log target info...')
I am trying to write multiple log files into a Logs folder, but instead of having to change the code each time the program starts, I want the log file's name to be "Log(the time).log". I'm using a logger at the moment, but I can switch. I've also imported time.
Edit: Here is some of the code I am using:
import logging
logger = logging.getLogger('k')
hdlr = logging.FileHandler('Path to the log file/log.log')
formatter = logging.Formatter('At %(asctime)s, KPY returned %(message)s at level %(levelname)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logger.setLevel(logging.DEBUG)
logger.info('hello')
import logging
import time
fname = "Log({the_time}).log".format(the_time=time.time())
logging.basicConfig(level=logging.DEBUG, filename=fname)
logging.info('hello')
You should do it when you set the FileHandler for the logging object. Use datetime instead of time so that you can include the date for each instance of the log in order to differentiate logs on different days at the same time.
fh = logging.FileHandler("Log"+str(datetime.datetime.now())+'.log')
fh.setLevel(logging.DEBUG)
I got help from another website.
You have to change the hdlr to:
# replace <FOLDER LOCATION> with your own path
hdlr = logging.FileHandler('<FOLDER LOCATION>/Logs/log{}.log'.format(datetime.datetime.strftime(datetime.datetime.now(), '%Y%m%d%H%M%S_%f')))
I have two log handlers in my code: a StreamHandler to write INFO level logs from the same module to stdout, and a FileHandler to write more verbose, DEBUG logs to a file. This is my code:
import sys
import logging
log = logging.getLogger('mymodule')
log.setLevel(logging.DEBUG)
logf = logging.FileHandler('file.log')
logf.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
log.addHandler(logf)
logs = logging.StreamHandler(sys.stdout)
logs.setLevel(logging.INFO)
logs.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
log.addHandler(logs)
However, I want the FileHandler to also write DEBUG info from other modules. I can achieve this if I remove the name from logging.getLogger(), but this will also affect my StreamHandler, which I only want to print output from my own module.
So is there a way to have either of the handlers use a different name?
There's a name attribute available on the base Handler class. I took a look at the Python source, and it doesn't look like it's used internally, so you could manually give each handler a different name.
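A minimal sketch of what that could look like with the handlers from the question; set_name() and get_name() are the public accessors for Handler.name, and the names 'file' and 'console' are made up for illustration:

import sys
import logging

log = logging.getLogger('mymodule')

logf = logging.FileHandler('file.log')
logf.set_name('file')          # tag the file handler

logs = logging.StreamHandler(sys.stdout)
logs.set_name('console')       # tag the stream handler

log.addHandler(logf)
log.addHandler(logs)

# later, pick a specific handler back out by its name
for handler in log.handlers:
    if handler.get_name() == 'console':
        handler.setLevel(logging.INFO)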
I want to log to 2 different files, but different things.
I was trying this:
import logging
import uuid

runid = str(uuid.uuid1())
logger = logging.getLogger('logger1')
other_logger = logging.getLogger('logger2')
logger.setLevel(logging.DEBUG)
debug_handler = logging.FileHandler('log/debug.log')
info_handler = logging.FileHandler('log/info.log')
other_handler = logging.FileHandler('log/other_info.log')
debug_handler.setLevel(logging.DEBUG)
info_handler.setLevel(logging.INFO)
other_handler.setLevel(logging.INFO)
formatter = logging.Formatter(runid + ' - %(asctime)s - %(levelname)s - %(funcName)s - %(message)s')
debug_handler.setFormatter(formatter)
info_handler.setFormatter(formatter)
other_handler.setFormatter(formatter)
logger.addHandler(debug_handler)
logger.addHandler(info_handler)
other_logger.addHandler(other_handler)
logger.info('message1')
other_logger.info('message2')
But logger and other_logger behave as one: I get both messages in all the files, no matter whether I log through logger or other_logger.
According to the docs:
"Loggers have the following attributes and methods. Note that Loggers are never instantiated directly, but always through the module-level function logging.getLogger(name). Multiple calls to getLogger() with the same name will always return a reference to the same Logger object."
But the parent object is always the same, as in this small test:
import logging

log1 = logging.getLogger('hey')
log2 = logging.getLogger('you')
print(log1.parent, log2.parent)

enrique@enrique-mbp:$ python /tmp/test.py
<logging.RootLogger object at 0x26f0810> <logging.RootLogger object at 0x26f0810>
How can I solve this?
The problem is here:
logger.setLevel(logging.DEBUG)
You need to set the level for other_logger as well:
logger.setLevel(logging.DEBUG)
other_logger.setLevel(logging.DEBUG) # or INFO because that is the lowest level being used by a handler
Without this, other_logger remains at the default logging level, logging.WARNING, which prevents the other_logger.info('message2') line from logging anything.