I am trying to learn about logging. My code logs the information that I want, but each message is duplicated.
The first code block below is at the top of my .py file.
import datetime as dt  # needed for dt.date.today() below
import logging
import traceback
# set up the log file name
log_file_name = dt.date.today().strftime('%Y_%B_%d')
log_path = "C:/some_path/"
# setup the logging
log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
logger = logging.getLogger(__name__)
logger.setLevel('DEBUG')
file_handler = logging.FileHandler(log_path + log_file_name + '.log')
formatter = logging.Formatter(log_format)
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
In my function I'm using a line like the one below to log some information.
logger.debug('some message')
My .py file doesn't import any other modules that do any logging of their own, so I'm a bit confused as to why the messages are being duplicated.
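For reference, duplicated messages usually mean that the same logger ended up with more than one handler (for example because the setup code ran twice) or that records are also propagating to a handler on the root logger. A minimal, purely illustrative guard looks like this (the file name is hypothetical):
import logging

logger = logging.getLogger(__name__)

# Only attach a file handler if this logger doesn't already have one,
# so re-running the setup code can't add a second copy.
if not logger.handlers:
    file_handler = logging.FileHandler('example.log')
    file_handler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
    logger.addHandler(file_handler)

# Optionally stop records from also reaching handlers on the root logger.
logger.propagate = False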
logging.py
import logging

def log():
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)
    formatter = logging.Formatter('%(asctime)s:%(name)s:%(message)s')
    file_handler = logging.FileHandler('sample.log')
    file_handler.setLevel(logging.ERROR)
    file_handler.setFormatter(formatter)
    stream_handler = logging.StreamHandler()
    stream_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
    logger.addHandler(stream_handler)
    return logger
I have this piece of code above. I want to call this function in every file in the project so that logs are printed to the command line by the StreamHandler and stored in the same sample.log file across the whole project.
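A minimal usage sketch, assuming the setup module is renamed to something like log_setup.py (calling it logging.py would shadow the standard library logging module) and that worker.py is any other file in the project:
# worker.py (hypothetical module elsewhere in the project)
from log_setup import log  # log_setup.py contains the log() function above

logger = log()
logger.error('Something went wrong')  # printed by the StreamHandler and written to sample.log
Note that every call to log() attaches another pair of handlers to the same logger, so calling it only once per process (or adding a guard such as if not logger.handlers:) avoids duplicated lines.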
Hello Stack Overflow users,
I'm writing some simple code and I got an "AttributeError" exception.
import logging

log_level = 'INFO'  # DEBUG - INFO - WARNING - ERROR - CRITICAL
logging.basicConfig(
    format=('%(asctime)s > [%(levelname)s] in %(funcName)s on line '
            '%(lineno)d > %(message)s'),
    level=logging.log_level,  # this attribute lookup raises the AttributeError
    filename='logs.log', filemode='w', encoding='utf-8')
Sorry for asking such a stupid question.
Thank you in advance.
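For reference, a minimal sketch of the same call with the level taken straight from the string variable; logging.basicConfig accepts level names such as 'INFO' as strings, so no logging.log_level attribute is needed (note that the encoding argument requires Python 3.9+):
import logging

log_level = 'INFO'  # DEBUG - INFO - WARNING - ERROR - CRITICAL

logging.basicConfig(
    format=('%(asctime)s > [%(levelname)s] in %(funcName)s on line '
            '%(lineno)d > %(message)s'),
    level=log_level,  # pass the string (or e.g. logging.INFO) directly
    filename='logs.log', filemode='w', encoding='utf-8')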
Your requirement is not very clear, but the approach below can be a neat solution:
import logging

def get_logger(logger_name, create_file=False):
    # create logger
    log = logging.getLogger(logger_name)
    log.setLevel(logging.INFO)

    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')

    if create_file:
        # create file handler for logger
        fh = logging.FileHandler('my_log.log')
        fh.setLevel(logging.DEBUG)
        fh.setFormatter(formatter)

    # create console handler for logger
    ch = logging.StreamHandler()
    ch.setLevel(logging.DEBUG)
    ch.setFormatter(formatter)

    # add handlers to logger
    if create_file:
        log.addHandler(fh)
    log.addHandler(ch)
    return log
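A short usage sketch, assuming the function lives in a module the rest of the project imports (the logger name and messages are illustrative):
logger = get_logger('my_app', create_file=True)

logger.info('Pipeline started')  # goes to the console and to my_log.log
logger.debug('Not shown: the logger level is INFO')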
I am using Kedro, but I can't get my log file to be used. I am following the tutorial. The log file was created but is still empty.
Steps done:
Configured logging
import logging
from logging.handlers import TimedRotatingFileHandler
# KedroContext is imported from kedro as in the project template

class ProjectContext(KedroContext):
    def _setup_logging(self) -> None:
        log = logging.getLogger(__name__)
        handler = TimedRotatingFileHandler(filename='logs/mypipeline.log', when='d', interval=1)
        f_format = logging.Formatter('%(asctime)s %(levelname)s %(funcName)s %(lineno)d %(message)s')
        handler.setFormatter(f_format)
        log.addHandler(handler)
        log.setLevel(logging.DEBUG)
Used logging (in my nodes.py file)
import logging
log = logging.getLogger(__name__)
log.warning("Issue warning")
log.info("Send information")
And after running the pipeline, the log file is created but stays empty.
Any advice?
OK, problem solved! The logger definition was missing from the logging.yml file. Thank you guys for your support!
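For reference, a minimal sketch of what such a logger entry in logging.yml (typically under conf/base/) can look like; the logger and handler names are illustrative and follow the standard dictConfig schema that Kedro's logging.yml uses:
loggers:
  my_project.pipelines:  # hypothetical name matching __name__ in nodes.py
    level: INFO
    handlers: [console, info_file_handler]
    propagate: no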
I am trying to log all exceptions within my application. I want to have different handlers, one handler to log all information to a file and only critical problems to my email.
I created a file_handler, which should log everything, and an email_handler for the critical problems. The handlers work, apart from catching unexpected errors. I tried logging everything with the root logger, which has the same configuration as the file_handler, and that works. Why doesn't my file_handler catch unexpected errors?
Here is my code:
import logging
import logging.handlers
from logging.handlers import SMTPHandler

# root logger
logging.basicConfig(filename='flask.log', level=logging.WARNING)
# own logger
logger = logging.getLogger('flask')
file_handler = logging.handlers.TimedRotatingFileHandler('flask.log', when='D', interval=1)
email_handler = SMTPHandler(...)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logger.setLevel(logging.DEBUG)
file_handler.setLevel(logging.DEBUG)
email_handler.setLevel(logging.WARNING)
file_handler.setFormatter(formatter)
email_handler.setFormatter(formatter)
logger.addHandler(file_handler)
logger.addHandler(email_handler)
If I have an error in a function (for example, class A doesn't have attribute X), the unexpected problem only gets logged by the root logger.
Workaround:
If I don't give "my" logger a name, I get the root logger. My handlers then pick up its records and distribute them. It is not ideal, because it should work with "my" logger as well.
Workaround code:
# own logger
logger = logging.getLogger() # no name -> logger = root logger
file_handler = logging.handlers.TimedRotatingFileHandler('flask.log', when='D', interval=1)
email_handler = SMTPHandler(...)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logger.setLevel(logging.DEBUG)
file_handler.setLevel(logging.DEBUG)
email_handler.setLevel(logging.WARNING)
file_handler.setFormatter(formatter)
email_handler.setFormatter(formatter)
logger.addHandler(file_handler)
logger.addHandler(email_handler)
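For reference, the reason the workaround catches everything is that records from every named logger propagate up to the root logger's handlers by default; a minimal sketch (the logger name is illustrative):
import logging

root = logging.getLogger()  # no name -> the root logger
root.setLevel(logging.DEBUG)
root.addHandler(logging.FileHandler('flask.log'))

# A record logged anywhere in the application...
some_logger = logging.getLogger('some.module')
some_logger.error('unexpected problem')  # ...propagates up to the root handlers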
I have a Python program that uses multiprocessing to increase efficiency, and a function that creates a logger for each process. The logger function looks like this:
import logging
import os
def create_logger(app_name):
"""Create a logging interface"""
# create a logger
if logging in os.environ:
logging_string = os.environ["logging"]
if logging_string == "DEBUG":
logging_level = loggin.DEBUG
else if logging_string == "INFO":
logging_level = logging.INFO
else if logging_string == "WARNING":
logging_level = logging.WARNING
else if logging_string == "ERROR":
logging_level = logging.ERROR
else if logging_string == "CRITICAL":
logging_level = logging.CRITICAL
else:
logging_level = logging.INFO
logger = logging.getLogger(app_name)
logger.setLevel(logging_level)
# Console handler for error output
console_handler = logging.StreamHandler()
console_handler.setLevel(logging_level)
# Formatter to make everything look nice
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
console_handler.setFormatter(formatter)
# Add the handlers to the logger
logger.addHandler(console_handler)
return logger
And my processing functions look like this:
import custom_logging
def do_capture(data_dict_access):
"""Process data"""
# Custom logging
LOGGER = custom_logging.create_logger("processor")
LOGGER.debug("Doing stuff...")
However, no matter what the logging environment variable is set to, I still receive debug log messages in the console. Why is my logging level not taking effect? Surely the calls to setLevel() should stop the debug messages from being logged.
Here is an easy way to create a logger object:
import logging
import os

def create_logger(app_name):
    """Create a logging interface"""
    logging_level = os.getenv('logging', logging.INFO)
    logging.basicConfig(
        level=logging_level,
        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    logger = logging.getLogger(app_name)
    return logger
Discussion
There is no need to convert from "DEBUG" to logging.DEBUG; the logging module understands these strings.
Use basicConfig to ease the pain of setting up a logger. You don't need to create a handler, set the format, set the level, and so on. This should work for most cases.
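A short usage sketch, assuming the environment variable is called logging as in the question (the app name is illustrative):
import os

os.environ['logging'] = 'WARNING'  # would normally be set outside the program

logger = create_logger('processor')
logger.debug('Hidden: below the WARNING level')
logger.warning('Shown: at or above the WARNING level')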
Update
I found out why your code does not work, besides the else if. Consider this line:
if logging in os.environ:
On this line, logging without quotes refers to the logging library module. What you want is:
if 'logging' in os.environ: