I have a logger.py file which initialises logging.
import logging

logger = logging.getLogger(__name__)

def logger_init():
    import os
    global logger
    logger.setLevel(logging.DEBUG)
    ch = logging.StreamHandler()
    ch.setLevel(logging.DEBUG)
    logger.addHandler(ch)
    # build the log file path from this module's own filename, e.g. logger.log
    log_path = os.path.join(os.getcwd(), os.path.splitext(os.path.basename(__file__))[0] + ".log")
    fh = logging.FileHandler(log_path)
    fh.setLevel(logging.DEBUG)
    logger.addHandler(fh)

logger_init()
I have another script caller.py that calls the logger.
from logger import *
logger.info("test log")
What happens is a log file called logger.log will be created containing the logged messages.
What I want is the name of this log file to be named after the caller script filename. So, in this case, the created log file should have the name caller.log instead.
I am using python 3.7
It is immensely helpful to consolidate logging to one location. I learned this the hard way. It is easier to debug when events are sorted by time and it is thread-safe to log to the same file. There are solutions for multiprocessing logging.
The log format can then contain the module name, function name and even the line number from where the log call was made. This is invaluable. You can find the full list of LogRecord attributes you can include automatically in a log message in the logging module's documentation.
Example format:
format='[%(asctime)s] [%(module)s.%(funcName)s] [%(levelname)s] %(message)s'
Example log message
[2019-04-03 12:29:48,351] [caller.work_func] [INFO] Completed task 1.
You can get the filename of the main script from the first item in sys.argv; if you want the calling module rather than the main script, check the answers on the linked question.
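For example, a minimal sketch of that approach (the logger_init helper and the filename derivation are illustrative, not from the original code):

import logging
import os
import sys

def logger_init():
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    # name the log file after the main script: caller.py -> caller.log
    script_name = os.path.splitext(os.path.basename(sys.argv[0]))[0]
    fh = logging.FileHandler(script_name + '.log')
    fh.setFormatter(logging.Formatter(
        '[%(asctime)s] [%(module)s.%(funcName)s] [%(levelname)s] %(message)s'))
    logger.addHandler(fh)

logger_init()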
Related
I have a service running on Flask inside Docker that receives requests.
For every request, I want to export a log file including all logs from all modules along the way.
Right now it logs constantly to the stream using basicConfig, and that works great.
However, I need to add a FileHandler for every request (each with a different file path).
I add the FileHandler when a request arrives in requestHandler.py and remove the handler when the request is handled. This works and the file is created properly, but for some reason it does not apply to the other modules; only the module that created the handler logs to the file.
Reading the docs, logging is supposed to be global:
"Using logging in multiple modules
Multiple calls to logging.getLogger('someLogger') return a reference to the same logger object. This is true not only within the same module, but also across modules as long as it is in the same Python interpreter process. It is true for references to the same object; additionally, application code can define and configure a parent logger in one module and create (but not configure) a child logger in a separate module, and all logger calls to the child will pass up to the parent."
My Modules Setup:
main.py
logging.basicConfig(level=logging.INFO, format=config.LOG_FORMAT)
logger = logging.getLogger()
requestHandler.py
logger = logging.getLogger(__name__)
def add_file_logging(self):
    for hdlr in logger.handlers[:]:  # remove the existing file handlers
        if isinstance(hdlr, logging.FileHandler):
            logger.removeHandler(hdlr)
    fh = logging.FileHandler(filename=self.log_file_path)
    fh.setFormatter(logging.Formatter(config.LOG_FORMAT))  # setFormatter needs a Formatter, not a format string
    fh.setLevel(logging.INFO)
    logger.addHandler(fh)
    logger.info("Added file handler : {}".format(self.log_file_path))
other_module.py
logger = logging.getLogger(__name__)
Console logging works great
File logging includes only the single log line from requestHandler.py.
The loop that is supposed to delete old handlers never finds anything to remove (logger.handlers is empty), even though the idea is that it should replace the old file handler with the new one.
Please help if you have a solution or a better approach for this feature.
Thanks!:)
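A minimal sketch of the pattern the quoted docs describe, assuming the fix is to attach the per-request FileHandler to the root logger (the parent that every getLogger(__name__) logger propagates to) rather than to requestHandler's own module logger:

import logging

def add_file_logging(log_file_path, log_format):
    root = logging.getLogger()  # parent of every module-level logger
    # drop the file handler left over from the previous request
    for hdlr in root.handlers[:]:
        if isinstance(hdlr, logging.FileHandler):
            root.removeHandler(hdlr)
    fh = logging.FileHandler(log_file_path)
    fh.setFormatter(logging.Formatter(log_format))
    fh.setLevel(logging.INFO)
    root.addHandler(fh)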
I have a main script and multiple modules. Right now I have logging set up so that all the logging from all modules goes into the same log file, and it gets hard to debug when it's all in one file. So I would like to separate each module into its own log file. I would also like the logging from the requests module to go into the log file of the module that used it. I don't know if this is even possible. I searched everywhere and tried everything I could think of, but it always comes back to either logging everything into one file, or setting up logging in each module and running each script directly instead of importing it as a module.
main.py
import logging, logging.handlers
import other_script

console_debug = True
log = logging.getLogger()

def setup_logging():
    filelog = logging.handlers.TimedRotatingFileHandler(path + 'logs/api/api.log',
                                                        when='midnight', interval=1, backupCount=3)
    filelog.setLevel(logging.DEBUG)
    fileformatter = logging.Formatter('%(asctime)s %(name)-15s %(levelname)-8s %(message)s')
    filelog.setFormatter(fileformatter)
    log.addHandler(filelog)
    if console_debug:
        console = logging.StreamHandler()
        console.setLevel(logging.DEBUG)
        formatter = logging.Formatter('%(name)-15s: %(levelname)-8s %(message)s')
        console.setFormatter(formatter)
        log.addHandler(console)

if __name__ == '__main__':
    setup_logging()
other_script.py
import requests
import logging
log = logging.getLogger(__name__)
One very basic concept of Python logging is that every destination logs go to (a file, a stream, or anything else) corresponds to one Handler. So if you want every module to log to a different file, you have to give every module's logger its own handler. This can also be done from a central place: in your main.py you could add this to make the other_script module log to a separate file:
other_logger = logging.getLogger('other_script')
other_logger.addHandler(logging.FileHandler('other_file'))
other_logger.propagate = False
The last line is only required if you add a handler to the root logger. If you keep propagate at its default of True, all logs will also be sent to the root logger's handlers. In your scenario it might be better not to use the root logger at all, and instead use a specific named logger like getLogger('__main__') in main.
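As an illustrative sketch of that advice applied to the modules from the question (routing the requests library's records into the same file is an assumption; depending on your requests version, much of its wire-level logging appears under the 'urllib3' logger rather than 'requests'):

import logging

def setup_module_logging(module_name, filename):
    handler = logging.FileHandler(filename)
    handler.setFormatter(logging.Formatter('%(asctime)s %(name)-15s %(levelname)-8s %(message)s'))
    mod_logger = logging.getLogger(module_name)
    mod_logger.addHandler(handler)
    mod_logger.propagate = False  # keep these records out of the root handlers
    return handler

handler = setup_module_logging('other_script', 'other_script.log')
# to also capture the HTTP library's own log records in that file:
logging.getLogger('requests').addHandler(handler)
logging.getLogger('urllib3').addHandler(handler)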
I am writing an API (Python 2.7.x), and I have a worker script for it which does nothing on its own but can be wrapped by a variety of higher-level scripts (i.e. one that feeds the worker data from a CSV, one from a DB, etc.). The current task requires me to:
log INFO+ to console
log a certain set of INFO+ events to a .csv file
log ALL events to a distinct .log file
I've distilled my code to the following examples:
# SuperExample.py
import logging
import SubExample
def main():
    logging.basicConfig(level=logging.INFO)

    verbose_log = 'debug.log'
    data_log = 'data.csv'

    format_string = '%(asctime)s::%(name)s::%(levelname)s::%(message)s'
    formatter = logging.Formatter(format_string)

    # verbose log is a typical event log used for debugging
    verbose = logging.FileHandler(verbose_log, mode='w')
    verbose.setLevel(logging.DEBUG)
    verbose.setFormatter(formatter)
    SubExample.logger.addHandler(verbose)

    # data log will eventually have a different formatter and a filter in
    # order to get a narrow set of events, formatted for post-processing ease
    data = logging.FileHandler(data_log, mode='w')
    data.setLevel(logging.INFO)
    data.setFormatter(formatter)
    SubExample.logger.addHandler(data)

    logging.info('Started')
    SubExample.do_something()
    logging.info('Finished')

if __name__ == '__main__':
    main()
and
# SubExample.py
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

def do_something():
    logger.debug('hey look I am doing something!')
    logger.debug('now I am doing something else!!')
    logger.info('this is my result!!!')
which gives me what I want in my files, but gives me this in my console:
INFO:root:Started
DEBUG:SubExample:hey look I am doing something!
DEBUG:SubExample:now I am doing something else!!
INFO:SubExample:this is my result!!!
INFO:root:Finished
I've read about the logging module and its best practices, but very little of the example code works exactly the way it's described when libraries get involved. So, my first question is: is this a basically sane approach? I haven't actually seen anyone else attach handlers to the subscript's logger from the wrapper script, but it seems to do what I want.
And my second question is why do the DEBUG statements get into the console? I would think that logging.basicConfig(level=logging.INFO) should prevent this?
In the SuperExample.py file, I removed the basicConfig step and instead did this:
# SuperExample.py
import logging
import SubExample
def main():
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)
    logger.addHandler(console)
    ...
    ...
    logger.info('Started')
    ...
    logger.info('Finished')
In the SubExample.py file:
# SubExample.py
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
console = logging.StreamHandler()
console.setLevel(logging.INFO)
logger.addHandler(console)

def do_something():
    ....
The rest of the code is the same as yours. When I run SuperExample.py, this is the output:
test_project ~$ python SuperExample.py
Started
this is my result!!!
Finished
The debug.log file has this:
2017-10-25 16:18:08,292::SubExample::DEBUG::hey look I am doing something!
2017-10-25 16:18:08,292::SubExample::DEBUG::now I am doing something else!!
2017-10-25 16:18:08,292::SubExample::INFO::this is my result!!!
The data.csv file has this:
2017-10-25 16:18:08,292::SubExample::INFO::this is my result!!!
So, it seems like the right way to do this is to add a StreamHandler to the logger in each of your modules and set its level to whatever you want logged from there to the console. Also, whenever you do logging.getLogger(), you HAVE to set that logger's level to get the expected behavior from the handlers.
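Side note on the second question, as I understand the logging flow (this is my addition, not part of the answer above): a record's level is checked against the logger that emits it, here SubExample.logger at DEBUG, and the record then propagates up to the root handler that basicConfig installed. Propagation does not re-check ancestor logger levels, and that handler has no level of its own, so it emits everything it receives. An alternative fix is therefore to set a level on the root handler itself:

import logging
import SubExample

logging.basicConfig(level=logging.INFO)
# basicConfig installed one StreamHandler on the root logger; give it an
# explicit level so propagated DEBUG records are filtered out at the handler
for handler in logging.getLogger().handlers:
    handler.setLevel(logging.INFO)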
I have used Scrapy for several months. Several weeks ago, I started to use a file to record log information. I wrote the log-to-file function like this:
def logging_to_file(file_name):
    import logging
    import os
    from scrapy.utils.log import configure_logging

    filename = '%s-log.txt' % file_name
    if os.path.isfile(filename):
        os.remove(filename)
    configure_logging(install_root_handler=False)
    logging.basicConfig(
        filename=filename,
        filemode='a',
        format='%(levelname)s: %(message)s',
        level=logging.DEBUG
    )
    return logging.getLogger()
Then, in each Scrapy spider class, I use logger = logging_file.logging_to_file('./logs/xxx-%s' % time.strftime('%y%m%d')) in the __init__ function to customize the log file name.
Something went wrong today: I found that if I write two Scrapy spider classes in one .py file, then after I start the spider of the second class, the log file is still named with the file name given in the first class!
I think this is caused by Python's logging rules, but I don't know how to resolve it.
I'm not sure if I understand what your question is, but in general you don't have to create any functions or anything to configure your logger.
What you should do is create a logger, assign it a FileHandler, and then just use that logger to log your info.
import logging
logger = logging.getLogger('mylogger')  # omit the name to configure the root logger instead
fh = logging.FileHandler(LOG_FILE_DIR, mode='a')
logger.addHandler(fh)
You can put this anywhere that gets executed on program startup, like __init__.py or something.
Now when you want to log something just:
logger = logging.getLogger('mylogger')
logger.error("error happened, oh no!")
The official Python logging tutorial can be found in the Logging HOWTO in the Python docs.
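One detail worth knowing here (my addition, not part of the answer above): logging.basicConfig() does nothing if the root logger already has handlers configured, so the second spider's call silently keeps the first spider's file name. Giving each spider its own named logger with its own FileHandler sidesteps that:

import logging

def logging_to_file(file_name):
    # one independent logger per spider, keyed by its log file name
    logger = logging.getLogger(file_name)
    logger.setLevel(logging.DEBUG)
    fh = logging.FileHandler('%s-log.txt' % file_name, mode='w')
    fh.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
    logger.addHandler(fh)
    return logger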
I am trying to beef up the logging in my Python scripts, and I would be grateful if you could share best practices with me. For now I have created this little script (I should say that I run Python 3.4):
import logging
import io
import sys
def Streamhandler(stream, level, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"):
    ch = logging.StreamHandler(stream)
    ch.setLevel(level)
    formatter = logging.Formatter(format)
    ch.setFormatter(formatter)
    return ch
# get the root logger
logger = logging.getLogger()
stream = io.StringIO()
logger.addHandler(Streamhandler(stream, logging.WARN))
stream_error = io.StringIO()
logger.addHandler(Streamhandler(stream_error, logging.ERROR))
logger.addHandler(Streamhandler(stream=sys.stdout, level=logging.DEBUG))
print(logger)
for h in logger.handlers:
print(h)
print(h.level)
# 'application' code # goes to the root logger!
logging.debug('debug message')
logging.info('info message')
logging.warning('warn message')
logging.error('error message')
logging.critical('critical message')
print(stream.getvalue())
print(stream_error.getvalue())
I have three handlers; two of them write into an io.StringIO (this seems to work). I need this to simplify testing and also to send logs via an HTTP email service. Then there is a StreamHandler for the console. However, logging.debug and logging.info messages are ignored on the console despite the level being set explicitly low enough?!
First, you didn't set the level on the logger itself:
logger.setLevel(logging.DEBUG)
Also, you define a logger but do your calls on logging - which will call on the root logger. Not that it will make any difference in your case since you didn't specify a name for your logger, so logging.getLogger() returns the root logger.
wrt/ "best practices", it really depends on how "complex" your scripts are and of course on your logging needs.
For self-contained simple scripts with simple use cases (single known environment, no concurrent execution, simple logging to a file or stderr etc), a simple call to logging.basicConfig() and direct calls to logging.whatever() are usually good enough.
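For instance, a minimal sketch of that simple case:

import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
)
logging.info('processing started')  # handled by the root handler basicConfig set up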
For anything more complex, it's better to use a distinct config file (either in ini format or as a Python dict, via logging.config.dictConfig), split your script into distinct modules or packages, each defining its own named logger (with logger = logging.getLogger(__name__)), and keep the script itself as just the "runner" for your code: configure logging, import modules, parse command line args and call the main function, preferably in a try/except block so you can properly log any unhandled exception before crashing.
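A minimal dictConfig sketch of that layout (the handler names and file name here are illustrative):

import logging.config

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'default': {'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s'},
    },
    'handlers': {
        'console': {'class': 'logging.StreamHandler', 'formatter': 'default', 'level': 'INFO'},
        'file': {'class': 'logging.FileHandler', 'formatter': 'default',
                 'filename': 'app.log', 'level': 'DEBUG'},
    },
    'root': {'level': 'DEBUG', 'handlers': ['console', 'file']},
}

logging.config.dictConfig(LOGGING)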
A logger has a threshold level too; you need to set it to DEBUG first:
logger.setLevel(logging.DEBUG)