How do I set up logging in a Python package and the supporting unit tests so that I get a logging file out that I can look at when things go wrong?
Currently, package logging seems to be getting captured by nose/unittest and is dumped to the console if there is a failed test; only the unit tests' own logging makes it into the file.
In both the package and unit test source files, I'm currently getting a logger using:
import logging
import package_under_test
log = logging.getLogger(__name__)
In the unit test script I have been trying to set up log handlers using the basic FileHandler, either directly inline or via the setUp()/setUpClass() TestCase methods.
The logging config is currently set in the unit test script's setUp() method:
root, ext = os.path.splitext(__file__)
log_filename = root + '.log'
log_format = (
    '%(asctime)8.8s %(filename)-12.12s %(lineno)5.5s:'
    ' %(funcName)-32.32s %(message)s')
datefmt = "%H:%M:%S"
log_fmt = logging.Formatter(log_format, datefmt)
log_handler = logging.FileHandler(log_filename, mode='w')
log_handler.setFormatter(log_fmt)
log.addHandler(log_handler)
log.setLevel(logging.DEBUG)
log.debug('test logging enabled: %s', log_filename)
The log in the last line does end up in the file but this configuration clearly doesn't filter back into the imported package being tested.
Logging objects operate in a hierarchy, and log messages 'bubble up' the hierarchy chain and are passed to any handlers along the way (provided the log level of the message is at or exceeds the minimal threshold of the logger object you are logging on). Ignoring filtering and global log-level configurations, in pseudo code this is what happens:
if record.level < current_logger.level:
    return
for logger_object in (current_logger + current_logger.parents_reversed):
    for handler in logger_object.handlers:
        if record.level >= handler.level:
            handler.handle(record)
    if not logger_object.propagate:
        # propagation disabled, the buck stops here.
        break
Handlers are what actually write a log message to a file, to the console, etc.
The problem you have is that you added your log handlers to the __name__ logger, where __name__ is the current module's dotted name. The . separators in that name are hierarchy separators, so if you run this in, say, acme.tests, then only the loggers in acme.tests and the modules it contains send messages to this handler. Any code outside acme.tests will never reach these handlers.
Your log object hierarchy is something akin to this:
- acme
  - frobnars
  - tests
    # logger object with your handlers
    - test1
    - test2
  - widgets
then only log records from test1 and test2 will reach those handlers.
You can move your handlers to the root logger instead, which you get with logging.root or logging.getLogger() (no name argument, or the name set to None). All loggers are child nodes of the root logger, and as long as they don't set the propagate attribute to False, log messages will reach the root handlers.
The other options are to get the acme logger object explicitly, with logging.getLogger('acme'), or always use a single, explicit logger name throughout your code that is the same in your tests and in your library.
Do take into account that Nose also configures handlers on the root logger.
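Putting that together, here is a minimal sketch of moving the handler to the root logger from a TestCase (the logger name acme.widgets and the log file name are illustrative, not from your code):

```python
import logging
import unittest


class PackageLoggingTestCase(unittest.TestCase):
    """Attach a file handler to the *root* logger, so records from any
    package propagate up to it during the tests."""

    @classmethod
    def setUpClass(cls):
        cls.handler = logging.FileHandler('test_run.log', mode='w')
        cls.handler.setFormatter(logging.Formatter(
            '%(asctime)s %(name)-20s %(levelname)-8s %(message)s'))
        root = logging.getLogger()        # no name argument -> root logger
        root.addHandler(cls.handler)
        root.setLevel(logging.DEBUG)

    @classmethod
    def tearDownClass(cls):
        # Clean up so other test modules are not affected.
        logging.getLogger().removeHandler(cls.handler)
        cls.handler.close()

    def test_package_logging_reaches_file(self):
        # Simulates a logger deep inside the package under test.
        logging.getLogger('acme.widgets').debug('hello from the package')
        self.handler.flush()
        with open('test_run.log') as f:
            self.assertIn('hello from the package', f.read())
```

Because the handler sits on the root logger, any logger the package creates with logging.getLogger(__name__) will reach it, regardless of the package name.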
Related
I have a main script and multiple modules. Right now I have logging set up so that all the logging from all modules goes into the same log file, and it gets hard to debug when it's all in one file. So I would like to separate each module into its own log file. I would also like to see the logging of the requests module that each module uses in the log file of the module that used it. I don't know if this is even possible. I have searched everywhere and tried everything I could think of, but it always comes back to either logging everything into one file, or setting up logging in each module and initiating the script from my main module instead of importing it as a module.
main.py
import logging, logging.handlers
import other_script
console_debug = True
log = logging.getLogger()
def setup_logging():
    filelog = logging.handlers.TimedRotatingFileHandler(path + 'logs/api/api.log',
                                                        when='midnight', interval=1, backupCount=3)
    filelog.setLevel(logging.DEBUG)
    fileformatter = logging.Formatter('%(asctime)s %(name)-15s %(levelname)-8s %(message)s')
    filelog.setFormatter(fileformatter)
    log.addHandler(filelog)
    if console_debug:
        console = logging.StreamHandler()
        console.setLevel(logging.DEBUG)
        formatter = logging.Formatter('%(name)-15s: %(levelname)-8s %(message)s')
        console.setFormatter(formatter)
        log.addHandler(console)

if __name__ == '__main__':
    setup_logging()
other_script.py
import requests
import logging
log = logging.getLogger(__name__)
One very basic concept of Python logging is that every file, stream, or other destination that logs go to corresponds to one Handler. So if you want every module to log to a different file, you have to give every module its own handler. This can also be done from a central place. In your main.py you could add this to make the other_script module log to a separate file:
other_logger = logging.getLogger('other_script')
other_logger.addHandler(logging.FileHandler('other_file'))
other_logger.propagate = False
The last line is only required if you also add a handler to the root logger. If you keep propagate at its default of True, all logs will be sent to the root logger's handlers too. In your scenario it might be better not to use the root logger at all, and to use a specific named logger like getLogger('__main__') in main.
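As a sketch of this approach (the module names follow the question; the per-module file naming scheme is an assumption), you can configure all the per-module handlers centrally in main.py, once, before the modules start logging:

```python
import logging


def setup_per_module_logging(module_names,
                             fmt='%(asctime)s %(name)-15s %(levelname)-8s %(message)s'):
    """Attach a dedicated FileHandler to each module's logger and stop
    propagation, so each module's records end up only in its own file."""
    formatter = logging.Formatter(fmt)
    for name in module_names:
        handler = logging.FileHandler('%s.log' % name, mode='a')
        handler.setFormatter(formatter)
        logger = logging.getLogger(name)
        logger.addHandler(handler)
        logger.setLevel(logging.DEBUG)
        logger.propagate = False   # keep these records out of the root handlers


# e.g. in main.py, before the modules start doing work:
setup_per_module_logging(['other_script'])
logging.getLogger('other_script').info('goes to other_script.log only')
```

Any logger created inside other_script with logging.getLogger(__name__) is a child of the 'other_script' logger, so its records land in the same file.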
I'm currently developing a Tornado application made up of several independent modules.
The application runs under supervisord, so right now, every time I use
logging.info()
the output ends up in the supervisor log, and I'm OK with that.
The problem is that the supervisor log file is now full of very different output from different modules and very hard to read, so I now want every module to use a specific logger, with every logger writing to a different file.
So I created the logger:
def set_log_config(filename, when='h', interval=1, backupCount=0):
    directory = os.path.dirname(os.path.abspath(filename))
    create_folder(directory)
    app_log = logging.getLogger("tornado.application.fiscal")
    handler = logging.handlers.TimedRotatingFileHandler(filename, when=when, interval=interval, backupCount=backupCount)
    formatter = logging.Formatter('[%(levelname)s %(asctime)s.%(msecs)d %(module)s:%(lineno)d] %(message)s', datefmt='%y%m%d %H:%M:%S')
    handler.setFormatter(formatter)
    app_log.addHandler(handler)
    app_log.setLevel(logging.INFO)
    return app_log

fiscal_logger = set_log_config(
    '/home/dir/Trace/fiscal/fiscal_module_{:%Y-%m-%d}.log'.format(datetime.now()),
    when='midnight', interval=1, backupCount=21
)
The logger works, it writes to the specific file, but it also always writes to the supervisor log file and I don't understand why.
So my question is: how can I write to the specific file when I use fiscal_logger.info, and to the supervisor file when I use logging.info?
First, let me explain why your logger also writes to the supervisor log file.
Writing to the supervisor log file means there is a StreamHandler somewhere in your logger chain.
logging.info is basically equivalent to logging.getLogger().info, which means it uses the root logger. Additionally, the module-level logging.xxx functions automatically add a StreamHandler to the root logger if it has no handlers.
And by default, log records are propagated up the logger chain (for example, "tornado.application.fiscal"'s chain is root -> tornado -> tornado.application -> tornado.application.fiscal). So fiscal_logger's records are propagated to the root logger and processed by root's StreamHandler. That's why you see those logs in the supervisor log file.
To fix this problem, you have at least two options:
1. Stop using the module-level logging.xxx functions; use another named logger, such as console_logger, instead.
2. Set fiscal_logger.propagate to False.
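A minimal sketch of both options (the console_logger name is an illustrative choice, not part of your code):

```python
import logging

# Option 1: route "console" output through a dedicated named logger
# instead of the module-level logging.info(), so the root logger is
# never implicitly configured by the logging.xxx functions.
console_logger = logging.getLogger('console')
console_logger.addHandler(logging.StreamHandler())
console_logger.setLevel(logging.INFO)
console_logger.info('this still ends up in the supervisor log')

# Option 2: stop fiscal records from bubbling up to the root logger's
# StreamHandler, and therefore keep them out of the supervisor log.
fiscal_logger = logging.getLogger('tornado.application.fiscal')
fiscal_logger.propagate = False
```

With option 2 in place, fiscal_logger.info() only reaches the handlers attached to the fiscal logger itself (your TimedRotatingFileHandler).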
referring to this question here: LINK
How can I set up a config that will only log my root script and my own sub-scripts? The question at the link asked about disabling logging from all imported modules, but that is not my intention.
My root setup:
import logging
from exchangehandler import send_mail

log_wp = logging.getLogger(__name__)

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s - %(levelname)s [%(filename)s]: %(name)s %(funcName)20s - Message: %(message)s',
                    datefmt='%d.%m.%Y %H:%M:%S',
                    filename='C:/log/myapp.log',
                    filemode='a')

handler = logging.StreamHandler()
log_wp.addHandler(handler)

log_wp.debug('This is from root')
send_mail('address#eg.com', 'Request', 'Hi there')
My sub-module exchangehandler.py:
import logging

log_wp = logging.getLogger(__name__)

def send_mail(mail_to, mail_subject, mail_body, mail_attachment=None):
    log_wp.debug('Hey this is from exchangehandler.py!')
    m.send_and_save()
myapp.log:
16.07.2018 10:27:40 - DEBUG [test_script.py]: __main__ <module> - Message: This is from root
16.07.2018 10:28:02 - DEBUG [exchangehandler.py]: exchangehandler send_mail - Message: Hey this is from exchangehandler.py!
16.07.2018 10:28:02 - DEBUG [folders.py]: exchangelib.folders get_default_folder - Message: Testing default <class 'exchangelib.folders.SentItems'> folder with GetFolder
16.07.2018 10:28:02 - DEBUG [services.py]: exchangelib.services get_payload - Message: Getting folder ArchiveDeletedItems (archivedeleteditems)
16.07.2018 10:28:02 - DEBUG [services.py]: exchangelib.services get_payload - Message: Getting folder ArchiveInbox (archiveinbox)
My problem is that the log file also contains a lot of output from the exchangelib module that is imported in exchangehandler.py. Either the imported exchangelib module is configured incorrectly or I have made a mistake somewhere. So how can I reduce the log output to only my own logging messages?
EDIT:
An extract of the folder.py of the exchangelib-module. This is not anything that I have written:
import logging

log = logging.getLogger(__name__)

def get_default_folder(self, folder_cls):
    try:
        # Get the default folder
        log.debug('Testing default %s folder with GetFolder', folder_cls)
        # Use cached instance if available
        for f in self._folders_map.values():
            if isinstance(f, folder_cls) and f.has_distinguished_name:
                return f
        return folder_cls.get_distinguished(account=self.account)
The imported exchangelib module is not configured at all when it comes to logging. You are configuring it implicitly by calling logging.basicConfig() in your main module.
exchangelib does create loggers and logs to them, but by default these loggers have no handlers and formatters attached, so they don't do anything visible. What they do is propagate up to the root logger, which by default also has no handlers and formatters attached.
By calling logging.basicConfig in your main module, you actually attach handlers to the root logger. Your own, desired loggers propagate to the root logger, hence the messages are written to the handlers, but the same is true for the exchangelib loggers from that point onwards.
You have at least two options here. You can explicitly configure "your" named logger(s):
main module
import logging
log_wp = logging.getLogger(__name__) # or pass an explicit name here, e.g. "mylogger"
hdlr = logging.StreamHandler()
fhdlr = logging.FileHandler("myapp.log")
log_wp.addHandler(hdlr)
log_wp.addHandler(fhdlr)
log_wp.setLevel(logging.DEBUG)
The above is very simplified. To explicitly configure multiple named loggers, refer to the logging.config HowTo
If you rather want to stick to just using the root logger (configured via basicConfig()), you can also explicitly disable the undesired loggers after exchangelib has been imported and these loggers have been created:
logging.getLogger("exchangelib.folders").disabled = True
logging.getLogger("exchangelib.services").disabled = True
If you don't know the names of the loggers to disable, logging has a dictionary holding all the known loggers. So you could temporarily do this to see all the loggers your program creates:
# e.g. after the line 'log_wp.addHandler(handler)'
print([k for k in logging.Logger.manager.loggerDict])
Using the dict would also allow you to do something like this:
for v in logging.Logger.manager.loggerDict.values():
    if v.name.startswith('exchangelib'):
        v.disabled = True
I have a generic logger instance specific to my projects. It automatically creates and attaches 2 handlers (StreamHandler and TimedRotatingFileHandler) with different formatting etc. preconfigured to them.
logging_formatters = {
    'fmt': "%(asctime)s [%(levelname)8s:%(process)05d] [%(module)10s:%(lineno)03d] (%(name)s) %(message)s",
    'datefmt': "%Y-%m-%d %H:%M:%S"
}

def get_logger(application_name=None, filename=None, *args, **kwargs):
    if not isinstance(application_name, str):
        raise ValueError("Logger class expects a string type application name")
    if filename is None:
        filename = application_name
    if not filename.endswith(".log"):
        filename = filename.split('.')[0] + ".log"

    log_path = kwargs.get('log_path')
    service_name = kwargs.get('service_name', '')
    console_level = kwargs.get('console_level', logging.INFO)
    file_level = kwargs.get('file_level', logging.DEBUG)

    logger = logging.getLogger(application_name)
    if len(logger.handlers) > 0:
        return logger

    # Create 2 handlers, and add them to the logger created
    # ...
    # ...
    # ...
Now, suppose my flask project structure is something similar to:
/
- app.py
- settings.py
- dir1/
  - __init__.py
  - mod1.py
- dir2/
  - __init__.py
  - mod2.py
I start the server using python app.py. The app.py in itself imports the dir1.mod1 and dir2.mod2 modules. Each of those modules create their own logger instance as follows:
logger = log_package.get_logger(
    application_name='{}.{}'.format(settings.APPLICATION_NAME, __name__),
    filename=settings.LOG_FILE_NAME,
    service_name=settings.SERVICE_NAME,
)
and in case of app.py it is:
logger = log_package.get_logger(
    application_name='{}.{}'.format(settings.APPLICATION_NAME, 'run'),
    filename=settings.LOG_FILE_NAME,
    service_name=settings.SERVICE_NAME,
)
Now, the issue I am facing is that the TimedRotatingFileHandler is working fine for all the submodules (namely dir1.mod1, dir2.mod2, etc.); however, the logs from app.py are not being rolled over to a new file. That particular instance keeps writing logs to the same file as when the service was started. For example, if I started the service on 2017-07-11, then app.py keeps writing logs to LOG_FILE_NAME.log.2017-07-11, whereas the other modules roll over every day (when='midnight') and their new logs are written to LOG_FILE_NAME.log.
What could be the issue behind TimedRotatingFileHandler not working for a particular file? I ran the lsof command for all files in the directory, and this was the output:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
python 23795 ubuntu 4w REG 202,1 2680401 150244 /path/to/LOG_FILE_NAME.2017-07-12
python 23795 ubuntu 33w REG 202,1 397074 150256 /path/to/LOG_FILE_NAME.log
Do I need to share a logger instance across modules in a Python project? I think this should not be required, as the logging module is thread-safe in itself.
The TimedRotatingFileHandler instance shouldn't be writing to a file other than LOG_FILE_NAME.log - the other files, such as LOG_FILE_NAME.2017-07-12, are created when rolling over, and shouldn't be left open.
You should ensure that you have only one TimedRotatingFileHandler instance in your process for a given filename - if you have two different handler instances both referencing the same filename, you may get unexpected behaviour (if one instance rolls over, it changes the file it's using, but the other instance will still have a reference to the old file, and keep writing to that).
You should probably just attach your handlers to the root logger, rather than individual module loggers, and the other loggers would pick those handlers up, under default conditions (loggers' default propagate settings not changed).
Update: %(name)s always gives the name of the logger which was used for the logging call, even when the handlers are attached to an ancestor logger. If propagate is set to False for a logger, then the handlers of ancestor loggers aren't used - so you should leave propagate at its default value of True.
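A sketch of that setup, using the question's module names: the handler lives only on the root logger, yet %(name)s still identifies which module logged each record (the StringIO stream here just makes the output easy to inspect):

```python
import io
import logging

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter('%(name)s: %(message)s'))

root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.DEBUG)

# Module loggers need no handlers of their own; records propagate up
# to root, and %(name)s records which logger made the call.
logging.getLogger('dir1.mod1').info('from mod1')
logging.getLogger('app').info('from app')

print(stream.getvalue())
```

With a single handler instance shared this way, there is exactly one TimedRotatingFileHandler (or any other handler) per file, so rollover behaves consistently for every module.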
I am trying to beef up the logging in my Python scripts, and I would be grateful if you could share best practices with me. For now I have created this little script (I should say that I run Python 3.4):
import logging
import io
import sys
def Streamhandler(stream, level, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"):
    ch = logging.StreamHandler(stream)
    ch.setLevel(level)
    formatter = logging.Formatter(format)
    ch.setFormatter(formatter)
    return ch
# get the root logger
logger = logging.getLogger()
stream = io.StringIO()
logger.addHandler(Streamhandler(stream, logging.WARN))
stream_error = io.StringIO()
logger.addHandler(Streamhandler(stream_error, logging.ERROR))
logger.addHandler(Streamhandler(stream=sys.stdout, level=logging.DEBUG))
print(logger)
for h in logger.handlers:
    print(h)
    print(h.level)
# 'application' code # goes to the root logger!
logging.debug('debug message')
logging.info('info message')
logging.warning('warn message')
logging.error('error message')
logging.critical('critical message')
print(stream.getvalue())
print(stream_error.getvalue())
I have three handlers; two of them write into an io.StringIO (this seems to work). I need this to simplify testing, but also to send logs via an HTTP email service. And then there is a StreamHandler for the console. However, logging.debug and logging.info messages are ignored on the console here, despite the handler's level being set explicitly low enough?!
First, you didn't set the level on the logger itself:
logger.setLevel(logging.DEBUG)
Also, you define a logger but make your logging calls on the module-level logging functions, which go to the root logger. Not that it makes any difference in your case: since you didn't specify a name for your logger, logging.getLogger() returns the root logger anyway.
As for "best practices", it really depends on how complex your scripts are and, of course, on your logging needs.
For self-contained simple scripts with simple use cases (single known environment, no concurrent execution, simple logging to a file or stderr etc), a simple call to logging.basicConfig() and direct calls to logging.whatever() are usually good enough.
For anything more complex, it's better to use a distinct config file, either in ini format or as a Python dict (using logging.config.dictConfig). Split your script into distinct modules or packages, each defining its own named logger (with logger = logging.getLogger(__name__)), and keep the script itself as just the "runner" for your code: configure logging, import modules, parse command-line args and call the main function, preferably in a try/except block so as to properly log any unhandled exception before crashing.
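For instance, a minimal dictConfig sketch along those lines (the "myapp" logger name and the file name are placeholders, not from the question):

```python
import logging
import logging.config

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'standard': {'format': '%(asctime)s %(name)s %(levelname)s %(message)s'},
    },
    'handlers': {
        # Console only shows warnings and worse...
        'console': {'class': 'logging.StreamHandler',
                    'formatter': 'standard', 'level': 'WARNING'},
        # ...while the file captures everything from the app's loggers.
        'file': {'class': 'logging.FileHandler', 'formatter': 'standard',
                 'filename': 'myapp.log', 'mode': 'w'},
    },
    'loggers': {
        'myapp': {'handlers': ['file'], 'level': 'DEBUG'},
    },
    'root': {'handlers': ['console'], 'level': 'WARNING'},
}

logging.config.dictConfig(LOGGING)

# Any module under the "myapp" namespace inherits this configuration.
logging.getLogger('myapp.submodule').debug('recorded in myapp.log')
```

Each module then only needs logger = logging.getLogger(__name__); the configuration stays in one place in the runner script.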
A logger has a threshold level too; you need to set it to DEBUG first:
logger.setLevel(logging.DEBUG)