I am trying to implement logging within my Python program. The goal is to have a log file created, and to have everything the program does across its various modules logged to it (based on logging level). This is what my current code looks like:
Text File for Log Configuration:
#logging.conf
[loggers]
keys=root,MainLogger
[handlers]
keys=consoleHandler
[formatters]
keys=consoleFormatter
[logger_root]
level=DEBUG
handlers=consoleHandler
[logger_MainLogger]
level=DEBUG
handlers=consoleHandler
qualname=MainLogger
propagate=0
[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=consoleFormatter
args=(sys.stdout,)
[formatter_consoleFormatter]
format=%(asctime)s | %(levelname)-8s | %(filename)s-%(funcName)s-%(lineno)04d | %(message)s
External Module to Test Logs:
#test.py
import logging
logger = logging.getLogger(__name__)
def testLog():
logger.debug("Debug Test")
logger.info("Info Test")
logger.warning("Warning Test")
logger.error("Error Test")
Main file:
#__init__.py
import logging
import logging.config
from datetime import datetime
logging.config.fileConfig('logging.conf', disable_existing_loggers = False)
logger = logging.getLogger('MainLogger')
fileHandler = logging.FileHandler('{:%Y-%m-%d}.log'.format(datetime.now()))
formatter = logging.Formatter('%(asctime)s | %(levelname)-8s | %(lineno)04d | %(message)s')
fileHandler.setFormatter(formatter)
logger.addHandler(fileHandler)
if __name__ == "__main__":
import test
logger.debug("Debug Test")
test.testLog()
Currently, all log messages are displayed within the IDLE3 shell when I run __init__.py, and the log file is created. However, within the log file itself the only message being recorded is the "Debug Test" from __init__.py. None of the messages from the test.py module are recorded in the log file.
What is my problem?
test.py grabs a logger object before you configure logging in your __init__.py. Make sure you configure the logging module first, before grabbing any logger instance.
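For illustration, here is a minimal sketch of that ordering, reusing the names from the question (the file name app.py is just a placeholder): load the configuration before any getLogger call, then import the modules that create their own loggers.
#app.py (hypothetical name)
import logging
import logging.config
from datetime import datetime
# 1. Load the handler/formatter configuration first.
logging.config.fileConfig('logging.conf', disable_existing_loggers=False)
# 2. Only now grab logger instances and attach extra handlers.
logger = logging.getLogger('MainLogger')
fileHandler = logging.FileHandler('{:%Y-%m-%d}.log'.format(datetime.now()))
logger.addHandler(fileHandler)
# 3. Finally, import modules that call getLogger() at module level.
import test
logger.debug("Debug Test")
test.testLog()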
I'm trying to add logging to my application using the logging module. The application runs in a Docker container, which led me to log to stdout and stderr so I can see the output in the docker logs. Unfortunately, only my root logger, which writes to a file, is working. I have already searched for this case, but I was unable to find a solution.
For better reference:
config.ini
[loggers]
keys=root, info
[handlers]
keys=debug, info
[formatters]
keys=debug, info, error
[logger_root]
level=DEBUG
handlers=debug
[logger_info]
level=INFO
handlers=info
qualname=docker.info
propagate=0
[handler_debug]
class=FileHandler
level=DEBUG
formatter=debug
args=('.logs', 'a+')
[handler_info]
class=StreamHandler
level=INFO
formatter=info
args=(sys.stdout,)
[formatter_debug]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
[formatter_info]
format=%(levelname)s - %(message)s
main.py
import logging.config
logging.config.fileConfig(fname='logger.ini')
logger = logging.getLogger(__name__)
from gevent import monkey
monkey.patch_all()
from gevent import pywsgi
# some logging here
http_server = pywsgi.WSGIServer(('0.0.0.0', 5000), app, log=logger)
http_server.serve_forever()
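For reference, here is a small sketch of how fileConfig matches logger names, assuming the config shown above has been loaded; it is only meant to illustrate the name matching, not to claim it is the fix:
import logging
import logging.config
logging.config.fileConfig(fname='config.ini')  # or 'logger.ini', whichever name the file actually has
# [logger_info] has qualname=docker.info, so only a logger with exactly that
# name (or a child of it) gets the stdout StreamHandler from handler_info.
stdout_logger = logging.getLogger('docker.info')
stdout_logger.info("goes to stdout via handler_info")
# A logger obtained with logging.getLogger(__name__) in main.py is named
# "__main__"; it is not configured explicitly, so it falls back to the root
# logger and its FileHandler writing to '.logs'.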
I've been learning about logging and got help here earlier on setting up a logger with an external config file.
I've set it up based on the example; however, the messages are only seen on the console and not in a log file (which is not created).
Can you please see what I'm doing wrong?
utilityLogger:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
'''
My app
'''
# ~~~~~ LOGGING SETUP ~~~~~ #
# set up the first logger for the app
import os
import testLogging as vlog
# path to the current script's dir
scriptdir = os.path.dirname(os.path.realpath(__file__))
LOG_CONFIG = '../config/logging.conf'
print scriptdir
def logpath():
'''
Return the path to the main log file; needed by the logging.yml
use this for dynamic output log file paths & names
'''
global scriptdir
return (vlog.logpath(scriptdir = scriptdir, logfile = 'log.txt'))
logger = vlog.log_setup(config_file=LOG_CONFIG, logger_name="app")
logger.debug("App is starting...")
testLogging:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
'''
Functions to set up the app logger
'''
import logging
import logging.config
import os
LOG_CONFIG = '../config/logging.conf'
def logpath(scriptdir, logfile):
'''
Return the path to the main log file; needed by the logging.yml
use this for dynamic output log file paths & names
'''
log_file = os.path.join(scriptdir, logfile)
print log_file
print scriptdir
print logfile
return(logging.FileHandler(log_file))
def log_setup(config_file, logger_name):
'''
Set up the logger for the script
config = path to YAML config file
'''
# Config file relative to this file
logging.config.fileConfig(config_file)
return(logging.getLogger(logger_name))
logging.conf file:
[loggers]
keys=root
[handlers]
keys=consoleHandler
[formatters]
keys=simpleFormatter
[logger_root]
level=DEBUG
handlers=consoleHandler
qualname=app
[logger_app]
level=DEBUG
handlers=consoleHandler
qualname=app
propagate=true
[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=simpleFormatter
args=(sys.stdout,)
[handler_fileHandler]
class=FileHandler
level=DEBUG
formatter=fileFormatter
args=('%(logfilename)s',)
[main]
()=__main__.logpath
level=DEBUG
formatter=simpleFormatter
[formatter_fileFormatter]
format=%(asctime)s (%(name)s:%(funcName)s:%(lineno)d:%(levelname)s) %(message)s # %(module)s:
datefmt="%Y-%m-%d %H:%M:%S"
[formatter_simpleFormatter]
format=%(asctime)s (%(name)s:%(funcName)s:%(lineno)d:%(levelname)s) %(message)s # %(module)s:
datefmt="%Y-%m-%d %H:%M:%S"
Update: the question has already been marked as answered, and I appreciate @zwer's help!
One last thing I'd like to understand: is there a more Pythonic way to provide a logger to a class (while still being able to log in main as well)? With the marked answer I've put together the following, but I'm not sure it's the most elegant solution for logging in both main and classes.
class TestLog(object):
def __init__(self, logger):
self.logger = logger
self.__sub_test = 0
def add_test(self):
self.logger.debug('addition')
a = 1 + 1
self.logger.debug('result {}'.format(a, 1))
def sub_test(self):
self.logger.debug('subtraction')
b = 5 -2
self.logger.debug('result {}'.format(b, 1))
def main():
logger = vlog.log_setup(config_file=LOG_CONFIG, logger_name="app",
log_file=LOG_PATH)
logger.debug("App is starting...")
test1 = TestLog(logger)
print test1.add_test()
print test1.sub_test()
if __name__ == "__main__":
sys.exit(main())
Alright, let's pack it as an answer to avoid comment constraints.
The main issue with your config is that you're not initializing your fileHandler at all. If you want to use it, make sure you add it to the [handlers] section, e.g.:
[handlers]
keys=fileHandler
As for your other error: in your [handler_fileHandler] you define a dynamic argument, logfilename, for the name of the file, so you need to provide it when you load your logging config in Python, e.g.:
logging.config.fileConfig(config_file, defaults={"logfilename": "your_log_filename.log"})
That should do the trick.
UPDATE: As long as you provide a proper file path, the above should work, but you still need to modify your config a little more to enable the file handler in all your loggers. So change your config to:
[loggers]
keys=root
[handlers]
keys=consoleHandler,fileHandler
[formatters]
keys=simpleFormatter,fileFormatter
[logger_root]
level=DEBUG
handlers=consoleHandler,fileHandler
qualname=app
[logger_app]
level=DEBUG
handlers=consoleHandler,fileHandler
qualname=app
propagate=true
[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=simpleFormatter
args=(sys.stdout,)
[handler_fileHandler]
class=FileHandler
level=DEBUG
formatter=fileFormatter
args=('%(logfilename)s',)
[main]
()=__main__.logpath
level=DEBUG
formatter=simpleFormatter
[formatter_fileFormatter]
format=%(asctime)s (%(name)s:%(funcName)s:%(lineno)d:%(levelname)s) %(message)s # %(module)s:
datefmt="%Y-%m-%d %H:%M:%S"
[formatter_simpleFormatter]
format=%(asctime)s (%(name)s:%(funcName)s:%(lineno)d:%(levelname)s) %(message)s # %(module)s:
datefmt="%Y-%m-%d %H:%M:%S"
Also, to make it more flexible, change your testLogging.log_setup() to something like:
def log_setup(config_file, logger_name, log_file):
# Config file relative to this file
logging.config.fileConfig(config_file, defaults={"logfilename": log_file})
return logging.getLogger(logger_name)
And finally, when you're setting it up just invoke it as:
LOG_CONFIG = '../config/logging.conf'
LOG_PATH = r"C:\PycharmProjects\scrap\test.log" # make sure it exists and is accessible!
logger = vlog.log_setup(config_file=LOG_CONFIG, logger_name="app", log_file=LOG_PATH)
logger.debug("App is starting...")
Adjusted for your local paths, it should work as expected. I just tested it on my side and it gives the proper result.
Why does logger.info("print something") produce no output in Python? I have seen similar questions asked before, but none of the solutions work. I do not want to have to use logger.debug or logger.warning just to see text.
Simply calling logger.info should print the text; otherwise, what's the use of it?
logging.conf file as below
[loggers]
keys=root
[handlers]
keys=stream
[formatters]
keys=formatter
[logger_root]
level=INFO
handlers=stream
[handler_stream]
class=StreamHandler
level=INFO
formatter=formatter
args=(sys.stderr,)
[formatter_formatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
Demo code that access logger:
import logging
logger = logging.getLogger()
if __name__ == '__main__':
logger.info("logger")
print("print")
The output is only "print", not the logger line. So logger.info does not work.
By default, the root logger (the one you use when you call logger.info here) is set to a level of WARNING.
You can either do:
logging.basicConfig(level=logging.INFO)
or logging.getLogger().setLevel(logging.INFO)
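For example, a minimal, self-contained sketch of the first option:
import logging
# basicConfig attaches a StreamHandler to the root logger and sets its level.
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s")
logger = logging.getLogger()
logger.info("logger")  # now shows up on stderr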
It seems you do not load your configuration file. You should add this:
logging.config.fileConfig('path_to_logging.conf')
before logger = logging.getLogger()
because right now you are using the default WARNING level.
EDIT: in order to use logging.config, you have to import it too:
import logging.config
So the complete code should be:
import logging
import logging.config
logging.config.fileConfig('path_to_logging.conf')
logger = logging.getLogger()
if __name__ == '__main__':
logger.info("logger")
print("print")
The code above, with the following logging.conf (same as you except I removed the sentry parts):
[loggers]
keys=root
[handlers]
keys=stream
[formatters]
keys=formatter
[logger_root]
level=INFO
handlers=stream
[handler_stream]
class=StreamHandler
level=INFO
formatter=formatter
args=(sys.stderr,)
[formatter_formatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
does work:
$ ./test_script3.py
2016-05-23 15:37:40,437 - root - INFO - logger
print
The Problem:
Given a logging config and a logger that uses that config, I see log messages from the script in which the log handler is configured, but not from the imported module, whose logger should propagate to the root logger to which the same handler is assigned.
Details:
(Using Python 2.7)
I have a module my_mod which instantiates a logger. my_mod has a function my_command which logs some messages using that logger. my_mod exists inside of a library my_lib, so I don't want to configure the logger with any handlers; as recommended, I want to leave the log handling to the developer using my_mod. my_mod looks like:
import logging
LOGGER = logging.getLogger(__name__)
def my_command():
LOGGER.debug("This is a log message from module.py")
print "This is a print statement from module.py"
I also have a python script my_script.py, which uses my_mod.my_command. my_script.py instantiates a logger, and in this case I do have handlers and formatters configured. my_script.py configures handlers and formatters using fileConfig and a config file that lives alongside my_script.py:
import os
import logging
import logging.config
from my_mod.module import my_command
logging.config.fileConfig('{0}/logging.cfg'.format(
os.path.dirname(os.path.realpath(__file__))))
LOGGER = logging.getLogger(__name__)
LOGGER.debug("This is a log message from script.py")
my_command()
From what I can tell, my config file appears to be set up correctly...
[loggers]
keys=root,script
[handlers]
keys=consoleHandler
[formatters]
keys=simpleFormatter
[logger_root]
level=DEBUG
handlers=consoleHandler
[logger_script]
level=DEBUG
handlers=consoleHandler
qualname=script
propagate=0
[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=simpleFormatter
args=(sys.stdout,)
[formatter_simpleFormatter]
format=%(asctime)s [%(levelname)s] %(name)s: %(message)s
datefmt=
...but when I run my_script.py I get only the log line from my_script.py, and not the one from my_mod.my_command. I know that my_command is working, though, because the print statement in my_command after the debug log statement successfully prints to the console:
20:27 $ python script.py
2015-06-15 20:27:54,488 [DEBUG] __main__: This is a log message from script.py
This is a print statement from module.py
What am I doing wrong?
NOTE: The example uses debug, but even when I keep logging.cfg specifying level=DEBUG (I also tried level=NOTSET) for the root logger and call LOGGER.info(message) in my_command, nothing is logged to the console.
A potential problem is that you are importing the module before you set up the logging configuration. That way, the module requests a logger before logging is set up.
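In other words, something along these lines (a sketch based on the code in the question):
import os
import logging
import logging.config
# Configure logging first ...
logging.config.fileConfig('{0}/logging.cfg'.format(
    os.path.dirname(os.path.realpath(__file__))))
LOGGER = logging.getLogger(__name__)
# ... and only then import the module, so its module-level
# logging.getLogger(__name__) call runs against an already-configured system.
from my_mod.module import my_command
LOGGER.debug("This is a log message from script.py")
my_command()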
Looking at fileConfig()'s documentation, the reason subsequent logging to the pre-obtained loggers fails is the default value of its disable_existing_loggers argument:
logging.config.fileConfig(fname, defaults=None, disable_existing_loggers=True)
If you change your code to
logging.config.fileConfig(
'{0}/logging.cfg'.format(os.path.dirname(os.path.realpath(__file__))),
disable_existing_loggers=False
)
the problem should go away.
Note that existing loggers are only disabled when they are not explicitly named in the configuration file. For example:
import logging
import logging.config
lFooBefore = logging.getLogger('foo')
lScriptBefore = logging.getLogger('script')
logging.config.fileConfig('logger.ini')
lFooBefore.debug('Does not log')
lScriptBefore.debug('Does log')
logging.getLogger('foo').debug('Does also not log')
logging.getLogger('bar').debug('Does log')
No idea why the default value for disable_existing_loggers is the way it is ...
I seem to be having some issues while attempting to implement logging in my Python project.
I'm simply attempting to mimic the following configuration:
Python Logging to Multiple Destinations
However instead of doing this inside of code, I'd like to have it in a configuration file.
Below is my config file:
[loggers]
keys=root
[logger_root]
handlers=screen,file
[formatters]
keys=simple,complex
[formatter_simple]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
[formatter_complex]
format=%(asctime)s - %(name)s - %(levelname)s - %(module)s : %(lineno)d - %(message)s
[handlers]
keys=file,screen
[handler_file]
class=handlers.TimedRotatingFileHandler
interval=midnight
backupCount=5
formatter=complex
level=DEBUG
args=('logs/testSuite.log',)
[handler_screen]
class=StreamHandler
formatter=simple
level=INFO
args=(sys.stdout,)
The problem is that my screen output looks like:
2010-12-14 11:39:04,066 - root - WARNING - 3
2010-12-14 11:39:04,066 - root - ERROR - 4
2010-12-14 11:39:04,066 - root - CRITICAL - 5
My file output looks the same as above (although with the extra information included). However, the debug and info levels are not output to either destination.
I am on Python 2.7
Here is my simple example showing failure:
import os
import sys
import logging
import logging.config
sys.path.append(os.path.realpath("shared/"))
sys.path.append(os.path.realpath("tests/"))
class Main(object):
@staticmethod
def main():
logging.config.fileConfig("logging.conf")
logging.debug("1")
logging.info("2")
logging.warn("3")
logging.error("4")
logging.critical("5")
if __name__ == "__main__":
Main.main()
It looks like you've set the levels for your handlers, but not for your logger. The logger's level filters every message before it can reach its handlers, and the default is WARNING and above (as you can see). Setting the root logger's level to NOTSET, or to DEBUG (or whatever is the lowest level you wish to log), should solve your issue.
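For example, the programmatic equivalent of giving the root logger a level in the config would be something like this (a sketch, run after loading the config):
import logging
import logging.config
logging.config.fileConfig("logging.conf")
logging.getLogger().setLevel(logging.DEBUG)  # root logger now passes DEBUG and above to its handlers
logging.debug("1")  # reaches the handlers, which still apply their own levels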
Adding the following line to the root logger took care of my problem:
level=NOTSET
Just add the log level in [logger_root]. It worked.
[logger_root]
level=DEBUG
handlers=screen,file
A simple approach to write to both the terminal and a file would be the following:
import logging
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s [%(levelname)s] %(message)s",
handlers=[
logging.FileHandler("log_file.log"),
logging.StreamHandler()
]
)
logger = logging.getLogger(__name__)
And then use it in your code like this:
logger.info('message')
logger.error('message')
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
import logging.handlers
from logging.config import dictConfig
logger = logging.getLogger(__name__)
DEFAULT_LOGGING = {
'version': 1,
'disable_existing_loggers': False,
}
def configure_logging(logfile_path):
"""
Initialize logging defaults for Project.
:param logfile_path: path to the log file
:type logfile_path: string
This function does:
- Assign the INFO level to the file handler and the DEBUG level to the console handler
"""
dictConfig(DEFAULT_LOGGING)
default_formatter = logging.Formatter(
"[%(asctime)s] [%(levelname)s] [%(name)s] [%(funcName)s():%(lineno)s] [PID:%(process)d TID:%(thread)d] %(message)s",
"%d/%m/%Y %H:%M:%S")
file_handler = logging.handlers.RotatingFileHandler(logfile_path, maxBytes=10485760,backupCount=300, encoding='utf-8')
file_handler.setLevel(logging.INFO)
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.DEBUG)
file_handler.setFormatter(default_formatter)
console_handler.setFormatter(default_formatter)
logging.root.setLevel(logging.DEBUG)
logging.root.addHandler(file_handler)
logging.root.addHandler(console_handler)
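A hypothetical usage, with the log file path being just an example:
if __name__ == "__main__":
    configure_logging("/tmp/project.log")  # path is illustrative
    logger.debug("this is logger information from hello module")
The console output then looks something like: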
[31/10/2015 22:00:33] [DEBUG] [yourmodulename] [yourfunction_name():9] [PID:61314 TID:140735248744448] this is logger infomation from hello module
I think you should set disable_existing_loggers to False.