I set up logging throughout my Python package using a logconfig.ini file:
[loggers]
keys=extracts,root
[formatters]
keys=simple,detailed
[handlers]
keys=file_handler
[formatter_simple]
format=%(module)s - %(levelname)s - %(message)s
datefmt=%Y-%m-%d %H:%M:%S
[formatter_detailed]
format=%(asctime)s %(name)s:%(lineno)s %(levelname)s %(message)s
datefmt=%Y-%m-%d %H:%M:%S
[handler_file_handler]
class=logging.handlers.RotatingFileHandler
level=DEBUG
formatter=detailed
args=('/ebs/logs/foo.log', 'a', 100000000, 3)
[logger_extracts]
level=DEBUG
handlers=file_handler
propagate=1
qualname=extracts
[logger_root]
level=NOTSET
handlers=
But whenever I run my application, I get the following warning message in prompt,
No handlers could be found for logger "__main__"
How can I fix this?
You have to call logging.basicConfig() first:
Logging HOWTO
The call to basicConfig() should come before any calls to debug(),
info() etc. As it’s intended as a one-off simple configuration
facility, only the first call will actually do anything: subsequent
calls are effectively no-ops.
Or call logging.info('Starting logger for...'), which will call logging.basicConfig() automatically. So something like:
import logging
logging.info('Starting logger for...') # or call logging.basicConfig()
LOG = logging.getLogger(__name__)
The module author's reason for this behavior is here
I found my error.
It turns out the root logger is the one used for __main__.
I just need to attach a handler to the root logger, like so:
[logger_root]
level=NOTSET
handlers=file_handler
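For reference, here is a minimal sketch of how the entry-point script can then pick up this config (the config path and the log message are placeholders, not from the original post):
# minimal sketch -- assumes logconfig.ini sits next to the entry-point script
import logging
import logging.config

logging.config.fileConfig('logconfig.ini')

# in the entry-point script, __name__ is "__main__"; that logger has no
# configuration of its own, so records propagate to the root logger,
# which now writes them via file_handler
log = logging.getLogger(__name__)
log.info('application started')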
Why does logger.info("print something") in Python not produce any output? I have seen similar questions asked before, but none of the solutions work. I do not want to have to use logger.debug or logger.warning to see text.
logger.info alone should print the text; otherwise, what's the use of it?
My logging.conf file is as below:
[loggers]
keys=root
[handlers]
keys=stream
[formatters]
keys=formatter
[logger_root]
level=INFO
handlers=stream
[handler_stream]
class=StreamHandler
level=INFO
formatter=formatter
args=(sys.stderr,)
[formatter_formatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
Demo code that accesses the logger:
import logging
logger = logging.getLogger()
if __name__ == '__main__':
    logger.info("logger")
    print("print")
The output is only print, not the logger line. So logger.info does not work.
By default, the root logger (the one you use when you call logger.info) is set to a level of WARNING.
You can either do:
logging.basicConfig(level=logging.INFO)
or logging.getLogger().setLevel(logging.INFO)
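As a minimal sketch (assuming no config file at all):
import logging

# raise the root logger above its default WARNING threshold
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s")

logger = logging.getLogger()
logger.info("logger")   # now emitted to stderr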
It seems you do not load your configuration file. You should add this:
logging.config.fileConfig('path_to_logging.conf')
before logger = logging.getLogger()
because right now you are using the default WARNING level.
EDIT: in order to use logging.config, you have to import it too:
import logging.config
So the complete code should be:
import logging
import logging.config
logging.config.fileConfig('path_to_logging.conf')
logger = logging.getLogger()
if __name__ == '__main__':
    logger.info("logger")
    print("print")
The code above, with the following logging.conf (same as yours except I removed the Sentry parts):
[loggers]
keys=root
[handlers]
keys=stream
[formatters]
keys=formatter
[logger_root]
level=INFO
handlers=stream
[handler_stream]
class=StreamHandler
level=INFO
formatter=formatter
args=(sys.stderr,)
[formatter_formatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
does work:
$ ./test_script3.py
2016-05-23 15:37:40,437 - root - INFO - logger
print
The Problem:
Given a logging config and a logger that employs that config, I see log messages from the script in which the log handler is configured, but not from the root logger, to which the same handler is assigned.
Details:
(Using Python 2.7)
I have a module my_mod which instantiates a logger. my_mod has a function my_command which logs some messages using that logger. my_mod exists inside of a library my_lib, so I don't want to configure the logger with any handlers; as recommended, I want to leave the log handling to the developer using my_mod. my_mod looks like:
import logging
LOGGER = logging.getLogger(__name__)
def my_command():
    LOGGER.debug("This is a log message from module.py")
    print "This is a print statement from module.py"
I also have a python script my_script.py, which uses my_mod.my_command. my_script.py instantiates a logger, and in this case I do have handlers and formatters configured. my_script.py configures handlers and formatters using fileConfig and a config file that lives alongside my_script.py:
import os
import logging
import logging.config
from my_mod.module import my_command
logging.config.fileConfig('{0}/logging.cfg'.format(
    os.path.dirname(os.path.realpath(__file__))))
LOGGER = logging.getLogger(__name__)
LOGGER.debug("This is a log message from script.py")
my_command()
From what I can tell, my config file appears to be set up correctly...
[loggers]
keys=root,script
[handlers]
keys=consoleHandler
[formatters]
keys=simpleFormatter
[logger_root]
level=DEBUG
handlers=consoleHandler
[logger_script]
level=DEBUG
handlers=consoleHandler
qualname=script
propagate=0
[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=simpleFormatter
args=(sys.stdout,)
[formatter_simpleFormatter]
format=%(asctime)s [%(levelname)s] %(name)s: %(message)s
datefmt=
...but when I run my_script.py I get only the log line from my_script.py, and not the one from my_mod.my_command. I know that my_command is working, though, because the print statement in my_command after the debug log statement successfully prints to the console:
20:27 $ python script.py
2015-06-15 20:27:54,488 [DEBUG] __main__: This is a log message from script.py
This is a print statement from module.py
What am I doing wrong?
NOTE: The example uses debug, but even when logging.cfg specifies level=DEBUG (I also tried level=NOTSET) for the root logger and I call LOGGER.info(message) in my_command, nothing is logged to the console.
A potential problem is that you are importing the module before you set up the logger configuration. That way, the module requests a logger before logging is set up.
Looking at fileConfig()'s documentation, the reason logging with the previously obtained loggers fails is the default value of its disable_existing_loggers argument:
logging.config.fileConfig(fname, defaults=None, disable_existing_loggers=True)
If you change your code to
logging.config.fileConfig(
    '{0}/logging.cfg'.format(os.path.dirname(os.path.realpath(__file__))),
    disable_existing_loggers=False
)
the problem should go away.
Note that existing loggers are only disabled when they are not explicitly named in the configuration file. For example:
import logging
import logging.config
lFooBefore = logging.getLogger('foo')
lScriptBefore = logging.getLogger('script')
logging.config.fileConfig('logger.ini')
lFooBefore.debug('Does not log')
lScriptBefore.debug('Does log')
logging.getLogger('foo').debug('Does also not log')
logging.getLogger('bar').debug('Does log')
No idea why the default value for disable_existing_loggers is the way it is ...
I have an odd problem. I am using logging.config to set up my logger to use the SocketHandler. Everything works fine except that 2 of my 10+ modules don't seem to be logging anything. By fine, I mean I see output on my log server for everything except those two. It's odd that it works for some modules but not others. I'm initializing the logger in every module with the following lines:
import logging
import logging.config
logging.config.fileConfig(config.main_log_conf)
logger = logging.getLogger("CAKE")
I thought the problem might have been conflicting logger names, hence the CAKE above, but that didn't help.
Below is the conf file I'm using.
[loggers]
keys=root
[handlers]
keys=socketHandler
[formatters]
keys=simpleFormatter
[logger_root]
handlers=socketHandler
level=DEBUG
[handler_socketHandler]
class=handlers.SocketHandler
level=DEBUG
formatter=simpleFormatter
args=('localhost', handlers.DEFAULT_TCP_LOGGING_PORT)
[formatter_simpleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
datefmt=
Don't call fileConfig() from every module. There should generally be just one call to set up the logging configuration in a program (process), called from the if __name__ == '__main__' clause. Also note that you should (in general) pass disable_existing_loggers=False to fileConfig - the default is True only for backward compatibility reasons. See here for more information.
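A rough sketch of that layout (the module and config file names here are placeholders, not the poster's actual ones):
# cake_module.py -- library code: get a logger, configure nothing
import logging

logger = logging.getLogger(__name__)

def bake():
    logger.debug("baking")


# main.py -- the single place where logging is configured
import logging.config

import cake_module

if __name__ == '__main__':
    logging.config.fileConfig('main_log.conf', disable_existing_loggers=False)
    cake_module.bake()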
I have a logger configured from a file and would like to change my logging level without having to change the .conf file, but instead from code:
import logging.config
logging.config.fileConfig('..\\LoggingConfig\\loggingfile.conf')
logging.StreamHandler.setLevel(logging.info)
logging.debug("Debug")
logging.info("Info")
This should only print the "Info" log line to the screen. I don't know which object to call setLevel() on! logging.StreamHandler.setLevel(logging.info) is just a stab in the dark after 30 minutes of searching...
The loggingfile.conf file;
[loggers]
keys=root
[logger_root]
handlers=screen
level=NOTSET
[formatter_modfunc]
format=%(module)-20s %(funcName)-25s %(levelno)-3s: %(message)s
[handlers]
keys=screen
[handler_screen]
class=StreamHandler
formatter=modfunc
level=DEBUG
args=(sys.stdout,)
qualname=screen
You need to call setLevel on your Logger instance.
LOGGER = logging.getLogger('your.module.file.name')
LOGGER.setLevel(_level)
LOGGER.info('foo')
If you are only using the root logger (via the module-level functions), you can do it like this:
logging.basicConfig(level=_level)
logging.info('foo')
See http://docs.python.org/howto/logging.html
When using logging.config.fileConfig and you want to dynamically change the level for all child loggers at once, you can...
a) set the level on the root logger:
logging.getLogger().setLevel(logging.WARNING)
b) or globally disable all messages up to a given level:
logging.disable(logging.INFO)
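Sketched out (the config path is a placeholder):
import logging
import logging.config

logging.config.fileConfig('loggingfile.conf')

# (a) raise the root logger's threshold; child loggers whose level is
#     NOTSET defer to it, so they are affected too
logging.getLogger().setLevel(logging.WARNING)

# (b) or suppress everything at INFO and below for all loggers,
#     regardless of their individual levels
logging.disable(logging.INFO)

logging.getLogger(__name__).info("suppressed either way")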
I seem to be having some issues while attempting to implement logging in my Python project.
I'm simply attempting to mimic the following configuration:
Python Logging to Multiple Destinations
However, instead of doing this in code, I'd like to have it in a configuration file.
Below is my config file:
[loggers]
keys=root
[logger_root]
handlers=screen,file
[formatters]
keys=simple,complex
[formatter_simple]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
[formatter_complex]
format=%(asctime)s - %(name)s - %(levelname)s - %(module)s : %(lineno)d - %(message)s
[handlers]
keys=file,screen
[handler_file]
class=handlers.TimedRotatingFileHandler
interval=midnight
backupCount=5
formatter=complex
level=DEBUG
args=('logs/testSuite.log',)
[handler_screen]
class=StreamHandler
formatter=simple
level=INFO
args=(sys.stdout,)
The problem is that my screen output looks like:
2010-12-14 11:39:04,066 - root - WARNING - 3
2010-12-14 11:39:04,066 - root - ERROR - 4
2010-12-14 11:39:04,066 - root - CRITICAL - 5
My file output looks the same as above (although with the extra information included). However, the debug and info messages are not output to either destination.
I am on Python 2.7
Here is my simple example showing failure:
import os
import sys
import logging
import logging.config
sys.path.append(os.path.realpath("shared/"))
sys.path.append(os.path.realpath("tests/"))
class Main(object):
    @staticmethod
    def main():
        logging.config.fileConfig("logging.conf")
        logging.debug("1")
        logging.info("2")
        logging.warn("3")
        logging.error("4")
        logging.critical("5")

if __name__ == "__main__":
    Main.main()
It looks like you've set the levels for your handlers, but not your logger. The logger's level filters every message before it can reach its handlers, and the default is WARNING and above (as you can see). Setting the root logger's level to NOTSET, or to DEBUG (or whatever is the lowest level you wish to log), should solve your issue.
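The same point, sketched programmatically (a handler level alone is not enough; the logger's own level gates first):
import logging

logger = logging.getLogger()            # root logger, effective level WARNING
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)         # the handler would accept DEBUG...
logger.addHandler(handler)

logger.debug("dropped")                 # ...but the logger filters it out first

logger.setLevel(logging.DEBUG)
logger.debug("emitted")                 # now it reaches the handler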
Adding the following line to the root logger took care of my problem:
level=NOTSET
Just add the log level in [logger_root]. It works:
[logger_root]
level=DEBUG
handlers=screen,file
A simple approach to write to both the terminal and a file would be the following:
import logging.config
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s",
    handlers=[
        logging.FileHandler("log_file.log"),
        logging.StreamHandler()
    ]
)
logger = logging.getLogger(__name__)
And then use it in your code like this:
logger.info('message')
logger.error('message')
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
import logging.handlers
from logging.config import dictConfig
logger = logging.getLogger(__name__)
DEFAULT_LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
}
def configure_logging(logfile_path):
    """
    Initialize logging defaults for the project.

    :param logfile_path: path of the log file to write to
    :type logfile_path: string

    This function assigns the INFO level to the file handler and the
    DEBUG level to the console handler.
    """
    dictConfig(DEFAULT_LOGGING)

    default_formatter = logging.Formatter(
        "[%(asctime)s] [%(levelname)s] [%(name)s] [%(funcName)s():%(lineno)s] [PID:%(process)d TID:%(thread)d] %(message)s",
        "%d/%m/%Y %H:%M:%S")

    file_handler = logging.handlers.RotatingFileHandler(
        logfile_path, maxBytes=10485760, backupCount=300, encoding='utf-8')
    file_handler.setLevel(logging.INFO)

    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.DEBUG)

    file_handler.setFormatter(default_formatter)
    console_handler.setFormatter(default_formatter)

    logging.root.setLevel(logging.DEBUG)
    logging.root.addHandler(file_handler)
    logging.root.addHandler(console_handler)
Sample console output:
[31/10/2015 22:00:33] [DEBUG] [yourmodulename] [yourfunction_name():9] [PID:61314 TID:140735248744448] this is logger information from hello module
I think you should set disable_existing_loggers to False.
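For example, a sketch of passing it in either configuration style (the file name is a placeholder):
import logging.config

# ini-style configuration
logging.config.fileConfig('logging.conf', disable_existing_loggers=False)

# or dict-based configuration
logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,
})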