Python Logging: Not showing on terminal

logging3.py
import sys
import logging
import first
import Logger
root = logging.getLogger()
root.addHandler(Logger.get_streamhandler())
root.warning('we did something in info')
hi = first.Foo()
hi.baz()
first.py
import logging
import Logger
first_logger = logging.getLogger(__name__)
first_logger.parent = False
first_logger.addHandler(Logger.get_streamhandler())
first_logger.info('in first')
class Foo:
    def __init__(self):
        print 'making sure we are in first.Foo'
        self.logger = logging.getLogger('foo')
        self.logger.addHandler(Logger.get_streamhandler())
        self.logger.info('created Foo obj')

    def baz(self):
        self.logger.info('i dont know what baz is')
Logger.py
import logging
import sys


'''
NOTSET means inherit the log level from the parent logger
'''

LEVELS = {'debug':    logging.DEBUG,
          'info':     logging.INFO,
          'warning':  logging.WARNING,
          'error':    logging.ERROR,
          'critical': logging.CRITICAL,
          }


def getLevel(lvl):
    return LEVELS.get(lvl) or logging.DEBUG


def get_streamhandler(lvl=None):
    sh = logging.StreamHandler()
    fmt = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    sh.setFormatter(logging.Formatter(fmt))
    sh.setLevel(getLevel(lvl))
    return sh
OUTPUT:
python logging3.py
2013-10-21 14:18:09,687 - first - INFO - in first
2013-10-21 14:18:09,687 - root - WARNING - we did something in info
making sure we are in first.Foo
Where is the logging info for the Foo object? <---------------
Also, can someone confirm that the logging tree for the above is
root
----first
---------foo
or is it
root
----root.first
--------------root.first.foo

That's intentional. The logger has its own log level (separate ones for console and file in this setup); you can set them with
foo.setConsoleLevel(logging.ERROR)
foo.setFileLevel(logging.INFO)
etc. If you change your log level to logging.INFO (which is 0x14, i.e. 20, in my Python 2.6 session) or below, then you will see the log messages.
Log messages below the current log level are suppressed; only messages at or above the current level are passed through. This means that info messages can go to the file but not to the screen, or that you can lower the level to DEBUG to get additional output while debugging, and so on.
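The same rule explains the missing Foo output above: a handler's level is only a second filter. The 'foo' logger's own level is NOTSET, so it inherits the root logger's default of WARNING, and info() calls are dropped at the logger before any handler sees them. A minimal sketch of this (plain stdlib logging, no custom Logger module):

```python
import logging
import sys

logger = logging.getLogger('foo')
sh = logging.StreamHandler(sys.stdout)
sh.setLevel(logging.DEBUG)          # handler level alone is not enough
logger.addHandler(sh)

# NOTSET on 'foo' means it inherits the root default, WARNING (30)
print(logger.getEffectiveLevel())   # 30
logger.info('dropped')              # filtered at the logger, never reaches the handler

logger.setLevel(logging.INFO)       # lower the logger's own level as well
logger.info('now visible')          # emitted
```

Setting the level on the logger (or on the root logger, so children inherit it) is what actually unlocks the info messages.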


python logger limit string size

Python logger library - Hey, is there a way to limit the string size and return another string if it exceeds the max limit?
I have a maximum log file size set, but that doesn't meet my needs, since the logger sometimes receives a Base64 string to log, which makes the logging in the terminal useless.
example:
export = '<Base64 Long String>'
logger.info(f'result: {export}')
Since the code is a part of a big project, I cannot change it in the function itself, is there a way to set it on the logger level?
Use a custom logging.Formatter
import logging

class NotTooLongStringFormatter(logging.Formatter):
    def __init__(self, max_length=10):
        super().__init__()
        self.max_length = max_length

    def format(self, record):
        if len(record.msg) > self.max_length:
            record.msg = record.msg[:self.max_length] + "..."
        return super().format(record)

LOG = logging.getLogger("mylogger")
h = logging.StreamHandler()
h.setFormatter(NotTooLongStringFormatter(20))
LOG.addHandler(h)
LOG.warning("a" * 10)   # aaaaaaaaaa
LOG.warning("a" * 100)  # aaaaaaaaaaaaaaaaaaaa...
To keep a detailed log with a specific format, just pass the format string to super().__init__:

def __init__(self, max_length=10):
    fmt = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    super().__init__(fmt)
    self.max_length = max_length
2022-01-09 12:29:44,862 - mylogger - WARNING - aaaaaaaaaa
2022-01-09 12:29:44,863 - mylogger - WARNING - aaaaaaaaaaaaaaaaaaaa...
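One caveat with the formatter above: it truncates record.msg, which is the raw format string when the caller uses %-style arguments, e.g. logger.info('result: %s', export). A variant (my own adaptation, not part of the original answer) that truncates the fully rendered message via the stdlib LogRecord.getMessage():

```python
import logging

class TruncatingFormatter(logging.Formatter):
    def __init__(self, max_length=20, fmt=None):
        super().__init__(fmt)
        self.max_length = max_length

    def format(self, record):
        message = record.getMessage()          # renders msg % args
        if len(message) > self.max_length:
            message = message[:self.max_length] + "..."
        record.msg = message                   # base class will now use the
        record.args = None                     # truncated, fully rendered text
        return super().format(record)

log = logging.getLogger("truncdemo")
h = logging.StreamHandler()
h.setFormatter(TruncatingFormatter(20))
log.addHandler(h)
log.warning("result: %s", "A" * 100)   # rendered first, then truncated
```

This handles both pre-formatted strings and deferred %-style arguments the same way.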

Python logger - Redirecting STDOUT to logfile as well as any debug messages

I'm trying to use the logging module in Python so that, when I run my program, I end up with a log file, debug.log, containing:
Every log message (logging.DEBUG, logging.WARNING etc.)
Every time my code prints something to STDOUT
When I run the program, I only want the debug messages to appear in the log file, not to be printed on the terminal.
Based on this answer, here's my example code, test.py:
import logging
import sys
root = logging.getLogger()
root.setLevel(logging.DEBUG)
fh = logging.FileHandler('debug.log')
fh.setLevel(logging.DEBUG)
sh = logging.StreamHandler(sys.stdout)
sh.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
sh.setFormatter(formatter)
fh.setFormatter(formatter)
root.addHandler(sh)
root.addHandler(fh)
x = 4
y = 5
logging.debug("X: %s", x)
logging.debug("Y: %s", y)
print("x is", x)
print("y is", y)
print("x * y =", x*y)
print("x^y =", x**y)
And here's what I would want to be the contents of debug.log:
2021-02-01 12:10:48,263 - root - DEBUG - X: 4
2021-02-01 12:10:48,264 - root - DEBUG - Y: 5
x is 4
y is 5
x * y = 20
x^y = 1024
Instead, the contents of debug.log is just the first two lines:
2021-02-01 12:10:48,263 - root - DEBUG - X: 4
2021-02-01 12:10:48,264 - root - DEBUG - Y: 5
When I run test.py, I get this output:
2021-02-01 12:17:04,201 - root - DEBUG - X: 4
2021-02-01 12:17:04,201 - root - DEBUG - Y: 5
x is 4
y is 5
x * y = 20
x^y = 1024
So I've actually got the opposite results to what I want: the log file excludes STDOUT prints where I want them included, and the program output includes the debug messages where I want them excluded.
How can I fix this, so that running test.py only outputs the lines in print statements, and the resulting debug.log file contains both the DEBUG logs and the print lines?
Well, I can make it work, but I don't yet know whether there are any repercussions due to it. Perhaps others will be able to point out any potential pitfalls, such as multithreading.
You can set sys.stdout to any file-like object that you like. This would include the file to which your logging.FileHandler() is writing. Try this:
fh = logging.FileHandler('debug.log')
fh.setLevel(logging.DEBUG)
old_stdout = sys.stdout # in case you want to restore later
sys.stdout = fh.stream # the file to which fh writes
And you can remove the code that deals with sh that is hooked up to stdout.
If you really want all output to stdout to end up in the log file, see e.g. mhawke's answer, as well as the linked questions and answers.
However, if you are really just interested in the output from your own print() calls, then I would replace all of those by Logger.log() calls, using a custom logging level. That gives you fine-grained control over what happens.
Below, I define a custom log level with a value higher than logging.CRITICAL, so our console output is always printed, even if the logger's level is CRITICAL. See docs.
Here's a minimal implementation, based on the OP's example:
import sys
import logging
# define a custom log level with a value higher than CRITICAL
CUSTOM_LEVEL = 100
# dedicated formatter that just prints the unformatted message
# (actually this is the default behavior, but we make it explicit here)
# See docs: https://docs.python.org/3/library/logging.html#logging.Formatter
console_formatter = logging.Formatter('%(message)s')
# your basic console stream handler
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setFormatter(console_formatter)
# only use this handler for our custom level messages (highest level)
console_handler.setLevel(CUSTOM_LEVEL)
# your basic file formatter and file handler
file_formatter = logging.Formatter(
'%(asctime)s - %(name)s - %(levelname)s - %(message)s')
file_handler = logging.FileHandler('debug.log')
file_handler.setFormatter(file_formatter)
file_handler.setLevel(logging.DEBUG)
# use a module logger instead of the root logger
logger = logging.getLogger(__name__)
# add the handlers
logger.addHandler(console_handler)
logger.addHandler(file_handler)
# include messages with level DEBUG and higher
logger.setLevel(logging.DEBUG)
# NOW, instead of using print(), we use logger.log() with our CUSTOM_LEVEL
x = 4
y = 5
logger.debug(f'X: {x}')
logger.debug(f'Y: {y}')
logger.log(CUSTOM_LEVEL, f'x is {x}')
logger.log(CUSTOM_LEVEL, f'y is {y}')
logger.log(CUSTOM_LEVEL, f'x * y = {x*y}')
logger.log(CUSTOM_LEVEL, f'x^y = {x**y}')
I don't think you want ALL stdout output going to the log file.
What you could do is set the logging level of the console handler to logging.INFO and the logging level of the file handler to logging.DEBUG. Then replace your print statements with calls to logging.info. This way only info messages and above will be output to the console.
Something like this:
import logging
import sys
logger = logging.getLogger(__name__)
console_handler = logging.StreamHandler(sys.stdout)
file_handler = logging.FileHandler("debug.log")
console_handler.setLevel(logging.INFO)
file_handler.setLevel(logging.DEBUG)
console_formatter = logging.Formatter('%(message)s')
file_formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
console_handler.setFormatter(console_formatter)
file_handler.setFormatter(file_formatter)
logger.addHandler(console_handler)
logger.addHandler(file_handler)
logger.setLevel(logging.DEBUG)  # set the module logger's level to DEBUG

if __name__ == "__main__":
    x = 4
    y = 5
    logger.debug("X: %s", x)
    logger.debug("Y: %s", y)
    logger.info("x is {}".format(x))
    logger.info("y is {}".format(y))
    logger.info("x * y = {}".format(x * y))
    logger.info("x^y = {}".format(x ** y))

Duplicate Logs are generated while using Pytest Parametrize fixture in Python 3.7 [duplicate]

This question already has answers here:
Duplicate log output when using Python logging module
(17 answers)
Closed 2 years ago.
I am getting duplicate logs for each iteration in Pytest Parametrize fixture.
Sample File -
import inspect

import pytest

input1 = 1
input2 = 2
input3 = 3

@pytest.mark.parametrize("input", [input1, input2, input3])
def test_numbers(input):
    file_name = inspect.stack()[0][3]
    log = customLogger(file_name=file_name)
    log.info(input)
Logger Code -
def customLogger(file_name, log_level=logging.INFO):
    '''
    Important Logging Information
    CRITICAL = 50
    FATAL = CRITICAL
    ERROR = 40
    WARNING = 30
    WARN = WARNING
    INFO = 20
    DEBUG = 10
    NOTSET = 0
    '''
    # Gets the name of the class / method from where this method is called
    logger_name = inspect.stack()[1][3]
    logger = logging.getLogger(logger_name)
    # By default, log all messages
    logger.setLevel(logging.DEBUG)  # don't change this setting
    file_handler = logging.FileHandler("../logs/" + file_name + ".log", mode='a')
    file_handler.setLevel(log_level)
    formatter = logging.Formatter('%(asctime)s - %(funcName)s - %(levelname)s: %(message)s',
                                  datefmt='%m/%d/%Y %I:%M:%S %p')
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
    return logger
I get a response in my log file like this -
07/07/2020 02:01:52 AM - test_number - INFO: 1
07/07/2020 02:01:52 AM - test_number - INFO: 2
07/07/2020 02:01:52 AM - test_number - INFO: 2
07/07/2020 02:01:52 AM - test_number - INFO: 3
07/07/2020 02:01:52 AM - test_number - INFO: 3
07/07/2020 02:01:52 AM - test_number - INFO: 3
I am at my wits' end resolving this; I am considering writing a log-cleaner method to remove duplicates from the logs. Please let me know if I am missing something here.
Each time you call customLogger, you create and add a new logging handler that will also log the line, hence the multiplication of logs.
There are several ways to ensure only one handler is attached; the easiest is probably just to clear the handlers before adding a new one:
...
file_handler.setFormatter(formatter)
logger.handlers = []
logger.addHandler(file_handler)
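An alternative that avoids wiping handlers added elsewhere is to attach the handler only on the first call. A simplified sketch of the question's customLogger (with a StreamHandler swapped in for the FileHandler so it is self-contained):

```python
import logging

def customLogger(name, log_level=logging.INFO):
    logger = logging.getLogger(name)
    if not logger.handlers:            # attach a handler only on the first call
        handler = logging.StreamHandler()
        handler.setLevel(log_level)
        handler.setFormatter(logging.Formatter(
            '%(asctime)s - %(funcName)s - %(levelname)s: %(message)s'))
        logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)
    return logger

log = customLogger("test_numbers")
log = customLogger("test_numbers")     # second call adds no second handler
```

Because logging.getLogger returns the same logger object for the same name, the guard on logger.handlers makes repeated calls idempotent.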

How should I pass a custom logger from a file to multiple modules and while maintaining sub-module granularity?

I have a logging class that describes a base logger.
import logging
import sys

class logger:
    def __init__(self):
        self.filelocation = 'log/util.log'
        self.loggers = {}

    def init_logger(self, name="util"):
        if self.loggers.get(name):
            return self.loggers.get(name)
        else:
            module_logger = logging.getLogger(name)
            module_logger.setLevel(logging.DEBUG)
            module_logger.propagate = False
            # create file handler which logs even debug messages
            fh = logging.FileHandler(self.filelocation)
            fh.setLevel(logging.DEBUG)
            # create console handler with a higher log level
            ch = logging.StreamHandler(sys.stdout)
            ch.setLevel(logging.WARNING)
            # create formatter and add it to the handlers
            formatter = logging.Formatter('%(module)s : %(levelname)s : %(asctime)s : %(name)s : %(message)s')
            fh.setFormatter(formatter)
            ch.setFormatter(formatter)
            # add the handlers to the logger
            module_logger.addHandler(fh)
            module_logger.addHandler(ch)
            self.loggers[name] = module_logger
            return module_logger
Now I have multiple modules referencing the above said logger. For ex
# mod1.py
logsession = logger().init_logger(name="athena_s3")
dictsession = logger().init_logger(name="dictinfo")

class mod1_class():
    def __init__(self):
        self.var1 = etc

    def build_session(self):
        """
        Build a session using the athena client
        Input: None
        Return: Active session
        """
        if not self._session:
            try:
                self._session = Session(aws_access_key_id=self._aws_access_key_id,
                                        aws_secret_access_key=self._aws_secret_access_key)
                logsession.info("Built new athena session")
Similarly I have another module that could reference the code from the above mod1.py. Now consider a test.py file that imports this mod1.py.
from mod1 import mod1_class
session = mod1_class().build_session()
### Do STUFF
How can I pass the logger from multiple files like test.py to mod1.py so that they maintain the same logger?
So for example:
logs could be
test : INFO : time : athena_s3 : message
test : INFO : time : athena_s3 : athena_util : message
test2: INFO : time : athena_s2 : message
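One stdlib-only way to get this behaviour is dotted logger names: configure handlers once on a top-level logger, and let each module request a child via logging.getLogger, so records propagate up to the shared handlers while keeping per-module names. A sketch (the 'athena' names are illustrative, not from the question's code):

```python
import logging
import sys

# configure handlers once, in the entry point (e.g. test.py)
top = logging.getLogger("athena")
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter('%(name)s : %(levelname)s : %(message)s'))
top.addHandler(handler)
top.setLevel(logging.DEBUG)

# each module just names its own child logger; no logger objects are passed around
s3_log = logging.getLogger("athena.s3")         # e.g. in mod1.py
util_log = logging.getLogger("athena.s3.util")  # sub-module granularity

s3_log.info("Built new athena session")
util_log.debug("dict info")
```

Because child loggers propagate to their dotted-name ancestors, every module writes through the same handlers while the %(name)s field preserves the sub-module granularity the question asks for.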

logger chain in python

I'm writing python package/module and would like the logging messages mention what module/class/function they come from. I.e. if I run this code:
import mymodule.utils.worker as worker
w = worker.Worker()
w.run()
I'd like the logging messages to look like this:
2010-06-07 15:15:29 INFO mymodule.utils.worker.Worker.run <pid/threadid>: Hello from worker
How can I accomplish this?
Thanks.
I tend to use the logging module in my packages/modules like so:
import logging
log = logging.getLogger(__name__)
log.info("Whatever your info message.")
This sets the name of your logger to the name of the module for inclusion in the log message. You can control where the name is by where %(name)s is found in the format string. Similarly you can place the pid with %(process)d and the thread id with %(thread)d. See the docs for all the options.
Formatting example:
import logging
logging.basicConfig(format="%(asctime)s %(levelname)s %(name)s %(process)d/%(threadName)s: %(message)s")
logging.getLogger('this.is.the.module').warning('Testing for SO')
Gives me:
2010-06-07 08:43:10,494 WARNING this.is.the.module 14980/MainThread: Testing for SO
Here is my solution that came out of this discussion. Thanks to everyone for suggestions.
Usage:
>>> import logging
>>> logging.basicConfig(level=logging.DEBUG)
>>> from hierlogger import hierlogger as logger
>>> def main():
... logger().debug("test")
...
>>> main()
DEBUG:main:test
By default it will name the logger as <module>.<class>.<method>. You can also control the depth by providing a parameter:
3 - module.class.method (default)
2 - module.class
1 - module only
Logger instances are also cached to prevent calculating logger name on each call.
I hope someone will enjoy it.
The code:
import logging
import inspect

class NullHandler(logging.Handler):
    def emit(self, record): pass

def hierlogger(level=3):
    callerFrame = inspect.stack()[1]
    caller = callerFrame[0]
    lname = '__heirlogger' + str(level) + '__'
    if lname not in caller.f_locals:
        loggerName = str()
        if level >= 1:
            try:
                loggerName += inspect.getmodule(inspect.stack()[1][0]).__name__
            except: pass
        if 'self' in caller.f_locals and (level >= 2):
            loggerName += ('.' if len(loggerName) > 0 else '') + \
                caller.f_locals['self'].__class__.__name__
        if callerFrame[3] != '<module>' and level >= 3:
            loggerName += ('.' if len(loggerName) > 0 else '') + callerFrame[3]
        caller.f_locals[lname] = logging.getLogger(loggerName)
        caller.f_locals[lname].addHandler(NullHandler())
    return caller.f_locals[lname]
