We are trying to introduce Python logging into our existing Python project.
Since the code already contains a lot of print statements, we decided to redirect all print output to the log file using the statements below.
import logging, sys
import os
from logging.handlers import TimedRotatingFileHandler
formatter = logging.Formatter("%(asctime)s - %(pathname)s - %(funcName)s - %(levelname)s - %(message)s")
log_path = '/logs/server.log'
handler = TimedRotatingFileHandler(filename=log_path, when='midnight')
handler.setFormatter(formatter)
log = logging.getLogger()
log.setLevel(logging.DEBUG)
log.addHandler(handler)
sys.stderr.write = log.error
sys.stdout.write = log.info
However, it prints two lines for each message, with the second one empty:
2020-09-21 09:03:05,978 - utils.py - logger - INFO - <_io.TextIOWrapper name='<stderr>' mode='w' encoding='UTF-8'>
2020-09-21 09:03:05,978 - utils.py - logger - INFO -
2020-09-21 09:03:05,978 - utils.py - logger - INFO - 1600693385 Mon Sep 21 09:03:05 2020 19882 Registering functions
2020-09-21 09:03:05,978 - utils.py - logger - INFO -
It works for the logging functions like info and error; it is only print that causes the issue.
It would be a great help if someone knows the cause of this.
As an additional detail, we are using Gunicorn as the server and Falcon as the REST framework.
The empty records appear because print() calls write() twice, once with the message and once with the trailing newline, and each call becomes its own log record. You should not modify the write functions of stdout and stderr; instead, add a stream handler for info and one for error (one on stdout and the other one on stderr).
You can find an example on SO here: https://stackoverflow.com/a/31459386/14306518
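The linked answer wraps each stream in a small file-like object that forwards complete lines to the logger instead of monkey-patching write. A minimal sketch along those lines, assuming the root logger already has the file handler from the question attached (the class name is illustrative):

import logging
import sys

class StreamToLogger:
    """File-like object that redirects writes to a logger (sketch)."""
    def __init__(self, logger, level):
        self.logger = logger
        self.level = level

    def write(self, message):
        # print() calls write() twice: once with the text and once with
        # the trailing newline, so drop the empty fragments here.
        for line in message.rstrip().splitlines():
            self.logger.log(self.level, line.rstrip())

    def flush(self):
        pass  # the logging handlers flush themselves

log = logging.getLogger()
sys.stdout = StreamToLogger(log, logging.INFO)
sys.stderr = StreamToLogger(log, logging.ERROR)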
I have a simple script that I run as an exe file on Windows. However, when I am developing the script, I run it from the command line and use the logging module to output debug info to a log file. I would like to turn off the generation of the log file for my production code. How would I go about doing that?
This is the logging config I have setup now:
import logging
...
logging.basicConfig(filename='file.log',
                    filemode="w",
                    level=logging.DEBUG,
                    format="%(asctime)s: %(name)s - %(levelname)s - %(message)s",
                    datefmt='%d-%b-%y %H:%M:%S',
                    )
...
...
logging.debug("Debug message")
If you don't mind the generation of an empty log file for production, you can simply increase the threshold of logging to a level above logging.DEBUG, such as logging.INFO, so that messages logged with logging.debug won't get output to the log file:
logging.basicConfig(filename='file.log',  # creates an empty file.log
                    filemode="w",
                    level=logging.INFO,
                    format="%(asctime)s: %(name)s - %(levelname)s - %(message)s",
                    datefmt='%d-%b-%y %H:%M:%S',
                    )
logging.debug("Debug message") # nothing would happen
logging.info("FYI") # logs 'FYI'
If you don't want logging to function at all, an easy approach is to override logging with a Mock object:
import logging
from unittest.mock import Mock
environment = 'production'

if environment == 'production':
    logging = Mock()

...
logging.basicConfig(...)  # nothing would happen
logging.debug(...)        # nothing would happen
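For what it's worth, the standard library also offers logging.disable(), which suppresses all records at or below a given level process-wide; a sketch, reusing the hypothetical environment flag from above:

import logging

environment = 'production'  # hypothetical flag, as in the answer above
logging.basicConfig(filename='file.log', level=logging.DEBUG)

if environment == 'production':
    logging.disable(logging.CRITICAL)  # mutes everything up to and including CRITICAL

logging.debug("Debug message")  # nothing would happen in production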
import logging
#Create and configure logger
logging.basicConfig(filename="newfile.txt", format='%(asctime)s %(message)s',filemode='w')
logging.debug("Harmless debug Message")
logging.info("Just an information")
logging.warning("Its a Warning")
logging.error("Did you try to divide by zero")
logging.critical("Internet is down")
In the console it prints all this information, but it is never written to the file. I would be really thankful to anyone who helps me sort this out. I have been searching the internet since morning and have tried every possibility, but the logs are still not being written to the file.
Create a new logger with the desired stream and file handlers:
import logging, sys

logger = logging.Logger('AmazeballsLogger')

# Stream/console output
sh = logging.StreamHandler(sys.stdout)
sh.setLevel(logging.WARNING)
formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
sh.setFormatter(formatter)
logger.addHandler(sh)

# File output
fh = logging.FileHandler("test.log")
fh.setLevel(logging.DEBUG)
fh.setFormatter(logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s"))
logger.addHandler(fh)
And you're ready to take it for a spin:
print(logger.handlers)
logger.critical("critical")
logger.error("error")
logger.warning("warning")
logger.info("info")
logger.debug("debug")
With the following console output only showing WARNING and higher:
[<StreamHandler stdout (WARNING)>, <FileHandler C:\Users\...\test.log (DEBUG)>]
2020-04-13 17:52:57,729 - CRITICAL - critical
2020-04-13 17:52:57,731 - ERROR - error
2020-04-13 17:52:57,734 - WARNING - warning
While test.log contains all levels:
2020-04-13 17:52:57,729 - AmazeballsLogger - CRITICAL - critical
2020-04-13 17:52:57,731 - AmazeballsLogger - ERROR - error
2020-04-13 17:52:57,734 - AmazeballsLogger - WARNING - warning
2020-04-13 17:52:57,736 - AmazeballsLogger - INFO - info
2020-04-13 17:52:57,736 - AmazeballsLogger - DEBUG - debug
The key is to understand how logging handlers work and to check whether they use the correct levels (as in the code cells above: just print logger.handlers). Furthermore, overwriting the basic configuration is bad practice that can lead to problems when you run multiple loggers in the same Python environment. I recommend this video; it sheds light on stream/file handlers in Python logging.
I got this example from the logging documentation:
import logging
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(message)s',
                    filename='myapp.log',
                    filemode='w')
logging.debug('A debug message')
logging.info('Some information')
logging.warning('A shot across the bows')
and the examples that I have seen on the web all create a .log file. So try changing the file extension from .txt to .log.
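Two other things are worth checking, assuming the script runs somewhere a handler may already exist: basicConfig() silently does nothing if the root logger already has handlers (common in IDEs and notebooks), and the original call sets no level, so the default WARNING threshold drops the debug and info records. On Python 3.8+ you can pass force=True to replace any pre-existing handlers; a sketch:

import logging

logging.basicConfig(filename="newfile.txt",
                    format='%(asctime)s %(message)s',
                    filemode='w',
                    level=logging.DEBUG,  # without this, DEBUG/INFO are dropped
                    force=True)           # Python 3.8+: replaces existing root handlers

logging.debug("Harmless debug Message")  # now reaches the file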
I have a very long-running Python script that calls multiple APIs.
I have also implemented a logger at each step to identify exceptions, but the FileHandler shows the data only on completion of the Python script.
Is there a way to flush the logger at a fixed interval? I could use it for monitoring the status of the job.
import logging
from datetime import datetime

# when executed standalone, the main block is called
if __name__ == '__main__':
    # Create logger for RR api application
    logger = logging.getLogger('RR_API')
    logger.setLevel(logging.INFO)
    # create file handler which logs messages
    # If the handler is already defined, then no need to set it again
    if not logger.handlers:
        fh = logging.FileHandler("C:///Users///2128///python_data_logger.log")
        fh.setLevel(logging.DEBUG)
        # create formatter and add it to the handlers
        formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
        fh.setFormatter(formatter)
        # add the handlers to the logger
        logger.addHandler(fh)
    run_ts = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    logger.info('Starting Time' + '|' + run_ts)
    print('Starting Time', datetime.now().strftime('%Y-%m-%d %H:%M:%S'))
    count = 0
    token = get_temp_token(run_ts)
    portfolio = get_orgs(token, '"https://apidata.ratings.com/v1.0/orgs/"', run_ts)
Output in Log File:
2016-12-30 16:22:01,448 - RR_API - INFO - Starting Time|2016-12-30 16:22:01
2016-12-30 16:22:01,526 - RR_API - CRITICAL - No Access Token was downloaded | 2016-12-30 16:22:01
The above lines become visible in the log file only after the script execution has terminated.
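Note that FileHandler inherits StreamHandler's behaviour of calling flush() after every record, so the lag usually comes from OS-level buffering rather than the handler itself. If each record must be pushed all the way to disk immediately for monitoring, a handler that fsyncs on every emit is a minimal sketch (the class name is illustrative):

import logging
import os

class FsyncFileHandler(logging.FileHandler):
    """FileHandler that forces each record through to disk (sketch)."""
    def emit(self, record):
        super().emit(record)
        self.flush()  # flush Python-level buffers
        if self.stream is not None:
            os.fsync(self.stream.fileno())  # flush OS buffers to disk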
I am trying to get module-level logging via code working for three outputs: file, console, and application-internal (QTextEdit).
I can get all three handlers working with the code below, but the application-internal handler is not logging all events, and the console handler (only) prints each line twice.
I have tried using logging.getLogger(__name__) for the file handler instead of the root logger (no logs generated), the same for the console (works fine, with only one line per log output), and the same for the MyLogHandler (no logs generated). I have tried various combinations of the root logger and the __name__ logger, but I can't get all logs working with the console printing only one line per log event.
def configCodeRootExample_(self):
    logFileName = self.getLogLocation()
    rootLogger = logging.getLogger('')

    # This handler works
    fileLogger = logging.FileHandler(logFileName)
    fileLogger.setLevel(logging.INFO)
    fileFormatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
    fileLogger.setFormatter(fileFormatter)
    rootLogger.addHandler(fileLogger)

    # This handler works but prints output twice
    consoleFormatter = logging.Formatter('%(asctime)s - %(levelname)s - %(name)s - %(module)s - %(funcName)s - %(lineno)d - %(message)s')
    console = logging.StreamHandler()
    console.setLevel(logging.DEBUG)
    console.setFormatter(consoleFormatter)
    rootLogger.addHandler(console)

    # This handler works but only logs a subset of DEBUG events and no INFO events
    myLogHandler = MyLogHandler()
    myLogHandler.setLevel(logging.DEBUG)
    myLogHandler.setFormatter(fileFormatter)
    rootLogger.addHandler(myLogHandler)
Also, for the record, here is the log handler that outputs to a listening QTextEdit:
import logging
from loggerpackage.logsignals import LogSignals

class MyLogHandler(logging.Handler):
    def __init__(self):
        logging.Handler.__init__(self)
        self.logSignals = LogSignals()

    def emit(self, record):
        logMsg = self.format(record)
        self.logSignals.logEventTriggered.emit(logMsg)
If I change the console logger to the module level:
logger = logging.getLogger(__name__)
consoleFormatter = logging.Formatter('%(asctime)s - %(levelname)s - %(name)s - %(module)s - %(funcName)s - %(lineno)d - %(message)s')
console = logging.StreamHandler()
console.setLevel(logging.DEBUG)
console.setFormatter(consoleFormatter)
logger.addHandler(console)
Then only one line is printed for each log event, but the formatting is incorrect; it seems to fall back to some sort of default formatter.
See here for a solution to the duplicate console logging: How do I disable and re-enable console logging in Python?
logger = logging.getLogger()
lhStdout = logger.handlers[0]
... add log handlers
logger.removeHandler(lhStdout)
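An alternative, assuming the duplicates come from each record being handled once by the module-level handler and again by the root logger's handler via propagation, is to stop the module logger from propagating; a sketch:

import logging
import sys

logger = logging.getLogger(__name__)
console = logging.StreamHandler(sys.stdout)
logger.addHandler(console)
logger.propagate = False  # records no longer bubble up to the root logger's handlers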
The issue I was having with the MyLogHandler was that the slot on the QTextEdit wasn't connected in time to receive the first few DEBUG and INFO events.
I want to log messages from different modules in Python to a file, and I also need to print some messages to the console for debugging purposes. I used the logging module for this, but it logs every record of the given severity and above to the file or console.
I want only some messages logged to the file, and they should not include the messages meant for the console.
Similarly, the console messages should not contain the messages logged to the file.
My approach would be to have a singleton class which shares the file write operation between the various modules.
Is there an easier approach than this in Python?
EDIT:
I am new to Python. Here is a sample program I tried:
import logging

logger = logging.getLogger('simple_example')
logger.setLevel(logging.INFO)
# create file handler which logs even debug messages
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.CRITICAL)
# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
fh.setFormatter(formatter)
# add the handlers to logger
logger.addHandler(ch)
logger.addHandler(fh)
# 'application' code
logger.debug('debug message')
logger.info('info message')
logger.warning('warn message')
logger.error('error message')
logger.critical('critical message')
Console prints:
2015-02-03 15:36:00,651 - simple_example - ERROR - error message
2015-02-03 15:36:00,651 - simple_example - CRITICAL - critical message
#I don't want critical messages in console.
Here is a script that creates two loggers; use whichever one you wish to log to a file or to stdout. The question is: on which criterion do you choose between stdout and the file, knowing that (from your question) you don't want the criterion to be the log level (debug, error, critical, ...)?
#!/usr/bin/python
import logging
logger_stdout = logging.getLogger('logger_stdout')
logger_stdout.setLevel(logging.DEBUG)
sh = logging.StreamHandler()
sh.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
logger_stdout.addHandler(sh)
logger_stdout.debug('stdout debug message')
logger_file = logging.getLogger('logger_file')
logger_file.setLevel(logging.DEBUG)
fh = logging.FileHandler("foo.log")
fh.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
logger_file.addHandler(fh)
logger_file.debug('file debug message')
And when I run this script:
D:\jrx\jrxpython
λ python essai.py
2015-02-03 11:12:07,210 - logger_stdout - DEBUG - stdout debug message
D:\jrx\jrxpython
λ cat foo.log
2015-02-03 11:12:07,224 - logger_file - DEBUG - file debug message
D:\jrx\jrxpython
λ
CRITICAL is higher than ERROR. You can verify this yourself:
>>> import logging
>>> print logging.CRITICAL
50
>>> print logging.ERROR
40
>>>
There are two cases in logging:
Logging the same process - you should have several handlers with different logging levels based on how verbose the logs should be. A higher level means less output. That's why DEBUG is the lowest predefined log level - it writes everything for debug purposes.
Logging different processes - you should have several loggers set up, they can be accessed from anywhere in your code using logging.getLogger(name). This gives the same logger every time, so that logger set-up persists through the code and only needs to be executed once.
The first case demonstrates that you can't have an "error but not critical" log, since this is the opposite of how log levels work. You can have a "critical but not error" log, that is, one that is less verbose. This is probably what you want.
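That said, if a handler really must pass ERROR but suppress CRITICAL, the level threshold alone can't express it, but a handler-level Filter can cap records from above; a sketch with an illustrative class name:

import logging

class MaxLevelFilter(logging.Filter):
    """Let through only records at or below max_level (sketch)."""
    def __init__(self, max_level):
        super().__init__()
        self.max_level = max_level

    def filter(self, record):
        return record.levelno <= self.max_level

ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)                   # lower bound: ERROR and above...
ch.addFilter(MaxLevelFilter(logging.ERROR))  # ...but nothing above ERROR

logger = logging.getLogger('simple_example')
logger.addHandler(ch)
logger.error('shown on console')
logger.critical('suppressed on console')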