I have a simple script that I run as an exe file on Windows. However, when I am developing the script, I run it from the command line and use the logging module to output debug info to a log file. I would like to turn off the generation of the log file for my production code. How would I go about doing that?
This is the logging config I have setup now:
import logging
...
logging.basicConfig(filename='file.log',
                    filemode="w",
                    level=logging.DEBUG,
                    format="%(asctime)s: %(name)s - %(levelname)s - %(message)s",
                    datefmt='%d-%b-%y %H:%M:%S',
                    )
...
logging.debug("Debug message")
If you don't mind an empty log file being created in production, you can simply raise the logging threshold to a level above logging.DEBUG, such as logging.INFO, so that messages logged with logging.debug won't be written to the log file:
logging.basicConfig(filename='file.log',  # creates an empty file.log
                    filemode="w",
                    level=logging.INFO,
                    format="%(asctime)s: %(name)s - %(levelname)s - %(message)s",
                    datefmt='%d-%b-%y %H:%M:%S',
                    )
logging.debug("Debug message") # nothing would happen
logging.info("FYI") # logs 'FYI'
If you don't want logging to function at all, an easy approach is to override logging with a Mock object:
import logging
from unittest.mock import Mock
environment = 'production'
if environment == 'production':
    logging = Mock()
...
logging.basicConfig(...) # nothing would happen
logging.debug(...) # nothing would happen
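If the goal is simply that production runs never create file.log, here is another minimal sketch (the APP_ENV variable is just an assumed way of detecting production; adapt it to however your script distinguishes the two modes) that only configures the file logger outside production:

import logging
import os

# Hypothetical flag; replace with however the script knows it is running in production.
is_production = os.environ.get('APP_ENV') == 'production'

if not is_production:
    # Configure the file logger only for development runs; in production no
    # handler is configured, so no file.log is created and DEBUG messages are not emitted.
    logging.basicConfig(filename='file.log',
                        filemode="w",
                        level=logging.DEBUG,
                        format="%(asctime)s: %(name)s - %(levelname)s - %(message)s",
                        datefmt='%d-%b-%y %H:%M:%S')

logging.debug("Debug message")  # written to file.log only in development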
Related
I have a bootstrap script for a Raspberry Pi that runs in python. I am looking to create a logger that logs to a file as well as to the console.
I was going to do something like this:
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(threadName)-12.12s] [%(levelname)-5.5s] %(message)s",
    handlers=[
        logging.FileHandler("{0}/{1}.log".format(logPath, fileName)),
        logging.StreamHandler()
    ])
But what I would really like is to log INFO to the StreamHandler and DEBUG to the FileHandler... I cannot seem to figure that out.
Can anyone help me out?
Using Python 3.7.5
You could build the logger yourself (either through a config file or in pure Python).
The tricky thing that I have wasted several hours on is forgetting to set the log level on the logger as well as on each of the handlers. Make sure the logger is at least as permissive as its most permissive handler.
Example script:
# emits the info line to the console and
# both the info & debug lines to the log file
# test_pylog.py
import logging
log_format = logging.Formatter(
    '%(asctime)s %(threadName)s %(levelname)s %(message)s'
)
logger = logging.getLogger(__name__)
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)
console_handler.setFormatter(log_format)
logger.addHandler(console_handler)
file_handler = logging.FileHandler('logfile.txt')
file_handler.setLevel(logging.DEBUG)
file_handler.setFormatter(log_format)
logger.addHandler(file_handler)
logger.setLevel(logging.DEBUG)
if __name__ == '__main__':
    logger.debug('Panic! at the disco')
    logger.info('Weezer')
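For the config-driven route mentioned above, a rough equivalent sketch using logging.config.dictConfig (the handler names, formatter name and file name are just assumptions mirroring the script) could look like this; it configures the root logger, so records from getLogger(__name__) propagate to both handlers:

import logging
import logging.config

LOGGING_CONFIG = {
    'version': 1,
    'formatters': {
        'default': {
            'format': '%(asctime)s %(threadName)s %(levelname)s %(message)s',
        },
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'INFO',
            'formatter': 'default',
        },
        'file': {
            'class': 'logging.FileHandler',
            'level': 'DEBUG',
            'formatter': 'default',
            'filename': 'logfile.txt',
        },
    },
    'root': {
        # Keep the logger at least as permissive as its most permissive handler.
        'level': 'DEBUG',
        'handlers': ['console', 'file'],
    },
}

logging.config.dictConfig(LOGGING_CONFIG)
logger = logging.getLogger(__name__)
logger.debug('file only')
logger.info('console and file')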
In Python, with
import logging
logging.basicConfig(filename="logname",
filemode='a',
format='%(asctime)s,%(msecs)03d %(name)s %(levelname)s %(message)s',
datefmt='%D %H:%M:%S',
level=logging.DEBUG)
logging.getLogger().addHandler(logging.StreamHandler())
logging.info("=================================================")
logging.info("starting execution")
I am able to log nicely formatted messages to the log file:
03/30/18 12:52:08,231 root INFO =================================================
03/30/18 12:52:08,232 root INFO starting execution
Unfortunately, for the console the formatting is not obeyed:
Connected to pydev debugger (build 173.4674.37)
=================================================
starting execution
What do I have to write to make the formatting also possible for the console output?
This example from the Python logging cookbook did the trick: https://docs.python.org/2/howto/logging-cookbook.html#logging-to-multiple-destinations
import logging
logging.basicConfig(filename="logname",
filemode='a',
format='%(asctime)s,%(msecs)03d %(name)s %(levelname)s %(message)s',
datefmt='%D %H:%M:%S',
level=logging.DEBUG)
console = logging.StreamHandler()
console.setLevel(logging.DEBUG)
# set a format which is simpler for console use
formatter = logging.Formatter('%(asctime)s,%(msecs)03d %(name)s %(levelname)s %(message)s')
# tell the handler to use this format
console.setFormatter(formatter)
# add the handler to the root logger
logging.getLogger('').addHandler(console)
logging.info("=================================================")
logging.info("starting execution")
Gives the following console output
2018-03-30 19:15:00,940,940 root INFO =================================================
2018-03-30 19:15:07,768,768 root INFO starting execution
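As a side note, the doubled milliseconds in that output (940,940) come from combining %(asctime)s, whose default format already ends in milliseconds, with an explicit ,%(msecs)03d. A small sketch of a tweak, assuming the console handler from the snippet above: pass the same datefmt to its Formatter so the console lines match the file lines:

# Give the console formatter the same datefmt as the file formatter,
# so %(asctime)s no longer appends its own milliseconds.
formatter = logging.Formatter(
    '%(asctime)s,%(msecs)03d %(name)s %(levelname)s %(message)s',
    datefmt='%D %H:%M:%S')
console.setFormatter(formatter)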
I'm using 'schedule' for a current project:
https://pypi.python.org/pypi/schedule
It's great, but I want to suppress the "Running job Every x seconds" log message that gets triggered every time a scheduled task is run.
Is there any way to achieve this? Below is my current logging.basicConfig; I'm quite new to configuring logging beyond the absolute basics, so the solution may lie more with that:
# Define overall logging settings; these log levels/format go to file
logging.basicConfig(level=variables.settings['log_level_file'],
                    format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                    filename=r'logs\log.log')

# Set up Handlers and Formatters; these log levels/format go to console
console = logging.StreamHandler()
console.setLevel(variables.settings['log_level_console'])
formatter = logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s')
console.setFormatter(formatter)
logging.getLogger('').addHandler(console)
As Meloman pointed out, you can directly set the individual 'schedule' logger to a higher level than the INFO default:
logging.getLogger('schedule').setLevel(logging.CRITICAL)
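In context, that one line can sit right after the handler setup from the question (a sketch; any level above INFO, e.g. logging.WARNING, would also silence those messages):

import logging
import schedule

# ... existing basicConfig and console handler setup from the question ...

# Raise the 'schedule' logger above INFO so its "Running job ..." messages are dropped,
# while your own loggers are unaffected.
logging.getLogger('schedule').setLevel(logging.CRITICAL)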
I am trying to get module-level logging via code working for three outputs: file, console and application-internal (QTextEdit).
I can get all three loggers working with the code below but the application internal logger is not logging all events and the console logger (only) prints each line twice.
I have tried using
logging.getLogger(__name__)
for the file logger instead of the root logger (no logs generated), the same for the console (works fine, with only one line per log output) and the same for the MyLogHandler (no logs generated). I have tried various combinations of the root logger and the __name__ logger, but I can't get all logs working with the console printing only one line per log event.
def configCodeRootExample_(self):
    logFileName = self.getLogLocation()
    rootLogger = logging.getLogger('')

    # This handler works
    fileLogger = logging.FileHandler(logFileName)
    fileLogger.setLevel(logging.INFO)
    fileFormatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
    fileLogger.setFormatter(fileFormatter)
    rootLogger.addHandler(fileLogger)

    # This handler works but prints output twice
    consoleFormatter = logging.Formatter('%(asctime)s - %(levelname)s - %(name)s - %(module)s - %(funcName)s - %(lineno)d - %(message)s')
    console = logging.StreamHandler()
    console.setLevel(logging.DEBUG)
    console.setFormatter(consoleFormatter)
    rootLogger.addHandler(console)

    # This handler works but only logs a subset of DEBUG events and no INFO events
    myLogHandler = MyLogHandler()
    myLogHandler.setLevel(logging.DEBUG)
    myLogHandler.setFormatter(fileFormatter)
    rootLogger.addHandler(myLogHandler)
Also, for the record, here is the log handler that outputs to a listening QTextEdit:
import logging
from loggerpackage.logsignals import LogSignals
class MyLogHandler(logging.Handler):
    def __init__(self):
        logging.Handler.__init__(self)
        self.logSignals = LogSignals()

    def emit(self, logMsg):
        logMsg = self.format(logMsg)
        self.logSignals.logEventTriggered.emit(logMsg)
If I change the console logger to the module level:
logger = logging.getLogger(__name__)
consoleFormatter = logging.Formatter('%(asctime)s - %(levelname)s - %(name)s - %(module)s - %(funcName)s - %(lineno)d - %(message)s')
console = logging.StreamHandler()
console.setLevel(logging.DEBUG)
console.setFormatter(consoleFormatter)
logger.addHandler(console)
Then only one line is printed for each log event, but the formatting is incorrect; it seems to be some sort of default formatter.
See here for a solution to the duplicate console logging: How do I disable and re-enable console logging in Python?
logger = logging.getLogger()
lhStdout = logger.handlers[0]

# ... add log handlers ...

logger.removeHandler(lhStdout)
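A slightly fuller sketch of that pattern (handler names assumed; it presumes some handler, e.g. from an earlier basicConfig call or the IDE, is already attached and causing the duplicates):

import logging

rootLogger = logging.getLogger()

# The handler that was attached first and is producing the duplicate console lines.
lhStdout = rootLogger.handlers[0]

# Attach the handlers you actually want (console, file, QTextEdit handler, ...).
console = logging.StreamHandler()
console.setLevel(logging.DEBUG)
rootLogger.addHandler(console)

# Finally drop the original stdout handler so each record is emitted only once.
rootLogger.removeHandler(lhStdout)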
The issue I was having with the MyLogHandler was that the slot on the QTextEdit wasn't connected in time to receive the first few DEBUG and INFO events.
I'm trying to write a server that logs exceptions both to the console and to a file. I pulled some code off the cookbook. Here it is:
import logging

logger = logging.getLogger('server_logger')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('server.log')
fh.setLevel(logging.DEBUG)
# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
ch.setFormatter(formatter)
fh.setFormatter(formatter)
# add the handlers to logger
logger.addHandler(ch)
logger.addHandler(fh)
This code logs perfectly fine to the console, but nothing is logged to the file. The file is created, but nothing is ever written to it. I've tried closing the handler, but that doesn't do anything. Neither does flushing it. I searched the Internet, but apparently I'm the only one with this problem. Does anybody have any idea what the problem is? Thanks for your answers.
Try calling
logger.error('This should go to both console and file')
instead of
logging.error('this will go to the default logger which you have not changed the config of')
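In other words, the handlers in the question were added to the 'server_logger' logger, while the module-level logging.* functions log through the root logger, whose configuration was never changed. A minimal sketch of the difference (assuming the setup from the question has already run):

import logging

logger = logging.getLogger('server_logger')

# Goes through the root logger, which has no file handler attached,
# so it shows up on the console but never reaches server.log.
logging.error('root logger message')

# Goes through 'server_logger', so it hits both the console handler and server.log.
logger.error('server_logger message')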
Try to put the import and the basicConfig at the very beginning of the script. Something like this:
import logging
logging.basicConfig(filename='log.log', level=logging.INFO)
.
.
import ...
import ...
Put this
for handler in logging.root.handlers[:]:
logging.root.removeHandler(handler)
in front of the
logging.basicConfig(...)
see also
Logging module not writing to file
I know that this question might be a bit too old, but I found the above method a bit of overkill. I ran into a similar issue and was able to solve it by:
import logging
logging.basicConfig(format='%(asctime)s %(message)s',
                    datefmt='%m/%d/%Y %I:%M:%S %p',
                    filename='example.log',
                    level=logging.DEBUG)
This will write to example.log all logs that are of level debug or higher.
logging.debug("This is a debug message") will write This is a debug message to example.log. Level is important for this to work.
In order to write both to the terminal and to a file, you can do as below:
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s",
    handlers=[
        logging.FileHandler("log_file.log"),
        logging.StreamHandler()
    ]
)
logger = logging.getLogger(__name__)
usage in the code:
logger.info('message')
logger.error('message')
If root.handlers is not empty, the log file will not be created. We should empty root.handlers before calling basicConfig().
Snippet:
for handler in logging.root.handlers[:]:
    logging.root.removeHandler(handler)
The full code is below:
import logging
# logging
for handler in logging.root.handlers[:]:
    logging.root.removeHandler(handler)

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(message)s',
                    datefmt='%a, %d %b %Y %H:%M:%S',
                    filename='log.txt',
                    filemode='w')
console = logging.StreamHandler()
console.setLevel(logging.INFO)
# add the handler to the root logger
logging.getLogger().addHandler(console)
logging.info("\nParameters:")
for i in range(10):
    logging.info(i)
logging.info("end!")