Schedule package: Suppress "Running job Every" messages - python

I'm using 'schedule' for a current project:
https://pypi.python.org/pypi/schedule
It's great, but I want to suppress the "Running job Every x seconds" log message that gets emitted every time a scheduled task runs.
Is there any way to achieve this? Below is my current logging.basicConfig. I'm quite new to configuring logging beyond the absolute basics, so the solution may lie more with that:
# Define overall logging settings; these log levels/format go to file
logging.basicConfig(level=variables.settings['log_level_file'],
                    format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                    filename=r'logs\log.log')  # raw string so \l is not treated as an escape
# Set up Handlers and Formatters; these log levels/format go to console
console = logging.StreamHandler()
console.setLevel(variables.settings['log_level_console'])
formatter = logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s')
console.setFormatter(formatter)
logging.getLogger('').addHandler(console)

As Meloman pointed out, you can directly set the individual 'schedule' logger to a higher level than the INFO default:
logging.getLogger('schedule').setLevel(logging.CRITICAL)
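For context, a minimal sketch of where that line fits; the job and interval here are made up for illustration, and only the getLogger('schedule') call is the actual fix:

import logging
import time

import schedule

logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s')

# Silence the library's "Running job Every ..." INFO messages
# while keeping your own application logging intact.
logging.getLogger('schedule').setLevel(logging.CRITICAL)

def job():
    logging.info("doing work")  # still logged

schedule.every(10).seconds.do(job)

while True:
    schedule.run_pending()
    time.sleep(1)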

Related

Prevent Generation of Log File with Python logging

I have a simple script that I run as an exe file on Windows. However, when I am developing the script, I run it from the command line and use the logging module to output debug info to a log file. I would like to turn off the generation of the log file for my production code. How would I go about doing that?
This is the logging config I have setup now:
import logging
...
logging.basicConfig(filename='file.log',
                    filemode="w",
                    level=logging.DEBUG,
                    format="%(asctime)s: %(name)s - %(levelname)s - %(message)s",
                    datefmt='%d-%b-%y %H:%M:%S',
                    )
...
logging.debug("Debug message")
If you don't mind the generation of an empty log file for production, you can simply increase the threshold of logging to a level above logging.DEBUG, such as logging.INFO, so that messages logged with logging.debug won't get output to the log file:
logging.basicConfig(filename='file.log',  # creates an empty file.log
                    filemode="w",
                    level=logging.INFO,
                    format="%(asctime)s: %(name)s - %(levelname)s - %(message)s",
                    datefmt='%d-%b-%y %H:%M:%S',
                    )
logging.debug("Debug message") # nothing would happen
logging.info("FYI") # logs 'FYI'
If you don't want logging to function at all, an easy approach is to override logging with a Mock object:
import logging
from unittest.mock import Mock
environment = 'production'
if environment == 'production':
    logging = Mock()
...
logging.basicConfig(...) # nothing would happen
logging.debug(...) # nothing would happen
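A less invasive alternative, if the goal is simply to silence all logging in production, is the standard library's logging.disable, which suppresses every logging call at or below the given level without replacing the module:

import logging

environment = 'production'
if environment == 'production':
    # Suppress all logging calls at CRITICAL and below, i.e. everything.
    # Note: basicConfig may still create an empty file.log, since the
    # FileHandler opens the file on creation.
    logging.disable(logging.CRITICAL)

logging.basicConfig(filename='file.log', level=logging.DEBUG)
logging.debug("Debug message")  # suppressed in production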

Python log level cannot be set

Here is my code:
logger = logging.getLogger("JarvisAI")
# Create handlers
c_handler = logging.StreamHandler()
f_handler = logging.FileHandler(logname)
c_handler.setLevel(logging.WARNING)
f_handler.setLevel(logging.INFO)
# Create formatters and add them to the handlers
c_format = logging.Formatter('%(name)s - %(levelname)s - %(message)s', "%Y-%m-%d %H:%M:%S")
f_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s', "%Y-%m-%d %H:%M:%S")
c_handler.setFormatter(c_format)
f_handler.setFormatter(f_format)
# Add handlers to the logger
logger.addHandler(c_handler)
logger.addHandler(f_handler)
Running logger.info("Test") does not produce anything in the log file.
However, logger.warning and other higher log levels work fine, both in the console and in the file.
Please help.
The logger itself also has a level, and it needs to be at least as permissive (i.e. as low) as the handlers' levels for the handlers to produce output:
logger.setLevel(logging.INFO)
(or even DEBUG level: the handlers' own levels will keep debugging info from being output anyway) will do that.
This is also shown in the first code block of the Python logging cookbook. Have a read through it.
The reason you get warning and higher-level output is that WARNING is the default logging level.
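Putting it together, a minimal sketch of the fix (the log file name here stands in for the question's logname, which isn't shown):

import logging

logger = logging.getLogger("JarvisAI")
logger.setLevel(logging.INFO)  # without this, the default WARNING blocks INFO

c_handler = logging.StreamHandler()
c_handler.setLevel(logging.WARNING)            # console: WARNING and up
f_handler = logging.FileHandler("jarvis.log")  # hypothetical file name
f_handler.setLevel(logging.INFO)               # file: INFO and up

logger.addHandler(c_handler)
logger.addHandler(f_handler)

logger.info("Test")  # now reaches the file handler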

Python module level logging configuration via code issue

I am trying to get module-level logging via code working for three outputs: file, console and application-internal (QTextEdit).
I can get all three handlers working with the code below, but the application-internal handler is not logging all events, and the console handler (only that one) prints each line twice.
I have tried using
logging.getLogger(__name__)
for the file handler instead of the root logger (no logs generated), the same for the console (works fine, with only one line per log output), and the same for the MyLogHandler (no logs generated). I have tried various combinations of the root logger and the __name__ logger, but I can't get all logs working with the console printing only one line per log event.
def configCodeRootExample_(self):
    logFileName = self.getLogLocation()
    rootLogger = logging.getLogger('')

    # This handler works
    fileLogger = logging.FileHandler(logFileName)
    fileLogger.setLevel(logging.INFO)
    fileFormatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
    fileLogger.setFormatter(fileFormatter)
    rootLogger.addHandler(fileLogger)

    # This handler works but prints output twice
    consoleFormatter = logging.Formatter('%(asctime)s - %(levelname)s - %(name)s - %(module)s - %(funcName)s - %(lineno)d - %(message)s')
    console = logging.StreamHandler()
    console.setLevel(logging.DEBUG)
    console.setFormatter(consoleFormatter)
    rootLogger.addHandler(console)

    # This handler works but only logs a subset of DEBUG events and no INFO events
    myLogHandler = GSLLogHandler()
    myLogHandler.setLevel(logging.DEBUG)
    myLogHandler.setFormatter(fileFormatter)
    rootLogger.addHandler(myLogHandler)
Also, for the record, here is the log handler that outputs to a listening QTextEdit:
import logging
from loggerpackage.logsignals import LogSignals

class MyLogHandler(logging.Handler):
    def __init__(self):
        logging.Handler.__init__(self)
        self.logSignals = LogSignals()

    def emit(self, logMsg):
        logMsg = self.format(logMsg)
        self.logSignals.logEventTriggered.emit(logMsg)
If I change the console logger to the module level:
logger = logging.getLogger(__name__)
consoleFormatter = logging.Formatter('%(asctime)s - %(levelname)s - %(name)s - %(module)s - %(funcName)s - %(lineno)d - %(message)s')
console = logging.StreamHandler()
console.setLevel(logging.DEBUG)
console.setFormatter(consoleFormatter)
logger.addHandler(console)
Then only one line is printed for each log event, but the formatting is incorrect; it seems to fall back to some sort of default formatter.
See here for a solution to the duplicate console logging: How to I disable and re-enable console logging in Python?
logger = logging.getLogger()
lhStdout = logger.handlers[0]
... add log handlers
logger.removeHandler(lhStdout)
The issue I was having with the MyLogHandler was that the slot on the QTextEdit wasn't connected in time to receive the first few DEBUG and INFO events.
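A hedged sketch of a fix for that, assuming PyQt-style signals and that textEdit is an existing QTextEdit: connect the slot before installing the handler, so the first records emitted already have a receiver:

# Connect the QTextEdit slot *before* adding the handler to the root
# logger, so early DEBUG/INFO records are not dropped.
myLogHandler = MyLogHandler()
myLogHandler.logSignals.logEventTriggered.connect(textEdit.append)  # textEdit assumed to exist
myLogHandler.setLevel(logging.DEBUG)
myLogHandler.setFormatter(fileFormatter)
rootLogger.addHandler(myLogHandler)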

How to share a file between modules for logging in python

I want to log messages from different modules in Python to a file, and I also need to print some messages to the console for debugging purposes. I used the logging module for this, but logging writes everything at a given severity and above to the file or console.
I want only some messages logged to the file, and the file should not include the messages meant for the console.
Similarly, the console messages should not contain the messages logged to the file.
My approach would be a singleton class which shares the file write operation between the various modules.
Is there any easier approach than this in Python?
EDIT:
I am new to Python. Here is a sample program I tried:
logger = logging.getLogger('simple_example')
logger.setLevel(logging.INFO)
# create file handler which logs only critical messages
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.CRITICAL)
# create console handler with a lower log level
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
fh.setFormatter(formatter)
# add the handlers to logger
logger.addHandler(ch)
logger.addHandler(fh)
# 'application' code
logger.debug('debug message')
logger.info('info message')
logger.warning('warn message')
logger.error('error message')
logger.critical('critical message')
Console prints :
2015-02-03 15:36:00,651 - simple_example - ERROR - error message
2015-02-03 15:36:00,651 - simple_example - CRITICAL - critical message
# I don't want critical messages in the console.
Here is a script that creates two loggers; use the one you wish to log to a file or to stdout. The question is: on which criterion do you choose to log to stdout or to file, knowing that (from your question) you don't want the criterion to be the log level (debug, error, critical...)?
#!/usr/bin/python
import logging
logger_stdout = logging.getLogger('logger_stdout')
logger_stdout.setLevel(logging.DEBUG)
sh = logging.StreamHandler()
sh.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
logger_stdout.addHandler(sh)
logger_stdout.debug('stdout debug message')
logger_file = logging.getLogger('logger_file')
logger_file.setLevel(logging.DEBUG)
fh = logging.FileHandler("foo.log")
fh.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
logger_file.addHandler(fh)
logger_file.debug('file debug message')
And when I run this script:
D:\jrx\jrxpython
λ python essai.py
2015-02-03 11:12:07,210 - logger_stdout - DEBUG - stdout debug message
D:\jrx\jrxpython
λ cat foo.log
2015-02-03 11:12:07,224 - logger_file - DEBUG - file debug message
D:\jrx\jrxpython
λ
CRITICAL is higher than ERROR:
You can also verify yourself:
>>> import logging
>>> print(logging.CRITICAL)
50
>>> print(logging.ERROR)
40
>>>
There are two cases in logging:
Logging the same process - you should have several handlers with different logging levels based on how verbose the logs should be. A higher level means less output. That's why DEBUG is the lowest predefined log level - it writes everything for debug purposes.
Logging different processes - you should have several loggers set up, they can be accessed from anywhere in your code using logging.getLogger(name). This gives the same logger every time, so that logger set-up persists through the code and only needs to be executed once.
The first case demonstrates that you can't have an "error but not critical" log, since that is the opposite of how log levels work. You can have a "critical but not error" log that is less verbose; this is probably what you want.
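If the routing criterion genuinely isn't the level, a logging.Filter on each handler is one standard way to route individual records. A sketch; the dest attribute passed via extra is an invented convention for this example, not part of the logging API:

import logging

class DestFilter(logging.Filter):
    """Pass only records tagged for this destination (or untagged records)."""
    def __init__(self, dest):
        super().__init__()
        self.dest = dest

    def filter(self, record):
        return getattr(record, 'dest', self.dest) == self.dest

logger = logging.getLogger('routed')
logger.setLevel(logging.DEBUG)

ch = logging.StreamHandler()
ch.addFilter(DestFilter('console'))
fh = logging.FileHandler('spam.log')
fh.addFilter(DestFilter('file'))
logger.addHandler(ch)
logger.addHandler(fh)

logger.info('console only', extra={'dest': 'console'})  # printed, not written
logger.info('file only', extra={'dest': 'file'})        # written, not printed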

python logging module is not writing anything to file

I'm trying to write a server that logs exceptions both to the console and to a file. I pulled some code off the cookbook. Here it is:
logger = logging.getLogger('server_logger')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('server.log')
fh.setLevel(logging.DEBUG)
# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
ch.setFormatter(formatter)
fh.setFormatter(formatter)
# add the handlers to logger
logger.addHandler(ch)
logger.addHandler(fh)
This code logs perfectly fine to the console, but nothing is logged to the file. The file is created, but nothing is ever written to it. I've tried closing the handler, but that doesn't do anything. Neither does flushing it. I searched the Internet, but apparently I'm the only one with this problem. Does anybody have any idea what the problem is? Thanks for your answers.
Try calling
logger.error('This should go to both console and file')
instead of
logging.error('this will go to the default logger which you have not changed the config of')
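A quick way to see the difference between the two (a sketch; the handler setup stands in for the code above):

import logging

logger = logging.getLogger('server_logger')  # the logger you configured
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.FileHandler('server.log'))

logger.error('goes to server.log')             # uses your handlers
logging.error('goes to the root logger only')  # falls back to stderr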
Try putting the import and the basicConfig at the very beginning of the script. Something like this:
import logging
logging.basicConfig(filename='log.log', level=logging.INFO)
.
.
import ...
import ...
Put this
for handler in logging.root.handlers[:]:
    logging.root.removeHandler(handler)
in front of the
logging.basicConfig(...)
See also
Logging module not writing to file
I know this question might be a bit old, but I found the above method a bit of overkill. I ran into a similar issue and was able to solve it with:
import logging
logging.basicConfig(format='%(asctime)s %(message)s',
                    datefmt='%m/%d/%Y %I:%M:%S %p',
                    filename='example.log',
                    level=logging.DEBUG)
This will write to example.log all logs that are of level debug or higher.
logging.debug("This is a debug message") will write This is a debug message to example.log. Level is important for this to work.
In order to write both to the terminal and to a file, you can do as below:
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s",
    handlers=[
        logging.FileHandler("log_file.log"),
        logging.StreamHandler()
    ]
)
logger = logging.getLogger(__name__)
usage in the code:
logger.info('message')
logger.error('message')
If root.handlers is not empty, the log file will not be created: basicConfig() does nothing when the root logger already has handlers. We should empty root.handlers before calling basicConfig(). (source)
Snippet:
for handler in logging.root.handlers[:]:
    logging.root.removeHandler(handler)
The full code is below:
import logging

## logging
for handler in logging.root.handlers[:]:
    logging.root.removeHandler(handler)

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(message)s',
                    datefmt='%a, %d %b %Y %H:%M:%S',
                    filename='log.txt',
                    filemode='w')
console = logging.StreamHandler()
console.setLevel(logging.INFO)
# add the handler to the root logger
logging.getLogger().addHandler(console)

logging.info("\nParameters:")
for i in range(10):
    logging.info(i)
logging.info("end!")
