How to turn sqlalchemy logging off completely - python

SQLAlchemy keeps logging to the console even though I have the following code:
import logging
logger = logging.getLogger()
logger.disabled = True
How do I turn off SQLAlchemy's logging completely?

Did you pass echo=True to create_engine()? With echo=True, SQLAlchemy creates a StreamHandler that outputs to the console. As the documentation says, if you don't pass echo=True anywhere and don't configure the root sqlalchemy logger, it will not log anything.
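For illustration, a minimal sketch (the in-memory SQLite URL is just a placeholder):
from sqlalchemy import create_engine

# Omitting echo (or passing echo=False explicitly) keeps SQLAlchemy
# from attaching its own StreamHandler that writes to the console.
engine = create_engine('sqlite:///:memory:', echo=False)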

You can turn off the sqlalchemy logger using:
import logging
logging.basicConfig()
logging.getLogger('sqlalchemy').setLevel(logging.ERROR)
For more info, see the docs.

A more drastic solution:
import logging
logging.disable(logging.WARNING)
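Note that logging.disable(logging.WARNING) mutes every record at WARNING level and below for the whole process, not just SQLAlchemy's. A small sketch of the effect:
import logging

logging.basicConfig()
logging.disable(logging.WARNING)  # mute everything at WARNING and below, globally
logging.getLogger('sqlalchemy').warning('suppressed')   # not emitted
logging.getLogger('sqlalchemy').error('still emitted')  # ERROR is above the cutoff
logging.disable(logging.NOTSET)   # undo the global mute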

Related

Python logging with multiple module imports

I'm trying to establish logging in all the modules I'm using. My project structure is:
# driver.py
import logging

logger = logging.getLogger(__name__)

class driver:
    ....

# driver_wrapper.py
import logging

from driver import driver

device = driver(...)

def driver_func():
    logging.info("...")
    ....

# main.py
import logging
import sys

import driver_wrapper

logging.basicConfig(stream=sys.stdout, level=logging.WARNING)
driver_wrapper.driver_func()
My problem now is that I still get INFO level messages, and the output shows 'INFO:root'. I would expect the module name instead of root.
Is there a way to set the logging level in main.py for all modules, or is my approach already correct? There are a lot of posts about this problem, but the solutions don't seem to work for me.
All your modules that use logging should have the logger = logging.getLogger(__name__) line, and thereafter you should always log to e.g. logger.info(...), and never call e.g. logging.info(...). The latter is equivalent to logging to the root logger, not the module's logger. That "all your modules" includes driver_wrapper.py in your example.
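A minimal sketch of the fix applied to driver_wrapper.py (module and function names taken from the question):
# driver_wrapper.py
import logging

from driver import driver

logger = logging.getLogger(__name__)  # logger named 'driver_wrapper'
device = driver(...)

def driver_func():
    logger.info("...")  # the module name, not root, now appears in the output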

Tensorflow suppresses logging messages bug

TensorFlow suppresses logging messages; they do not appear when I run the code.
I have tried the following, but couldn't find a way to make my code work:
import logging
import tensorflow as tf

logger = tf.get_logger()
logger.setLevel(logging.ERROR)
and
import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # must be set before tensorflow is imported
import tensorflow as tf
So my code is the following:
import logging
import tensorflow as tf
logging.basicConfig(filename='example.log', level=logging.DEBUG)
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
I expected the debug messages to end up in my file example.log, but nothing appeared inside example.log.
When I import tensorflow the messages don't appear, and when I don't, they do.
I need to use both tensorflow and logging because I am working with existing code. Is there a way for my logging configuration to take precedence over TensorFlow?
Two facts:
logging.basicConfig will do nothing if the root logger is already configured:
This function does nothing if the root logger already has handlers configured for it.
tensorflow has the absl-py dependency, which tries to initialize logging when imported by appending its own handler to the root logger:
# The absl handler will always be attached to root, not the absl logger.
if not logging.root.handlers:
  # Attach the absl handler at import time when there are no other handlers.
  # Otherwise it means users have explicitly configured logging, and the absl
  # handler will only be attached later in app.run(). For App Engine apps,
  # the absl handler is not used.
  logging.root.addHandler(_absl_handler)
Not sure why the handler is attached to the root logger instead of the absl logger, though - might be a bug or a workaround for some other issue.
So the problem is that importing tensorflow triggers import absl.logging, which configures the root logger early. Your subsequent call to logging.basicConfig will hence do nothing. To fix that, you need to configure logging before importing tensorflow:
import logging
logging.basicConfig(filename='example.log', level=logging.DEBUG)
import tensorflow as tf
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
Rule of thumb: always call your logging configuration as early as possible.
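If reordering the imports is not an option, Python 3.8 and later also accept basicConfig(force=True), which removes any handlers already attached to the root logger before applying the new configuration; a minimal sketch:
import logging
import tensorflow as tf  # absl attaches its handler to the root logger here

# force=True (Python 3.8+) strips the existing root handlers first,
# so this call takes effect even after the tensorflow import
logging.basicConfig(filename='example.log', level=logging.DEBUG, force=True)
logging.debug('This message should go to the log file')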
Writing logs in the default format to a file
If you just want to write the default logs to a file, the abseil logger can also do that:
from absl import logging as absl_logging

absl_logging.get_absl_handler().use_absl_log_file(
    program_name='mytool',
    log_dir='/var/logs/'
)
Besides the method offered by @hoefling, you can also clear the handlers of the root logger just before your own logging configuration:
import logging

logging.getLogger().handlers = []  # drop the handler absl attached at import time
# ...
logging.basicConfig(level=level, handlers=handlers)  # level and handlers are your own settings
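Applied to the question's code, a self-contained sketch of this handler-clearing approach:
import logging
import tensorflow as tf  # importing tensorflow attaches absl's handler to the root logger

logging.getLogger().handlers = []  # drop absl's handler again
logging.basicConfig(filename='example.log', level=logging.DEBUG)
logging.debug('This message should go to the log file')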

How to enable logging for requests_oauthlib?

I'm trying to enable logging to stdout for requests_oauthlib. The example in the docs suggests this:
# Uncomment for detailed oauthlib logs
#import logging
#import sys
#log = logging.getLogger('oauthlib')
#log.addHandler(logging.StreamHandler(sys.stdout))
#log.setLevel(logging.DEBUG)
But it doesn't seem to have any effect. What's the proper way to do it?
The logger name should be requests_oauthlib, i.e. the package name. The modules in the package define loggers like this:
logger = logging.getLogger(__name__)
so configuring the package's top-level logger as described in the example should work:
import logging
import sys
log = logging.getLogger('requests_oauthlib')
log.addHandler(logging.StreamHandler(sys.stdout))
log.setLevel(logging.DEBUG)
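If you also want the logs from the underlying oauthlib package (the logger configured in the docs example), set it up the same way; a small sketch:
import logging
import sys

for name in ('requests_oauthlib', 'oauthlib'):
    log = logging.getLogger(name)
    log.addHandler(logging.StreamHandler(sys.stdout))
    log.setLevel(logging.DEBUG)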

Enable module's logger

I'm seeing a behavior that I have no way of explaining... Here's my simplified setup:
module x:
import logging

logger = logging.getLogger('x')

def test_debugging():
    logger.debug('Debugging')
test for module x:
import logging
import unittest

from x import test_debugging

class TestX(unittest.TestCase):
    def test_test_debugging(self):
        test_debugging()

if __name__ == '__main__':
    logger = logging.getLogger('x')
    logger.setLevel(logging.DEBUG)
    # logging.debug('another test')
    unittest.main()
If I uncomment the logging.debug('another test') line, I can also see the log from x. Note, it is not a typo: I'm calling debug on logging, not on the logger from module x. And if I call debug on logger, I don't see logs.
What is this, I can't even?..
In your setup, you didn't actually configure logging. Although the configuration can be pretty complex, it would suffice to set the log level in your example:
if __name__ == '__main__':
    # note I configured logging, setting e.g. the level globally
    logging.basicConfig(level=logging.DEBUG)
    logger = logging.getLogger('x')
    logger.setLevel(logging.DEBUG)
    unittest.main()
This will create a simple StreamHandler with a predefined output format that prints all the log records to the console (stderr, by default). I suggest quickly looking over the relevant docs for more info.
Why did it work with the logging.debug call? Because the logging.{info,debug,warning,error} functions all call logging.basicConfig internally when the root logger has no handlers, so once you have called logging.debug, you have configured logging implicitly.
Edit: let's take a quick look under the hood at what the logging.{info,debug,error,warning} functions actually do. Take the following snippet:
import logging
logger = logging.getLogger('mylogger')
logger.warning('hello world')
If you run the snippet, hello world is not handled by any handler you configured (on Python 3, only the logging module's bare "last resort" fallback writes the message to stderr). Why? Because you didn't actually specify how the log records should be treated: should they be printed to stdout, written to a file, or sent to some server that emails them to the recipients? The logger mylogger receives the log record hello world, but it doesn't yet know what to do with it. So, to actually print the record, let's do some configuration for the logger:
import logging
logger = logging.getLogger('mylogger')
formatter = logging.Formatter('Logger received message %(message)s at time %(asctime)s')
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.warning('hello world')
We now attached a handler that handles the record by printing it to the console (StreamHandler defaults to stderr) in the format specified by formatter. Now the record hello world will be printed. We could attach more handlers, and the record would be handled by each of them. Example: try to attach another StreamHandler and you will notice that the record is now printed twice.
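To see the duplication for yourself, a short sketch extending the snippet above:
import logging

logger = logging.getLogger('mylogger')
logger.addHandler(logging.StreamHandler())  # first handler
logger.addHandler(logging.StreamHandler())  # second handler
logger.warning('hello world')  # handled once per handler: printed twice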
So, what's with the module-level logging functions now? If you have a simple program with just one logger that should print messages and nothing more, you can replace the manual configuration with the convenience logging functions:
import logging
logging.warning('hello world')
This will configure the root logger to print the messages to the console by adding a StreamHandler with some default formatter, so you don't have to configure it yourself. After that, it tells the root logger to process the record hello world. Merely a convenience, nothing more. If you want to trigger this basic configuration of the root logger explicitly, issue
logging.basicConfig()
with or without the additional configuration parameters.
Now, let's go through my first code snippet once again:
logging.basicConfig(level=logging.DEBUG)
After this line, the root logger will print all log records with level DEBUG and higher to the command line.
logger = logging.getLogger('x')
logger.setLevel(logging.DEBUG)
We did not configure this logger explicitly, so why are the records still being printed? This is because by default, any logger will propagate the log records to the root logger. So the logger x does not print the records - it has not been configured for that, but it will pass the record further up to the root logger that knows how to print the records.
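Propagation can be observed, and switched off, directly; a minimal sketch:
import logging

logging.basicConfig(level=logging.DEBUG)  # the root logger gets a console handler
logger = logging.getLogger('x')
logger.debug('reaches the root handler')  # printed via propagation
logger.propagate = False
logger.debug('goes nowhere now')  # no handlers on 'x' and propagation is off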

add custom handler with Python logging

I have been working on this almost all day and couldn't figure out what I am missing. I am trying to add a custom handler that emits all log data into a GUI session. It works, but the handler doesn't extend to the submodules and only emits records from the main module. Here is a small snippet I tried.
I have two files:
# main.py
import logging

import logging_two

def myapp():
    logger = logging.getLogger('myapp')
    logging.basicConfig()
    logger.info('Using myapp')
    ch = logging.StreamHandler()
    logger.addHandler(ch)
    logging_two.testme()
    print(logger.handlers)

myapp()
Second module:
# logging_two.py
import logging

def testme():
    logger = logging.getLogger('testme')
    logger.info('IN test me')
    print(logger.handlers)
I would expect the logger in logging_two.testme to have the handler I added in the main module. I looked at the docs, and it seems to me this should work, but I'm not sure what I got wrong.
The result I get is:
[]
[<logging.StreamHandler object at 0x00000000024ED240>]
In myapp() you are adding the handler to the logger named 'myapp'. Since testme() gets the logger named 'testme', it does not have that handler: it belongs to a different part of the logging hierarchy.
If you instead use logger = logging.getLogger() in myapp(), it would work, since you would be adding the handler to the root of the hierarchy, and every logger propagates its records up to the root.
Check out the python logging docs.
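A minimal sketch of that fix, keeping the question's module layout:
# main.py
import logging

import logging_two

def myapp():
    root = logging.getLogger()  # the root logger is an ancestor of every logger
    root.setLevel(logging.INFO)
    root.addHandler(logging.StreamHandler())
    logging.getLogger('myapp').info('Using myapp')
    logging_two.testme()  # records from 'testme' propagate up to the root handler

myapp()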
