I created a Flask app. All the Flask code is inside api.py. The app uses other files, for example utils.py, which contains functions that api.py calls.
Inside api.py I am using app.logger for logging, like
app.logger.debug('HI')
This log is displayed in the console. But in utils.py I am using:
import logging
logger = logging.getLogger('utils')
...
logger.debug('SOME MESSAGE')
But nothing is displayed in the console.
One awful, awful, Awful hack that I am using now is importing app from api.py:
from . import api
api.app.logger.debug('SOME MESSAGE')
And this message is displayed in the console. But I know I am doing something wrong here. Is there a better way?
Flask uses the global app object to store this stuff. What you most likely want is current_app, a proxy to the application handling the current request (note that it is only available inside an application or request context):
from flask import current_app
current_app.logger.debug('hi')
Alternatively, you could configure a default logger to log to a file the same way that Flask configures its own logger, using standard Python logging facilities:
# app_logger.py
import logging
import logging.config
logging.config.fileConfig('logging_config.ini')
logger = logging.getLogger()
Then in other files...
#utils.py
from app_logger import logger
logger.debug('hi')
This second way is how I set up logging for my Flask apps: create a global "logger" object and import that object everywhere I want to log. That makes it easy to find the one place where the config changes, e.g. to log to stdout or to a file.
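The logging_config.ini file referenced above isn't shown; for reference, a minimal sketch that sends everything to stdout might look like this (the names here are illustrative, not from the original setup):

```ini
[loggers]
keys=root

[handlers]
keys=console

[formatters]
keys=simple

[logger_root]
level=DEBUG
handlers=console

[handler_console]
class=StreamHandler
level=DEBUG
formatter=simple
args=(sys.stdout,)

[formatter_simple]
format=%(asctime)s %(name)s %(levelname)s: %(message)s
```

Pointing the handler at a file instead is a matter of swapping StreamHandler for FileHandler and adjusting args.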
Related
I'm trying to establish logging in all modules I'm using. My project structure is
# driver.py
import logging
logger = logging.getLogger(__name__)
class driver:
    ....
# driver_wrapper.py
from driver import driver
device = driver(...)
def driver_func():
    logging.info("...")
    ....
# main.py
import logging
import sys
import driver_wrapper
logging.basicConfig(stream=sys.stdout, level=logging.WARNING)
driver_wrapper.driver_func()
My problem now is that I still get INFO level messages, and the output shows 'INFO:root'. I would expect the module name instead of root.
Is there a way to set the logging level in main.py for all modules, or is my approach already correct? There are a lot of posts about this problem, but the solutions don't seem to work for me.
All your modules that use logging should have the logger = logging.getLogger(__name__) line, and thereafter you should always log via e.g. logger.info(...) and never call e.g. logging.info(...). The latter logs to the root logger, not the module's logger. "All your modules" includes driver_wrapper.py in your example.
I'm trying to import from the following module:
https://github.com/dmlc/gluon-cv/blob/master/gluoncv/torch/data/gluoncv_motion_dataset/dataset.py
However, this module includes the lines:
logging.basicConfig(level=logging.INFO)
log = logging.getLogger()
These settings mess up the logging configuration I'm trying to apply in my main file. How can I import from this module but override its log settings?
TensorFlow hides my logging messages; they do not appear when I run the code.
I have tried the following, but couldn't get my code to work:
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
import os
import tensorflow as tf
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
So my code is the following:
import logging
import tensorflow as tf
logging.basicConfig(filename='example.log', level=logging.DEBUG)
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
I expected the debug messages to appear in example.log, but nothing appeared in it.
When I import tensorflow, the messages don't appear; when I don't, they do.
I need to use both tensorflow and logging because I'm building on existing code. Is there a way to make my logging configuration take precedence over tensorflow's?
Two facts:
logging.basicConfig does nothing if the root logger is already configured. From the docs:
"This function does nothing if the root logger already has handlers configured for it."
tensorflow has the absl-py dependency, which tries to initialize logging when imported by attaching its own handler to the root logger:
# The absl handler will always be attached to root, not the absl logger.
if not logging.root.handlers:
    # Attach the absl handler at import time when there are no other handlers.
    # Otherwise it means users have explicitly configured logging, and the absl
    # handler will only be attached later in app.run(). For App Engine apps,
    # the absl handler is not used.
    logging.root.addHandler(_absl_handler)
Not sure why the handler is attached to the root logger instead of the absl logger, though - might be a bug or a workaround for some other issue.
So the problem is that the import tensorflow call triggers import absl.logging, which configures logging early. Your subsequent call to logging.basicConfig therefore does nothing. To fix that, configure logging before importing tensorflow:
import logging
logging.basicConfig(filename='example.log', level=logging.DEBUG)
import tensorflow as tf
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
Rule of thumb: always call your logging configuration as early as possible.
Writing logs in default format to file
If you just want to write the default logs to a file, the abseil logger can do that too:
from absl import logging as absl_logging
absl_logging.get_absl_handler().use_absl_log_file(
    program_name='mytool',
    log_dir='/var/logs/'
)
Besides the method offered by @hoefling, you can also clear the root logger's handlers just before your own logging configuration:
logging.getLogger().handlers = []
# ...
logging.basicConfig(level=level, handlers=handlers)
For example, I have a script test1.py with code like this:
import logging
from logging.handlers import RotatingFileHandler
import some_module
handler = RotatingFileHandler('TEST1.log', maxBytes=18000, backupCount=7)
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logging.getLogger("some_module").addHandler(handler)
def do_something():
    some_module.do_smth()

do_something()
And I have another script test2.py with code like this:
import logging
from logging.handlers import RotatingFileHandler
import some_module
handler = RotatingFileHandler('TEST2.log', maxBytes=18000, backupCount=7)
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logging.getLogger("some_module").addHandler(handler)
def do_something():
    some_module.do_smth_else()

do_something()
Then I import both scripts into app.py, which calls one or the other as needed.
The problem is that all log messages for some_module coming from test1.py are written to both log files, TEST1.log and TEST2.log.
As I understand it, this is a singleton issue: the logging module acts as shared global state for all scripts running in the same process. So when app.py imports test1.py, a handler is added to some_module's logger, and when it then imports test2.py, a second handler is added, so the logger ends up with two handlers.
Is there a way to add handlers for this module separately, so that all debug messages triggered from test1.py are written to TEST1.log but not to TEST2.log?
UPDATE:
In my case I am trying to do this with the TeleBot module, and there it doesn't seem to work:
logging.getLogger("TeleBot.test1").setLevel(logging.DEBUG)
logging.getLogger("TeleBot.test1").addHandler(handler)
Nothing gets written to my log file. But if I simply do:
logging.getLogger("TeleBot").setLevel(logging.DEBUG)
logging.getLogger("TeleBot").addHandler(handler)
it works, but, as I mentioned in the question, it writes the debug messages to all files.
So, is it a bug in this particular module?
Calling logging.getLogger("some_module") in both files returns the same Logger object, as you have already observed.
To get a separate Logger in each file simply provide a different name in getLogger() each time.
E.g. in test1.py
logging.getLogger("some_module.test1").addHandler(handler)
and in test2.py
logging.getLogger("some_module.test2").addHandler(handler)
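A self-contained sketch of the idea, with in-memory streams standing in for the two log files and "some_module" kept as a placeholder name:

```python
import io
import logging

log1, log2 = io.StringIO(), io.StringIO()

# Distinct names yield distinct Logger objects, each with its own handlers.
logging.getLogger("some_module.test1").addHandler(logging.StreamHandler(log1))
logging.getLogger("some_module.test2").addHandler(logging.StreamHandler(log2))

logging.getLogger("some_module.test1").warning("from test1")
logging.getLogger("some_module.test2").warning("from test2")

# Each message lands only in its own logger's handler, not the sibling's.
```

Note this captures messages logged under those child names; code inside some_module that logs to logging.getLogger("some_module") directly still propagates only up that logger's own hierarchy.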
I can't get this to work: one module creates the Flask application object and runs it, and another implements the views (routes and error handlers). The modules are not contained in a Python package.
app.py
from flask import Flask
app = Flask('graphlog')
import config
import views
if __name__ == '__main__':
    app.run(host=config.host, port=config.port, debug=config.debug)
views.py
from app import app
@app.route('/')
def index():
    return 'Hello!'
config.py
host = 'localhost'
port = 8080
debug = True
I always get Flask's default "404 Not Found" page. If I move the contents of views.py into app.py, however, it works. What's the problem here?
You have four modules here:
__main__, the main script, the file you gave to the Python command to run.
config, loaded from the config.py file.
views, loaded from the views.py file.
app, loaded from app.py when you use import app.
Note that the last of these is separate from the first! The initial script is not loaded as app, and Python treats them as different modules. You end up with two Flask objects: one referenced as __main__.app, the other as app.app.
Create a separate file to be the main entry point for your script; say run.py:
from app import app
import config
if __name__ == '__main__':
    app.run(host=config.host, port=config.port, debug=config.debug)
and remove the import config line from app.py, as well as the last two lines.
Alternatively (but much uglier), use from __main__ import app in views.py.