I am using the logging module to record what happens in my Bokeh server, and I want to write the log to a file with a .log extension. But when I run the Bokeh server, the file is not created and no operations are saved to the .log file.
Part of the code I wrote is below.
Could it be that I am making a mistake in the code, or does the Bokeh server not work with logging?
import logging

LOG_FORMAT = "%(levelname)s %(asctime)s - %(message)s"
logging.basicConfig(filename="test.log",
                    level=logging.DEBUG,
                    format=LOG_FORMAT,
                    filemode="w")
logger = logging.getLogger()
When you use bokeh serve %some_python_file%, the Bokeh server starts right away, but your code is executed only when you actually open the URL that points to the Bokeh document which that code populates.
bokeh serve configures logging by using logging.basicConfig as well, and calling this function again does not override anything - that's just how logging.basicConfig works.
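You can see that no-op behavior in isolation with a minimal sketch (not from the original question):

import logging

logging.basicConfig(level=logging.INFO)   # first call configures the root logger
logging.basicConfig(filename="test.log")  # second call is silently ignored:
                                          # the root logger already has handlers
logging.info("this goes to the console, not to test.log")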
Instead of using logging directly, you should just create and configure your own logger:
import logging

LOG_FORMAT = "%(levelname)s %(asctime)s - %(message)s"

# a dedicated handler that writes to test.log, truncating it on startup
file_handler = logging.FileHandler(filename='test.log', mode='w')
file_handler.setFormatter(logging.Formatter(LOG_FORMAT))

# configure a logger of your own instead of the root logger
logger = logging.getLogger(__name__)
logger.addHandler(file_handler)
logger.setLevel(logging.DEBUG)

logger.info('Hello there')
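Because the handler is attached to a logger you own rather than to the root logger that bokeh serve configured, records emitted through this logger reach test.log regardless of the server's own logging setup.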
Eugene's answer is correct. Calling logging.basicConfig() a second time does not have any effect. Nevertheless, if you are using Python >= 3.8, you can pass force=True, which removes all existing logging handlers and sets up a new one. This practically means that your own logging.basicConfig() will just work:
logging.basicConfig(..., force=True)
See the logging.basicConfig docs.
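Applied to the question's configuration, it would look something like this sketch (assumes Python 3.8+):

import logging

LOG_FORMAT = "%(levelname)s %(asctime)s - %(message)s"
logging.basicConfig(filename="test.log",
                    level=logging.DEBUG,
                    format=LOG_FORMAT,
                    filemode="w",
                    force=True)  # Python 3.8+: drop any handlers already on the root logger

logging.getLogger().debug("now written to test.log")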
I have a Cloud Function running Python 3.7 runtime triggered from a Pub/Sub Topic.
In the code, I have places where I use print() to write logs. However, when I go to the logs tab of my function, I see that an extra blank line is added after each log. I would like to remove these, since this is basically doubling my usage of the Logging API.
I have tried using print(message, end="") but this did not remove the blank lines.
Thanks in advance.
Although I have not found out the root cause for the blank line, I was able to resolve this by using the google-cloud-logging library as suggested by John in the comment of my question.
Resulting code is as below:
import os

import google.cloud.logging
import logging

# set up logging client when run on GCP
if not os.environ.get("DEVELOPMENT"):  # custom environment variable
    # only on GCP
    logging_client = google.cloud.logging.Client()
    logging_client.setup_logging()

# define logger
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)  # min logging level for logger

# define handler only on local
# only add handlers locally, since in Cloud Functions there is already a handler attached to the logger
# adding another handler in Cloud Functions would result in duplicate logging with severity = ERROR
if os.environ.get("DEVELOPMENT"):
    console_handler = logging.StreamHandler()  # handler to write to stream
    console_handler.setLevel(logging.DEBUG)  # min logging level for handler
    # add handler to logger
    logger.addHandler(console_handler)

def my_function():
    logger.info('info')
This code will:
not send logs to GCP when the function is executed locally
print INFO and DEBUG logs both locally and on GCP
Thank you both for your suggestions.
Instead of using print, use a logger:
import logging

logger = logging.getLogger("YourCloudFunctionLoggerName")
logger.setLevel(logging.DEBUG)
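For completeness, a hedged sketch of how such a logger might replace the print calls inside a Pub/Sub-triggered entry point (the function name and use of context.event_id are illustrative assumptions, not from the original answer):

def pubsub_handler(event, context):
    # hypothetical entry point; on GCP the platform handler routes these
    # records to Cloud Logging with proper severities
    logger.debug("received message id: %s", context.event_id)
    logger.info("processing finished")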
I have a problem. In my project I'm using Python logging to record every single step of my program. The code is simple:
import logging
from datetime import datetime

log = logging.getLogger()
logging.basicConfig(
    handlers=[
        # one dated log file per run; the ./logs/<date>/ directory must already exist
        logging.FileHandler("./logs/{:%d-%m-%Y}/".format(datetime.now()) + "{:%Y-%m-%d-%H-%M-%S}.log".format(datetime.now()), 'w', 'utf-8'),
        logging.StreamHandler()
    ],
    level=logging.INFO,
    format='[%(asctime)s] %(levelname)s - %(message)s',
    datefmt='%H:%M:%S'
)
I don't know why, but the pipeline in Azure DevOps automatically treats every log as an error, no matter whether the log is of type INFO or ERROR.
The same thing happens in the console output: everything is in red.
How can I fix it?
Azure DevOps does, by default, treat Python logging output as warnings or errors. To make logging work well in Azure DevOps, follow the steps below.
Set Logging Level:
import logging
logger = logging.getLogger('azure.mgmt.resource')
# Set the desired logging level
logger.setLevel(logging.DEBUG)
Use General Namespaces:
import logging

logger = logging.getLogger('azure.storage')
logger.setLevel(logging.INFO)

logger = logging.getLogger('azure')
logger.setLevel(logging.ERROR)

print(f"Logger enabled for ERROR={logger.isEnabledFor(logging.ERROR)}, "
      f"WARNING={logger.isEnabledFor(logging.WARNING)}, "
      f"INFO={logger.isEnabledFor(logging.INFO)}, "
      f"DEBUG={logger.isEnabledFor(logging.DEBUG)}")
This is the link to the standard logging library.
Check out the documentation for the official procedure for handling logging.
The FastAPI library that I import for an API I have written writes many logging.INFO-level messages to the console, which I would like either to redirect to a file-based log or, ideally, send to both console and file. Here is an example of FastAPI module logging events in my console.
So I've tried to implement this Stack Overflow answer ("Easy-peasy with Python 3.3 and above"), but the log file it creates ("api_screen.log") is always empty...
import logging

# -------------------------- logging ----------------------------
logging_file = "api_screen.log"
logging_level = logging.INFO
logging_format = ' %(message)s'
logging_handlers = [logging.FileHandler(logging_file), logging.StreamHandler()]
logging.basicConfig(level=logging_level, format=logging_format, handlers=logging_handlers)
logging.info("------logging test------")
My own "------logging test------" message does appear on the console within the other FastAPI logs, and the log file does get created, but it has size zero.
So what do I need to do to get the file logging working?
There are multiple issues here. First and most importantly: basicConfig does nothing if the root logger already has handlers configured, which fastAPI has already set up. So the handlers you are creating are never used. When you call logging.info() you are sending a log record to the root logger, which is printed because fastAPI has added a handler to it. You are also not setting the level on your handlers. Try this code instead of what you currently have:
import logging

logging_file = "api_screen.log"
logging_level = logging.INFO

logging_fh = logging.FileHandler(logging_file)
logging_sh = logging.StreamHandler()
logging_fh.setLevel(logging_level)
logging_sh.setLevel(logging_level)

root_logger = logging.getLogger()
root_logger.setLevel(logging_level)  # make sure the root logger lets INFO records through
root_logger.addHandler(logging_fh)
root_logger.addHandler(logging_sh)

logging.info('--test--')
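Since both handlers hang off the root logger, your own logging.info() calls land in api_screen.log directly, and any records from fastAPI's loggers that propagate up to the root logger will be written there as well.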
I am about to write a Flask app for something very trivial, and I hit a roadblock in my logging practices.
This is my simple Flask app; I wrote it to illustrate the problem I ran into and have been stuck on for some time, trying to figure out what is happening with Python logging & Flask.
# standard
from flask import Flask
from flask_restful import Api
import logging
import json

# logging config
log_fmt = "%(asctime)s %(levelname)s %(process)d %(filename)s %(funcName)s %(message)s"
logging.basicConfig(
    filename="test.log",
    filemode="w",
    format=log_fmt,
    level=logging.DEBUG
)

# create an object of flask (flask is a web framework)
app = Flask(__name__)
api = Api(app)

# health check /
@app.route("/", methods=['GET'])
def default():
    logging.debug("/ request received")
    out_dict = {
        "hello": "world"
    }
    logging.debug("/ response " + str(out_dict))
    return json.dumps(out_dict)

# main function, entry point
if __name__ == "__main__":
    # invokes src and runs the application
    logging.debug("starting")
    # COMMENTING the line below gets me the log file! What's happening with flask & logging?
    app.run(host="0.0.0.0", port=7001, debug=True)
    logging.debug("stopping")
Now this is the pattern I generally adopt when I need logging. But when I apply this logging pattern together with app.run(...), the log file never gets created. I am unable to figure out why this happens.
On the contrary, if I comment out app.run(...), the log file gets created with the corresponding debug logs I have in place.
I have been struggling to understand this. I did land on Flask's built-in log handler, but looking at its implementation, it attaches to the logging module itself, so the whole thing still does not make sense. Any help or direction here will be appreciated.
Using logging.basicConfig makes a number of assumptions that calling app.run(...) may have undone, as Flask also makes use of the logging module to set up logging output, as you noted. However, you can manually set up a file handler and attach it to the root logger like so (i.e. replace the # logging config section with):
# logging config
log_fmt = "%(asctime)s %(levelname)s %(process)d %(filename)s %(funcName)s %(message)s"
handler = logging.FileHandler('test.log')
handler.setFormatter(logging.Formatter(log_fmt))
root_logger = logging.getLogger()
root_logger.addHandler(handler)
root_logger.setLevel(logging.DEBUG)
This sets up a logging handler with the formatter set to the log_fmt you specified, then attaches that handler to the root logger returned by logging.getLogger(). Run the application, give it some requests, and quit; you should see the appropriate entries show up inside test.log in the current working directory, while some of the typical logging output produced by Flask is also still shown.
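If you also want the entries echoed to the console alongside the file (an addition beyond the original answer), a StreamHandler can be attached the same way:

console_handler = logging.StreamHandler()
console_handler.setFormatter(logging.Formatter(log_fmt))
root_logger.addHandler(console_handler)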
I'm using Python's Tornado library for my web service, and I want every log created by my code, as well as by Tornado, to be JSON-formatted. I've tried setting the formatter on the root logger and on all the other loggers; this is the hack I'm currently trying to get working. It seems to me like it should work. However, when I run the app, all of the logs from Tornado are still in their standard format.
import logging
from tornado.log import access_log, app_log, gen_log
import logmatic

loggers = [
    logging.getLogger(),
    logging.getLogger('tornado.access'),
    logging.getLogger('tornado.application'),
    logging.getLogger('tornado.general'),
    access_log,
    gen_log,
    app_log
]

json_formatter = logmatic.JsonFormatter()
for logger in loggers:
    for hand in logger.handlers:
        hand.setFormatter(json_formatter)
logging.getLogger('tornado.access').warning('All the things')
# WARNING:tornado.access (172.26.0.6) 0.47ms
# NOT JSON???
NOTE: When I include the loggers for my own service (logging.getLogger('myservice')) in the loggers list and run this, they do get the updated formatter and emit JSON. This rules out problems with the logmatic formatter. I just can't get the formatter to work for the Tornado loggers.
Tornado's loggers don't have any handlers attached before loop.start() is called, so you should add a handler with the desired formatting to the loggers yourself:
formatter = logging.Formatter(...)
handler = logging.StreamHandler()
handler.setFormatter(formatter)

for l in loggers:
    l.setLevel(logging.WARNING)
    l.addHandler(handler)
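To tie this together, here is a minimal end-to-end sketch under the same assumptions (logmatic installed, handler attached before the loop starts); the application and handler names are illustrative:

import logging
import logmatic
import tornado.ioloop
import tornado.web

# attach a JSON-formatting handler to Tornado's loggers before the loop starts
handler = logging.StreamHandler()
handler.setFormatter(logmatic.JsonFormatter())
for name in ('tornado.access', 'tornado.application', 'tornado.general'):
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("ok")

if __name__ == "__main__":
    tornado.web.Application([(r"/", MainHandler)]).listen(8888)
    tornado.ioloop.IOLoop.current().start()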