Can't disable flask/werkzeug logging - python

I have been trying for way too long to disable the Werkzeug logger. I'm trying to create a socketio server in Python, but Werkzeug keeps logging every POST and GET request. It's really annoying because my logging gets flooded.
async_mode = 'gevent'

import logging
from flask import Flask, render_template
import socketio

sio = socketio.Server(logger=False, async_mode=async_mode)
app = Flask(__name__)
app.wsgi_app = socketio.Middleware(sio, app.wsgi_app)
app.config['SECRET_KEY'] = 'secret!'
thread = None

app.logger.disabled = True
log = logging.getLogger('werkzeug')
log.disabled = True


@app.route('/')
def index():
    # global thread
    # if thread is None:
    #     thread = sio.start_background_task(background_thread)
    return render_template('index.html')


@sio.on('answer', namespace='/test')
def test_answer(sid, message):
    print(message)


if __name__ == '__main__':
    if sio.async_mode == 'threading':
        # deploy with Werkzeug
        app.run(threaded=True)
    elif sio.async_mode == 'eventlet':
        # deploy with eventlet
        import eventlet
        import eventlet.wsgi
        eventlet.wsgi.server(eventlet.listen(('', 5000)), app)
    elif sio.async_mode == 'gevent':
        # deploy with gevent
        from gevent import pywsgi
        try:
            from geventwebsocket.handler import WebSocketHandler
            websocket = True
        except ImportError:
            websocket = False
        if websocket:
            pywsgi.WSGIServer(('', 5000), app,
                              handler_class=WebSocketHandler).serve_forever()
        else:
            pywsgi.WSGIServer(('', 5000), app).serve_forever()
    elif sio.async_mode == 'gevent_uwsgi':
        print('Start the application through the uwsgi server. Example:')
        # print('uwsgi --http :5000 --gevent 1000 --http-websockets --master '
        #       '--wsgi-file app.py --callable app')
    else:
        print('Unknown async_mode: ' + sio.async_mode)
Everywhere I see this suggested as the solution, but it doesn't stop Werkzeug from logging:
app.logger.disabled = True
log = logging.getLogger('werkzeug')
log.disabled = True
These are the kind of messages:
::1 - - [2018-02-28 22:09:03] "GET /socket.io/?EIO=3&transport=polling&t=M7UFq6u HTTP/1.1" 200 345 0.000344
::1 - - [2018-02-28 22:09:03] "POST /socket.io/?EIO=3&transport=polling&t=M7UFq7A&sid=daaf8a43faf848a7b2ae185802e7f164 HTTP/1.1" 200 195 0.000284
::1 - - [2018-02-28 22:09:03] "GET /socket.io/?EIO=3&transport=polling&t=M7UFq7B&sid=daaf8a43faf848a7b2ae185802e7f164 HTTP/1.1" 200 198 0.000153
::1 - - [2018-02-28 22:10:03] "GET /socket.io/?EIO=3&transport=polling&t=M7UFq7N&sid=daaf8a43faf848a7b2ae185802e7f164 HTTP/1.1" 400 183 60.058020
I've tried setting the log level to CRITICAL only, but that didn't help either. I've also tried using grep to suppress the messages, but at first it seemed that grep didn't work on the Python console output.
Edit: I'm using Python 3.5.2 on Linux, but I had the same problem with 3.6 on Windows. werkzeug is 0.14.1, flask is 0.12.2 and python-socketio is 1.8.4.
Edit 2: I was able to fix the problem with grep after all. The catch was that the server sends everything to stderr, which has to be redirected on the command line before grep can filter it:
python app.py 2>&1 | grep -v 'GET\|POST'
This gives the result I wanted.

The quick answer is to pass log=None when you create your WSGIServer:
pywsgi.WSGIServer(('', 5000), app, log=None).serve_forever()
The gevent WSGI server logging is apparently a bit special according to the documentation:
[...] loggers are likely to not be gevent-cooperative. For example, the socket and syslog handlers use the socket module in a way that can block, and most handlers acquire threading locks.
If you want to have more control over the gevent WSGI server logging, you can pass in your own logger (and error_log). Just make sure to wrap it in a LoggingLogAdapter first:
from gevent.pywsgi import LoggingLogAdapter
server_log = LoggingLogAdapter(logging.getLogger(__file__))
# server_log.disabled = True # Now you can disable it like a normal log
...
pywsgi.WSGIServer(('', 5000), app, log=server_log).serve_forever()
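For example, to keep the access log but send it to a file instead of stderr, something along these lines should work (a minimal sketch; the logger name and filename are arbitrary):

import logging
from gevent import pywsgi
from gevent.pywsgi import LoggingLogAdapter

access_logger = logging.getLogger('gevent.access')            # name is arbitrary
access_logger.setLevel(logging.INFO)
access_logger.addHandler(logging.FileHandler('access.log'))   # write access lines to a file
access_logger.propagate = False                               # keep them out of the root logger

server_log = LoggingLogAdapter(access_logger)
pywsgi.WSGIServer(('', 5000), app, log=server_log).serve_forever()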
As a side note, I checked which loggers were instantiated with this little patch to logging.getLogger. Maybe it will be helpful for others trying to understand where log output comes from:
import logging

old_getLogger = logging.getLogger


def getLogger(*args, **kwargs):
    print('Getting logger', args, kwargs)
    return old_getLogger(*args, **kwargs)


logging.getLogger = getLogger
The output was something like:
Getting logger ('concurrent.futures',) {}
Getting logger ('asyncio',) {}
Getting logger ('engineio.client',) {}
Getting logger ('engineio.server',) {}
Getting logger ('socketio.client',) {}
Getting logger ('socketio',) {}
Getting logger ('socketio',) {}
Getting logger ('socketio',) {}
Getting logger ('socketio.server',) {}
Getting logger ('socketio.client',) {}
Getting logger () {}
Getting logger ('main',) {}
Getting logger ('flask.app',) {}
Getting logger ('flask',) {}
Getting logger ('werkzeug',) {}
Getting logger ('wsgi',) {}
But of course disabling any of these loggers doesn't work since the default gevent WSGI logger is just printing directly to stderr.
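If changing the server configuration isn't an option, the same line-filtering the question does with grep can be done in-process by replacing sys.stderr with a small wrapper; a rough sketch (the pattern is illustrative, not specific to gevent):

import re
import sys


class FilteredStream:
    """Drop access-log style lines before they reach the real stream."""

    def __init__(self, stream, pattern=r'"(GET|POST) '):
        self._stream = stream
        self._pattern = re.compile(pattern)

    def write(self, text):
        if not self._pattern.search(text):
            self._stream.write(text)

    def flush(self):
        self._stream.flush()


sys.stderr = FilteredStream(sys.stderr)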

Related

Python FastAPI runs in PyCharm in Debug Mode and on Docker but not when Run

I am seeing some weird behaviour with an API I am currently developing in PyCharm.
I have already developed many REST APIs with Python FastAPI successfully, but this time I am facing some very strange behaviour in my application.
To test locally, I added the following lines to the main.py script, which work fine for every other application I have developed:
if __name__ == "__main__":
    logger.info("starting server with dev configuration")
    uvicorn.run(app, host="127.0.0.1", port=5000)
The code runs when I set a breakpoint on the uvicorn.run(...) line and then resume from there.
It also runs when I build a Docker image and run it in Docker with the following Dockerfile:
FROM python:3.10-slim
...
EXPOSE 80
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80", "--workers", "1"]
But when I run it normally, uvicorn does not seem to boot: none of the messages I would expect are logged:
INFO: Started server process [6828]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:5000 (Press CTRL+C to quit)
The problem appeared after I added the Azure Application Insights handler.
Configuration of Application Insights:
import logging
from fastapi import Request
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace import attributes_helper, samplers
from opencensus.trace.span import SpanKind
from opencensus.trace.tracer import Tracer
from starlette.types import ASGIApp
HTTP_HOST = attributes_helper.COMMON_ATTRIBUTES["HTTP_HOST"]
HTTP_METHOD = attributes_helper.COMMON_ATTRIBUTES["HTTP_METHOD"]
HTTP_PATH = attributes_helper.COMMON_ATTRIBUTES["HTTP_PATH"]
HTTP_ROUTE = attributes_helper.COMMON_ATTRIBUTES["HTTP_ROUTE"]
HTTP_URL = attributes_helper.COMMON_ATTRIBUTES["HTTP_URL"]
HTTP_STATUS_CODE = attributes_helper.COMMON_ATTRIBUTES["HTTP_STATUS_CODE"]
STACKTRACE = attributes_helper.COMMON_ATTRIBUTES["STACKTRACE"]
_logger = logging.getLogger(__name__)
class AzureApplicationInsightsMiddleware:
    def __init__(self, app: ASGIApp, azure_exporter: AzureExporter) -> None:
        self._app = app
        self._sampler = samplers.AlwaysOnSampler()
        self._exporter = azure_exporter

    async def __call__(self, request: Request, call_next):
        tracer = Tracer(exporter=self._exporter, sampler=self._sampler)
        with tracer.span("main") as span:
            span.span_kind = SpanKind.SERVER

            response = await call_next(request)  # does not seem to return a response

            span.name = "[{}]{}".format(request.method, request.url)
            tracer.add_attribute_to_current_span(HTTP_HOST, request.url.hostname)
            tracer.add_attribute_to_current_span(HTTP_METHOD, request.method)
            tracer.add_attribute_to_current_span(HTTP_PATH, request.url.path)
            tracer.add_attribute_to_current_span(HTTP_URL, str(request.url))
            try:
                tracer.add_attribute_to_current_span(HTTP_STATUS_CODE, response.status_code)
            except Exception:
                tracer.add_attribute_to_current_span(HTTP_STATUS_CODE, "500")
        return response
Added as middleware:
from fastapi import FastAPI
from opencensus.ext.azure.trace_exporter import AzureExporter
from app.startup.middleware.application_insights import AzureApplicationInsightsMiddleware
from app.startup.middleware.cors import add_cors
from config import app_config
def create_middleware(app: FastAPI) -> FastAPI:
    if app_config.appinsights_instrumentation_key is not None:
        azure_exporter = AzureExporter(connection_string=app_config.appinsights_instrumentation_key)
        app.middleware("http")(AzureApplicationInsightsMiddleware(app=app, azure_exporter=azure_exporter))

    app = add_cors(app)
    return app
Handlers added to the logger:
import logging
import sys
from logging import StreamHandler
from opencensus.ext.azure.log_exporter import AzureLogHandler
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
formatter = logging.Formatter(fmt="%(asctime)s %(levelname)-8s %(name)-15s %(message)s",
                              datefmt="%Y-%m-%d %H:%M:%S")
handler = StreamHandler(stream=sys.stdout)
handler.setLevel(logging.DEBUG)
handler.setFormatter(formatter)
logger.addHandler(handler)
handler = AzureLogHandler(connection_string=connection_string)
handler.setLevel(logging.INFO)
handler.setFormatter(formatter)
logger.addHandler(handler)
I use this standard template in all my other applications, where it works, but this time it obviously does not. I wonder whether there might be an issue with one of the modules installed via requirements.txt:
fastapi
uvicorn
azure-servicebus
pymongo[srv]
pandas
numpy
jinja2
svgwrite
matplotlib
Has anyone faced the same problem?
Edit: I changed the host to localhost; the uvicorn server is still not booting.
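One way to narrow this down (a rough diagnostic sketch, not a confirmed fix; log_level and access_log are standard uvicorn.run() parameters) is to temporarily drop the custom root-logger handlers and let uvicorn do its own logging, to see whether the server actually starts and only its output is being swallowed:

import logging

import uvicorn

if __name__ == "__main__":
    # Temporarily remove the custom handlers added to the root logger
    # (including the AzureLogHandler) to rule out logging side effects.
    logging.getLogger().handlers.clear()

    # app is the FastAPI instance from main.py
    uvicorn.run(app, host="127.0.0.1", port=5000, log_level="debug", access_log=True)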

Python logging with multithreading + multiprocessing

Please take the time to read the full question to understand the exact issue. Thank you.
I have a runner/driver program that listens to a Kafka topic and dispatches tasks using a ThreadPoolExecutor whenever a new message is received on the topic (as shown below):
consumer = KafkaConsumer(CONSUMER_TOPIC, group_id='ME2',
                         bootstrap_servers=[f"{KAFKA_SERVER_HOST}:{KAFKA_SERVER_PORT}"],
                         value_deserializer=lambda x: json.loads(x.decode('utf-8')),
                         enable_auto_commit=False,
                         auto_offset_reset='latest',
                         max_poll_records=1,
                         max_poll_interval_ms=300000)

with ThreadPoolExecutor(max_workers=10) as executor:
    futures = []
    for message in consumer:
        futures.append(executor.submit(SOME_FUNCTION, ARG1, ARG2))
There is a bunch of code in between, but it is not important here, so I have skipped it.
Now, SOME_FUNCTION is imported from another Python script (in fact there is a hierarchy of imports that happens in later stages). What is important is that at some point in these scripts I call a multiprocessing Pool, because I need to do parallel processing on data (SIMD - single instruction multiple data), and I use the apply_async function to do so:
for loop_message_chunk in loop_message_chunks:
    res_list.append(self.pool.apply_async(self.one_matching.match,
                                          args=(hash_set, loop_message_chunk, fields)))
Now, I have two versions of the runner/driver program:
Kafka based (the one shown above): this version spawns threads, and the threads start multiprocessing.
Listen to Kafka -> Start a thread -> Start multiprocessing
REST based (using Flask to achieve the same task with a REST call): this version does not start any threads and calls multiprocessing right away.
Listen to REST endpoint -> Start multiprocessing
Why two runner/driver scripts, you ask? This microservice will be used by multiple teams; some want a synchronous, REST-based system, while others want a real-time, asynchronous, Kafka-based one.
When I log from the parallelized function (self.one_matching.match in the example above), it works when called through the REST version but not when called through the Kafka version (basically, when multiprocessing is kicked off by a thread, it does not work).
Also notice that only the logging from the parallelized function fails; the rest of the scripts in the hierarchy, from the runner down to the script that calls apply_async (including the scripts called from within the thread), log successfully.
Other details:
I configure the loggers using a YAML file.
I configure the logger in the runner script itself, for both the Kafka and the REST version.
I do a logging.getLogger in every other script called after the runner script, to get specific loggers that log to different files.
Logger config (values replaced with generic ones, since I cannot share the exact names):
version: 1
formatters:
  simple:
    format: '%(asctime)s | %(name)s | %(filename)s : %(funcName)s : %(lineno)d | %(levelname)s :: %(message)s'
  custom1:
    format: '%(asctime)s | %(filename)s :: %(message)s'
  time-message:
    format: '%(asctime)s | %(message)s'
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
    stream: ext://sys.stdout
  handler1:
    class: logging.handlers.TimedRotatingFileHandler
    when: midnight
    backupCount: 5
    formatter: simple
    level: DEBUG
    filename: logs/logfile1.log
  handler2:
    class: logging.handlers.TimedRotatingFileHandler
    when: midnight
    backupCount: 30
    formatter: custom1
    level: INFO
    filename: logs/logfile2.log
  handler3:
    class: logging.handlers.TimedRotatingFileHandler
    when: midnight
    backupCount: 30
    formatter: time-message
    level: DEBUG
    filename: logs/logfile3.log
  handler4:
    class: logging.handlers.TimedRotatingFileHandler
    when: midnight
    backupCount: 30
    formatter: time-message
    level: DEBUG
    filename: logs/logfile4.log
  handler5:
    class: logging.handlers.TimedRotatingFileHandler
    when: midnight
    backupCount: 5
    formatter: simple
    level: DEBUG
    filename: logs/logfile5.log
loggers:
  logger1:
    level: DEBUG
    handlers: [console, handler1]
    propagate: no
  logger2:
    level: DEBUG
    handlers: [console, handler5]
    propagate: no
  logger3:
    level: INFO
    handlers: [handler2]
    propagate: no
  logger4:
    level: DEBUG
    handlers: [console, handler3]
    propagate: no
  logger5:
    level: DEBUG
    handlers: [console, handler4]
    propagate: no
  kafka:
    level: WARNING
    handlers: [console]
    propagate: no
root:
  level: INFO
  handlers: [console]
  propagate: no
Possible answer: get rid of the threads and use asyncio instead
example pseudocode structure (cobbled together from these examples)
# pseudocode example structure: probably has bugs...
import asyncio
import multiprocessing  # needed for get_context below
from concurrent.futures import ProcessPoolExecutor
from functools import partial

from aiokafka import AIOKafkaConsumer


async def SOME_FUNCTION_CO(executor, **kwargs):
    res_list = []
    for loop_message_chunk in loop_message_chunks:
        res_list.append(executor.submit(self.one_matching.match, hash_set, loop_message_chunk, fields))
    # call concurrent.futures.wait on res_list later, and cancel unneeded
    # futures (regarding one of your prior questions)
    return res_list


async def consume():
    consumer = AIOKafkaConsumer(
        'my_topic', 'my_other_topic',
        bootstrap_servers='localhost:9092',
        group_id="my-group")
    # Get cluster layout and join group `my-group`
    await consumer.start()

    # Global executor:
    # I would also suggest using a "spawn" context unless you really need the
    # performance of "fork".
    ctx = multiprocessing.get_context("spawn")
    tasks = []  # similar to futures in your example (Task subclasses asyncio.Future,
                # which is similar to concurrent.futures.Future as well)
    with ProcessPoolExecutor(mp_context=ctx) as executor:
        try:
            # Consume messages
            async for msg in consumer:
                tasks.append(asyncio.create_task(SOME_FUNCTION_CO(executor, **kwargs)))
        finally:
            # Will leave consumer group; perform autocommit if enabled.
            await consumer.stop()


if __name__ == "__main__":
    asyncio.run(consume())
I keep going back and forth on how I think I should represent SOME_FUNCTION in this example, but the key point is that in the loop over msg in consumer you are scheduling tasks to be completed eventually. If any of these tasks takes a long time, it can block the main loop (which is also running the async for msg in consumer line). Instead, any task that could take a long time should quickly return a future of some kind, so you can simply access the result once it's ready.
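As a follow-up sketch of that idea (hypothetical names; it assumes a module-level worker function, since process pools need picklable callables), the long-running work can be handed to the pool via run_in_executor and awaited without blocking the consumer loop:

import asyncio
from concurrent.futures import ProcessPoolExecutor


def cpu_bound_match(chunk):
    # placeholder for the real matching work done in a worker process
    return [item * item for item in chunk]


async def handle_message(executor, chunks):
    loop = asyncio.get_running_loop()
    # run_in_executor returns awaitable futures, so the event loop
    # (and the Kafka consumer iterating in it) is never blocked
    futures = [loop.run_in_executor(executor, cpu_bound_match, chunk) for chunk in chunks]
    return await asyncio.gather(*futures)


async def main():
    with ProcessPoolExecutor() as executor:
        results = await handle_message(executor, [[1, 2], [3, 4]])
        print(results)


if __name__ == "__main__":
    asyncio.run(main())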
First of all, I'm not using exactly the same stack. I'm using FastAPI and Redis pub/sub, and it would be tedious for me to replicate it with Flask and Kafka now. I think in principle it should work the same way; at least it might point you to some misconfiguration in your code. Also, I'm hardcoding the logger config.
I'm sorry to paste a lot of code, but I want to provide a complete working example; maybe I'm missing something in your description, since you haven't provided a minimal working example.
I have four files:
app.py (FastAPI application)
config.py (config variables and logger setup)
redis_ps.py (Redis consumer/listener)
utils.py (processing function some_function, Redis publish function)
and a Redis container:
docker pull redis
Run
docker run --restart unless-stopped --publish 6379:6379 --name redis -d redis
python3 app.py (runs the server and the pub/sub listener)
python3 utils.py (publishes a message over pub/sub)
curl -X 'POST' \
'http://0.0.0.0:5000/sync' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '[[2,4],[6, 8]]'
Output
[2021-12-08 17:54:32,688] DEBUG in utils: Run some_function, caller: pubsub
[2021-12-08 17:54:32,688] DEBUG in utils: Run some_function, caller: pubsub
[2021-12-08 17:54:32,698] DEBUG in utils: caller: pubsub, Processing 1, result 1
[2021-12-08 17:54:32,698] DEBUG in utils: caller: pubsub, Processing 3, result 9
[2021-12-08 17:54:32,698] DEBUG in utils: caller: pubsub, Processing 5, result 25
[2021-12-08 17:54:32,698] DEBUG in utils: caller: pubsub, Processing 7, result 49
[2021-12-08 17:54:39,519] DEBUG in utils: Run some_function, caller: rest api
[2021-12-08 17:54:39,520] DEBUG in utils: Run some_function, caller: rest api
[2021-12-08 17:54:39,531] DEBUG in utils: caller: rest api, Processing 8, result 64
[2021-12-08 17:54:39,531] DEBUG in utils: caller: rest api, Processing 6, result 36
[2021-12-08 17:54:39,531] DEBUG in utils: caller: rest api, Processing 2, result 4
[2021-12-08 17:54:39,531] DEBUG in utils: caller: rest api, Processing 4, result 16
Source code
app.py
from concurrent import futures
from typing import List

import uvicorn
from fastapi import FastAPI, APIRouter

from redis_ps import PubSubWorkerThreadListen
from utils import some_function

router = APIRouter()


@router.post("/sync")
def sync_process(data: List[List[int]]):
    with futures.ThreadPoolExecutor(max_workers=2) as executor:
        future_all = [executor.submit(some_function, loop_message_chunks=d, caller="rest api") for d in data]
        return [future.result() for future in future_all]


def create_app():
    app = FastAPI(title="app", openapi_url="/openapi.json", docs_url="/")
    app.include_router(router)
    thread = PubSubWorkerThreadListen()
    thread.start()
    return app


if __name__ == "__main__":
    _app = create_app()
    uvicorn.run(_app, host="0.0.0.0", port=5000, debug=True, log_level="debug")
config.py
import sys
import logging

COMPONENT_NAME = "test_logger"
REDIS_URL = "redis://localhost:6379"


def setup_logger(logger_name: str, log_level=logging.DEBUG, fmt: logging.Formatter = None):
    fmt = fmt or logging.Formatter("[%(asctime)s] %(levelname)s in %(module)s: %(message)s")
    handler = logging.StreamHandler(sys.stdout)
    handler.name = "h_console"
    handler.setFormatter(fmt)
    handler.setLevel(log_level)
    logger_ = logging.getLogger(logger_name)
    logger_.addHandler(handler)
    logger_.setLevel(log_level)
    return logger_


setup_logger(COMPONENT_NAME)
redis_ps.py
import json
import logging
import threading
import time
from concurrent import futures
from typing import Dict, List, Union

import redis

from config import COMPONENT_NAME, REDIS_URL
from utils import some_function

logger = logging.getLogger(COMPONENT_NAME)


class PubSubWorkerThreadListen(threading.Thread):
    def __init__(self):
        super().__init__()
        self._running = threading.Event()

    @staticmethod
    def connect_pubsub() -> redis.client.PubSub:
        while True:
            try:
                r = redis.Redis.from_url(REDIS_URL)
                p = r.pubsub()
                p.psubscribe(["*:*:*"])
                logger.info("Connected to Redis")
                return p
            except Exception:
                time.sleep(0.1)

    def run(self):
        if self._running.is_set():
            return
        self._running.set()
        while self._running.is_set():
            p = self.connect_pubsub()
            try:
                listen(p)
            except Exception as e:
                logger.error(f"Failed to process Redis message or failed to connect: {e}")
                time.sleep(0.1)

    def stop(self):
        self._running.clear()


def get_data(msg) -> Union[Dict, List]:
    data = msg.get("data")
    if isinstance(data, int):
        # the first message has {'data': 1}
        return []
    try:
        return json.loads(data)
    except Exception as e:
        logger.warning("Failed to parse data in the message (%s) with error %s", msg, e)
        return []


def listen(p_):
    logger.debug("Start listening")
    while True:
        for msg_ in p_.listen():
            data = get_data(msg_)
            if data:
                with futures.ThreadPoolExecutor(max_workers=2) as executor:
                    future_all = [executor.submit(some_function, loop_message_chunks=d, caller="pubsub") for d in data]
                    [future.result() for future in future_all]
utils.py
import json
import logging
from multiprocessing import Pool
from typing import List

import redis

from config import COMPONENT_NAME, REDIS_URL

logger = logging.getLogger(COMPONENT_NAME)


def one_matching(v, caller: str = ""):
    logger.debug(f"caller: {caller}, Processing {v}, result {v*v}")
    return v * v


def some_function(loop_message_chunks: List[int], caller: str):
    logger.debug(f"Run some_function, caller: {caller}")
    with Pool(2) as pool:
        v = [pool.apply_async(one_matching, args=(i, caller)) for i in loop_message_chunks]
        res_list = [res.get(timeout=1) for res in v]
    return res_list


def publish():
    data = [[1, 3], [5, 7]]
    r_ = redis.Redis.from_url(REDIS_URL)
    logger.debug("Published message %s %s", "test", data)
    r_.publish("test:test:test", json.dumps(data).encode())


if __name__ == "__main__":
    publish()
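If it turns out that in your Kafka version the worker processes lose the handlers configured in the parent (which can depend on the start method), a standard stdlib pattern is to route worker records through a multiprocessing queue back to a listener in the parent; a minimal sketch with hypothetical names, separate from the example above:

import logging
import logging.handlers
import multiprocessing


def worker_init(log_queue):
    """Run in each worker process: replace handlers with a QueueHandler."""
    root = logging.getLogger()
    root.handlers = [logging.handlers.QueueHandler(log_queue)]
    root.setLevel(logging.DEBUG)


def work(v):
    logging.getLogger("worker").debug("processing %s", v)
    return v * v


if __name__ == "__main__":
    log_queue = multiprocessing.Queue()
    # The listener runs in the parent and forwards records to the real handlers.
    listener = logging.handlers.QueueListener(log_queue, logging.StreamHandler())
    listener.start()

    with multiprocessing.Pool(2, initializer=worker_init, initargs=(log_queue,)) as pool:
        print(pool.map(work, [1, 2, 3]))

    listener.stop()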

Unable to get application logs on gunicorn access log file

Here's the Gunicorn config file:
import multiprocessing
certfile="domain.crt"
keyfile="server.key"
bind = "0.0.0.0:8443"
workers = multiprocessing.cpu_count() * 2 + 1
threads = 2
timeout = 300
graceful_timeout = 300
accesslog = "-"
errorlog = "-"
loglevel = "INFO"
As per the Gunicorn documentation (https://docs.gunicorn.org/en/stable/settings.html#accesslog), if you set:
the "accesslog" value to "-", all Gunicorn access logs go to stdout;
the "errorlog" value to "-", all Gunicorn error logs go to stderr.
Now, I want my application logs to be saved in the same place as the Gunicorn logs.
Therefore, I want my application logs to go to stdout for now (because in the Gunicorn config the "accesslog" value is set to "-", which means stdout).
Here is the current application code:
import logging

from flask import Flask, jsonify

app = Flask(__name__)

if __name__ != '__main__':
    gunicorn_logger = logging.getLogger('gunicorn.error')
    logging.basicConfig(format=LOGGING_FORMAT, datefmt=DATE_FORMAT, level=gunicorn_logger.level)


@app.route('/')
def default_route():
    """Default route"""
    app.logger.debug('this is a DEBUG message')
    app.logger.info('this is an INFO message')
    app.logger.warning('this is a WARNING message')
    app.logger.error('this is an ERROR message')
    app.logger.critical('this is a CRITICAL message')
    return jsonify('hello world')
(Copied from https://trstringer.com/logging-flask-gunicorn-the-manageable-way/)
But here's the issue:
All these application logs are redirected to stderr instead of stdout.
I am only getting gunicorn access logs on stdout, although I should get the application logs as well.
How to solve this issue? What changes to make in the application code?
There's a hacky fix which works, but I am looking for better solutions.
Hacky fix (Logging stdout to gunicorn access log?):
In the application code, I can configure the logger as:
gunicorn_logger = logging.getLogger('gunicorn.error')
logging.basicConfig(format=LOGGING_FORMAT, datefmt=DATE_FORMAT, level=gunicorn_logger.level, stream=sys.stdout)
This seems to redirect my application logs to stdout, but it amounts to hard-coding the stream value. I want it to be tied to the Gunicorn configuration, so that if I later want all the logs to go to a file, I can just change the Gunicorn config file instead of touching the application code.
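One way to avoid hard-coding the stream (a sketch along the lines of the article linked above, not verified against this exact setup) is to reuse whatever handlers Gunicorn itself was configured with, so the application logs automatically follow the Gunicorn config:

import logging

from flask import Flask

app = Flask(__name__)

if __name__ != '__main__':
    # Reuse Gunicorn's handlers and level, so changing accesslog/errorlog in the
    # Gunicorn config also changes where the application logs end up.
    gunicorn_logger = logging.getLogger('gunicorn.error')
    app.logger.handlers = gunicorn_logger.handlers
    app.logger.setLevel(gunicorn_logger.level)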

Is this the right way of logging in Flask?

I have written a Flask App as follows:
import logging
from flask import Flask, jsonify
from mongo import Mongo

app = Flask(__name__)
app.config.from_object("config.ProductionConfig")


# routes to a particular change and patch number
@app.route("/<project>/")
def home(project):
    app.logger.info('testing info log')
    Mongo_obj = Mongo(ip=app.config["DB_HOST"], port=app.config["DB_PORT"],
                      username=app.config["DB_USERNAME"],
                      .....................


if __name__ == '__main__':
    app.run(host='0.0.0.0', port='5000')
Now, the problem I face is that when I look at the logs of the Flask application, all I see is the following:
* Serving Flask app "service" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
172.16.40.189 - - [01/Oct/2019 14:44:29] "GET /abc HTTP/1.1" 200 -
172.16.11.231 - - [01/Oct/2019 14:44:29] "GET /abc HTTP/1.1" 200 -
............
Is there something specific that needs to be done in order to see the log message? Do I need to run the Flask App in debug mode?
If you are not in debug mode, the default log level is WARNING; that's why you don't see your logs. If you'd like your logs to include INFO-level messages, you must set the level on the logger, e.g.:
app = Flask(__name__)
app.logger.setLevel(logging.INFO)
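A slightly fuller, self-contained sketch of the same idea (assuming the built-in development server; the route name is illustrative):

import logging

from flask import Flask, jsonify

app = Flask(__name__)
app.logger.setLevel(logging.INFO)  # allow INFO and above through


@app.route("/demo")
def demo():
    app.logger.info("testing info log")  # now visible in the server console
    return jsonify("ok")


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)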
This is how I set log levels in our app. I use a create_app function to create the Flask app with error handling, logging, and other necessary configuration. Here is the snippet:
from flask import Flask
import logging


def create_app(test_config=None):
    # create and configure the app
    app = Flask(__name__)
    app.logger.setLevel(logging.ERROR)

    # Test log levels
    app.logger.debug("debug log info")
    app.logger.info("Info log information")
    app.logger.warning("Warning log info")
    app.logger.error("Error log info")
    app.logger.critical("Critical log info")

    return app


app = create_app()
The output shows that only ERROR and more severe logs are visible for now:
* Restarting with stat
[2022-12-12 17:38:39,375] ERROR in __init__: Error log info
[2022-12-12 17:38:39,375] CRITICAL in __init__: Critical log info

How to get arrived timestamp of a request in flask

I have an ordinary Flask application with just one thread to process requests. Many requests arrive at the same time and queue up to wait to be processed. How can I get the waiting time in the queue for each request?
from flask import Flask, g
import time

app = Flask(__name__)


@app.before_request
def before_request():
    g.start = time.time()
    g.end = None


@app.teardown_request
def teardown_request(exc):
    g.end = time.time()
    print(g.end - g.start)


@app.route('/', methods=['POST'])
def serve_run():
    pass


if __name__ == '__main__':
    app.debug = True
    app.run()
There is no way to do that using Flask's debug server in single-threaded mode (which is what your example code uses). That's because by default, the Flask debug server merely inherits from Python's standard HTTPServer, which is single-threaded. (And the underlying call to select.select() does not return a timestamp.)
I just have one thread to process requests.
OK, but would it suffice to spawn multiple threads while preventing them from doing "real" work in parallel? If so, you might try app.run(..., threaded=True) so that each request starts immediately in its own thread. After the start timestamp is recorded, use a threading.Lock to force the requests to execute serially.
Another option is to use a different WSGI server (not the Flask debug server). I suspect there's a way to achieve what you want using Gunicorn configured with asynchronous workers in a single thread.
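A minimal sketch of that suggestion (the lock, route, and sleep are illustrative): the timestamp is recorded as soon as the request's thread starts, and a global lock then serializes the actual work:

import threading
import time

from flask import Flask, g, jsonify

app = Flask(__name__)
work_lock = threading.Lock()  # forces the "real" work to run serially


@app.before_request
def record_arrival():
    g.arrived = time.time()  # timestamp taken as soon as the thread starts


@app.route("/", methods=["POST"])
def serve_run():
    with work_lock:
        waited = time.time() - g.arrived  # time spent queueing behind other requests
        time.sleep(0.1)                   # placeholder for the real processing
    return jsonify({"waited_seconds": waited})


if __name__ == "__main__":
    app.run(threaded=True)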
You can do something like this:
from flask import Flask, current_app, jsonify
import time

app = Flask(__name__)


@app.before_request
def before_request():
    Flask.custom_profiler = {"start": time.time()}


@app.after_request
def after_request(response):
    current_app.custom_profiler["end"] = time.time()
    print(current_app.custom_profiler)
    print(f"""execution time: {current_app.custom_profiler["end"] - current_app.custom_profiler["start"]}""")
    return response


@app.route('/', methods=['GET'])
def main():
    return jsonify({
        "message": "Hello world"
    })


if __name__ == '__main__':
    app.run()
And test it like this:
→ curl http://localhost:5000
{"message":"Hello world"}
Flask message
→ python main.py
* Serving Flask app "main" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
{'start': 1622960256.215391, 'end': 1622960256.215549}
execution time: 0.00015807151794433594
127.0.0.1 - - [06/Jun/2021 13:17:36] "GET / HTTP/1.1" 200 -
