Simple flask/gevent request isn't running concurrently - python

I have this simple flask/gevent demo code.
#!/usr/bin/env python
import gevent
from gevent.pywsgi import WSGIServer
from gevent import monkey
monkey.patch_socket()
from flask import Flask, Response
app = Flask(__name__)
@app.route('/')
def stream():
    def gen():
        for i in range(10):
            yield "data: %d\r\n" % i
            gevent.sleep(1)
    return Response(gen())
if __name__ == '__main__':
http = WSGIServer(('', 5000), app)
http.serve_forever()
When I run it and open multiple urls in the browser, all but one of them block. What am I doing wrong?
I have tried running it with monkey.patch_all(), and running it with gunicorn streaming:app -k gevent - it still blocks in the browser.

Multiple tabs in the same browser will block; that doesn't mean gevent/gunicorn isn't running the requests concurrently. I tried it with concurrent curl requests and XMLHttpRequest, and it works as expected (a sketch of such a test is below). Also note that curl buffers output; the "\r\n" is required to make it print line by line.
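For reference, here is a rough sketch of that kind of concurrent client test (standard library only; it assumes Python 3 and that the server above is running on http://localhost:5000/). Each thread should start printing "data: ..." lines right away instead of waiting for the others to finish.
# Rough sketch of a concurrent client test (assumptions: Python 3,
# the streaming server above listening on localhost:5000).
import threading
import urllib.request

def fetch(client_id):
    with urllib.request.urlopen('http://localhost:5000/') as resp:
        for line in resp:
            print(client_id, line.decode().rstrip())

threads = [threading.Thread(target=fetch, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()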
Sidenote: Thanks to mitsuhiko on #pocoo for resolving it. If you haven't tried Flask, you should. Both mitsuhiko and Flask are awesome.

Related

Flask SocketIO + Gevent - buffering events from external processes

I want to send a Socket.IO event from an asynchronous class in my Flask project. But when I send it, it takes an enormous amount of time before it arrives in JavaScript. I am sending it as:
socket_io.emit("event_name", {"foo": "bar"}, broadcast=True, namespace="/com")
App with socketio is initialised as:
app = Flask(__name__, template_folder="templates", static_folder="static", static_url_path="/static")
socketio = SocketIO(app=app, cookie="cookie_name", async_mode=None)
And it is started by this command:
socketio.run(app=app, host="0.0.0.0", port=5000, log_output=False)
My Python library versions are:
# Python == 3.8.5
Flask==1.1.2
Flask-SocketIO==5.0.1
python-engineio==4.0.0
python-socketio==5.04
gevent==20.12.1
gevent-websocket==0.10.1
JavaScript SocketIO: v3.0.4
When I send the event normally with the emit command inside a socket_io handler, it works fine. But when I send the same event from an external process, it takes a long time.
Does anyone know how I can solve this problem?
Thank you
The problem was with monkey patching and 32-bit Python 3. I had to install 64-bit Python 3 and then add this as the first line:
import gevent.monkey; gevent.monkey.patch_all()
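For illustration, a minimal sketch of where the patch has to go, reusing the setup from the question: the point is that the patch runs before Flask, Flask-SocketIO, or anything else that imports the standard socket module.
# Sketch: monkey-patch first, before any other import touches sockets.
import gevent.monkey; gevent.monkey.patch_all()

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__, template_folder="templates", static_folder="static", static_url_path="/static")
socketio = SocketIO(app=app, cookie="cookie_name", async_mode=None)

if __name__ == "__main__":
    socketio.run(app=app, host="0.0.0.0", port=5000, log_output=False)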

Slow requests to Bottle server

Let's run a simple Bottle server:
from bottle import route, run
@route('/')
def index():
    return 'Hello'
if __name__ == "__main__":
run(port=80)
I wanted to test the speed of the requests with:
import requests
for i in range(10):
    requests.get('http://localhost/?param=%i' % i)
and then I was about to use multithreading/parallel to also test what happens if 10 requests arrive at the same time, and not one after another... but I was stopped by something strange:
[Animated console of the incoming requests]
Each request seems to take precisely 1 second!
Why such a long time for each request on a Bottle server? (I know it's a micro-framework, but this is surely not the reason for 1-second to respond!)
PS: it's the same with urllib.request.urlopen. This code uses the default WSGIRefServer:
Bottle v0.12.18 server starting up (using WSGIRefServer())...
and was tested on Windows 7 + Python 3.7 64 bit.

How to deal with flask-socketio timeout for a function

My end goal is to have a button on my website (dashboard created in React) which allows me to run a Selenium test (written in python).
I am using Socket.IO in hopes that I can stream test results live back to the dashboard, but I seem to be hitting some sort of time limit at about 29 seconds.
To debug, I made this test case, which completes on the server side, but my connection is severed before emit('test_progress', 29) happens.
from flask import Flask, render_template
from flask_socketio import SocketIO, join_room, emit
import time
app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret!'
socketio = SocketIO(app)
@socketio.on('run_test')
def handle_run_test(run_test):
    print('received run_test')
    for x in range(1, 30):
        time.sleep(1)
        print(x)
        emit('test_progress', x)
    time.sleep(1)
    print('TEST FINISHED')
    emit('test_finished', {'data': None})
if __name__ == '__main__':
socketio.run(app, debug=True)
(Some of) my JavaScript
import settings from './settings.js';
import io from 'socket.io-client';
const socket = io(settings.socketio);
socket.on('test_progress', function(data){
    console.log(data);
});
My console in browser
...
App.js:154 27
App.js:154 28
polling-xhr.js:269 POST http://127.0.0.1:5000/socket.io/?EIO=3&transport=polling&t=Mbl7mEI&sid=72903901182d49eba52a4a813772eb06 400 (BAD REQUEST)
...
(reconnects)
Eventually, I'll have a test running that could take 40-60 seconds instead of the arbitrary time.sleep(1) calls, so I would like the function to be able to use more than 29 seconds. Am I going about this wrong or is there a way to change this time limit?
My solution was to use threading, as described in this question.
I also needed to use @copy_current_request_context so that the thread could still communicate with the client; a rough sketch of that approach follows.
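This is not the poster's exact code, just a sketch of the pattern under the question's own event names: run the long test in a background thread so the Socket.IO handler returns quickly, and wrap the thread's function in copy_current_request_context so emit() still knows which client to send to.
# Sketch: long-running work in a thread, with the request context copied in.
from threading import Thread
import time

from flask import Flask, copy_current_request_context
from flask_socketio import SocketIO, emit

app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret!'
socketio = SocketIO(app)

@socketio.on('run_test')
def handle_run_test(run_test):
    @copy_current_request_context
    def long_running_test():
        for x in range(1, 60):
            time.sleep(1)            # stand-in for the real Selenium steps
            emit('test_progress', x)
        emit('test_finished', {'data': None})

    Thread(target=long_running_test).start()  # handler returns immediately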

How to enable/implement multithreading in the WSGIServer of Flask (Python)

I have a Flask API which serves web and mobile apps.
But sometimes under heavy load, the app or websites stop responding quickly and take a long time to display results.
I just want to enable multithreading in the Flask app running under WSGIServer.
def main():
    """Main entry point of the app."""
    try:
        http_server = WSGIServer(('0.0.0.0', 8084), app, log=logging, error_log=logging)
        http_server.serve_forever()
    except Exception as exc:
        logger.error(exc.message)
        logger.exception(traceback.format_exc())
    finally:
        # Do something here
        pass
Thanks,
The built-in Flask development server, while not intended for production deployment, does allow multithreading:
from flask import Flask
app = Flask(__name__)
@app.route('/')
def index():
    return 'Hello, world!'
if __name__ == '__main__':
    app.run(threaded=True)
The above is a simple Hello World app with threading enabled; the handler itself doesn't spawn any threads, but with threaded=True the server dispatches each incoming request to its own thread.
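Since the question is specifically about gevent's WSGIServer, here is a sketch of that route under the assumption that the slow handlers are I/O-bound: the server already runs every request in its own greenlet, and monkey-patching makes the blocking I/O cooperative so requests can overlap.
# Sketch: gevent WSGIServer with monkey patching (assumes I/O-bound handlers).
from gevent import monkey; monkey.patch_all()

from gevent.pywsgi import WSGIServer
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello, world!'

if __name__ == '__main__':
    http_server = WSGIServer(('0.0.0.0', 8084), app)
    http_server.serve_forever()
CPU-bound handlers still won't overlap under gevent; for those, multiple worker processes (for example under gunicorn) are the usual answer.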

How to get the arrival timestamp of a request in Flask

I have an ordinary Flask application with just one thread to process requests. Many requests arrive at the same time and queue up, waiting to be processed. How can I get each request's waiting time in the queue?
from flask import Flask, g
import time
app = Flask(__name__)
@app.before_request
def before_request():
    g.start = time.time()
    g.end = None
@app.teardown_request
def teardown_request(exc):
    g.end = time.time()
    print(g.end - g.start)
@app.route('/', methods=['POST'])
def serve_run():
    pass
if __name__ == '__main__':
app.debug = True
app.run()
There is no way to do that using Flask's debug server in single-threaded mode (which is what your example code uses). That's because by default, the Flask debug server merely inherits from Python's standard HTTPServer, which is single-threaded. (And the underlying call to select.select() does not return a timestamp.)
I just have one thread to process requests.
OK, but would it suffice to spawn multiple threads, but prevent them from doing "real" work in parallel? If so, you might try app.run(..., threaded=True), to allow the requests to start immediately (in their own thread). After the start timestamp is recorded, use a threading.Lock to force the requests to execute serially.
Another option is to use a different WSGI server (not the Flask debug server). I suspect there's a way to achieve what you want using Gunicorn, configured with asynchronous workers in a single thread.
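A rough sketch of the first option (threaded server records the arrival time immediately, a lock keeps the real work serial); the handler body here is just a placeholder.
# Sketch: threaded=True records arrival per request, a lock serializes the work.
import threading
import time

from flask import Flask, g

app = Flask(__name__)
work_lock = threading.Lock()

@app.before_request
def before_request():
    g.arrived = time.time()      # recorded as soon as the request's thread starts

@app.route('/', methods=['POST'])
def serve_run():
    with work_lock:              # force the actual work to run one at a time
        waited = time.time() - g.arrived
        print('waited %.3f s in queue' % waited)
        # ... real work would go here ...
        return 'ok'

if __name__ == '__main__':
    app.run(threaded=True)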
You can do something like this:
from flask import Flask, current_app, jsonify
import time
app = Flask(__name__)
@app.before_request
def before_request():
    Flask.custom_profiler = {"start": time.time()}
@app.after_request
def after_request(response):
    current_app.custom_profiler["end"] = time.time()
    print(current_app.custom_profiler)
    print(f"""execution time: {current_app.custom_profiler["end"] - current_app.custom_profiler["start"]}""")
    return response
@app.route('/', methods=['GET'])
def main():
    return jsonify({
        "message": "Hello world"
    })
if __name__ == '__main__':
app.run()
And test it like this:
→ curl http://localhost:5000
{"message":"Hello world"}
Flask server output:
→ python main.py
* Serving Flask app "main" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
{'start': 1622960256.215391, 'end': 1622960256.215549}
execution time: 0.00015807151794433594
127.0.0.1 - - [06/Jun/2021 13:17:36] "GET / HTTP/1.1" 200 -
