How to enable/implement multithreading in the WSGIServer of a Flask app - python

I have a Flask API which serves web and mobile apps.
But sometimes under heavy load, the app or website stops responding quickly and takes a long time to return results.
I just want to enable multithreading in the Flask app running with WSGIServer.
def main():
    """Main entry point of the app."""
    try:
        http_server = WSGIServer(('0.0.0.0', 8084), app, log=logging, error_log=logging)
        http_server.serve_forever()
    except Exception as exc:
        logger.error(str(exc))  # exc.message no longer exists in Python 3
        logger.exception(traceback.format_exc())
    finally:
        # Do something here
        pass
Thanks,

The built-in Flask development server, whilst not intended for production deployment, does allow multithreading:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello, world!'

if __name__ == '__main__':
    app.run(threaded=True)
The above is a simple Hello World script with threading enabled; the handler itself does nothing concurrent, but with threaded=True each incoming request is handled in its own thread, so a slow request no longer blocks the others.
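If the WSGIServer in the original question is gevent's gevent.pywsgi.WSGIServer (the log and error_log arguments suggest it is), it already handles each request in its own greenlet; what usually blocks it is unpatched blocking I/O. A minimal sketch, assuming gevent is installed and app is the Flask application from the question (the module name myapp is a placeholder):

# Minimal sketch, assuming gevent's WSGIServer; "myapp" is a placeholder module name.
from gevent import monkey
monkey.patch_all()  # patch blocking stdlib calls so greenlets can yield to each other

from gevent.pool import Pool
from gevent.pywsgi import WSGIServer

from myapp import app  # placeholder import; use your own application module

if __name__ == '__main__':
    # Each request runs in its own greenlet; the Pool caps concurrent requests at 1000.
    http_server = WSGIServer(('0.0.0.0', 8084), app, spawn=Pool(1000))
    http_server.serve_forever()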

Related

Python Flask socketIO server not running

I have this simple script with a Flask webserver. When I try to run the Python script, nothing happens; it just freezes.
I have already installed eventlet but this has not fixed the issue.
from flask import Flask, render_template
from flask_socketio import SocketIO

app = Flask(__name__, static_folder="statics", template_folder="templates")
socketio = SocketIO(app)

@app.route("/")
def main():
    return render_template('index.html')

@socketio.event
def connect(sid, environ):
    print(sid, 'connected')

@socketio.event
def disconnect(sid):
    print(sid, 'disconnected')

if __name__ == "__main__":
    socketio.run(app)
How can I stop this script from freezing and make it serve the webpage?
I am not sure what you mean by "it freezes", but if it means you don't get any output in the terminal for debugging, you can fix that by setting debug mode to true using:
socketio.run(app, debug=True)
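If the server still appears to hang at startup, it may be worth pinning the async backend explicitly rather than letting Flask-SocketIO auto-detect eventlet. A minimal sketch, assuming the 'threading' backend is acceptable for now (async_mode is a documented SocketIO constructor argument):

# Minimal sketch: pin the async backend so startup behaviour is predictable.
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
# 'threading' avoids eventlet/gevent entirely; switch back to 'eventlet' once it works.
socketio = SocketIO(app, async_mode='threading')

@app.route("/")
def index():
    return "SocketIO server is up"

if __name__ == "__main__":
    socketio.run(app, debug=True)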

How to emit data from background process to Flask-SocketIO

I emitted data from a different process to SocketIO, but it is not working.
I created Flask App in which I am using Flask-SocketIO framework. The code for flask app is below:
from web import create_app, socketio

app = create_app()

if __name__ == '__main__':
    socketio.run()
I am running this using the flask run command.
But I have another Python script in which I import socketio and want to emit data to the client's browser.
# cli-script.py
import time
from web import socketio

def demo():
    while 1:
        socketio.emit('my-event', ("My Data"))
        time.sleep(10)

demo()
My flask application folder structure looks like this:
/
    web/
        __init__.py
        code.py
    web-script.py
    cli-script.py
and I am running two python processes:
flask run
python cli-script.py
Why doesn't this work?
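Flask-SocketIO's documented way to emit from an external process is to connect both processes to the same message queue via the message_queue argument. A minimal sketch, assuming a Redis server on the default URL and that create_app() is updated to pass the same message_queue to its own SocketIO instance:

# cli-script.py -- minimal sketch; 'my-event' comes from the question, the Redis
# URL is an assumption. The Flask app must create its SocketIO with the same
# queue, e.g. SocketIO(app, message_queue='redis://').
import time
from flask_socketio import SocketIO

# No Flask app here: this instance only writes events to the shared queue.
socketio = SocketIO(message_queue='redis://')

while True:
    socketio.emit('my-event', 'My Data')
    time.sleep(10)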

How to get the arrival timestamp of a request in Flask

I have an ordinary Flask application with just one thread to process requests. Many requests arrive at the same time and queue up waiting to be processed. How can I get the time each request spends waiting in the queue?
from flask import Flask, g
import time

app = Flask(__name__)

@app.before_request
def before_request():
    g.start = time.time()
    g.end = None

@app.teardown_request
def teardown_request(exc):
    g.end = time.time()
    print(g.end - g.start)

@app.route('/', methods=['POST'])
def serve_run():
    pass

if __name__ == '__main__':
    app.debug = True
    app.run()
There is no way to do that using Flask's debug server in single-threaded mode (which is what your example code uses). That's because by default, the Flask debug server merely inherits from Python's standard HTTPServer, which is single-threaded. (And the underlying call to select.select() does not return a timestamp.)
I just have one thread to process requests.
OK, but would it suffice to spawn multiple threads, but prevent them from doing "real" work in parallel? If so, you might try app.run(..., threaded=True), to allow the requests to start immediately (in their own thread). After the start timestamp is recorded, use a threading.Lock to force the requests to execute serially.
Another option is to use a different WSGI server (not the Flask debug server). I suspect there's a way to achieve what you want using Gunicorn, configured with asynchronous workers in a single thread.
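A minimal sketch of the threaded=True idea described above: the arrival time is recorded as soon as the request's thread starts, and an explicit lock then forces the actual work to run one request at a time (the lock and the response shape are illustrative, not part of Flask):

# Minimal sketch: requests are timestamped immediately in their own threads,
# but the "real" work is serialized behind a lock.
import threading
import time

from flask import Flask, g

app = Flask(__name__)
work_lock = threading.Lock()  # illustrative: serializes the actual work

@app.before_request
def record_arrival():
    g.arrived = time.time()  # taken as soon as this request's thread starts

@app.route('/', methods=['POST'])
def serve_run():
    with work_lock:
        waited = time.time() - g.arrived  # time spent queued behind the lock
        # ... do the real single-threaded work here ...
        return {'waited_seconds': waited}

if __name__ == '__main__':
    app.run(threaded=True)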
You can do something like this:
from flask import Flask, current_app, jsonify
import time

app = Flask(__name__)

@app.before_request
def before_request():
    Flask.custom_profiler = {"start": time.time()}

@app.after_request
def after_request(response):
    current_app.custom_profiler["end"] = time.time()
    print(current_app.custom_profiler)
    print(f"""execution time: {current_app.custom_profiler["end"] - current_app.custom_profiler["start"]}""")
    return response

@app.route('/', methods=['GET'])
def main():
    return jsonify({
        "message": "Hello world"
    })

if __name__ == '__main__':
    app.run()
And test it like this:
→ curl http://localhost:5000
{"message":"Hello world"}
Flask message
→ python main.py
* Serving Flask app "main" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
{'start': 1622960256.215391, 'end': 1622960256.215549}
execution time: 0.00015807151794433594
127.0.0.1 - - [06/Jun/2021 13:17:36] "GET / HTTP/1.1" 200 -

Replacing flask internal web server with Apache

I have written a single user application that currently works with Flask internal web server. It does not seem to be very robust and it crashes with all sorts of socket errors as soon as a page takes a long time to load and the user navigates elsewhere while waiting. So I thought to replace it with Apache.
The problem is, my current code is a single program that first launches about ten threads to do stuff, for example set up ssh tunnels to remote servers and zmq connections to communicate with a database located there. Finally it enters run() loop to start the internal server.
I followed all sorts of instructions and managed to get Apache to serve the initial page. However, everything goes wrong because I now don't have any worker threads available, nor any globally initialised classes, and none of the global variables holding interfaces to communicate with those threads exist.
Obviously I am not a web developer.
How badly "wrong" is my current code? Is there any way to make it work with Apache with a reasonable amount of work? Can I have Apache just replace the run() part and communicate with an already running application? My current app, in a very simplified form (without the data processing threads), is something like this:
comm = None
app = Flask(__name__)

class CommsHandler(object):
    def __init__(self):
        # Init communication links to external servers and databases
        pass

    def request_data(self, request):
        # Use initialised links to request something
        return result

@app.route("/", methods=["GET"])
def mainpage():
    return render_template("main.html")

@app.route("/foo", methods=["GET"])
def foo():
    a = comm.request_data("xyzzy")
    return render_template("foo.html", data=a)

comm = CommsHandler()
app.run()
Or have I done this completely wrong? When I remove app.run() and just import the app object into the wsgi script, I do get a response from the main page, as it does not need a reference to the global variable comm.
/foo does not work, as "comm" is an uninitialised variable. And I can see why, of course. I just never thought this would need to be exported to Apache or any other web server.
So the question is: can I launch this application somehow in an rc script at boot, set up its communication links and everything, and have Apache/wsgi just call functions of the running application instead of launching a new one?
Hannu
This is a simple app running with Flask's internal server:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run()
To run it on an Apache server, check out the FastCGI docs:
from flup.server.fcgi import WSGIServer
from yourapplication import app

if __name__ == '__main__':
    WSGIServer(app).run()
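To address the global-state part of the question: under mod_wsgi (or any long-lived WSGI host), module-level code in the .wsgi file runs once per process when the application is loaded, so the communication object can be created there instead of just before app.run(). A minimal sketch, reusing the names from the question and the answer above (yourapplication, CommsHandler); they are placeholders for the real module:

# yourapplication.wsgi -- minimal sketch for mod_wsgi; module and class names
# are taken from the question/answer above and may differ in a real project.
import yourapplication

# Runs once per worker process at import time: set up the ssh tunnels, zmq
# connections and background threads exactly as the old pre-app.run() code did.
yourapplication.comm = yourapplication.CommsHandler()

# mod_wsgi looks for a module-level callable named "application".
application = yourapplication.app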

Simple flask/gevent request isn't running concurrently

I have this simple flask/gevent demo code.
#!/usr/bin/env python
import gevent
from gevent.pywsgi import WSGIServer
from gevent import monkey
monkey.patch_socket()

from flask import Flask, Response

app = Flask(__name__)

@app.route('/')
def stream():
    def gen():
        for i in range(10):
            yield "data: %d\r\n" % i
            gevent.sleep(1)
    return Response(gen())

if __name__ == '__main__':
    http = WSGIServer(('', 5000), app)
    http.serve_forever()
When I run it and open multiple urls in the browser, all but one of them block. What am I doing wrong?
I have tried running it with monkey.patch_all(), and running it with gunicorn streaming:app -k gevent - it still blocks in the browser.
Multiple tabs in the same browser will block. That doesn't mean gevent/gunicorn isn't running the requests concurrently. I tried it with concurrent curl requests and XMLHttpRequest and it works as expected. Also note that curl buffers output; the "\r\n" is required to make it print line by line.
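To confirm the concurrency from a terminal instead of browser tabs, two unbuffered requests can be fired side by side (a shell sketch; -N tells curl not to buffer, and the port matches the example above):

curl -N http://localhost:5000/ &
curl -N http://localhost:5000/ &
wait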
Sidenote: Thanks to mitsuhiko on #pocoo for resolving it. If you haven't tried Flask, you should. Both mitsuhiko and Flask are awesome.
