My end goal is to have a button on my website (dashboard created in React) which allows me to run a Selenium test (written in python).
I am using socket.io in hopes that I can stream test results live back to the dashboard, but I seem to be hitting some sort of time limit at about 29 seconds.
To debug, I made this test case, which completes on the server side, but my connection is severed before emit('test_progress', 29) happens.
from flask import Flask, render_template
from flask_socketio import SocketIO, join_room, emit
import time
app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret!'
socketio = SocketIO(app)
@socketio.on('run_test')
def handle_run_test(run_test):
    print('received run_test')
    for x in range(1, 30):
        time.sleep(1)
        print(x)
        emit('test_progress', x)
    time.sleep(1)
    print('TEST FINISHED')
    emit('test_finished', {'data': None})

if __name__ == '__main__':
    socketio.run(app, debug=True)
(Some of) my JavaScript
import settings from './settings.js';
import io from 'socket.io-client';
const socket = io(settings.socketio);
socket.on('test_progress', function(data){
    console.log(data);
});
My browser console:
...
App.js:154 27
App.js:154 28
polling-xhr.js:269 POST http://127.0.0.1:5000/socket.io/?EIO=3&transport=polling&t=Mbl7mEI&sid=72903901182d49eba52a4a813772eb06 400 (BAD REQUEST)
...
(reconnects)
Eventually, I'll have a test running that could take 40-60 seconds instead of the arbitrary time.sleep(1) calls, so I would like the function to be able to use more than 29 seconds. Am I going about this wrong or is there a way to change this time limit?
My solution was to use threading, as described in this question. The blocking handler appears to starve the Socket.IO heartbeat, so the client gives up around the ping timeout; moving the work off the handler thread avoids that.
I also needed to apply @copy_current_request_context so that the background thread could still emit back to the right client.
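Roughly, the working handler ended up looking like this (a minimal sketch built on the server above; the thread-plus-@copy_current_request_context shape comes from the linked answer, so treat the details as illustrative):

import time
from threading import Thread
from flask import copy_current_request_context
# socketio and emit as defined in the test case above

@socketio.on('run_test')
def handle_run_test(run_test):
    @copy_current_request_context
    def background_test():
        # Same loop as before, but off the handler thread, so the
        # Socket.IO heartbeat keeps being answered.
        for x in range(1, 30):
            time.sleep(1)
            emit('test_progress', x)
        emit('test_finished', {'data': None})
    Thread(target=background_test).start()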
Related
Let's run a simple Bottle server:
from bottle import route, run

@route('/')
def index():
    return 'Hello'

if __name__ == "__main__":
    run(port=80)
I wanted to test the speed of the requests with:
import requests

for i in range(10):
    requests.get('http://localhost/?param=%i' % i)
and then I was about to use multithreading to also test what happens if 10 requests arrive at the same time rather than one after another (a sketch of that test follows below), but I was stopped by something strange:
[Animated console of the incoming requests]
Each request seems to take precisely 1 second!
Why such a long time for each request on a Bottle server? (I know it's a micro-framework, but that surely can't explain a full second per response!)
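For reference, the parallel test I was about to run would look something like this (a sketch using stdlib threads against the same URL):

import threading
import requests

def hit(i):
    requests.get('http://localhost/?param=%i' % i)

threads = [threading.Thread(target=hit, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()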
PS: it's the same with urllib.request.urlopen. This code uses the default WSGIRefServer:
Bottle v0.12.18 server starting up (using WSGIRefServer())...
and was tested on Windows 7 + Python 3.7 64 bit.
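The urllib variant mentioned in the PS is just (illustrative):

import urllib.request

for i in range(10):
    urllib.request.urlopen('http://localhost/?param=%i' % i).read()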
I have a Flask API which serves web and mobile apps.
But sometimes under heavy load, the apps and websites stop responding quickly and take a long time to display results.
I just want to enable multithreading in Flask running with WSGIServer.
def main():
    """Main entry point of the app."""
    try:
        http_server = WSGIServer(('0.0.0.0', 8084), app, log=logging, error_log=logging)
        http_server.serve_forever()
    except Exception as exc:
        logger.error(exc.message)
        logger.exception(traceback.format_exc())
    finally:
        # Do something here
        pass
Thanks,
The built-in Flask development server, whilst not intended for production deployment, does allow multithreading:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello, world!'

if __name__ == '__main__':
    app.run(threaded=True)
The above is a simple Hello World script with multithreading enabled; no handler actually spawns another thread here, but it shows where the option goes.
Context
I have a server called "server.py" that functions as a post-commit webhook from GitLab.
Within "server.py", there is a long-running process (~40 seconds)
SSCCE
#!/usr/bin/env python
import time
from flask import Flask, abort, jsonify
debug = True
app = Flask(__name__)
#app.route("/", methods=['POST'])
def compile_metadata():
# the long running process...
time.sleep(40)
# end the long running process
return jsonify({"success": True})
if __name__ == "__main__":
app.run(host='0.0.0.0', port=8082, debug=debug, threaded=True)
Problem Statement
GitLab's webhooks expect a response code to be returned quickly. Since my webhook takes around 40 seconds to return, GitLab retries the request, kicking off my long-running process again in a loop until GitLab gives up after too many attempts.
Question
Am I able to return a status code from Flask back to GitLab, but still run my long running process?
I've tried adding something like:
...
def compile_metadata():
    abort(200)
    # the long running process
    time.sleep(40)
but abort() only supports failure codes.
I've also tried using @after_this_request:
#app.route("/", methods=['POST'])
def webhook():
#after_this_request
def compile_metadata(response):
# the long running process...
print("Starting long running process...")
time.sleep(40)
print("Process ended!")
# end the long running process
return jsonify({"success": True})
Normally, Flask returns a status code only via Python's return statement, but I obviously cannot use that before the long-running process, as returning would exit the function.
Note: I am not actually using time.sleep(40) in my code; it is only a stand-in for the SSCCE, and the real process gives the same result.
Have compile_metadata spawn a thread to handle the long-running task, and then return the result code immediately (i.e., without waiting for the thread to complete). Make sure to include some limitation on the number of simultaneous threads that can be spawned (one way to do that is sketched after the code below).
For a slightly more robust and scalable solution, consider some sort of message-queue-based solution like Celery (a rough sketch follows).
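A rough Celery sketch (the broker URL and module layout are illustrative, not from the original code):

import time
from celery import Celery

# Hypothetical broker URL; any broker Celery supports would do.
celery_app = Celery('tasks', broker='redis://localhost:6379/0')

@celery_app.task
def compile_metadata(data):
    # Stand-in for the real ~40-second job.
    time.sleep(40)

# In the Flask view, enqueue the job and return immediately:
#     compile_metadata.delay(request.json)
#     return jsonify({"success": True}), 202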
For the record, a simple solution might look like:
import time
import threading
from flask import Flask, abort, jsonify
debug = True
app = Flask(__name__)
def long_running_task():
    print('start')
    time.sleep(40)
    print('finished')

@app.route("/", methods=['POST'])
def compile_metadata():
    # spawn the long-running process in a background thread...
    t = threading.Thread(target=long_running_task)
    t.start()
    # ...and return immediately
    return jsonify({"success": True})

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8082, debug=debug, threaded=True)
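The sketch above spawns one thread per request. One way to add the cap on simultaneous threads mentioned earlier, with an arbitrary limit of 10 and an illustrative 503 response for the busy case:

import threading
import time
from flask import Flask, jsonify

app = Flask(__name__)
task_slots = threading.BoundedSemaphore(10)  # illustrative cap: at most 10 tasks in flight

def long_running_task():
    try:
        time.sleep(40)  # the long-running work
    finally:
        task_slots.release()  # free the slot when the work is done

@app.route("/", methods=['POST'])
def compile_metadata():
    # Spawn a worker only if a slot is free; otherwise ask the caller to retry.
    if not task_slots.acquire(blocking=False):
        return jsonify({"success": False, "reason": "busy"}), 503
    threading.Thread(target=long_running_task).start()
    return jsonify({"success": True})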
I was able to achieve this by using multiprocessing.dummy.Pool. Using threading.Thread proved unhelpful, as Flask would still wait for the thread to finish (even with t.daemon = True).
I achieved the result of returning a status code before the long-running task like so:
#!/usr/bin/env python
import time
from flask import Flask, jsonify, request
from multiprocessing.dummy import Pool
debug = True
app = Flask(__name__)
pool = Pool(10)
def compile_metadata(data):
    print("Starting long running process...")
    print(data['user']['email'])
    time.sleep(5)
    print("Process ended!")

@app.route('/', methods=['POST'])
def webhook():
    data = request.json
    pool.apply_async(compile_metadata, [data])
    return jsonify({"success": True}), 202

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8082, debug=debug, threaded=True)
When you want to return a response from the server quickly, and still do some time consuming work, generally you should use some sort of shared storage like Redis to quickly store all the stuff you need, then return your status code. So the request gets served very quickly.
And have a separate server routinely work through that job queue to do the time-consuming work, removing each job from the queue once the work is done, perhaps storing the final result in shared storage as well. This is the normal approach, and it scales very well: if your job queue grows too fast for a single server to keep up with, you can add more servers to work that shared queue. (A minimal sketch follows below.)
But even if you don't need scalability, it's a very simple design to understand, implement, and debug. If you ever get an unexpected spike in request load, it just means that your separate server will probably be chugging away all night long. And you have peace of mind that if your servers shut down, you won't lose any unfinished work because they're safe in the shared storage.
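A minimal sketch of the enqueue-and-return side, assuming a local Redis server and the redis-py package (the queue name and payload shape are illustrative):

import json
import redis
from flask import Flask, jsonify, request

app = Flask(__name__)
r = redis.Redis()  # assumes Redis on localhost:6379

@app.route('/', methods=['POST'])
def webhook():
    # Store the work item quickly, then respond immediately.
    r.lpush('jobs', json.dumps(request.json))
    return jsonify({"queued": True}), 202

# A separate worker process would drain the queue, e.g.:
#     while True:
#         _, raw = r.brpop('jobs')
#         do_time_consuming_work(json.loads(raw))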
But if you have one server do everything, performing the long running tasks asynchronously in the background, I guess maybe just make sure that the background work is happening like this:
------------ Serving Responses
---- Background Work
And not like this:
---- ---- Serving Responses
---- Background Work
Otherwise it would be possible that if the server is performing some block of work in the background, it might be unresponsive to a new request, depending on how long that time consuming work takes (even under very little request load). But if the client times out and retries, I think you're still safe from performing double work. But you're not safe from losing unfinished jobs.
I have an ordinary Flask application, with just one thread to process requests. There are many requests arriving at the same time; they queue up waiting to be processed. How can I get each request's waiting time in the queue?
from flask import Flask, g
import time
app = Flask(__name__)
@app.before_request
def before_request():
    g.start = time.time()
    g.end = None

@app.teardown_request
def teardown_request(exc):
    g.end = time.time()
    print(g.end - g.start)

@app.route('/', methods=['POST'])
def serve_run():
    pass

if __name__ == '__main__':
    app.debug = True
    app.run()
There is no way to do that using Flask's debug server in single-threaded mode (which is what your example code uses). That's because by default, the Flask debug server merely inherits from Python's standard HTTPServer, which is single-threaded. (And the underlying call to select.select() does not return a timestamp.)
I just have one thread to process requests.
OK, but would it suffice to spawn multiple threads while preventing them from doing "real" work in parallel? If so, you might try app.run(..., threaded=True) to allow each request to start immediately in its own thread. After the start timestamp is recorded, use a threading.Lock to force the requests to execute serially (see the sketch below).
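A sketch of that idea (the handler body and the timing print are illustrative):

import threading
import time
from flask import Flask

app = Flask(__name__)
work_lock = threading.Lock()

@app.route('/', methods=['POST'])
def serve_run():
    arrived = time.time()  # recorded as soon as this request's thread starts
    with work_lock:        # requests then do the "real" work one at a time
        print('queue wait: %.3fs' % (time.time() - arrived))
        # ... actual work here ...
    return 'ok'

if __name__ == '__main__':
    app.run(threaded=True)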
Another option is to use a different WSGI server (not the Flask debug server). I suspect there's a way to achieve what you want using Gunicorn configured with asynchronous workers in a single thread.
You can do something like this:
from flask import Flask, current_app, jsonify
import time
app = Flask(__name__)
@app.before_request
def before_request():
    Flask.custom_profiler = {"start": time.time()}

@app.after_request
def after_request(response):
    current_app.custom_profiler["end"] = time.time()
    print(current_app.custom_profiler)
    print(f"""execution time: {current_app.custom_profiler["end"] - current_app.custom_profiler["start"]}""")
    return response

@app.route('/', methods=['GET'])
def main():
    return jsonify({
        "message": "Hello world"
    })

if __name__ == '__main__':
    app.run()
And test it like this:
→ curl http://localhost:5000
{"message":"Hello world"}
Flask server output:
→ python main.py
* Serving Flask app "main" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
{'start': 1622960256.215391, 'end': 1622960256.215549}
execution time: 0.00015807151794433594
127.0.0.1 - - [06/Jun/2021 13:17:36] "GET / HTTP/1.1" 200 -
I have this simple flask/gevent demo code.
#!/usr/bin/env python
import gevent
from gevent.pywsgi import WSGIServer
from gevent import monkey
monkey.patch_socket()
from flask import Flask, Response
app = Flask(__name__)
@app.route('/')
def stream():
    def gen():
        for i in range(10):
            yield "data: %d\r\n" % i
            gevent.sleep(1)
    return Response(gen())

if __name__ == '__main__':
    http = WSGIServer(('', 5000), app)
    http.serve_forever()
When I run it and open multiple urls in the browser, all but one of them block. What am I doing wrong?
I have tried running it with monkey.patch_all(), and running it with gunicorn streaming:app -k gevent; it still blocks in the browser.
Multiple tabs in browsers will block. That doesn't mean gevent/gunicorn isn't running the requests concurrently. I tried it with concurrent curl requests and XMLHttpRequest; it works as expected (see the sketch below). Also note that curl buffers output, and the "\r\n" is required to make it print line by line.
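For instance, a quick concurrency check from outside the browser (stdlib only; assumes the streaming server above is running on port 5000):

import threading
import time
import urllib.request

def fetch(n):
    start = time.time()
    # Read the streamed body line by line as it arrives.
    with urllib.request.urlopen('http://localhost:5000/') as resp:
        for line in resp:
            print('client %d (+%.1fs): %s' % (n, time.time() - start, line.decode().strip()))

threads = [threading.Thread(target=fetch, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

If the lines from different clients interleave, arriving roughly a second apart per client, the server is handling the requests concurrently.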
Sidenote: Thanks to mitsuhiko on #pocoo for resolving it. If you haven't tried Flask, you should. Both mitsuhiko and Flask are awesome.