I developed an API with Bottle, and some requests take a long time to send their response. The problem is that if I send another, short request during this time, I have to wait until the first request has finished.
Here is an example:
from gevent import monkey
monkey.patch_all()

from bottle import route, run

@route('/test', method='GET')
def test():
    return 'hello'

@route('/loop', method='GET')
def loop():
    for i in range(0, 1000000000):
        a = 0

if __name__ == '__main__':
    run(host='127.0.0.1', port=45677, debug=True, server='gevent')
If you run /loop and then /test, you have to wait until /loop has finished before you get the /test response.
I tried with many servers; it is always the same problem.
What am I doing wrong? Thank you for your help.
You need to understand the async approach. With gevent, for instance, async does not mean multithreaded, so anything CPU-bound will still block. Async is great for work that waits on IO, though, like SQL queries.
So your for loop, being purely CPU-bound, will block until it finishes, unless you add a sleep call that yields control and lets other greenlets run in the meantime.
import gevent
from gevent import monkey, spawn as gspawn, sleep as gsleep, socket, signal_handler as sig
monkey.patch_all()
import signal

import bottle
from bottle import Bottle, static_file, get, post, request, response, template, redirect, hook, route, abort
from gevent.pywsgi import WSGIServer
from geventwebsocket.handler import WebSocketHandler

port = 8080  # the original snippet left port undefined

def sample():
    gspawn(myfunc)  # spawn any long-running function (myfunc) in its own greenlet

@get('/')
def app():
    return 'Hello World!'

@get('/test')
def test():
    return 'hello'

@route('/loop')
def loop():
    for i in range(0, 1000000000):
        gsleep(0)  # yield to the hub so other requests can be served
        a = 0

if __name__ == '__main__':
    botapp = bottle.app()
    server = WSGIServer(("0.0.0.0", int(port)), botapp, handler_class=WebSocketHandler)

    def shutdown():
        print('Shutting down ...')
        server.stop(timeout=60)
        exit(signal.SIGTERM)

    sig(signal.SIGTERM, shutdown)
    sig(signal.SIGINT, shutdown)
    server.serve_forever()
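To see why the gsleep(0) call matters, the same cooperative-yield idea can be sketched with stdlib asyncio (an analogy only; the names cpu_loop and short_request are made up for this demo). Without the explicit yield inside the loop, the short task would have to wait for the whole loop to finish:

```python
import asyncio

results = []

async def cpu_loop():
    # CPU-bound work that yields to the event loop on every iteration,
    # mirroring what gsleep(0) does for gevent greenlets.
    for i in range(5):
        results.append(('loop', i))
        await asyncio.sleep(0)  # give other tasks a chance to run

async def short_request():
    results.append(('short', 'hello'))

async def main():
    # Start the long loop first, then the short "request": thanks to the
    # sleep(0) yields, the short task runs long before the loop is done.
    await asyncio.gather(cpu_loop(), short_request())

asyncio.run(main())
print(results)
```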
Related
I am building a python flask service for which I am trying to set up a timeout for each individual POST request.
As I understand it, whenever someone sends a POST request to my RESTful service, a new thread (virtual or real) starts executing it.
In order for my server to serve many requests, I want it to return a TIME-OUT response if a handler runs for longer than a constant time defined for it (TIMEOUT_TIME), set per POST method, and to stop the execution of that individual thread.
Can you propose me an abstract scheme that I could implement, using flask-methods?
One way to do it is to run the request processing in a separate process and terminate it if a timeout is exceeded:
#!/usr/bin/env python3
import time
from multiprocessing import Process
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/api/sleep', methods=['POST'])
def sleep():
    duration = int(request.args.get('duration', 1))
    timeout = float(request.args.get('timeout', 2))
    proc = Process(target=process_request, args=(duration,))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        proc.terminate()
        proc.join()
        return jsonify(success=False, message='timeout exceeded'), 408
    return jsonify(success=True, message='well done')

def process_request(t):
    time.sleep(t)

if __name__ == '__main__':
    app.run(host='localhost', port=8080, debug=True)
In this example, when the sleep duration is less than the given timeout, the user gets a successful response:

curl -X POST http://localhost:8080/api/sleep?duration=1\&timeout=2
{
  "message": "well done",
  "success": true
}

Otherwise, the user gets a 408 error:

curl -X POST http://localhost:8080/api/sleep?duration=2\&timeout=1
{
  "message": "timeout exceeded",
  "success": false
}
The problem with this approach is noted in the docs:
Note that exit handlers and finally clauses, etc., will not be executed.
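You can observe this with a small stdlib-only sketch (separate from the Flask code above): the worker's finally clause never runs when the parent calls terminate():

```python
import multiprocessing as mp
import time

ctx = mp.get_context('fork')  # fork keeps this flat demo script simple on Linux

def worker(flags):
    try:
        time.sleep(30)           # stands in for a long-running request
    finally:
        flags.put('cleaned up')  # skipped when the process is terminated

flags = ctx.Queue()
proc = ctx.Process(target=worker, args=(flags,))
proc.start()
proc.join(0.5)    # the "timeout" expires while the worker is still sleeping
proc.terminate()  # sends SIGTERM; exit handlers and finally clauses are skipped
proc.join()
print('cleanup ran:', not flags.empty())
```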
It means that the running processes won't be able to clean up before exiting, which might cause problems. Another solution is to use a special Joiner thread that joins worker processes or threads later on, in case the timeout is exceeded:
#!/usr/bin/env python3
import time
from queue import Queue
from threading import Thread
from flask import Flask, request, jsonify

class Joiner(Thread):
    def __init__(self):
        super().__init__()
        self.workers = Queue()

    def run(self):
        while True:
            worker = self.workers.get()
            if worker is None:
                break
            worker.join()

app = Flask(__name__)

@app.route('/api/sleep', methods=['POST'])
def sleep():
    duration = int(request.args.get('duration', 1))
    timeout = int(request.args.get('timeout', 2))
    worker = Thread(target=process_request, args=(duration,))
    worker.start()
    worker.join(timeout)
    if worker.is_alive():
        joiner.workers.put(worker)
        return jsonify(success=False, message='timeout exceeded'), 408
    return jsonify(success=True, message='well done')

def process_request(t):
    time.sleep(t)

if __name__ == '__main__':
    joiner = Joiner()
    joiner.start()
    app.run(host='localhost', port=8080, debug=True)
    joiner.workers.put(None)
    joiner.join()
Here, a Joiner thread instance is created and started before the Flask server runs. Once the server has stopped, we put None into the joiner.workers queue to signal the joiner thread to finish.
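The Joiner idea also works in isolation, without Flask; a minimal sketch with plain threads:

```python
import time
from queue import Queue
from threading import Thread

class Joiner(Thread):
    # A reaper thread that joins workers whose join() timed out
    # in the request handler.
    def __init__(self):
        super().__init__()
        self.workers = Queue()

    def run(self):
        while True:
            worker = self.workers.get()
            if worker is None:  # sentinel: shut the joiner down
                break
            worker.join()

joiner = Joiner()
joiner.start()

slow = Thread(target=time.sleep, args=(0.5,))
slow.start()
slow.join(0.1)                # emulate the per-request timeout
if slow.is_alive():
    joiner.workers.put(slow)  # hand the straggler to the joiner

joiner.workers.put(None)  # after "server shutdown", stop the joiner
joiner.join()             # returns only once the slow worker has finished
print(slow.is_alive())
```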
I start a server and serve some data from my function, but I want this function to update the data so the server displays the new value. However, when I start the web server, it only serves the first value computed by the function.
I use the imported library "schedule" to run my function at an interval I choose, and the Bottle web framework to start the server and do the routing.
def read_file():
    f = open("345.txt", "r")
    hi.contents = f.read()
    print(hi.contents)

def server_start():
    @route('/as', method='GET')
    def display_status():
        try:
            return hi.contents
        except Exception:
            logging.exception("")
            return "Service unavailable. Check logs"
    run(host='0.0.0.0', port=8033)
    print("sadq")

schedule.every(3).seconds.do(read_file)
server_start()
while True:
    schedule.run_pending()
    time.sleep(1)
I expect to get updated results on my web server. I would be very glad if you could help me or give some advice. Thank you all.
First, I would run bottle with an async server, specifically gevent.
import gevent
from gevent import monkey, signal_handler as sig
monkey.patch_all()
import signal
import logging

from bottle import Bottle
from gevent.pywsgi import WSGIServer
import scheduler

app = Bottle()

@app.route('/as', method='GET')
def display_status():
    try:
        return scheduler.contents
    except Exception:
        logging.exception("")
        return "Service unavailable. Check logs"

server = WSGIServer(("0.0.0.0", 8083), app)

def shutdown():
    print('Shutting down ...')
    server.stop(timeout=60)
    exit(signal.SIGTERM)

sig(signal.SIGTERM, shutdown)
sig(signal.SIGINT, shutdown)  # Ctrl+C
server.serve_forever()
Then I would launch your scheduler in a separate file, scheduler.py:
from gevent import spawn, sleep
import schedule

contents = ''

def read_file():
    global contents
    f = open("345.txt", "r")
    contents = f.read()
    print(contents)

def start_thread():
    while 1:
        schedule.run_pending()
        sleep(1)

schedule.every(3).seconds.do(read_file)
spawn(start_thread)
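If you don't want gevent at all, the same split (a module-level value refreshed by a background loop, read by the handler) can be sketched with stdlib threading; the file path and the 0.1 s interval below are purely illustrative:

```python
import os
import tempfile
import threading
import time

contents = ''  # shared state the web handler would return

def read_file(path):
    global contents
    with open(path, 'r') as f:
        contents = f.read()

def refresher(path, interval, stop):
    # Background loop playing the role of schedule.run_pending()
    while not stop.is_set():
        read_file(path)
        stop.wait(interval)

path = os.path.join(tempfile.mkdtemp(), '345.txt')
with open(path, 'w') as f:
    f.write('first')

stop = threading.Event()
t = threading.Thread(target=refresher, args=(path, 0.1, stop), daemon=True)
t.start()

time.sleep(0.3)
first = contents          # picked up the initial file content
with open(path, 'w') as f:
    f.write('second')
time.sleep(0.3)
second = contents         # picked up the update without any restart
stop.set()
t.join()
print(first, second)
```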
I am using the tornado library in Python. I have a queue where data gets added. I have to keep the connection open so that when the client requests, I send out items from the queue. Here is a simple implementation of mine. The problem I face is that when I add new elements to the queue, I don't see them being returned. In fact, I don't see any code executed below the IOLoop.current().start() line.
from tornado.ioloop import IOLoop
from tornado.web import RequestHandler, Application, url, asynchronous
from Queue import Queue
import json

q = Queue()
q.put("one")
q.put("two")

class HelloHandler(RequestHandler):
    def get(self):
        data = q.get()
        self.write(data)

def make_app():
    return Application([
        url(r"/", HelloHandler),
    ])

def main():
    app = make_app()
    app.listen(8888)
    IOLoop.current().start()  # stops here
    time.sleep(2)
    q.put("three")
    print q

if __name__ == '__main__':
    main()
The first request to http://localhost:8888/ returns "one".
The second request returns "two".
The third request blocks.
The problem you have is that calling IOLoop.current().start() transfers control to Tornado; it loops until IOLoop.stop() is called.
If you need to do something after the IOLoop has started, you can use one of its callback mechanisms. For example, here is your code modified to use IOLoop.call_later(); you could also use IOLoop.add_timeout() if you are on an earlier (< 4.0) version of Tornado.
from tornado.ioloop import IOLoop
from tornado.web import RequestHandler, Application, url, asynchronous
from Queue import Queue
import json

q = Queue()
q.put("one")
q.put("two")

class HelloHandler(RequestHandler):
    def get(self):
        data = q.get()
        self.write(data)

def make_app():
    return Application([
        url(r"/", HelloHandler),
    ])

def main():
    app = make_app()
    app.listen(8888)
    IOLoop.current().call_later(2, q.put, "three")
    IOLoop.current().start()

if __name__ == '__main__':
    main()
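For comparison, stdlib asyncio exposes the same primitive: loop.call_later schedules a plain callback once the loop is running. This sketch is independent of Tornado (the 0.1 s delay is just for the demo):

```python
import asyncio
from queue import Queue

q = Queue()
q.put("one")
q.put("two")

async def main():
    loop = asyncio.get_running_loop()
    # Schedule q.put("three") shortly after the loop starts, just like
    # IOLoop.current().call_later(2, q.put, "three") does in Tornado.
    loop.call_later(0.1, q.put, "three")
    await asyncio.sleep(0.3)  # keep the loop alive past the callback

asyncio.run(main())
print(q.qsize())
```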
I have a bunch of long-running scripts which do some number crunching and, as they run, write output to the console via print. I want to invoke these scripts from a browser and display their progress in the browser as they run. I'm currently playing with bottle and am working through this primer, http://bottlepy.org/docs/dev/async.html#, which is rather neat.
I'd like to try event callbacks, http://bottlepy.org/docs/dev/async.html#event-callbacks, as this seems to match my problem exactly: the script would run as an AsyncWorker (ideally managed by some message queue to limit the number running at any one instant) and periodically write back its state. But I cannot figure out what SomeAsyncWorker() is. Is it a tornado class or a gevent class I have to implement, or something else?
@route('/fetch')
def fetch():
    body = gevent.queue.Queue()
    worker = SomeAsyncWorker()
    worker.on_data(body.put)
    worker.on_finish(lambda: body.put(StopIteration))
    worker.start()
    return body
I've found one way of doing this using gevent.queue here, http://toastdriven.com/blog/2011/jul/31/gevent-long-polling-you/, which shouldn't be hard to adapt to work with bottle:
# wsgi_longpolling/better_responses.py
from gevent import monkey
monkey.patch_all()

import datetime
import time

from gevent import Greenlet
from gevent import pywsgi
from gevent import queue

def current_time(body):
    current = start = datetime.datetime.now()
    end = start + datetime.timedelta(seconds=60)
    while current < end:
        current = datetime.datetime.now()
        body.put('<div>%s</div>' % current.strftime("%Y-%m-%d %I:%M:%S"))
        time.sleep(1)
    body.put('</body></html>')
    body.put(StopIteration)

def handle(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    body = queue.Queue()
    body.put(' ' * 1000)
    body.put("<html><body><h1>Current Time:</h1>")
    g = Greenlet.spawn(current_time, body)
    return body

server = pywsgi.WSGIServer(('127.0.0.1', 1234), handle)
print "Serving on http://127.0.0.1:1234..."
server.serve_forever()
(Not exactly an answer to your question, but here's another tack you could take.)
I've cobbled together a very simple multi-threaded WSGI server that fits nicely under bottle. Here's an example:
import bottle
import time
from mtbottle import MTServer

app = bottle.Bottle()

@app.route('/')
def foo():
    time.sleep(2)
    return 'hello, world!\n'

app.run(server=MTServer, host='0.0.0.0', port=8080, thread_count=3)
# app is nonblocking; it will handle up to 3 requests concurrently.
# A 4th concurrent request will block until one of the first 3 completes.
https://github.com/RonRothman/mtwsgi
One downside is that all endpoints on that port will be handled asynchronously; in contrast, the gevent approach (I think) gives you more control over which handlers are asynchronous and which are synchronous.
Hope this helps!
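If you'd rather avoid the extra dependency, a comparable multi-threaded WSGI server can be built from the stdlib by mixing socketserver.ThreadingMixIn into wsgiref's server (a sketch of the same idea, not the mtwsgi implementation):

```python
import threading
import time
import urllib.request
from socketserver import ThreadingMixIn
from wsgiref.simple_server import WSGIServer, make_server

class MTWSGIServer(ThreadingMixIn, WSGIServer):
    daemon_threads = True  # handle each request in its own thread

def app(environ, start_response):
    time.sleep(0.4)  # simulate a slow endpoint
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello, world!\n']

server = make_server('127.0.0.1', 0, app, server_class=MTWSGIServer)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Two concurrent requests overlap instead of running back to back.
start = time.time()
reqs = [threading.Thread(target=urllib.request.urlopen,
                         args=('http://127.0.0.1:%d/' % port,))
        for _ in range(2)]
for r in reqs:
    r.start()
for r in reqs:
    r.join()
elapsed = time.time() - start
server.shutdown()
print('two overlapping requests took %.2fs' % elapsed)
```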
I am trying to cobble together a test which allows websockets clients to connect to a Tornado server, and I want the Tornado server to send out a message to all clients every X seconds.
The reason I am doing this is that websockets connections are being silently dropped somewhere, and I am wondering if periodic "pings" sent by the websockets server will maintain the connection.
I'm afraid it's a pretty noobish question, and the code below is rather a mess. I just don't have my head wrapped around Tornado and scope enough to make it work.
import tornado.httpserver
import tornado.websocket
import tornado.ioloop
import tornado.web
import tornado.gen
import time

from tornado import gen

class WSHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        print 'http://mailapp.crowdwave.com/girlthumb.jpg'
        self.write_message("http://www.example.com/girlthumb.jpg")

    def on_message(self, message):
        print 'Incoming message:', message
        self.write_message("http://www.example.com/girlthumb.jpg")

    def on_close(self):
        print 'Connection was closed...'

@gen.engine
def f():
    yield gen.Task(tornado.ioloop.IOLoop.instance().add_timeout, time.time() + 8)
    self.write_message("http://www.example.com/x.png")
    print 'x'

@gen.engine
def g():
    yield gen.Task(tornado.ioloop.IOLoop.instance().add_timeout, time.time() + 4)
    self.write_message("http://www.example.com/y.jpg")
    print 'y'

application = tornado.web.Application([
    (r'/ws', WSHandler),
])

if __name__ == "__main__":
    tornado.ioloop.IOLoop.instance().add_callback(f)
    tornado.ioloop.IOLoop.instance().add_callback(g)
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
Why don't you try writing a scheduler inside it? :)
def schedule_func():
    pass  # DO SOMETHING

# milliseconds
interval_ms = 15

main_loop = tornado.ioloop.IOLoop.instance()
sched = tornado.ioloop.PeriodicCallback(schedule_func, interval_ms, io_loop=main_loop)

# start your periodic timer
sched.start()
# start your loop
main_loop.start()
Found that the accepted answer for this is almost exactly what I want:
How to run functions outside websocket loop in python (tornado)
With a slight modification, the accepted answer at the above link continually sends out ping messages. Here is the mod:
Change:
def test(self):
    self.write_message("scheduled!")
to:
def test(self):
    self.write_message("scheduled!")
    tornado.ioloop.IOLoop.instance().add_timeout(datetime.timedelta(seconds=5), self.test)
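The same re-arming trick is not Tornado-specific; with stdlib threading.Timer, a callback can reschedule itself each time it fires (the 0.05 s interval and the three-ping cutoff are just for this demo):

```python
import threading

count = 0
done = threading.Event()

def ping():
    # Re-arm on every call, like add_timeout(..., self.test) above.
    global count
    count += 1
    if count >= 3:
        done.set()  # stop after a few "pings" so the demo terminates
    else:
        threading.Timer(0.05, ping).start()

threading.Timer(0.05, ping).start()
done.wait(2)
print(count)
```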