Using gevent and Flask to implement a websocket, how to achieve concurrency?

So I'm using flask_sockets to try to implement a websocket on Flask. Using this, I hope to notify all connected clients whenever a piece of data has changed. Here's a simplification of my routes/index.py. The issue I have is that once a websocket connection is opened, it stays in the notify_change loop until the socket is closed, and in the meantime other routes like /users can't be accessed.
from flask_sockets import Sockets

sockets = Sockets(app)

@app.route('/users', methods=['GET'])
def users():
    return json.dumps(get_users())

data = "Some value"  # the piece of data to push
is_dirty = False     # a flag which is set by the code that changes the data

@sockets.route("/notifyChange")
def notify_change(ws):
    global is_dirty, data
    while not ws.closed:
        if is_dirty:
            is_dirty = False
            ws.send(data)
This seems a normal consequence of what is essentially a while True; however, I've been looking online for a way around this while still using flask_sockets and haven't had any luck. I'm running the server on Gunicorn:
flask/bin/gunicorn -b '0.0.0.0:8000' -k flask_sockets.worker app:app
I tried adding threads with --threads 12, but no luck.
Note: I'm only serving up to 4 users at a time, so scaling is not a requirement, and the solution doesn't need to be ultra-optimized.
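For what it's worth, the usual way out of this with a gevent worker (a sketch, not from the original question) is to yield inside the loop so the gevent hub can schedule other greenlets, such as the one handling /users:
import gevent

@sockets.route("/notifyChange")
def notify_change(ws):
    global is_dirty, data
    while not ws.closed:
        if is_dirty:
            is_dirty = False
            ws.send(data)
        # Yield to the gevent hub so other greenlets (e.g. the /users
        # route) get a chance to run; without this the busy loop
        # starves every other connection handled by the worker.
        gevent.sleep(0.1)
With the flask_sockets.worker (gevent) class, each connection runs in its own greenlet, but greenlets are cooperative: a loop that never blocks on I/O or sleeps never hands control back, which is why --threads has no effect here.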

Related

Python deploying infinite while loop to production and handling failures

I have a script that waits for tasks from a task queue and then runs them. Something like this minimal example:
import time

import redis

cache = redis.Redis(host='127.0.0.1', port=6379)

def main():
    while True:
        message = cache.blpop('QUEUE', timeout=0)
        work(message)

def work(message):
    print(f"beginning work: {message}")
    time.sleep(10)

if __name__ == "__main__":
    main()
I am not using a web server because the script does not need to answer HTTP requests. However, I've confused myself a bit about how to make this script robust against errors in production.
With a web server and Gunicorn, Gunicorn would handle forking a process for each request. If the request causes an error, the worker dies and the request fails, but the server continues to run.
How can I achieve this if I'm not running an HTTP server? I could fork a process to perform the work function, but the code performing the fork would still be application code.
Is it possible to deploy a non-HTTP server script like mine using Gunicorn? Is there something else I should be using to handle forking processes?
Or is it reasonable to fork inside the application and deploy to production?
What about this:
while True:
    try:
        message = cache.blpop('QUEUE', timeout=0)
        work(message)
    except Exception as e:
        print(e)
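Beyond catching exceptions in the loop, a hedged sketch of isolating each task in a child process (reusing cache and work() from the question) keeps the dispatcher alive even if work() crashes hard, which is roughly what Gunicorn's worker model gives an HTTP app:
import logging
import multiprocessing

def main():
    while True:
        message = cache.blpop('QUEUE', timeout=0)
        # Run the task in a child process so a crash in work()
        # (even a segfault) kills only the child, not the loop.
        p = multiprocessing.Process(target=work, args=(message,))
        p.start()
        p.join()
        if p.exitcode != 0:
            logging.error("worker failed with exit code %s", p.exitcode)
A process supervisor such as systemd or supervisord can then restart the dispatcher itself if it ever dies.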

Flask RuntimeError: working outside of application context

I want to run my Flask app with websockets. Everything seems to be OK as long as I start my joiner class (running as a thread) and then want to register a callback function. This works with the Flask development server.
As English is not my first language, I have trouble understanding the context issues in Flask. Any help would be very much appreciated.
@socketio.on('change_R8', namespace='/fl')
def change_Relay8(R8_stat):
    if R8_stat == 'on':
        # print("Relay 8 on")
        ui.set_relay(8, 1, 0)
    elif R8_stat == 'off':
        # print("Relay 8 off")
        ui.set_relay(8, 0, 0)

# Listen for SocketIO event that will change analog output
@socketio.on('change_ao', namespace='/fl')
def change_ao(ao_value):
    # print("setze ao auf: ", ao_value)
    ui.set_ao(ao_value)

# --- callback function from UniPi_joiner_class ---------------------------
def unipi_change(event, data):
    # print("Webserver in: ", event, data)
    emit_to_all_clients(event, data)

# main program -------------------------------------------------------------
if __name__ == "__main__":
    log.text("Flask Web-Server gestartet")
    print("Flask Web-Server gestartet")
    joiner = unipi_joiner("10.0.0.52", 0)
    joiner.on_unipi_change(unipi_change)
    socketio.run(app, host='127.0.0.1', use_reloader=False, debug=False)
    log.text("Flask Web-Server beendet")
The joiner function delivers data from sensors in the format event, data (JSON), which I emit to my website with broadcast. The data comes from two different, time-dependent sources and is joined together in the joiner function using queues. This works with the Flask development server. When I use eventlet, joiner.on_unipi_change(unipi_change) does not work and raises the context error. I tested the server with data generated inside Flask and it worked.
Question: would it be possible to deliver the sensor data through a websocket to my Flask server, and then from the Flask server to my website? This would be very interesting, as I would have different Raspberry Pi 3s collecting data and sending it to my web server.
Regarding a complete stack trace, I need some guidelines (sorry, Flask beginner).
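If the error comes from calling Flask-SocketIO's emit helper outside a request, a minimal sketch of one common workaround (assuming socketio is the Flask-SocketIO instance and emit_to_all_clients wraps flask_socketio.emit) is to call the emit method on the SocketIO object itself, which does not need a request context:
def unipi_change(event, data):
    # socketio.emit (the method on the SocketIO instance) may be called
    # from a background thread; flask_socketio.emit, by contrast, needs
    # a request context. Without a room argument it broadcasts to all
    # connected clients.
    socketio.emit(event, data, namespace='/fl')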

redis get function returns None

I am working on a Flask application that interacts with Redis. The application is deployed on Heroku, with a Redis add-on.
When I do some testing with the interaction, I am not able to get the key-value pair that I just set. Instead, I always get None as the return value. Here is the example:
import os

from flask import Flask
import redis

app = Flask(__name__)
redis_url = os.getenv('REDISTOGO_URL', 'redis://localhost:6379')
redis = redis.from_url(redis_url)

@app.route('/test')
def test():
    redis.set("test", "{test1: test}")
    print redis.get("test")  # prints None here
    return "what the freak"

if __name__ == "__main__":
    app.run(host='0.0.0.0')
As shown above, the test route prints None, meaning the value is not set. I am confused. When I test the server in my local browser it works, and when I interact with Redis using the Heroku Python shell it works too.
Testing with the Python shell:
heroku run python
>>> from server import redis
>>> redis.set('test', 'i am here')  # returns True
>>> redis.get('test')               # returns 'i am here'
I am confused now. How should I properly interact with redis using Flask?
Redis-py constructs a ConnectionPool client by default, and this is probably what the from_url helper function is doing. While Redis itself is single-threaded, commands issued through a connection pool have no guaranteed order of execution. For a single client, construct a redis.StrictRedis client directly, or pass the parameter connection_pool=None. This is preferable for simple commands that are low in number, as there is less connection-management overhead. Alternatively, you can use a pipeline in the context of a connection pool to serialise a batch operation.
https://redis-py.readthedocs.io/en/latest/#redis.ConnectionPool
https://redis-py.readthedocs.io/en/latest/#redis.Redis.pipeline
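A minimal sketch of both suggestions (assuming a local Redis instance; not from the original post):
import redis

# Construct a single client directly rather than relying on
# from_url's connection-pool defaults.
r = redis.StrictRedis(host='localhost', port=6379, db=0)

r.set('test', '{test1: test}')
print(r.get('test'))

# Or batch the commands in a pipeline so they are sent and
# executed together, in order.
pipe = r.pipeline()
pipe.set('test', '{test1: test}')
pipe.get('test')
print(pipe.execute())  # [True, b'{test1: test}']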
I did more experiments on this. It seems there is an issue related to delay; the modification below makes it work:
@app.route('/test')
def test():
    redis.set("test", "{test1: test}")
    time.sleep(5)  # add the delay needed to let the set finish
    print redis.get("test")  # prints "{test1: test}" here
    return "now it works"
I read the documentation on Redis; it seems to be single-threaded, so I am not sure why it executes the get call before the set is done. Someone with more experience, please post an explanation.
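As a hedged aside (not from the original post): redis-py's set() is synchronous and returns only after the server has replied, so checking its return value is a cheaper sanity check than sleeping:
@app.route('/test')
def test():
    ok = redis.set("test", "{test1: test}")
    print(ok)  # True once the server has acknowledged the SET
    print(redis.get("test"))
    return "now it works"
If set() returns True and get() still returns None, the two calls are most likely not talking to the same Redis instance.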

Counting the number of requests per second in Tornado

I am new to Python and the Tornado web server.
I am trying to figure out the number of requests and the number of requests per second in my server-side code. I am using Tornadio2 to implement websockets.
Kindly take a look at the following code and let me know what modifications can be made to it.
I am using RequestHandler.prepare() to intercept all the requests, and a list to store the count, since a list is mutable and can be updated from inside the handlers.
Assume all required modules are imported.
count = [0]

class IndexHandler(tornado.web.RequestHandler):
    """Regular HTTP handler to serve the chatroom page"""
    def prepare(self):
        count[0] = count[0] + 1

    def get(self):
        self.render('index1.html')

class SocketIOHandler(tornado.web.RequestHandler):
    def get(self):
        self.render('../socket.io.js')

partQue = Queue.Queue()

class ChatConnection(tornadio2.conn.SocketConnection):
    participants = set()

    def on_open(self, info):
        self.send("Welcome from the server.")
        self.participants.add(self)

    def on_message(self, message):
        partQue.put(message)
        time.sleep(10)
        self.qmes = partQue.get()
        for p in self.participants:
            p.send(self.qmes + " " + str(count[0]))
        partQue.task_done()

    def on_close(self):
        self.participants.remove(self)
        partQue.join()

# Create tornadio server
ChatRouter = tornadio2.router.TornadioRouter(ChatConnection)

# Create socket application
sock_app = tornado.web.Application(
    ChatRouter.urls,
    flash_policy_port=843,
    flash_policy_file=op.join(ROOT, 'flashpolicy.xml'),
    socket_io_port=8002)

# Create HTTP application
http_app = tornado.web.Application(
    [(r"/", IndexHandler), (r"/socket.io.js", SocketIOHandler)])

if __name__ == "__main__":
    import logging
    logging.getLogger().setLevel(logging.DEBUG)

    # Create http server on port 8001
    http_server = tornado.httpserver.HTTPServer(http_app)
    http_server.listen(8001)

    # Create tornadio server on port 8002, but don't start it yet
    tornadio2.server.SocketServer(sock_app, auto_start=False)

    # Start both servers
    tornado.ioloop.IOLoop.instance().start()
Also, I am confused about WebSocket messages. Does each WebSocket event reach the server in the form of an HTTP request, or a Socket.IO request?
I use Siege, an excellent tool for testing requests if you're running on Linux. Example:
siege http://localhost:8000/?q=yourquery -c10 -t10s
-c10 = 10 concurrent users
-t10s = 10 seconds
Tornadio2 has a built-in statistics module, which includes incoming connections per second and other counters.
Check the following example: https://github.com/MrJoes/tornadio2/tree/master/examples/stats
When testing applications, always approach performance testing with a healthy appreciation for the uncertainty principle.
If you want to test a server, hook up two PCs to a hub where you can monitor the traffic going from one to the other. Then bang the hell out of the server. There are a variety of tools for doing this; just look for web load testing tools.
Normal HTTP requests in Tornado create a new RequestHandler instance, which persists until the connection is terminated.
WebSockets use persistent connections. One WebSocketHandler instance is created, and each message sent by the browser to the server calls the on_message method.
From what I understand, Socket.IO/TornadIO will use WebSockets if supported by the browser, falling back to long polling.
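To turn the count[0] counter from the question into a requests-per-second figure, one sketch (assuming the counter and handlers above; not from the original answers) is a PeriodicCallback that logs and resets the counter every second:
import tornado.ioloop

def report_rate():
    # count[0] is incremented in IndexHandler.prepare() above
    print("requests in the last second: %d" % count[0])
    count[0] = 0

# fire every 1000 ms once the IOLoop is started
tornado.ioloop.PeriodicCallback(report_rate, 1000).start()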

Tornado connections not closing in freeBSD

I have a Tornado web server, something like:
app = tornado.web.Application(handlersList, log_function=printIt)
app.listen(port)
serverInstance = tornado.ioloop.IOLoop.instance()
serverInstance.start()
The handlers are made with tornado.web.RequestHandler.
When I run the server on FreeBSD, the page/resource sometimes takes a long time to load. Trying to debug, I see that while waiting for the page to load, Tornado hasn't created a request object yet, and looking at netstat results I see a lot of connections with status ESTABLISHED.
So my thought is that there are too many unclosed connections, and the operating system refuses new connections coming from the same session.
Can this be the case?
I don't do anything in the get/post functions after writing; should I somehow shut down/close the connection before returning?
EDIT 1: get/post are synchronous (no @asynchronous)
EDIT 2: temporarily fixed by forcing no_keep_alive:
class BasicFeedHandler(tornado.web.RequestHandler):
    def finish(self, chunk=None):
        self.request.connection.no_keep_alive = True
        tornado.web.RequestHandler.finish(self, chunk)
I'm not sure whether keep-alive connections should stay open this long after the client closes the connection; anyway, this workaround works.
I found how to do this by looking at HTTPConnection._finish_request: when there is no keep-alive, the line self.stream.read_until(b("\r\n\r\n"), self._header_callback) runs.
What is \r\n\r\n in this context?
Try this:
class Application(tornado.web.Application):
    def __init__(self):
        ...

http_server = tornado.httpserver.HTTPServer(Application(), no_keep_alive=True)
http_server.listen(port)
tornado.ioloop.IOLoop.instance().start()
