I want to listen for commits to a database with SQLAlchemy and post updates to the browser with Server Sent Events.
I have the below view in a Flask app:
@event.listens_for(scoped_session, 'after_commit')
def event_stream(session):
    yield 'data: %s\n\n' % 'helloworld'

@app.route('/stream')
def stream():
    return Response(event_stream(scoped_session), mimetype="text/event-stream")
And then simply, in js:
var source = new EventSource('/stream');
source.onmessage = function (event) {
    console.log(event);
};
The app is fulfilling the request every 3 seconds (the generator ends after a single yield, so the browser's EventSource reconnects at its default retry interval) and is disregarding my attempted use of the ORM event decorator. What am I misunderstanding?
SQLAlchemy executes event callbacks itself. It is in no way tied to Flask's request/response cycle (beyond the fact that you happen to be using it within Flask) or to server-sent events. What happens to values returned from event callbacks is entirely up to SQLAlchemy, and it has no feature where yielding from a callback generates a server-sent event through Flask.
You can stream a response with Flask, so that the client receives data over time.
It looks like what you're really trying to do is send an event notification to the client from the server. Use a system such as Flask-SocketIO or some other event server + websocket setup to connect a websocket from the client to the server.
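If plain server-sent events are enough, one workable pattern (a minimal sketch of my own, not the answer's code: the updates queue and on_commit name are assumptions, and it presumes a single threaded process) is to have the after_commit listener push into a queue that a streamed response drains:

import queue

from flask import Flask, Response
from sqlalchemy import event

app = Flask(__name__)
updates = queue.Queue()  # hypothetical shared queue; serves one waiting client at a time

@event.listens_for(scoped_session, 'after_commit')  # scoped_session: the session your app already uses
def on_commit(session):
    # Push a notification instead of yielding; SQLAlchemy ignores
    # values returned from event callbacks.
    updates.put('helloworld')

@app.route('/stream')
def stream():
    def event_stream():
        while True:
            data = updates.get()  # blocks until the next commit
            yield 'data: %s\n\n' % data
    return Response(event_stream(), mimetype='text/event-stream')

With this, the generator stays open instead of ending after one yield, so EventSource stops reconnecting every 3 seconds.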
Related
I have done threading in my Flask application. I have to log data to a separate MySQL table, and that has to happen asynchronously. My main function collects all the data, and before sending the Flask response I start a thread, so that the response is sent on time while the thread function runs in the background. This works fine on the local Flask development server. But when I deploy it on a uWSGI server I have to enable threads in uWSGI, and after that, when my thread function is called, the data in the thread is lost and there is no value in my variable.
My main Flask function
@app.route('/', methods=['POST'])
def mainfunction():
    Dictionary['Name'] = 'MyName'
    Dictionary['Age'] = 'MyAge'
    Dictionary['Address'] = 'MyAddress'
    t1 = threading.Thread(target=loadinDBUsingThread, args=(Dictionary,))
    t1.start()
    return json.dumps(Dictionary)
My Thread Function
def loadinDBUsingThread(Dictionary):
    localVariable0 = Dictionary['Name']
    localVariable1 = Dictionary['Age']
    localVariable2 = Dictionary['Address']
    # Insert these variables into the database
I get a KeyError: 'Name' is not found in the Dictionary. I don't know how my variables are getting lost. Please help me with this.
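One likely cause (an assumption on my part; the question never shows where Dictionary is defined) is that Dictionary is a module-level dict shared by all requests, so it can be cleared or overwritten after the response is returned but before the worker thread reads it. Handing the thread its own snapshot sidesteps that:

# Sketch of a fix under that assumption: pass a copy, not the shared dict.
t1 = threading.Thread(target=loadinDBUsingThread, args=(dict(Dictionary),))
t1.start()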
I am using Python 3.5 with the package spyne 2.13, running on a Gunicorn server v19.9.
I wrote a small SOAP webservice with Python spyne (working well). It takes a string and enqueues it to RabbitMQ. It doesn't necessarily have to be RabbitMQ; a simple DB insert or the like would also do. Right now it works fine, but each time the webservice is called, it
opens a RabbitMQ connection (or a DB connection if you'd like)
sends the message
closes the connection again(?)
I'd like to somehow preserve the connection in some sort of 'instance variable' and reuse it every time the webservice gets called, so that it connects only once and not on every call. Unfortunately, spyne does not seem to create any service objects, so there are no instance variables.
Generally: How can I preserve a state (DB or RabbitMQ Connection) when using spyne?
So I tried this trick with static class properties, like so:
import pika
from spyne.decorator import rpc
from spyne.service import ServiceBase
from spyne.model.primitive import AnyXml, Unicode

class Ws2RabbitMQ(ServiceBase):
    rabbit_connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='localhost'))
    rabbit_channel = rabbit_connection.channel()

    @staticmethod
    def connectRabbit():
        rabbit_cred = pika.PlainCredentials(username='...', password='...')
        Ws2RabbitMQ.rabbit_connection = pika.BlockingConnection(pika.ConnectionParameters(
            host='...', virtual_host='...', credentials=rabbit_cred))
        Ws2RabbitMQ.rabbit_channel = Ws2RabbitMQ.rabbit_connection.channel()
        print('Rabbit connected!')

    @rpc(AnyXml, _returns=Unicode)
    def exportGRID(ctx, payload):
        try:
            if not Ws2RabbitMQ.rabbit_connection.is_open:
                print('RabbitMQ Connection lost - reconnecting...')
                Ws2RabbitMQ.connectRabbit()
        except Exception as e:
            print('RabbitMQ Connection not found - initiating...')
            Ws2RabbitMQ.connectRabbit()
        Ws2RabbitMQ.rabbit_channel.basic_publish(
            exchange='ws2rabbitmq', routing_key="blind", body=payload)
        print(" [x] Sent")
        return 'OK'
When I call the webservice twice, it works. Now the connection is created only once and kept in the class-level attribute.
Here is the script's output:
RabbitMQ Connection not found - initiating...
Rabbit connected!
[x] Sent
[x] Sent
I want to run my Flask app with websockets. Everything seems to be OK as long as I start my joiner class (running as a thread) and then register a callback function. This works with the Flask development server.
As I am not very good at English, I have trouble understanding the context issues with Flask. Any help would be very much appreciated.
@socketio.on('change_R8', namespace='/fl')
def change_Relay8(R8_stat):
    if R8_stat == 'on':
        # print("Relay 8 on")
        ui.set_relay(8, 1, 0)
    elif R8_stat == 'off':
        # print("Relay 8 off")
        ui.set_relay(8, 0, 0)

# Listen for SocketIO event that will change analog output
@socketio.on('change_ao', namespace='/fl')
def change_ao(ao_value):
    # print("setze ao auf: ", ao_value)
    ui.set_ao(ao_value)

# --- callback function from UniPi_joiner_class ---------------------------
def unipi_change(event, data):
    # print("Webserver in: ", event, data)
    emit_to_all_clients(event, data)

# main program ----------------------------------------------------------
if __name__ == "__main__":
    log.text("Flask Web-Server gestartet")
    print("Flask Web-Server gestartet")
    joiner = unipi_joiner("10.0.0.52", 0)
    joiner.on_unipi_change(unipi_change)
    socketio.run(app, host='127.0.0.1', use_reloader=False, debug=False)
    log.text("Flask Web-Server beendet")
The joiner function delivers data from the sensors in the format event, data (JSON), which I emit to my website as a broadcast. The data comes from two different time-dependent sources and is joined together in the joiner function using queues. This works with the Flask development server. When I use eventlet, joiner.on_unipi_change(unipi_change) does not work and raises a context error. I tested the server with data generated from within Flask, and that worked.
Question: would it be possible to deliver the sensor data through a websocket to my Flask server, and then from the Flask server to my website? That would be very interesting, as I would have several Raspberry Pi 3 boards collecting data and sending it to my web server.
Regarding a complete stack trace, I need some guidelines (sorry, Flask beginner).
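Not a full answer, but one detail that often causes exactly this context error: flask_socketio's standalone emit() needs a request context, while the emit() method on the SocketIO object does not. A minimal sketch of the callback rewritten that way (assuming the same socketio object and '/fl' namespace used above; emit_to_all_clients is the poster's helper and is not shown):

# Callback that is safe to run from a background thread: no request context needed.
def unipi_change(event, data):
    # SocketIO.emit broadcasts to all connected clients by default.
    socketio.emit(event, data, namespace='/fl')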
I am trying to listen for new SocketIO connections on a namespace equal to the user's id. The user id is stored in the Flask session object.
@socketio.on('connect', namespace=session['userId'])
def test_connect():
    emit('newMessage')
This code is producing the following error:
raise RuntimeError('working outside of request context')
How can I get the above connect listener to run within the request context?
Thanks!
Unfortunately this cannot be done, because namespaces aren't dynamic, you have to use a static string as a namespace.
The idea of the namespace in SocketIO is not to add information about the connection, but to allow the client to open more than one individual channel with the server. Namespaces allow the SocketIO protocol to multiplex all these channels into a single physical connection.
What you want to do is pass a piece of input data about the connection to the server. For that, just add the value to your payload:
@socketio.on('connect', namespace='/chat')
def test_connect():
    userid = session['userId']
    # ...
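As a follow-up beyond the original answer (my own suggestion, so treat it as a sketch): if the point of the per-user namespace was to message a single user later, Flask-SocketIO's rooms cover that; the room name below is a hypothetical choice.

from flask_socketio import join_room

@socketio.on('connect', namespace='/chat')
def test_connect():
    userid = session['userId']
    join_room(userid)  # put this client in a room named after its user id

# later, from anywhere in the app:
# socketio.emit('newMessage', data, room=userid, namespace='/chat')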
I am new to Python and the Tornado web server.
I am trying to figure out the number of requests and the number of requests per second in my server-side code. I am using Tornadio2 to implement websockets.
Kindly take a look at the following code and let me know what modifications can be made to it.
I am using RequestHandler.prepare() as a single funnel for all requests, and a one-element list as a mutable container to store the count.
Assume all modules are imported.
count = [0]

class IndexHandler(tornado.web.RequestHandler):
    """Regular HTTP handler to serve the chatroom page"""
    def prepare(self):
        count[0] = count[0] + 1

    def get(self):
        self.render('index1.html')

class SocketIOHandler(tornado.web.RequestHandler):
    def get(self):
        self.render('../socket.io.js')

partQue = Queue.Queue()

class ChatConnection(tornadio2.conn.SocketConnection):
    participants = set()

    def on_open(self, info):
        self.send("Welcome from the server.")
        self.participants.add(self)

    def on_message(self, message):
        partQue.put(message)
        time.sleep(10)
        self.qmes = partQue.get()
        for p in self.participants:
            p.send(self.qmes + " " + str(count[0]))
        partQue.task_done()

    def on_close(self):
        self.participants.remove(self)
        partQue.join()

# Create tornadio server
ChatRouter = tornadio2.router.TornadioRouter(ChatConnection)

# Create socket application
sock_app = tornado.web.Application(
    ChatRouter.urls,
    flash_policy_port=843,
    flash_policy_file=op.join(ROOT, 'flashpolicy.xml'),
    socket_io_port=8002)

# Create HTTP application
http_app = tornado.web.Application(
    [(r"/", IndexHandler), (r"/socket.io.js", SocketIOHandler)])

if __name__ == "__main__":
    import logging
    logging.getLogger().setLevel(logging.DEBUG)

    # Create http server on port 8001
    http_server = tornado.httpserver.HTTPServer(http_app)
    http_server.listen(8001)

    # Create tornadio server on port 8002, but don't start it yet
    tornadio2.server.SocketServer(sock_app, auto_start=False)

    # Start both servers
    tornado.ioloop.IOLoop.instance().start()
Also, I am confused about WebSocket messages. Does each WebSocket event reach the server in the form of a new HTTP request, or as a Socket.IO message on the existing connection?
I use Siege, an excellent tool for load-testing requests if you're running on Linux. Example:
siege http://localhost:8000/?q=yourquery -c10 -t10s
-c10 = 10 concurrent users
-t10s = 10 seconds
Tornadio2 has a built-in statistics module, which includes incoming connections/s and other counters.
Check the following example: https://github.com/MrJoes/tornadio2/tree/master/examples/stats
When testing applications, always approach performance testing with a healthy appreciation for the uncertainty principle.
If you want to test a server, hook up two PCs to a hub where you can monitor the traffic from one going to the other, then bang the hell out of the server. There are a variety of tools for doing this; just look for web load-testing tools.
Normal HTTP requests in Tornado create a new RequestHandler instance, which persists until the connection is terminated.
WebSockets use persistent connections. One WebSocketHandler instance is created, and each message sent by the browser to the server calls the on_message method.
From what I understand, Socket.IO/TornadIO2 will use WebSockets if the browser supports them, falling back to long polling.
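To make the difference concrete, here is a minimal sketch in plain Tornado (tornado.websocket rather than Tornadio2; the handler and counter names are mine): prepare() runs once per HTTP request, while on_message() runs once per message over a single persistent connection.

import tornado.web
import tornado.websocket

http_requests = [0]  # one-element lists as mutable counters
ws_messages = [0]

class CountedPage(tornado.web.RequestHandler):
    def prepare(self):
        http_requests[0] += 1  # incremented once per HTTP request

    def get(self):
        self.write('requests so far: %d' % http_requests[0])

class CountedSocket(tornado.websocket.WebSocketHandler):
    def on_message(self, message):
        ws_messages[0] += 1  # incremented once per WebSocket message
        self.write_message('messages so far: %d' % ws_messages[0])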