A flask_socketio app is running as a server. In a list managed by the app there are several instances that inherit from threading.Thread, each running a main loop. Once in a while I would like to signal the flask-socketio server to emit a broadcast to a certain room. How could I do this?
I am unable to figure out how to do that, as flask_socketio runs a main loop itself, but I don't have access to this main loop. Is there a way to have the Flask main loop read from a Queue?
You just need to access the SocketIO instance for this. Somewhere in your app you have:
socketio = SocketIO(app)
In the thread from where you want to emit, just import this object and call the emit method:
from app import socketio
# ...
def emit_to_room(event, data, room):
    socketio.emit(event, data, room=room)
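For example, here is a minimal sketch of one of the thread main loops described in the question (the Monitor class and the 'status' event are made up for illustration):

import threading
import time

from app import socketio  # the SocketIO instance created in app.py

class Monitor(threading.Thread):  # hypothetical worker from the question
    def __init__(self, room):
        super().__init__(daemon=True)
        self.room = room

    def run(self):
        while True:
            time.sleep(5)
            # With async_mode='threading' this call is safe from a plain
            # thread; with eventlet/gevent the thread must be monkey-patched.
            socketio.emit('status', {'alive': True}, room=self.room)

No Queue plumbing is needed; the emit call hands the broadcast to Flask-SocketIO directly.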
Hope this helps!
Related
Hello fellow developers,
I'm actually trying to create a small webapp that would allow me to monitor multiple Binance accounts from a dashboard, and maybe in the future perform some small automatic trading actions.
My frontend is implemented with Vue+Quasar and my backend server is based on Python Flask for the REST API.
What I would like to do is to be able to start a background process dynamically when a specific endpoint of my server is called. Once this process is started on the server, I would like it to communicate via websocket with my Vue client.
Right now I can spawn the worker and create the websocket communication, but somehow I can't figure out how to make all the threads in my worker work together. Let me get a bit more specific:
Once my worker is started, I'm trying to create at least two threads. One is the infinite loop allowing me to automate some small actions, and the other one is the flask-socketio server that will handle the socket connections. Here is the code of that worker:
customWorker.py
import os
import time
from flask import Flask
from flask_socketio import SocketIO, send, emit
import threading
import json
import eventlet
# custom class allowing me to communicate with my mongoDD
from db_wrap import DbWrap
from binance.client import Client
from binance.exceptions import BinanceAPIException, BinanceWithdrawException, BinanceRequestException
from binance.websockets import BinanceSocketManager
def process_message(msg):
    print('got a websocket message')
    print(msg)
class customWorker:
    def __init__(self, workerId, sleepTime, dbWrap):
        self.workerId = workerId
        self.sleepTime = sleepTime
        self.socketio = None
        self.dbWrap = DbWrap()
        # this retrieves worker configuration from database
        self.config = json.loads(self.dbWrap.get_worker(workerId))
        keys = self.dbWrap.get_worker_keys(workerId)
        self.binanceClient = Client(keys['apiKey'], keys['apiSecret'])

    def handle_message(self, data):
        print('My PID is {} and I received {}'.format(os.getpid(), data))
        send(os.getpid())

    def init_websocket_server(self):
        app = Flask(__name__)
        socketio = SocketIO(app, async_mode='eventlet', logger=True, engineio_logger=True, cors_allowed_origins="*")
        eventlet.monkey_patch()
        socketio.on_event('message', self.handle_message)
        self.socketio = socketio
        self.app = app

    def launch_main_thread(self):
        while True:
            print('My PID is {} and workerId {}'.format(os.getpid(), self.workerId))
            if self.socketio is not None:
                info = self.binanceClient.get_account()
                self.socketio.emit('my_account', info, namespace='/')

    def launch_worker(self):
        self.init_websocket_server()
        self.socketio.start_background_task(self.launch_main_thread)
        self.socketio.run(self.app, host="127.0.0.1", port=8001, debug=True, use_reloader=False)
Once the REST endpoint is called, the worker is spawned by calling the birth_worker() method of the "Broker" object available within my server:
from multiprocessing import Process
from custom_worker import customWorker
#...
def create_worker(self, workerid, sleepTime, dbWrap):
    worker = customWorker(workerid, sleepTime, dbWrap)
    worker.launch_worker()

def birth_worker(self, workerid, sleepTime, dbWrap):
    p = Process(target=self.create_worker, args=(workerid, sleepTime, dbWrap))
    p.start()
So when this is done, the worker is launched in a separate process that successfully creates threads and listens for socket connections. But my problem is that I can't use my binanceClient in my main thread. I think that the client uses threads, and the fact that I use eventlet, in particular its monkey_patch() function, breaks it. When I try to call the binanceClient.get_account() method I get the error AttributeError: module 'select' has no attribute 'poll'
I'm pretty sure it comes from monkey_patch, because if I use the client in the __init__() method of my worker (before patching) it works and I can get the account info. So I guess there is a conflict here that I've been trying to resolve unsuccessfully.
I've tried using only the thread mode for my socket.io app by passing async_mode='threading', but then my flask-socketio app won't start and listen for sockets, as the line self.socketio.run(self.app, host="127.0.0.1", port=8001, debug=True, use_reloader=False) blocks everything.
I'm pretty sure I have an architecture problem here and that I shouldn't start my app by calling socketio.run. I've been unable to start it with gunicorn, for example, because I need it to be dynamic and to call it from my Python scripts. I've been struggling to find the proper way to do this, and that's why I'm here today.
Could someone please give me a hint on how this is supposed to be achieved? How can I dynamically spawn a subprocess that will manage a socket server thread, an infinite-loop thread and connections with binanceClient? I've been roaming Stack Overflow without success; every piece of advice is welcome, even an architecture reforge.
Here is my environment:
Manjaro Linux 21.0.1
pip-chill:
eventlet==0.30.2
flask-cors==3.0.10
flask-socketio==5.0.1
pillow==8.2.0
pymongo==3.11.3
python-binance==0.7.11
websockets==8.1
I am trying to integrate wxPython and Flask into a single application, but I am not sure how to get them to work together as they both want exclusive use of the main thread.
I am calling the application with:
export FLASK_APP=keypad_controller
python3 -m flask run -p 2020 -h 0.0.0.0 --eager-loading --no-reload
The main code block using Flask is:
from flask import Flask
def create_app(test_config=None):
    app = Flask(__name__)
    return app
I am not sure how to integrate wxPython (below) into the above code; how do I run Flask?
wx_app = wx.App()
main_window = MainWindow(config)
main_window.Show()
wx_app.MainLoop()
Start Flask (app.run()) in a separate thread.
Note that if you want app.run(debug=True), you must also pass use_reloader=False, because things will go off the rails quickly if Flask decides that it wants to reload anything from other than the main thread.
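A minimal sketch of that layout, assuming the MainWindow class and config object from the question:

import threading

import wx
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "keypad controller running"

def run_flask():
    # use_reloader=False: the reloader only works from the main thread
    app.run(host="0.0.0.0", port=2020, debug=True, use_reloader=False)

if __name__ == "__main__":
    threading.Thread(target=run_flask, daemon=True).start()

    wx_app = wx.App()
    main_window = MainWindow(config)  # MainWindow and config as in the question
    main_window.Show()
    wx_app.MainLoop()

The daemon flag lets the process exit when the wx main loop ends, without waiting for the Flask thread.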
I am trying to add websocket functionality to an existing application. The existing structure of the app is
In /server/__init__.py:
from connexion import App
...
connexion_app = App(__name__, specification_dir='swagger/') # Create Connexion App
app = connexion_app.app # Configure Flask Application
...
connexion_app.add_api('swagger.yaml', swagger_ui=True) # Initialize Connexion api
In startserver.py:
from server import connexion_app
connexion_app.run(
    processes=8,
    debug=True
)
In this way, I was able to specify the number of processes. There are some long-running tasks that make it necessary to have as many processes as possible.
I have modified the application to include websocket functionality as below. It seems that I now only have one process available. Once the application attempts to run one of the long-running tasks, all API calls hang. Also, if the long-running task fails, the application is stuck in a hanging state.
In /server/__init__.py:
from connexion import App
import socketio
...
connexion_app = App(__name__, specification_dir='swagger/') # Create Connexion App
sio = socketio.Server() # Create SocketIO for websockets
app = connexion_app.app # Configure Flask Application
...
connexion_app.add_api('swagger.yaml', swagger_ui=True) # Initialize Connexion api
In startserver.py:
import socketio
import eventlet
from server import sio
from server import app
myapp = socketio.Middleware(sio, app)
eventlet.wsgi.server(eventlet.listen(('', 5000)), myapp)
What am I missing here?
(side note: If you have any resources available to better understand the behemoth of the Flask object, please point me to them!!)
Exact answer to question: Eventlet built-in WSGI does not support multiple processes.
Approach to get the best solution for the described problem: share one file that contains the absolute minimum code required to reproduce the problem, maybe at https://github.com/eventlet/eventlet/issues or any other way you prefer.
Way of hope. Random stuff to poke at: eventlet.monkey_patch(), isolating Eventlet and long blocking calls in separate threads or processes.
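A sketch of those two pokes, assuming the sio and app objects from the question (some_blocking_call stands in for one of the long-running tasks):

# startserver.py
import eventlet
eventlet.monkey_patch()  # patch before anything else creates sockets or threads

import socketio
from eventlet import tpool
from server import sio, app

def handle_long_task(args):
    # run the blocking call in eventlet's OS thread pool so the single
    # eventlet process keeps serving other requests in the meantime
    return tpool.execute(some_blocking_call, args)

myapp = socketio.Middleware(sio, app)
eventlet.wsgi.server(eventlet.listen(('', 5000)), myapp)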
I have written a single-user application that currently works with Flask's internal web server. It does not seem to be very robust, and it crashes with all sorts of socket errors as soon as a page takes a long time to load and the user navigates elsewhere while waiting. So I thought I would replace it with Apache.
The problem is, my current code is a single program that first launches about ten threads to do stuff, for example to set up ssh tunnels to remote servers and zmq connections to communicate with a database located there. Finally it enters the run() loop to start the internal server.
I followed all sorts of instructions and managed to get Apache to serve the initial page. However, everything goes wrong, as I now don't have any worker threads available, nor any globally initialised classes, and none of my global variables holding interfaces to communicate with these threads exist.
Obviously I am not a web developer.
How badly "wrong" my current code is? Is there any way to make that work with Apache with a reasonable amount of work? Can I have Apache just replace the run() part and have a running application, with which Apache communicates? My current app in a very simplified form (without data processing threads) is something like this:
comm = None
app = Flask(__name__)

class CommsHandler(object):
    def __init__(self):
        # init communication links to external servers and databases
        ...

    def request_data(self, request):
        # use the initialised links to request something
        return result

@app.route("/", methods=["GET"])
def mainpage():
    return render_template("main.html")

@app.route("/foo", methods=["GET"])
def foo():
    a = comm.request_data("xyzzy")
    return render_template("foo.html", data=a)

comm = CommsHandler()
app.run()
Or have I done this completely wrong? When I remove app.run() and just import the app object in the wsgi script, I do get a response from the main page, as it does not need a reference to the global variable comm.
/foo does not work, as comm is an uninitialised variable. And I can see why, of course. I just never thought this would need to be exported to Apache or any other web server.
So the question is: can I launch this application somehow in an rc script at boot, set up its communication links and everything, and have Apache/wsgi just call functions of the running application instead of launching a new one?
Hannu
This is the simple app running on Flask's internal server:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run()
To run it on an Apache server, check out the FastCGI docs:
from flup.server.fcgi import WSGIServer
from yourapplication import app

if __name__ == '__main__':
    WSGIServer(app).run()
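As for the uninitialised comm global: any WSGI deployment imports the module rather than executing app.run(), so the initialisation has to happen at import time. A minimal sketch of that rearrangement, keeping the CommsHandler from the question:

# yourapplication.py
from flask import Flask, render_template

app = Flask(__name__)

class CommsHandler(object):
    def __init__(self):
        # init communication links and worker threads here; this runs
        # when Apache/mod_wsgi (or flup) imports the module
        ...

    def request_data(self, request):
        ...

comm = CommsHandler()  # module level: created on import, no app.run() needed

@app.route("/foo", methods=["GET"])
def foo():
    return render_template("foo.html", data=comm.request_data("xyzzy"))

Note that if Apache spawns several worker processes, each one gets its own CommsHandler instance.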
I'm writing a WSGI app using an Nginx / Gunicorn / Bottle stack that accepts a GET request, returns a simple response, and then writes a message to RabbitMQ. If I were running the app through straight Bottle, I'd be reusing the RabbitMQ connection every time the app receives a GET. However, in Gunicorn, it looks like the workers are destroying and recreating the MQ connection every time. I was wondering if there's a good way to reuse that connection.
More detailed info:
## This is my bottle app
from bottle import blahblahblah
import bottle
from mqconnector import MQConnector

mqc = MQConnector(ip, exchange)

@route('/')
def index():
    try:
        mqc
    except NameError:
        mqc = MQConnector(ip, exchange)
    mqc.publish('whatever message')
    return 'ok'

if __name__ == '__main__':
    run(host='blah', port=808)

app = bottle.default_app()
Okay, this took me a little while to sort out. What was happening was, every time a new request came through, Gunicorn was running my index() method and, as such, creating a new instance of MQConnector.
The fix was to refactor MQConnector so that, rather than being a class, it was just a bunch of methods and module-level variables. That way, each worker referred to the same MQConnector every time, rather than creating a new instance of it. Finally, I passed MQConnector's publish() function along.
# Bottle app
from blah import blahblah
import MQConnector

@route('/')
def index():
    blahblah(foo, bar, baz, MQConnector.publish)
and
# MQConnector
import pika

mq_ip = "blah"
exchange_name = "blahblah"
connection = pika.BlockingConnection(....
...

def publish(message, r_key):
    ...
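A fleshed-out sketch of that module-level pattern (the host and exchange names are placeholders):

# MQConnector.py: one connection per worker process, reused across requests
import pika

mq_ip = "rabbitmq.example.local"   # placeholder
exchange_name = "events"           # placeholder

connection = pika.BlockingConnection(pika.ConnectionParameters(host=mq_ip))
channel = connection.channel()
channel.exchange_declare(exchange=exchange_name, exchange_type='topic')

def publish(message, r_key):
    # reuses the module-level channel instead of reconnecting per request
    channel.basic_publish(exchange=exchange_name,
                          routing_key=r_key,
                          body=message)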
Results: A call that used to take 800ms now takes 4ms. I used to max out at 80 calls/second across 90 Gunicorn workers, and now I max out around 700 calls/second across 5 Gunicorn workers.