Flask + Gevent, monkey.patch_all() breaks flask - python

I need to run monkey.patch_all() because my application combines a Flask server with the ValvePython library (otherwise I get errors about not being able to switch to a different thread), but I'm encountering a problem.
I've tested this without ValvePython to make sure the issue doesn't depend strictly on that library.
How I start the server:
from gevent import monkey; monkey.patch_all()

if __name__ == "__main__":
    # Create PyQt5 app
    app = QApplication(sys.argv)
    # Flask server
    server = Server('Ryder Engine')
    # Create the custom window (the initialize function creates all the
    # server endpoints dynamically via the add_endpoint function)
    window = RyderDisplay()
    window.initialize(server)
    # Run Server
    threading.Thread(target=server.run, daemon=True).start()
    # Start the app
    sys.exit(app.exec())
My server class:
import socket
from flask import Flask, Response, request
from gevent.pywsgi import WSGIServer

class EndpointAction(object):
    def __init__(self, action):
        self.action = action
        self.response = Response(status=200, headers={})

    def __call__(self, *args):
        self.action(request.get_json())
        return self.response

class Server(object):
    def __init__(self, name):
        self.app = Flask(name)

    def run(self, port=9520):
        http_server = WSGIServer(('0.0.0.0', port), self.app)
        http_server.serve_forever()

    def add_endpoint(self, endpoint=None, endpoint_name=None, handler=None):
        self.app.add_url_rule(endpoint, endpoint_name, EndpointAction(handler), methods=['POST'])
The main page of the PyQt5 app that one of the endpoints is bound to. It is instantiated inside the Window object; the rest are instantiated through the HomeConfigurationParser class from a JSON config file.
class Home(object):
    # Class constructor
    def __init__(self, window, server: Server):
        self._window = window
        self._client = Client()
        self._server = server
        self._client.subscribeToRyderEngine()
        server.add_endpoint('/status', 'status', self.newStatus)

    # UI Elements
    def create_ui(self, path):
        # Initialize
        path = path + '/config.json'
        self._fps, self._ui = HomeConfigurationParser.parse(self._window, self._client, self._server, path)
        # Refresher
        self._timer = QTimer()
        self._timer.timeout.connect(self.update)
        self._timer.start(1000 / self._fps)

    def newStatus(self, request):
        self._status = request

    def update(self):
        # Update UI
        for elem in self._ui:
            elem.update(self._status)
        # Reset
        if self._status is not None:
            self._status = None
My problem is that after running monkey.patch_all() the server no longer processes requests; in other words, it effectively ignores all of the add_endpoint calls. The endpoints must be added at runtime, so I cannot register them directly in code with an @app.route decorator above the handler functions.
Why is this happening and how do I fix it?
EDIT: Added more bits of code. The server runs side by side with the PyQt5 interface and is used to receive data that is then used to update the PyQt5 interface accordingly.

The add_url_rule method in the Flask source (here) has a @setupmethod decorator, which effectively says "ignore me after the first request has been handled". (This is a key part of the @app.before_first_request mechanism.)
If you're starting Flask in one thread and later calling add_endpoint from another, the invoking thread is racing against the first request to your app. There are also serious thread-safety issues with calling methods that mutate Flask internals from outside the thread that runs Flask.
In your position, I'd rearrange things so that all of the add_endpoint calls happen before server.run starts the app.
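One way to make that ordering explicit is to guard your Server class so that a late registration fails loudly instead of being silently dropped by Flask. This is only a sketch; the _started flag is my addition, not part of your original code:

class Server(object):
    def __init__(self, name):
        self.app = Flask(name)
        self._started = False

    def run(self, port=9520):
        self._started = True
        http_server = WSGIServer(('0.0.0.0', port), self.app)
        http_server.serve_forever()

    def add_endpoint(self, endpoint=None, endpoint_name=None, handler=None):
        # Registering after the server has started is exactly the race described above,
        # so refuse it instead of letting Flask quietly ignore the new rule.
        assert not self._started, "add_endpoint() must be called before run()"
        self.app.add_url_rule(endpoint, endpoint_name, EndpointAction(handler), methods=['POST'])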
You may well still have a problem with monkeypatching, but I'd deal with this first.

Related

python: dynamically spawn multithreaded workers with flask-socketio and python-binance

Hello fellow developers,
I'm trying to create a small web app that would allow me to monitor multiple Binance accounts from a dashboard, and maybe in the future perform some small automated trading actions.
My frontend is implemented with Vue + Quasar, and my backend server is based on Python Flask for the REST API.
What I would like to do is to start a background process dynamically when a specific endpoint of my server is called. Once this process is started on the server, I would like it to communicate via WebSocket with my Vue client.
Right now I can spawn the worker and create the WebSocket communication, but somehow I can't figure out how to make all the threads in my worker work together. Let me get a bit more specific:
Once my worker is started, I'm trying to create at least two threads. One is an infinite loop allowing me to automate some small actions, and the other one is the flask-socketio server that will handle the socket connections. Here is the code of that worker:
customWorker.py
import os
import time
from flask import Flask
from flask_socketio import SocketIO, send, emit
import threading
import json
import eventlet

# custom class allowing me to communicate with my mongoDB
from db_wrap import DbWrap

from binance.client import Client
from binance.exceptions import BinanceAPIException, BinanceWithdrawException, BinanceRequestException
from binance.websockets import BinanceSocketManager

def process_message(msg):
    print('got a websocket message')
    print(msg)

class customWorker:
    def __init__(self, workerId, sleepTime, dbWrap):
        self.workerId = workerId
        self.sleepTime = sleepTime
        self.socketio = None
        self.dbWrap = DbWrap()
        # this retrieves the worker configuration from the database
        self.config = json.loads(self.dbWrap.get_worker(workerId))
        keys = self.dbWrap.get_worker_keys(workerId)
        self.binanceClient = Client(keys['apiKey'], keys['apiSecret'])

    def handle_message(self, data):
        print('My PID is {} and I received {}'.format(os.getpid(), data))
        send(os.getpid())

    def init_websocket_server(self):
        app = Flask(__name__)
        socketio = SocketIO(app, async_mode='eventlet', logger=True, engineio_logger=True, cors_allowed_origins="*")
        eventlet.monkey_patch()
        socketio.on_event('message', self.handle_message)
        self.socketio = socketio
        self.app = app

    def launch_main_thread(self):
        while True:
            print('My PID is {} and workerId {}'
                  .format(os.getpid(), self.workerId))
            if self.socketio is not None:
                info = self.binanceClient.get_account()
                self.socketio.emit('my_account', info, namespace='/')

    def launch_worker(self):
        self.init_websocket_server()
        self.socketio.start_background_task(self.launch_main_thread)
        self.socketio.run(self.app, host="127.0.0.1", port=8001, debug=True, use_reloader=False)
Once the REST endpoint is called, the worker is spawned by calling the birth_worker() method of a "Broker" object available within my server:
from multiprocessing import Process
from custom_worker import customWorker
# ...

def create_worker(self, workerid, sleepTime, dbWrap):
    worker = customWorker(workerid, sleepTime, dbWrap)
    worker.launch_worker()

def birth_worker(self, workerid, sleepTime, dbWrap):
    p = Process(target=self.create_worker, args=(workerid, sleepTime, dbWrap))
    p.start()
So when this is done, the worker is launched in a separate process that successfully creates the threads and listens for socket connections. But my problem is that I can't use my binanceClient in my main thread. I think it uses threads internally, and the fact that I use eventlet, and in particular its monkey_patch() function, breaks it. When I try to call the binanceClient.get_account() method I get the error AttributeError: module 'select' has no attribute 'poll'.
I'm pretty sure it comes from monkey_patch, because if I call get_account() in the __init__() method of my worker (before patching) it works and I can get the account info. So I guess there is a conflict here that I've been trying to resolve, unsuccessfully.
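For reference, the eventlet documentation recommends calling monkey_patch() as early in the process as possible, before any module that touches socket or select is imported. A sketch of what that would look like at the top of customWorker.py (untested with python-binance, so treat it as an assumption rather than a fix):

import eventlet
eventlet.monkey_patch()  # patch before anything below pulls in socket/select

import os
import json
import time

from flask import Flask
from flask_socketio import SocketIO, send, emit
from binance.client import Client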
I've tried using only the thread mode for my Socket.IO app by passing async_mode='threading', but then my flask-socketio app won't start and listen for sockets, because the line self.socketio.run(self.app, host="127.0.0.1", port=8001, debug=True, use_reloader=False) blocks everything.
I'm pretty sure I have an architecture problem here and that I shouldn't start my app by calling socketio.run. I've been unable to start it with gunicorn, for example, because I need it to be dynamic and to launch it from my Python scripts. I've been struggling to find the proper way to do this, and that's why I'm here today.
Could someone please give me a hint on how this is supposed to be achieved? How can I dynamically spawn a subprocess that will manage a socket server thread, an infinite loop thread and connections with the binanceClient? I've been roaming Stack Overflow without success; every piece of advice is welcome, even an architecture reforge.
Here is my environment:
Manjaro Linux 21.0.1
pip-chill:
eventlet==0.30.2
flask-cors==3.0.10
flask-socketio==5.0.1
pillow==8.2.0
pymongo==3.11.3
python-binance==0.7.11
websockets==8.1

TWS IB Gateway (version 972/974) Client keeps disconnecting

I am trying to connect with the IB API to download some historical data. I have noticed that my client connects to the API, but then disconnects automatically within a very short period (a few seconds).
Here's the log in the server:
socket connection for client{10} has closed.
Connection terminated.
Here's my main code for starting the app:
class TestApp(TestWrapper, TestClient):
    def __init__(self):
        TestWrapper.__init__(self)
        TestClient.__init__(self, wrapper=self)
        self.connect(config.ib_hostname, config.ib_port, config.ib_session_id)
        self.session_id = int(config.ib_session_id)
        self.thread = Thread(target=self.run)
        self.thread.start()
        setattr(self, "_thread", self.thread)
        self.init_error()

    def reset_connection(self):
        pass

    def check_contract(self, name, exchange_name, security_type, currency):
        self.reset_connection()
        ibcontract = IBcontract()
        ibcontract.secType = security_type
        ibcontract.symbol = name
        ibcontract.exchange = exchange_name
        ibcontract.currency = currency
        return self.resolve_ib_contract(ibcontract)

    def resolve_contract(self, security):
        self.reset_connection()
        ibcontract = IBcontract()
        ibcontract.secType = security.security_type()
        ibcontract.symbol = security.name()
        ibcontract.exchange = security.exchange()
        ibcontract.currency = security.currency()
        return self.resolve_ib_contract(ibcontract)

    def get_historical_data(self, security, duration, bar_size, what_to_show):
        self.reset_connection()
        resolved_ibcontract = self.resolve_contract(security)
        data = test_app.get_IB_historical_data(resolved_ibcontract.contract, duration, bar_size, what_to_show)
        return data

def create_app():
    test_app = TestApp()
    return test_app
Any suggestions on what could be the problem? I can show more error messages from the debug output if needed.
If you can connect without issue only by changing the client ID, that typically indicates that the previous connection was not properly closed and TWS thinks it's still open. To disconnect an API client you should call the EClient.disconnect function explicitly, which with your class would be:
test_app.disconnect()
It's not necessary to disconnect/reconnect after every task, though; you can just leave the connection open for extended periods.
You may sometimes encounter problems if an API function, such as reqHistoricalData, is called immediately after connecting. It's best to pause briefly after initiating a connection and wait for a callback such as nextValidId, to ensure the connection is complete before proceeding.
http://interactivebrokers.github.io/tws-api/connection.html#connect
I'm not sure what the function init_error() is intended for in your example since it would always be called when a TestApp object is created (whether or not there is an error).
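To illustrate the "wait for nextValidId before requesting data" point, here is a sketch using a threading.Event; the host, port, client id and the ready attribute are placeholders, not taken from your code:

from threading import Thread, Event

from ibapi.client import EClient
from ibapi.wrapper import EWrapper

class App(EWrapper, EClient):
    def __init__(self):
        EWrapper.__init__(self)
        EClient.__init__(self, wrapper=self)
        self.ready = Event()  # set once TWS confirms the connection

    def nextValidId(self, orderId):
        # TWS calls this when the connection handshake has completed.
        self.ready.set()

app = App()
app.connect("127.0.0.1", 7497, clientId=10)
Thread(target=app.run, daemon=True).start()

# Block until the connection is confirmed before any reqHistoricalData call.
app.ready.wait(timeout=10)
# ... make historical data requests here ...
app.disconnect()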
Installing the latest version of TWS API (v 9.76) solved the problem.
https://interactivebrokers.github.io/#

Mongoengine connecting to Atlas: No replica set members found yet

After a working production app had been connected to a local Mongo instance, we decided to move to MongoDB Atlas.
This caused production errors.
Our stack is Docker -> Alpine 3.6 -> Python 2.7.13 -> Flask -> uwsgi (2.0.17) -> nginx, running on AWS.
flask-mongoengine 0.9.3, mongoengine 0.14.3, pymongo 3.5.1
When starting the app in staging/production, uwsgi logs No replica set members found yet.
We don't know why.
We've tried different connection settings, e.g. connect: False, which means lazy connecting: the connection is made on the first query rather than at initialization.
That caused nginx to fail with a resource temporarily unavailable error on some of our apps. We had to restart multiple times for the app to finally start serving requests.
I think the issue is with pymongo and the fact that it's not fork-safe:
http://api.mongodb.com/python/current/faq.html?highlight=thread#id3
and uwsgi uses forks.
I suspect it might be related to the way my app is being initialized, possibly going against Using PyMongo with Multiprocessing.
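For reference, the fork-safe pattern that FAQ describes would look roughly like this under uwsgi; this is only a sketch, and the postfork hook plus the MONGO_URI environment variable are assumptions, not part of our current code:

import os

import mongoengine
from uwsgidecorators import postfork

@postfork
def connect_after_fork():
    # Every uwsgi worker opens its own connection after the fork,
    # so no MongoClient instance is shared across processes.
    mongoengine.connect(host=os.environ['MONGO_URI'])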
Here is the app init code:
from app import FlaskApp
from flask import current_app
app = None
from flask_mongoengine import MongoEngine
import logging

application, app = init_flask_app(app_instance, module_name='my_module')

def init_flask_app(app_instance, **kwargs):
    app_instance.init_instance(environment_config)
    application = app_instance.get_instance()
    app = application.app
    return application, app

# app_instance.py
import FlaskApp

def init_instance(env):
    global app
    app = FlaskApp(env)
    return app

def get_instance():
    if globals().get('app') is None:
        app = current_app.flask_app_object
    else:
        app = globals().get('app')
    assert app is not None
    return app

class FlaskApp(object):
    def __init__(self, env):
        # .....
        # Initialize the DB
        self.db = Database(self.app)
        # ....
        # used in app_instance.py to get the flask app object in case it's None
        self.app.flask_app_object = self

    def run_server(self):
        self.app.run(host=self.app.config['HOST'], port=self.app.config['PORT'], debug=self.app.config['DEBUG'])

class Database(object):
    def __init__(self, app):
        self.db = MongoEngine(app)

    def drop_all(self, database_name):
        logging.warn("Dropping database %s" % database_name)
        self.db.connection.drop_database(database_name)

if __name__ == '__main__':
    application.run_server()
Help in debugging this would be appreciated!

How do I broadcast from a non-SocketIO request to all connected SocketIO clients?

I'm running the SocketIO server with something like:
from socketio.server import SocketIOServer

server = SocketIOServer(
    ('127.0.0.1', '8000'),
    resource='socket.io',
)
server.serve_forever()
I then have a namespace:
class Foo(BaseNamespace, BroadcastMixin):
    def on_msg(self, data):
        self.emit(data['msg'])
And finally, I have a route such as:
module = Blueprint('web', __name__)

@module.route('/')
def index():
    pkt = dict(
        type='event',
        name='new_visitor',
        endpoint='/foo'
    )
    ## HERE: How do I get the "socket" to look through each connection?
    # for sessid, socket in blah.socket.server.sockets.iteritems():
    #     socket.send_packet(pkt)
    return render_template('index.html')
So, the commented part above is where I have the issue.
What I've done so far:
I dove into the gevent-socketio code and saw that the sockets are tracked there, but I'm not sure what the next step would be.
I noticed that, in Flask, request.environ has a socketio value that corresponds to the object. However, that's only present on SocketIO requests.
Any clues or hints would be very appreciated.
The code that I use in my Flask-SocketIO extension to do what you want is:
def emit(self, event, *args, **kwargs):
    ns_name = kwargs.pop('namespace', '')
    for sessid, socket in self.server.sockets.items():
        if socket.active_ns.get(ns_name):
            socket[ns_name].emit(event, *args, **kwargs)
My actual implementation is a bit more complex; I have simplified it to show just how to do what you asked. The extension is on GitHub if you want to see the complete code.
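Applied to the route from your question, the same idea would look roughly like this (a sketch only; it assumes the SocketIOServer instance is importable as server and that your namespace is registered as '/foo'):

@module.route('/')
def index():
    # Push an event to every client currently connected to the /foo namespace.
    for sessid, socket in server.sockets.items():
        if socket.active_ns.get('/foo'):
            socket['/foo'].emit('new_visitor')
    return render_template('index.html')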

python bottle integration tests

I have a REST API hosted using the Bottle web framework. I would like to run integration tests for my API. As part of the tests, I need to start a local instance of the Bottle server, but Bottle's run() blocks the executing thread. How do I create integration tests with a local instance of the server?
I want to start the server during setUp and stop it after running all my tests.
Is this possible with the Bottle web framework?
I was able to do it using multithreading. If there is a better solution, I will consider it.
def setUp(self):
    from thread import start_new_thread
    start_new_thread(start_bottle, (), {})

def my_test():
    # Run the tests here, which make HTTP calls to the resources hosted by the Bottle server instance above
    pass
UPDATE
import time
from thread import start_new_thread

from bottle import run

class TestBottleServer(object):
    """
    Starts a local instance of the Bottle container to run the tests against.
    """
    is_running = False

    def __init__(self, app=None, host="localhost", port=3534, debug=False, reloader=False, server="tornado"):
        self.app = app
        self.host = host
        self.port = port
        self.debug = debug
        self.reloader = reloader
        self.server = server

    def ensured_bottle_started(self):
        if TestBottleServer.is_running is False:
            start_new_thread(self.__start_bottle__, (), {})
            # Sleep is required for the forked thread to initialise the app
            TestBottleServer.is_running = True
            time.sleep(1)

    def __start_bottle__(self):
        run(
            app=self.app,
            host=self.host,
            port=self.port,
            debug=self.debug,
            reloader=self.reloader,
            server=self.server)

    @staticmethod
    def restart():
        TestBottleServer.is_running = False
        # Restart via the shared module-level instance defined below.
        TEST_BOTTLE_SERVER.ensured_bottle_started()

TEST_BOTTLE_SERVER = TestBottleServer()
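For completeness, a usage sketch in a test case; the my_app module and the /health route are assumptions for illustration, not part of the code above:

import unittest
import urllib2

import my_app  # the Bottle application under test (hypothetical module)

class ApiIntegrationTest(unittest.TestCase):
    def setUp(self):
        # Point the shared test server at the app and make sure it is running.
        TEST_BOTTLE_SERVER.app = my_app.app
        TEST_BOTTLE_SERVER.ensured_bottle_started()

    def test_health_endpoint(self):
        response = urllib2.urlopen("http://localhost:3534/health")
        self.assertEqual(response.getcode(), 200)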
