I have a REST API hosted using the Bottle web framework, and I would like to run integration tests for it. As part of the tests, I need to start a local instance of the Bottle server, but running the API with Bottle blocks the executing thread. How do I create integration tests that run against a local instance of the server?
I want to start the server during setUp and stop it after running all my tests.
Is this possible with the Bottle web framework?
I was able to do it using multithreading. If there is a better solution, I will consider it.
def setUp(self):
    from thread import start_new_thread
    start_new_thread(start_bottle, (), {})

def my_test():
    # Run the tests here, which make HTTP calls to the resources hosted by the Bottle server instance above
    pass
UPDATE
import time
from thread import start_new_thread

from bottle import run


class TestBottleServer(object):
    """
    Starts a local instance of the Bottle container to run the tests against.
    """
    is_running = False

    def __init__(self, app=None, host="localhost", port=3534, debug=False, reloader=False, server="tornado"):
        self.app = app
        self.host = host
        self.port = port
        self.debug = debug
        self.reloader = reloader
        self.server = server

    def ensured_bottle_started(self):
        if TestBottleServer.is_running is False:
            start_new_thread(self.__start_bottle__, (), {})
            # Sleep is required for the forked thread to initialise the app
            TestBottleServer.is_running = True
            time.sleep(1)

    def __start_bottle__(self):
        run(
            app=self.app,
            host=self.host,
            port=self.port,
            debug=self.debug,
            reloader=self.reloader,
            server=self.server)

    @staticmethod
    def restart():
        TestBottleServer.is_running = False
        # restart via the shared module-level instance
        TEST_BOTTLE_SERVER.ensured_bottle_started()


TEST_BOTTLE_SERVER = TestBottleServer()
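For reference, here is a sketch of how the helper can be used from a test case. The /health route and the requests library are only illustrative; any HTTP client and any route registered on the default Bottle app would do.

# sketch of a test case using the helper above; /health and requests are illustrative
import unittest
import requests


class MyApiIntegrationTest(unittest.TestCase):
    def setUp(self):
        # idempotent: only the first call actually spawns the Bottle thread
        TEST_BOTTLE_SERVER.ensured_bottle_started()

    def test_health_endpoint(self):
        response = requests.get("http://localhost:3534/health")
        self.assertEqual(response.status_code, 200)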
Hello fellow developers,
I'm trying to create a small webapp that would allow me to monitor multiple Binance accounts from a dashboard and maybe, in the future, perform some small automatic trading actions.
My frontend is implemented with Vue+quasar and my backend server is based on python Flask for the REST api.
What I would like to do is being able to start a background process dynamically when a specific endpoint of my server is called. Once this process is started on the server, I would like it to communicate via websocket with my Vue client.
Right now I can spawn the worker and create the websocket communication, but I can't figure out how to make all the threads in my worker work together. Let me get a bit more specific:
Once my worker is started, I'm trying to create at least two threads. One is the infinite loop allowing me to automate some small actions and the other one is the flask-socketio server that will handle the sockets connections. Here is the code of that worker :
customWorker.py

import os
import time
import json
import threading

import eventlet
from flask import Flask
from flask_socketio import SocketIO, send, emit

# custom class allowing me to communicate with my MongoDB
from db_wrap import DbWrap
from binance.client import Client
from binance.exceptions import BinanceAPIException, BinanceWithdrawException, BinanceRequestException
from binance.websockets import BinanceSocketManager


def process_message(msg):
    print('got a websocket message')
    print(msg)


class customWorker:
    def __init__(self, workerId, sleepTime, dbWrap):
        self.workerId = workerId
        self.sleepTime = sleepTime
        self.socketio = None
        self.dbWrap = DbWrap()
        # this retrieves the worker configuration from the database
        self.config = json.loads(self.dbWrap.get_worker(workerId))
        keys = self.dbWrap.get_worker_keys(workerId)
        self.binanceClient = Client(keys['apiKey'], keys['apiSecret'])

    def handle_message(self, data):
        print('My PID is {} and I received {}'.format(os.getpid(), data))
        send(os.getpid())

    def init_websocket_server(self):
        app = Flask(__name__)
        socketio = SocketIO(app, async_mode='eventlet', logger=True, engineio_logger=True, cors_allowed_origins="*")
        eventlet.monkey_patch()

        socketio.on_event('message', self.handle_message)

        self.socketio = socketio
        self.app = app

    def launch_main_thread(self):
        while True:
            print('My PID is {} and workerId {}'
                  .format(os.getpid(), self.workerId))
            if self.socketio is not None:
                info = self.binanceClient.get_account()
                self.socketio.emit('my_account', info, namespace='/')

    def launch_worker(self):
        self.init_websocket_server()
        self.socketio.start_background_task(self.launch_main_thread)
        self.socketio.run(self.app, host="127.0.0.1", port=8001, debug=True, use_reloader=False)
Once the REST endpoint is called, the worker is spawned by calling the birth_worker() method of the "Broker" object available within my server:
from multiprocessing import Process

from custom_worker import customWorker
# ...

def create_worker(self, workerid, sleepTime, dbWrap):
    worker = customWorker(workerid, sleepTime, dbWrap)
    worker.launch_worker()

def birth_worker(self, workerid, sleepTime, dbWrap):
    p = Process(target=self.create_worker, args=(workerid, sleepTime, dbWrap))
    p.start()
So when this is done, the worker is launched in a separate process that successfully creates threads and listens for socket connections. But my problem is that I can't use my binanceClient in my main thread. I think it uses threads, and the fact that I use eventlet, in particular the monkey_patch() function, breaks it. When I try to call the binanceClient.get_account() method I get the error AttributeError: module 'select' has no attribute 'poll'.
I'm pretty sure it comes from monkey_patch, because if I make that call in the init() method of my worker (before patching) it works and I can get the account info. So I guess there is a conflict here that I've been trying to resolve, unsuccessfully.
I've tried using only thread mode for my socket.io app by passing async_mode='threading', but then my flask-socketio app won't start and listen for sockets, as the line self.socketio.run(self.app, host="127.0.0.1", port=8001, debug=True, use_reloader=False) blocks everything.
I'm pretty sure I have an architecture problem here and that I shouldn't start my app by launching socketio.run. I've been unable to start it with gunicorn, for example, because I need it to be dynamic and to call it from my Python scripts. I've been struggling to find the proper way to do this, and that's why I'm here today.
Could someone please give me a hint on how this is supposed to be achieved? How can I dynamically spawn a subprocess that will manage a socket server thread, an infinite loop thread, and the connections with the binanceClient? I've been roaming Stack Overflow without success; every piece of advice is welcome, even an architecture reforge.
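One thing I have considered but not yet verified: the eventlet docs say monkey_patch() should run as early as possible in the process, before other modules that touch socket/select are imported. A sketch of a worker entry point ordered that way is below (worker_entry.py is just a placeholder name, and the monkey_patch() call inside init_websocket_server would then be redundant). I don't know whether this alone fixes the select.poll error.

# worker_entry.py -- sketch only: patch before anything else gets imported
import eventlet
eventlet.monkey_patch()

# only import the rest of the stack after patching, so python-binance,
# flask-socketio, etc. see the patched socket/select modules
from custom_worker import customWorker


def main(worker_id, sleep_time, db_wrap):
    worker = customWorker(worker_id, sleep_time, db_wrap)
    worker.launch_worker()


if __name__ == '__main__':
    # illustrative values; in the real app they come from the REST endpoint
    main('worker-1', 10, None)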
Here is my environment:
Manjaro Linux 21.0.1
pip-chill:
eventlet==0.30.2
flask-cors==3.0.10
flask-socketio==5.0.1
pillow==8.2.0
pymongo==3.11.3
python-binance==0.7.11
websockets==8.1
I need to run monkey.patch_all() because I have a Flask server in my application in combination with the ValvePython library (otherwise I get "cannot switch to a different thread" errors); however, I'm encountering a problem.
I've tested this without ValvePython to make sure the issue doesn't strictly depend on it.
How I start the server:
from gevent import monkey; monkey.patch_all()

import sys
import threading

from PyQt5.QtWidgets import QApplication
# Server and RyderDisplay are my own classes, shown below

if __name__ == "__main__":
    # Create PyQt5 app
    app = QApplication(sys.argv)

    # Flask server
    server = Server('Ryder Engine')

    # Create the custom window (the initialize function creates all the
    # server endpoints dynamically via the add_endpoint function)
    window = RyderDisplay()
    window.initialize(server)

    # Run Server
    threading.Thread(target=server.run, daemon=True).start()

    # Start the app
    sys.exit(app.exec())
My server class:
import socket
from flask import Flask, Response, request
from gevent.pywsgi import WSGIServer


class EndpointAction(object):
    def __init__(self, action):
        self.action = action
        self.response = Response(status=200, headers={})

    def __call__(self, *args):
        self.action(request.get_json())
        return self.response


class Server(object):
    def __init__(self, name):
        self.app = Flask(name)

    def run(self, port=9520):
        http_server = WSGIServer(('0.0.0.0', port), self.app)
        http_server.serve_forever()

    def add_endpoint(self, endpoint=None, endpoint_name=None, handler=None):
        self.app.add_url_rule(endpoint, endpoint_name, EndpointAction(handler), methods=['POST'])
The main page of the PyQt5 app that one of the endpoints is bound to. This is instantiated inside the Window object. The rest are instantiated through the HomeConfigurationParser class via a JSON config file.
class Home(object):
    # Class constructor
    def __init__(self, window, server: Server):
        self._window = window
        self._client = Client()
        self._server = server
        self._status = None  # avoids an AttributeError if update() runs before the first status arrives
        self._client.subscribeToRyderEngine()
        server.add_endpoint('/status', 'status', self.newStatus)

    # UI Elements
    def create_ui(self, path):
        # Initialize
        path = path + '/config.json'
        self._fps, self._ui = HomeConfigurationParser.parse(self._window, self._client, self._server, path)

        # Refresher
        self._timer = QTimer()
        self._timer.timeout.connect(self.update)
        self._timer.start(1000 / self._fps)

    def newStatus(self, request):
        self._status = request

    def update(self):
        # Update UI
        for elem in self._ui:
            elem.update(self._status)

        # Reset
        if self._status is not None:
            self._status = None
My problem is that with monkey.patch_all() the server does not process the requests anymore; in other words, it basically ignores all the add_endpoint function calls. The server endpoints must be added at runtime; I cannot add them directly in code through decorators above the functions.
Why is this happening and how do I fix it?
EDIT: Added more bits of code. The server runs side by side with the PyQt5 interface; it is used to receive data which is then used to update the PyQt5 interface accordingly.
The add_url_rule method in the Flask source (here) has a @setupmethod decorator, which says "ignore me after the first request has been handled". (This is a key part of the @app.before_first_request mechanism.)
If you're starting up Flask in one thread, and later calling add_endpoint from another, your invoking thread is in a race with the first request to your app. There are also some serious thread-safety issues that arise from invoking methods that side-effect Flask internals from outside of the Flask main thread.
In your position, I'd rearrange to ensure that all of the add_endpoint calls happened before that server.run starts the app.
You may well still have a problem with monkeypatching, but I'd deal with this first.
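For illustration, here is a sketch of that ordering, reusing the names from the question; the key point is that every add_endpoint call, including any driven by the JSON config, happens before the server thread starts.

# sketch only: same startup as in the question, with the ordering made explicit
from gevent import monkey; monkey.patch_all()

import sys
import threading

from PyQt5.QtWidgets import QApplication
# Server and RyderDisplay are the classes from the question

if __name__ == "__main__":
    app = QApplication(sys.argv)
    server = Server('Ryder Engine')

    # 1. Register *every* endpoint first, in the main thread:
    #    window.initialize(), the HomeConfigurationParser-driven create_ui(),
    #    and anything else that calls server.add_endpoint().
    window = RyderDisplay()
    window.initialize(server)

    # 2. Only then start serving, so no add_url_rule can race a request.
    threading.Thread(target=server.run, daemon=True).start()

    sys.exit(app.exec())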
Let us create an application server and an admin server. Assume that fusionListener and adminListener contain the application and admin logic we want to expose.
import cherrypy
from cherrypy._cpserver import Server
fserver = Server()
fserver.socket_port = 10000
fserver.subscribe()
aserver = Server()
aserver.socket_port = 10001
aserver.subscribe()
And then to start them:
cherrypy.engine.start()
cherrypy.engine.block()
The tree.mount parameters ask for:
the code/business logic as the first parameter
the listening URL
the config parameters
Here is how that looks for the above servers:
cherrypy.tree.mount(fusionListener, r"/fusion.*",fusionConf)
cherrypy.tree.mount(adminListener, r"/admin.*",adminConf)
But where is the parameter for the server itself - which includes the port being listened to?
This is not a well supported case for CherryPy.
The application selection (cherrypy.tree is basically a map of /path -> App) is done before the request dispatch and... long story short, you could use cherrypy.dispatch.VirtualHost and map your sub-applications under a main one that routes depending on the hostname (which the port can be part of). Listening on multiple ports can be done, but again this is a very custom arrangement.
I hope this example illustrates a possible way to achieve such a feat:
import cherrypy
from cherrypy import dispatch
from cherrypy._cpserver import Server


class AppOne:
    @cherrypy.expose
    def default(self):
        return "DEFAULT from app ONE!"

    @cherrypy.expose
    def foo(self):
        return "FOO from app ONE"


class AppTwo:
    @cherrypy.expose
    def default(self):
        return "DEFAULT from app TWO!"

    @cherrypy.expose
    def foo(self):
        return "FOO from app TWO"


class Root:
    def __init__(self):
        self.one = AppOne()
        self.two = AppTwo()


def bind_two_servers(app_one_port, app_two_port):
    # unsubscribe the default server
    cherrypy.server.unsubscribe()
    s1 = Server()
    s2 = Server()
    s1.socket_port = app_one_port
    s2.socket_port = app_two_port
    # subscribe the servers to the `cherrypy.engine` bus events
    s1.subscribe()
    s2.subscribe()


def start_server():
    bind_two_servers(8081, 8082)
    cherrypy.engine.signals.subscribe()
    cherrypy.engine.start()
    cherrypy.engine.block()


config = {
    '/': {
        'request.dispatch': dispatch.VirtualHost(**{
            'localhost:8081': '/one',
            'localhost:8082': '/two',
        })
    }
}

cherrypy.tree.mount(Root(), '/', config)
start_server()
This example will serve AppOne when coming from localhost:8081 and AppTwo when coming from localhost:8082.
The problem is that you can't do multiple cherrypy.tree.mount calls and expect to route into the different applications using the VirtualHost dispatcher; it assumes that the application resolution is already done at that point and only resolves the path within that application.
Having said all of that... I do not recommend this solution; it can get complicated, and it would be better to have some other server in front (like nginx) and serve each path from different processes. This could be an alternative only if you really, really want to avoid any extra server or process in your setup.
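For contrast, here is a sketch of the multi-process alternative (the ports and file names are illustrative): each app becomes a plain single-port CherryPy process, and whatever sits in front, nginx for example, maps hostnames or ports onto them.

# app_one_main.py -- run as its own process; app_two_main.py would mirror this on 8082
import cherrypy


class AppOne:
    @cherrypy.expose
    def default(self):
        return "DEFAULT from app ONE!"

    @cherrypy.expose
    def foo(self):
        return "FOO from app ONE"


if __name__ == '__main__':
    cherrypy.config.update({'server.socket_port': 8081})
    cherrypy.quickstart(AppOne(), '/')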
After a working production app was connected to a local Mongo instance, we decided to move to MongoDB Atlas.
This caused production errors.
Our stack is Docker -> Alpine 3.6 -> Python 2.7.13 -> Flask -> uWSGI (2.0.17) -> nginx, running on AWS
flask-mongoengine-0.9.3 mongoengine-0.14.3 pymongo-3.5.1
When starting the app in staging/production, uWSGI logs "No replica set members found yet".
We don't know why.
We've tried different connection settings, e.g. connect: False, which means lazy connecting: not on initialization, but on the first query.
That caused nginx to fail with a "resource temporarily unavailable" error on some of our apps. We had to restart multiple times for the app to finally start serving requests.
I think the issue is with pymongo and the fact that it's not fork-safe:
http://api.mongodb.com/python/current/faq.html?highlight=thread#id3
and uWSGI is using forks.
I suspect it might be related to the way my app is being initialized.
It might go against "Using PyMongo with Multiprocessing".
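The pattern I understand the PyMongo FAQ to suggest for preforking servers is to create, or at least first use, the client only after the fork. A sketch of what that could look like with uWSGI's postfork hook follows; the hook, the app factory shape, and the MONGO_URI setting are illustrative, not what our app currently does.

# sketch only -- not our current code: keep the connection lazy and make sure
# each uWSGI worker touches MongoDB for the first time *after* the fork
from flask_mongoengine import MongoEngine
from uwsgidecorators import postfork   # only available when running under uWSGI

db = MongoEngine()

def create_app():
    app = FlaskApp(environment_config).app                 # our existing app wrapper
    app.config['MONGODB_SETTINGS'] = {'host': MONGO_URI,   # illustrative setting
                                      'connect': False}    # lazy: no connect at init
    db.init_app(app)
    return app

application = create_app()

@postfork
def connect_after_fork():
    # first real use of the client happens here, once per worker, post-fork
    db.connection.admin.command('ping')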
here is the app init code:
from app import FlaskApp
from flask import current_app
from flask_mongoengine import MongoEngine
import logging

app = None

def init_flask_app(app_instance, **kwargs):
    app_instance.init_instance(environment_config)
    application = app_instance.get_instance()
    app = application.app
    return application, app

application, app = init_flask_app(app_instance, module_name='my_module')
# app_instance.py
import FlaskApp

def init_instance(env):
    global app
    app = FlaskApp(env)
    return app

def get_instance():
    if globals().get('app') is None:
        app = current_app.flask_app_object
    else:
        app = globals().get('app')
    assert app is not None
    return app
class FlaskApp(object):
    def __init__(self, env):
        .....
        # Initialize the DB
        self.db = Database(self.app)
        ....
        # used in app_instance.py to get the flask app object in case it's None
        self.app.flask_app_object = self

    def run_server(self):
        self.app.run(host=self.app.config['HOST'], port=self.app.config['PORT'], debug=self.app.config['DEBUG'])


class Database(object):
    def __init__(self, app):
        self.db = MongoEngine(app)

    def drop_all(self, database_name):
        logging.warn("Dropping database %s" % database_name)
        self.db.connection.drop_database(database_name)


if __name__ == '__main__':
    application.run_server()
help in debugging this will be appreciated!
How can I manage my RabbitMQ connection in a Pyramid app?
I would like to re-use a connection to the queue throughout the web application's lifetime. Currently I am opening/closing connection to the queue for every publish call.
But I can't find any "global" services definition in Pyramid. Any help appreciated.
Pyramid does not need a "global services definition" because you can trivially do that in plain Python:
db.py:
connection = None

def connect(url):
    global connection
    connection = FooBarBaz(url)
your startup file (__init__.py)
from db import connect
if __name__ == '__main__':
    connect(DB_CONNSTRING)
elsewhere:
from db import connection
...
connection.do_stuff(foo, bar, baz)
Having a global (any global) is going to cause problems if you ever run your app in a multi-threaded environment, but is perfectly fine if you run multiple processes, so it's not a huge restriction. If you need to work with threads the recipe can be extended to use thread-local variables. Here's another example which also connects lazily, when the connection is needed the first time.
db.py:
import threading

connections = threading.local()

def get_connection():
    if not hasattr(connections, 'this_thread_connection'):
        connections.this_thread_connection = FooBarBaz(DB_STRING)
    return connections.this_thread_connection
elsewhere:
from db import get_connection
get_connection().do_stuff(foo, bar, baz)
Another common problem with long-living connections is that the application won't auto-recover if, say, you restart RabbitMQ while your application is running. You'll need to somehow detect dead connections and reconnect.
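Here is a sketch of what that detection might look like, still using the FooBarBaz placeholder from above; the is_open() check and the ConnectionError exception are stand-ins for whatever liveness check and "connection lost" exception your real client (pika, for example) exposes.

import threading

connections = threading.local()

def get_connection():
    conn = getattr(connections, 'this_thread_connection', None)
    # reconnect when there is no connection yet, or when the old one has died
    if conn is None or not conn.is_open():          # is_open() is a stand-in for the client's own liveness check
        conn = FooBarBaz(DB_STRING)
        connections.this_thread_connection = conn
    return conn

def publish_safely(message):
    # retry once, for stale connections that only fail when actually used
    try:
        get_connection().do_stuff(message)
    except ConnectionError:                         # substitute the client's real "connection lost" exception
        connections.this_thread_connection = None
        get_connection().do_stuff(message)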
It looks like you can attach objects to the request with add_request_method.
Here's a little example app using that method to make one and only one connection to a socket on startup, then make the connection available to each request:
from wsgiref.simple_server import make_server
from pyramid.config import Configurator
from pyramid.response import Response


def index(request):
    return Response('I have a persistent connection: {} with id {}'.format(
        repr(request.conn).replace("<", "&lt;"),
        id(request.conn),
    ))


def add_connection():
    import socket
    s = socket.socket()
    s.connect(("google.com", 80))
    print("I should run only once")

    def inner(request):
        return s
    return inner


if __name__ == '__main__':
    config = Configurator()
    config.add_route('index', '/')
    config.add_view(index, route_name='index')
    config.add_request_method(add_connection(), 'conn', reify=True)
    app = config.make_wsgi_app()
    server = make_server('0.0.0.0', 8080, app)
    server.serve_forever()
You'll need to be careful about threading / forking in this case though (each thread / process will need its own connection). Also, note that I am not very familiar with Pyramid; there may be a better way to do this.
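If threads are in play, the two recipes can be combined. A sketch under that assumption: the thread-local caching is the new part, while the config.add_request_method(add_connection(), 'conn', reify=True) registration stays exactly as above.

import socket
import threading

_local = threading.local()

def add_connection():
    # returns a callable suitable for config.add_request_method; each worker
    # thread lazily opens and caches its own connection in thread-local storage
    def inner(request):
        if not hasattr(_local, 'conn'):
            s = socket.socket()
            s.connect(("google.com", 80))
            _local.conn = s
        return _local.conn
    return inner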