I'd like to integrate Sentry with Huey task queue's workers / consumers.
I've seen issues about this in both Sentry's and Huey's GitHub trackers, but I found no definitive answer about how to integrate them.
I've read that one way to integrate them is via logging; however, I'm storing my API key in the database and loading it from Python code, not from a hard-coded .ini file (as is recommended).
Here is how I load Sentry in my main (Pyramid) app:
app = config.make_wsgi_app()
if get_siteconfig(dbsession)['sentry_key_backend']:
    try:
        from raven import Client
        from raven.middleware import Sentry
        client = Client(get_siteconfig(dbsession)['sentry_key_backend'])
        app = Sentry(app, client=client)
    except Exception:
        print('SENTRY init error')
Whereas my huey_worker.py is just a bunch of import statements and database setup lines, with no actual app object or function that I could wrap in a try/except block.
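If the logging route is the way to go, I imagine it would look roughly like this in huey_worker.py, reusing the same get_siteconfig(dbsession) lookup as above (untested sketch based on raven's logging handler):

import logging
from raven import Client
from raven.handlers.logging import SentryHandler
from raven.conf import setup_logging

sentry_key = get_siteconfig(dbsession)['sentry_key_backend']
if sentry_key:
    client = Client(sentry_key)
    # Forward ERROR-level log records from any logger (which should
    # include the consumer's error logs) to Sentry.
    handler = SentryHandler(client)
    handler.setLevel(logging.ERROR)
    setup_logging(handler)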
What is the recommended way to integrate Sentry in this case?
Related
Hello fellow developers,
I'm trying to create a small web app that would allow me to monitor multiple Binance accounts from a dashboard and maybe, in the future, perform some small automatic trading actions.
My frontend is implemented with Vue + Quasar and my backend server is based on Python Flask for the REST API.
What I would like to do is to be able to start a background process dynamically when a specific endpoint of my server is called. Once this process is started on the server, I would like it to communicate with my Vue client via WebSocket.
Right now I can spawn the worker and create the WebSocket communication, but somehow I can't figure out how to make all the threads in my worker work together. Let me be a bit more specific:
Once my worker is started, I try to create at least two threads: one is the infinite loop allowing me to automate some small actions, and the other is the flask-socketio server that handles the socket connections. Here is the code of that worker:
customWorker.py
import os
import time
from flask import Flask
from flask_socketio import SocketIO, send, emit
import threading
import json
import eventlet

# custom class allowing me to communicate with my mongoDB
from db_wrap import DbWrap
from binance.client import Client
from binance.exceptions import BinanceAPIException, BinanceWithdrawException, BinanceRequestException
from binance.websockets import BinanceSocketManager


def process_message(msg):
    print('got a websocket message')
    print(msg)


class customWorker:
    def __init__(self, workerId, sleepTime, dbWrap):
        self.workerId = workerId
        self.sleepTime = sleepTime
        self.socketio = None
        self.dbWrap = DbWrap()
        # this retrieves worker configuration from database
        self.config = json.loads(self.dbWrap.get_worker(workerId))
        keys = self.dbWrap.get_worker_keys(workerId)
        self.binanceClient = Client(keys['apiKey'], keys['apiSecret'])

    def handle_message(self, data):
        print('My PID is {} and I received {}'.format(os.getpid(), data))
        send(os.getpid())

    def init_websocket_server(self):
        app = Flask(__name__)
        socketio = SocketIO(app, async_mode='eventlet', logger=True, engineio_logger=True, cors_allowed_origins="*")
        eventlet.monkey_patch()
        socketio.on_event('message', self.handle_message)
        self.socketio = socketio
        self.app = app

    def launch_main_thread(self):
        while True:
            print('My PID is {} and workerId {}'.format(os.getpid(), self.workerId))
            if self.socketio is not None:
                info = self.binanceClient.get_account()
                self.socketio.emit('my_account', info, namespace='/')

    def launch_worker(self):
        self.init_websocket_server()
        self.socketio.start_background_task(self.launch_main_thread)
        self.socketio.run(self.app, host="127.0.0.1", port=8001, debug=True, use_reloader=False)
Once the REST endpoint is called, the worker is spawned by calling the birth_worker() method of a "Broker" object available within my server:
from multiprocessing import Process
from custom_worker import customWorker
#...
def create_worker(self, workerid, sleepTime, dbWrap):
    worker = customWorker(workerid, sleepTime, dbWrap)
    worker.launch_worker()

def birth_worker(self, workerid, dbWrap):
    p = Process(target=self.create_worker, args=(workerid, 10, dbWrap))
    p.start()
So when this is done, the worker is launched in a separate process that successfully creates the threads and listens for socket connections. But my problem is that I can't use my binanceClient in my main thread. I think it uses threads internally, and the fact that I use eventlet, and in particular the monkey_patch() function, breaks it. When I try to call the binanceClient.get_account() method I get the error AttributeError: module 'select' has no attribute 'poll'.
I'm pretty sure it comes from monkey_patch(), because if I make the call in the __init__() method of my worker (before patching) it works and I can get the account info. So I guess there is a conflict here that I've been trying to resolve, unsuccessfully.
I've tried using only the threading mode for my socket.io app by passing async_mode='threading', but then my flask-socketio app won't start and listen for sockets, as the line self.socketio.run(self.app, host="127.0.0.1", port=8001, debug=True, use_reloader=False) blocks everything.
I'm pretty sure I have an architecture problem here and that I shouldn't start my app by calling socketio.run(). I've been unable to start it with Gunicorn, for example, because I need it to be dynamic and to launch it from my Python scripts. I've been struggling to find the proper way to do this, and that's why I'm here today.
Could someone please give me a hint on how this is supposed to be achieved? How can I dynamically spawn a subprocess that manages a socket server thread, an infinite loop thread, and connections with binanceClient? I've been roaming Stack Overflow without success; any advice is welcome, even an architecture rework.
Here is my environment:
Manjaro Linux 21.0.1
pip-chill:
eventlet==0.30.2
flask-cors==3.0.10
flask-socketio==5.0.1
pillow==8.2.0
pymongo==3.11.3
python-binance==0.7.11
websockets==8.1
I've uploaded some code to a server. The code was working locally, but when I upload it to the server it gives me an Internal Server Error. The website is running with WSGI and the code is:
try:
    from decksite import main, APP as application
except Exception as e:
    from shared import repo
    repo.create_issue('Error starting website', exception=e)

if __name__ == '__main__':
    print('Running manually. Is something wrong?')
    application.run(host='0.0.0.0', debug=False)
So both the try and the except are failing. I want to catch a second exception and pass everything to a simple Flask application that would output both exceptions to the browser and log them to a file. The problem is that I don't know how to pass the exception to the error_app Flask app, and that it breaks on the line where I set the logging config. Here is what I've done; I'm only getting NoneType: None instead of the full exception.
import os, sys
sys.path.append("/home/myuser/public_html/flask")

try:
    from decksite import main, APP as application
except Exception as error:
    # from shared import repo
    # repo.create_issue('Error starting decksite', exception=error)
    # sys.path.insert(0, os.path.dirname(__file__))
    # from error_app import app as application

    # This is the code that goes into the error flask application
    import logging
    import traceback
    from flask import Flask, __version__

    app = Flask(__name__)
    application = app

    @app.route("/")
    def hello():
        return traceback.format_exc()

    # The next line gives Internal Server Error
    logging.basicConfig(filename='example.log', level=logging.DEBUG)
    logging.exception(error)

if __name__ == '__main__':
    print('Running manually. Is something wrong?')
    application.run(host='0.0.0.0', debug=False)
I don't have sudo on the server and can't SSH into it, so unless I'm able to log the errors I'm not going to be able to fix anything.
Edit: I've almost got it working the way I want:
.htaccess
website.wsgi
error_app.py
website/__init__.py
website/main.py
Create a custom 500 handler and print out the traceback:
import traceback
from flask import render_template

@app.errorhandler(500)
def internal_server_error(e):
    return render_template('500_error.html', traceback=traceback.format_exc())
Have your '500_error.html' template show you the traceback.
You mentioned that a 500 Internal Server Error is coming: things work locally but fail on the server. Since you don't have SSH access, it might be tough to debug. Using something like Docker or Kubernetes to build and deploy could help here. I can suggest some ways to debug. If the code never even reaches your try/except, the likely reason is that the server itself is not starting, for example because of a missing requirement, a failing import, or something else entirely.
Debug Steps
1. Create a virtual environment and reinstall the requirements in it, mirroring your server. This will help you identify whether a requirement is missing.
2. If your environment is not production and you are only testing the application on the server, set debug=True. It will show the error on the front end. This is definitely not a recommended approach; I only suggest it because you don't have SSH access.
3. If possible, create a simple route, say /hello, that just returns "hello", and check whether it gives you the right result. This will tell you whether your server is starting at all. This too is not recommended for production.
You can also hook into the Flask app's before-request and after-request handlers; this might also be useful (see the sketch below).
Hopefully the first debugging step will help you a lot and you might not need to go to steps 2 and 3; I included them for the worst-case scenario.
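For example, something like this minimal sketch (assuming your Flask app object is called app; the /hello route and the debug.log filename are just placeholders) would show whether requests are reaching the application at all:

import logging
from flask import request

logging.basicConfig(filename='debug.log', level=logging.DEBUG)

@app.route('/hello')
def hello():
    # If this responds, the WSGI app itself is starting fine.
    return 'hello'

@app.before_request
def log_request():
    logging.debug('Handling %s %s', request.method, request.path)

@app.after_request
def log_response(response):
    logging.debug('Returning status %s', response.status)
    return response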
I am trying to add websocket functionality to an existing application. The existing structure of the app is
In /server/__init__.py:
from connexion import App
...
connexion_app = App(__name__, specification_dir='swagger/') # Create Connexion App
app = connexion_app.app # Configure Flask Application
...
connexion_app.add_api('swagger.yaml', swagger_ui=True) # Initialize Connexion api
In startserver.py:
from server import connexion_app
connexion_app.run(
    processes=8,
    debug=True
)
In this way, I was able to specify the number of processes. There are some long-running tasks that make it necessary to have as many processes as possible.
I have modified the application to include WebSocket functionality as below, but it seems that I only have one process available. Once the application attempts to run one of the long-running tasks, all API calls hang. Also, if the long-running task fails, the application is stuck in a hanging state.
In /server/__init__.py:
from connexion import App
import socketio
...
connexion_app = App(__name__, specification_dir='swagger/') # Create Connexion App
sio = socketio.Server() # Create SocketIO for websockets
app = connexion_app.app # Configure Flask Application
...
connexion_app.add_api('swagger.yaml', swagger_ui=True) # Initialize Connexion api
In startserver.py:
import socketio
import eventlet
from server import sio
from server import app
myapp = socketio.Middleware(sio, app)
eventlet.wsgi.server(eventlet.listen(('', 5000)), myapp)
What am I missing here?
(side note: If you have any resources available to better understand the behemoth of the Flask object, please point me to them!!)
Exact answer to the question: Eventlet's built-in WSGI server does not support multiple processes.
Approach to get the best solution for the described problem: share one file that contains the absolute minimum code required to reproduce the problem, maybe at https://github.com/eventlet/eventlet/issues or any other way you prefer.
In the meantime, some things to poke at: call eventlet.monkey_patch(), and isolate Eventlet and long blocking calls in separate threads or processes.
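For example, a sketch based on the startserver.py from the question, with the monkey patching moved to the very top, before any other import (not a guaranteed fix, just the usual Eventlet pattern):

# startserver.py
import eventlet
# Patch the standard library before anything else is imported so that
# blocking calls (sockets, sleeps, ...) cooperate with Eventlet.
eventlet.monkey_patch()

import eventlet.wsgi
import socketio
from server import sio, app

myapp = socketio.Middleware(sio, app)
eventlet.wsgi.server(eventlet.listen(('', 5000)), myapp)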
I have written a single-user application that currently works with Flask's internal web server. It does not seem to be very robust, and it crashes with all sorts of socket errors as soon as a page takes a long time to load and the user navigates elsewhere while waiting. So I thought I would replace it with Apache.
The problem is, my current code is a single program that first launches about ten threads to do stuff, for example setting up SSH tunnels to remote servers and ZMQ connections to communicate with a database located there. Finally, it enters the run() loop to start the internal server.
I followed all sorts of instructions and managed to get Apache to serve the initial page. However, everything goes wrong, as I now don't have any worker threads available, nor any globally initialised classes, and the global variables holding the interfaces to communicate with these threads no longer exist.
Obviously I am not a web developer.
How badly "wrong" is my current code? Is there any way to make it work with Apache with a reasonable amount of work? Can I have Apache just replace the run() part and keep a running application with which Apache communicates? My current app, in a very simplified form (without the data processing threads), is something like this:
from flask import Flask, render_template

comm = None
app = Flask(__name__)

class CommsHandler(object):
    def __init__(self):
        # Init communication links to external servers and databases
        pass

    def request_data(self, request):
        # Use initialised links to request something
        return result

@app.route("/", methods=["GET"])
def mainpage():
    return render_template("main.html")

@app.route("/foo", methods=["GET"])
def foo():
    a = comm.request_data("xyzzy")
    return render_template("foo.html", data=a)

comm = CommsHandler()
app.run()
Or have I done this completely wrong? When I remove app.run() and just import the app object in the WSGI script, I do get a response from the main page, as it does not need a reference to the global variable comm.
/foo does not work, as comm is an uninitialised variable, and I can see why, of course. I just never thought this would need to be exported to Apache or any other web server.
So the question is: can I launch this application somehow in an rc script at boot, set up its communication links and everything, and have Apache/WSGI just call functions of the running application instead of launching a new one?
Hannu
This is a simple app with Flask running on the internal server:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run()
To run it on an Apache server, check out the FastCGI docs:
from flup.server.fcgi import WSGIServer
from yourapplication import app

if __name__ == '__main__':
    WSGIServer(app).run()
I am trying to run a flask-restless app in Apache using mod_wsgi. This works fine with the development server. I have read everything I can find, and none of the answers I have seen seem to work for me. The app handles non-database requests properly but gives the following error when I try to access a URL that requires database access:
OperationalError: (OperationalError) (2003, "Can't connect to MySQL server on 'localhost' ([Errno 13] Permission denied)") None None
I have whittled it down to basically the flask-restless quick-start, with my config and my flask-sqlalchemy models imported (from app import models). Here is my Python code:
import flask
import flask.ext.sqlalchemy
import flask.ext.restless
import sys

sys.path.insert(0, '/proper/path/to/application')

application = flask.Flask(__name__, static_url_path="")
application.debug = True
application.config.from_object('config')
db = flask.ext.sqlalchemy.SQLAlchemy(application)

from app import models

# Create the Flask-Restless API manager.
manager = flask.ext.restless.APIManager(application, flask_sqlalchemy_db=db)

# Create API endpoints, which will be available at /api/<tablename> by
# default. Allowed HTTP methods can be specified as well.
manager.create_api(models.Asset, methods=['GET'])

# start the flask loop
if __name__ == '__main__':
    application.run()
I assume that mod_wsgi isn't having a problem finding the config file containing the database access details, since I don't get an error when reading the config, and I also don't get an error on from app import models.
My research so far has led me to believe that this has something to do with the SQLAlchemy db connection existing in the wrong scope or context, possibly complicated by the flask-restless API manager. I can't seem to wrap my head around it.
Your code under Apache/mod_wsgi will run as a special Apache user. That user likely doesn't have the privileges required to connect to the database.
Even though it says 'localhost', which you might expect to imply a normal TCP socket connection, some database clients see 'localhost' and automatically try to use the database's UNIX socket instead. The Apache user may not have access to that UNIX socket.
Alternatively, when going through a UNIX socket connection, the database checks whether the Apache user has access; if the database hasn't been set up to allow the Apache user access, it will then fail.
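If that is what is happening, one quick way to test the theory is to force a TCP connection by using 127.0.0.1 instead of 'localhost' in the SQLAlchemy URI (the credentials and database name below are placeholders):

# config.py -- hypothetical values; only the host part matters for this test
SQLALCHEMY_DATABASE_URI = 'mysql://dbuser:dbpassword@127.0.0.1:3306/mydatabase'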
Consider using mod_wsgi's daemon mode and configure it to run as a different user from the Apache user, one you know has access to the database.
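A minimal sketch of that configuration (the process group name, user, group, and paths are placeholders for your own setup):

WSGIDaemonProcess flaskapp user=dbappuser group=dbappgroup processes=2 threads=15
WSGIProcessGroup flaskapp
WSGIScriptAlias / /path/to/application.wsgi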