Flask Thread Not Working when Deployed in uWSGI server - python

I have done threading in my Flask application. I have to log data to a separate MySQL table, and this has to happen asynchronously. My main function collects all the data and starts a thread just before sending the Flask response, so that the response is sent on time while the thread function runs in the background. This works fine on the local Flask development server. But when I deploy it to a uWSGI server, I have to enable threads in uWSGI, and after that, when my thread function is called, the data passed to the thread is lost and my variables are empty.
My main Flask function:
@app.route('/', methods=['POST'])
def mainfunction():
    Dictionary['Name'] = 'MyName'
    Dictionary['Age'] = 'MyAge'
    Dictionary['Address'] = 'MyAddress'
    # start the logging thread, then return the response immediately
    t1 = threading.Thread(target=loadinDBUsingThread, args=(Dictionary,))
    t1.start()
    return json.dumps(Dictionary)
My thread function:
def loadinDBUsingThread(Dictionary):
    localVariable0 = Dictionary['Name']
    localVariable1 = Dictionary['Age']
    localVariable2 = Dictionary['Address']
    # insert these variables into the database
I get a KeyError: 'Name' is not found in the dictionary. I don't understand how my variables are getting lost. Please help me with this.
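One note on the uWSGI side, for reference: by default uWSGI does not initialise the Python GIL, so threads started from application code with threading.Thread are never scheduled unless thread support is switched on. This does not by itself explain the KeyError, but it is worth confirming the configuration. A minimal sketch (the module and process settings are assumptions; enable-threads is the relevant option):

; uwsgi.ini -- minimal sketch
[uwsgi]
module = myapp:app
master = true
processes = 4
; required so threads started from application code actually run
enable-threads = true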

Related

python: dynamically spawn multithreaded workers with flask-socketio and python-binance

Hello fellow developers,
I'm actually trying to create a small webapp that would allow me to monitor multiple Binance accounts from a dashboard, and maybe in the future perform some small automated trading actions.
My frontend is implemented with Vue + Quasar, and my backend is a Python Flask server exposing the REST API.
What I would like is to be able to start a background process dynamically when a specific endpoint of my server is called. Once this process is started on the server, I would like it to communicate via websocket with my Vue client.
Right now I can spawn the worker and create the websocket communication, but somehow I can't figure out how to make all the threads in my worker work together. Let me get a bit more specific:
Once my worker is started, I try to create at least two threads: one is an infinite loop allowing me to automate some small actions, and the other is the flask-socketio server that handles the socket connections. Here is the code of that worker:
customWorker.py
import os
import time
import json
import threading

import eventlet
from flask import Flask
from flask_socketio import SocketIO, send, emit

# custom class allowing me to communicate with my mongoDB
from db_wrap import DbWrap

from binance.client import Client
from binance.exceptions import BinanceAPIException, BinanceWithdrawException, BinanceRequestException
from binance.websockets import BinanceSocketManager


def process_message(msg):
    print('got a websocket message')
    print(msg)


class customWorker:
    def __init__(self, workerId, sleepTime, dbWrap):
        self.workerId = workerId
        self.sleepTime = sleepTime
        self.socketio = None
        self.dbWrap = DbWrap()
        # this retrieves the worker configuration from the database
        self.config = json.loads(self.dbWrap.get_worker(workerId))
        keys = self.dbWrap.get_worker_keys(workerId)
        self.binanceClient = Client(keys['apiKey'], keys['apiSecret'])

    def handle_message(self, data):
        print('My PID is {} and I received {}'.format(os.getpid(), data))
        send(os.getpid())

    def init_websocket_server(self):
        app = Flask(__name__)
        socketio = SocketIO(app, async_mode='eventlet', logger=True,
                            engineio_logger=True, cors_allowed_origins="*")
        eventlet.monkey_patch()
        socketio.on_event('message', self.handle_message)
        self.socketio = socketio
        self.app = app

    def launch_main_thread(self):
        while True:
            print('My PID is {} and workerId {}'
                  .format(os.getpid(), self.workerId))
            if self.socketio is not None:
                info = self.binanceClient.get_account()
                self.socketio.emit('my_account', info, namespace='/')

    def launch_worker(self):
        self.init_websocket_server()
        self.socketio.start_background_task(self.launch_main_thread)
        self.socketio.run(self.app, host="127.0.0.1", port=8001,
                          debug=True, use_reloader=False)
Once the REST endpoint is called, the worker is spawned by calling the birth_worker() method of a "Broker" object available within my server:
from multiprocessing import Process

from custom_worker import customWorker

# ...
def create_worker(self, workerid, sleepTime, dbWrap):
    worker = customWorker(workerid, sleepTime, dbWrap)
    worker.launch_worker()

def birth_worker(self, workerid, sleepTime, dbWrap):
    p = Process(target=self.create_worker, args=(workerid, sleepTime, dbWrap))
    p.start()
So when this is done, the worker is launched in a separate process that successfully creates threads and listens for socket connections. But my problem is that I can't use my binanceClient in my main thread. I think it is using threads, and the fact that I use eventlet, in particular its monkey_patch() function, breaks it. When I try to call the binanceClient.get_account() method I get the error AttributeError: module 'select' has no attribute 'poll'.
I'm pretty sure it comes from monkey_patch, because if I call get_account() in the __init__() method of my worker (before patching) it works and I can get the account info. So I guess there is a conflict here that I've been trying to resolve, unsuccessfully.
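One detail worth checking (an observation rather than a confirmed fix): eventlet's documentation recommends calling monkey_patch() as early as possible, before importing modules that use the patched stdlib, whereas customWorker.py above patches inside init_websocket_server(), long after python-binance has been imported. A minimal sketch of that ordering:

# patch the stdlib before anything else touches sockets/select
import eventlet
eventlet.monkey_patch()

# only import network-using libraries after the patch
from flask import Flask
from flask_socketio import SocketIO
from binance.client import Client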
I've tried using only the thread mode for my socket.io app by setting async_mode='threading', but then my flask-socketio app won't start and listen for sockets, as the line self.socketio.run(self.app, host="127.0.0.1", port=8001, debug=True, use_reloader=False) blocks everything.
I'm pretty sure I have an architecture problem here and that I shouldn't start my app by calling socketio.run. I've been unable to start it with gunicorn, for example, because I need it to be dynamic and called from my Python scripts. I've been struggling to find the proper way to do this, and that's why I'm here today.
Could someone please give me a hint on how this is supposed to be achieved? How can I dynamically spawn a subprocess that manages a socket-server thread, an infinite-loop thread, and the connections with binanceClient? I've been roaming Stack Overflow without success; every piece of advice is welcome, even an architecture reforge.
Here is my environment:
Manjaro Linux 21.0.1
pip-chill:
eventlet==0.30.2
flask-cors==3.0.10
flask-socketio==5.0.1
pillow==8.2.0
pymongo==3.11.3
python-binance==0.7.11
websockets==8.1

How to use celery in a Flask app to send a task from one server to another

I have two Flask apps, one on server A, the other on server B. What I want to do is generate an asynchronous task from the app on server A on some condition and send it to the app on server B (i.e. invoke a function on server B). I think celery's send_task method would be used for this, but I don't know how to use it.
Let's say I have a function 'func' in my app on server B:
def func(x):
    return x
I want to invoke 'func' from another function 'somefunc' in my app on server A, something like this:
def somefunc(x):
    if condition is True:
        func(x)
How would I use celery to implement this logic? Please help and thanks in advance
On service A you would have this:
from celery.execute import send_task

@app.route('/')
def endpoint():
    if cond(x):
        send_task(
            'task_service_b',
            (param1, param2),
            exchange='if u have a specific one',
            routing_key='a routing key'
        )
On service B, you would need to have the app listening on 'a routing key' and bound to the exchange 'if u have a specific one':
from kombu import Exchange, Queue, binding

messaging_exchange = Exchange('if u have a specific one')

bindings = (
    binding(messaging_exchange, routing_key=routing_key)
    for routing_key in ['a routing key']
)

default_binding = binding(
    Exchange(celery_app.conf.task_default_queue),
    routing_key=celery_app.conf.task_default_queue
)

celery_app.conf.task_queues = [
    # default queue has same routing key as name of the queue
    Queue(celery_app.conf.task_default_queue, [default_binding]),
    Queue('service.b.queue', list(bindings))
]
Otherwise you can bypass all of this and just send_task to the service B queue.
You will need a celery worker on service B, as the task will need to be consumed by a worker there.
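For completeness, a minimal sketch of the consuming side; send_task dispatches by name, so service B must register a task under the 'task_service_b' name used above (the celery_app instance is an assumption):

# service B: register the task under the name used by send_task on A
@celery_app.task(name='task_service_b')
def func(param1, param2):
    # the question's func simply returns its input
    return param1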
I'm assuming from your phrasing that you're running different apps on servers A and B. If they were the same app, using the same celery broker and backend, then named queues, with one queue being served by a celery worker that runs only on B, could give you the effect you want.
If A and B are running different code, a safe approach is to have the asynchronous task on A make an HTTP request to an endpoint on B, with that endpoint calling the function and sending the answer back in an HTTP response for the async task on A to deal with.
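A minimal sketch of that HTTP-bridge approach, assuming the requests library and a hypothetical /func endpoint exposed by the app on B:

import requests
from celery import shared_task

@shared_task
def call_func_on_b(x):
    # server B exposes func behind a plain HTTP endpoint (hypothetical URL)
    resp = requests.post('http://server-b.example.com/func', json={'x': x})
    resp.raise_for_status()
    return resp.json()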
Elaborating on the named-queue approach:
A slow-running async task (say, in tasks.py)
@celery.task
def slow_running_task():
    ...
that's configured to run in a specific queue
CELERY_ROUTES = {
    'tasks.slow_running_task': {'queue': 'slow'},
    # ...
}
can be run on a specific server by only running a celery worker with -Q slow on that server.
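For instance, on the target server one could start a worker pinned to that queue (the tasks module name is an assumption):

celery -A tasks worker -Q slow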
There are nuances. It's worth skimming the celery docs.

Flask RuntimeError: working outside of application context

I want to run my Flask app with websockets. Everything seems to be OK until I start my joiner class (running as a thread) and then want to register a callback function. This works fine with the Flask development server.
As I am not very good at English, I have trouble understanding the context issues with Flask. Any help would be very much appreciated.
@socketio.on('change_R8', namespace='/fl')
def change_Relay8(R8_stat):
    if R8_stat == 'on':
        # print("Relay 8 on")
        ui.set_relay(8, 1, 0)
    elif R8_stat == 'off':
        # print("Relay 8 off")
        ui.set_relay(8, 0, 0)

# Listen for SocketIO event that will change analog output
@socketio.on('change_ao', namespace='/fl')
def change_ao(ao_value):
    # print("set ao to: ", ao_value)
    ui.set_ao(ao_value)

# --- callback function from UniPi_joiner_class ---------------------------
def unipi_change(event, data):
    # print("Webserver in: ", event, data)
    emit_to_all_clients(event, data)

# main program -------------------------------------------------------------
if __name__ == "__main__":
    log.text("Flask web server started")
    print("Flask web server started")
    joiner = unipi_joiner("10.0.0.52", 0)
    joiner.on_unipi_change(unipi_change)
    socketio.run(app, host='127.0.0.1', use_reloader=False, debug=False)
    log.text("Flask web server stopped")
The joiner function delivers data from sensors in the format event, data (JSON), which I emit to my website with broadcast. The data comes from two different time-dependent sources and is joined together in the joiner function using queues. This works fine with the Flask development server. When I use eventlet, joiner.on_unipi_change(unipi_change) does not work and raises a context error. I tested the server with data generated from within Flask and it worked.
Question: would it be possible to deliver the sensor data through a websocket to my Flask server, and then from the Flask server to my website? This would be very interesting, as I would have several Raspberry Pi 3s collecting data and sending it to my web server.
Regarding the complete stack trace, I need some guidelines (sorry, Flask beginner).
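One avenue worth trying (a common workaround sketched under assumptions, since the full stack trace isn't shown): flask_socketio's standalone emit() helper only works inside a request context, i.e. inside an event handler, whereas the emit() method of the SocketIO instance itself can be called from background threads:

# callback invoked from the joiner thread, outside any request context
def unipi_change(event, data):
    # SocketIO.emit does not need a request context, unlike the
    # standalone flask_socketio.emit helper
    socketio.emit(event, data, namespace='/fl')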

redis get function return None

I am working on a Flask application that interacts with Redis. The application is deployed on Heroku, with a Redis add-on.
When I do some testing with the interaction, I am not able to get the key-value pair that I just set; instead, I always get None as the return value. Here is the example:
import os

from flask import Flask
import redis

app = Flask(__name__)

redis_url = os.getenv('REDISTOGO_URL', 'redis://localhost:6379')
redis = redis.from_url(redis_url)

@app.route('/test')
def test():
    redis.set("test", "{test1: test}")
    print redis.get("test")  # prints None here
    return "what the freak"

if __name__ == "__main__":
    app.run(host='0.0.0.0')
As shown above, the test route prints None, which means the value was not set. I am confused. When I test the server in my local browser it works, and when I interact with Redis using the Heroku Python shell it works too.
Testing with the Python shell:
heroku run python
from server import redis
redis.set('test', 'i am here') # return True
redis.get('test') # return i am here
I am confused now. How should I properly interact with redis using Flask?
Redis-py by default constructs a ConnectionPool client, and this is probably what the from_url helper function is doing. While Redis itself is single-threaded, commands issued through a connection pool have no guaranteed order of execution. For a single client, construct a redis.StrictRedis client directly, or pass the param connection_pool=None. This is preferable for simple commands that are low in number, as there is less connection-management overhead. Alternatively, you can use a pipeline in the context of a connection pool to serialise a batch of operations.
https://redis-py.readthedocs.io/en/latest/#redis.ConnectionPool
https://redis-py.readthedocs.io/en/latest/#redis.Redis.pipeline
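A minimal sketch of the pipeline approach mentioned above, reusing the redis client from the question:

@app.route('/test')
def test():
    # queue both commands, then run them in order on a single connection
    pipe = redis.pipeline()
    pipe.set("test", "{test1: test}")
    pipe.get("test")
    set_ok, value = pipe.execute()
    return value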
I did more experiments on this. There seems to be an issue related to delay; the modification below makes it work:
import time

@app.route('/test')
def test():
    redis.set("test", "{test1: test}")
    time.sleep(5)  # add a delay to let the set finish
    print redis.get("test")  # prints "{test1: test}" here
    return "now it works"
I read the documentation on Redis, and Redis seems to be single-threaded, so I am not sure why the get call would execute before the set is done. Someone with more experience, please post an explanation.

Replacing flask internal web server with Apache

I have written a single-user application that currently works with Flask's internal web server. It does not seem to be very robust, and it crashes with all sorts of socket errors as soon as a page takes a long time to load and the user navigates elsewhere while waiting. So I thought I'd replace it with Apache.
The problem is, my current code is a single program that first launches about ten threads to do stuff, for example setting up SSH tunnels to remote servers and ZMQ connections to communicate with a database located there. Finally, it enters the run() loop to start the internal server.
I followed all sorts of instructions and managed to get Apache to serve the initial page. However, everything goes wrong, as I now don't have any worker threads available, nor any globally initialised classes, and none of my global variables holding interfaces to communicate with these threads exist.
Obviously I am not a web developer.
How badly "wrong" is my current code? Is there any way to make it work with Apache with a reasonable amount of work? Can I have Apache just replace the run() part and have a running application with which Apache communicates? My current app, in a very simplified form (without the data-processing threads), is something like this:
comm = None
app = Flask(__name__)

class CommsHandler(object):
    def __init__(self):
        # init communication links to external servers and databases
        ...

    def request_data(self, request):
        # use the initialised links to request something
        return result

@app.route("/", methods=["GET"])
def mainpage():
    return render_template("main.html")

@app.route("/foo", methods=["GET"])
def foo():
    a = comm.request_data("xyzzy")
    return render_template("foo.html", data=a)

comm = CommsHandler()
app.run()
Or have I done this completely wrong? When I remove app.run() and just import the app object in the WSGI script, I do get a response from the main page, as it does not need a reference to the global variable comm.
/foo does not work, as comm is an uninitialised variable, and I can see why, of course. I just never thought this would need to be exported to Apache or any other web server.
So the question is: can I launch this application somehow in an rc script at boot, set up its communication links and everything, and have Apache/WSGI just call functions of the running application instead of launching a new one?
Hannu
This is the simple app with Flask running on the internal server:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run()
To run it on an Apache server, check out the FastCGI docs:
from flup.server.fcgi import WSGIServer
from yourapplication import app

if __name__ == '__main__':
    WSGIServer(app).run()
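Regarding the comm initialisation in the question, here is a minimal sketch of a mod_wsgi entry script that does the start-up work at import time, so comm exists before any request is served (the myapp module name and path are assumptions; whether the background threads fit Apache's multi-process model is a separate question):

# app.wsgi -- loaded once per Apache/mod_wsgi process
import sys
sys.path.insert(0, '/path/to/the/app')

import myapp

# do at import time what used to happen just before app.run()
myapp.comm = myapp.CommsHandler()

# mod_wsgi looks for a module-level callable named "application"
application = myapp.app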
