Notify or call back Flask after a remote Celery worker completes - python

I am running the Celery client (Flask) and the worker on two different machines. Once the worker has completed the task, I need to call back a function on the client side. Is this possible?
Celery client:
celery_app = Celery('test_multihost', broker='amqp://test:test@<worker_ip>/test_host', backend='rpc')
result = testMethod1.apply_async((param1, param2, param3), link=testMethod2.s())

@celery_app.task
def testMethod2():
    # testMethod2 body.
Celery worker:
celery_app = Celery('test_multihost', broker='amqp://test:test@<worker_ip>/test_host', backend='rpc')

@celery_app.task
def testMethod1():
    # testMethod1 body
But the problem is that testMethod2 is executed on the Celery worker side, not on the client side.
Is there any way that I can call back the method on the client side?

One way to do this is to have Celery write its result to a database table, and have Flask poll for the result of the task by repeatedly querying that table. A similar construct would be to keep a register of completed tasks in Redis, but the gist is the same.
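A minimal sketch of that polling idea, using Celery's own result backend through AsyncResult instead of a custom table (the module and endpoint names here are illustrative, not from the original post):

# Flask side: poll the result backend for a previously submitted task.
from flask import Flask, jsonify
from celery.result import AsyncResult
from tasks import celery_app  # hypothetical module that defines the Celery app

app = Flask(__name__)

@app.route('/status/<task_id>')
def task_status(task_id):
    result = AsyncResult(task_id, app=celery_app)
    if result.ready():
        # result.result holds the return value on success, or the exception on failure
        payload = result.result if result.successful() else str(result.result)
        return jsonify({'state': result.state, 'result': payload})
    return jsonify({'state': result.state})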
Do you want to trigger a completion message to the user? If you can notify by email or text message, you could just let Celery handle that, of course.
If you need to kick off some Flask process, and it really needs to live inside Flask's ecosystem for some reason, have the worker use the requests module to call an endpoint that Flask is listening on.
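A rough sketch of that callback-over-HTTP idea; the endpoint URL, the bind=True task body, and the do_the_work helper are assumptions for illustration, not from the original post:

# Worker side: when the task finishes, notify the Flask client over HTTP.
import requests
from celery import Celery

celery_app = Celery('test_multihost', broker='amqp://test:test@<worker_ip>/test_host', backend='rpc')

# Assumed Flask route on the client machine, e.g. @app.route('/task-done/<task_id>', methods=['POST'])
FLASK_CALLBACK = 'http://<flask_host>:5000/task-done/{task_id}'

@celery_app.task(bind=True)
def testMethod1(self, param1, param2, param3):
    result = do_the_work(param1, param2, param3)  # placeholder for the real task body
    # Tell the Flask app that this task finished; its view function can then run the callback logic.
    requests.post(FLASK_CALLBACK.format(task_id=self.request.id), json={'result': result}, timeout=5)
    return result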

I solved this problem using the after_task_publish signal from celery.signals.
The code snippet is as follows:
@after_task_publish.connect(sender=<registered_celery_task>)
def testMethod2(sender=None, headers=None, body=None, **kwargs):
    # callback body
testMethod2 will be called on the client side once the task has been dispatched to the remote Celery worker.
Here I can access the task's details through the headers parameter.
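For context, a fuller, self-contained sketch of this signal wiring; the task name passed to sender is an assumption and should be replaced with the registered name of your task:

from celery import Celery
from celery.signals import after_task_publish

celery_app = Celery('test_multihost', broker='amqp://test:test@<worker_ip>/test_host', backend='rpc')

# Runs in the publishing (Flask client) process as soon as the task message
# has been handed to the broker, not inside the remote worker.
@after_task_publish.connect(sender='tasks.testMethod1')  # assumed registered task name
def testMethod2(sender=None, headers=None, body=None, **kwargs):
    # With task message protocol 2, the task id and name travel in the headers.
    task_id = headers.get('id') if headers else None
    print('published {} with id {}'.format(sender, task_id))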

Related

python: dynamically spawn multithreaded workers with flask-socketio and python-binance

Hello fellow developers,
I'm trying to create a small webapp that would allow me to monitor multiple Binance accounts from a dashboard, and maybe in the future perform some small automatic trading actions.
My frontend is implemented with Vue+Quasar and my backend server is based on Python Flask for the REST API.
What I would like to do is to be able to start a background process dynamically when a specific endpoint of my server is called. Once this process is started on the server, I would like it to communicate via websocket with my Vue client.
Right now I can spawn the worker and create the websocket communication, but somehow I can't figure out how to make all the threads in my worker work together. Let me get a bit more specific:
Once my worker is started, I try to create at least two threads. One is the infinite loop allowing me to automate some small actions, and the other one is the flask-socketio server that will handle the socket connections. Here is the code of that worker:
customWorker.py
import os  # needed for the os.getpid() calls below
import time
from flask import Flask
from flask_socketio import SocketIO, send, emit
import threading
import json
import eventlet
# custom class allowing me to communicate with my mongoDB
from db_wrap import DbWrap
from binance.client import Client
from binance.exceptions import BinanceAPIException, BinanceWithdrawException, BinanceRequestException
from binance.websockets import BinanceSocketManager


def process_message(msg):
    print('got a websocket message')
    print(msg)


class customWorker:
    def __init__(self, workerId, sleepTime, dbWrap):
        self.workerId = workerId
        self.sleepTime = sleepTime
        self.socketio = None
        self.dbWrap = DbWrap()
        # this retrieves the worker configuration from the database
        self.config = json.loads(self.dbWrap.get_worker(workerId))
        keys = self.dbWrap.get_worker_keys(workerId)
        self.binanceClient = Client(keys['apiKey'], keys['apiSecret'])

    def handle_message(self, data):
        print('My PID is {} and I received {}'.format(os.getpid(), data))
        send(os.getpid())

    def init_websocket_server(self):
        app = Flask(__name__)
        socketio = SocketIO(app, async_mode='eventlet', logger=True,
                            engineio_logger=True, cors_allowed_origins="*")
        eventlet.monkey_patch()
        socketio.on_event('message', self.handle_message)
        self.socketio = socketio
        self.app = app

    def launch_main_thread(self):
        while True:
            print('My PID is {} and workerId {}'.format(os.getpid(), self.workerId))
            if self.socketio is not None:
                info = self.binanceClient.get_account()
                self.socketio.emit('my_account', info, namespace='/')

    def launch_worker(self):
        self.init_websocket_server()
        self.socketio.start_background_task(self.launch_main_thread)
        self.socketio.run(self.app, host="127.0.0.1", port=8001, debug=True, use_reloader=False)
Once the REST endpoint is called, the worker is spawned by calling the birth_worker() method of the "Broker" object available within my server:
from multiprocessing import Process
from custom_worker import customWorker
# ...

def create_worker(self, workerid, sleepTime, dbWrap):
    worker = customWorker(workerid, sleepTime, dbWrap)
    worker.launch_worker()

def birth_worker(self, workerid, sleepTime, dbWrap):
    p = Process(target=self.create_worker, args=(workerid, 10, botPipe, dbWrap))
    p.start()
So when this is done, the worker is launched in a separate process that successfully creates the threads and listens for socket connections. But my problem is that I can't use my binanceClient in my main thread. I think it uses threads, and the fact that I use eventlet, and in particular the monkey_patch() function, breaks it. When I try to call the binanceClient.get_account() method I get the error AttributeError: module 'select' has no attribute 'poll'.
I'm pretty sure this comes from monkey_patch, because if I call get_account() in the __init__() method of my worker (before patching) it works and I can get the account info. So I guess there is a conflict here that I've been trying to resolve, unsuccessfully.
I've tried using only the thread mode for my socket.io app by setting async_mode='threading', but then my flask-socketio app won't start and listen for sockets, as the line self.socketio.run(self.app, host="127.0.0.1", port=8001, debug=True, use_reloader=False) blocks everything.
I'm pretty sure I have an architecture problem here and that I shouldn't start my app by calling socketio.run. I've been unable to start it with gunicorn, for example, because I need it to be dynamic and callable from my Python scripts. I've been struggling to find the proper way to do this, and that's why I'm here today.
Could someone please give me a hint on how this is supposed to be achieved? How can I dynamically spawn a subprocess that will manage a socket server thread, an infinite loop thread, and connections with binanceClient? I've been roaming Stack Overflow without success; every piece of advice is welcome, even an architecture reforge.
Here is my environment:
Manjaro Linux 21.0.1
pip-chill:
eventlet==0.30.2
flask-cors==3.0.10
flask-socketio==5.0.1
pillow==8.2.0
pymongo==3.11.3
python-binance==0.7.11
websockets==8.1

How to run an external Python script as a Celery task by passing the script name through a Flask server

I am using Celery with Flask for queuing and monitoring tasks. I have four or five scripts, and I want these scripts to run as Celery tasks by passing the script name through the Flask server and then monitoring their status.
Here is the code I have written so far:
@app.route('/script_path/<script_name>')  # flask server
def taking_script_name(script_name):
    calling_script.delay(script_name)
    return 'i have sent an async script request'

@celery.task
def calling_script(script_name):
    result = script_name
    return {'result': result}
I want the status of the script to be passed in the result returned by the Celery task.
If anybody has another suggestion for how to run an external script as a Celery task, that would help too.
Thanks in advance.
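One common way to handle this, not taken from the original post, is to run the named script with subprocess inside the task and return its exit status; a sketch that assumes the scripts live in a known directory:

# Hypothetical sketch: run a named script with subprocess and report its status.
import os
import subprocess
from celery import Celery

celery = Celery('scripts', broker='redis://localhost:6379/0', backend='redis://localhost:6379/0')

SCRIPT_DIR = '/opt/scripts'  # assumed location of the scripts

@celery.task
def calling_script(script_name):
    path = os.path.join(SCRIPT_DIR, script_name)
    completed = subprocess.run(['python', path], capture_output=True, text=True)
    # returncode 0 means the script exited cleanly
    return {
        'script': script_name,
        'returncode': completed.returncode,
        'stdout': completed.stdout[-1000:],  # keep the stored payload small
        'stderr': completed.stderr[-1000:],
    }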

Celery Remote task Blocking Request

I have a problem with calling a remote Celery task via a REST call.
In my case, the tasks run on one machine and the REST API runs on another machine.
from flask import Flask

app = Flask(__name__)
celery_obj = ...  # the Celery app instance

@app.route("/task1")
def func():
    celery_obj.send_task(name="tasks.task1", args=[])
When I start the application and send a request to the /task1 endpoint, the Flask app does not reply with anything.
What is the reason for this problem?
Please help.
celery_obj needs to be the Celery application that you are sending the task to, with at minimum the broker URL specified.
e.g.,
from celery.app import Celery
celery = Celery(broker='redis://127.0.0.1/1')
celery.send_task('task.name', kwargs={})
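For completeness, the remote machine then needs a worker with a task registered under that exact name and connected to the same broker; a minimal sketch with assumed names:

# Worker side, on the remote machine, started with the usual celery worker command.
from celery import Celery

celery = Celery(broker='redis://127.0.0.1/1')

@celery.task(name='tasks.task1')
def task1():
    # whatever the remote task actually does
    return 'done'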

How to use celery in flask app to send task from one server to the other

I have two Flask apps, one on server A and the other on server B. What I want to do is generate an asynchronous task from the app on server A on some condition and send it to the app on server B (i.e. invoke a function on server B). I think Celery's send_task method would be used for this, but I don't know how to use it.
let's say I have a function 'func' in my app on server B
def func(x):
    return x
I want to invoke 'func' in another function 'somefunc' in my app on server A, something like this:
def somefunc(x):
    if condition is True:
        func(x)
How would I use Celery to implement this logic? Please help, and thanks in advance.
On service A you would have this:
from celery.execute import send_task

@app.route('/')
def endpoint():
    if cond(x):
        send_task(
            'task_service_b',
            (param1, param2),
            exchange='if u have a specific one',
            routing_key='a routing key'
        )
On service B, you would need to have the app listening on 'a routing key' and bound to the exchange 'if u have a specific one':
from kombu import Exchange, Queue, binding

messaging_exchange = Exchange('if u have a specific one')

bindings = (
    binding(messaging_exchange, routing_key=routing_key)
    for routing_key in ['a routing key']
)

default_binding = binding(
    Exchange(celery_app.conf.task_default_queue),
    routing_key=celery_app.conf.task_default_queue
)

celery_app.conf.task_queues = [
    # default queue has same routing key as name of the queue
    Queue(celery_app.conf.task_default_queue, [default_binding]),
    Queue('service.b.queue', list(bindings))
]
Otherwise you can bypass all of this and just send_task to service B's queue.
You will need a Celery worker on service B, as the task will need to be consumed by a worker there.
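That simpler route might look like this; the queue name comes from the snippet above, while the task name and parameters are placeholders:

# Bypass the custom exchange/bindings and target service B's queue directly.
celery_app.send_task(
    'task_service_b',          # name the task is registered under on service B
    args=(param1, param2),
    queue='service.b.queue',   # a worker on service B must consume this queue
)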
I'm assuming from your phrasing that you're running different apps on servers A and B. If they were the same app, using the same Celery broker and backend, then named queues, with one queue served by a Celery worker that runs only on B, could give you the effect you want.
If A and B are running different code, a safe approach is to have the asynchronous task on A make an HTTP request to an endpoint on B, with that endpoint calling the function and sending the answer back in an HTTP response for the async task on A to deal with.
Elaborating:
A slow-running async task (say, in tasks.py)
@celery.task
def slow_running_task():
    ...
that's configured to run in a specific queue
CELERY_ROUTES = {
    'tasks.slow_running_task': {'queue': 'slow'},
    ...
}
can be run on a specific server by only running a Celery worker with -Q slow on that server.
There are nuances. It's worth skimming the Celery docs.
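For reference, serving only that queue means starting the worker on that server with the standard Celery CLI; the -A target is whatever module defines your Celery app:

celery -A tasks worker -Q slow --loglevel=info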

Execute Celery's link_error callback on a separate queue/worker

I have 2 apps on 2 separate servers; let's call them A and B. Both apps have a Celery worker active, listening to separate queues (QueueA and QueueB).
Server B pushes a task to QueueB, using apply_async.
Here are server B's tasks:
@app.task(bind=True, queue="QueueB", name="name_on_server_A")
def taskForServerB():
    # nothing is executed here

@app.task(bind=True)
def success(result):
    print('Task succeeded')

@app.task(bind=True)
def failure(...):
    print('task failed')

taskForServerB.s().apply_async(link=success.s(), link_error=failure.s())
On Server A, the task name_on_server_A receives the task and executes it. If it completes successfully, the task success is executed properly on Server B, but if name_on_server_A fails, the task failure is not executed. Instead, Server A throws a NotRegistered error for a task with the name failure.
Is there something I am missing? How can I get the failure task to be executed on Server B, where the first task is called from?
There are two issues here:
1. The routing of the task to the correct queue, which you defined for name_on_server_A with the queue assignment. (This is, by the way, something new for me; I use a router in the Celery config and route each task by its name to the right queue.)
2. When you define your Celery app, you might have forgotten to include the module that contains the failure task, so it stays unregistered:
app = Celery(broker='amqp://', backend='...', include=['file1.py', 'file2.py', ..])
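If the callbacks should always run on Server B, one option (a sketch, not part of the original answer) is to pin their signatures to Server B's queue when linking them:

# Route the success/failure callbacks to the queue consumed by Server B's worker,
# so they are executed there rather than on Server A.
taskForServerB.s().apply_async(
    link=success.s().set(queue="QueueB"),
    link_error=failure.s().set(queue="QueueB"),
)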
