I have a problem with calling a remote task from my Flask app via a REST call.
In my case the Celery tasks run on one machine, and the REST API runs on another machine.
from flask import Flask

app = Flask(__name__)
celery_obj = ...  # the Celery instance (configuration omitted here)

@app.route("/task1")
def func():
    celery_obj.send_task(name="tasks.task1", args=[])
When I start the application and send a request to the /task1 endpoint, the Flask app does not reply with anything.
What is the reason for this problem?
Please help.
celery_obj needs to be the Celery application that you are sending the task to, with at minimum the broker URL specified.
e.g.,
from celery.app import Celery
celery = Celery(broker='redis://127.0.0.1/1')
celery.send_task('task.name', kwargs={})
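The machine that actually executes the task then needs a worker started against that same broker, with a task registered under the exact name you pass to send_task. A minimal sketch of that side (the broker URL and the task body here are placeholders, not taken from the question):

# worker.py on the machine that executes the tasks
from celery import Celery

app = Celery(broker='redis://127.0.0.1/1')  # must point at the same broker as the sender

@app.task(name='tasks.task1')
def task1():
    # placeholder body; do the real work here
    return 'done'

# start it with: celery -A worker worker --loglevel=info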
Hello fellow developers,
I'm trying to create a small webapp that would allow me to monitor multiple Binance accounts from a dashboard and maybe, in the future, perform some small automatic trading actions.
My frontend is implemented with Vue + Quasar and my backend server is based on Python Flask for the REST API.
What I would like to do is be able to start a background process dynamically when a specific endpoint of my server is called. Once this process is started on the server, I would like it to communicate via WebSocket with my Vue client.
Right now I can spawn the worker and create the WebSocket communication, but somehow I can't figure out how to make all the threads in my worker work together. Let me get a bit more specific:
Once my worker is started, I try to create at least two threads. One is the infinite loop allowing me to automate some small actions, and the other one is the flask-socketio server that handles the socket connections. Here is the code of that worker:
customWorker.py
import os
import time
import json
import threading
import eventlet
from flask import Flask
from flask_socketio import SocketIO, send, emit

# custom class allowing me to communicate with my MongoDB
from db_wrap import DbWrap

from binance.client import Client
from binance.exceptions import BinanceAPIException, BinanceWithdrawException, BinanceRequestException
from binance.websockets import BinanceSocketManager


def process_message(msg):
    print('got a websocket message')
    print(msg)


class customWorker:

    def __init__(self, workerId, sleepTime, dbWrap):
        self.workerId = workerId
        self.sleepTime = sleepTime
        self.socketio = None
        self.dbWrap = DbWrap()
        # this retrieves the worker configuration from the database
        self.config = json.loads(self.dbWrap.get_worker(workerId))
        keys = self.dbWrap.get_worker_keys(workerId)
        self.binanceClient = Client(keys['apiKey'], keys['apiSecret'])

    def handle_message(self, data):
        print('My PID is {} and I received {}'.format(os.getpid(), data))
        send(os.getpid())

    def init_websocket_server(self):
        app = Flask(__name__)
        socketio = SocketIO(app, async_mode='eventlet', logger=True, engineio_logger=True, cors_allowed_origins="*")
        eventlet.monkey_patch()

        socketio.on_event('message', self.handle_message)

        self.socketio = socketio
        self.app = app

    def launch_main_thread(self):
        while True:
            print('My PID is {} and workerId {}'.format(os.getpid(), self.workerId))
            if self.socketio is not None:
                info = self.binanceClient.get_account()
                self.socketio.emit('my_account', info, namespace='/')

    def launch_worker(self):
        self.init_websocket_server()
        self.socketio.start_background_task(self.launch_main_thread)
        self.socketio.run(self.app, host="127.0.0.1", port=8001, debug=True, use_reloader=False)
Once the REST endpoint is called, the worker is spawned by calling the birth_worker() method of a "Broker" object available within my server:
from multiprocessing import Process
from custom_worker import customWorker
#...

def create_worker(self, workerid, sleepTime, dbWrap):
    worker = customWorker(workerid, sleepTime, dbWrap)
    worker.launch_worker()

def birth_worker(self, workerid, sleepTime, dbWrap):
    p = Process(target=self.create_worker, args=(workerid, sleepTime, dbWrap))
    p.start()
So when this is done, the worker is launched in a separate process that successfully creates the threads and listens for socket connections. But my problem is that I can't use my binanceClient in my main thread. I think that it uses threads internally, and the fact that I use eventlet, and in particular the monkey_patch() function, breaks it. When I try to call the binanceClient.get_account() method I get the error AttributeError: module 'select' has no attribute 'poll'.
I'm pretty sure that it comes from monkey_patch, because if I call it in the __init__() method of my worker (before patching) it works and I can get the account info. So I guess there is a conflict here that I've been trying to resolve, unsuccessfully.
I've tried using only the thread mode for my socket.io app by setting async_mode='threading', but then my flask-socketio app won't start and listen for sockets, as the line self.socketio.run(self.app, host="127.0.0.1", port=8001, debug=True, use_reloader=False) blocks everything.
I'm pretty sure I have an architecture problem here and that I shouldn't start my app by calling socketio.run. I've been unable to start it with gunicorn, for example, because I need it to be dynamic and callable from my Python scripts. I've been struggling to find the proper way to do this, and that's why I'm here today.
Could someone please give me a hint on how this is supposed to be achieved? How can I dynamically spawn a subprocess that will manage a socket server thread, an infinite loop thread, and the connections with binanceClient? I've been roaming Stack Overflow without success; every piece of advice is welcome, even an architecture overhaul.
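A side note on the monkey_patch point: eventlet's documentation recommends calling monkey_patch() as early as possible, before other modules are imported, while the code above calls it inside init_websocket_server(). A rough sketch of that recommended ordering, reusing the imports from above:

# customWorker.py: patch before anything else is imported, per the eventlet docs
import eventlet
eventlet.monkey_patch()

import os
import time
import json
from flask import Flask
from flask_socketio import SocketIO, send, emit
from db_wrap import DbWrap
from binance.client import Client
# ... the rest of the imports and the customWorker class stay unchanged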
Here is my environment:
Manjaro Linux 21.0.1
pip-chill:
eventlet==0.30.2
flask-cors==3.0.10
flask-socketio==5.0.1
pillow==8.2.0
pymongo==3.11.3
python-binance==0.7.11
websockets==8.1
I am running a Celery client (Flask) and a worker on two different machines. Once the worker has completed the task, I need to call back a function on the client side. Is this possible?
Celery client:
celery_app = Celery('test_multihost', broker='amqp://test:test@<worker_ip>/test_host', backend='rpc')

result = testMethod1.apply_async((param1, param2, param3), link=testMethod2.s())

@celery_app.task
def testMethod2():
    # testMethod2 body.
    pass
Celery worker:
celery_app = Celery('test_multihost', broker='amqp://test:test@<worker_ip>/test_host', backend='rpc')

@celery_app.task
def testMethod1():
    # testMethod1 body
    pass
But the problem is that the function testMethod2 gets executed on the Celery worker side, not on the client side.
Is there any way that I can call back the method on the client side?
One way to do this is to have Celery write its result in a database table, and use Flask to poll for the result of the task by repeatedly querying the database. A similar construct might be to keep a register of completed tasks in Redis, but the gist would be the same.
Do you want to trigger a completion message to the user? If you can notify by email/text message, you could just let Celery handle that of course.
If you need to kickstart some Flask process - and it really needs to be inside Flask's ecosystem for some reason - use the worker with the requests module to call an endpoint that Flask is listening on.
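As a sketch of the Redis-register variant mentioned above (the key names, the do_the_work helper, and the Redis host are made up for illustration; the broker URL reuses the placeholder from the question):

# worker side: mark completion in a shared Redis register when the task finishes
import redis
from celery import Celery

celery_app = Celery('test_multihost', broker='amqp://test:test@<worker_ip>/test_host')
r = redis.Redis(host='<redis_host>')

@celery_app.task
def testMethod1(param1, param2, param3):
    result = do_the_work(param1, param2, param3)  # placeholder for the real task body
    r.set('task_done:{}'.format(testMethod1.request.id), str(result))
    return result

# Flask side: poll the register instead of waiting for a callback
@app.route('/status/<task_id>')
def status(task_id):
    value = r.get('task_done:{}'.format(task_id))
    return {'done': value is not None}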
I solved this problem using the after_task_publish signal from Celery signals.
The code snippet is as follows:
@after_task_publish.connect(sender=<registered_celery_task>)
def testMethod2(sender=None, headers=None, body=None, **kwargs):
    # callback body
testMethod2 will be called on the client side once the task has been published to the remote worker.
Here I can access the details of the dispatched task using the headers parameter.
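One detail worth noting, per the Celery signals documentation: after_task_publish is dispatched in the process that sent the task, as soon as the message has been handed to the broker, which is why the callback runs on the client. A small sketch of wiring it up (the sender string 'tasks.testMethod1' is a hypothetical registered task name):

from celery.signals import after_task_publish

@after_task_publish.connect(sender='tasks.testMethod1')  # sender is the registered task name
def testMethod2(sender=None, headers=None, body=None, **kwargs):
    # headers carries the message metadata, e.g. the task id
    print('published task {} with id {}'.format(sender, headers.get('id')))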
I am using Celery with Flask for queuing and monitoring tasks. I have four or five scripts, and I want to run these scripts as Celery tasks by passing the script name through the Flask server and then monitoring their status.
Here is the code I have written so far:
@app.route('/script_path/<script_name>')  # flask server
def taking_script_name(script_name):
    calling_script.delay(script_name)
    return 'i have sent an async script request'

@celery.task
def calling_script(script_name):
    result = script_name
    return {'result': result}
I want the status of the script that was passed in to be reflected in the result returned by the Celery task.
If anybody has another suggestion for how to run an external script as a Celery task, I'd welcome it.
Thanks in advance.
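A rough sketch of what such a task might look like, running the script with subprocess and returning its exit status (the interpreter invocation, the endpoint name, and the way the task id is obtained are assumptions, reusing the celery and app objects from the snippet above):

import subprocess
from celery.result import AsyncResult

@celery.task
def calling_script(script_name):
    # run the external script and capture its exit status and output
    completed = subprocess.run(['python', script_name], capture_output=True, text=True)
    return {'script': script_name,
            'returncode': completed.returncode,
            'stdout': completed.stdout[-1000:]}  # keep only the tail of the output

# the task id comes from calling_script.delay(script_name).id
@app.route('/script_status/<task_id>')
def script_status(task_id):
    res = AsyncResult(task_id, app=celery)
    return {'state': res.state,
            'result': res.result if res.successful() else None}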
I have two Flask apps, one on server A and the other on server B. What I want to do is generate an asynchronous task from the app on server A, on some condition, and send it to the app on server B (i.e. invoke a function on server B). I think Celery's send_task method would be used for this, but I don't know how to use it.
Let's say I have a function 'func' in my app on server B:
def func(x):
    return x
I want to invoke 'func' from another function 'somefunc' in my app on server A, something like this:
def somefunc(x):
    if condition is True:
        func(x)
How would I use Celery to implement this logic? Please help, and thanks in advance.
On service A you would have this:
from celery.execute import send_task

@app.route('/')
def endpoint():
    if cond(x):
        send_task(
            'task_service_b',
            (param1, param2),
            exchange='if u have a specific one',
            routing_key='a routing key'
        )
On service B, you would need to have the app listening on 'a routing key' and bound to the exchange 'if u have a specific one':
from kombu import Exchange, Queue, binding

messaging_exchange = Exchange('if u have a specific one')

bindings = (
    binding(messaging_exchange, routing_key=routing_key)
    for routing_key in ['a routing key']
)

default_binding = binding(
    Exchange(celery_app.conf.task_default_queue),
    routing_key=celery_app.conf.task_default_queue
)

celery_app.conf.task_queues = [
    # the default queue has the same routing key as the name of the queue
    Queue(celery_app.conf.task_default_queue, [default_binding]),
    Queue('service.b.queue', list(bindings))
]
Otherwise you can bypass all of this and just send_task to the service B queue.
You will need a Celery worker on service B, as the task will need to be consumed by a worker there.
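On the consuming side, a sketch of what service B might look like: the task registered under the name used in send_task, and a worker started against the queue defined above (the task body and the module name in the CLI command are placeholders):

# service B: the task that send_task('task_service_b', ...) resolves to
@celery_app.task(name='task_service_b')
def task_service_b(param1, param2):
    # the work service B should do goes here
    return param1

# start a worker on service B that consumes the bound queue:
#   celery -A <your_celery_module> worker -Q service.b.queue -l info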
I'm assuming from your phrasing that you're running different apps on servers A and B. If they were the same app, using the same Celery broker and backend, then named queues, with one queue being served by a Celery worker that runs only on B, could give you the effect you want.
If A and B are running different code, a safe approach is to have the asynchronous task on A make an HTTP request to an endpoint on B, with that endpoint calling the function and sending the answer back in an HTTP response for the async task on A to deal with (a sketch of this is shown at the end of this answer).
Elaborating:
A slow-running async task (say, in tasks.py)
@celery.task
def slow_running_task():
    ...
that's configured to run in a specific queue
CELERY_ROUTES = {
    'tasks.slow_running_task': {'queue': 'slow'},
    ...
}
can be run on a specific server by only running a celery worker with -Q slow on that server.
There are nuances. It's worth skimming the celery docs.
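For the HTTP approach mentioned earlier, a minimal sketch of the task on A (the endpoint URL on B and the task name are hypothetical):

import requests
from celery import shared_task

@shared_task
def call_service_b(x):
    # ask B to run func(x); B hands the answer back in the HTTP response
    resp = requests.post('http://server-b.example.com/api/func', json={'x': x})
    resp.raise_for_status()
    return resp.json()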
I have a very simple Celery task that runs a (long-running) shell script:
import os
from celery import Celery

os.environ['CELERY_TIMEZONE'] = 'Europe/Rome'
os.environ['TIMEZONE'] = 'Europe/Rome'

app = Celery('tasks', backend='redis', broker='redis://OTHER_SERVER:6379/0')

@app.task(name='ct.execute_script')
def execute_script(command):
    return os.system(command)
I have this task running on server MY_SERVER, and I launch it from OTHER_SERVER, where the Redis database is also running.
The task seems to run successfully (I see the result of executing the script on the filesystem), but then I always start getting the following error:
INTERNAL ERROR: ConnectionError('Error 111 connecting to localhost:6379. Connection refused.',)
What could it be? Why is it trying to contact localhost when I've set the Redis server to redis://OTHER_SERVER:6379/0 and it works (since the task is launched)? Thanks
When you set the backend argument, Celery will use it as the result backend.
In your code, you tell Celery to use the local Redis server as the result backend.
You are seeing the ConnectionError because Celery can't save the result to the local Redis server.
You can disable the result backend, start a local Redis server, or point the backend to OTHER_SERVER.
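In other words, the minimal change to the snippet above would be to make the backend explicit instead of the bare 'redis' default (which resolves to localhost):

app = Celery(
    'tasks',
    backend='redis://OTHER_SERVER:6379/0',  # result backend now points at the same Redis
    broker='redis://OTHER_SERVER:6379/0',
)

# alternatively, drop the backend argument and ignore results entirely:
#   app.conf.task_ignore_result = True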
ref:
http://celery.readthedocs.org/en/latest/getting-started/first-steps-with-celery.html#keeping-results
http://celery.readthedocs.org/en/latest/configuration.html#celery-result-backend