Celery + Redis losing connection - python

I have a very simple Celery task that runs a (long running) shell script:
import os
from celery import Celery

os.environ['CELERY_TIMEZONE'] = 'Europe/Rome'
os.environ['TIMEZONE'] = 'Europe/Rome'

app = Celery('tasks', backend='redis', broker='redis://OTHER_SERVER:6379/0')

@app.task(name='ct.execute_script')
def execute_script(command):
    return os.system(command)
This task runs on server MY_SERVER, and I launch it from OTHER_SERVER, which also runs the Redis database.
The task seems to run successfully (I can see the result of the script on the filesystem), but then I always start getting the following error:
INTERNAL ERROR: ConnectionError('Error 111 connecting to localhost:6379. Connection refused.',)
What could it be? Why is it trying to contact localhost when I've set the Redis server to redis://OTHER_SERVER:6379/0, and that setting clearly works, since the task is launched? Thanks

When you set the backend argument, Celery uses it as the result backend.
In your code, you tell Celery to use the local Redis server as the result backend.
You see the ConnectionError because Celery can't save the result to the local Redis server.
You can disable the result backend, start a local Redis server, or set it to OTHER_SERVER (see the sketch after the references below).
ref:
http://celery.readthedocs.org/en/latest/getting-started/first-steps-with-celery.html#keeping-results
http://celery.readthedocs.org/en/latest/configuration.html#celery-result-backend
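For example, a minimal sketch of the last option, pointing the result backend at OTHER_SERVER as well (the hostname comes from the question; the database number 1 is just an illustrative choice):

import os
from celery import Celery

os.environ['CELERY_TIMEZONE'] = 'Europe/Rome'
os.environ['TIMEZONE'] = 'Europe/Rome'

# Both the broker and the result backend point at OTHER_SERVER,
# so results are no longer written to localhost.
app = Celery(
    'tasks',
    broker='redis://OTHER_SERVER:6379/0',
    backend='redis://OTHER_SERVER:6379/1',
)

@app.task(name='ct.execute_script')
def execute_script(command):
    return os.system(command)

Alternatively, drop the backend argument entirely (or set task_ignore_result = True) if you don't need the return value of os.system.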

Related

Getting Redis password error with rpush command Python

I'm running the voting app used in many Kubernetes trainings, and all is well except that I get an error on the Redis rpush command in Python.
https://github.com/dockersamples/example-voting-app/blob/master/vote/app.py
I do get a good redis connection before I try to push:
Redis<ConnectionPool<Connection<host=new-redis,port=6379,db=0>>>
But then I get "authentication error - invalid password" on the rpush command.
The Redis server you're using requires authentication; you need to pass the password when creating the Redis object:
redis = Redis(host=..., db=0, password='secretkey')
or in the Redis URL if you use Redis.from_url:
redis = Redis.from_url('redis://:secretkey@.../0')
Getting a connection object without authenticating is a quirk of the Redis protocol: the password is sent as just another command (AUTH secretkey), so the failure only surfaces when you actually issue a command such as RPUSH.
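A minimal sketch for the voting-app case, assuming the hostname new-redis from the question and a password supplied via an environment variable (the REDIS_PASSWORD variable name and the key/value are illustrative):

import os
from redis import Redis

# Hostname taken from the question; the password variable name is an assumption.
redis = Redis(host='new-redis', port=6379, db=0,
              password=os.environ.get('REDIS_PASSWORD'))

# The authentication error only appears once a command is actually sent.
redis.rpush('votes', '{"vote": "a"}')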

how to run an external python script as celery task by taking script name using flask server

I am using Celery with Flask for queuing and monitoring tasks. I have four to five scripts, and I want to run them as Celery tasks by passing the script name through the Flask server and then monitoring their status.
Here is the code I have written so far:
@app.route('/script_path/<script_name>')  # flask server
def taking_script_name(script_name):
    calling_script.delay(script_name)
    return 'i have sent an async script request'

@celery.task
def calling_script(script_name):
    result = script_name
    return {'result': result}
I want the status of the script that was passed to be reflected in the result returned by the Celery task.
If anybody has another suggestion for how to run an external script as a Celery task, I'd appreciate it.
Thanks in advance.
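One possible approach, sketched here as a suggestion rather than a definitive implementation and reusing the app and celery objects from the question: run the external script with subprocess inside the task, return its exit status and output as the task result, and let Flask poll the task state via AsyncResult. The SCRIPTS_DIR path and the status endpoint are assumptions.

import os
import subprocess
from celery.result import AsyncResult

SCRIPTS_DIR = '/path/to/scripts'  # assumption: directory holding the scripts

@celery.task
def calling_script(script_name):
    # Run the external script and capture its exit status and output.
    proc = subprocess.run(
        ['python', os.path.join(SCRIPTS_DIR, script_name)],
        capture_output=True, text=True,
    )
    return {'script': script_name,
            'returncode': proc.returncode,
            'stdout': proc.stdout[-1000:]}

@app.route('/script_status/<task_id>')  # hypothetical status endpoint
def script_status(task_id):
    # Poll the task state (PENDING / STARTED / SUCCESS / FAILURE) and result.
    res = AsyncResult(task_id, app=celery)
    return {'state': res.state,
            'result': res.result if res.ready() else None}

For the status endpoint to be useful, the launching route would need to return the id of the AsyncResult that delay() returns, so the client has something to poll.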

How to fix receiving unregistered task error - Celery

I am trying to establish a periodic task using Celery (4.2.0) and RabbitMQ (3.7.14), running with Python 3.7.2 on an Azure VM with Ubuntu 16.04. I am able to start the beat and the worker and see the message get kicked off from beat to the worker, but at that point I'm met with the following error:
[2019-03-29 21:35:00,081: ERROR/MainProcess] Received
unregistered task of type 'facebook-call.facebook_api'.
The message has been ignored and discarded.
Did you remember to import the module containing this task?
Or maybe you're using relative imports?
My code is as follows:
from celery import Celery
from celery.schedules import crontab

app = Celery('facebook-call', broker='amqp://localhost//')

@app.task
def facebook_api():
    {function here}

app.conf.beat_schedule = {
    'run-facebook-api': {
        'task': 'facebook-call.facebook_api',
        'schedule': crontab(hour=0, minute=0, day_of_week='0-6'),
    },
}
I am starting the beat and worker processes by using the name of the Python file that contains all of the code:
celery -A FacebookAPICall beat --loglevel=info
celery -A FacebookAPICall worker --loglevel=info
Again, the beat process starts and I can see the message being successfully passed to the worker but cannot figure out how to "register" the task so that it is processed by the worker.
I was able to resolve the issue by renaming the app from facebook-call to coincide with the name of the file, FacebookAPICall.
Before:
app = Celery('facebook-call', broker='amqp://localhost//')
After:
app = Celery('FacebookAPICall', broker='amqp://localhost//')
From reading the Celery documentation, I don't totally understand why the name of the app must also be the name of the .py file, but that seems to do the trick.
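For what it's worth, another way to avoid the mismatch (a suggestion, not part of the original answer) is to give the task an explicit name, so the name the worker registers matches the one beat sends regardless of the module or app name:

# Naming the task explicitly means the beat schedule entry
# 'facebook-call.facebook_api' matches no matter what the file is called.
@app.task(name='facebook-call.facebook_api')
def facebook_api():
    {function here}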

Celery Remote task Blocking Request

I have a problem with calling a remote task via a REST call from my web app.
In my case, the tasks run on one machine and the REST API runs on another machine.
from flask import Flask

celery_obj = ...  # the Celery object

@app.route("/task1")
def func():
    celery_obj.send_task(name="tasks.task1", args=[])
When I start the application and send a request to the /task1 endpoint, the Flask app never sends a reply.
What is the reason for this problem?
Please help.
celery_obj needs to be a Celery application configured with, at a minimum, the URL of the broker that the remote worker is listening on.
e.g.,
from celery.app import Celery
celery = Celery(broker='redis://127.0.0.1/1')
celery.send_task('task.name', kwargs={})
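Putting that together with the route from the question, a minimal sketch (the broker URL is the one from the answer and is an assumption about your setup; the route also returns a response, so Flask has something to reply with):

from flask import Flask
from celery import Celery

app = Flask(__name__)
# Assumption: the same broker URL that the remote worker is configured with.
celery_obj = Celery(broker='redis://127.0.0.1/1')

@app.route("/task1")
def func():
    # send_task only enqueues the task; it does not wait for the result.
    result = celery_obj.send_task(name="tasks.task1", args=[])
    return f"queued {result.id}"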

Celery group.apply_async().join() never returns

Consider the following script
tasks.py:
from celery import Celery
from celery import group

app = Celery()
app.conf.update(
    broker_url='pyamqp://guest@localhost//',
    result_backend='redis://localhost',
)

@app.task
def my_calc(data):
    for i in range(100):
        data[0] = data[0] / 1.04856
        data[1] = data[1] / 1.02496
    return data

def compute(parallel_tasks):
    tasks = []
    for i in range(parallel_tasks):
        tasks.append([i + 1.3, i + 2.65])
    job = group([my_calc.s(task) for task in tasks])
    results = job.apply_async().join(timeout=120)
    #for result in results:
    #    print(result.get(timeout=20))

def start(parallel_tasks, iterations):
    for i in range(iterations):
        print(i)
        compute(parallel_tasks)
The script executes a given number of tasks (parallel_tasks) in a given number of iterations (iterations) using Celery's group function.
The problem is that the more tasks I submit in a single iteration (the greater the parallel_tasks input parameter), the more likely the batch is to time out, for reasons I can't determine. The workers don't get overloaded; when the timeout happens, the workers are already idle.
Calling start(2,100000) works just fine.
Calling start(20,40) stops around the 10th iteration.
The issue is independent of the broker and backend types. My primary configuration uses RabbitMQ as the broker and Redis as the backend, but I've also tried the reverse, as well as RabbitMQ-only and Redis-only configurations.
I start the worker in just the standard way: celery -A tasks worker -l info
Environment:
Miniconda - Python 3.6.6 (see requirements.txt for details below)
Debian 9 running in Virtualbox. VM Config: 4 cores and 8GB RAM
Redis 4.0.11
RabbitMQ 3.6.6 on Erlang 19.2.1
Output of celery -A tasks report:
software -> celery:4.2.1 (windowlicker) kombu:4.2.1 py:3.6.6
billiard:3.5.0.4 py-amqp:2.3.2
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:pyamqp results:redis://localhost/
RabbitMQ log contains the following errors:
=ERROR REPORT==== 7-Sep-2018::17:31:42 ===
closing AMQP connection <0.1688.0> (127.0.0.1:52602 -> 127.0.0.1:5672):
missed heartbeats from client, timeout: 60s
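The missed-heartbeat errors in the RabbitMQ log suggest that the client connection blocked in join() stops answering heartbeats; whether that is the actual root cause here is an assumption. One thing worth trying is raising or disabling the broker heartbeat in the Celery config, for example:

# Sketch: disable AMQP heartbeats (or use a larger interval) so a long,
# blocking join() is not torn down by the broker for missed heartbeats.
app.conf.update(
    broker_url='pyamqp://guest@localhost//',
    result_backend='redis://localhost',
    broker_heartbeat=0,  # 0 disables heartbeats; alternatively set a higher value
)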
