I have two separate celeryd processes running on my server, managed by supervisor. They are set to listen on separate queues, as follows:
[program:celeryd1]
command=/path/to/celeryd --pool=solo --queues=queue1
...
[program:celeryd2]
command=/path/to/celeryd --pool=solo --queues=queue2
...
And my celeryconfig looks something like this:
from celery.schedules import crontab
BROKER_URL = "amqp://guest:guest#localhost:5672//"
CELERY_DISABLE_RATE_LIMITS = True
CELERYD_CONCURRENCY = 1
CELERY_IGNORE_RESULT = True
CELERY_DEFAULT_QUEUE = 'default'
CELERY_QUEUES = {
'default': {
"exchange": "default",
"binding_key": "default",
},
'queue1': {
'exchange': 'queue1',
'routing_key': 'queue1',
},
'queue2': {
'exchange': 'queue2',
'routing_key': 'queue2',
},
}
CELERY_IMPORTS = ('tasks', )
CELERYBEAT_SCHEDULE = {
'first-queue': {
'task': 'tasks.sync',
'schedule': crontab(hour=2, minute=0),
'kwargs': {'client': 'client_1'},
'options': {'queue': 'queue1'},
},
'second-queue': {
'task': 'tasks.sync',
'schedule': crontab(hour=2, minute=0),
'kwargs': {'client': 'client_2'},
'options': {'queue': 'queue1'},
},
}
All tasks.sync tasks must be routed to a specific queue (and therefore celeryd process). But when I try to run the task manually with sync.apply_async(kwargs={'client': 'value'}, queue='queue1'), both celery workers pick up the task. How can I make the task route to the correct queue and only be run by the worker that is bound to that queue?
You are only running one celerybeat instance, right?
Maybe you have old queue bindings that clash with this?
Try running rabbitmqctl list_queues and rabbitmqctl list_bindings;
maybe reset the data in the broker to start from scratch.
The example you have here should work, and it worked for me when I just tried it.
Tip: Since you are using the same exchange and binding_key values as the queue names,
you don't have to explicitly list them in CELERY_QUEUES. When CELERY_CREATE_MISSING_QUEUES
is on (which it is by default), the queues will be created automatically, exactly as you have
them, if you just run celeryd -Q queue1 or send a task to a queue that isn't defined.
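As a rough sketch of that tip (using the queue and task names from the question, and assuming CELERY_CREATE_MISSING_QUEUES is left at its default), the config could shrink to something like:
# Sketch only: queue1/queue2 are not declared here; they are created on demand
# when a worker starts with -Q queue1 / -Q queue2 or a task is routed to them.
from celery.schedules import crontab

BROKER_URL = "amqp://guest:guest@localhost:5672//"
CELERY_DEFAULT_QUEUE = 'default'
CELERY_IMPORTS = ('tasks', )

CELERYBEAT_SCHEDULE = {
    'first-queue': {
        'task': 'tasks.sync',
        'schedule': crontab(hour=2, minute=0),
        'kwargs': {'client': 'client_1'},
        'options': {'queue': 'queue1'},
    },
}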
I have my celery configured as follows:
# create context tasks in celery
celery = Celery(
__name__,
# redis
backend=app.config['CELERY_RESULT_BACKEND'],
broker=app.config['CELERY_BROKER_URL'],
include=['app.celery_tasks.tasks']
)
celery.conf.timezone = 'US/Pacific'
celery.conf.broker_transport_options = {'visibility_timeout': 3600*24}
celery.conf.task_routes = {
'tasks.periodic': {
'queue': 'periodic',
'routing_key': 'tasks.periodic'
},
'tasks.generate_report': {
'queue': 'report',
'routing_key': 'tasks.generate_report'
}
}
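For these routes to apply, the task names have to match the keys in task_routes. A minimal sketch of what the task definitions could look like (the module path and explicit name= values here are assumptions for illustration, not taken from the original code):
# app/celery_tasks/tasks.py -- sketch; explicit name= keeps the task names in
# line with the task_routes keys regardless of the module path.
from app.celery_tasks import celery  # assumption: the Celery app defined above

@celery.task(name='tasks.periodic')
def periodic():
    ...

@celery.task(name='tasks.generate_report')
def generate_report(log_location, repetition_count):
    ...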
Then I'm using this helper method to get an ETA for all my report tasks:
from datetime import datetime, timedelta
from pytz import timezone  # assuming pytz for the named timezone

def get_eta_time(time_d=8):
    tz = timezone('US/Pacific')
    ct = datetime.now(tz=tz)
    eta = ct + timedelta(hours=time_d)
    return eta
The thing I'm encountering is that I can see the tasks are scheduled using celery's control commands, but they are not executed when the ETA arrives. However, when I restart my celery workers, these tasks get picked up immediately. Is there anything I missed in my celery config?
My tasks are triggered as follows:
eta = get_eta_time()
generate_report.apply_async(args=(log_location, repetition_count+1), queue='report', eta=eta)
My periodic queue works as expected, but my report queue is not making any sense to me.
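One way to check whether the ETA tasks are actually being held by a worker (rather than lost) is the inspect API; a small sketch, assuming the same celery app object as above:
# Sketch: list the ETA/countdown tasks currently reserved by each worker.
scheduled = celery.control.inspect().scheduled()
for worker, tasks in (scheduled or {}).items():
    print(worker, 'is holding', len(tasks), 'eta task(s)')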
It turned out to be a hardware issue. I switched to a different machine and it is working now.
As far as I know, the @periodic_task decorator has been deprecated since Celery 3.1.
So I am trying to run an example from the Celery docs, and can't figure out what I am doing wrong.
I have the following code in task_planner.py:
from celery import Celery
from kombu import Queue, Exchange
class Config(object):
CELERY_QUEUES = (
Queue(
'try',
exchange=Exchange('try'),
routing_key='try',
),
)
celery = Celery('tasks',
backend='redis://',
broker='redis://localhost:6379/0')
celery.config_from_object(Config)
celery.conf.beat_schedule = {
'planner': {
'task': 'some_task',
'schedule': 5.0,
},
}
@celery.task(queue='try')
def some_task():
print('Hooray')
And when I run: celery -A task_planner worker -l info -B, I receive only the following: [2016-11-27 19:06:56,119: INFO/Beat] Scheduler: Sending due task planner (some_task) every 5 sec.
But I am expecting the output 'Hooray'.
So, what am I missing?
I have found the solution.
I had the task:
@celery.task(queue='try')
def some_task():
print('Hooray')
I printed its name:
print(some_task)
Got the following:
<@task: task_planner.some_task of tasks:0x7fceaaf5b9e8>
So I just changed the name of the task from some_task to task_planner.some_task here:
celery.conf.beat_schedule = {
'planner': {
'task': 'task_planner.some_task',
'schedule': 5.0,
},
}
And it worked!
[2016-11-29 10:09:57,697: WARNING/PoolWorker-3] Hooray
Note: you should run beat together with the worker (if the task is in the same module as beat) and with log level 'info' in order to see the results:
celery -A task_planner worker -B -l info
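A related sketch to avoid this kind of name mismatch in the first place: if the beat_schedule is defined after the task, you can reference the task's generated name instead of hard-coding the string:
# Sketch: some_task.name resolves to 'task_planner.some_task', so the beat
# entry cannot drift from the decorated function's actual name.
celery.conf.beat_schedule = {
    'planner': {
        'task': some_task.name,
        'schedule': 5.0,
    },
}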
Is there a way of deleting a periodic task or removing the cache in Django Celery? Commenting out the code or deleting the corresponding code segment that schedules the task does not delete the actual task.
""" Commenting out, or deleting both entries from the code base doesn't do anything
CELERYBEAT_SCHEDULE = {
'add-every-30-seconds': {
'task': 'tasks.add',
'schedule': timedelta(seconds=2),
'args': (2, 2)
},
'add-every-30-seconds2': {
'task': 'tasks.add',
'schedule': timedelta(seconds=5),
'args': (2, 6)
},
}
"""
I tried celery -A my_proj purge but the periodic tasks still happen. I am using RabbitMQ as my broker:
BROKER_URL = "amqp://guest:guest@localhost:5672//"
CELERY_RESULT_BACKEND='djcelery.backends.database:DatabaseBackend'
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
From the Celery guide to periodic tasks and the Celery management guide:
inspect active: List active tasks
$ celery -A proj inspect active
inspect scheduled: List scheduled ETA tasks
$ celery -A proj inspect scheduled
control disable_events: Disable events
$ celery -A proj control disable_events
Alternatively, try the GUI management systems available in the management guide.
EDIT: Purge will only remove the messages, not the task itself.
Delete the task in the djcelery admin screen to remove it from the database.
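Since CELERYBEAT_SCHEDULER here is djcelery's DatabaseScheduler, the stored entries can also be removed programmatically; a sketch using the schedule names from the question:
# Sketch: delete the persisted schedule entries via djcelery's models
# (same effect as removing them in the admin screen).
from djcelery.models import PeriodicTask

PeriodicTask.objects.filter(
    name__in=['add-every-30-seconds', 'add-every-30-seconds2']
).delete()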
My Django app has two tasks, one of which is a periodic task.
normal task: AddScore
periodic task: CalculateTopScore
class CalculateTopScore(celery.Task):
default_retry_delay = settings.DEFAULT_RETRY_DELAY
max_retries = settings.DEFAULT_MAX_RETRIES
name = 'games.tasks.CalculateTopScore'
def run(self):
    try:
        pass  # Code to run
    except Exception as err:
        logger.exception("Error in running task calculate_top_score")
        self.retry(exc=err)
    return True
def on_failure(self, exc, task_id, args, kwargs, einfo):
    failure = "%s task for calculate_top_score failed permanently." % task_id
    logger.error(failure)
def on_success(self, retval, task_id, args, kwargs):
    task_info = 'calculate_top_score task completed successfully'
    logger.info(task_info)
I want to execute this task periodically every 30 minutes.
Here are the settings I'm using:
#celery/settings.py
import djcelery
import kombu
from celery.schedules import crontab
from config.celery import exchanges
djcelery.setup_loader()
CELERYBEAT_SCHEDULER = "djcelery.schedulers.DatabaseScheduler"
CELERYBEAT_SCHEDULE = {
"calcluate_score": {
"task": "games.tasks.CalculateTopScore",
"schedule": crontab(minute='*/30'),
"args": (),
},
}
CELERY_QUEUES = (
kombu.Queue('add_score',
exchange=exchanges.add_score_exchange,
routing_key='add.scores'),
kombu.Queue('calcluate_score',
exchange=exchanges.calcluate_score_exchange,
routing_key='calculate.scores'),
)
CELERY_ROUTES = ('config.celery.routers.CeleryTaskRouter',)
# Default delay(in seconds) for retrying tasks.
DEFAULT_RETRY_DELAY = 60
# Maximum retry count
DEFAULT_MAX_RETRIES = 6
CELERY_IGNORE_RESULT = True
exchanges.py file
#exchanges.py
from kombu import Exchange
add_score_exchange = Exchange('add_score', type='direct')
calcluate_score_exchange = Exchange('calcluate_score', type='direct')
routes.py file
ROUTES = {
'players.tasks.AddScore': {
'exchange': 'add_score',
'exchange_type': 'direct',
'routing_key': 'add.score',
},
'games.tasks.CalculateTopScore': {
'exchange': 'calculate_score',
'exchange_type': 'direct',
'routing_key': 'calculate.score',
},
}
class CeleryTaskRouter(object):
""" This is a basic celery task router.
"""
def route_for_task(self, task, args=None, kwargs=None):
return ROUTES.get(task)
On our production server I run celeryd with the following arguments: celeryd worker -B
Now in the logs I observe that celerybeat schedules the task every 30 minutes, but the worker doesn't know anything about the scheduled task, and hence it is not executed.
Why? Is there any config/setting missing? How do I execute a class-based task periodically?
Please help.
You probably have to specify the queues you want the worker to consume tasks from; the default is to consume only from the celery queue.
Try:
celeryd worker -B -Q add_score,calculate_score
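For reference, a sketch of the same routing expressed as a plain mapping instead of a router class; whichever form is used, the queue names must match the ones the worker consumes with -Q (the names below are taken verbatim from CELERY_QUEUES above):
# Sketch only: old-style CELERY_ROUTES also accepts a dict keyed by task name,
# which avoids keeping a separate router class and exchange table in sync.
CELERY_ROUTES = {
    'players.tasks.AddScore': {'queue': 'add_score'},
    'games.tasks.CalculateTopScore': {'queue': 'calcluate_score'},
}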
I use celery with redis as a broker.
I built an add task and am running two workers listening to different queues for testing.
celeryd -I tasks -l info -Q tasks
celeryd -I tasks -l info -Q count
Here is the tasks.py
from celery.task import task
@task(exchange="tasks")
def add(x, y):
result = x + y
return "I am queue 2.", result
However, whether I assign the queue or not, both workers run the task.
Please let me know if there is something I have misunderstood. Thanks a lot.
The following is the celeryconfig.py
BROKER_URL = "redis://localhost:6379/0"
# Redis Backend
CELERY_RESULT_BACKEND = "redis"
CELERY_REDIS_HOST = "localhost"
CELERY_REDIS_PORT = 6379
CELERY_REDIS_DB = 0
CELERY_SEND_EVENTS = True
CELERY_RESULT_BACKEND = "amqp"
CELERY_RESULT_ENGINE_OPTIONS = {"echo": True}
# CELERY_DEFAULT_QUEUE = "default"
CELERY_DEFAULT_EXCHANGE = "default"
CELERY_QUEUES = {
"default": {
"exchange": "default"
},
"tasks": {
"exchange": "tasks"
},
"count": {
"exchange": "tasks"
}
}
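For what it's worth, in the config above both the tasks and count queues are bound to the same direct exchange ("tasks") without distinct routing keys, which would explain both workers receiving the message. A sketch of one way to keep them separate (the queue layout mirrors the original; the per-queue routing keys are additions):
# Sketch: give each queue its own exchange/routing key so a direct exchange
# can tell them apart, then route explicitly when sending.
CELERY_QUEUES = {
    "default": {"exchange": "default", "routing_key": "default"},
    "tasks":   {"exchange": "tasks",   "routing_key": "tasks"},
    "count":   {"exchange": "count",   "routing_key": "count"},
}

# e.g. add.apply_async(args=(2, 3), queue="count", routing_key="count")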