I've launched a lot of tasks, but some of them (763 tasks) haven't finished. They are stuck in the PENDING state, and the system isn't processing anything...
Is it possible to retry these tasks by giving Celery the task_id?
You can't.
You can only retry a task from inside the task itself; you can't trigger a retry from outside.
The best thing to do in this case is to run the same task type again with the same args. You will do the same job, but under a new task id that identifies the new run.
Remember also that Celery's PENDING state doesn't only mean the task is waiting for execution; it can also mean the task id is simply unknown.
http://celeryq.org/docs/userguide/tasks.html#pending
I hope this helps.
This works now, after setting celery.conf.update(result_extended=True), which persists the arguments passed to the task:
def retry_task(task_id):
    # Requires result_extended=True so the args/kwargs were stored with the result.
    meta = celery.backend.get_task_meta(task_id)
    task = celery.tasks[meta['name']]
    # Specify any other parameters you might be passing (queue, countdown, ...).
    task.apply_async(args=meta['args'], kwargs=meta['kwargs'])
Related
I have a special use case where I need to run a task on all workers, to check whether a specific process is running on each Celery worker. The problem is that I need to run this on all of my workers, as each worker represents a replica of this specific process.
In the end I want to display something like "8/20 workers are ready to process further tasks".
But currently I'm only able to run a task on either a randomly selected worker or one specific worker, which does not solve my problem at all...
Thanks in advance
I can't think of a built-in way to do this in Celery. However, a nice workaround could be to implement your own remote control command, which you can then broadcast to every worker (just like the built-in shutdown or status commands, for example). Come to think of it, this does sound like some sort of monitoring/maintenance operation, right?
In Celery, if the worker node executing a task crashes midway through the task, does Celery reschedule the task execution?
Not by default. However, you can set the acks_late option on the task, or the task_acks_late setting globally, to get that behavior. See:
http://docs.celeryproject.org/en/latest/faq.html#should-i-use-retry-or-acks-late
The acks_late setting would be used when you need the task to be
executed again if the worker (for some reason) crashes mid-execution.
After reading the docs, what I understand is that you cannot rerun Celery tasks from outside the application context.
Initially, what I thought was that a terminated task would resume running once the worker had been restarted; however, it didn't. I am currently using
celery.control.terminate(task_id)
which terminates the task with that id. I then tried starting a worker with the same name, hoping my revoked task would resume and finish; it didn't. After doing a bit of research, I saw that a task can be rerun with the same arguments; I thought MAYBE it would resume if I reran the same task, but it didn't. How can I revoke a task and then be able to rerun it?
I'm using .apply_async() to initiate my task.
Use revoke instead of terminate, e.g.:
celery_app.control.revoke(task_id)
Note that a revoked task is discarded, not paused, so it never resumes; to rerun it, submit a fresh task with the same arguments.
You can refer to this answer as well:
Cancel an already executing task with Celery?
Does Celery pick up changes to task code, even for tasks that were already prefetched under the old code?
No, you must restart the workers so they reload the task code.
I've created a task where the program uploads a file, and the task acts as expected, i.e. it successfully uploads the file. But even after the task does its work, the task does not "terminate": the state still remains PENDING. How can I fix this?
From the Celery docs:
PENDING: Task is waiting for execution or unknown. Any task id that is
not known is implied to be in the pending state.
Are you sure the task finishes what it does cleanly? Best to post your code. Also check that a result backend is configured and that the task doesn't have ignore_result set; without a backend recording the result, every task id reports PENDING forever.