Pymongo Celery ConfigurationError: Unknown option auto_start_request - python

I am using a decorator inside my tasks which manages them, and I am using MongoDB as the Celery result backend.
@app.task(bind=True)
@my_customize_decorator
def some_task(self):
    # Do something
    return
Both my decorator and my task open a MongoDB connection. When I send some_task.delay() to the worker it gives me ConfigurationError: Unknown option auto_start_request.
I think Celery passes the auto_start_request option to PyMongo and PyMongo can't resolve it, but I don't know how to override that configuration.

This is caused by the Celery backend options, not by the task or the decorator. The Celery MongoDB backend sets these default options:
self.options.setdefault('max_pool_size', self.max_pool_size)
self.options.setdefault('auto_start_request', False)
These lines cause the ConfigurationError. After I removed them from
path/to/dist-pack/celery/backends/mongodb.py the issue was solved.
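Editing the installed package works, but the change is lost on every Celery upgrade. A less invasive sketch, assuming the affected Celery version exposes celery.backends.mongodb.MongoBackend with an options dict (verify against your installed version), is to subclass the backend and strip the options newer PyMongo rejects:
from celery.backends.mongodb import MongoBackend

class PatchedMongoBackend(MongoBackend):
    """MongoDB result backend that drops options newer PyMongo versions reject."""

    def __init__(self, *args, **kwargs):
        super(PatchedMongoBackend, self).__init__(*args, **kwargs)
        # Remove the defaults the stock backend injects but PyMongo no longer accepts.
        for opt in ('auto_start_request', 'max_pool_size'):
            self.options.pop(opt, None)
The worker can then be pointed at this class, for example with CELERY_RESULT_BACKEND = 'myproject.backends.PatchedMongoBackend' (myproject.backends being a hypothetical module path); Celery generally accepts a dotted path to a backend class, but check that against your version.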

Related

Is there a way to set the acks_late config in Celery?

For my Django project, I am using Celery with Redis, registering the tasks at runtime using the celery_app.tasks.register method. I want to retrigger a task in case of a failure, so I have set the acks_late config param using task_acks_late=True at the app level while instantiating Celery itself. I have also set task_reject_on_worker_lost=True. However, the tasks aren't being received back by Celery no matter what. Is there any other way?
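For reference, a minimal sketch of the two places acks_late can be set; the broker URL and task body are placeholders, and whether an unacknowledged task is actually redelivered also depends on the broker and its visibility-timeout settings:
from celery import Celery

app = Celery('proj', broker='redis://localhost:6379/0')
app.conf.task_acks_late = True               # acknowledge only after the task has run
app.conf.task_reject_on_worker_lost = True   # requeue if the worker process dies mid-task

@app.task(acks_late=True)                    # the same option can also be set per task
def unreliable_work():
    ...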

Django celery redis remove a specific periodic task from queue

There is a specific periodic task that needs to be removed from the message queue. I am using Redis and Celery here.
tasks.py
@periodic_task(run_every=crontab(minute='*/6'))
def task_abcd():
    """
    some operations here
    """
There are other periodic tasks in the project as well, but I need this specific task to stop running from now on.
As explained in this answer, will the following code work?
@periodic_task(run_every=crontab(minute='*/6'))
def task_abcd():
    pass
In this example the periodic task schedule is defined directly in code, meaning it is hard-coded and cannot be altered dynamically without a code change and app re-deploy.
The provided code, with the task logic deleted or with a simple return at the beginning, will work, but it will not be an answer to the question: the task will still run, there just is no code that runs inside it.
Also, it is recommended NOT to use @periodic_task at all:
"""Deprecated decorator, please use :setting:beat_schedule."""
First, change the method from being a @periodic_task to a regular Celery @task, and because you are using Django it is better to go straight for @shared_task:
from celery import shared_task

@shared_task
def task_abcd():
    ...
Now this is just an ordinary Celery task, which needs to be called explicitly, or it can be run periodically if added to the Celery beat schedule.
For production, and if you are using multiple workers, it is not recommended to run the celery worker with embedded beat (-B); run a separate instance of the celery beat scheduler.
The schedule can be specified in celery.py or in the Django project settings (settings.py), as sketched below.
This is still not very dynamic, as the app needs to be reloaded to re-read the settings.
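A minimal sketch of such a static beat schedule, assuming the task lives at myapp.tasks.task_abcd (an illustrative module path) and app is your Celery instance:
from celery.schedules import crontab

app.conf.beat_schedule = {
    'run-task-abcd-every-6-minutes': {
        'task': 'myapp.tasks.task_abcd',
        'schedule': crontab(minute='*/6'),
    },
}
Disabling the task then means removing (or commenting out) this entry and restarting beat.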
Then, use the Database Scheduler (provided by django-celery-beat), which allows creating schedules dynamically: which tasks need to run, when, and with what arguments. It even provides nice Django admin web views for administration!
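A short sketch of wiring that up with the django-celery-beat package (the usual provider of the database scheduler and its admin views); check the exact steps against its documentation for your versions:
pip install django-celery-beat
# add 'django_celery_beat' to INSTALLED_APPS, then create its tables:
python manage.py migrate
# run beat with the database scheduler instead of the default file-based one:
celery -A proj beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler
Schedules can then be created, edited, and disabled from the Django admin without redeploying code.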
That code will work but I'd go for something that doesn't force you to update your code every time you need to disable/enable the task.
What you could do is use a configurable flag whose value comes from an admin panel, a configuration file, or wherever you prefer, and return early if the task is in disabled mode.
For instance:
@periodic_task(run_every=crontab(minute='*/6'))
def task_abcd():
    config = load_config_for_task_abcd()
    if not config.is_enabled:
        return
    # some operations here
In this way, even if your task is scheduled, its operations won't be executed.
If you simply want to remove the periodic task, have you tried removing the function and then restarting your Celery service? You can restart your Redis service as well as your Django server for good measure.
Make sure that the function you removed is not referenced anywhere else.

Why does celery+rabbitmq generate a new queue each time?

abmp.py:
from celery import Celery

app = Celery('abmp', backend='amqp://guest@localhost', broker='amqp://guest@localhost')

@app.task(bind=True)
def add(self, a, b):
    return a + b
execute_test.py:
from abmp import add

add.apply_async(
    args=(5, 7),
    queue='push_tasks',
    exchange='push_tasks',
    routing_key='push_tasks'
)
Start the celery worker:
celery -A abmp worker -E -Q push_tasks -l info
Run execute_test.py:
python2.7 execute_test.py
Finally, looking at the RabbitMQ management view, I found that each run of execute_test.py generates a new queue, rather than putting the task into the push_tasks queue.
You are using AMQP as the result backend. Celery stores each task's result as a new queue, named with the task's ID. Use a better suited backend (Redis, for example) to avoid spamming new queues.
When you are using AMQP as the result backend for Celery, the default behavior is to store every task result (for 1 day, as per the FAQ at http://docs.celeryproject.org/en/latest/faq.html).
As per the documentation for the current stable version (4.1), this backend is deprecated and should not be used.
Your options are (a short sketch of all three follows below):
Use the result_expires setting, if you plan to go ahead with amqp as the backend.
Use a different backend (like Redis).
If you don't need the results at all, use the ignore_result setting.
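A minimal sketch of those three options in one place; the Redis URL, expiry value, and task are placeholders:
from celery import Celery

app = Celery(
    'abmp',
    broker='amqp://guest@localhost',
    backend='redis://localhost:6379/0',  # option 2: a better suited result backend
)
app.conf.result_expires = 3600           # option 1: expire stored results after an hour

@app.task(ignore_result=True)            # option 3: don't store this task's result at all
def add(a, b):
    return a + b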

Celery gives no error with bad remote task names, why?

Using "send_task" celery actually never verifies a remote task exists i.e:
app.send_task('tasks.i.dont.exist', args=[], kwargs={})
Celery still seems to return a result, i.e.:
<AsyncResult: b8c1425a-7411-491f-b75a-34313832b8ba>
Is there a way for it to fail if the remote task does not exist?
I've tried adding .get() and it just freezes.
According to the documentation:
If the task is not registered in the current process then you can also
execute a task by name.
You do this by using the send_task() method of the celery instance
If you want verification, consider using delay instead.
You can read more about how to execute celery tasks here.
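A short sketch of the difference being pointed at; tasks.add and the import path are illustrative:
# send_task only publishes a message under the given name, so a typo is not
# detected locally; the worker logs an error for the unregistered task instead:
result = app.send_task('tasks.i.dont.exist', args=[], kwargs={})
print(result)            # still an AsyncResult; result.get() would block forever

# calling the task object itself requires the task to exist in the current
# process, so a wrong name fails immediately at import/attribute lookup:
from tasks import add
add.delay(2, 2)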

Celery task is hanging with http request

I'm testing celery tasks and have stumbled on an issue. If a task contains code that makes an HTTP request (through urllib.urlopen), it hangs. What could the reasons be?
I'm just trying to start with a minimal config using Flask.
I used RabbitMQ and Redis as broker and backend, but the result is the same.
File with the tasks (run_celery.py):
# ...import celery and flask app...
celery = Celery(
    app.import_name,
    backend=app.config['CELERY_BROKER_URL'],
    broker=app.config['CELERY_BROKER_URL']
)

@celery.task
def test_task(a):
    print(a)
    print(requests.get('http://google.com'))
This is how I launched the worker:
celery -A run_celery.celery worker -l debug
After this, I run ipython and call the task:
from run_celery import test_task
test_task.apply_async(('sfas',))
The worker begins performing the task:
...
Received task: run_celery.test_task...
sfas
Starting new HTTP connection (1)...
And after this it hangs.
This behavior occurs only if the task contains a request.
What did I do wrong?
I found the reason in my code and was very surprised O_o. I don't know why this happens, but the file with the tasks imports a Model, and executing that import initializes a MagentoAPI instance (https://github.com/bernieke/python-magento). If I comment out this initialization, requests in celery tasks run correctly.
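For completeness, a sketch of the usual workaround once a module-level import with side effects like this is found: defer the initialization until it is actually needed. magento_connect() is a hypothetical helper standing in for whatever builds the MagentoAPI instance:
# Module-level code runs in every worker process at startup; creating the
# Magento client lazily keeps importing the task module side-effect free.
_magento_client = None

def get_magento_client():
    global _magento_client
    if _magento_client is None:
        _magento_client = magento_connect()  # hypothetical factory for MagentoAPI
    return _magento_client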
