Celery worker hangs without any error - python

I have a production setup running Celery workers that make POST/GET requests to a remote service and store the result. It handles a load of around 20k tasks per 15 minutes.
The problem is that the workers go numb for no apparent reason: no errors, no warnings.
I have tried adding multiprocessing as well, with the same result.
In the log I see the time to execute a task increasing, like "succeeded in s".
For more details see https://github.com/celery/celery/issues/2621

If your Celery worker gets stuck sometimes, you can use strace and lsof to find out at which system call it is stuck.
For example:
$ strace -p 10268 -s 10000
Process 10268 attached - interrupt to quit
recvfrom(5,
10268 is the PID of the Celery worker; recvfrom(5 means the worker is blocked receiving data from file descriptor 5.
You can then use lsof to check what file descriptor 5 is in this worker process.
lsof -p 10268
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
......
celery 10268 root 5u IPv4 828871825 0t0 TCP 172.16.201.40:36162->10.13.244.205:wap-wsp (ESTABLISHED)
......
It indicates that the worker is stuck on a TCP connection (you can see 5u in the FD column).
Some Python packages like requests block while waiting for data from the peer, which can cause a Celery worker to hang. If you are using requests, make sure to set the timeout argument (see the sketch below).
Have you seen this page:
https://www.caktusgroup.com/blog/2013/10/30/using-strace-debug-stuck-celery-tasks/
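For illustration, a minimal sketch of such a task with explicit timeouts on its HTTP calls; the task name, URL argument, and timeout values are placeholders, not part of the original setup:
import requests
from celery import shared_task

@shared_task
def fetch_remote(url):
    # (connect timeout, read timeout): without these, a dead peer can block the worker indefinitely
    response = requests.get(url, timeout=(5, 30))
    response.raise_for_status()
    return response.text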

I also faced this issue when using delay() on a @shared_task with celery, kombu, amqp, and billiard. After calling the API everything worked fine, but as soon as execution reached delay() it hung.
The issue was that the lines below were missing from the main application's __init__.py. They make sure the app is always imported when Django starts, so that shared_task will use this app.
In __init__.py:
from __future__ import absolute_import, unicode_literals
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celeryApp
__all__ = ['celeryApp']
Note 1: In place of celeryApp, use the app defined in your celery.py; import that app and expose it here.
Note 2: If you are only facing the hang in shared tasks, the solution above may fix your issue and you can ignore the rest of this answer.
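For reference, a minimal sketch of the celery.py that the import above assumes (the project name proj is a placeholder):
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

app = Celery('proj')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()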
I also want to mention another issue: if anyone is facing an Error 111 connection issue, check whether your versions of amqp==2.2.2, billiard==3.5.0.3, celery==4.1.0, and kombu==4.1.0 are mutually compatible (the versions mentioned are just an example). Also check whether Redis is installed on your system (if you are using Redis).
Also make sure you are using Kombu 4.1.0; the latest version of Kombu renames async to asynchronous.
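For example, to pin the versions mentioned above (only an example set, as noted):
pip install amqp==2.2.2 billiard==3.5.0.3 celery==4.1.0 kombu==4.1.0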

Follow this tutorial: Celery Django Link
Add the following to the settings.
NB: install Redis; it is used for both the transport and the result backend.
# TRANSPORT
CELERY_BROKER_TRANSPORT = 'redis'
CELERY_BROKER_HOST = 'localhost'
CELERY_BROKER_PORT = '6379'
CELERY_BROKER_VHOST = '0'
# RESULT
CELERY_RESULT_BACKEND = 'redis'
CELERY_REDIS_HOST = 'localhost'
CELERY_REDIS_PORT = '6379'
CELERY_REDIS_DB = '1'
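Equivalently, a sketch using the single URL-style settings (database numbers chosen to match the above):
CELERY_BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/1'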

Related

Django Celery delay() always pushing to default 'celery' queue

I'm ripping my hair out with this one.
The crux of my issue is that using the CELERY_DEFAULT_QUEUE setting in my Django settings.py is not forcing my tasks to go to the particular queue that I've set up. They always go to the default celery queue in my broker.
However, if I specify queue=proj:dev in the shared_task decorator, it goes to the correct queue. It behaves as expected.
My setup is as follows:
Django code on my localhost (for testing and stuff). Executing task .delay()'s via Django's shell (manage.py shell)
a remote Redis instance configured as my broker
2 celery workers configured on a remote machine setup and waiting for messages from Redis (On Google App Engine - irrelevant perhaps)
NB: For the pieces of code below, I've obscured the project name and used proj as a placeholder.
celery.py
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery, shared_task
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')
app = Celery('proj')
app.config_from_object('django.conf:settings', namespace='CELERY', force=True)
app.autodiscover_tasks()
@shared_task
def add(x, y):
    return x + y
settings.py
...
CELERY_RESULT_BACKEND = 'django-db'
CELERY_BROKER_URL = 'redis://:{}@{}:6379/0'.format(
    os.environ.get('REDIS_PASSWORD'),
    os.environ.get('REDIS_HOST', 'alice-redis-vm'))
CELERY_DEFAULT_QUEUE = os.environ.get('CELERY_DEFAULT_QUEUE', 'proj:dev')
The idea is that, for right now, I'd like to have different queues for the different environments that my code exists in: dev, staging, prod. Thus, on Google App Engine, I define an environment variable that is passed based on the individual App Engine service.
Steps
So, with the above configuration, I fire up the shell using ./manage.py shell and run add.delay(2, 2). I get an AsyncResult back but Redis monitor clearly shows a message was sent to the default celery queue:
1497566026.117419 [0 155.93.144.189:58887] "LPUSH" "celery"
...
What am I missing?
Not to throw a spanner in the works, but I feel like there was a point today at which this was actually working. But for the life of me, I can't think what part of my brain is failing me here.
Stack versions:
python: 3.5.2
celery: 4.0.2
redis: 2.10.5
django: 1.10.4
This issue is far simpler than I thought: incorrect documentation!
The Celery documentation asks us to use CELERY_DEFAULT_QUEUE to set the task_default_queue configuration on the celery object.
Ref: http://docs.celeryproject.org/en/latest/userguide/configuration.html#new-lowercase-settings
We should currently use CELERY_TASK_DEFAULT_QUEUE. This is an inconsistency with the naming of the other settings. It was raised on GitHub here: https://github.com/celery/celery/issues/3772
Solution summary
Using CELERY_DEFAULT_QUEUE in a configuration module (using config_from_object) has no effect on the queue.
Use CELERY_TASK_DEFAULT_QUEUE instead.
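Applied to the settings.py above, that means (keeping the same environment-variable fallback):
CELERY_TASK_DEFAULT_QUEUE = os.environ.get('CELERY_DEFAULT_QUEUE', 'proj:dev')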
If you are here because you're trying to implement a predefined queue using SQS in Celery and find that Celery creates a new queue called "celery" in SQS regardless of what you say, you've reached the end of your journey, friend.
Before passing broker_transport_options to Celery, change your default queue and/or specify the queues you will use explicitly. In my case I need just one queue, so the following worked:
celery.conf.task_default_queue = "<YOUR_PREDEFINED_QUEUE_NAME_IN_SQS>"
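Putting that together, a sketch of the shape this describes, assuming the SQS predefined_queues transport option; the app name, queue name, and queue URL are placeholders, and AWS credentials are assumed to come from the environment:
app = Celery('proj', broker='sqs://')
app.conf.task_default_queue = 'my-queue'
app.conf.broker_transport_options = {
    'predefined_queues': {
        'my-queue': {  # must match task_default_queue
            'url': 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue',
        },
    },
}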

How to get Celery to load the config from command line?

I am attempting to use celery worker to load a config file at the command line:
celery worker --config=my_settings_module
This doesn't appear to work: celery worker starts and uses its default settings (which include assuming that there is a RabbitMQ server available at localhost:5672). In my config I would like to point Celery to a different place, but when I change the AMQP settings in the config file, Celery doesn't appear to care. It still shows the default RabbitMQ settings.
I also tried something bogus
celery worker --config=this_file_does_not_exist
And Celery once again did not care. The worker started and attached to the default RabbitMQ. It's not even looking at the --config setting.
I read about how Celery lazy loads. I'm not sure that has anything to do with this.
How do I get celery worker to honor the --config setting?
If you give an invalid module name, or a module name that is not on the PYTHONPATH, say celery worker --config=invalid_foo, Celery will ignore it.
You can verify this by creating a simple config file.
$ celery worker -h
--config=CONFIG Name of the configuration module
As mentioned in the celery worker help, you should pass the name of a configuration module; otherwise it will raise an error.
If you just run
celery worker
it will start a worker and its output will be colored.
In the same directory, create a file called c.py with this line.
CELERYD_LOG_COLOR = False
Now run
celery worker --config=c
and it will start a worker whose output is not colored.
If you run celery worker --config=c.py, it will raise an error.
celery.utils.imports.NotAPackage: Error: Module 'c.py' doesn't exist, or it's not a valid Python module name.
Did you mean 'c'?
I had the exact same error, but I eventually figured out that I had made simple option naming mistakes in the configuration module itself which were not obvious at all.
You see, when you start out and follow the tutorial, you will end up with something that looks like this in your main module:
app = Celery('foo', broker='amqp://user:pass@example.com/vsrv', backend='rpc://')
Which works fine, but then later as you add more and more configuration options you decide to move the options to a separate file, at which point you go ahead and just copy+paste and split the options into lines until it looks like this:
Naïve my_settings.py:
broker='amqp://user:pass@example.com/vsrv'
backend='rpc://'
result_persistent=True
task_acks_late=True
# ... etc. etc.
And there you just fooled yourself! Because in a settings module the options are called broker_url and result_backend instead of just broker and backend as they would be called in the instantiation above.
Corrected my_settings.py:
broker_url='amqp://user:pass@example.com/vsrv'
result_backend='rpc://'
result_persistent=True
task_acks_late=True
# ... etc. etc.
And all of a sudden, your worker boots up just fine with all settings in place.
I hope this will cure a few headaches of fellow celery newbies like us.
Further note:
You can test that celery in fact does not ignore your file by placing a print-statement (or print function call if you're on Py3) into the settings module.
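For example (a trivial sketch; the message text is arbitrary):
# my_settings.py
print('my_settings loaded')  # appears in the worker's startup output if the module is really imported

broker_url = 'amqp://user:pass@example.com/vsrv'
result_backend = 'rpc://'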

Celery/RabbitMQ unacked messages blocking queue?

I have invoked a task that fetches some information remotely with urllib2 a few thousand times. The tasks are scheduled with a random eta (within a week) so they don't all hit the server at the same time. Sometimes I get a 404, sometimes not; I am handling the error in case it happens.
In the RabbitMQ console I can see 16 unacknowledged messages:
I stopped celery, purged the queue and restarted it. The 16 unacknowledged messages were still there.
I have other tasks that go to the same queue and none of them were executed either. After purging, I tried to submit another task and its state remains ready:
Any ideas how I can find out why messages remain unacknowledged?
Versions:
celery==3.1.4
{rabbit,"RabbitMQ","3.5.3"}
celeryapp.py
CELERYBEAT_SCHEDULE = {
    'social_grabber': {
        'task': '<django app>.tasks.task_social_grabber',
        'schedule': crontab(hour=5, minute=0, day_of_week='sunday'),
    },
}
tasks.py
@app.task
def task_social_grabber():
    for user in users:
        eta = randint(0, 60 * 60 * 24 * 7)  # week in seconds
        task_social_grabber_single.apply_async((user,), countdown=eta)
There is no routing for this task defined so it goes into the default queue: celery. There is one worker processing this queue.
supervisord.conf:
[program:celery]
autostart = true
autorestart = true
command = celery worker -A <django app>.celeryapp:app --concurrency=3 -l INFO -n celery
RabbitMQ broke QoS settings in version 3.3. You need to upgrade celery to at least 3.1.11 (changelog) and kombu to at least 3.0.15 (changelog). You should use the latest versions.
I hit this exact same behavior when 3.3 was released. RabbitMQ flipped the default behavior of the prefetch_count flag. Before this, if a consumer reached the CELERYD_PREFETCH_MULTIPLIER limit in eta'd messages, the worker would up this limit in order to fetch more messages. The change broke this behavior, as the new default behavior denied this capability.
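In other words, upgrade to at least the versions named above (a plain pip invocation; adjust to your environment):
pip install --upgrade "celery>=3.1.11" "kombu>=3.0.15"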
I had similar symptoms: messages were getting to the MQ (visible in the charts) but were not picked up by the worker.
This led me to assume that my Django app had set up the Celery app correctly, but that I was missing an import ensuring Celery would be configured during Django startup:
from __future__ import absolute_import
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app # noqa
It is a silly mistake, but the messages reaching the broker and an AsyncResult being returned got me off track and had me looking in the wrong places. Then I noticed that setting CELERY_ALWAYS_EAGER = True didn't do squat; even then the tasks weren't executed at all.
PS: This may not be an answer to @kev's question, but since I landed here a couple of times while looking for the solution to my problem, I am posting it for anyone in a similar situation.

celery worker not working though rabbitmq has queue buildup

I am getting started with Celery and wrote a task by following the tutorial, but somehow the worker is not coming up and I get the following log.
After entering the command:
celery worker -A tasks -l debug
I get this log:
Running a worker with superuser privileges when the
worker accepts messages serialized with pickle is a very bad idea!
If you really want to continue then you have to set the C_FORCE_ROOT
environment variable (but please think about this before you do).
User information: uid=0 euid=0 gid=0 egid=0
And here is my task:
from celery import Celery
app = Celery('tasks', backend='amqp', broker='amqp://sanjay:**@localhost:5672//')

@app.task
def gen_prime(x):
    multiples = []
    results = []
    for i in xrange(2, x+1):
        if i not in multiples:
            results.append(i)
            for j in xrange(i*i, x+1, i):
                multiples.append(j)
    return results
In the RabbitMQ admin console I see some queue build-up when I try to generate prime numbers from the IPython console, but I am not getting the result back in the console.
Here is my console action:
>>> from tasks import gen_prime
>>> pr=gen_prime.delay(10000)
>>> pr.ready()
False
>>>
>>> pr.ready()
False
>>> pr.ready()
False
I have been trying to solve this for the last 3 days but have not been able to.
The error message pretty much tells you what's going on in this case. You're trying to run the worker as root (generally a bad idea due to security concerns). If you want to override this and allow it to run, you must set your environment:
export C_FORCE_ROOT="true"
Then run the worker.
Or you can just run it as a different user, which is preferred. You can search for how to add a user; then you simply log in as that user, or su to them, and execute your worker.
Since you tagged this digital ocean, here is a link to their tutorial on how to add a user:
https://www.digitalocean.com/community/tutorials/how-to-add-and-delete-users-on-ubuntu-12-04-and-centos-6
Also, celery has some docs regarding how to daemonize your workers. I usually use the supervisord method.
https://celery.readthedocs.org/en/latest/tutorials/daemonizing.html#centos
Don't run celery workers as root.
I recommend using supervisord to manage celery workers - you can use the user configuration directive to specify which user to run the celery workers as.
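For example, a sketch adapting the supervisord.conf from the question; the celeryuser account is a placeholder for whichever unprivileged user you create:
[program:celery]
autostart = true
autorestart = true
user = celeryuser
command = celery worker -A <django app>.celeryapp:app --concurrency=3 -l INFO -n celery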

python-rq worker not reading jobs in queue

I'm facing a basic issue while setting up python-rq - the rqworker doesn't seem to recognize jobs that are pushed to the queue it's listening on.
Everything is run inside a virtualenv.
I have the following code:
from redis import Redis
from rq import Queue
from rq.registry import FinishedJobRegistry
from videogen import videogen
import time
redis_conn = Redis(port=5001)
videoq = Queue('medium', connection=redis_conn)
fin_registry = FinishedJobRegistry(connection=redis_conn, name='medium')
jobid = 1024
job = videoq.enqueue(videogen, jobid)
while not job.is_finished:
    time.sleep(2)
print job.result
Here videogen is a simple function which immediately returns the integer parameter it receives.
On running rqworker medium and starting the app, no result is printed. There are no extra traces from rqworker other than this:
14:41:29 RQ worker started, version 0.5.0
14:41:29
14:41:29 *** Listening on medium...
The Redis instance is accessible from the same shell where I run rqworker, and it even shows the updated keys:
127.0.0.1:5001> keys *
1) "rq:queues"
2) "rq:queue:medium"
3) "rq:job:9a46f9c5-03e1-4b08-946b-61ad2c3815b1"
So what is possibly missing here?
Silly error: I had to supply the Redis connection URL to rqworker:
rqworker --url redis://localhost:5001 medium
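Alternatively, a sketch using rq's Python API so the worker is guaranteed to use the same connection as the enqueuing code (this assumes rq's Worker class accepts a connection argument; the port comes from the question):
from redis import Redis
from rq import Queue, Worker

redis_conn = Redis(port=5001)  # same non-default port the app uses
worker = Worker([Queue('medium', connection=redis_conn)], connection=redis_conn)
worker.work()  # blocks and processes jobs from the 'medium' queue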
It's worth noting that this can also happen if you run your RQ workers on Windows, which is not supported by the workers. From the documentation:
RQ workers will only run on systems that implement fork(). Most
notably, this means it is not possible to run the workers on Windows
without using the Windows Subsystem for Linux and running in a bash
shell.
