We have Python 3.6.1 set up with Django, Celery, and RabbitMQ on Ubuntu 14.04. Right now I'm using the Django debug server (for development; Apache isn't working yet). My current problem is that the Celery workers get launched from Python and immediately die -- the processes show up as defunct. If I use the same command in a terminal window, the worker gets created and picks up the task if there is one waiting in the queue.
Here's the command:
celery worker --app=myapp --loglevel=info --concurrency=1 --maxtasksperchild=20 -n celery_1 -Q celery
The same behavior occurs for whichever queues are being set up.
In the terminal, we see the output myapp.settings - INFO - Loading... followed by output that describes the queue and lists the tasks. When running from Python, the last thing we see is the Loading... line.
In the code, we do have a check to be sure we are not running the celery command as root.
These are the Celery settings from our settings.py file:
CELERY_ACCEPT_CONTENT = ['json','pickle']
CELERY_TASK_SERIALIZER = 'pickle'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_IMPORTS = ('api.tasks',)
CELERYD_PREFETCH_MULTIPLIER = 1
CELERYD_CONCURRENCY = 1
BROKER_POOL_LIMIT = 120 # Note: I tried this set to None but it didn't seem to make any difference
CELERYD_LOG_COLOR = False
CELERY_LOG_FORMAT = '%(asctime)s - %(processName)s - %(levelname)s - %(message)s'
CELERYD_HIJACK_ROOT_LOGGER = False
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(psconf.BASE_DIR, 'myapp_static/')
BROKER_URL = psconf.MQ_URI
CELERY_RESULT_BACKEND = 'rpc'
CELERY_RESULT_PERSISTENT = True
CELERY_ROUTES = {}
for entry in os.scandir(psconf.PLUGIN_PATH):
    if not entry.is_dir() or entry.name == '__pycache__':
        continue
    plugin_dir = entry.name
    settings_file = f'{plugin_dir}.settings'
    try:
        plugin_tasks = importlib.import_module(settings_file)
        queue_name = plugin_tasks.QUEUENAME
    except ModuleNotFoundError as e:
        logging.warning(e)
    except AttributeError:
        logging.debug(f'The plugin {plugin_dir} will use the general worker queue.')
    else:
        CELERY_ROUTES[f'{plugin_dir}.tasks.run'] = {'queue': queue_name}
        logging.debug(f'The plugin {plugin_dir} will use the {queue_name} queue.')
Here is the part that kicks off the worker:
class CeleryWorker(BackgroundProcess):
    def __init__(self, n, q):
        self.name = n
        self.worker_queue = q
        cmd = (f'celery worker --app=myapp --loglevel=info --concurrency=1 '
               f'--maxtasksperchild=20 -n {self.name} -Q {self.worker_queue}')
        super().__init__(cmd, cwd=str(psconf.BASE_DIR))

class BackgroundProcess(subprocess.Popen):
    def __init__(self, args, **kwargs):
        super().__init__(args, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                         universal_newlines=True, **kwargs)
Any suggestions as to how to get this working from Python are appreciated. I'm new to Rabbitmq/Celery.
Just in case someone else needs this... It turns out that the problem was that the shell script which kicks off this whole app is now being launched with sudo, and even though I thought I was checking so that we wouldn't launch the celery worker with sudo, I'd missed something and we were trying to launch it as root. That is a no-no. I'm now explicitly using 'sudo -u <user>' and the workers are starting properly.
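For anyone who wants a guard against this failure mode, here is a minimal sketch of the kind of pre-launch check one could add (the unprivileged account name 'celeryuser' is a placeholder, not part of the original setup):

import os

def build_worker_cmd(name, queue, run_as='celeryuser'):
    """Build the celery worker command, refusing to leave it running as root."""
    cmd = (f'celery worker --app=myapp --loglevel=info --concurrency=1 '
           f'--maxtasksperchild=20 -n {name} -Q {queue}')
    if os.geteuid() == 0:
        # The parent script was launched with sudo; step back down to an
        # unprivileged user instead of letting the worker die as root.
        cmd = f'sudo -u {run_as} {cmd}'
    return cmd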
I would like to launch a periodic task every second, but only if the previous task has finished (it polls the database to send tasks to Celery).
In the Celery documentation they use the Django cache to make a lock.
I tried to use the example:
from __future__ import absolute_import
import datetime
import time
from celery import shared_task
from django.core.cache import cache
LOCK_EXPIRE = 60 * 5
@shared_task
def periodic():
    acquire_lock = lambda: cache.add('lock_id', 'true', LOCK_EXPIRE)
    release_lock = lambda: cache.delete('lock_id')
    a = acquire_lock()
    if a:
        try:
            time.sleep(10)
            print a, 'Hello ', datetime.datetime.now()
        finally:
            release_lock()
    else:
        print 'Ignore'
with the following configuration:
app.conf.update(
    CELERY_IGNORE_RESULT=True,
    CELERY_ACCEPT_CONTENT=['json'],
    CELERY_TASK_SERIALIZER='json',
    CELERY_RESULT_SERIALIZER='json',
    CELERYBEAT_SCHEDULE={
        'periodic_task': {
            'task': 'app_task_management.tasks.periodic',
            'schedule': timedelta(seconds=1),
        },
    },
)
But in the console I never see the Ignore message, and I get Hello every second. It seems that the lock is not working.
I launch the periodic task with:
celeryd -B -A my_app
and the worker with:
celery worker -A my_app -l info
Could you please correct my misunderstanding?
From the Django Cache Framework documentation about local-memory cache:
Note that each process will have its own private cache instance, which
means no cross-process caching is possible.
So basically your workers are each dealing with their own cache. If you need a cache backend with a low resource cost, I would recommend the file-based cache or the database cache; both allow cross-process caching.
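For illustration, a minimal file-based cache configuration might look like this (the LOCATION path is just an example; point it at any directory that all worker processes can read and write):

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
        # Shared directory visible to every worker process on this host
        'LOCATION': '/var/tmp/django_cache',
    }
}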
I'm trying to run example from Celery documentation.
I run: celeryd --loglevel=INFO
/usr/local/lib/python2.7/dist-packages/celery/loaders/default.py:64: NotConfigured: No 'celeryconfig' module found! Please make sure it exists and is available to Python.
"is available to Python." % (configname, )))
[2012-03-19 04:26:34,899: WARNING/MainProcess]
-------------- celery@ubuntu v2.5.1
---- **** -----
--- * *** * -- [Configuration]
-- * - **** --- . broker: amqp://guest@localhost:5672//
- ** ---------- . loader: celery.loaders.default.Loader
- ** ---------- . logfile: [stderr]@INFO
- ** ---------- . concurrency: 4
- ** ---------- . events: OFF
- *** --- * --- . beat: OFF
-- ******* ----
--- ***** ----- [Queues]
-------------- . celery: exchange:celery (direct) binding:celery
tasks.py:
# -*- coding: utf-8 -*-
from celery.task import task
@task
def add(x, y):
    return x + y
run_task.py:
# -*- coding: utf-8 -*-
from tasks import add
result = add.delay(4, 4)
print(result)
print(result.ready())
print(result.get())
In same folder celeryconfig.py:
CELERY_IMPORTS = ("tasks", )
CELERY_RESULT_BACKEND = "amqp"
BROKER_URL = "amqp://guest:guest@localhost:5672//"
CELERY_TASK_RESULT_EXPIRES = 300
When I run "run_task.py":
on python console
eb503f77-b5fc-44e2-ac0b-91ce6ddbf153
False
errors on celeryd server
[2012-03-19 04:34:14,913: ERROR/MainProcess] Received unregistered task of type 'tasks.add'.
The message has been ignored and discarded.
Did you remember to import the module containing this task?
Or maybe you are using relative imports?
Please see http://bit.ly/gLye1c for more information.
The full contents of the message body was:
{'retries': 0, 'task': 'tasks.add', 'utc': False, 'args': (4, 4), 'expires': None, 'eta': None, 'kwargs': {}, 'id': '841bc21f-8124-436b-92f1-e3b62cafdfe7'}
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/worker/consumer.py", line 444, in receive_message
self.strategies[name](message, body, message.ack_log_error)
KeyError: 'tasks.add'
Please explain what's the problem.
I think you need to restart the worker server. I met the same problem and solved it by restarting the worker.
I had the same problem:
The reason for "Received unregistered task of type.." was that the celeryd service didn't find and register the tasks when the service started (by the way, their list is visible when you start
./manage.py celeryd --loglevel=info ).
These tasks should be declared in CELERY_IMPORTS = ("tasks", ) in the settings file.
If you have a separate celery_settings.py file, it has to be declared when the celeryd service starts with --settings=celery_settings.py, as digivampire wrote.
You can see the current list of registered tasks in the celery.registry.TaskRegistry class. Could be that your celeryconfig (in the current directory) is not in PYTHONPATH so celery can't find it and falls back to defaults. Simply specify it explicitly when starting celery.
celeryd --loglevel=INFO --settings=celeryconfig
You can also set --loglevel=DEBUG and you should probably see the problem immediately.
Whether you use CELERY_IMPORTS or autodiscover_tasks, the important point is that the tasks can be found, and that the names of the tasks registered in Celery match the names the workers try to fetch.
When you launch Celery, say with celery worker -A project --loglevel=DEBUG, you should see the names of the tasks. For example, this is what I see when I have a debug_task task in my celery.py:
[tasks]
. project.celery.debug_task
. celery.backend_cleanup
. celery.chain
. celery.chord
. celery.chord_unlock
. celery.chunks
. celery.group
. celery.map
. celery.starmap
If you can't see your tasks in the list, please check that your celery configuration imports the tasks correctly, whether through --settings, --config, celeryconfig or config_from_object.
If you are using celery beat, make sure the task name, task, you use in CELERYBEAT_SCHEDULE matches the name in the celery task list.
app = Celery('proj',
             broker='amqp://',
             backend='amqp://',
             include=['proj.tasks'])

Please make sure to pass include=['proj.tasks'].
You need to go to the top-level directory, then execute this:
celery -A app.celery_module.celeryapp worker --loglevel=info
not
celery -A celeryapp worker --loglevel=info
In your celeryconfig.py, set imports = ("path.path.tasks",).
And please invoke the task from another module!
I also had the same problem; I added
CELERY_IMPORTS = ("mytasks",)
in my celeryconfig.py file to solve it.
Using --settings did not work for me. I had to use the following to get it all to work:
celery --config=celeryconfig --loglevel=INFO
Here is the celeryconfig file that has the CELERY_IMPORTS added:
# Celery configuration file
BROKER_URL = 'amqp://'
CELERY_RESULT_BACKEND = 'amqp://'
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'America/Los_Angeles'
CELERY_ENABLE_UTC = True
CELERY_IMPORTS = ("tasks",)
My setup was a little trickier because I'm using supervisor to launch celery as a daemon.
For me this error was solved by ensuring the app containing the tasks was included under Django's INSTALLED_APPS setting.
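A minimal sketch of what that looks like (the app label 'myapp' is a placeholder for the app whose tasks.py defines your Celery tasks):

INSTALLED_APPS = [
    'django.contrib.contenttypes',
    'django.contrib.auth',
    'myapp',  # placeholder: the app that contains tasks.py
]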
What worked for me was to add an explicit name to the celery task decorator. I changed my task declaration from @app.task to @app.task(name='module.submodule.task').
Here is an example
At first my task was like:
# tasks/test_tasks.py
@celery.task
def test_task():
    print("Celery Task !!!!")

I changed it to:

# tasks/test_tasks.py
@celery.task(name='tasks.test_tasks.test_task')
def test_task():
    print("Celery Task !!!!")
This method is helpful when you don't have a dedicated tasks.py file to include it in celery config.
In my case the issue was that my project was not picking up autodiscover_tasks properly.
In the celery.py file, the code for autodiscover_tasks was:
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
I changed it to the following:
from django.apps import apps
app.autodiscover_tasks(lambda: [n.name for n in apps.get_app_configs()])
Best wishes to you.
I had this problem mysteriously crop up when I added some signal handling to my django app. In doing so I converted the app to use an AppConfig, meaning that instead of simply reading as 'booking' in INSTALLED_APPS, it read 'booking.app.BookingConfig'.
Celery doesn't understand what that means, so I added INSTALLED_APPS_WITH_APPCONFIGS = ('booking',) to my django settings, and modified my celery.py from
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
to
app.autodiscover_tasks(
    lambda: settings.INSTALLED_APPS + settings.INSTALLED_APPS_WITH_APPCONFIGS
)
I had the same problem running tasks from Celery Beat. Celery doesn't like relative imports so in my celeryconfig.py, I had to explicitly set the full package name:
app.conf.beat_schedule = {
    'add-every-30-seconds': {
        'task': 'full.path.to.add',
        'schedule': 30.0,
        'args': (16, 16)
    },
}
Try importing the Celery task in a Python Shell - Celery might silently be failing to register your tasks because of a bad import statement.
I had an ImportError exception in my tasks.py file that was causing Celery to not register the tasks in the module. All other module tasks were registered correctly.
This error wasn't evident until I tried importing the Celery task within a Python Shell. I fixed the bad import statement and then the tasks were successfully registered.
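As a concrete illustration (the module and task names here are placeholders), the check is simply:

# run inside `python manage.py shell`, or any Python shell on the same PYTHONPATH
from myapp.tasks import my_task  # an ImportError raised here is why the task never registers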
This, strangely, can also be because of a missing package. Run pip to install all necessary packages:
pip install -r requirements.txt
autodiscover_tasks wasn't picking up tasks that used missing packages.
I did not have any issue with Django, but I encountered this when I was using Flask. The solution was setting the config option.
celery worker -A app.celery --loglevel=DEBUG --config=settings
while with Django, I just had:
python manage.py celery worker -c 2 --loglevel=info
I encountered this problem as well, but it is not quite the same, so just FYI. Recent upgrades cause this error message due to this decorator syntax:
@task('my_server_check')
It had to be changed to just
@task()
No clue why.
If you are using the app config in your installed apps like this:
LOCAL_APPS = [
    'apps.myapp.apps.MyAppConfig',
]
then in your app config, import the tasks in the ready method like this:
from django.apps import AppConfig

class MyAppConfig(AppConfig):
    name = 'apps.myapp'

    def ready(self):
        try:
            import apps.myapp.signals  # noqa F401
            import apps.myapp.tasks
        except ImportError:
            pass
Did you include your tasks.py file, or wherever your async methods are stored?
app = Celery('APP_NAME', broker='redis://redis:6379/0', include=['app1.tasks', 'app2.tasks', ...])
I have solved my problem: my task was under a Python package named 'celery_task'. When I exited that package directory and ran the command celery worker -A celery_task.task --loglevel=info, it worked.
As some other answers have already pointed out, there are many reasons why celery would silently ignore tasks, including dependency issues but also any syntax or code problem.
One quick way to find them is to run:
./manage.py check
Many times, after fixing the errors that are reported, the tasks are recognized by celery.
If you're using Docker, then as mentioned here, this will kill your pain:
docker stop $(docker ps -a -q)
For me, restarting the broker (Redis) solved it.
The task already showed up correctly in Celery's task list and all relevant Django settings and imports worked fine.
My broker was running before I wrote the task, and restarting Celery and Django alone didn't solve it.
However, stopping Redis with Ctrl+C and then restarting it with redis-server helped Celery to correctly identify the task.
If you are running into this kind of error, there are a number of possible causes, but the solution I found was that my celeryd config file in /etc/default/celeryd was configured for standard use, not for my specific Django project. As soon as I converted it to the format specified in the Celery docs, all was well.
The solution for me was to add this line to /etc/default/celeryd:
CELERYD_OPTS="-A tasks"
Because when I ran these commands:
celery worker --loglevel=INFO
celery worker -A tasks --loglevel=INFO
only the latter command showed any task names at all.
I also tried adding a CELERY_APP line to /etc/default/celeryd, but that didn't work either.
CELERY_APP="tasks"
I had this issue with PeriodicTask classes in django-celery: while their names showed up fine when starting the celery worker, every execution triggered:
KeyError: u'my_app.tasks.run'
My task was a class named 'CleanUp', not just a method called 'run'.
When I checked the 'djcelery_periodictask' table I saw outdated entries, and deleting them fixed the issue.
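If it helps, here is a sketch of the same clean-up through the django-celery ORM, run in `python manage.py shell` (the task path 'my_app.tasks.run' is taken from the error above; substitute whatever stale name your own table contains):

from djcelery.models import PeriodicTask

# Inspect the stale entries before removing anything
stale = PeriodicTask.objects.filter(task='my_app.tasks.run')
print(stale.count())
stale.delete()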
Just to add my two cents for my case with this error...
My path is /vagrant/devops/test with app.py and __init__.py in it.
When I run cd /vagrant/devops/ && celery worker -A test.app.celery --loglevel=info I am getting this error.
But when I run it like cd /vagrant/devops/test && celery worker -A app.celery --loglevel=info everything is OK.
I found that one of our programmers had added the following line to one of the imported modules:
os.chdir(<path_to_a_local_folder>)
This caused the Celery worker to change its working directory from the project's default working directory (where it could find the tasks) to a different directory (where it couldn't find them).
After removing this line of code, all tasks were found and registered.
Celery doesn't support relative imports, so in my celeryconfig.py you need an absolute import:
from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'add_num': {
        'task': 'app.tasks.add_num.add_nums',
        'schedule': timedelta(seconds=10),
        'args': (1, 2)
    }
}
An additional item for a really useful list.
I have found Celery unforgiving of errors in tasks (or at least I haven't been able to trace the appropriate log entries), and it doesn't register them. I have had a number of issues with running Celery as a service, which have been predominantly permissions related.
The latest was related to permissions for writing to a log file. I had no issues in development or when running celery at the command line, but the service reported the task as unregistered.
I needed to change the log folder permissions to enable the service to write to it.
My 2 cents
I was getting this in a Docker image based on Alpine. The Django settings referenced /dev/log for logging to syslog. The Django app and the celery worker were both based on the same image. The entrypoint of the Django app image launched syslogd on start, but the one for the celery worker did not. This was causing things like ./manage.py shell to fail because there wouldn't be any /dev/log. The celery worker was not failing; instead, it silently ignored the rest of the app launch, which included loading shared_task entries from applications in the Django project.
I'm using Celery to manage asynchronous tasks. Occasionally, however, the celery process goes down, which causes none of the tasks to get executed. I would like to be able to check the status of celery and make sure everything is working fine, and if I detect any problems display an error message to the user. From the Celery Worker documentation it looks like I might be able to use ping or inspect for this, but ping feels hacky and it's not clear exactly how inspect is meant to be used (do I just check whether inspect().registered() is empty?).
Any guidance on this would be appreciated. Basically what I'm looking for is a method like so:
def celery_is_alive():
    from celery.task.control import inspect
    return bool(inspect().registered())  # is this right??
EDIT: It doesn't even look like registered() is available on celery 2.3.3 (even though the 2.1 docs list it). Maybe ping is the right answer.
EDIT: Ping also doesn't appear to do what I thought it would do, so still not sure the answer here.
Here's the code I've been using. celery.task.control.Inspect.stats() returns a dict containing lots of details about the currently available workers, None if there are no workers running, or raises an IOError if it can't connect to the message broker. I'm using RabbitMQ - it's possible that other messaging systems might behave slightly differently. This worked in Celery 2.3.x and 2.4.x; I'm not sure how far back it goes.
def get_celery_worker_status():
    ERROR_KEY = "ERROR"
    try:
        from celery.task.control import inspect
        insp = inspect()
        d = insp.stats()
        if not d:
            d = {ERROR_KEY: 'No running Celery workers were found.'}
    except IOError as e:
        from errno import errorcode
        msg = "Error connecting to the backend: " + str(e)
        if len(e.args) > 0 and errorcode.get(e.args[0]) == 'ECONNREFUSED':
            msg += ' Check that the RabbitMQ server is running.'
        d = {ERROR_KEY: msg}
    except ImportError as e:
        d = {ERROR_KEY: str(e)}
    return d
From the documentation of celery 4.2:
from your_celery_app import app


def get_celery_worker_status():
    i = app.control.inspect()
    availability = i.ping()
    stats = i.stats()
    registered_tasks = i.registered()
    active_tasks = i.active()
    scheduled_tasks = i.scheduled()
    result = {
        'availability': availability,
        'stats': stats,
        'registered_tasks': registered_tasks,
        'active_tasks': active_tasks,
        'scheduled_tasks': scheduled_tasks
    }
    return result
Of course, you could/should improve the code with error handling...
To check the same thing from the command line when celery is running as a daemon:
Activate the virtualenv and go to the directory where the 'app' is.
Now run: celery -A [app_name] status
It will show whether celery is up or not, plus the number of nodes online.
Source:
http://michal.karzynski.pl/blog/2014/05/18/setting-up-an-asynchronous-task-queue-for-django-using-celery-redis/
The following worked for me:
import socket
from kombu import Connection
celery_broker_url = "amqp://localhost"
try:
    conn = Connection(celery_broker_url)
    conn.ensure_connection(max_retries=3)
except socket.error:
    raise RuntimeError("Failed to connect to RabbitMQ instance at {}".format(celery_broker_url))
One method to test if any worker is responding is to send out a 'ping' broadcast and return with a successful result on the first response.
from .celery import app # the celery 'app' created in your project
def is_celery_working():
    result = app.control.broadcast('ping', reply=True, limit=1)
    return bool(result)  # True if at least one result
This broadcasts a 'ping' and will wait up to one second for responses. As soon as the first response comes in, it will return a result. If you want a False result faster, you can add a timeout argument to reduce how long it waits before giving up.
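For instance, a quick sketch with a shorter wait (the 0.5-second value is only an example):

# Give up after half a second instead of the default one-second wait
result = app.control.broadcast('ping', reply=True, limit=1, timeout=0.5)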
I found an elegant solution:
from .celery import app
try:
    app.broker_connection().ensure_connection(max_retries=3)
except Exception as ex:
    raise RuntimeError("Failed to connect to celery broker, {}".format(str(ex)))
You can use the ping method to check whether any worker (or a specific worker) is alive or not: https://docs.celeryproject.org/en/latest/_modules/celery/app/control.html#Control.ping
celery_app.control.ping()
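As a small sketch (assuming your Celery application object is named celery_app): ping() returns one entry per responding worker, so a boolean liveness check can be as simple as:

def any_worker_alive(timeout=1.0):
    # ping() returns e.g. [{'celery@host': {'ok': 'pong'}}]; an empty list means no replies
    return bool(celery_app.control.ping(timeout=timeout))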
You can test in your terminal by running the following command:
celery -A proj_name worker -l INFO
This lets you review the registered tasks every time your celery worker starts.
The script below worked for me:
# Import the celery app from the project
import logging

from application_package import app as celery_app

logger = logging.getLogger(__name__)


def get_celery_worker_status():
    insp = celery_app.control.inspect()
    nodes = insp.stats()
    if not nodes:
        raise Exception("celery is not running.")
    logger.error("celery workers are: {}".format(nodes))
    return nodes
Run celery status to get the status.
When celery is running:
(venv) ubuntu@server1:~/project-dir$ celery status
-> celery@server1: OK
1 node online.
When no celery worker is running, you get the below information displayed in the terminal:
(venv) ubuntu@server1:~/project-dir$ celery status
Error: No nodes replied within time constraint
UPDATE 3: Found the issue. See the answer below.
UPDATE 2: It seems I might have been dealing with an automatic naming and relative imports problem by running the djcelery tutorial through the manage.py shell; see below. It is still not working for me, but now I get new log error messages. See below.
UPDATE: I added the log at the bottom of the post. It seems the example task is not registered?
Original Post:
I am trying to get django-celery up and running. I was not able to get through the example.
I installed rabbitmq succesfully and went through the tutorials without trouble: http://www.rabbitmq.com/getstarted.html
I then tried to go through the djcelery tutorial.
When I run python manage.py celeryd -l info I get the message:
[Tasks]
- app.module.add
[2011-07-27 21:17:19,990: WARNING/MainProcess] celery@sequoia has started.
So that looks good. I put this at the top of my settings file:
import djcelery
djcelery.setup_loader()
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
BROKER_VHOST = "/"
added these to my installed apps:
'djcelery',
here is my tasks.py file in the tasks folder of my app:
from celery.task import task
@task()
def add(x, y):
    return x + y
I added this to my django.wsgi file:
os.environ["CELERY_LOADER"] = "django"
Then I entered this at the command line:
>>> from app.module.tasks import add
>>> result = add.delay(4,4)
>>> result
(AsyncResult: 7auathu945gry48- a bunch of stuff)
>>> result.ready()
False
So it looks like it worked, but here is the problem:
>>> result.result
>>> (nothing is returned)
>>> result.get()
When I put in result.get() it just hangs. What am I doing wrong?
UPDATE: This is what running the logger in the foreground says when I start up the worker server:
No handlers could be found for logger "multiprocessing"
[Configuration]
- broker: amqplib://guest@localhost:5672/
- loader: djcelery.loaders.DjangoLoader
- logfile: [stderr]@INFO
- concurrency: 4
- events: OFF
- beat: OFF
[Queues]
- celery: exchange: celery (direct) binding: celery
[Tasks]
- app.module.add
[2011-07-27 21:17:19,990: WARNING/MainProcess] celery@sequoia has started.
C:\Python27\lib\site-packages\django-celery-2.2.4-py2.7.egg\djcelery\loaders.py:80: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn("Using settings.DEBUG leads to a memory leak, never")
then when I put in the command:
>>> result = add(4,4)
This appears in the error log:
[2011-07-28 11:00:39,352: ERROR/MainProcess] Unknown task ignored: Task of kind 'tasks.add' is not registered, please make sure it's imported. Body->"{'retries': 0, 'task': 'tasks.add', 'args': (4, 4), 'expires': None, 'eta': None,
'kwargs': {}, 'id': '225ec0ad-195e-438b-8905-ce28e7b6ad9'}"
Traceback (most recent call last):
  File "C:\Python27\..\celery\worker\consumer.py", line 368, in receive_message
    eventer=self.event_dispatcher)
  File "C:\Python27\..\celery\worker\job.py", line 306, in from_message
    **kw)
  File "C:\Python27\..\celery\worker\job.py", line 275, in __init__
    self.task = tasks[self.task_name]
  File "C:\Python27\...\celery\registry.py", line 59, in __getitem__
    raise self.NotRegistered(key)
NotRegistered: 'tasks.add'
How do I get this task to be registered and handled properly? Thanks.
UPDATE 2:
This link suggested that the not registered error can be due to task name mismatches between client and worker - http://celeryproject.org/docs/userguide/tasks.html#automatic-naming-and-relative-imports
I exited the manage.py shell, entered a plain Python shell, and entered the following:
>>> from app.module.tasks import add
>>> result = add.delay(4,4)
>>> result.ready()
False
>>> result.result
>>> (nothing returned)
>>> result.get()
(it just hangs there)
So I am getting the same behavior, but a new log message. From the log, it appears the server is working, but it won't feed the result back out:
[2011-07-28 11:39:21,706: INFO/MainProcess] Got task from broker: app.module.tasks.add[7e794740-63c4-42fb-acd5-b9c6fcd545c3]
[2011-07-28 11:39:21,706: INFO/MainProcess] Task app.module.tasks.add[7e794740-63c4-42fb-acd5-b9c6fcd545c3] succeeded in 0.04600000038147s: 8
So the server got the task and computed the correct answer, but it won't send it back? Why not?
I found the solution to my problem from another stackoverflow post: Why does Celery work in Python shell, but not in my Django views? (import problem)
I had to add these lines to my settings file:
CELERY_RESULT_BACKEND = "amqp"
CELERY_IMPORTS = ("app.module.tasks", )
Then in the tasks.py file I named the task like this:
@task(name="module.tasks.add")
The server and the client had to be informed of the task names. The celery and django-celery tutorials omit these lines in their tutorials.
If you run celery in debug mode, it is easier to understand the problem:
python manage.py celeryd
What do the celery logs say? Is celery receiving the task?
If not, there is probably a problem with the broker (wrong queue?).
Give us more detail; that way we can help you.