Celery task not working in Django framework - python

I am trying to send an email to a user five times as an asynchronous task, using Celery with a Redis broker in a Django project. My Celery worker is running, it responds on the Celery CLI, and it even receives the task from Django, but after that I get an error like:
Traceback (most recent call last):
  File "c:\users\vipin\appdata\local\programs\python\python37-32\lib\site-packages\billiard\pool.py", line 358, in workloop
    result = (True, prepare_result(fun(*args, **kwargs)))
  File "c:\users\vipin\appdata\local\programs\python\python37-32\lib\site-packages\celery\app\trace.py", line 544, in _fast_trace_task
    tasks, accept, hostname = _loc
ValueError: not enough values to unpack (expected 3, got 0)
task.py -
from celery.decorators import task
from django.core.mail import EmailMessage
import time

@task(name="Sending_Emails")
def send_email(to_email, message):
    time1 = 1
    while time1 != 5:
        print("Sending Email")
        email = EmailMessage('Checking Asynchronous Task', message + str(time1), to=[to_email])
        email.send()
        time.sleep(1)
        time1 += 1
views.py -
print("sending for Queue")
send_email.delay(request.user.email,"Email sent : ")
print("sent for Queue")
settings.py -
# CELERY STUFF
BROKER_URL = 'redis://localhost:6379'
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Asia/India'
celery.py -
from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'ECartApplication.settings')
app = Celery('ECartApplication')
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
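The post does not show the project package's __init__.py; in the standard layout from the Celery/Django docs it imports the app so that the task decorators register against it when Django starts. A minimal sketch, assuming the package is ECartApplication as above:
# ECartApplication/__init__.py (assumed path)
from __future__ import absolute_import

# Import the Celery app whenever Django imports the project package,
# so that task decorators bind to it.
from .celery import app as celery_app

__all__ = ['celery_app']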
I expect the email to be sent 5 times, but instead I get this error:
[tasks]
. ECartApplication.celery.debug_task
. Sending_Emails
[2019-05-19 12:41:27,695: INFO/SpawnPoolWorker-2] child process 3628 calling self.run()
[2019-05-19 12:41:27,696: INFO/SpawnPoolWorker-1] child process 5748 calling self.run()
[2019-05-19 12:41:28,560: INFO/MainProcess] Connected to redis://localhost:6379//
[2019-05-19 12:41:30,599: INFO/MainProcess] mingle: searching for neighbors
[2019-05-19 12:41:35,035: INFO/MainProcess] mingle: all alone
[2019-05-19 12:41:39,069: WARNING/MainProcess] c:\users\vipin\appdata\local\programs\python\python37-32\lib\site-packages\celery\fixups\django.py:202: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
  warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2019-05-19 12:41:39,070: INFO/MainProcess] celery@vipin-PC ready.
[2019-05-19 12:41:46,448: INFO/MainProcess] Received task: Sending_Emails[db10dad4-a8ec-4ad2-98a6-60e8c3183dd1]
[2019-05-19 12:41:47,455: ERROR/MainProcess] Task handler raised error: ValueError('not enough values to unpack (expected 3, got 0)')
Traceback (most recent call last):
  File "c:\users\vipin\appdata\local\programs\python\python37-32\lib\site-packages\billiard\pool.py", line 358, in workloop
    result = (True, prepare_result(fun(*args, **kwargs)))
  File "c:\users\vipin\appdata\local\programs\python\python37-32\lib\site-packages\celery\app\trace.py", line 544, in _fast_trace_task
    tasks, accept, hostname = _loc
ValueError: not enough values to unpack (expected 3, got 0)

This is a known issue when running Celery under Python on Windows 7/10.
There is a workaround: use the eventlet module, which you can install with pip:
pip install eventlet
After that, start your worker with -P eventlet at the end of the command:
celery -A MyWorker worker -l info -P eventlet

The command below, using the solo pool, also works on Windows 11:
celery -A core worker --pool=solo -l info
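Applied to the project in the question (package name ECartApplication, taken from the snippets above), either invocation below should work; this is a sketch of the commands, not something verified on the asker's machine:
celery -A ECartApplication worker -l info -P eventlet
celery -A ECartApplication worker -l info --pool=solo
The solo pool runs tasks in the worker's main process, which sidesteps the Windows prefork/billiard issue at the cost of concurrency.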

Related

Getting an exitcode 2 error using Celery in Flask

Question on Flask and Celery -
__init__.py
from celery import Celery
from flask import Flask

app = Flask(__name__)
app.config['CELERY_RESULT_BACKEND'] = 'redis://localhost'
app.config['CELERY_BROKER_URL'] = 'redis://localhost'

celery = Celery(app.name, broker=app.config['CELERY_BROKER_URL'])
celery.conf.update(app.config)
training.py
from flask import Blueprint, render_template
from flask_login import login_required  # assuming Flask-Login provides login_required
from project import celery

training = Blueprint('training', __name__)

@training.route('/projectdetails/training', methods=["GET", "POST"])
@login_required
def start_training():
    train_test.delay()
    return render_template('test.html')

@celery.task()
def train_test():
    ...  # a ML training task.
My Redis server is running, and I start my Celery worker with celery -A myproject.celery worker --loglevel=info.
This is the error I keep getting -
[2021-04-01 17:25:11,604: ERROR/MainProcess] Process 'ForkPoolWorker-8' pid:67580 exited with 'exitcode 2'
[2021-04-01 17:25:11,619: ERROR/MainProcess] Task handler raised error: WorkerLostError('Worker exited prematurely: exitcode 2.')
Traceback (most recent call last):
File "/Users/rohankamath/Desktop/lol/env/lib/python3.7/site-packages/billiard/pool.py", line 1267, in mark_as_worker_lost
human_status(exitcode)),
billiard.exceptions.WorkerLostError: Worker exited prematurely: exitcode 2.
I tried searching for the meaning of exitcode 2, but couldn't find anything.
This has happened to me when there was some import error in my code. Could you try running it without Celery (calling the main function directly) and see if it works?
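A quick way to check for that, sketched under the assumption that the blueprint module imports cleanly as project.training (the path is inferred from the snippets, not stated in the post): call the task function synchronously so any ImportError surfaces immediately, without a worker in the loop.
# in a plain Python or Flask shell
from project.training import train_test  # assumed import path

train_test()        # direct call, no Celery involved
train_test.apply()  # runs the task locally through Celery's machinery, still no broker round-trip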

Django-Celery: worker does not execute task

I've been trying to run some tests with Celery, RabbitMQ and Django.
The problem is that once I have spawned a worker, I use Django to send a task with .delay(). The worker receives it, but it apparently cannot find a task of that name to execute. This is odd, since a task with exactly that name is in the list of tasks the worker is supposed to be able to execute.
Here it is:
-> settings.py
BROKER_HOST = "127.0.0.1" #IP address of the server running RabbitMQ and Celery
BROKER_PORT = 5672
BROKER_URL = 'amqp://'
CELERY_IMPORTS = ('notify')
CELERY_IGNORE_RESULT = False
CELERY_RESULT_BACKEND = "amqp"
CELERY_IMPORTS = ("notify")
-> __init__.py (under notify module)
import celery
from celery import app as celery_app
from notify.user.tasks import send_email
-> tasks.py (under notify module)
import os
from celery import Celery, task
from django.conf import settings
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'app.settings')
app = Celery('app')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.config_from_object('app.settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS, force=False)
@task()
def send_email(recepient, title, subject):
    print('sending email')
And here is how I spawn my worker at the root of my project:
celery worker -n worker_one -l debug -A notify
Here is the error I get when the worker receives the task I sent:
The full contents of the message body was:
'[["Guillaume", "One title", "This is my subject"], {}, {"chain": null, "callbacks": null, "errbacks": null, "chord": null}]' (123b)
Traceback (most recent call last):
File "/Users/guillaumeboufflers/.pyenv/versions/smartbase/lib/python3.5/site-packages/celery/worker/consumer/consumer.py", line 561, in on_task_received
strategy = strategies[type_]
KeyError: 'notify.user.tasks.send_email'
which is weird, because:
[tasks]
. celery.accumulate
. celery.backend_cleanup
. celery.chain
. celery.chord
. celery.chord_unlock
. celery.chunks
. celery.group
. celery.map
. celery.starmap
. notify.user.tasks.send_email
Thanks guys for helping..
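No accepted answer is included here, but for comparison, the layout recommended in the Celery docs keeps a single app instance in its own module and uses shared_task in the tasks module, so the registered name and the name the client sends come from the same place. A rough sketch, assuming the Django project package is called app as in the snippets above:
# app/celery.py
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'app.settings')
app = Celery('app')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks()

# notify/user/tasks.py
from celery import shared_task

@shared_task
def send_email(recepient, title, subject):
    print('sending email')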

AttributeError when I launch celery

I am trying to run celery worker with the command in [1], based on the configuration in [2], but I get the error in [3] when I launch it. My program runs on medusa1-blank1, and the rabbitmq-server runs on hadoop-medusa-1. You can see in [1] that the $HOST_NAME variable is medusa1-blank1 and that celeryconfig.py contains the host address where rabbitmq-server is running.
I looked at my configuration and cannot find any error in it. I would like the log to be more verbose so I could understand what is going on, but I don't think that is possible. Since the error does not seem to be in my code, I am completely clueless. Any help debugging this?
[1] Script that I use to run with celery
#!/bin/bash
set -xv
# This scripts runs celery in the server host
export C_FORCE_ROOT="true"
HOST_NAME=`hostname`
echo "------------------------"
echo "Initialize celery at $HOST_NAME"
echo "------------------------"
celery worker -n $HOST_NAME -E --loglevel=DEBUG --concurrency=20 -f ./logs/celerydebug.log --config=celeryconfig -Q $HOST_NAME
# celery worker -n medusa1-blank1 -E --loglevel=DEBUG --concurrency=20 -f ./logs/celerydebug.log --config=celeryconfig -Q medusa1-blank1
[2] Configuration that I use:
(medusa-env)xubuntu#medusa1-blank1:~/Programs/medusa-1.0$ cat celeryconfig.py
import os
import sys
# add hadoop python to the env, just for the running
sys.path.append(os.path.dirname(os.path.basename(__file__)))
# broker configuration
BROKER_URL = "amqp://celeryuser:celery#hadoop-medusa-1/celeryvhost"
CELERY_RESULT_BACKEND = "amqp"
CELERY_RESULT_PERSISTENT = True
TEST_RUNNER = 'celery.contrib.test_runner.run_tests'
# for debug
# CELERY_ALWAYS_EAGER = True
# module loaded
CELERY_IMPORTS = ("manager.mergedirs", "manager.system", "manager.utility", "manager.pingdaemon", "manager.hdfs")
[3] Error that I have:
[2016-03-07 10:24:09,482: DEBUG/MainProcess] | Worker: Preparing bootsteps.
[2016-03-07 10:24:09,484: DEBUG/MainProcess] | Worker: Building graph...
[2016-03-07 10:24:09,484: DEBUG/MainProcess] | Worker: New boot order: {Timer, Hub, Queues (intra), Pool, Autoscaler, Autoreloader, StateDB, Beat, Consumer}
[2016-03-07 10:24:09,487: DEBUG/MainProcess] | Consumer: Preparing bootsteps.
[2016-03-07 10:24:09,487: DEBUG/MainProcess] | Consumer: Building graph...
[2016-03-07 10:24:09,491: DEBUG/MainProcess] | Consumer: New boot order: {Connection, Agent, Events, Mingle, Tasks, Control, Heart, Gossip, event loop}
[2016-03-07 10:24:09,491: WARNING/MainProcess] /home/xubuntu/Programs/medusa-1.0/medusa-env/local/lib/python2.7/site-packages/celery/apps/worker.py:161: CDeprecationWarning:
Starting from version 3.2 Celery will refuse to accept pickle by default.
The pickle serializer is a security concern as it may give attackers
the ability to execute any command. It's important to secure
your broker from unauthorized access when using pickle, so we think
that enabling pickle should require a deliberate action and not be
the default choice.
If you depend on pickle then you should set a setting to disable this
warning and to be sure that everything will continue working
when you upgrade to Celery 3.2::
CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']
You must only enable the serializers that you will actually use.
warnings.warn(CDeprecationWarning(W_PICKLE_DEPRECATED))
[2016-03-07 10:24:09,493: ERROR/MainProcess] Unrecoverable error: AttributeError("'NoneType' object has no attribute 'rstrip'",)
Traceback (most recent call last):
File "/home/xubuntu/Programs/medusa-1.0/medusa-env/local/lib/python2.7/site-packages/celery/worker/__init__.py", line 206, in start
self.blueprint.start(self)
File "/home/xubuntu/Programs/medusa-1.0/medusa-env/local/lib/python2.7/site-packages/celery/bootsteps.py", line 119, in start
self.on_start()
File "/home/xubuntu/Programs/medusa-1.0/medusa-env/local/lib/python2.7/site-packages/celery/apps/worker.py", line 169, in on_start
string(self.colored.cyan(' \n', self.startup_info())),
File "/home/xubuntu/Programs/medusa-1.0/medusa-env/local/lib/python2.7/site-packages/celery/apps/worker.py", line 230, in startup_info
results=self.app.backend.as_uri(),
File "/home/xubuntu/Programs/medusa-1.0/medusa-env/local/lib/python2.7/site-packages/celery/backends/base.py", line 117, in as_uri
else maybe_sanitize_url(self.url).rstrip("/"))
AttributeError: 'NoneType' object has no attribute 'rstrip'
I don't know which version you're using, but I found this bug report:
https://github.com/celery/celery/issues/3094
Bottom line: roll back for now.
My minimal configuration file would be:
CELERY_IMPORTS = ...
AMPQ_USERNAME = os.getenv('AMQP_USERNAME', '...')
AMPQ_PASSWORD = os.getenv('AMQP_PASSWORD', '...')
AMQP_HOST = os.getenv('AMQP_HOST', '172.17.42.1')
AMQP_PORT = int(os.getenv('AMQP_PORT', '5672'))
DEFAULT_BROKER_URL = 'amqp://%s:%s@%s:%d' \
    % (AMPQ_USERNAME, AMPQ_PASSWORD, AMQP_HOST, AMQP_PORT)
CELERY_RESULT_BACKEND = 'amqp://%s:%s@%s:%d' \
    % (AMPQ_USERNAME, AMPQ_PASSWORD, AMQP_HOST, AMQP_PORT)
BROKER_API = DEFAULT_BROKER_URL

Celery task results not persisted with rpc

I have been trying to get Celery task results routed to another process by persisting the results to a queue, so that another process can pick them up from that queue. I have configured Celery with CELERY_RESULT_BACKEND = 'rpc', but the value returned by the Python function is still not persisted to a queue.
I am not sure whether any other configuration or code change is required. Please help.
Here is the code example:
celery.py
from __future__ import absolute_import
from celery import Celery

app = Celery('proj',
             broker='amqp://',
             backend='rpc://',
             include=['proj.tasks'])

# Optional configuration, see the application user guide.
app.conf.update(
    CELERY_RESULT_BACKEND = 'rpc',
    CELERY_RESULT_PERSISTENT = True,
    CELERY_TASK_SERIALIZER = 'json',
    CELERY_RESULT_SERIALIZER = 'json'
)

if __name__ == '__main__':
    app.start()
tasks.py
from proj.celery import app

@app.task
def add(x, y):
    return x + y
Running Celery as
celery worker --app=proj -l info --pool=eventlet -c 4
Solved by using Pika (a Python implementation of the AMQP 0-9-1 protocol - https://pika.readthedocs.org) to post results back to the celeryresults channel.
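The answer above does not include the code; a minimal sketch of what publishing a result with Pika could look like, assuming a RabbitMQ broker on localhost and a queue named celeryresults (both assumptions, not taken from the original answer):
import json
import pika

# Connect to a local RabbitMQ broker (assumed host) and open a channel.
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Declare the queue; the consuming process declares the same queue name.
channel.queue_declare(queue='celeryresults', durable=True)

def publish_result(result):
    # Serialize the task result as JSON and push it onto the queue.
    channel.basic_publish(exchange='',
                          routing_key='celeryresults',
                          body=json.dumps(result))

publish_result({'task': 'proj.tasks.add', 'result': 8})
connection.close()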

Django Celery tutorial not returning results

UPDATE 3: found the issue. See the answer below.
UPDATE 2: It seems I might have been dealing with an automatic-naming and relative-imports problem caused by running the djcelery tutorial through the manage.py shell; see below. It is still not working for me, but now I get new log error messages. See below.
UPDATE: I added the log at the bottom of the post. It seems the example task is not registered?
Original Post:
I am trying to get django-celery up and running. I was not able to get through the example.
I installed RabbitMQ successfully and went through the tutorials without trouble: http://www.rabbitmq.com/getstarted.html
I then tried to go through the djcelery tutorial.
When I run python manage.py celeryd -l info I get the message:
[Tasks]
- app.module.add
[2011-07-27 21:17:19,990: WARNING/MainProcess] celery@sequoia has started.
So that looks good. I put this at the top of my settings file:
import djcelery
djcelery.setup_loader()
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
BROKER_VHOST = "/"
added these to my installed apps:
'djcelery',
here is my tasks.py file in the tasks folder of my app:
from celery.task import task

@task()
def add(x, y):
    return x + y
I added this to my django.wsgi file:
os.environ["CELERY_LOADER"] = "django"
Then I entered this at the command line:
>>> from app.module.tasks import add
>>> result = add.delay(4,4)
>>> result
(AsyncResult: 7auathu945gry48- a bunch of stuff)
>>> result.ready()
False
So it looks like it worked, but here is the problem:
>>> result.result
>>> (nothing is returned)
>>> result.get()
When I put in result.get() it just hangs. What am I doing wrong?
UPDATE: This is what running the logger in the foreground says when I start up the worker server:
No handlers could be found for logger "multiprocessing"
[Configuration]
- broker: amqplib://guest@localhost:5672/
- loader: djcelery.loaders.DjangoLoader
- logfile: [stderr]#INFO
- concurrency: 4
- events: OFF
- beat: OFF
[Queues]
- celery: exchange: celery (direct) binding: celery
[Tasks]
- app.module.add
[2011-07-27 21:17:19,990: WARNING/MainProcess] celery@sequoia has started.
C:\Python27\lib\site-packages\django-celery-2.2.4-py2.7.egg\djcelery\loaders.py:80: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
  warnings.warn("Using settings.DEBUG leads to a memory leak, never "
then when I put in the command:
>>> result = add(4,4)
This appears in the error log:
[2011-07-28 11:00:39,352: ERROR/MainProcess] Unknown task ignored: Task of kind 'tasks.add' is not registered, please make sure it's imported. Body->"{'retries': 0, 'task': 'tasks.add', 'args': (4,4), 'expires': None, 'eta': None,
'kwargs': {}, 'id': '225ec0ad-195e-438b-8905-ce28e7b6ad9'}"
Traceback (most recent call last):
  File "C:\Python27\..\celery\worker\consumer.py", line 368, in receive_message
    eventer=self.event_dispatcher)
  File "C:\Python27\..\celery\worker\job.py", line 306, in from_message
    **kw)
  File "C:\Python27\..\celery\worker\job.py", line 275, in __init__
    self.task = tasks[self.task_name]
  File "C:\Python27\...\celery\registry.py", line 59, in __getitem__
    raise self.NotRegistered(key)
NotRegistered: 'tasks.add'
How do I get this task registered and handled properly? Thanks.
UPDATE 2:
This link suggested that the not registered error can be due to task name mismatches between client and worker - http://celeryproject.org/docs/userguide/tasks.html#automatic-naming-and-relative-imports
I exited the manage.py shell, started a plain Python shell, and entered the following:
>>> from app.module.tasks import add
>>> result = add.delay(4,4)
>>> result.ready()
False
>>> result.result
>>> (nothing returned)
>>> result.get()
(it just hangs there)
So I am getting the same behavior, but a new log message. From the log, it appears the worker is running and computing the result, but it won't feed the result back out:
[2011-07-28 11:39:21, 706: INFO/MainProcess] Got task from broker: app.module.tasks.add[7e794740-63c4-42fb-acd5-b9c6fcd545c3]
[2011-07-28 11:39:21, 706: INFO/MainProcess] Task app.module.tasks.add[7e794740-63c4-42fb-acd5-b9c6fcd545c3] succeed in 0.04600000038147s: 8
So the server got the task and it computed the correct answer, but it won't send it back? why not?
I found the solution to my problem from another stackoverflow post: Why does Celery work in Python shell, but not in my Django views? (import problem)
I had to add these lines to my settings file:
CELERY_RESULT_BACKEND = "amqp"
CELERY_IMPORTS = ("app.module.tasks", )
then in the tasks.py file I named the task as such:
@task(name="module.tasks.add")
Both the worker and the client had to be informed of the task name. The celery and django-celery tutorials omit these lines.
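Put together, the relevant pieces would look roughly like this (a sketch assuming the project layout app/module/tasks.py from the post; the explicit task name is the part both the worker and the client must agree on):
# settings.py
CELERY_RESULT_BACKEND = "amqp"
CELERY_IMPORTS = ("app.module.tasks", )

# app/module/tasks.py
from celery.task import task

@task(name="module.tasks.add")
def add(x, y):
    return x + y

# client side (Django view or shell)
from app.module.tasks import add

result = add.delay(4, 4)
print(result.get(timeout=10))  # should print 8 once the worker has processed the task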
If you run celery in debug mode it is easier to understand the problem:
python manage.py celeryd
What do the celery logs say - is celery receiving the task?
If not, there is probably a problem with the broker (wrong queue?).
Give us more detail; that way we can help you.
