Worker on Python2, and client on Python3 in Celery - python

I'm trying to have the worker run on Python 2 and the client on Python 3.
I notice that if I launch the client from Python 3, the worker code also seems to run on Python 3 instead of Python 2.
How can I resolve this? My worker code is written for Python 2, and the client using it has to run on Python 3.
This is how I've set it up.
VirtualEnv1: Python2.7.12 (tasks.py)
from celery import Celery
import time

app = Celery('tasks', backend='redis://localhost', broker='redis://localhost')
app.conf.update(
    task_serializer='json',
    accept_content=['json'],  # Ignore other content
    result_serializer='json',
    timezone='Europe/Oslo',
    enable_utc=True,
)

@app.task
def add(x, y):
    time.sleep(5)
    print "Trying to process task"  # This can run only on Python2, and not Python3
    return x + y
Execution command:
celery -A tasks worker --loglevel=info -c 1
VirtualEnv2: Python3.5.2 (client.py)
from celery import Celery
from tasks import add
import time

app = Celery('tasks', backend='redis://localhost', broker='redis://localhost')
app.conf.update(
    task_serializer='json',
    accept_content=['json'],  # Ignore other content
    result_serializer='json',
    timezone='Europe/Oslo',
    enable_utc=True,
)

result = add.delay(4, 8)
result.get()
And this is the error I get upon executing client.py (in Python 3):
Traceback (most recent call last):
  File "client.py", line 2, in <module>
    from tasks import add
  File "/home/vishal/work/yobi/expr/tasks.py", line 17
    print "Trying to process task"
Isn't this unexpected? I would have expected the worker code to run on Python 2, not Python 3.
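One way to keep the two interpreters separate is to not import tasks.py in the Python 3 client at all and instead submit the task by name with send_task, so the Python 2 module is never parsed by the Python 3 interpreter. A minimal sketch of my own (not from the question; it assumes the task is registered as 'tasks.add'):

from celery import Celery

# Python 3 client: no "from tasks import add", so the Python 2 file is never imported here.
app = Celery('tasks', backend='redis://localhost', broker='redis://localhost')

# Submit by registered task name; the Python 2 worker picks it up and runs it.
result = app.send_task('tasks.add', args=[4, 8])
print(result.get(timeout=30))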

Related

Getting an exitcode 2 error using Celery in Flask

Question on Flask and Celery -
__init__.py
from celery import Celery
from flask import Flask

app = Flask(__name__)  # the Flask app that the config lines below refer to
app.config['CELERY_RESULT_BACKEND'] = 'redis://localhost'
app.config['CELERY_BROKER_URL'] = 'redis://localhost'

celery = Celery(app.name, broker=app.config['CELERY_BROKER_URL'])
celery.conf.update(app.config)
training.py
from flask import Blueprint, render_template
from flask_login import login_required  # assumption: login_required comes from Flask-Login
from project import celery

training = Blueprint('training', __name__)

@training.route('/projectdetails/training', methods=["GET", "POST"])
@login_required
def start_training():
    train_test.delay()
    return render_template('test.html')

@celery.task()
def train_test():
    # a ML training task.
    pass
I have my Redis server running and my Celery worker started with celery -A myproject.celery worker --loglevel=info.
This is the error I keep getting:
[2021-04-01 17:25:11,604: ERROR/MainProcess] Process 'ForkPoolWorker-8' pid:67580 exited with 'exitcode 2'
[2021-04-01 17:25:11,619: ERROR/MainProcess] Task handler raised error: WorkerLostError('Worker exited prematurely: exitcode 2.')
Traceback (most recent call last):
  File "/Users/rohankamath/Desktop/lol/env/lib/python3.7/site-packages/billiard/pool.py", line 1267, in mark_as_worker_lost
    human_status(exitcode)),
billiard.exceptions.WorkerLostError: Worker exited prematurely: exitcode 2.
I tried searching for the meaning of exitcode 2 but couldn't find anything.
This has happened to me when there was an import error in my code. Could you try running it without Celery (calling the function directly) and see if it works?
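A quick way to do that check, as a rough sketch of my own (the module path project.training is an assumption based on the snippets above):

# check_import.py - hypothetical helper, not part of the original post
from project.training import train_test  # assumed import path

if __name__ == '__main__':
    # If this import or call raises (e.g. ImportError), fix that first;
    # as the answer above suggests, the same failure inside a forked pool
    # worker can surface only as exitcode 2.
    train_test()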

Celery task not working in Django framework

I wrote code to send_email 5 times to a user as an asynchronous task using Celery and the Redis broker in the Django framework. My Celery server is working and responding to the Celery CLI; it even receives the task from Django, but after that I get an error like this:
Traceback (most recent call last):
  File "c:\users\vipin\appdata\local\programs\python\python37-32\lib\site-packages\billiard\pool.py", line 358, in workloop
    result = (True, prepare_result(fun(*args, **kwargs)))
  File "c:\users\vipin\appdata\local\programs\python\python37-32\lib\site-packages\celery\app\trace.py", line 544, in _fast_trace_task
    tasks, accept, hostname = _loc
ValueError: not enough values to unpack (expected 3, got 0)
task.py -
from celery.decorators import task
from django.core.mail import EmailMessage
import time

@task(name="Sending_Emails")
def send_email(to_email, message):
    time1 = 1
    while time1 != 5:
        print("Sending Email")
        email = EmailMessage('Checking Asynchronous Task', message + str(time1), to=[to_email])
        email.send()
        time.sleep(1)
        time1 += 1
views.py -
print("sending for Queue")
send_email.delay(request.user.email,"Email sent : ")
print("sent for Queue")
settings.py -
# CELERY STUFF
BROKER_URL = 'redis://localhost:6379'
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Asia/India'
celery.py -
from __future__ import absolute_import
import os

from celery import Celery
from django.conf import settings

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'ECartApplication.settings')

app = Celery('ECartApplication')

# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
I expect the email to be sent 5 times, but instead I get this error:
[tasks]
. ECartApplication.celery.debug_task
. Sending_Emails
[2019-05-19 12:41:27,695: INFO/SpawnPoolWorker-2] child process 3628 calling self.run()
[2019-05-19 12:41:27,696: INFO/SpawnPoolWorker-1] child process 5748 calling self.run()
[2019-05-19 12:41:28,560: INFO/MainProcess] Connected to redis://localhost:6379//
[2019-05-19 12:41:30,599: INFO/MainProcess] mingle: searching for neighbors
[2019-05-19 12:41:35,035: INFO/MainProcess] mingle: all alone
[2019-05-19 12:41:39,069: WARNING/MainProcess] c:\users\vipin\appdata\local\programs\python\python37-32\lib\site-packages\celery\fixups\django.py:202: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
  warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2019-05-19 12:41:39,070: INFO/MainProcess] celery@vipin-PC ready.
[2019-05-19 12:41:46,448: INFO/MainProcess] Received task: Sending_Emails[db10dad4-a8ec-4ad2-98a6-60e8c3183dd1]
[2019-05-19 12:41:47,455: ERROR/MainProcess] Task handler raised error: ValueError('not enough values to unpack (expected 3, got 0)')
Traceback (most recent call last):
  File "c:\users\vipin\appdata\local\programs\python\python37-32\lib\site-packages\billiard\pool.py", line 358, in workloop
    result = (True, prepare_result(fun(*args, **kwargs)))
  File "c:\users\vipin\appdata\local\programs\python\python37-32\lib\site-packages\celery\app\trace.py", line 544, in _fast_trace_task
    tasks, accept, hostname = _loc
ValueError: not enough values to unpack (expected 3, got 0)
This is an issue when you run Celery on Windows 7/10.
There is a workaround: you just need to use the eventlet module, which you can install with pip:
pip install eventlet
After that, execute your worker with -P eventlet at the end of the command:
celery -A MyWorker worker -l info -P eventlet
The command below also works, on Windows 11:
celery -A core worker --pool=solo -l info

How to fix receiving unregistered task error - Celery

I am trying to establish a periodic task using Celery (4.2.0) and RabbitMQ (3.7.14) running with Python 3.7.2 on an Azure VM using Ubuntu 16.04. I am able to start the beat and worker and see the message get kicked off from beat to the worker but at this point I'm met with an error like so
[2019-03-29 21:35:00,081: ERROR/MainProcess] Received unregistered task of type 'facebook-call.facebook_api'.
The message has been ignored and discarded.
Did you remember to import the module containing this task?
Or maybe you're using relative imports?
My code is as follows:
from celery import Celery
from celery.schedules import crontab

app = Celery('facebook-call', broker='amqp://localhost//')

@app.task
def facebook_api():
    {function here}

app.conf.beat.schedule = {
    'task': 'facebook-call.facebook_api',
    'schedule': crontab(hour=0, minute=0, day='0-6'),
}
I am starting the beat and worker processes using the name of the Python file that contains all of the code:
celery -A FacebookAPICall beat --loglevel=info
celery -A FacebookAPICall worker --loglevel=info
Again, the beat process starts and I can see the message being successfully passed to the worker but cannot figure out how to "register" the task so that it is processed by the worker.
I was able to resolve the issue by renaming the app from facebook-call to match the name of the file, FacebookAPICall.
Before:
app = Celery('facebook-call', broker='amqp://localhost//')
After:
app = Celery('FacebookAPICall', broker='amqp://localhost//')
From reading the Celery documentation, I don't totally understand why the name of the app must also be the name of the .py file but that seems to do the trick.
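One possible alternative (my own suggestion, not part of the accepted answer): give the task an explicit name so that registration does not depend on the app or module name at all.

@app.task(name='facebook-call.facebook_api')
def facebook_api():
    # With an explicit name, the worker registers this task as
    # 'facebook-call.facebook_api' regardless of how the module is imported.
    ...  # function body as in the question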

Celery add_periodic_task blocks Django running in uwsgi environment

I have written a module that dynamically adds periodic Celery tasks based on a list of dictionaries in the project's settings (imported via django.conf.settings).
I do that using a function add_tasks that schedules a function to be called with a specific UUID, which is given in the settings:
def add_tasks(celery):
    for new_task in settings.NEW_TASKS:
        celery.add_periodic_task(
            new_task['interval'],
            my_task.s(new_task['uuid']),
            name='My Task %s' % new_task['uuid'],
        )
As suggested here, I use the on_after_configure.connect signal to call the function in my celery.py:
app = Celery('my_app')

@app.on_after_configure.connect
def setup_periodic_tasks(celery, **kwargs):
    from add_tasks_module import add_tasks
    add_tasks(celery)
This setup works fine for both celery beat and celery worker, but it breaks my setup where I use uwsgi to serve my Django application. uwsgi runs smoothly until the first time the view code sends a task using Celery's .delay() method. At that point, Celery seems to be initialized inside uwsgi but blocks forever in the above code. If I run this manually from the command line and interrupt it when it blocks, I get the following (shortened) stack trace:
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/kombu/utils/objects.py", line 42, in __get__
    return obj.__dict__[self.__name__]
KeyError: 'tasks'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/kombu/utils/objects.py", line 42, in __get__
    return obj.__dict__[self.__name__]
KeyError: 'data'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/kombu/utils/objects.py", line 42, in __get__
    return obj.__dict__[self.__name__]
KeyError: 'tasks'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  (SHORTENED HERE. Just contained the trace from the console through my call to this function)
  File "/opt/my_app/add_tasks_module/__init__.py", line 42, in add_tasks
    my_task.s(new_task['uuid']),
  File "/usr/local/lib/python3.6/site-packages/celery/local.py", line 146, in __getattr__
    return getattr(self._get_current_object(), name)
  File "/usr/local/lib/python3.6/site-packages/celery/local.py", line 109, in _get_current_object
    return loc(*self.__args, **self.__kwargs)
  File "/usr/local/lib/python3.6/site-packages/celery/app/__init__.py", line 72, in task_by_cons
    return app.tasks[
  File "/usr/local/lib/python3.6/site-packages/kombu/utils/objects.py", line 44, in __get__
    value = obj.__dict__[self.__name__] = self.__get(obj)
  File "/usr/local/lib/python3.6/site-packages/celery/app/base.py", line 1228, in tasks
    self.finalize(auto=True)
  File "/usr/local/lib/python3.6/site-packages/celery/app/base.py", line 507, in finalize
    with self._finalize_mutex:
It seems like there is a problem with acquiring a mutex.
Currently I am using a workaround to detect if sys.argv[0] contains uwsgi and then not add the periodic tasks, as only beat needs the tasks, but I would like to understand what is going wrong here to solve the problem more permanently.
Could this problem have something to do with using uwsgi multi-threaded or multi-processed where one thread/process holds the mutex the other needs?
I'd appreciate any hints that can help me solve the problem. Thank you.
I am using: Django 1.11.7 and Celery 4.1.0
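For reference, the sys.argv based workaround mentioned above might look roughly like this (my own reconstruction; it assumes the check sits next to the signal handler in celery.py):

import sys

from celery import Celery

app = Celery('my_app')

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    # Only beat needs the periodic tasks, so skip registration when the
    # process was started by uwsgi.
    if 'uwsgi' in sys.argv[0]:
        return
    from add_tasks_module import add_tasks
    add_tasks(sender)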
Edit 1
I have created a minimal setup for this problem:
celery.py:
import os

from celery import Celery
from django.conf import settings
from myapp.tasks import my_task

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'my_app.settings')

app = Celery('my_app')

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    sender.add_periodic_task(
        60,
        my_task.s(),
        name='Testtask'
    )

app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
tasks.py:
from celery import shared_task

@shared_task()
def my_task():
    print('ran')
Make sure that CELERY_TASK_ALWAYS_EAGER=False and that you have a working message queue.
Run:
./manage.py shell -c 'from myapp.tasks import my_task; my_task.delay()'
Wait about 10 seconds before interrupting to see the above error.
So, I have found out that the @shared_task decorator creates the problem. I can circumvent the problem when I declare the task right in the function called by the signal, like so:
def add_tasks(celery):
    @celery.task
    def my_task(uuid):
        print(uuid)

    for new_task in settings.NEW_TASKS:
        celery.add_periodic_task(
            new_task['interval'],
            my_task.s(new_task['uuid']),
            name='My Task %s' % new_task['uuid'],
        )
This solution actually works for me, but it creates one more problem: I use this code in a pluggable app, so I can't directly access the Celery app outside of the signal handler, but I would also like to be able to call the my_task function from other code. Since it is defined inside the function, it is not available outside of it, so I cannot import it anywhere else.
I can probably work around this by defining the task function outside of the signal handler and using it with different decorators here and in tasks.py. I am wondering, though, if there is a decorator apart from @shared_task that I can use in tasks.py that does not create the problem.
The current best solution could be:
task_app.__init__.py:
def my_task(uuid):
    # do stuff
    print(uuid)

def add_tasks(celery):
    celery_my_task = celery.task(my_task)
    for new_task in settings.NEW_TASKS:
        celery.add_periodic_task(
            new_task['interval'],
            celery_my_task.s(new_task['uuid']),
            name='My Task %s' % new_task['uuid'],
        )
task_app.tasks.py:
from celery import shared_task
from task_app import my_task
shared_my_task = shared_task(my_task)
myapp.celery.py:
import os

from celery import Celery
from django.conf import settings

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'my_app.settings')

app = Celery('my_app')

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    from task_app import add_tasks
    add_tasks(sender)

app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
Could you give the @app.on_after_finalize.connect signal a try?
Here is a quick snippet from a working project with celery==4.1.0, Django==2.0, django-celery-beat==1.1.0 and django-celery-results==1.0.1:
@app.on_after_finalize.connect
def setup_periodic_tasks(sender, **kwargs):
    """Set up the periodic task :py:func:`shopify_data_fetcher.celery.fetch_shopify`,
    based on the schedule defined in settings.CELERY_BEAT_SCHEDULE.
    """
    for task_name, task_config in settings.CELERY_BEAT_SCHEDULE.items():
        sender.add_periodic_task(
            task_config['schedule'],
            fetch_shopify.s(**task_config['kwargs']['resource_name']),
            name=task_name
        )
piece of CELERY_BEAT_SCHEDULE:
CELERY_BEAT_SCHEDULE = {
    'fetch_shopify_orders': {
        'task': 'shopify.tasks.fetch_shopify',
        'schedule': crontab(hour="*/3", minute=0),
        'kwargs': {
            'resource_name': shopify_constants.SHOPIFY_API_RESOURCES_ORDERS
        }
    }
}

Why does this Celery "hello world" loop forever?

Consider the code:
from celery import Celery, group
from time import time

app = Celery('tasks', broker='redis:///0', backend='redis:///1', task_ignore_result=False)

@app.task
def test_task(i):
    print('hi')
    return i

x = test_task.delay(3)
print(x.get())
I run it by calling python script.py, but I'm getting no results. Why?
You don't get any results because you've asked your Celery app to execute a task without starting a worker process to execute it. The process you did start is blocked on the call to get().
First things first: when using Celery, it is critical that tasks are not executed when a module is imported, so let's put your task execution inside a main() function and put it in a file called celery_test.py.
from celery import Celery, group
from time import time

app = Celery('tasks', broker='redis:///0', backend='redis:///1', task_ignore_result=False)

@app.task
def test_task(i):
    print('hi')
    return i

def main():
    x = test_task.delay(3)
    print(x.get())

if __name__ == '__main__':
    main()
Now let's start a pool of celery workers to execute tasks for this app. You can do this by opening a new terminal and executing the following.
celery worker -A celery_test --loglevel=INFO
The -A flag refers to the module where Celery will find an application to add workers to. You should see some output in the terminal indicating that the Celery worker is running and ready to process tasks.
Now, try executing your script again with python celery_test.py. You should see hi show up in the worker's log output, and the value 3 returned in the script that called get().
Be warned, if you've been playing with celery without running a worker, it probably has lots of tasks waiting in your broker to execute. The first time you start up the worker pool, you'll see them all execute in parallel until the broker runs out of tasks.
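If you do have such a backlog, you can discard it before experimenting further. A small sketch of my own (equivalent to running celery -A celery_test purge from the shell):

from celery_test import app

if __name__ == '__main__':
    # Discard every task still waiting in the broker and report how many were dropped.
    discarded = app.control.purge()
    print('Discarded %d waiting task(s)' % discarded)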
