Celery using 'application/x-python-serialize' instead of `application/json` - python

I'm using the celery module v3.1.25 with Python 2.7 on Windows 10 to run a Celery worker. The results must be returned encoded in JSON, not pickle.
Problem: when the worker returns the result of a task, the RabbitMQ management console shows the result with content_type: application/x-python-serialize. Why is it still x-python-serialize when we have set task_serializer, result_serializer and accept_content to json?
proj/celery.py
from __future__ import absolute_import, unicode_literals
from celery import Celery

app = Celery('tasks',
             broker='amqp://test:test@192.168.1.26:5672//',  # RabbitMQ running in a Win10 VM
             backend='amqp://',
             task_serializer='json',
             result_serializer='json',
             accept_content=['application/json'],
             include=['proj.tasks'])
proj/tasks.py
from __future__ import absolute_import, unicode_literals
from .celery import app

@app.task
def myTask():
    ...
    return ...
Worker is started using
celery -A proj worker --loglevel=info
and gives a warning about the pickle serializer
Starting from version 3.2 Celery will refuse to accept pickle by default.
The pickle serializer is a security concern as it may give attackers
the ability to execute any command. It's important to secure
your broker from unauthorized access when using pickle, so we think
that enabling pickle should require a deliberate action and not be
the default choice.
If you depend on pickle then you should set a setting to disable this
warning and to be sure that everything will continue working
when you upgrade to Celery 3.2::
CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']
You must only enable the serializers that you will actually use.
warnings.warn(CDeprecationWarning(W_PICKLE_DEPRECATED))
-------------- celery@Y-PC v3.1.25 (Cipater)
---- **** -----
--- * *** * -- Windows-10-10.0.14393
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: tasks:0x40ffeb8
- ** ---------- .> transport: amqp://test:**@192.168.1.26:5672//
- ** ---------- .> results: amqp://
- *** --- * --- .> concurrency: 12 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery

Does it help to change your Celery() config parameter to accept_content=['json'] instead of ['application/json']?
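If that alone does not change the content type, a minimal sketch that forces JSON through the Celery 3.1 uppercase setting names (the same family the deprecation warning above refers to) might look like this; the broker and backend values are copied from the question, and whether the lowercase keyword arguments are honoured on 3.1 is exactly what is in doubt here:
from __future__ import absolute_import, unicode_literals
from celery import Celery

app = Celery('tasks',
             broker='amqp://test:test@192.168.1.26:5672//',
             backend='amqp://',
             include=['proj.tasks'])

# Celery 3.1 documents the uppercase CELERY_* names; the lowercase
# names used in the question are 4.x style and may not be picked up.
app.conf.update(
    CELERY_TASK_SERIALIZER='json',
    CELERY_RESULT_SERIALIZER='json',
    CELERY_ACCEPT_CONTENT=['json'],
)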

Related

Celery not queuing to a remote broker, adding tasks to a localhost instead

My question is the same as this one: Celery not queuing tasks to broker on remote server, adds tasks to localhost instead, but the answer does not work for me.
My celery.py
import os
from celery import Celery

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')

app = Celery('project', broker='amqp://<user>:<user_pass>@remoteserver:5672/<vhost>', backend='amqp')
# app = Celery('project')

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Load task modules from all registered Django app configs.
app.autodiscover_tasks()

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
When I run:
$ celery -A worker -l info
I receive the following output:
-------------- celery@paulo-Inspiron-3420 v4.2.1 (windowlicker)
---- **** -----
--- * *** * -- Linux-4.15.0-36-generic-x86_64-with-Ubuntu-18.04-bionic 2018-10-30 13:44:07
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: mycelery:0x7ff88ca043c8
- ** ---------- .> transport: amqp://<user>:**@<remote_ip>:5672/<vhost>
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
I tried stopping the RabbitMQ server and even uninstalled it, but Celery keeps queuing to localhost.
Can someone help?
You need to add something like this to your __init__.py file in the same directory as the celery.py file:
from __future__ import absolute_import, unicode_literals
from .celery import app as celery_app
__all__ = ('celery_app',)
Also, make sure you're starting the worker process from inside your project's virtualenv.
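Once that import is in place, a quick sanity check from a Django shell inside the virtualenv can confirm which broker tasks will be published to (a sketch; 'project' is the package name implied by DJANGO_SETTINGS_MODULE above):
# e.g. via `python manage.py shell`
from project import celery_app        # exposed by the __init__.py shown above
print(celery_app.conf.broker_url)     # should print the remote amqp:// URL, not localhost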

Celery: Which are the queues consumed if the -Q option is not specified

According to the Celery documentation, the -Q/--queues command line option can be used for:
-Q, --queues
List of queues to enable for this worker, separated by comma. By default all configured queues are enabled. Example: -Q video,image
However, I don't understand what is meant by configured queues here. Does this mean all queues known to Celery, including the default one? Or only the ones defined in the task_queues config option? Does the task_create_missing_queues option affect this?
If you haven't configured anything, the worker will consume from the celery queue, as you can see from the logs:
celery worker -A t
-------------- celery@pavilion v4.0.2 (latentcall)
---- **** -----
--- * *** * -- Linux-4.4.0-79-generic-x86_64-with-Ubuntu-16.04-xenial 2017-06-09 10:39:14
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: tasks:0x7f15cf9cdfd0
- ** ---------- .> transport: amqp://guest:**@localhost:5672//
- ** ---------- .> results: rpc://
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
You can also configure Celery to consume from a set of queues by default, like this:
from celery import Celery
from kombu import Queue
app = Celery(broker='amqp://guest@localhost//', backend='rpc')
app.conf.task_queues = (Queue('foo'), Queue('bar'))
Now all workers will consume from the foo and bar queues by default.
-------------- [queues]
.> bar exchange=celery(direct) key=celery
.> foo exchange=celery(direct) key=celery
I was facing an issue where, when I would execute
$ celery -A myCeleryConfig worker -Q myQueue2
I would get the error
celery.exceptions.ImproperlyConfigured: Trying to select queue subset of ['myQueue2'], but queue 'myQueue2' isn't defined in the `task_queues` setting.
The documentation for the task_queues setting was unclear to me. It does state
If you really want to configure advanced routing, this setting should be a list of kombu.Queue objects the worker will consume from.
I wasn't sure what it meant by this. No code examples are provided in the documentation. But, thanks to @Chillar's response, I came to find that configuring
app.conf.task_queues = (Queue('myQueue1'), Queue('myQueue2'))
solved the issue. I now see
-------------- [queues]
.> myQueue1 exchange=myQueue1 key=myQueue1
.> myQueue2 exchange=myQueue2 key=myQueue2
when I start the worker, indicating the queues are now registered.
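Putting the two answers together, a minimal configuration module for that worker might look like the following sketch; the broker and backend are reused from the earlier example, and the module name myCeleryConfig and the queue names are simply taken from the error message above:
# myCeleryConfig.py (sketch)
from celery import Celery
from kombu import Queue

app = Celery(broker='amqp://guest@localhost//', backend='rpc')
app.conf.task_queues = (Queue('myQueue1'), Queue('myQueue2'))
With this in place, celery -A myCeleryConfig worker consumes from both queues by default, while celery -A myCeleryConfig worker -Q myQueue2 restricts the worker to that subset without raising ImproperlyConfigured.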

Django + Celery + SQS setup. Celery connects to default RabbitMQ via amqp

I am trying to set up Amazon SQS as the default message broker for Celery in a Django app. The Celery worker starts, but the broker is set to the default RabbitMQ. Below you can find the output of my worker.
Here are some configs which I have in the project. My celery.py looks like:
from __future__ import absolute_import
import os
from celery import Celery

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'dance.settings')

app = Celery('dance')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
The essential part of the Django celery settings responsible for setup of broker url is:
BROKER_URL = 'sqs://{}:{}@'.format(AWS_ACCESS_KEY_ID, quote(AWS_SECRET_ACCESS_KEY, safe=''))
BROKER_TRANSPORT_OPTIONS = {
    'region': 'eu-west-1',
    'polling_interval': 3,
    'visibility_timeout': 300,
    'queue_name_prefix': 'dev-celery-',
}
When I try to launch the worker within the virtual environment with:
celery -A dance worker -l info
I receive the following output:
-------------- celery@universe v4.0.0 (latentcall)
---- **** -----
--- * *** * -- Linux-4.8.0-28-generic-x86_64-with-debian-stretch-sid 2016-12-02 14:20:40
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: dance:0x7fdc592ca9e8
- ** ---------- .> transport: amqp://guest:**@localhost:5672//
- ** ---------- .> results:
- *** --- * --- .> concurrency: 8 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
...
task1
task2
...
Tasks are listed, so I guess Celery reads and processes the related Django settings. If I switch the settings from SQS to Redis, I get the same problem.
From the tutorials I have read, the worker's output should look similar to:
- ** ---------- .> transport: sqs://*redacted*:**@localhost//
- ** ---------- .> results: djcelery.backends.database:DatabaseBackend
Also, I am not using djcelery, since it is outdated; instead I am using django_celery_results, as recommended on the Celery setup pages. The last output is just a guess from a side project.
The only possible solution I have found is to explicitly specify the broker and database backend:
app = Celery('dance', broker='sqs://', backend='django-db')
This looks strange to me, because the settings from Django's settings.py are apparently not fully loaded; either I have missed something, or it is a bug in Celery.
Real solution:
Here is why I had problems:
All the Celery settings in Django should start with CELERY_, so instead of using BROKER_URL and BROKER_TRANSPORT_OPTIONS I had to use CELERY_BROKER_URL and CELERY_BROKER_TRANSPORT_OPTIONS.
Incorrect: you need to use CELERY_BROKER_URL when you use the CELERY namespace. But some options already come with a CELERY prefix by default, for example CELERY_RESULT_BACKEND; if you use the CELERY namespace, you then need to write CELERY_CELERY_RESULT_BACKEND.
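For reference, a sketch of the renamed settings (same values as in the question, only the CELERY_ prefix added so that config_from_object(..., namespace='CELERY') finds them):
# settings.py (sketch); AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and quote()
# are defined/imported elsewhere in settings.py, as in the question
CELERY_BROKER_URL = 'sqs://{}:{}@'.format(AWS_ACCESS_KEY_ID, quote(AWS_SECRET_ACCESS_KEY, safe=''))
CELERY_BROKER_TRANSPORT_OPTIONS = {
    'region': 'eu-west-1',
    'polling_interval': 3,
    'visibility_timeout': 300,
    'queue_name_prefix': 'dev-celery-',
}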

Celery Backend Enabled, Results Say Otherwise

I'll keep it short and to the point:
project directory
proj/__init__.py
/tasks.py
/celery_app.py
celery_app.py
from __future__ import absolute_import
from celery import Celery

app = Celery('proj',
             broker='amqp://',
             backend='amqp://',
             include=['proj.tasks'])

app.conf.update(
    CELERY_TASK_RESULT_EXPIRES=3600,
)

if __name__ == '__main__':
    app.start()
tasks.py
from __future__ import absolute_import
from celery import current_app
from celery.contrib.methods import task_method

class A:
    @current_app.task(filter=task_method)
    def add(self, x, y):
        return x + y
worker log
-------------- celery@mycomp.localdomain v3.1.17 (Cipater)
---- **** -----
--- * *** * -- Linux-2.6.32-504.8.1.el6.x86_64-x86_64-with-centos-6.6-Final
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: proj:0x1dc12d0
- ** ---------- .> transport: amqp://guest:**@localhost:5672//
- ** ---------- .> results: amqp://
- *** --- * --- .> concurrency: 24 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
[tasks]
. proj.tasks.add
[2015-04-08 17:45:20,788: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
[2015-04-08 17:45:20,801: INFO/MainProcess] mingle: searching for neighbors
[2015-04-08 17:45:21,812: INFO/MainProcess] mingle: all alone
[2015-04-08 17:45:21,828: WARNING/MainProcess] celery@mycomp.localdomain ready.
[2015-04-08 17:50:25,610: INFO/MainProcess] Received task: proj.tasks.add[e0020f67-dbe7-4f6d-9547-a8ace36c2a2c]
[2015-04-08 17:50:25,635: INFO/MainProcess] Task proj.tasks.add[e0020f67-dbe7-4f6d-9547-a8ace36c2a2c] succeeded in 0.023062946042s: 4
python shell
>>> from proj.tasks import A
>>> a = A()
>>> s = a.add.delay(2,2)
>>> s
<AsyncResult: e0020f67-dbe7-4f6d-9547-a8ace36c2a2c>
>>> s.backend
<celery.backends.base.DisabledBackend object at 0x113fdd0>
As you can see, I have a backend enabled; I'm using amqp. However, when I try to get the result, it says I don't have a backend enabled.
By including the line from proj.celery_app import app in tasks.py, the backend started to work.
This seems like a bug, since current_app should contain that backend instance.
I opened an issue on the celery github. Hopefully this helps anyone who encounters this problem as well.
Link to the github issue
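For reference, a version of tasks.py with that workaround applied might look like this sketch (based on the code above):
from __future__ import absolute_import
from celery import current_app
from celery.contrib.methods import task_method

# imported for its side effect: the app becomes the current app,
# so current_app carries the configured amqp backend
from proj.celery_app import app

class A:
    @current_app.task(filter=task_method)
    def add(self, x, y):
        return x + y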

celery recurring task not executing

My periodic task is never executed. What am I missing? I have the RabbitMQ service running. I also have flower running and the celery worker is showing up there.
I find it frustrating that there are a ton of examples of how to use Celery with Django, but most of them target old versions, which I believe don't apply to the latest release.
celery.py
from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'solar_secured.settings')

app = Celery('solar_secured', broker='amqp://guest@localhost//')

# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
tasks.py
from asset_monitor.process_raw import parse_rawdata
from datetime import timedelta
from celery.task import periodic_task

@periodic_task(run_every=timedelta(minutes=5))
def parse_raw():
    parse_rawdata()
started celery worker
C:\dev\solar_secured>manage.py celeryd
C:\Python27\lib\site-packages\celery-3.1.6-py2.7.egg\celery\apps\worker.py:159:
CDeprecationWarning:
Starting from version 3.2 Celery will refuse to accept pickle by default.
The pickle serializer is a security concern as it may give attackers
the ability to execute any command. It's important to secure
your broker from unauthorized access when using pickle, so we think
that enabling pickle should require a deliberate action and not be
the default choice.
If you depend on pickle then you should set a setting to disable this
warning and to be sure that everything will continue working
when you upgrade to Celery 3.2::
CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']
You must only enable the serializers that you will actually use.
warnings.warn(CDeprecationWarning(W_PICKLE_DEPRECATED))
[2013-12-16 11:26:19,302: WARNING/MainProcess] C:\Python27\lib\site-packages\cel
ery-3.1.6-py2.7.egg\celery\apps\worker.py:159: CDeprecationWarning:
Starting from version 3.2 Celery will refuse to accept pickle by default.
The pickle serializer is a security concern as it may give attackers
the ability to execute any command. It's important to secure
your broker from unauthorized access when using pickle, so we think
that enabling pickle should require a deliberate action and not be
the default choice.
If you depend on pickle then you should set a setting to disable this
warning and to be sure that everything will continue working
when you upgrade to Celery 3.2::
CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']
You must only enable the serializers that you will actually use.
warnings.warn(CDeprecationWarning(W_PICKLE_DEPRECATED))
-------------- celery@DjangoDev v3.1.6 (Cipater)
---- **** -----
--- * *** * -- Windows-2008ServerR2-6.1.7601-SP1
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> broker: amqp://guest@localhost:5672//
- ** ---------- .> app: solar_secured:0x2cd4b00
- ** ---------- .> concurrency: 8 (prefork)
- *** --- * --- .> events: OFF (enable -E to monitor this worker)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
C:\Python27\lib\site-packages\celery-3.1.6-py2.7.egg\celery\fixups\django.py:224
: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setti
ng in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2013-12-16 11:26:20,512: WARNING/MainProcess] C:\Python27\lib\site-packages\cel
ery-3.1.6-py2.7.egg\celery\fixups\django.py:224: UserWarning: Using settings.DEB
UG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2013-12-16 11:26:20,512: WARNING/MainProcess] celery@DjangoDev ready.
started celery beat
C:\dev\solar_secured>manage.py celery beat
celery beat v3.1.6 (Cipater) is starting.
__ - ... __ - _
Configuration ->
. broker -> amqp://guest@localhost:5672//
. loader -> celery.loaders.app.AppLoader
. scheduler -> celery.beat.PersistentScheduler
. db -> celerybeat-schedule
. logfile -> [stderr]#%INFO
. maxinterval -> now (0s)
[2013-12-16 14:18:48,206: INFO/MainProcess] beat: Starting...
You have to use a scheduler to execute periodic tasks. You can use Celery Beat, which is the default scheduler. Use the following command to run the worker with beat enabled: python manage.py celery worker -B
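As an aside (not part of the answer above): on Celery 3.1 the same schedule can also be declared in Django settings via CELERYBEAT_SCHEDULE instead of the periodic_task decorator; the dotted task path below is only a guess based on the tasks.py shown in the question:
# settings.py (sketch)
from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'parse-raw-every-5-minutes': {
        'task': 'asset_monitor.tasks.parse_raw',  # hypothetical dotted path to parse_raw
        'schedule': timedelta(minutes=5),
    },
}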
