Raise exception in Celery after_configure callback

I defined a callback in my Celery app to set things up (database, ...). The code raises an exception when configuration is missing (a missing environment variable). I noticed this exception doesn't bubble up, which makes it rather useless.
import os

from celery import Celery
from celery.utils.log import get_task_logger

logger = get_task_logger(__name__)

class MyCelery(Celery):
    def on_after_configure_cb(self, *args, **kwargs):
        db_url = os.getenv("SQLALCHEMY_DATABASE_URI")
        if db_url is None:
            logger.critical("SQLALCHEMY_DATABASE_URI environment variable not set")
            print("SQLALCHEMY_DATABASE_URI environment variable not set")
            raise MyError("SQLALCHEMY_DATABASE_URI environment variable not set")
        from myapp import MyApp
        MyApp.set_db_url(db_url)

celery = MyCelery()
If I run
celery -A myapp worker -l DEBUG
only the print actually writes anything to stdout. For some reason, the logger prints nothing (perhaps it is not ready at this stage).
Ideally, I would like Celery to stop, rather than silently run without a successful setup.
Context: I can't make this config in MyCelery.__init__ because my app imports the myapp.celery module where the Celery app is instantiated, so I can't import my app when importing myapp.celery (circular import).
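A hedged sketch of one way to fail fast (my suggestion, not the asker's code): do the environment check at import time in the module that instantiates the Celery app, so the worker exits before it even starts, and keep the MyApp import deferred inside the callback so the circular import is still avoided:

import os
import sys

from celery import Celery

# Assumption: checking at import time makes the worker process exit
# immediately instead of silently running half-configured.
db_url = os.getenv("SQLALCHEMY_DATABASE_URI")
if db_url is None:
    sys.exit("SQLALCHEMY_DATABASE_URI environment variable not set")

class MyCelery(Celery):
    def on_after_configure_cb(self, *args, **kwargs):
        # Deferred import avoids the circular import described above.
        from myapp import MyApp
        MyApp.set_db_url(db_url)

celery = MyCelery()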

Related

Celery task hangs after calling .delay() in Django

When calling the .delay() method of an imported task from a Django application, the process gets stuck and the request never completes.
We also don't get any error on the console.
Setting a set_trace() with pdb results in the same thing.
The following questions were reviewed which didn't help resolve the issue:
Calling celery task hangs for delay and apply_async
celery .delay hangs (recent, not an auth problem)
E.g.:
backend/settings.py
CELERY_BROKER_URL = os.environ.get("CELERY_BROKER", RABBIT_URL)
CELERY_RESULT_BACKEND = os.environ.get("CELERY_BROKER", RABBIT_URL)
backend/celery.py
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'backend.settings')
app = Celery('backend')
app.config_from_object('django.conf:settings', namespace='CELERY')
# Load task modules from all registered Django app configs.
app.autodiscover_tasks()
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
app/tasks.py
import time
from celery import shared_task
@shared_task
def upload_file(request_id):
    time.sleep(request_id)
    return True
app/views.py
from rest_framework.views import APIView
from .tasks import upload_file
class UploadCreateAPIView(APIView):
    # other methods...

    def post(self, request, *args, **kwargs):
        id = request.data.get("id", None)
        # business logic ...
        print("Going to submit task.")
        import pdb; pdb.set_trace()
        upload_file.delay(id)  # <- this hangs the runserver as well as the set_trace()
        print("Submitted task.")
The issue was with the setup of the Celery application with Django. We need to make sure the Celery app is imported and initialized in the following file:
backend/__init__.py
from __future__ import absolute_import, unicode_literals
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app
__all__ = ('celery_app',)
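With that import in place, tasks decorated with shared_task bind to the configured app. A quick sanity check (a sketch; the task name assumes the app/tasks.py layout above):

# Run inside "python manage.py shell"; accessing app.tasks finalizes the
# app, so autodiscovered tasks should appear in the registry.
from backend.celery import app
print('app.tasks.upload_file' in app.tasks)  # expect True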
I've run into this issue where Celery calls through delay or apply_async may randomly hang the program indefinitely. I tried all the broker_transport_options and retry_policy options to let Celery recover, but it still happens. Then I found this solution, which enforces an execution time limit on a block or function by using Python's signal handlers.
import signal
from contextlib import contextmanager

class TimeoutException(Exception):
    pass

@contextmanager
def time_limit(seconds):
    def signal_handler(signum, frame):
        raise TimeoutException("Timed out!")
    signal.signal(signal.SIGALRM, signal_handler)
    signal.alarm(seconds)
    try:
        yield
    finally:
        signal.alarm(0)

def my_function():
    with time_limit(3):
        # celery_call stands for whatever task is being dispatched
        celery_call.apply_async(kwargs={"k1": "v1"}, expires=30)
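Callers can then catch the timeout instead of hanging forever; a minimal usage sketch (TimeoutException is the custom exception defined above):

try:
    my_function()
except TimeoutException:
    # The dispatch hung, most likely a broker connectivity problem.
    print("Celery call timed out; check the broker connection")

Note that signal.SIGALRM is Unix-only and must be installed from the main thread, so this approach won't work on Windows or inside worker threads.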

Celery and Flask - Cannot mix new setting names with old setting names

I have used this tutorial to set up Celery on my Flask application, but I keep getting the following error:
File "C:\Users\user\AppData\Local\Programs\Python\Python38\lib\site-packages\celery\app\base.py", line 141, in data
return self.callback()
celery.exceptions.ImproperlyConfigured:
Cannot mix new setting names with old setting names, please
rename the following settings to use the old format:
include -> CELERY_INCLUDE
Or change all of the settings to use the new format :)
What am I doing wrong? The code I used is basically the same as the tutorial's:
__init__.py
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate
from celery import Celery
from config import Config  # assumed location of the project's Config object

app = Flask(__name__)
app.config.from_object(Config)
app.config['TESTING'] = True
db = SQLAlchemy(app)
migrate = Migrate(app, db)

def make_celery(app):
    celery = Celery(
        app.import_name,
        backend=app.config['CELERY_RESULT_BACKEND'],
        broker=app.config['CELERY_BROKER_URL']
    )
    celery.conf.update(app.config)

    class ContextTask(celery.Task):
        def __call__(self, *args, **kwargs):
            with app.app_context():
                return self.run(*args, **kwargs)

    celery.Task = ContextTask
    return celery

app.config.update(
    CELERY_BROKER_URL='redis://localhost:6379',
    CELERY_RESULT_BACKEND='redis://localhost:6379'
)
celery = make_celery(app)
The tutorial code uses the old setting names. Change these lines:
app.config.update(
CELERY_BROKER_URL='redis://localhost:6379',
CELERY_RESULT_BACKEND='redis://localhost:6379'
)
to
app.config.update(
broker_url='redis://localhost:6379',
result_backend='redis://localhost:6379'
)
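If you rename the keys in the Flask config this way, remember to update the lookups in make_celery to match the new names. An alternative sketch (my suggestion, not from the tutorial): avoid copying the entire Flask config into Celery and set only Celery's own options, in the new lowercase format, which sidesteps the mixing problem entirely:

def make_celery(app):
    celery = Celery(app.import_name)
    # Only Celery's own settings, all in the new lowercase format, so no
    # old-style CELERY_* keys from the Flask config leak into celery.conf.
    # (ContextTask wiring omitted for brevity.)
    celery.conf.update(
        broker_url='redis://localhost:6379',
        result_backend='redis://localhost:6379',
    )
    return celery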
A similar Celery/Flask implementation was working fine on Ubuntu with Python 3.8, but gave the errors above on Windows 10 (Python 3.9.0 and Celery 4.4.6 (cliffs)).
It finally worked for me by adding -P solo to the celery worker command (ref - https://github.com/celery/celery/issues/3759)
$ celery worker -A proj -Q yourqueuename -P solo --loglevel=INFO

Flask-Script add_option method not working

Using Flask-Script's add_option method, I'm trying to pass the name of a config file into my create_app() so I can configure it with from_pyfile() -- see Flask Instance Folders.
I used this gist to get me started.
manage.py
from flask_script import Manager

from fbone import create_app

app = create_app()
manager = Manager(app)
manager.add_option('-c', '--config', dest='config', required=False)
app.py
def create_app(config=None, app_name=None, blueprints=None):
    """Create a Flask app."""
    print config
This is just a snippet of my create_app function but I'm starting the app like this:
$ python manage.py -c config/heroku.cfg runserver
None
/env/lib/python2.7/site-packages/flask_script/__init__.py:153: UserWarning: Options will be ignored.
* Running on http://127.0.0.1:5000/
* Restarting with reloader
None
As you can see, instead of printing config/heroku.cfg it prints None
I think this is because of the UserWarning from flask script but I can't find out why that's happening.
It turns out you are creating the Flask object by calling create_app() (with the parens). If you instead pass the factory itself, either
app = create_app
or
Manager(create_app)
then you should be able to use add_option().
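Putting that together, a minimal manage.py sketch based on the question's snippet:

from flask_script import Manager

from fbone import create_app

# Pass the factory itself (no parens) so Manager can forward
# command-line options such as --config to create_app().
manager = Manager(create_app)
manager.add_option('-c', '--config', dest='config', required=False)

if __name__ == '__main__':
    manager.run()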

Celery tasks in Django are always blocking

I have the following setup in my django settings:
CELERY_TASK_RESULT_EXPIRES = timedelta(minutes=30)
CELERY_CHORD_PROPAGATES = True
CELERY_ACCEPT_CONTENT = ['json', 'msgpack', 'yaml']
CELERY_ALWAYS_EAGER = True
CELERY_EAGER_PROPAGATES_EXCEPTIONS = True
BROKER_URL = 'django://'
CELERY_RESULT_BACKEND='djcelery.backends.database:DatabaseBackend'
I've included this under my installed apps:
'djcelery',
'kombu.transport.django'
My project structure is (Django 1.5):
proj
|_proj
    __init__.py
    celery.py
|_apps
    |_myapp1
        |_models.py
        |_tasks.py
This is my celery.py file:
from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings.dev')
app = Celery('proj')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS, related_name='tasks')
In the main __init__.py I have:
from __future__ import absolute_import
from .celery import app as celery_app
And finally in myapp1/tasks.py I define my task:
from celery import task

@task()
def retrieve():
    # Do my stuff
    pass
Now, if I launch a Django interactive shell and run the retrieve task:
result = retrieve.delay()
it always seems to be a blocking call, meaning the prompt is blocked until the function returns. The result status is SUCCESS and the function actually performs its operations, BUT it does not seem to be async. What am I missing?
It seems like CELERY_ALWAYS_EAGER causes this. From the Celery docs:
if this is True, all tasks will be executed locally by blocking until
the task returns. apply_async() and Task.delay() will return an
EagerResult instance, which emulates the API and behavior of
AsyncResult, except the result is already evaluated.
That is, tasks will be executed locally instead of being sent to the
queue.
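So, for .delay() to return immediately, eager mode has to be off and a worker must be running to consume the queue. A minimal sketch of the change, using the retrieve task from the question:

# settings: disable eager mode so tasks go to the broker
CELERY_ALWAYS_EAGER = False

# then, in a Django shell with a worker running:
result = retrieve.delay()  # returns an AsyncResult immediately
result.ready()             # False until the worker finishes
result.get(timeout=30)     # blocks only when you explicitly wait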

Can't get "retry" to work in Celery

Given a file myapp.py
from celery import Celery
celery = Celery("myapp")
celery.config_from_object("celeryconfig")
@celery.task(default_retry_delay=5 * 60, max_retries=12)
def add(a, b):
    with open("try.txt", "a") as f:
        f.write("A trial = {}!\n".format(a + b))
    raise add.retry([a, b])
Configured with a celeryconfig.py
CELERY_IMPORTS = ["myapp"]
BROKER_URL = "amqp://"
CELERY_RESULT_BACKEND = "amqp"
I run the following in the directory that has both files:
$ celeryd -E
And then
$ python -c "import myapp; myapp.add.delay(2, 5)"
or
$ celery call myapp.add --args="[2, 5]"
So try.txt is created with
A trial = 7!
only once. That means the retry was ignored.
I tried many other things:
- Using MongoDB as broker and backend and inspecting the database (strangely enough, I can't see anything in my broker "messages" collection, even for a "countdown"-scheduled job)
- The PING example in here, both with RabbitMQ and MongoDB
- Printing to the screen both with print (like the PING example) and with logging
- Making the retry call in an except block after a forced Exception is raised, raising or returning the retry(), and changing the "throw" parameter to True/False/not specified
- Watching what happens with Celery Flower (in which the "broker" link shows nothing)
But none of them worked =/
My celery report output:
software -> celery:3.0.19 (Chiastic Slide) kombu:2.5.10 py:2.7.3
billiard:2.7.3.28 py-amqp:N/A
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.default.Loader
settings -> transport:amqp results:amqp
Is there anything wrong above? What I need to do to make the retry() method work?
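For reference (not part of the original question), the pattern the Celery documentation shows for later versions (3.1+) binds the task so retry is called on the task instance; a hedged sketch in that style:

from celery import Celery

celery = Celery("myapp")
celery.config_from_object("celeryconfig")

@celery.task(bind=True, default_retry_delay=5 * 60, max_retries=12)
def add(self, a, b):
    with open("try.txt", "a") as f:
        f.write("A trial = {}!\n".format(a + b))
    # retry() raises a Retry exception that the worker intercepts and
    # uses to re-queue the task with the same arguments.
    raise self.retry(args=[a, b])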
