Clarification of guide to Heroku Celery - python

I'm trying to figure out the woeful instructions here
Under the section "Configuring a Celery app" I'm not sure where I put the code:
import os

app.conf.update(BROKER_URL=os.environ['REDIS_URL'],
                CELERY_RESULT_BACKEND=os.environ['REDIS_URL'])
Any clarification of these instructions is greatly appreciated.

The instructions are indicating you should put that code in your tasks.py module. However, that's not exactly extensible for multiple packages, each with their own tasks.py module. What I'd recommend is creating a celery.py file in the same directory as your settings.py file.
# celery.py
import os

import celery

app = celery.Celery('example')
app.conf.update(BROKER_URL=os.environ['REDIS_URL'],
                CELERY_RESULT_BACKEND=os.environ['REDIS_URL'])
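Each package's tasks.py then just imports that shared app instead of creating its own. A minimal sketch, assuming the settings package is called project and the app is called myapp (both placeholders):
# myapp/tasks.py
from project.celery import app

@app.task
def add(x, y):
    # Trivial task to confirm the worker picks this module up.
    return x + y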
Or you can specify your settings in settings.py and configure celery as such:
# settings.py
import os

broker_url = os.environ['REDIS_URL']
result_backend = os.environ['REDIS_URL']
# celery.py
import os

from celery import Celery
from celery.utils.collections import DictAttribute
from celery.loaders.base import BaseLoader
from django.conf import settings
from django.apps import apps


class ProjectLoader(BaseLoader):
    def read_configuration(self):
        """Load configuration from Django settings.

        This may not be needed to be honest. It's what I use in my project.
        """
        return DictAttribute(settings)


os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings")
# CELERY_LOADER must be set in the environment. Setting the ``loader``
# kwarg for the app instance does _not_ do what we need it to.
os.environ.setdefault("CELERY_LOADER", "project.celery:ProjectLoader")

app = Celery("project")
app.config_from_object("django.conf:settings")
app.autodiscover_tasks(lambda: [n.name for n in apps.get_app_configs()])
# Procfile
worker: celery worker --app=project.celery
Disclaimer, some of these configs will require adjustment for your project.
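One piece the snippets above leave implicit is loading the Celery app when Django itself starts, so that @shared_task decorators bind to it. The usual pattern from the Celery docs is to re-export the app from the project package's __init__.py; a sketch, again assuming the package is called project:
# project/__init__.py
from __future__ import absolute_import, unicode_literals

# Import the Celery app when Django starts so shared tasks bind to it.
from .celery import app as celery_app

__all__ = ("celery_app",)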

Following are the steps I took to make a minimal Heroku/Django/Celery/Redis project, in conjunction with the instructions here along with other sources I found on the web. Hopefully someone will find this useful.
In your terminal, use the "heroku login" command to log in to the Heroku CLI.
"git clone https://github.com/heroku/python-getting-started.git" to copy a basic django skeleton project to your local.
rename python-getting-started to whatever.
cd into this directory.
run the following command: "pip install -r requirements.txt"
Note: Postgres must be properly installed in order for this step to work properly.
run the following command: "python manage.py collectstatic"
Install redis on Mac: "brew install redis"
Start redis server: "redis-server&" (The & at the end is to run it as a background process)
Test if Redis server is running: "redis-cli ping". If it replies “PONG”, then it’s good to go!
Install celery: "pip install celery"
Make a tasks.py file in your application directory with the following code:
from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')

@app.task
def add(x, y):
    return x + y
"cd .." back into root directory.
Run celery: "celery worker -A <path.to.tasks> &" (replace <path.to.tasks> with the dotted path to your tasks module, e.g. hello.tasks; the trailing & runs it in the background).
run: "python manage.py shell" in your root directory.
As your celery worker has been started, you can now use it to run your task just by importing the tasks.py module, e.g. from the Python interactive shell:
import hello.tasks
hello.tasks.add.delay(1, 1)
This should return an AsyncResult object; a sketch after these steps shows how to read the result value back.
Push your local to heroku master.
** Note: If you run "celery worker -A <path.to.tasks> &" and it gives the message:
consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 61]
Connection refused.
Try restarting the redis server with the command: "brew services restart redis"
There you have it. A minimal heroku/django/celery/redis project! You can download it here. Instructions on how to deploy this to Heroku are here.
** Note: In the working project the "celery worker" command is already included in the Procfile.
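If you also want to read the task's return value back (not just fire it off), the app needs a result backend in addition to a broker. A minimal sketch of the tasks.py above, reusing the same local Redis instance for both:
from celery import Celery

app = Celery('tasks',
             broker='redis://localhost:6379/0',
             backend='redis://localhost:6379/0')

@app.task
def add(x, y):
    return x + y
With that in place, result = hello.tasks.add.delay(1, 1) followed by result.get(timeout=10) should return 2, assuming a worker is running.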

Related

Celery worker - detach mode doesn't load app correctly

My original problem is that when running celery worker with the --detach flag, or using celery multi, my application tasks are not registered with the workers (although the workers do start up and are reachable, same as in this question). To help debug this problem I have made a plain celery app, which has a different but maybe related issue.
Source structure
setup.py
example
| tasks.py
| celery.py
| __init__.py
| __main__.py
tasks.py:
from example.celery import app

@app.task
def add(x, y):
    return x + y
celery.py
from celery import Celery
app = Celery('tasks', broker='redis://localhost:6379', include=["example.tasks"])
__init__.py
__version__ = "0.0.1"
__package__ = "example"
from example.celery import app
setup.py
from setuptools import setup

import example

setup(
    name=example.__package__,
    version=example.__version__,
    include_package_data=True,
    python_requires=">=3.7",
)
Install the package using
$ pip install -e .
From anywhere in the system I can now run
$ celery -A example worker
And I will have a worker with the add task. Adding the --detach flag like this:
$ celery -A example.celery.app worker --detach --logfile=$HOME/celery.log
will put an error in the celery.log file about authentication over AMQPLAIN being refused, even though the app is configured to use Redis:
...
amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile.
So my understanding is that something is going off when the worker tries to load the celery app in detach mode, but I can't figure out what. Any assistance is greatly appreciated.
Using celery==4.4.7
Update
This works with celery 5.0.2; I will be opening a bug ticket.
This is a known bug which I missed.
https://github.com/celery/celery/issues/6370
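Until you can upgrade, one way to confirm whether a detached worker actually loaded your app is to ask it over the broker which tasks it registered. A small sketch, assuming the example package above is installed, the Redis broker is running, and a worker has been started:
# inspect_registered.py -- hypothetical helper script, not part of the package
from example.celery import app

# Broadcast an inspect request to all reachable workers.
replies = app.control.inspect().registered()

# None means no worker answered; otherwise you get a mapping like
# {'celery@hostname': ['example.tasks.add', ...]}
print(replies)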

Django, ImportError: cannot import name Celery

Why is this happening?
My celery.py:
import os
from celery import Celery
from django.conf import settings
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myshop.settings')
app = Celery('myshop')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
my __init__.py:
# import celery
from .celery import app as celery_app
I even tried renaming celery.py to something else and the error still persisted. Could it be because of my python version?
I'll post an answer in order to move it out of the comments.
First of all, add this line to your __init__.py file:
from __future__ import absolute_import, unicode_literals
Second, you need to add broker information to your settings.
This is an example configuration to get you started. It should contain all you need to run a basic Celery set-up.
# Broker settings.
broker_url = 'amqp://guest:guest@localhost:5672//'
The next thing is running your celery worker. So if your celery app is named myshop, you have to run the celery worker (inside your environment) by typing this simple command:
celery -A myshop worker -l info
Then try to run your task, and everything should be fine.
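For completeness, the layout recommended in the Celery documentation also puts the absolute_import line at the top of celery.py itself, so that from celery import Celery does not resolve to your own celery.py under Python 2. A sketch, keeping the names from the question:
# myshop/celery.py
from __future__ import absolute_import, unicode_literals

import os

from celery import Celery
from django.conf import settings

# Set the default Django settings module before the app is created.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myshop.settings')

app = Celery('myshop')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)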
It's because of the version. How did you install celery?
If it was:
pip install celery==3.0.19
then run the server with:
python manage.py runserver
Or, if it was:
pip3 install celery==3.0.19
then run the server with:
python3 manage.py runserver

Gunicorn failed to load Flask application

I have a Flask app I am trying to serve via Gunicorn.
I am using virtualenv and python3. If I activate my venv, cd to my app base dir, and then run:
gunicorn mysite:app
I get:
Starting gunicorn
Listening at http://127.0.0.1:8000
DEBUG:mysite.settings:>>Config()
...
Failed to find application: 'mysite'
Worker exiting
Shutting down: master
Reason: App failed to load
Looking in /etc/nginx/sites-available I only have the file 'default'. In sites-enabled I have no file.
In my nginx.conf file I have:
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
App structure:
mysite #this is where I cd to and run gunicorn mysite:app
--manage.py
--/mysite
----settings.py
----__init__.py
In manage.py for mysite I have the following:
logger.debug("manage.py entry point")
app = create_app(app_name)
manager = Manager(app)

if __name__ == "__main__":
    manager.run()
In __init__.py file:
def create_app(object_name):
    app = Flask(__name__)
    # more setup here
    return app
In my settings.py in the app directory
class Config(object):
    logger.debug(">>Config()")  # this logs OK, so gunicorn is at least starting in the correct directory
From inside the virtualenv if I run
print(sys.path)
I find a path to python and site-packages for this virtualenv.
From what I have read to start gunicorn it's just a matter of installing it and running gunicorn mysite:app
Running gunicorn from the parent directory of mysite I get the same Failed to find application: 'mysite', App failed to load error, but don't get the DEBUG...Config() logged (as we are clearly in the wrong directory to start in). Running gunicorn from mysite/mysite (clearly wrong) I get an Exception in worker process error, ImportError: No module named 'mysite'.
Any clues as to how I can get gunicorn running?
You're pointing gunicorn at mysite:app, which is equivalent to from mysite import app. However, there is no app object in the top-level (__init__.py) import of mysite. Tell gunicorn to call the factory instead.
gunicorn "mysite:create_app()"
You can pass arguments to the call as well.
gunicorn "mysite:create_app('production')"
Internally, this is equivalent to:
from mysite import create_app
app = create_app('production')
Alternatively, you can use a separate file that does the setup. In your case, you already initialized an app in manage.py.
gunicorn manage:app
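If you'd rather not serve the Manager app from manage.py, a tiny dedicated module does the same job. A sketch, where the module name and the argument passed to the factory are placeholders:
# wsgi.py -- hypothetical entry point; start it with: gunicorn wsgi:app
from mysite import create_app

# Build the application once at import time so gunicorn can find the `app` object.
app = create_app('mysite')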

django celery daemon doesn't work

Here's my celery app config:
from __future__ import absolute_import
from celery import Celery
import os
from django.conf import settings

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'tshirtmafia.settings')

app = Celery('tshirtmafia')
app.conf.update(
    CELERY_RESULT_BACKEND='djcelery.backends.database:DatabaseBackend',
)
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
settings.py:
INSTALLED_APPS:
'kombu.transport.django',
'djcelery',
also:
BROKER_URL = 'django://'
Here's my task:
@shared_task
def test():
    send_mail('nesamone bus', 'Files have been successfully generated.',
              'marijus.merkevicius@gmail.com', ['marijus.merkevicius@gmail.com'],
              fail_silently=False)
Now when I run python manage.py celeryd locally and then run test.delay() from the shell, it works.
Now I'm trying to deploy my app. With the exact same configuration on the server, I run python manage.py celeryd, open a shell in another window, and run the test task, but it doesn't work.
I've also tried to setup background daemon like this:
/etc/default/celeryd configuration:
# Name of nodes to start, here we have a single node
CELERYD_NODES="w1"
# or we could have three nodes:
#CELERYD_NODES="w1 w2 w3"
# Where to chdir at start. (CATMAID Django project dir.)
CELERYD_CHDIR="/home/tshirtnation/"
# Python interpreter from environment. (in CATMAID Django dir)
ENV_PYTHON="/usr/bin/python"
# How to call "manage.py celeryd_multi"
CELERYD_MULTI="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryd_multi"
# How to call "manage.py celeryctl"
CELERYCTL="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryctl"
# Extra arguments to celeryd
CELERYD_OPTS="--time-limit=300 --concurrency=1"
# Name of the celery config module.
CELERY_CONFIG_MODULE="celeryconfig"
# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="settings"
And I use default celery /etc/init.d/celeryd script.
So basically it seems like celeryd starts but doesn't work. No idea how to debug this and what might be wrong.
Let me know if you need anything else
For me, Celery turned out to be a very capricious child in Django's otherwise robust system.
There is too little initial information here to understand the reason for your problems.
The most usual reason a Celery daemon fails is file system permissions.
But to clarify the reason, I'd try the following:
Start celery from the command line as the user who owns the django project:
celery -A proj worker -l info
If it works OK, go further
Start celery in verbose mode as the root user, just as the daemon would be started:
sudo sh -x /etc/init.d/celeryd start
This will show most of the problems with the daemon script (such as the celery user and group used), but not all, unfortunately: permission failures are not visible.
One little remark: usually Celery is started by its own celery user, and the django project by another one. After a long fight between celery and the system, I gave up on the celery user and ran the celery process as the django project user.
And... do not forget to run these once:
update-rc.d celerybeat defaults
update-rc.d celeryd defaults
(This is for starting the daemons on Ubuntu, of course.)
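One more sanity check, since the question configures a database result backend: from python manage.py shell on the server, fire the task and watch its state. A sketch, where myapp.tasks is a placeholder for wherever your test task actually lives:
# Run inside `python manage.py shell` on the server.
from myapp.tasks import test  # placeholder import path

result = test.delay()
print(result.id, result.state)
# The state stays PENDING forever if no worker is consuming from the broker;
# once a worker has run the task, the database backend reports SUCCESS
# (or FAILURE, with the traceback stored alongside it).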
Good luck

Nginx(Django) ImportError: cannot import name celeryd

I tested my project on my local machine, and it worked fine. But after uploading to a remote server (CentOS), I cannot execute celerybeat.
Here is my command.
python manage.py celeryd --events --loglevel=INFO -c 5 --settings=[settings-directory].production
This command works on the local machine (with --settings=[settings-directory].local), but on the remote server, ImportError: cannot import name celeryd occurred.
The settings for celery are in base.py; local.py and production.py import that file. In production.py there are just DEBUG, static, and database settings.
I can import djcelery and celery in shell of the remote machine.
How could I solve this?
--
I think this is a version problem... I'm reading about celery 3.1.
It turned out I had used a different version of Django on my remote server.
In Celery 3.1, there is no command named celeryd.
