I'm attempting to use Flask and Celery in Docker and am having issues with the Flask application context.
Flask==1.0.2
celery==4.2.0
Flask-CeleryExt==0.3.1
Here is some pertinent code.
docker-compose.yaml
version: '3'
services:
myapp:
build:
context: .
dockerfile: compose/dev/myapp/Dockerfile
ports:
- '5000:5000'
- '8888:8888'
env_file: .env
environment:
- FLASK_ENV=development
volumes:
- .:/myapp
entrypoint: /wait-for-postgres.sh
command: flask run --host=0.0.0.0
depends_on:
- postgres
- redis
networks:
- flask-redis-celery
celery:
build:
context: .
dockerfile: compose/dev/celery/Dockerfile
command: 'celery -A myapp.tasks worker -Q default --loglevel=info'
env_file: .env
volumes:
- .:/myapp
depends_on:
- redis
- myapp
networks:
- flask-redis-celery
extensions.py
from flask_celeryext import FlaskCeleryExt
ext = FlaskCeleryExt()
app.py, inside a register_extensions function (I'm using the application factory pattern in my app):
ext.init_app(app)
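For context, register_extensions looks roughly like this (a simplified sketch; the other extensions and config loading are omitted):

from flask import Flask
from coupon.extensions import ext

def create_app():
    app = Flask(__name__)
    register_extensions(app)
    return app

def register_extensions(app):
    # FlaskCeleryExt creates its Celery instance when init_app is called
    ext.init_app(app)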
Inside the myapp container, I can get to ext.celery per the documentation, see that I have a Celery instance, and correctly send a task to it:
<Celery default at 0x7f600d0e7f98>
However, attempting to do the same in the celery container in my tasks file results in ext.celery being None.
tasks.py
from coupon.extensions import ext
celery = ext.celery # This is None
@celery.task(name='tasks.my_task', max_retries=2, default_retry_delay=60)
def my_task(some_args):
# etc.
Error
AttributeError: 'NoneType' object has no attribute 'task'
I've attempted numerous other options as well, including make_celery as noted in the Flask docs, but I cannot get to Flask and my models in the celery container, so I don't believe this is specific to Flask-CeleryExt.
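For reference, the make_celery approach from the Flask docs that I tried looks roughly like this (sketched from the docs, not my exact code):

from celery import Celery

def make_celery(app):
    celery = Celery(app.import_name,
                    broker=app.config['CELERY_BROKER_URL'],
                    backend=app.config.get('CELERY_RESULT_BACKEND'))
    celery.conf.update(app.config)

    class ContextTask(celery.Task):
        def __call__(self, *args, **kwargs):
            # run every task inside the Flask application context
            with app.app_context():
                return self.run(*args, **kwargs)

    celery.Task = ContextTask
    return celery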
I can make Celery tasks work fine if they do not access Flask objects, but I need to access SQLAlchemy models and custom classes from my Celery tasks.
How can I make Celery work properly in my celery container and be able to access Flask objects?
Related
I am using Celery in Django in order to run tasks at specific time intervals. When I first start the containers, all the tasks run without any issue. If I stop them (i.e. docker-compose down) and then restart them (i.e. docker-compose up), celery-beat does not send the tasks to the celery worker, so they never get executed.
If I visit the admin panel, disable the tasks and then re-enable them, it starts working!
Also, if I do not use django_celery_beat.schedulers:DatabaseScheduler and let celery beat use the default scheduler, it works as well. Even though this works, it is not ideal, since the tasks are no longer editable in the Django admin panel.
Is there a solution to this issue?
PostgreSQL is not part of the docker-compose file; I have a native PostgreSQL 10.5 installation.
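To make the admin part concrete: with the DatabaseScheduler the schedule lives in django_celery_beat's tables (those are the entries I enable and disable in the admin). A rough sketch of the same schedule as model rows, assuming the standard django_celery_beat models:

from django_celery_beat.models import CrontabSchedule, PeriodicTask

schedule, _ = CrontabSchedule.objects.get_or_create(minute='*/2', hour='*')
PeriodicTask.objects.get_or_create(
    name='sendemails_task',
    defaults={'task': 'demoapp.tasks.testing', 'crontab': schedule},
)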
settings.py
CACHES = {
"default": {
"BACKEND": "django_redis.cache.RedisCache",
"LOCATION": "redis://redis:6379",
"OPTIONS": {
"CLIENT_CLASS": "django_redis.client.DefaultClient",
"CONNECTION_POOL_KWARGS":{"max_connections":50, "retry_on_timeout": True}
}
}
}
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
SESSION_CACHE_ALIAS = "default"
CELERY_TIMEZONE = "Europe/Amsterdam"
CELERY_TASK_TRACK_STARTED = True
CELERY_TASK_TIME_LIMIT = 30 * 60
CELERY_BROKER_URL="redis://redis:6379"
CELERY_CACHE_BACKEND = 'default'
from celery.schedules import crontab  # needed for crontab() below

CELERY_BEAT_SCHEDULE = {
"sendemails_task":{
"task":"demoapp.tasks.testing",
"schedule":crontab(minute='*/2')
},
}
docker-compose.yml
services:
redis:
image: redis
restart: unless-stopped
expose:
- 6379
web:
build:
context: ./app
dockerfile: Dockerfile.prod
restart: unless-stopped
command: gunicorn Project.wsgi:application --bind 0.0.0.0:8001 --workers=4 --preload
volumes:
- static_volume:/home/app/web/staticfiles
- ./uploads/:/home/app/web/uploads/
expose:
- 8001
env_file:
- ./.env.prod
celery:
build:
context: ./app
dockerfile: Dockerfile.prod
command: celery -A Project worker -l info
volumes:
- ./app/:/usr/src/app/
env_file:
- ./.env.prod
depends_on:
- web
- redis
celery-beat:
build:
context: ./app
dockerfile: Dockerfile.prod
command: celery -A Project beat -l debug --scheduler django_celery_beat.schedulers:DatabaseScheduler
volumes:
- ./app/:/usr/src/app/
env_file:
- ./.env.prod
depends_on:
- web
- redis
- celery
volumes:
static_volume:
tasks.py
from __future__ import absolute_import, unicode_literals
from celery.utils.log import get_task_logger
from celery import shared_task
from django.core.cache import cache
import time
import traceback
@shared_task
def testing():
print('test')
celery.py
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'Project.settings')
app = Celery('Project')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
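For reference, the project __init__.py that goes with this celery.py follows the standard pattern from the Celery docs (assumed here, since it is not shown above):

from __future__ import absolute_import, unicode_literals
from .celery import app as celery_app

__all__ = ('celery_app',)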
I'm having trouble running Flower to monitor the Celery async tasks that are running on my Docker-deployed Flask app. I've tried everything, but the documentation on getting Flower running in a Docker-deployed environment is pretty sparse, and I'm still relatively new to this.
The web, celery and flower portions of my docker-compose.yml file:
version: "3.6"
services:
web:
image: <image here>
deploy:
replicas: 1
restart_policy:
condition: on-failure
placement:
constraints: [node.role == manager] # this parameter should be worker when in the cloud with managers and workers
command: ./docker_setup.sh postgres postgres_test
depends_on:
- celery
environment:
- PYTHONUNBUFFERED=1
secrets:
- <secret shtuff>
networks:
- webnet
labels:
- <local deployment label>
celery:
image: <image here>
deploy:
replicas: 1
restart_policy:
condition: on-failure
placement:
constraints: [node.role == manager] # this parameter should be worker when in the cloud with managers and workers
command: celery worker -A celery_worker.celery --loglevel=info
depends_on:
- postgres
- redis
environment:
- PYTHONUNBUFFERED=1
secrets:
- <secret shtuff>
networks:
- webnet
labels:
- <local deployment label>
flower:
image: <image here>
environment:
- PYTHONUNBUFFERED=1
working_dir: /code
command: celery flower -A celery_worker.celery --port=5555
depends_on:
- postgres
- redis
- celery
ports:
- "5555:5555"
links:
- db
- redis
networks:
- webnet
When I deploy this locally through Docker (such that I can access the web API via localhost), it works fine, and I can see from the celery logs that the app is running and handling async requests smoothly. However, when I try to access the Flower monitoring app by running flower and going to http://localhost:5555, the Flower app loads but no threads or workers are shown. Any advice or help would be greatly appreciated!
Wow. I made a silly oversight and forgot to include flower==0.9.2 in my app's requirements.txt file. Once I added it, Flower was exposed on localhost:5555 after a local deployment. Works like a charm!
I have a problem with this docker-compose file:
version: '3'
services:
app:
image: php:7
command: php -S 0.0.0.0:8000 /app/get_count_of_day.php
ports:
- "8000:8000"
volumes:
- .:/app
composer:
restart: 'no'
image: composer/composer:php7
command: install
volumes:
- .:/app
python:
image: python:3
command: bash -c "pip3 install -r /app/requirements.txt && celery worker -l info -A cron --beat --workdir=/app/python"
links:
- redis
volumes:
- .:/app
depends_on:
- app
redis:
image: 'redis:3.0-alpine'
command: redis-server
ports:
- "6379:6379"
My celery task
import os
from celery import Celery
from celery.schedules import crontab
os.chdir("..")
app = Celery(broker='redis://redis:6379/0')
@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
sender.add_periodic_task(10.0, run_cron.s(), name='add every 10')
@app.task
def run_cron():
os.system("/usr/local/bin/php index.php")
The error is that php is not found:
python_1 | sh: 1: /usr/local/bin/php: not found
python_1 | [2018-06-15 15:08:29,491: INFO/ForkPoolWorker-2] Task cron.run_cron[e7c338c1-7b9c-4d6f-b607-f4e354fbd623] succeeded in
0.003908602000592509s: None
python_1 | [2018-06-15 15:08:39,487: INFO/Beat] Scheduler: Sending due task add every 10 (cron.run_cron)
But if I go into the container manually with
docker exec -i -t 1ff /bin/bash
I can find php in the directory.
Binaries from the container "app" are not exposed in the container "python"; this is Docker's MO. To run the index.php script you can just open the page via an HTTP request, e.g. curl http://app:8000/index.php, or do the same entirely in Python via urllib2 or requests (I recommend the latter).
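A minimal sketch of the requests-based approach, assuming the php service is reachable as app on port 8000 (as in the compose file above); note that the built-in server there is started with a router script, so the exact path may need adjusting:

import requests
from celery import Celery

app = Celery(broker='redis://redis:6379/0')

@app.task
def run_cron():
    # call the php container over the compose network instead of invoking a php
    # binary that only exists inside the "app" container
    response = requests.get('http://app:8000/index.php', timeout=10)
    response.raise_for_status()
    return response.text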
But if your request fails because it can't find the app domain, the original answer below is your solution.
In case you have to perform more complicated operations inside the app container, you should really think about exposing them through an internal API or something like that; as I understand it, Docker containers should do one thing and one thing only. If you need to run some complex shell script in your php container, you are breaking this principle. The app container is for serving php pages, so it should do exactly that.
As a last resort, you can totally hack on Docker, for example by exposing the Docker control socket inside your celery container and issuing commands to other containers directly. This can be really dangerous and is heavily discouraged in the docs, but you do you ;)
[EDIT: originally misread question...]
In the default docker network you can't address containers by name. Add
networks:
my-net:
to the end of your docker-compose file, and
networks:
- my-net
to every service that needs to talk to the others.
I'm creating a flask service using an app factory pattern and I need to use celery for async tasks. I'm also using docker and docker-compose to contain and run everything. My structure looks like this:
server
|
+-- manage.py
+-- docker-compose.yml
+-- requirements.txt
+-- Dockerfile
|
+-- project
| |
| +-- api
| |
| +--tasks.py
|
| +-- __init__.py
My tasks.py file looks like this:
from project import celery_app
@celery_app.task
def celery_check(test):
print(test)
I call manage.py to run the app, which looks like this:
# manage.py
from flask_script import Manager
from project import create_app
app = create_app()
manager = Manager(app)
if __name__ == '__main__':
manager.run()
And my __init__.py looks like this:
# project/__init__.py
import os
import json
from flask_mongoalchemy import MongoAlchemy
from flask_cas import CAS
from flask import Flask
from itsdangerous import JSONWebSignatureSerializer as JWT
from flask_httpauth import HTTPTokenAuth
from celery import Celery
# instantiate the database and CAS
db = MongoAlchemy()
cas = CAS()
# Auth stuff (ReplaceMe is replaced below in create_app())
jwt = JWT("ReplaceMe")
auth = HTTPTokenAuth('Bearer')
celery_app = Celery(__name__, broker=os.environ.get("CELERY_BROKER_URL"))
def create_app():
# instantiate the app
app = Flask(__name__, template_folder='client/templates', static_folder='client/static')
# set config
app_settings = os.getenv('APP_SETTINGS')
app.config.from_object(app_settings)
# Send new static files every time if debug is enabled
if app.debug:
app.config['SEND_FILE_MAX_AGE_DEFAULT'] = 0
# Get the secret keys
parse_secret(app.config['CONFIG_FILE'], app)
celery_app.conf.update(app.config)
print(celery_app.conf)
# set up extensions
db.init_app(app)
cas.init_app(app)
# Replace the secret key with the app's
jwt.secret_key = app.config["SECRET_KEY"]
parse_config(app.config['CONFIG_FILE'])
# register blueprints
from project.api.views import twist_blueprint
app.register_blueprint(twist_blueprint)
return app
In my docker-compose I start a worker and define some environment variables like this:
version: '2.1'
services:
twist-service:
container_name: twist-service
build: .
volumes:
- '.:/usr/src/app'
ports:
- 5001:5000 # expose ports - HOST:CONTAINER
environment:
- APP_SETTINGS=project.config.DevelopmentConfig
- DATABASE_NAME_TESTING=testing
- DATABASE_NAME_DEV=dev
- DATABASE_URL=twist-database
- CONFIG_FILE=./project/default_config.json
- MONGO_PASSWORD=user
- CELERY_RESULT_BACKEND=redis://redis:6379
- CELERY_BROKER_URL=redis://redis:6379/0
- MONGO_PORT=27017
depends_on:
- celery
- twist-database
celery:
container_name: celery
build: .
command: celery -A project.api.tasks --loglevel=debug worker
volumes:
- '.:/usr/src/app'
twist-database:
image: mongo:latest
container_name: "twist-database"
environment:
- MONGO_DATA_DIR=/data/db
- MONGO_USER=mongo
volumes:
- /data/db
ports:
- 27017:27017 # expose ports - HOST:CONTAINER
command: mongod
redis:
image: "redis:alpine"
command: redis-server
volumes:
- '/redis'
ports:
- '6379:6379'
However when I run my docker-compose file and generate the containers, I end up with this in the celery worker logs:
[2017-07-20 16:53:06,721: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
This means the worker is ignoring the Redis configuration that was set when the Celery app was created and is trying to use RabbitMQ instead. I've tried changing project.api.tasks to project and to project.celery_app, but to no avail.
It seems to me like the celery service should have the environment variables CELERY_RESULT_BACKEND and CELERY_BROKER_URL as well.
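That would explain the traceback: __init__.py creates the app with Celery(__name__, broker=os.environ.get("CELERY_BROKER_URL")), so in the celery container, where that variable is not set, the broker comes back as None and Celery falls back to its default amqp://guest@localhost broker. A small guard makes that failure mode explicit (a sketch, not part of the original code):

import os
from celery import Celery

broker_url = os.environ.get('CELERY_BROKER_URL')
if not broker_url:
    # without this check Celery silently falls back to its default amqp:// broker,
    # which is exactly the "Connection refused" in the worker log above
    raise RuntimeError('CELERY_BROKER_URL is not set in this container')

celery_app = Celery(__name__, broker=broker_url)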
You need to link the docker services together. The most straightforward mechanism is to add a networks section to your docker-compose file.
I am learning celery and I created a project to test my configuration. I installed celery==4.0.0 and django-celery-beat==1.0.1 according to the latest documentation.
In drf_project/drf_project/celery.py (drf_project being the main project dir containing manage.py):
from __future__ import absolute_import, unicode_literals
from celery import Celery
import os
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'drf_project.settings')
app = Celery('drf_project')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
In drf_project/drf_project/settings.py
INSTALLED_APPS += ('django_celery_beat',)
from datetime import timedelta  # needed for the schedule below

CELERYBEAT_SCHEDULE = {
"test_1": {
"task": "tasks.print_test",
"schedule": timedelta(seconds=2),
},
}
In drf_project/drf_project/__init__.py
from __future__ import absolute_import, unicode_literals
from .celery import app as celery_app
__all__ = ['celery_app']
In my user_management app (drf_project/user_management/) I added a tasks.py:
from celery import Celery
from time import strftime
app = Celery()
@app.task
def print_test():
    print strftime('%Y-%m-%d %H:%M:%S')
    with open('abc.txt', 'ab+') as test_file:
        test_file.write(strftime('%Y-%m-%d %H:%M:%S') + '\n')  # file objects have write(), not writeline()
When I run the celery worker and my Django project's dev server in different terminals with:
celery -A drf_project worker -l info
and
python manage.py runserver
I can see my task in the celery log:
[tasks]
. user_management.tasks.print_test
But it is not executing, and I am not getting any error. So what am I doing wrong? I followed the official Celery documentation.
For running periodic tasks you have to run two services: celery beat together with the celery worker.
You can find more information at the bottom of the following page.
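For this project that would mean starting something like the following alongside the worker (adjust the project name if yours differs):

celery -A drf_project beat -l info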
Here is the docker-compose configuration that I have set up to run the celery worker and celery beat. It does the job. Make sure you change the main_project_folder name in the docker-compose file below:
version: '3'
services:
redis:
image: "redis:latest"
ports:
- "6379:6379"
worker:
build:
context: .
dockerfile: Dockerfile
image: madefire/chordtest
command: bash -c "celery -A main_project_folder_name worker -l INFO"
environment:
- BROKER_URL=redis://redis:6379/0
- RESULT_BACKEND=redis://redis:6379/0
- C_FORCE_ROOT=true
volumes:
- ./:/app/
depends_on:
- redis
celery_beat:
build:
context: .
dockerfile: Dockerfile
image: madefire/chordtest
command: bash -c "celery -A main_project_folder_name beat"
environment:
- BROKER_URL=redis://redis:6379/0
- RESULT_BACKEND=redis://redis:6379/0
- C_FORCE_ROOT=true
volumes:
- ./:/app/
depends_on:
- redis