Celery Task not working with redis in flask docker container - python

I am trying to run a celery task in a flask docker container, and I get the error below when the celery task is executed:
web_1 | sock.connect(socket_address)
web_1 | OSError: [Errno 99] Cannot assign requested address
web_1 |
web_1 | During handling of the above exception, another exception occurred:
web_1 | File "/opt/venv/lib/python3.8/site-packages/redis/connection.py", line 571, in connect
web_1 | raise ConnectionError(self._error_message(e))
web_1 | redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Cannot assign requested address.
Without the celery task, the application works fine.
docker-compose.yml
version: '3'
services:
  web:
    build: ./
    volumes:
      - ./app:/app
    ports:
      - "80:80"
    environment:
      - FLASK_APP=app/main.py
      - FLASK_DEBUG=1
      - 'RUN=flask run --host=0.0.0.0 --port=80'
    depends_on:
      - redis
  redis:
    container_name: redis
    image: redis:6.2.6
    ports:
      - "6379:6379"
    expose:
      - "6379"
  worker:
    build:
      context: ./
    hostname: worker
    command: "cd /app/routes && celery -A celery_tasks.celery worker --loglevel=info"
    volumes:
      - ./app:/app
    links:
      - redis
    depends_on:
      - redis
main.py
from flask import Flask
from instance import config, exts
from decouple import config as con

def create_app(config_class=config.Config):
    app = Flask(__name__)
    app.config.from_object(config.Config)
    app.secret_key = con('flask_secret_key')
    exts.mail.init_app(app)
    from routes.test_route import test_api
    app.register_blueprint(test_api)
    return app

app = create_app()

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True, port=80)
I am using Flask blueprints to split the API routes.
test_route.py
from flask import Flask, render_template, Blueprint
from instance.exts import celery

test_api = Blueprint('test_api', __name__)

@test_api.route('/test/<string:name>')
def testfnn(name):
    task = celery.send_task('CeleryTask.reverse', args=[name])
    return task.id
The Celery tasks are written in a separate file.
celery_tasks.py
from celery import Celery
from celery.utils.log import get_task_logger
from decouple import config
import time

celery = Celery('tasks',
                broker=config('CELERY_BROKER_URL'),
                backend=config('CELERY_RESULT_BACKEND'))

class CeleryTask:
    @celery.task(name='CeleryTask.reverse')
    def reverse(string):
        time.sleep(25)
        return string[::-1]
.env
CELERY_BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
Dockerfile
FROM tiangolo/uwsgi-nginx:python3.8
RUN apt-get update
WORKDIR /app
ENV PYTHONUNBUFFERED 1
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN python -m pip install --upgrade pip
COPY ./requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt
COPY ./app /app
CMD ["python", "app/main.py"]
requirements.txt
Flask==2.0.3
celery==5.2.3
python-decouple==3.5
Flask-Mail==0.9.1
redis==4.0.2
SQLAlchemy==1.4.32
Folder Structure
Thanks in Advance

At the end of your docker-compose.yml you can add:
networks:
  your_net_name:
    name: your_net_name
And in each service:
    networks:
      - your_net_name
These two steps will put all the containers on the same network. By default docker creates one, but since I've had problems with it being auto-renamed, I think this approach gives you more control.
Finally, I'd also change your env variables to use the container address instead of localhost:
CELERY_BROKER_URL=redis://redis_addr/0
CELERY_RESULT_BACKEND=redis://redis_addr/0
So you'd also add this line to your redis service:
    hostname: redis_addr
This way the env var will resolve to whatever address docker has assigned to the container.
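For reference, a minimal sketch of how those pieces slot into the existing compose file (only the keys that change are shown; the volumes, ports, and environment entries from the original file stay as they are):
services:
  web:
    # ...existing web settings...
    networks:
      - your_net_name
  worker:
    # ...existing worker settings...
    networks:
      - your_net_name
  redis:
    image: redis:6.2.6
    hostname: redis_addr
    networks:
      - your_net_name
networks:
  your_net_name:
    name: your_net_name
Note that on a shared compose network the service name is also a valid hostname, so redis://redis:6379/0 in the .env file would work as well.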

Related

docker compose up raises ModuleNotFound

Trying to bring my FastAPI app together with docker compose. It works outside of Docker, but inside Docker it doesn't see my modules like endpoints and others. Not sure what I'm doing wrong...
from endpoints import pizza_endpoints, order_endpoints
ModuleNotFoundError: No module named 'endpoints'
endpoints is a folder with an __init__.py, and the modules it imports are .py files.
dockerfile:
FROM python:3.9-slim
COPY ./backend /backend
ENV PYTHONPATH "${PYTHONPATH}:/backend"
ENV PYTHONUNBUFFERED 1
WORKDIR /backend
EXPOSE 8000
RUN pip3 install -r requirements.txt
docker compose:
version: '3.9'
services:
  backend:
    build: .
    command: bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; uvicorn backend.main:app --host 0.0.0.0'
    ports:
      - 8008:8000
    environment:
      - DATABASE_URL=postgresql://postgres:postgres@db:5432/pypizza
    depends_on:
      - db
  db:
    image: postgres:13-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    expose:
      - 5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=pypizza
volumes:
  postgres_data:
main.py:
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from endpoints import pizza_endpoints, order_endpoints
from dependency import database
from SQL import models

models.Base.metadata.create_all(database.engine)

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"]
)

app.include_router(pizza_endpoints.router, prefix="/pizza")
app.include_router(order_endpoints.router, prefix="/order")

@app.get("/")
async def welcome_page():
    return {"message": "hello"}

Celery tasks not running in docker-compose

I have a docker-compose setup with three components: app, celery, and redis. These are implemented with Django REST framework.
I have seen this question several times on Stack Overflow and have tried all the solutions listed. However, the celery task is not running.
The celery container behaves the same way as the app container, that is, it starts the Django project, but it does not run the task.
docker-compose.yml
version: "3.8"
services:
app:
build: .
volumes:
- .:/django
ports:
- 8000:8000
image: app:django
container_name: myapp
command: python manage.py runserver 0.0.0.0:8000
depends_on:
- redis
redis:
image: redis:alpine
container_name: redis
ports:
- 6379:6379
volumes:
- ./redis/data:/data
restart: always
environment:
- REDIS_PASSWORD=
healthcheck:
test: redis-cli ping
interval: 1s
timeout: 3s
retries: 30
celery:
image: celery:3.1
container_name: celery
restart: unless-stopped
build:
context: .
dockerfile: Dockerfile
command: celery -A myapp worker -l INFO -c 8
volumes:
- .:/django
depends_on:
- redis
- app
links:
- redis
Dockerfile
FROM python:3.9
RUN useradd --create-home --shell /bin/bash django
USER django
ENV DockerHOME=/home/django
RUN mkdir -p $DockerHOME
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PIP_DISABLE_PIP_VERSION_CHECK 1
USER root
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
RUN apt-get -y update
RUN apt-get install -y google-chrome-stable
USER django
WORKDIR /home/django
COPY requirements.txt ./
# set path
ENV PATH=/home/django/.local/bin:$PATH
# Upgrade pip and install requirements.txt
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
# entrypoint
ENTRYPOINT ["/bin/bash", "-e", "docker-entrypoint.sh"]
docker-entrypoint.sh
# run migration first
python manage.py migrate
# create test dev user and test superuser
echo 'import create_test_users' | python manage.py shell
# start the server
python manage.py runserver 0.0.0.0:8000
celery.py
from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings')

app = Celery('myapp', broker='redis://redis:6379')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
settings.py
CELERY_BROKER_URL = os.getenv('REDIS_URL') # "redis://redis:6379"
CELERY_RESULT_BACKEND = os.getenv('REDIS_URL') # "redis://redis:6379"
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Africa/Nairobi'
Your docker-entrypoint.sh script unconditionally runs the Django server. Since you declare it as the image's ENTRYPOINT, the Compose command: is passed to it as arguments, but your script ignores them.
The best way to fix this is to pass the specific command ("run the Django server", "run a Celery worker") as the Dockerfile CMD or the Compose command:. The entrypoint script then ends with the shell command exec "$@" to run whatever command it was given.
#!/bin/sh
# run migration first
python manage.py migrate
# create test dev user and test superuser
echo 'import create_test_users' | python manage.py shell
# run the container CMD
exec "$@"
In your Dockerfile you need to declare a default CMD.
ENTRYPOINT ["./docker-entrypoint.sh"]
CMD python manage.py runserver 0.0.0.0:8000
Now in your Compose setup, if you don't specify a command:, the image's default CMD is used, but if you do, that runs instead. In both cases your entrypoint script runs first, and when it reaches the final exec "$@" line it executes the provided command.
That means you can delete the command: override from your app container. (You do need to leave it for the Celery container.) You can simplify this setup further by removing the image: and container_name: settings (Compose will pick reasonable defaults for both of these) and the volumes: mount that hides the image content.
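Putting it together, the relevant parts of the Compose file might look like this (a sketch; settings not shown here stay as in your original file):
services:
  app:
    build: .
    ports:
      - 8000:8000
    depends_on:
      - redis
    # no command: needed; the image's default CMD starts the Django server
  celery:
    build: .
    command: celery -A myapp worker -l INFO -c 8
    depends_on:
      - redis
      - app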

Changes on template files inside volume not showing on Flask frontend

I am using a docker-compose Flask implementation with the following configuration
docker-compose:
version: '3'
services:
  dashboard:
    build:
      context: dashboard/
      args:
        APP_PORT: "8080"
    container_name: dashboard
    ports:
      - "8080:8080"
    restart: unless-stopped
    environment:
      APP_ENV: "prod"
      APP_DEBUG: "False"
      APP_PORT: "8080"
    volumes:
      - ./dashboard/:/usr/src/app
dashboard/Dockerfile:
FROM python:3.7-slim-bullseye
ENV PYTHONUNBUFFERED True
ARG APP_PORT
ENV APP_HOME /usr/src/app
WORKDIR $APP_HOME
COPY requirements.txt ./requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
CMD exec gunicorn --bind :$APP_PORT --workers 1 --threads 8 --timeout 0 main:app
dashboard/main.py:
import os
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('index.html')
If I apply a change to the index.html file on my host system using VSCode, it does not show up when I refresh the page. However, if I get into the container with docker exec -it dashboard bash and cat /usr/src/app/templates/index.html, the change is there, since the volume is shared between the host and the container.
If I stop the container and run it again the changes are applied, but since I am working on the frontend, doing that all the time is pretty annoying.
Why don't the changes show in the browser even though they are replicated inside the container?
You should use TEMPLATES_AUTO_RELOAD=True.
From https://flask.palletsprojects.com/en/2.0.x/config/
It appears that the compiled templates are cached and won't pick up changes until you enable this option (by default it follows the app's debug setting, which is disabled here).
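For example, a minimal sketch of where this could go, based on the main.py above (assuming the same app object):
import os
from flask import Flask, render_template

app = Flask(__name__)
# Reload templates from disk when they change, even without debug mode
app.config['TEMPLATES_AUTO_RELOAD'] = True

@app.route('/')
def index():
    return render_template('index.html')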

Docker, Flask, SQLAlchemy: ValueError: invalid literal for int() with base 10: 'None'

I have a flask app that initializes successfully and connects to a Postgresql database. However, when I try to dockerize the app, I get the error message below. The "SQLALCHEMY_DATABASE_URI" is correct and I can connect to it, so I can't figure out where I have gone wrong.
docker-compose logs
app_1 | File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/url.py", line 60, in __init__
app_1 | self.port = int(port)
app_1 | ValueError: invalid literal for int() with base 10: 'None'
Postgres database connects successfully in Docker container
postgres_1 | LOG: database system is ready to accept connections
config.py
from os import environ
import os

RDS_USERNAME = environ.get('RDS_USERNAME')
RDS_PASSWORD = environ.get('RDS_PASSWORD')
RDS_HOSTNAME = environ.get('RDS_HOSTNAME')
RDS_PORT = environ.get('RDS_PORT')
RDS_DB_NAME = environ.get('RDS_DB_NAME')

SQLALCHEMY_DATABASE_URI = "postgresql+psycopg2://{username}:{password}@{hostname}:{port}/{dbname}"\
    .format(username=RDS_USERNAME, password=RDS_PASSWORD,
            hostname=RDS_HOSTNAME, port=RDS_PORT, dbname=RDS_DB_NAME)
flask_app.py (entry point)
def create_app():
    app = Flask(__name__, static_folder="./static", template_folder="./static")
    app.config.from_pyfile('./app/config.py', silent=True)
    register_blueprint(app)
    register_extension(app)
    with app.app_context():
        print(db)  # This prints the correct path for SQLALCHEMY_DATABASE_URI
        db.create_all()
        db.session.commit()
    return app

def register_blueprint(app):
    app.register_blueprint(view_blueprint)
    app.register_blueprint(race_blueprint)

def register_extension(app):
    db.init_app(app)
    migrate.init_app(app)

app = create_app()

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080, debug=True)
Dockerfile
FROM ubuntu
RUN apt-get update && apt-get -y upgrade
RUN apt-get install -y python-pip && pip install --upgrade pip
RUN mkdir /home/ubuntu
WORKDIR /home/ubuntu/celery-scheduler
ADD requirements.txt /home/ubuntu/celery-scheduler/
RUN pip install -r requirements.txt
COPY . /home/ubuntu/celery-scheduler
EXPOSE 5000
CMD ["python", "flask_app.py", "--host", "0.0.0.0"]
docker-compose.yml
version: '2'
services:
  app:
    restart: always
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
    depends_on:
      - postgres
  postgres:
    restart: always
    image: postgres:9.6
    environment:
      - POSTGRES_USER=${RDS_USERNAME}
      - POSTGRES_PASSWORD=${RDS_PASSWORD}
      - POSTGRES_HOSTNAME=${RDS_HOSTNAME}
      - POSTGRES_DB=${RDS_DB_NAME}
    ports:
      - "5432:5432"
You need to set the environment variables RDS_USERNAME, RDS_PASSWORD, RDS_HOSTNAME, RDS_PORT, and RDS_DB_NAME in the Dockerfile using ENV, for example:
ENV RDS_PORT 5432
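For illustration, a sketch of what that could look like in the Dockerfile (all values here are placeholders; baking real credentials into an image is usually undesirable, which is why the answer below moves them into a .env file instead):
ENV RDS_USERNAME example_user
ENV RDS_PASSWORD example_password
ENV RDS_HOSTNAME postgres
ENV RDS_PORT 5432
ENV RDS_DB_NAME example_db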
Answer:
1) Create a .env file with the variable definitions. (I assumed the env variables would be 'pulled' from .bash_profile, but this is not the case. Remember to add .env to .gitignore for privacy.)
RDS_USERNAME=xxx
RDS_PASSWORD=xxx
2) Specify the environment variables in docker-compose under app.
docker-compose.yml
services:
  app:
    restart: always
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - RDS_USERNAME=${RDS_USERNAME}
      - RDS_PASSWORD=${RDS_PASSWORD}
      - RDS_HOSTNAME=${RDS_HOSTNAME}
      - RDS_DB_NAME=${RDS_DB_NAME}
    volumes:
      - .:/app
    depends_on:
      - postgres

Celery worker in docker won't get correct message broker

I'm creating a flask service using an app factory pattern and I need to use celery for async tasks. I'm also using docker and docker-compose to contain and run everything. My structure looks like this:
server
|
+-- manage.py
+-- docker-compose.yml
+-- requirements.txt
+-- Dockerfile
|
+-- project
    |
    +-- __init__.py
    |
    +-- api
        |
        +-- tasks.py
My tasks.py file looks like this:
from project import celery_app

@celery_app.task
def celery_check(test):
    print(test)
I call manage.py to run which looks like this:
# manage.py
from flask_script import Manager
from project import create_app

app = create_app()
manager = Manager(app)

if __name__ == '__main__':
    manager.run()
And my __init__.py looks like this:
# project/__init__.py
import os
import json

from flask_mongoalchemy import MongoAlchemy
from flask_cas import CAS
from flask import Flask
from itsdangerous import JSONWebSignatureSerializer as JWT
from flask_httpauth import HTTPTokenAuth
from celery import Celery

# instantiate the database and CAS
db = MongoAlchemy()
cas = CAS()

# Auth stuff (ReplaceMe is replaced below in create_app())
jwt = JWT("ReplaceMe")
auth = HTTPTokenAuth('Bearer')

celery_app = Celery(__name__, broker=os.environ.get("CELERY_BROKER_URL"))

def create_app():
    # instantiate the app
    app = Flask(__name__, template_folder='client/templates', static_folder='client/static')

    # set config
    app_settings = os.getenv('APP_SETTINGS')
    app.config.from_object(app_settings)

    # Send new static files every time if debug is enabled
    if app.debug:
        app.config['SEND_FILE_MAX_AGE_DEFAULT'] = 0

    # Get the secret keys
    parse_secret(app.config['CONFIG_FILE'], app)

    celery_app.conf.update(app.config)
    print(celery_app.conf)

    # set up extensions
    db.init_app(app)
    cas.init_app(app)

    # Replace the secret key with the app's
    jwt.secret_key = app.config["SECRET_KEY"]

    parse_config(app.config['CONFIG_FILE'])

    # register blueprints
    from project.api.views import twist_blueprint
    app.register_blueprint(twist_blueprint)

    return app
In my docker-compose I start a worker and define some environment variables like this:
version: '2.1'
services:
  twist-service:
    container_name: twist-service
    build: .
    volumes:
      - '.:/usr/src/app'
    ports:
      - 5001:5000  # expose ports - HOST:CONTAINER
    environment:
      - APP_SETTINGS=project.config.DevelopmentConfig
      - DATABASE_NAME_TESTING=testing
      - DATABASE_NAME_DEV=dev
      - DATABASE_URL=twist-database
      - CONFIG_FILE=./project/default_config.json
      - MONGO_PASSWORD=user
      - CELERY_RESULT_BACKEND=redis://redis:6379
      - CELERY_BROKER_URL=redis://redis:6379/0
      - MONGO_PORT=27017
    depends_on:
      - celery
      - twist-database
  celery:
    container_name: celery
    build: .
    command: celery -A project.api.tasks --loglevel=debug worker
    volumes:
      - '.:/usr/src/app'
  twist-database:
    image: mongo:latest
    container_name: "twist-database"
    environment:
      - MONGO_DATA_DIR=/data/db
      - MONGO_USER=mongo
    volumes:
      - /data/db
    ports:
      - 27017:27017  # expose ports - HOST:CONTAINER
    command: mongod
  redis:
    image: "redis:alpine"
    command: redis-server
    volumes:
      - '/redis'
    ports:
      - '6379:6379'
However, when I run my docker-compose file and the containers come up, I end up with this in the celery worker logs:
[2017-07-20 16:53:06,721: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
Which means the worker is ignoring the redis configuration set when celery was created and is trying to use the default rabbitmq broker instead. I've tried changing project.api.tasks to project and project.celery_app, but to no avail.
It seems to me like the celery service should have the environment variables CELERY_RESULT_BACKEND and CELERY_BROKER_URL as well: without them, os.environ.get("CELERY_BROKER_URL") returns None in the worker container and Celery falls back to its default amqp broker on localhost.
You also need to make sure the docker services can reach each other. The most straightforward mechanism for this is to add a networks section to your docker-compose.yml.
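For example, a sketch of the celery service with the broker settings passed in (values copied from the twist-service section above, plus a depends_on so redis starts first):
  celery:
    container_name: celery
    build: .
    command: celery -A project.api.tasks --loglevel=debug worker
    environment:
      - CELERY_RESULT_BACKEND=redis://redis:6379
      - CELERY_BROKER_URL=redis://redis:6379/0
    volumes:
      - '.:/usr/src/app'
    depends_on:
      - redis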
