Celery tasks not running in docker-compose - python

I have a docker-compose setup with three services: app, celery, and redis. The project is implemented with Django REST Framework.
I have seen this question several times on Stack Overflow and have tried all the solutions listed. However, the Celery task is still not running.
The celery container behaves the same as the app container: it starts the Django project, but it never runs the task.
docker-compose.yml
version: "3.8"
services:
app:
build: .
volumes:
- .:/django
ports:
- 8000:8000
image: app:django
container_name: myapp
command: python manage.py runserver 0.0.0.0:8000
depends_on:
- redis
redis:
image: redis:alpine
container_name: redis
ports:
- 6379:6379
volumes:
- ./redis/data:/data
restart: always
environment:
- REDIS_PASSWORD=
healthcheck:
test: redis-cli ping
interval: 1s
timeout: 3s
retries: 30
celery:
image: celery:3.1
container_name: celery
restart: unless-stopped
build:
context: .
dockerfile: Dockerfile
command: celery -A myapp worker -l INFO -c 8
volumes:
- .:/django
depends_on:
- redis
- app
links:
- redis
Dockerfile
FROM python:3.9
RUN useradd --create-home --shell /bin/bash django
USER django
ENV DockerHOME=/home/django
RUN mkdir -p $DockerHOME
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PIP_DISABLE_PIP_VERSION_CHECK 1
USER root
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
RUN apt-get -y update
RUN apt-get install -y google-chrome-stable
USER django
WORKDIR /home/django
COPY requirements.txt ./
# set path
ENV PATH=/home/django/.local/bin:$PATH
# Upgrade pip and install requirements.txt
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
# entrypoint
ENTRYPOINT ["/bin/bash", "-e", "docker-entrypoint.sh"]
docker-entrypoint.sh
# run migration first
python manage.py migrate
# create test dev user and test superuser
echo 'import create_test_users' | python manage.py shell
# start the server
python manage.py runserver 0.0.0.0:8000
celery.py
from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings')
app = Celery('myapp', broker='redis://redis:6379')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
settings.py
CELERY_BROKER_URL = os.getenv('REDIS_URL') # "redis://redis:6379"
CELERY_RESULT_BACKEND = os.getenv('REDIS_URL') # "redis://redis:6379"
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Africa/Nairobi'

Your docker-entrypoint.sh script unconditionally runs the Django server. Since you declare it as the image's ENTRYPOINT, the Compose command: is passed to it as arguments but your script ignores these.
The best way to fix this is to pass the specific command ("run the Django server", "run a Celery worker") as the Dockerfile CMD or Compose command:. The entrypoint script should end with the shell command exec "$@" to run that command.
#!/bin/sh
python manage.py migrate
echo 'import create_test_users' | python manage.py shell
# run the container CMD
exec "$@"
In your Dockerfile you need to declare a default CMD.
ENTRYPOINT ["./docker-entrypoint.sh"]
CMD python manage.py runserver 0.0.0.0:8000
Now in your Compose setup, if you don't specify a command:, the default CMD is used, but if you do, that command runs instead. In both cases your entrypoint script runs first, and when it reaches the final exec "$@" line it hands off to the provided command.
That means you can delete the command: override from your app container. (You do need to leave it for the Celery container.) You can simplify this setup further by removing the image: and container_name: settings (Compose will pick reasonable defaults for both of these) and the volumes: mount that hides the image content.
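For reference, a trimmed-down Compose file along those lines might look roughly like this (a sketch based on your service names, not a drop-in replacement; the redis environment and healthcheck settings are omitted for brevity):
version: "3.8"
services:
  app:
    build: .
    ports:
      - 8000:8000
    depends_on:
      - redis
    # no command: needed; the image's default CMD starts the Django server
  redis:
    image: redis:alpine
    volumes:
      - ./redis/data:/data
    restart: always
  celery:
    build: .
    restart: unless-stopped
    # command: stays here to override the default CMD with a worker
    command: celery -A myapp worker -l INFO -c 8
    depends_on:
      - redis
      - app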

Related

Docker is taking wrong settings file when creating image

I have a Django application where my settings are placed in a folder named settings. Inside this folder I have __init__.py, base.py, deployment.py and production.py.
My wsgi.py looks like this:
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp_settings.settings.production")
application = get_wsgi_application()
My Dockerfile:
FROM python:3.8
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
RUN mkdir /code
COPY . /code/
WORKDIR /code
RUN pip install --no-cache-dir git+https://github.com/ByteInternet/pip-install-privates.git@master#egg=pip-install-privates
RUN pip install --upgrade pip
RUN pip_install_privates --token {GITHUB-TOKEN} /code/requirements.txt
RUN playwright install --with-deps chromium
RUN playwright install-deps
RUN touch /code/logs/celery.log
RUN chmod +x /code/logs/celery.log
EXPOSE 80
My docker-compose file:
version: '3'
services:
  app:
    container_name: myapp_django_app
    build:
      context: ./backend
      dockerfile: Dockerfile
    restart: always
    command: gunicorn myapp_settings.wsgi:application --bind 0.0.0.0:80
    networks:
      - myapp_default
    ports:
      - "80:80"
    env_file:
      - ./.env
Problem
Every time I create the image, Docker takes the settings from development.py instead of production.py. I tried to change the settings using this command:
set DJANGO_SETTINGS_MODULE=myapp_settings.settings.production
It works fine when using conda/venv and I am able to switch to production mode; however, when creating the Docker image, the production.py file is not taken into consideration at all.
Question
Is there anything else I should be aware of that causes issues like this and how can I fix it?
YES, there is something else you need to check:
When you run your docker container you can specify environment variables.
If you declare the environment variable DJANGO_SETTINGS_MODULE=myapp_settings.settings.development, it will override what you specified inside wsgi.py!
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp_settings.settings.production")
The code above basically means: use "myapp_settings.settings.production" as the default, but if the environment variable DJANGO_SETTINGS_MODULE is already declared, take the value of that variable instead.
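A quick standalone illustration of that behaviour (a minimal sketch, not part of your project's code):
import os

# simulate Docker injecting the variable, e.g. via -e or an env_file
os.environ["DJANGO_SETTINGS_MODULE"] = "myapp_settings.settings.development"

# setdefault only assigns when the key is missing, so the injected value wins
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp_settings.settings.production")

print(os.environ["DJANGO_SETTINGS_MODULE"])  # prints myapp_settings.settings.development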
Edit 1
Maybe you can try specifying the environment variable inside your docker-compose file:
version: '3'
services:
  app:
    environment:
      - DJANGO_SETTINGS_MODULE=myapp_settings.settings.production
    container_name: myapp_django_app
    build:
      context: ./backend
      dockerfile: Dockerfile
    restart: always
    command: gunicorn myapp_settings.wsgi:application --bind 0.0.0.0:80
    networks:
      - myapp_default
    ports:
      - "80:80"
    env_file:
      - ./.env
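To check which value the container actually ends up with, you could run something like the following (assuming the service is still named app):
docker-compose run --rm app env | grep DJANGO_SETTINGS_MODULE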

Celery Task not working with redis in flask docker container

I am trying to run a Celery task in a Flask Docker container, and I get an error like the one below when the task is executed:
web_1 | sock.connect(socket_address)
web_1 | OSError: [Errno 99] Cannot assign requested address
web_1 |
web_1 | During handling of the above exception, another exception occurred: **[shown below]**
web_1 | File "/opt/venv/lib/python3.8/site-packages/redis/connection.py", line 571, in connect
web_1 | raise ConnectionError(self._error_message(e))
web_1 | redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Cannot assign requested address.
Without the Celery task, the application works fine.
docker-compose.yml
version: '3'
services:
  web:
    build: ./
    volumes:
      - ./app:/app
    ports:
      - "80:80"
    environment:
      - FLASK_APP=app/main.py
      - FLASK_DEBUG=1
      - 'RUN=flask run --host=0.0.0.0 --port=80'
    depends_on:
      - redis
  redis:
    container_name: redis
    image: redis:6.2.6
    ports:
      - "6379:6379"
    expose:
      - "6379"
  worker:
    build:
      context: ./
    hostname: worker
    command: "cd /app/routes && celery -A celery_tasks.celery worker --loglevel=info"
    volumes:
      - ./app:/app
    links:
      - redis
    depends_on:
      - redis
main.py
from flask import Flask
from instance import config, exts
from decouple import config as con

def create_app(config_class=config.Config):
    app = Flask(__name__)
    app.config.from_object(config.Config)
    app.secret_key = con('flask_secret_key')
    exts.mail.init_app(app)
    from routes.test_route import test_api
    app.register_blueprint(test_api)
    return app

app = create_app()

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True, port=80)
I am using Flask blueprints to split the API routes.
test_route.py
from flask import Flask, render_template, Blueprint
from instance.exts import celery

test_api = Blueprint('test_api', __name__)

@test_api.route('/test/<string:name>')
def testfnn(name):
    task = celery.send_task('CeleryTask.reverse', args=[name])
    return task.id
The Celery tasks are written in a separate file.
celery_tasks.py
from celery import Celery
from celery.utils.log import get_task_logger
from decouple import config
import time

celery = Celery('tasks',
                broker=config('CELERY_BROKER_URL'),
                backend=config('CELERY_RESULT_BACKEND'))

class CeleryTask:

    @celery.task(name='CeleryTask.reverse')
    def reverse(string):
        time.sleep(25)
        return string[::-1]
.env
CELERY_BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
Dockerfile
FROM tiangolo/uwsgi-nginx:python3.8
RUN apt-get update
WORKDIR /app
ENV PYTHONUNBUFFERED 1
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN python -m pip install --upgrade pip
COPY ./requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt
COPY ./app /app
CMD ["python", "app/main.py"]
requirements.txt
Flask==2.0.3
celery==5.2.3
python-decouple==3.5
Flask-Mail==0.9.1
redis==4.0.2
SQLAlchemy==1.4.32
Folder Structure
Thanks in Advance
At the end of your docker-compose.yml you can add:
networks:
  your_net_name:
    name: your_net_name
And in each container:
networks:
  - your_net_name
These two steps will put all the containers on the same network. Docker creates one by default, but since I've had problems with the auto-generated names, I think this approach gives you more control.
Finally, I'd also change your env variables to use the container's address:
CELERY_BROKER_URL=redis://redis_addr/0
CELERY_RESULT_BACKEND=redis://redis_addr/0
So you'd also add this section to your redis container:
hostname: redis_addr
This way the env var will get whatever address docker has assigned to the container.
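Putting the pieces together, the relevant parts of the Compose file could look roughly like the sketch below (the worker's command: and the other services are unchanged and omitted here):
services:
  redis:
    image: redis:6.2.6
    hostname: redis_addr
    networks:
      - your_net_name
  worker:
    build:
      context: ./
    volumes:
      - ./app:/app
    # broker/backend now point at the redis hostname instead of localhost
    environment:
      - CELERY_BROKER_URL=redis://redis_addr:6379/0
      - CELERY_RESULT_BACKEND=redis://redis_addr:6379/0
    networks:
      - your_net_name
networks:
  your_net_name:
    name: your_net_name
The web service needs the same environment: and networks: entries, since it builds the Celery client that raised the connection error above. Setting the variables at the container level should take precedence over the .env file, as python-decouple reads real environment variables before falling back to .env.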

Docker compose executable file not found in $PATH": unknown

I'm setting up a Django project with Docker Compose, but I'm having a problem.
Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 0
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
compose.yml:
version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./docker/data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=sampledb
      - POSTGRES_USER=sampleuser
      - POSTGRES_PASSWORD=samplesecret
      - POSTGRES_INITDB_ARGS=--encoding=UTF-8
  django:
    build: .
    environment:
      - DJANGO_DEBUG=True
      - DJANGO_DB_HOST=db
      - DJANGO_DB_PORT=5432
      - DJANGO_DB_NAME=sampledb
      - DJANGO_DB_USERNAME=sampleuser
      - DJANGO_DB_PASSWORD=samplesecret
      - DJANGO_SECRET_KEY=dev_secret_key
    ports:
      - "8000:8000"
    command:
      - python3 manage.py runserver
    volumes:
      - .:/code
error :
ERROR: for django Cannot start service django: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"python3 manage.py runserver\": executable file not found in $PATH": unknown
At first, I thought the manage.py command itself was wrong.
But when I tried the command ls, to my surprise, it succeeded.
Then I tried the ls -al command, and it failed.
I think the problem is that the command contains spaces.
How can I fix it?
When you use list syntax in the docker-compose.yml file, each item is taken as a word. You're running the shell equivalent of
'python3 manage.py runserver'
You can either break this up into separate words yourself
command:
  - python3
  - manage.py
  - runserver
or have Docker Compose do it for you
command: python3 manage.py runserver
In general fixed properties of the image like this should be specified in the Dockerfile, not in the docker-compose.yml. Every time you run this image you're going to want to run this same command, and you're going to want to run the code built into the image. There are two syntaxes, with the same basic difference:
# Explicitly write out the words
CMD ["python3", "manage.py", "runserver"]
# Docker wraps in sh -c '...' which splits words for you
CMD python3 manage.py runserver
With the code built into the image and a reasonable default command defined there, you can delete the volumes: and command: from your docker-compose.yml file.
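With the command baked into the image like that, the Compose service could shrink to roughly the following sketch (the environment values are the ones from your file):
django:
  build: .
  environment:
    - DJANGO_DEBUG=True
    - DJANGO_DB_HOST=db
    - DJANGO_DB_PORT=5432
    - DJANGO_DB_NAME=sampledb
    - DJANGO_DB_USERNAME=sampleuser
    - DJANGO_DB_PASSWORD=samplesecret
    - DJANGO_SECRET_KEY=dev_secret_key
  ports:
    - "8000:8000"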

Using Celery to send email in Flask Admin

I have been developing a Flask-Admin app which also has an API. Part of the app includes a function that sends an email when called. I was advised to use Celery to send the email.
I followed the advice on https://blog.miguelgrinberg.com/post/using-celery-with-flask
I added the following code:
config.py:
CELERY_BROKER_URL = "redis://redis:6379/0"
__init__.py:
celery = Celery(app.name, broker=app.config['CELERY_BROKER_URL'])
celery.conf.update(app.config)
Previously the code was:
@app.route('/api/postupdate', methods=['POST'])
@auth_token_required
def post_update():
    if not request.json[0]:
        return make_response(jsonify({'error': 'Request not in JSON'}), 400)
    updates = []
    for entry in request.json:
        updates.append({'trackingnumber': entry['trackingnumber'],
                        'date': datetime.strptime(entry['date'], '%Y-%m-%d %H:%M:%S'),
                        'status': entry['status'], 'location': entry['location']})
    send_email(updates)
    return make_response(jsonify({'success': 'Update added'}), 200)
I changed the line from send_email(updates) to send_email.delay(updates).
I then added @celery.task on top of def send_email().
However, now the emails are never sent. I am not even sure where to begin troubleshooting. No errors are thrown and the program continues as if it were successful.
Everything is in separate docker containers. Here is my docker compose file:
version: '2'
services:
  db:
    image: postgres
    environment:
      - PG_PASSWORD=postgres
  nginx:
    image: nginx:latest
    links:
      - dev:uwsgi
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - ./x/nginx/nginx.conf:/etc/nginx/nginx.conf
  redis:
    image: redis
    ports:
      - "6379:6379"
  dev:
    build: ./x/
    volumes:
      - ./x/app:/code/app
    expose:
      - "3031"
    depends_on:
      - db
    links:
      - redis
  scraper:
    build: ./Scraper/
    volumes:
      - ./Scraper/scraper.py:/code/scraper.py
      - ./Scraper/x.py:/code/x.py
    depends_on:
      - db
      - dev
After the advice I received, I made the following amendments:
I added the following to the docker-compose.yaml file as a new service:
celery:
  build: ./Worker/
  links:
    - redis
  volumes:
    - ./x/app:/code/app
run.sh:
celery worker -A app.celery
The new dockerfile:
FROM ubuntu:latest
ENV TERM xterm
RUN apt-get update -y && apt-get install -y python3-pip python3.5-dev build-essential libpq-dev nano
ADD ./requirements /code/requirements
ADD run.sh /code/run.sh
RUN pip3 install --upgrade pip
RUN pip3 install -r /code/requirements/base.txt
WORKDIR /code
RUN chmod 777 run.sh
CMD "./run.sh"
This now results in the celery service exiting immediately with code 1.
What your setup has done so far is put your email tasks in a queue managed by Redis, but you don't execute the tasks yet.
To execute your tasks you need to run an additional Docker container in which you run the Celery worker process (to execute the tasks)!
In the blog post
https://blog.miguelgrinberg.com/post/using-celery-with-flask
a few lines before the Conclusion, you will find the following instructions for running the Celery worker:
On the second terminal run a Celery worker. This is done with the celery
command, which is installed in your virtual environment. Since this is the
process that will be sending out emails, the MAIL_USERNAME and MAIL_PASSWORD
environment variables must be set to a valid Gmail account before starting
the worker:
$ export MAIL_USERNAME=
$ export MAIL_PASSWORD=
$ source venv/bin/activate
(venv) $ celery worker -A app.celery --loglevel=info
Your new container has to be linked to the redis container and has to contain the same volume as your dev container.
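In Compose terms, that worker service could look something like the sketch below. It reuses the image and volume from your dev service rather than a separate Worker/ build (one option, not the only one), and it assumes that image has Celery installed and no conflicting ENTRYPOINT; fill in the MAIL_* values with a valid account before starting:
celery:
  build: ./x/
  command: celery worker -A app.celery --loglevel=info
  volumes:
    - ./x/app:/code/app
  environment:
    - MAIL_USERNAME=   # set to a valid account
    - MAIL_PASSWORD=   # set to the matching password
  links:
    - redis
  depends_on:
    - redis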

dockerize does not delay the container initialization

I am now preparing the images for my project. I use dockerize to control the initialization order. I am not sure whether hardcoding the IP address given by Docker is the way to go or not.
Problem:
The backend does not wait until the database finishes initializing.
The terminal says:
backend_1 | django.db.utils.OperationalError: could not connect to server: Connection refused
backend_1 | Is the server running on host "sakahama_db" (172.21.0.2) and accepting
backend_1 | TCP/IP connections on port 5432?
Here are my files:
devdb.dockerfile
FROM postgres:9.5
# Install hstore extension
COPY ./Dockerfiles/hstore.sql /docker-entrypoint-initdb.d
RUN mkdir -p /var/lib/postgresql-static/data
ENV PGDATA /var/lib/postgresql-static/data
hstore.sql
create extension hstore;
backend.dockerfile
FROM python:2
RUN apt-get update && apt-get install -y wget
ENV DOCKERIZE_VERSION v0.2.0
RUN wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& tar -C /usr/local/bin -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
COPY requirements ./requirements
RUN pip install -r requirements/local.txt
COPY . .
EXPOSE 8000
CMD echo "dockerize"
CMD ["dockerize", "-wait", "tcp://sakahama_db:5432"]
CMD echo "migrate"
CMD ["python", "sakahama/manage.py", "migrate"]
CMD echo "runserver"
CMD ["python", "sakahama/manage.py", "runserver", "0.0.0.0:8000"]
docker-compose.yml
version: "2"
services:
backend:
build:
context: .
dockerfile: Dockerfiles/backend.dockerfile
restart: "always"
environment:
DATABASE_URL: postgres://username:password#sakahama_db:5432/sakahama
REDISCLOUD_URL: redis://redis:6379/0
links:
- sakahama_db
ports:
- "9000:8000"
sakahama_db:
build:
context: .
dockerfile: Dockerfiles/devdb.dockerfile
environment:
POSTGRES_USER: username
POSTGRES_PASSWORD: password
POSTGRES_DB: sakahama
ports:
- "5435:5432"
redis:
image: redis
expose:
- "6379"
Question: How to use dockerize properly?
Update:
I tried a temporary solution like this, but it does not work.
backend-entrypoint.sh
#!/bin/bash
echo "dockerize"
dockerize -wait tcp://sakahama_db:5432
echo "migrate"
python sakahama/manage.py migrate
echo "runserver"
python sakahama/manage.py runserver 0.0.0.0:8000
and in docker-compose.yml I added one line:
command: ["sh", "Dockerfiles/backend-entrypoint.sh"]
When your Postgres container is up, it starts to accept the TCP packets you send with the command dockerize -wait tcp://sakahama_db:5432, but that does not mean the Postgres service is ready. It takes some time to load, set up users and passwords, create the database or load existing databases, and apply all the grants needed.
I had a similar issue with Flask and MySQL. I created an sh script like you did, and inside it I added a simple loop to check whether the service was up before starting the Flask application.
I am not a shell scripting expert, but here is the script:
# testing if the database is up
mysql -h database -uroot -proot databasename
ISDBUP=$?
while [[ $ISDBUP != "0" ]]; do
    echo "database is not up yet, waiting for 5 seconds"
    sleep 5;
    mysql -h database -uroot -proot databasename -e "SELECT 1;";
    ISDBUP=$?
done
# starting the application
python server.py app
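Since your database is Postgres rather than MySQL, a similar wait loop in your backend-entrypoint.sh could use pg_isready instead (a sketch; it assumes the postgresql-client package is installed in the backend image):
#!/bin/bash
# wait until Postgres is accepting connections
until pg_isready -h sakahama_db -p 5432 -U username; do
    echo "database is not up yet, waiting for 5 seconds"
    sleep 5
done
echo "migrate"
python sakahama/manage.py migrate
echo "runserver"
python sakahama/manage.py runserver 0.0.0.0:8000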
