Using Celery to send email in Flask Admin - python

I have been developing a Flask-Admin app which also has an API. Part of the app includes a function that sends an email when called. I was advised to use Celery to send the email.
I followed the advice on https://blog.miguelgrinberg.com/post/using-celery-with-flask
I added the following code:
config.py:
CELERY_BROKER_URL = "redis://redis:6379/0"
__init__.py:
celery = Celery(app.name, broker=app.config['CELERY_BROKER_URL'])
celery.conf.update(app.config)
Previously the code was:
@app.route('/api/postupdate', methods=['POST'])
@auth_token_required
def post_update():
    if not request.json[0]:
        return make_response(jsonify({'error': 'Request not in JSON'}), 400)
    updates = []
    for entry in request.json:
        updates.append({'trackingnumber': entry['trackingnumber'],
                        'date': datetime.strptime(entry['date'], '%Y-%m-%d %H:%M:%S'),
                        'status': entry['status'],
                        'location': entry['location']})
    send_email(updates)
    return make_response(jsonify({'success': 'Update added'}), 200)
I changed the call from send_email(updates) to send_email.delay(updates).
I then added @celery.task on top of def send_email().
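For reference, a simplified sketch of what the decorated task looks like after the change (the real mail-building code is longer; the mail object from Flask-Mail and the recipient address here are placeholders):
from flask_mail import Message
from app import celery, mail  # assumed imports; the real layout differs

@celery.task
def send_email(updates):
    # Build a plain-text summary of the tracking updates and send it
    body = "\n".join(
        "{trackingnumber}: {status} at {location} ({date})".format(**u)
        for u in updates
    )
    msg = Message(subject="Tracking updates",
                  recipients=["ops@example.com"],  # placeholder recipient
                  body=body)
    mail.send(msg)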
However, now the emails are never sent. I am not even sure where to begin troubleshooting. No errors are thrown and the program continues as if it were successful.
Everything is in separate docker containers. Here is my docker compose file:
version: '2'
services:
  db:
    image: postgres
    environment:
      - PG_PASSWORD=postgres
  nginx:
    image: nginx:latest
    links:
      - dev:uwsgi
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - ./x/nginx/nginx.conf:/etc/nginx/nginx.conf
  redis:
    image: redis
    ports:
      - "6379:6379"
  dev:
    build: ./x/
    volumes:
      - ./x/app:/code/app
    expose:
      - "3031"
    depends_on:
      - db
    links:
      - redis
  scraper:
    build: ./Scraper/
    volumes:
      - ./Scraper/scraper.py:/code/scraper.py
      - ./Scraper/x.py:/code/x.py
    depends_on:
      - db
      - dev
After the advice I received, I made the following amendments:
I added the following to the docker-compose.yaml file as a new service:
celery:
  build: ./Worker/
  links:
    - redis
  volumes:
    - ./x/app:/code/app
run.sh:
celery worker -A app.celery
The new dockerfile:
FROM ubuntu:latest
ENV TERM xterm
RUN apt-get update -y && apt-get install -y python3-pip python3.5-dev build-essential libpq-dev nano
ADD ./requirements /code/requirements
ADD run.sh /code/run.sh
RUN pip3 install --upgrade pip
RUN pip3 install -r /code/requirements/base.txt
WORKDIR /code
RUN chmod 777 run.sh
CMD "./run.sh"
This now results in the celery service exiting immediately with exit code 1.

What your setup has done so far is put your email tasks in a queue managed by Redis, but nothing executes them yet.
To execute your tasks you need to run an additional Docker container that runs the celery worker process (to execute the tasks)!
In the blog post
https://blog.miguelgrinberg.com/post/using-celery-with-flask
a few lines before the Conclusion you will find the following instructions for running the Celery worker:
On the second terminal run a Celery worker. This is done with the celery
command, which is installed in your virtual environment. Since this is the
process that will be sending out emails, the MAIL_USERNAME and MAIL_PASSWORD
environment variables must be set to a valid Gmail account before starting
the worker:
$ export MAIL_USERNAME=<your-gmail-username>
$ export MAIL_PASSWORD=<your-gmail-password>
$ source venv/bin/activate
(venv) $ celery worker -A app.celery --loglevel=info
Your new container has to be linked to the redis container and has to mount the same volume as your dev container.
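In a Compose setup that can look roughly like the service below (a sketch; the build context and volume mount are assumptions copied from your dev service):
celery-worker:
  build: ./x/
  command: celery worker -A app.celery --loglevel=info
  links:
    - redis
  volumes:
    - ./x/app:/code/app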

Related

How to run a command in docker-compose after a service run?

I have searched but couldn't find a solution to my problem. My docker-compose.yml file is below.
version: '2.1'
services:
  mongo:
    image: mongo_db
    build: mongo_image
    container_name: my_mongodb
    restart: always
    networks:
      - isolated_network
    ports:
      - "27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=root_pw
    entrypoint: ["python3", "/tmp/script/get_api_to_mongodb.py", "&"]
networks:
  isolated_network:
So here I use a custom image, and my Dockerfile is shown below.
FROM mongo:latest
RUN apt-get update -y
RUN apt-get install python3-pip -y
RUN pip3 install requests
RUN pip3 install pymongo
RUN apt-get clean -y
RUN mkdir -p /tmp/script
COPY get_api_to_mongodb.py /tmp/script/get_api_to_mongodb.py
#CMD ["python3","/tmp/script/get_api_to_mongodb.py","&"]
Here I want to create a container which has MongoDB, and after the container is created I want to collect data using an API and send it to MongoDB. But when I run the Python script, MongoDB is not yet initialized. So I need to run my script after the container is created and right after MongoDB has initialized. Thanks in advance.
You should run this script as a separate container. It's not "part of the database", like an extension or plugin, but rather an ordinary client process that happens to connect to the database and that you want to run relatively early on. In general, if you're thinking about trying to launch a background process in a container, it's often a better approach to run foreground processes in two separate containers.
This setup means you can use a simpler Dockerfile that starts from an image with Python preinstalled:
FROM python:3.10
RUN pip install requests pymongo
WORKDIR /app
COPY get_api_to_mongodb.py .
CMD ["./get_api_to_mongodb.py"]
Then in your Compose setup, declare this as a second container alongside the first one. Since the script is in its own image, you can use the unmodified mongo image.
version: '2.4'
services:
  mongo:
    image: mongo:latest
    restart: always
    ports:
      - "27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=root_pw
  loader:
    build: .
    restart: on-failure
    depends_on:
      - mongo
    # environment:
    #   - MONGO_HOST=mongo
    #   - MONGO_USERNAME=root
    #   - MONGO_PASSWORD=root_pw
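If you use the commented-out environment: block, the loader script can pick those values up with os.environ; a minimal sketch (the database name, collection name, and API URL are hypothetical):
import os
import requests
from pymongo import MongoClient

host = os.environ.get('MONGO_HOST', 'mongo')
username = os.environ.get('MONGO_USERNAME', 'root')
password = os.environ.get('MONGO_PASSWORD', 'root_pw')

client = MongoClient(host=host, username=username, password=password)
collection = client['mydb']['api_data']  # hypothetical database/collection names

payload = requests.get('https://example.com/api').json()  # hypothetical API endpoint
collection.insert_many(payload if isinstance(payload, list) else [payload])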
Note that the loader will re-run every time you run docker-compose up -d. You may also have to wait for the database to finish its initialization before you can run the loader process; see Docker Compose wait for container X before starting Y.
It's likely you have an existing Compose service for your real application
version: '2.4'
services:
  mongo: { ... }
  app:
    build: .
    ...
If that image contains the loader script, then you can docker-compose run it. This launches a new temporary container, using most of the attributes from the Compose service declaration, but you provide an alternate command: and the ports: are ignored.
docker-compose run app ./get_api_to_mongodb.py
One might ideally like a workflow where first the database container starts; then once it's accepting requests, run the loader script as a temporary container; then once that's completed start the main application server. This is mostly beyond Compose's capabilities, though you can probably get close with a combination of extended depends_on: declarations and a healthcheck: for the database.
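A rough sketch of that combination with the 2.4 file format (the healthcheck command assumes a recent mongo image that ships mongosh; tune the intervals to taste):
services:
  mongo:
    image: mongo:latest
    healthcheck:
      test: ["CMD", "mongosh", "--quiet", "--eval", "db.adminCommand('ping')"]
      interval: 5s
      timeout: 3s
      retries: 30
  loader:
    build: .
    depends_on:
      mongo:
        condition: service_healthy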

Celery tasks not running in docker-compose

I have a docker-compose where there are three components: app, celery, and redis. These are implemented in DjangoRest.
I have seen this question several times on stackoverflow and have tried all the solutions listed. However, the celery task is not running.
The celery container behaves the same as the app container, that is, it starts the Django project, but it does not run the task.
docker-compose.yml
version: "3.8"
services:
app:
build: .
volumes:
- .:/django
ports:
- 8000:8000
image: app:django
container_name: myapp
command: python manage.py runserver 0.0.0.0:8000
depends_on:
- redis
redis:
image: redis:alpine
container_name: redis
ports:
- 6379:6379
volumes:
- ./redis/data:/data
restart: always
environment:
- REDIS_PASSWORD=
healthcheck:
test: redis-cli ping
interval: 1s
timeout: 3s
retries: 30
celery:
image: celery:3.1
container_name: celery
restart: unless-stopped
build:
context: .
dockerfile: Dockerfile
command: celery -A myapp worker -l INFO -c 8
volumes:
- .:/django
depends_on:
- redis
- app
links:
- redis
Dockerfile
FROM python:3.9
RUN useradd --create-home --shell /bin/bash django
USER django
ENV DockerHOME=/home/django
RUN mkdir -p $DockerHOME
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PIP_DISABLE_PIP_VERSION_CHECK 1
USER root
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
RUN apt-get -y update
RUN apt-get install -y google-chrome-stable
USER django
WORKDIR /home/django
COPY requirements.txt ./
# set path
ENV PATH=/home/django/.local/bin:$PATH
# Upgrade pip and install requirements.txt
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
# entrypoint
ENTRYPOINT ["/bin/bash", "-e", "docker-entrypoint.sh"]
docker-entrypoint.sh
# run migration first
python manage.py migrate
# create test dev user and test superuser
echo 'import create_test_users' | python manage.py shell
# start the server
python manage.py runserver 0.0.0.0:8000
celery.py
from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings')
app = Celery('myapp', broker='redis://redis:6379')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
settings.py
CELERY_BROKER_URL = os.getenv('REDIS_URL') # "redis://redis:6379"
CELERY_RESULT_BACKEND = os.getenv('REDIS_URL')  # "redis://redis:6379"
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Africa/Nairobi'
Your docker-entrypoint.sh script unconditionally runs the Django server. Since you declare it as the image's ENTRYPOINT, the Compose command: is passed to it as arguments but your script ignores these.
The best way to fix this is to pass the specific command ("run the Django server", "run a Celery worker") as the Dockerfile CMD or Compose command:. The entrypoint script then ends with the shell command exec "$@" to run that command.
#!/bin/sh
python manage.py migrate
echo 'import create_test_users' | python manage.py shell
# run the container CMD
exec "$#"
In your Dockerfile you need to declare a default CMD.
ENTRYPOINT ["./docker-entrypoint.sh"]
CMD python manage.py runserver 0.0.0.0:8000
Now in your Compose setup, if you don't specify a command:, it will use that default CMD, but if you do, that will be run instead. In both cases your entrypoint script will run, and when it gets to the final exec "$@" line it will run the provided command.
That means you can delete the command: override from your app container. (You do need to leave it for the Celery container.) You can simplify this setup further by removing the image: and container_name: settings (Compose will pick reasonable defaults for both of these) and the volumes: mount that hides the image content.
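Put together, a trimmed-down sketch of the two application services might look like this (the Redis service stays as it is; the app container now relies on the default CMD):
app:
  build: .
  ports:
    - "8000:8000"
  depends_on:
    - redis
celery:
  build: .
  command: celery -A myapp worker -l INFO -c 8
  depends_on:
    - redis
    - app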

Celery Task not working with redis in flask docker container

I am trying to run a Celery task in a Flask Docker container, and I am getting an error like the one below when the celery task is executed:
web_1 | sock.connect(socket_address)
web_1 | OSError: [Errno 99] Cannot assign requested address
web_1 |
web_1 | During handling of the above exception, another exception occurred: **[shown below]**
web_1 | File "/opt/venv/lib/python3.8/site-packages/redis/connection.py", line 571, in connect
web_1 | raise ConnectionError(self._error_message(e))
web_1 | redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Cannot assign requested address.
Without the celery task the application is working fine
docker-compose.yml
version: '3'
services:
  web:
    build: ./
    volumes:
      - ./app:/app
    ports:
      - "80:80"
    environment:
      - FLASK_APP=app/main.py
      - FLASK_DEBUG=1
      - 'RUN=flask run --host=0.0.0.0 --port=80'
    depends_on:
      - redis
  redis:
    container_name: redis
    image: redis:6.2.6
    ports:
      - "6379:6379"
    expose:
      - "6379"
  worker:
    build:
      context: ./
    hostname: worker
    command: "cd /app/routes && celery -A celery_tasks.celery worker --loglevel=info"
    volumes:
      - ./app:/app
    links:
      - redis
    depends_on:
      - redis
main.py
from flask import Flask
from instance import config, exts
from decouple import config as con

def create_app(config_class=config.Config):
    app = Flask(__name__)
    app.config.from_object(config.Config)
    app.secret_key = con('flask_secret_key')
    exts.mail.init_app(app)
    from routes.test_route import test_api
    app.register_blueprint(test_api)
    return app

app = create_app()

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True, port=80)
I am using a Flask blueprint to split the API routes.
test_route.py
from flask import Flask, render_template, Blueprint
from instance.exts import celery

test_api = Blueprint('test_api', __name__)

@test_api.route('/test/<string:name>')
def testfnn(name):
    task = celery.send_task('CeleryTask.reverse', args=[name])
    return task.id
The Celery tasks are also written in a separate file.
celery_tasks.py
from celery import Celery
from celery.utils.log import get_task_logger
from decouple import config
import time

celery = Celery('tasks',
                broker=config('CELERY_BROKER_URL'),
                backend=config('CELERY_RESULT_BACKEND'))

class CeleryTask:
    @celery.task(name='CeleryTask.reverse')
    def reverse(string):
        time.sleep(25)
        return string[::-1]
.env
CELERY_BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
Dockerfile
FROM tiangolo/uwsgi-nginx:python3.8
RUN apt-get update
WORKDIR /app
ENV PYTHONUNBUFFERED 1
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN python -m pip install --upgrade pip
COPY ./requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt
COPY ./app /app
CMD ["python", "app/main.py"]
requirements.txt
Flask==2.0.3
celery==5.2.3
python-decouple==3.5
Flask-Mail==0.9.1
redis==4.0.2
SQLAlchemy==1.4.32
Thanks in Advance
At the end of your docker-compose.yml you can add:
networks:
  your_net_name:
    name: your_net_name
And in each container:
networks:
  - your_net_name
These two steps will put all the containers on the same network. By default Docker creates one, but as I've had problems with letting the networks be auto-named, I think this approach gives you more control.
Finally I'd also change your env variable to use the container address:
CELERY_BROKER_URL=redis://redis_addr/0
CELERY_RESULT_BACKEND=redis://redis_addr/0
So you'd also add this section to your redis container:
hostname: redis_addr
This way the env var will get whatever address docker has assigned to the container.
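Putting those pieces together, the relevant parts of docker-compose.yml might look like this sketch (only redis and worker shown; the web service would get the same networks: entry):
services:
  redis:
    image: redis:6.2.6
    hostname: redis_addr
    networks:
      - your_net_name
  worker:
    build: ./
    environment:
      - CELERY_BROKER_URL=redis://redis_addr:6379/0
      - CELERY_RESULT_BACKEND=redis://redis_addr:6379/0
    networks:
      - your_net_name
networks:
  your_net_name:
    name: your_net_name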

Launch php in python docker

I have a problem with this docker-compose:
version: '3'
services:
  app:
    image: php:7
    command: php -S 0.0.0.0:8000 /app/get_count_of_day.php
    ports:
      - "8000:8000"
    volumes:
      - .:/app
  composer:
    restart: 'no'
    image: composer/composer:php7
    command: install
    volumes:
      - .:/app
  python:
    image: python:3
    command: bash -c "pip3 install -r /app/requirements.txt && celery worker -l info -A cron --beat --workdir=/app/python"
    links:
      - redis
    volumes:
      - .:/app
    depends_on:
      - app
  redis:
    image: 'redis:3.0-alpine'
    command: redis-server
    ports:
      - "6379:6379"
My celery task
import os
from celery import Celery
from celery.schedules import crontab

os.chdir("..")
app = Celery(broker='redis://redis:6379/0')

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    sender.add_periodic_task(10.0, run_cron.s(), name='add every 10')

@app.task
def run_cron():
    os.system("/usr/local/bin/php index.php")
My error is that php is not found:
python_1 | sh: 1: /usr/local/bin/php: not found
python_1 | [2018-06-15 15:08:29,491: INFO/ForkPoolWorker-2] Task cron.run_cron[e7c338c1-7b9c-4d6f-b607-f4e354fbd623] succeeded in
0.003908602000592509s: None
python_1 | [2018-06-15 15:08:39,487: INFO/Beat] Scheduler: Sending due task add every 10 (cron.run_cron)
But if I go into the container manually with
docker exec -i -t 1ff /bin/bash
I find php in the directory.
Binaries from the container "app" are not exposed in the container "python"; this is Docker's MO. To run the index.php script you can just open that page via an HTTP request, curl http://app:8000/index.php, or do the same entirely in Python via urllib2 or requests (I recommend the last option).
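For example, a minimal sketch of the requests-based approach inside the Celery task (the service name app and port 8000 come from the compose file; requests would need to be added to requirements.txt):
import requests
from celery import Celery

app = Celery(broker='redis://redis:6379/0')

@app.task
def run_cron():
    # Call the php container (service name "app" on the compose network)
    # instead of invoking the php binary locally
    response = requests.get('http://app:8000/', timeout=30)
    response.raise_for_status()
    return response.text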
But in case your request fails because it can't find the app domain, the original answer below is your solution.
In case you have to perform more complicated operations inside the app container, you should really think about exposing them through an internal API or something like that; as I understand it, Docker containers should do one thing and one thing only. If you need to run some complex shell script in your php container, you are breaking this principle. The app container is for serving php pages, so it should do exactly that.
As a last resort, you can totally hack on Docker, for example by exposing the Docker control socket inside your celery container and issuing commands to other containers directly. This is really dangerous and heavily discouraged in the docs, but you do you ;)
[EDIT: originally misread question...]
On the default docker network you can't address containers by name. Add
networks:
  my-net:
to the end of docker-compose and
networks:
  - my-net
to every container that needs to talk to the others.

Cannot see my web application using Docker container

I am trying to test my web application using a docker container, but I am not able to see it when I try to access it through my browser.
The docker compose file looks like
version: '2'
services:
  db:
    image: postgres
    volumes:
      - ~/pgdata:/var/lib/postgresql/data/pgdata
    environment:
      POSTGRES_PASSWORD: "dbpassword"
      PGDATA: "/var/lib/postgresql/data/pgdata"
    ports:
      - "5432:5432"
  web:
    build:
      context: .
      dockerfile: Dockerfile-web
    ports:
      - "5000:5000"
    volumes:
      - ./web:/web
    depends_on:
      - db
  backend:
    build:
      context: .
      dockerfile: Dockerfile-backend
    volumes:
      - ./backend:/backend
    depends_on:
      - db
The dockerfile-web looks like
FROM python
ADD web/requirements.txt /web/requirements.txt
ADD web/bower.json /web/bower.json
WORKDIR /web
RUN \
wget https://nodejs.org/dist/v4.4.7/node-v4.4.7-linux-x64.tar.xz && \
tar xJf node-*.tar.xz -C /usr/local --strip-components=1 && \
rm -f node-*.tar.xz
RUN npm install -g bower
RUN bower install --allow-root
RUN pip install -r requirements.txt
RUN export MYFLASKAPP_SECRET='makethewebsite'
CMD python manage.py server
The ip for my docker machine is
docker-machine ip
192.168.99.100
But when I try
http://192.168.99.100:5000/
in my browser it just says that the site cannot be reached.
It seems like it is refusing the connection.
When I ping my database in the browser, I can see my database respond in a log
http://192.168.99.100:5432/
So I tried wget inside the container and got
$ docker exec 3bb5246a0623 wget http://localhost:5000/
--2016-07-23 05:25:16-- http://localhost:5000/
Resolving localhost (localhost)... ::1, 127.0.0.1
Connecting to localhost (localhost)|::1|:5000... failed: Connection refused.
Connecting to localhost (localhost)|127.0.0.1|:5000... connected.
HTTP request sent, awaiting response... 200 OK
Length: 34771 (34K) [text/html]
Saving to: ‘index.html.1’
0K .......... .......... .......... ... 100% 5.37M=0.006s
2016-07-23 05:25:16 (5.37 MB/s) - ‘index.html.1’ saved [34771/34771]
Anyone know how I can get my web application to show up through my browser?
I had to enable external visibility for my flask application.
You can see it here
Can't connect to Flask web service, connection refused
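For reference, "external visibility" just means binding Flask to 0.0.0.0 instead of the default 127.0.0.1, so the server is reachable through the container's published port. A sketch of what that can look like (the exact entry point in manage.py is an assumption):
# bind the dev server to all interfaces so the published port 5000 is reachable
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)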
