Flower - Celery async task monitoring tool - on a Flask/Docker web API - Python

I'm having trouble running Flower to monitor the Celery async tasks that are running on my Docker-deployed Flask app. I've tried everything, but the documentation on getting Flower running in a Docker-deployed environment is pretty sparse, and I'm still relatively new to this.
Here are the web, celery, and flower portions of my docker-compose.yml file:
version: "3.6"
services:
web:
image: <image here>
deploy:
replicas: 1
restart_policy:
condition: on-failure
placement:
constraints: [node.role == manager] # this parameter should be worker when in the cloud with managers and workers
command: ./docker_setup.sh postgres postgres_test
depends_on:
- celery
environment:
- PYTHONUNBUFFERED=1
secrets:
- <secret shtuff>
networks:
- webnet
labels:
- <local deployment label>
celery:
image: <image here>
deploy:
replicas: 1
restart_policy:
condition: on-failure
placement:
constraints: [node.role == manager] # this parameter should be worker when in the cloud with managers and workers
command: celery worker -A celery_worker.celery --loglevel=info
depends_on:
- postgres
- redis
environment:
- PYTHONUNBUFFERED=1
secrets:
- <secret shtuff>
networks:
- webnet
labels:
- <local deployment label>
flower:
image: <image here>
environment:
- PYTHONUNBUFFERED=1
working_dir: /code
command: celery flower -A celery_worker.celery --port=5555
depends_on:
- postgres
- redis
- celery
ports:
- "5555:5555"
links:
- db
- redis
networks:
- webnet
When I deploy this locally through Docker (so that I can access the web API via localhost), it works fine, and I can see from the celery logs that the app is running and handling async requests smoothly. However, when I try to access the Flower monitoring app by running flower and going to http://localhost:5555, the Flower app loads but no threads or workers are shown. Any advice or help would be greatly appreciated!

Wow. I made a silly oversight and forgot to include flower==0.9.2 in my app's requirements.txt file. Once I added it, Flower was exposed on localhost:5555 after a local deployment. Works like a charm!
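For reference, the fix was a one-line addition to the image's Python dependencies; the flower service's existing command then serves the dashboard that the ports: mapping exposes on localhost:5555, so there is no need to run flower separately on the host:

# requirements.txt (excerpt) - the dependency that was missing from the image
flower==0.9.2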

Related

Flask, Neo4j and Docker: Unable to retrieve routing information

I'm trying to develop a mindmap API with Flask and Neo4j, and I would like to dockerize the whole project.
All the services start, but the backend won't communicate with Neo4j.
I get this error:
neo4j.exceptions.ServiceUnavailable: Unable to retrieve routing information
Here is my code : https://github.com/lquastana/mindmaps
To reproduce the error, just run the docker compose command and hit this endpoint: http://localhost:5000/mindmaps
In my web service declaration I changed NEO4J_URL from localhost to neo4j (the name of my service):
version: '3'
services:
  web:
    build: ./backend
    command: flask run --host=0.0.0.0 # gunicorn --bind 0.0.0.0:5000 mindmap_api:app
    ports:
      - 5000:5000
    environment:
      - FLASK_APP=mindmap_api
      - FLASK_ENV=development
      - NEO4J_USERNAME=neo4j
      - NEO4J_PASSWORD=airline-mexico-archer-ecology-bahama-7381
      - NEO4J_URL=neo4j://neo4j:7687 # HERE
      - NEO4J_DATABASE=neo4j
    depends_on:
      - neo4j
    volumes:
      - ./backend:/usr/src/app
  neo4j:
    image: neo4j
    restart: unless-stopped
    ports:
      - 7474:7474
      - 7687:7687
    volumes:
      - ./neo4j/conf:/neo4j/conf
      - ./neo4j/data:/neo4j/data
      - ./neo4j/import:/neo4j/import
      - ./neo4j/logs:/neo4j/logs
      - ./neo4j/plugins:/neo4j/plugins
    environment:
      # Raise memory limits
      - NEO4J_dbms_memory_pagecache_size=1G
      - NEO4J_dbms_memory_heap_initial__size=1G
      - NEO4J_dbms_memory_heap_max__size=1G
      - NEO4J_AUTH=neo4j/airline-mexico-archer-ecology-bahama-7381
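For context, the backend presumably builds its Neo4j driver from the NEO4J_* variables above. A minimal sketch, assuming the official neo4j Python driver and the exact variable names from this compose file (everything else is illustrative), looks roughly like this:

import os
from neo4j import GraphDatabase

# The neo4j:// scheme performs routing discovery, which is the step that can
# raise "Unable to retrieve routing information"; bolt:// connects to a single
# instance directly without asking for a routing table.
driver = GraphDatabase.driver(
    os.environ["NEO4J_URL"],
    auth=(os.environ["NEO4J_USERNAME"], os.environ["NEO4J_PASSWORD"]),
)

with driver.session(database=os.environ.get("NEO4J_DATABASE", "neo4j")) as session:
    session.run("RETURN 1")  # simple connectivity check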

Use docker-compose to push an application to Dokku on DigitalOcean

I have a dockerized Flask application with 2 services inside docker-compose. How can I use docker-compose to push my application to Dokku on DigitalOcean?
version: "3.9"
services:
web:
build: .
container_name: ad
ports:
- "5000:5000"
volumes:
- ".:/app"
scheduler:
image: mcuadros/ofelia:latest
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./config.ini:/etc/ofelia/config.ini
depends_on:
- web
I have set up and configured a Dokku droplet. Any help is appreciated.
Resources:
https://dokku.com/docs/deployment/builders/dockerfiles/
https://auth0.com/blog/hosting-applications-using-digitalocean-and-dokku/
https://www.linode.com/docs/guides/deploy-a-flask-application-with-dokku/

Multi-repository docker-compose

I have two services, in two different GitLab repositories, deployed to the same host. I am currently using supervisord to run all of the services. The CI/CD for each repository pushes the code to the host.
I am trying to replace supervisord with Docker. What I did was the following:
Set up a Dockerfile for each service.
Created a third repository containing only a docker-compose.yml, whose CI runs docker-compose up to build and run the two services. I expect this repository to be deployed only once.
I am looking for a way to have the docker-compose automatically update when I deploy one of the two services.
Edit: Essentially, I am trying to figure out the best way to use docker-compose with a multi-repository setup and one host.
My docker-compose:
version: "3.4"
services:
redis:
image: "redis:alpine"
api:
build: .
command: gunicorn -c gunicorn_conf.py --bind 0.0.0.0:5000 --chdir server "app:app" --timeout 120
volumes:
- .:/app
ports:
- "8000:8000"
depends_on:
- redis
celery-worker:
build: .
command: celery worker -A server.celery_config:celery
volumes:
- .:/app
depends_on:
- redis
celery-beat:
build: .
command: celery beat -A server.celery_config:celery --loglevel=INFO
volumes:
- .:/app
depends_on:
- redis
other-service:
build: .
command: python other-service.py
volumes:
- .:/other-service
depends_on:
- redis
If you're setting this up in the context of a CI system, the docker-compose.yml file should just run the images; it shouldn't also take responsibility for building them.
Do not overwrite the code in a container using volumes:.
You mention each service's repository has a Dockerfile, which is a normal setup. Your CI system should run docker build there (and typically docker push). Then your docker-compose.yml file just needs to mention the image: that the CI system builds:
version: "3.4"
services:
redis:
image: "redis:alpine"
api:
image: "me/django:${DJANGO_VERSION:-latest}"
ports:
- "8000:8000"
depends_on:
- redis
celery-worker:
image: "me/django:${DJANGO_VERSION:-latest}"
command: celery worker -A server.celery_config:celery
depends_on:
- redis
I hint at docker push above. If you're using Docker Hub, or a cloud-hosted Docker image repository, or are running a private repository, the CI system should run docker push after it builds each image, and (if it's not Docker Hub) the image: lines need to include the repository address.
The other important question here is what to do on rebuilds. I'd recommend giving each build a unique Docker image tag; a timestamp or a source-control commit ID both work well. In the docker-compose.yml file I show above, I use an environment variable to specify the actual image tag, so your CI system can run
DJANGO_VERSION=20200113.1114 docker-compose up -d
Then Compose will know about the changed image tag, and will be able to recreate the containers based on the new images.
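Concretely, a CI job for one of the service repositories might run something like the following (the me/django registry path and the tag value are illustrative, not prescriptive):

# build and push an image tagged with this build's identifier
docker build -t me/django:20200113.1114 .
docker push me/django:20200113.1114

# then, wherever the stack runs, point Compose at the new tag
DJANGO_VERSION=20200113.1114 docker-compose up -d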
(This approach is highly relevant in the context of cluster systems like Kubernetes. Pushing images to a registry is all but required there. In Kubernetes changing the name of an image: triggers a redeployment, so it's also all but required to use a unique image tag per build. Except that there are multiple and more complex YAML files, the overall approach in Kubernetes would be very similar to what I've laid out here.)

Application context error in Flask app with Celery in Docker

I'm attempting to use Flask and Celery in Docker and am having issues with the Flask application context.
Flask==1.0.2
celery==4.2.0
Flask-CeleryExt==0.3.1
Here is some pertinent code.
docker-compose.yaml
version: '3'
services:
  myapp:
    build:
      context: .
      dockerfile: compose/dev/myapp/Dockerfile
    ports:
      - '5000:5000'
      - '8888:8888'
    env_file: .env
    environment:
      - FLASK_ENV=development
    volumes:
      - .:/myapp
    entrypoint: /wait-for-postgres.sh
    command: flask run --host=0.0.0.0
    depends_on:
      - postgres
      - redis
    networks:
      - flask-redis-celery
  celery:
    build:
      context: .
      dockerfile: compose/dev/celery/Dockerfile
    command: 'celery -A myapp.tasks worker -Q default --loglevel=info'
    env_file: .env
    volumes:
      - .:/myapp
    depends_on:
      - redis
      - myapp
    networks:
      - flask-redis-celery
extensions.py
from flask_celeryext import FlaskCeleryExt
ext = FlaskCeleryExt()
In app.py, inside a register_extensions function (I'm using the application factory pattern in my app):
ext.init_app(app)
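For context, the factory wiring described above presumably looks roughly like this; only FlaskCeleryExt, init_app, and the factory pattern are taken from the question, the module layout and names are illustrative:

# app.py - application factory (sketch; the question imports ext from coupon.extensions)
from flask import Flask
from extensions import ext  # the FlaskCeleryExt() instance shown above

def create_app():
    app = Flask(__name__)
    register_extensions(app)
    return app

def register_extensions(app):
    ext.init_app(app)  # ext.celery is only populated once this runs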
Inside the myapp container, I can get to ext.celery per the documentation, see that I have a Celery instance, and correctly send tasks to it:
<Celery default at 0x7f600d0e7f98>
However, attempting to do the same in the celery container in my tasks file results in ext.celery being None.
tasks.py
from coupon.extensions import ext

celery = ext.celery  # This is None

@celery.task(name='tasks.my_task', max_retries=2, default_retry_delay=60)
def my_task(some_args):
    # etc.
Error
AttributeError: 'NoneType' object has no attribute 'task'
I've attempted numerous other options as well, including make_celery as noted in the Flask docs, but I cannot get to Flask and my models in the celery container, so I don't believe this is specific to Flask-CeleryExt.
I can make Celery tasks work fine if they do not access Flask objects, but I need to access SQLAlchemy models and custom classes from my Celery tasks.
How can I make Celery work properly in my celery container and be able to access Flask objects?

Launch php in python docker

I have a problem with this docker-compose:
version: '3'
services:
  app:
    image: php:7
    command: php -S 0.0.0.0:8000 /app/get_count_of_day.php
    ports:
      - "8000:8000"
    volumes:
      - .:/app
  composer:
    restart: 'no'
    image: composer/composer:php7
    command: install
    volumes:
      - .:/app
  python:
    image: python:3
    command: bash -c "pip3 install -r /app/requirements.txt && celery worker -l info -A cron --beat --workdir=/app/python"
    links:
      - redis
    volumes:
      - .:/app
    depends_on:
      - app
  redis:
    image: 'redis:3.0-alpine'
    command: redis-server
    ports:
      - "6379:6379"
My celery task
import os
from celery import Celery
from celery.schedules import crontab

os.chdir("..")
app = Celery(broker='redis://redis:6379/0')

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    sender.add_periodic_task(10.0, run_cron.s(), name='add every 10')

@app.task
def run_cron():
    os.system("/usr/local/bin/php index.php")
My error is that php is not found:
python_1 | sh: 1: /usr/local/bin/php: not found
python_1 | [2018-06-15 15:08:29,491: INFO/ForkPoolWorker-2] Task cron.run_cron[e7c338c1-7b9c-4d6f-b607-f4e354fbd623] succeeded in 0.003908602000592509s: None
python_1 | [2018-06-15 15:08:39,487: INFO/Beat] Scheduler: Sending due task add every 10 (cron.run_cron)
But if I go into the container manually with
docker exec -i -t 1ff /bin/bash
I can find php in the directory.
Binaries from the "app" container are not exposed in the "python" container; that's how Docker works. To run the index.php script you can just open the page via an HTTP request, e.g. curl http://app/index.php, or do the same entirely in Python via urllib2 or requests (I recommend the last option).
But in case your request fails because it can't resolve the app hostname, the original answer below is your solution.
In case you have to perform more complicated operations inside the app container, you should really think about exposing them through an internal API or something like that, but as I understand it, Docker containers should do one thing and one thing only. If you need to run some complex shell script in your php container, you are breaking this principle. The app container is for serving PHP pages, so it should do exactly that.
As a last resort, you can totally hack on Docker, for example by exposing the Docker control socket inside your celery container and issuing commands to other containers directly. This can be really dangerous and is heavily discouraged in the docs, but you do you ;)
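A minimal sketch of that HTTP approach, assuming the app container keeps serving on port 8000 as in the compose file above, that requests is added to /app/requirements.txt, and that hitting the root path is what you want (the exact path is an assumption):

import requests
from celery import Celery

app = Celery(broker='redis://redis:6379/0')

@app.task
def run_cron():
    # Call the php container over the compose network instead of shelling out
    # to a php binary that does not exist in the python image.
    resp = requests.get("http://app:8000/", timeout=10)  # host/port taken from the compose file, path assumed
    resp.raise_for_status()
    return resp.text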
[EDIT: originally misread question...]
In the default docker network you can't address containers by name. Add
networks:
  my-net:
to the end of the docker-compose file, and
networks:
  - my-net
to every container that needs to talk to the others.
