I have a Dockerized Flask application with two services in docker-compose. How can I use docker-compose to push my application to Dokku on DigitalOcean?
version: "3.9"
services:
web:
build: .
container_name: ad
ports:
- "5000:5000"
volumes:
- ".:/app"
scheduler:
image: mcuadros/ofelia:latest
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./config.ini:/etc/ofelia/config.ini
depends_on:
- web
I have set up and configured a Dokku droplet. Any help is appreciated; a deployment sketch follows the resource list below.
Resources -
https://dokku.com/docs/deployment/builders/dockerfiles/
https://auth0.com/blog/hosting-applications-using-digitalocean-and-dokku/
https://www.linode.com/docs/guides/deploy-a-flask-application-with-dokku/
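For what it's worth, Dokku does not read docker-compose.yml; it builds the app from the Dockerfile when you push the repository over git, so only the web service deploys this way and the ofelia scheduler would need separate handling (for example Dokku's own cron support, or a second app). A minimal sketch, where the droplet address DROPLET_IP and the app name ad are placeholders:

# On the droplet: create the app (the name "ad" is just a placeholder)
ssh root@DROPLET_IP "dokku apps:create ad"

# Locally: add the Dokku git remote and push; Dokku builds from the Dockerfile
git remote add dokku dokku@DROPLET_IP:ad
git push dokku main          # older Dokku versions expect master: git push dokku main:master

# Expose the Flask port (command name differs between Dokku versions:
# "ports:add" on newer releases, "proxy:ports-add" on older ones)
ssh root@DROPLET_IP "dokku ports:add ad http:80:5000"

The scheduler container is the part docker-compose was handling for you; on Dokku the closest equivalent would be its scheduled-task support or running ofelia as a separate app, which the Dockerfile-based deployment above does not cover.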
Related
I am trying to run flask, postgres, and nginx services with the following docker-compose file:
version: '3.6'
services:
  postgres:
    image: postgres:10.5
    container_name: postgres
    hostname: postgres
    user: postgres
    ports:
      - "5432:5432"
    networks:
      - db-tier
    environment:
      CUSTOM_CONFIG: /etc/postgres/postgresql.conf
    volumes:
      - ./postgres/sql/create_tables.sql:/docker-entrypoint-initdb.d/create_tables.sql
      - ./postgres/postgresql.conf:/etc/postgres/postgresql.conf
    command: postgres -c config_file=/etc/postgres/postgresql.conf
    restart: always
  app:
    image: python/app:0.0.1
    container_name: app
    hostname: app
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - postgres
    networks:
      - app-tier
      - db-tier
    stdin_open: true
  nginx:
    image: nginx:1.22
    container_name: nginx-reverse-proxy-flask
    ports:
      - "8080:80"
    depends_on:
      - app
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    networks:
      - app-tier
networks:
  app-tier:
    driver: bridge
  db-tier:
    driver: bridge
app.config["SQLALCHEMY_DATABASE_URI"] is set to postgresql://denis:1234Five@postgres:5432/app
The error after docker-compose up is:
app | psycopg2.OperationalError: could not connect to server: Connection refused
app | Is the server running on host "postgres" (192.168.32.2) and accepting
app | TCP/IP connections on port 5432?
What could cause this type of error? I double-checked the container name of the postgres service, and it is running under the name postgres, so why doesn't the Flask app "see" it?
It could be an issue with the lack of a proper startup probe for your postgres container in docker-compose.yml.
For a reference on how to set one up, see Docker - check if postgres is ready.
So after the postgres container starts, Docker starts your app, but Postgres inside the container is not ready to accept connections yet, and you get your error.
This issue was resolved with pg_isready by adding the following lines to the postgres service:
healthcheck:
  test: ["CMD-SHELL", "sh -c 'pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}'"]
  interval: 10s
  timeout: 3s
  retries: 3
Taken as a solution from Safe ways to specify postgres parameters for healthchecks in docker compose. An entrypoint-style alternative is sketched below.
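A wait loop on the app side is another way to achieve the same thing. A minimal sketch, assuming pg_isready is installed in the app image and the database service is reachable as postgres, as in the compose file above (the script name and the defaults are assumptions):

#!/bin/sh
# wait-for-postgres.sh - block until Postgres accepts connections, then run the app.
until pg_isready -h postgres -p 5432 -U "${POSTGRES_USER:-postgres}"; do
  echo "Postgres is unavailable - sleeping"
  sleep 2
done
echo "Postgres is up - starting the app"
exec "$@"

The app service would then start through it, e.g. command: ["./wait-for-postgres.sh", "python", "app.py"]; the actual start command depends on the image.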
I am trying to develop a mind-map API with Flask and Neo4j, and I would like to dockerize the whole project.
All services start, but the backend does not manage to communicate with Neo4j.
I get this error:
neo4j.exceptions.ServiceUnavailable: Unable to retrieve routing information
Here is my code: https://github.com/lquastana/mindmaps
To reproduce the error, just bring the stack up with docker compose and hit this endpoint: http://localhost:5000/mindmaps
In my web service declaration I changed NEO4J_URL from localhost to neo4j (the name of my service); a wait-for-Neo4j sketch follows the compose file below.
version: '3'
services:
  web:
    build: ./backend
    command: flask run --host=0.0.0.0 #gunicorn --bind 0.0.0.0:5000 mindmap_api:app
    ports:
      - 5000:5000
    environment:
      - FLASK_APP=mindmap_api
      - FLASK_ENV=development
      - NEO4J_USERNAME=neo4j
      - NEO4J_PASSWORD=airline-mexico-archer-ecology-bahama-7381
      - NEO4J_URL=neo4j://neo4j:7687 # HERE
      - NEO4J_DATABASE=neo4j
    depends_on:
      - neo4j
    volumes:
      - ./backend:/usr/src/app
  neo4j:
    image: neo4j
    restart: unless-stopped
    ports:
      - 7474:7474
      - 7687:7687
    volumes:
      - ./neo4j/conf:/neo4j/conf
      - ./neo4j/data:/neo4j/data
      - ./neo4j/import:/neo4j/import
      - ./neo4j/logs:/neo4j/logs
      - ./neo4j/plugins:/neo4j/plugins
    environment:
      # Raise memory limits
      - NEO4J_dbms_memory_pagecache_size=1G
      - NEO4J_dbms.memory.heap.initial_size=1G
      - NEO4J_dbms_memory_heap_max__size=1G
      - NEO4J_AUTH=neo4j/airline-mexico-archer-ecology-bahama-7381
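Two things are worth checking with "Unable to retrieve routing information". First, depends_on only waits for the neo4j container to start, not for the database to be ready, so the backend may simply be connecting too early; second, the neo4j:// scheme performs routing, and against a single instance bolt://neo4j:7687 is often the less fragile choice. A minimal wait-before-start sketch for the backend, assuming nc (netcat) is available in the backend image (an assumption):

#!/bin/sh
# wait-for-neo4j.sh - wait until the Bolt port of the "neo4j" service answers,
# then start the Flask backend (mirrors the command in the compose file above).
until nc -z neo4j 7687; do
  echo "Neo4j is not ready yet - sleeping"
  sleep 3
done
echo "Neo4j is reachable - starting Flask"
exec flask run --host=0.0.0.0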
This is a question about the best practice to follow in order to connect the dots on this topic. Currently I am developing a Dockerized Django website. In this website, one of the apps will be named 'dashboards', where I wish to publish data which is currently stored locally in .csv files (updated every day through scheduled tasks).
Now, I am trying to understand the next steps to follow in order to connect these data to the Dockerized Django website. My first guess would be to schedule .sql scripts locally to append the new data into a database that I can create locally, and then connect that database to the Dockerized Django website through the volumes belonging to the PostgreSQL service. That is just a guess that I need to test. But is there a way to skip the local steps entirely and just do the work inside my Docker containers? (A loading sketch follows the Dockerfile below.)
You can find the Github repository here. Many thanks!
docker-compose.yml:
version: '3.8'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    links:
      - postgres:postgres
      - redis:redis
    volumes:
      - web-django:/usr/src/app
      - web-static:/usr/src/app/static
    env_file: .env
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn docker_django.wsgi:application -w 2 -b :8000
  nginx:
    restart: always
    build: ./nginx/
    ports:
      - "80:80"
    volumes:
      - web-static:/www/static
    links:
      - web:web
  postgres:
    restart: always
    image: postgres:latest
    hostname: postgres
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data/
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
  pgadmin:
    image: dpage/pgadmin4
    depends_on:
      - postgres
    ports:
      - "5050:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: pgadmin4@pgadmin.org
      PGADMIN_DEFAULT_PASSWORD: admin
    restart: unless-stopped
  redis:
    restart: always
    image: redis:latest
    ports:
      - "6379:6379"
    volumes:
      - redisdata:/data
volumes:
  web-django:
  web-static:
  pgdata:
  redisdata:
Dockerfile:
FROM python:3.7-slim
RUN python -m pip install --upgrade pip
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY . .
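To skip the local database entirely, one option is to load the CSVs straight into the containerized Postgres and let a scheduled job (or a Django management command) do the appending. A minimal sketch using Compose V2, where the table name dashboards_data and the file ./data/daily.csv are assumptions, and the credentials are the ones from the compose file above:

# Copy today's CSV into the running postgres container, then append it to an
# existing table with \copy. With older docker-compose, "docker cp" into the
# postgres container does the same job.
docker compose cp ./data/daily.csv postgres:/tmp/daily.csv
docker compose exec postgres \
  psql -U postgres -d postgres \
  -c "\copy dashboards_data FROM '/tmp/daily.csv' WITH (FORMAT csv, HEADER true)"

Wrapped in a cron job, or moved into a Django management command that uses the project's own database connection, nothing has to run against a local database at all.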
My docker-compose file is as follows:
version: '3'
services:
  db:
    image: mongo:4.2
    container_name: mongo-db
    restart: always
    environment:
      MONGO_INITDB_DATABASE: VMcluster
    ports:
      - "16006:27017"
    volumes:
      - ./initdb.js:/docker-entrypoint-initdb.d/initdb.js
  web:
    build:
      context: .
      dockerfile: Dockerfile_Web
    command: python manage.py runserver 0.0.0.0:8000
    container_name: cluster-monitor-web
    volumes:
      - .:/vmCluster_service
    ports:
      - "9900:8000"
    depends_on:
      - db
  cronjobs:
    build:
      context: .
      dockerfile: Dockerfile_Cron
    command: ["cron", "-f"]
    container_name: cluster-monitor-cron
I want to implement a feature where the user can update the crontab from the Django web UI. I'm done with the Django part, i.e. the Python code for the backend, but the Django web container is not able to access the crontab container.
How can I make the Django web container access the crontab container and update the crontab? (A shared-volume sketch follows the error below.)
I'm using the python-crontab module in Django, which throws the following error:
"[Errno 2] No such file or directory: '/usr/bin/crontab':
'/usr/bin/crontab'"
I am having trouble running Flower to monitor the Celery async tasks that are running on my Docker-deployed Flask app. I've tried everything, but the documentation on getting Flower running in a Docker-deployed environment is pretty sparse, and I'm still relatively new to this.
The web, celery, and flower portions of my docker-compose.yml file:
version: "3.6"
services:
web:
image: <image here>
deploy:
replicas: 1
restart_policy:
condition: on-failure
placement:
constraints: [node.role == manager] # this parameter should be worker when in the cloud with managers and workers
command: ./docker_setup.sh postgres postgres_test
depends_on:
- celery
environment:
- PYTHONUNBUFFERED=1
secrets:
- <secret shtuff>
networks:
- webnet
labels:
- <local deployment label>
celery:
image: <image here>
deploy:
replicas: 1
restart_policy:
condition: on-failure
placement:
constraints: [node.role == manager] # this parameter should be worker when in the cloud with managers and workers
command: celery worker -A celery_worker.celery --loglevel=info
depends_on:
- postgres
- redis
environment:
- PYTHONUNBUFFERED=1
secrets:
- <secret shtuff>
networks:
- webnet
labels:
- <local deployment label>
flower:
image: <image here>
environment:
- PYTHONUNBUFFERED=1
working_dir: /code
command: celery flower -A celery_worker.celery --port=5555
depends_on:
- postgres
- redis
- celery
ports:
- "5555:5555"
links:
- db
- redis
networks:
- webnet
When I deploy this locally through Docker (such that I can access the web API via localhost), it works fine, and I can see through the celery logs that the app is running and handling async requests smoothly. However, when I try to access the Flower monitoring app by executing $ flower and going to http://localhost:5555, the Flower app loads but no threads or workers are shown. Any advice or help would be greatly appreciated!
Wow. I made a silly oversight and forgot to include flower==0.9.2 in my app's requirements.txt file. Once I did that, Flower was exposed on localhost:5555 after a local deployment. Works like a charm!
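For anyone hitting the same symptom, a quick way to confirm the package actually made it into the image and that the dashboard sees the worker (assuming a plain docker compose deployment; under a swarm stack you would exec into the task's container instead):

# Check that flower is installed inside the image used by the flower service
docker compose exec flower pip show flower   # should report the pinned version, e.g. 0.9.2

# Check that the dashboard is up and has registered the celery worker
curl http://localhost:5555/api/workers       # Flower's REST API lists the known workers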