flask app service can't connect to postgres service within docker-compose - python

I am trying to run flask, postgres and nginx services with the following docker-compose:
version: '3.6'
services:
  postgres:
    image: postgres:10.5
    container_name: postgres
    hostname: postgres
    user: postgres
    ports:
      - "5432:5432"
    networks:
      - db-tier
    environment:
      CUSTOM_CONFIG: /etc/postgres/postgresql.conf
    volumes:
      - ./postgres/sql/create_tables.sql:/docker-entrypoint-initdb.d/create_tables.sql
      - ./postgres/postgresql.conf:/etc/postgres/postgresql.conf
    command: postgres -c config_file=/etc/postgres/postgresql.conf
    restart: always
  app:
    image: python/app:0.0.1
    container_name: app
    hostname: app
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - postgres
    networks:
      - app-tier
      - db-tier
    stdin_open: true
  nginx:
    image: nginx:1.22
    container_name: nginx-reverse-proxy-flask
    ports:
      - "8080:80"
    depends_on:
      - app
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    networks:
      - app-tier
networks:
  app-tier:
    driver: bridge
  db-tier:
    driver: bridge
This is what app.config["SQLALCHEMY_DATABASE_URI"] is set to: postgresql://denis:1234Five#postgres:5432/app
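(An editorial aside, not from the original post: as quoted, this URI is missing the @ separator before the host, and a literal # in a password begins the URI fragment, so it must be percent-encoded. If that is not just a transcription artifact, a sketch of building a safe URI with Python's standard library, reusing the credentials quoted above:)

from urllib.parse import quote_plus

# '#' is a reserved URI character and must be escaped inside a password
password = quote_plus("1234Five#")  # -> '1234Five%23'
uri = f"postgresql://denis:{password}@postgres:5432/app"
print(uri)  # postgresql://denis:1234Five%23@postgres:5432/app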
The error after docker-compose up is:
app | psycopg2.OperationalError: could not connect to server: Connection refused
app | Is the server running on host "postgres" (192.168.32.2) and accepting
app | TCP/IP connections on port 5432?
What could cause this type of error? I double-checked the container name of the postgres service, and it is running under the name postgres. Why doesn't the flask app "see" it?

It could be an issue with a missing startup probe for your postgres container in docker-compose.yml.
For a reference on how to set one up, see Docker - check if postgres is ready.
As it stands, after the postgres container starts, docker starts your app, but postgres inside the container is not ready yet, and you get this error.

This issue was resolved with postgres pg_isready, by adding the following lines to the postgres service:
healthcheck:
  test: ["CMD-SHELL", "sh -c 'pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}'"]
  interval: 10s
  timeout: 3s
  retries: 3
Taken as the solution from Safe ways to specify postgres parameters for healthchecks in docker compose.
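Note that a healthcheck by itself does not make depends_on wait. If your Compose version supports the long form of depends_on (it was dropped in the 3.x file formats and later reinstated by the Compose specification), you can gate the app on the healthcheck. A minimal sketch, reusing the service names from the question:

services:
  app:
    image: python/app:0.0.1
    depends_on:
      postgres:
        condition: service_healthy  # start only after pg_isready succeeds
    networks:
      - app-tier
      - db-tier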

Related

flask, neo4j and docker : Unable to retrieve routing information

I am trying to develop a mindmap API with Flask and Neo4j, and I would like to dockerize the whole project.
All services start, but the backend can't communicate with Neo4j. I get this error:
neo4j.exceptions.ServiceUnavailable: Unable to retrieve routing information
Here is my code: https://github.com/lquastana/mindmaps
To reproduce the error, just run a docker compose command and hit this endpoint: http://localhost:5000/mindmaps
In my web service declaration I changed NEO4J_URL from localhost to neo4j (the name of my service):
version: '3'
services:
  web:
    build: ./backend
    command: flask run --host=0.0.0.0 # gunicorn --bind 0.0.0.0:5000 mindmap_api:app
    ports:
      - 5000:5000
    environment:
      - FLASK_APP=mindmap_api
      - FLASK_ENV=development
      - NEO4J_USERNAME=neo4j
      - NEO4J_PASSWORD=airline-mexico-archer-ecology-bahama-7381
      - NEO4J_URL=neo4j://neo4j:7687 # HERE
      - NEO4J_DATABASE=neo4j
    depends_on:
      - neo4j
    volumes:
      - ./backend:/usr/src/app
  neo4j:
    image: neo4j
    restart: unless-stopped
    ports:
      - 7474:7474
      - 7687:7687
    volumes:
      - ./neo4j/conf:/neo4j/conf
      - ./neo4j/data:/neo4j/data
      - ./neo4j/import:/neo4j/import
      - ./neo4j/logs:/neo4j/logs
      - ./neo4j/plugins:/neo4j/plugins
    environment:
      # Raise memory limits (Neo4j env overrides turn dots into underscores
      # and escape literal underscores as double underscores)
      - NEO4J_dbms_memory_pagecache_size=1G
      - NEO4J_dbms_memory_heap_initial__size=1G
      - NEO4J_dbms_memory_heap_max__size=1G
      - NEO4J_AUTH=neo4j/airline-mexico-archer-ecology-bahama-7381
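(An aside, not from the original thread: the neo4j:// scheme asks the server for a routing table, which a standalone container may not serve; with a single instance, the bolt:// scheme connects directly. A minimal connectivity check with the official neo4j Python driver, reusing the credentials from the compose file above:)

from neo4j import GraphDatabase

# bolt:// talks to one server directly and skips routing discovery;
# "neo4j" is the compose service name.
driver = GraphDatabase.driver(
    "bolt://neo4j:7687",
    auth=("neo4j", "airline-mexico-archer-ecology-bahama-7381"),
)
with driver.session() as session:
    print(session.run("RETURN 1 AS ok").single()["ok"])  # expect 1
driver.close()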

Properly migrating Postgres database to Docker/Django/Heroku/Postgres

I have a Django project hosted on an IIS server with a PostgreSQL database that I am migrating to a Docker/Heroku project. I have found a few good resources online, but no complete success yet. I have tried the dumpdata/loaddata commands but always run into constraint errors, missing relations, or content type errors. I would like to just dump the whole database and then restore the whole thing into Docker. Here is my docker-compose:
version: "3.7"
services:
db:
image: postgres
volumes:
- 'postgres:/var/lib/postgresql/data'
ports:
- "5432:5432"
environment:
- POSTGRES_NAME=${DATABASE_NAME}
- POSTGRES_USER=${DATABASE_USER}
- POSTGRES_PASSWORD=${DATABASE_PASSWORD}
- POSTGRES_DB=${DATABASE_NAME}
networks:
- hello-world
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- '.:/code'
ports:
- "8000:8000"
env_file:
- .env
depends_on:
- db
networks:
- hello-world
networks:
hello-world:
driver: bridge
volumes:
postgres:
driver: local
I was actually able to resolve this, I believe, with the following command (note that -d takes the target database name, which is missing as quoted):
docker exec -i postgres pg_restore --verbose --clean --no-acl --no-owner -h localhost -U postgres -d < ./latest.dump
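(For completeness, an addition rather than part of the original answer: the usual round trip is to dump the source database in pg_restore's custom format and then feed it into the container. The host, user, and database names below are placeholders; adjust them to your .env.)

# dump the old database in custom format (-Fc), which pg_restore understands
pg_dump -Fc -h old-server -U olduser mydb > latest.dump
# restore into the compose db service; -d names the target database
docker exec -i postgres pg_restore --verbose --clean --no-acl --no-owner -U postgres -d mydb < ./latest.dump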

python container keeps trying to connect to mongo container via localhost

I am going to lose my mind because I've been stuck on this for 2 days and can't figure out why Python keeps using 127.0.0.1 instead of the host I have specified. My docker-compose snippet is:
# Use root/example as user/password credentials
version: '3.1'
services:
  mongo_service:
    image: mongo
    #command: --default-authentication-plugin=mysql_native_password
    command: mongo
    restart: always
    ports:
      - '27017:27017'
  cinemas_api:
    container_name: cinemas_api
    hostname: cinemas_api
    build:
      context: ./cinemas_api
      dockerfile: Dockerfile
    links:
      - mongo_service
    ports:
      - 5000:5000
    expose:
      - '5000'
    depends_on:
      - mongo_service
  booking_api:
    container_name: booking_api
    hostname: booking_api
    build:
      context: ./booking_api
      dockerfile: Dockerfile
    ports:
      - 5050:5000
    depends_on:
      - mongo_service
networks:
  db_net:
    external: true
#docker-compose -f docker-compose.yaml up --build
Then, in cinemas.py, I try to connect:
client = MongoClient('mongodb://mongo_service:27017/test')
However, I get the error
Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 ::
caused by :: Connection refused :
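(Two editorial observations, not from the original post: the quoted error text matches the mongo shell's connection error, and command: mongo makes the container run the shell, which dials localhost, rather than the mongod server the image starts by default. On the Python side, a quick way to see which host the driver is really targeting, assuming pymongo, is to force server selection with a short timeout:)

from pymongo import MongoClient

# Fail fast instead of hanging if the host is unreachable.
client = MongoClient("mongodb://mongo_service:27017/test",
                     serverSelectionTimeoutMS=3000)
client.admin.command("ping")  # raises ServerSelectionTimeoutError on failure
print("connected to", client.address)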

How do two different Docker container APIs communicate with each other via requests?

In one container a Django app is running, exposed to the host on port 8000, and in another container a Flask app is running, exposed to the host on port 8001. The Flask app's API needs to communicate with the Django app's API.
Code
req = requests.get('http://192.168.43.66:8000/api/some')
Error
requests.exceptions.ConnectionError: HTTPConnectionPool(host='192.168.43.66', port=8000): Max retries exceeded with url: /api/user (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fc6f96942e0>: Failed to establish a new connection: [Errno 110] Connection timed out'))
But if I change the request URL to some other API endpoint that is not running in a container, it gets a response.
And every Django API endpoint works fine if I access it through some other external source like Postman or a browser.
Here is the docker-compose file content
Django App docker-compose.yml
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: python3 manage.py runserver 0.0.0.0:8000
    ports:
      - 8000:8000
    volumes:
      - .:/app
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: admin
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33066:3306
Flask app docker-compose.yml
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
      network: 'host'
    command: 'python3 main.py'
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_ROOT_HOST: '%'
      MYSQL_DATABASE: main
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
Note: in the project the YAML is well formatted, so that's not the error.
When you expose a port from a container, the host can access it, but that doesn't mean other containers can access it too.
You either have to set the network mode to host (which won't work on Windows; it is only possible if you're running them on Linux),
OR you can run them in a single docker-compose file and define your own network. Here is an example:
version: '3.4'
services:
  UI:
    container_name: django_app
    image: django_image
    networks:
      my_net:
        ipv4_address: 172.26.1.1
    ports:
      - "8080:8000"
  api:
    container_name: flask_app
    image: flask_image
    networks:
      my_net:
        ipv4_address: 172.26.1.2
networks:
  my_net:
    ipam:
      driver: default
      config:
        - subnet: 172.26.0.0/16
Now your Django app can access your Flask app at 172.26.1.2.
EDIT:
Now that you have added your docker-compose files, note that you should not create the apps in two different docker-compose files (that's why you were getting the conflicting IP range error: you were building two networks with the same range).
Place everything in a single docker-compose file, give the services IP addresses, and you should be fine.
You could make your apps read the IP addresses of the services they rely on from environment variables and pass those variables to your containers for more flexibility.
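(One addition to this answer: on a shared user-defined network, Compose's built-in DNS resolves service names, so hard-coded IPs are usually unnecessary. A sketch from inside the Flask container, using the UI service name from the example above and Django's in-container port:)

import requests

# Published host ports don't matter container-to-container;
# use the service name and the port the server listens on inside.
resp = requests.get("http://UI:8000/api/some", timeout=5)
print(resp.status_code)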

Can not connect to the container based on postgres

I am new to Docker containers. I am trying to unit test my Flask application automatically on CircleCI. However, it cannot connect to the postgres container, even though this works on my local computer (macOS Sierra). Let me know if you need more information to solve this issue. Thank you!!
docker-compose.yml
version: '3'
services:
  web:
    container_name: web
    build: ./web
    ports:
      - "5000:5000"
    depends_on:
      - postgres
    volumes:
      - ./web/.:/app
    tty: true
  postgres:
    container_name: postgres
    build: ./db
    ports:
      - "5432:5432"
config.yml
version: 2
jobs:
  build:
    machine: true
    working_directory: ~/repo
    steps:
      - checkout
      - run:
          name: Install Docker Compose
          command: |
            sudo curl -L https://github.com/docker/compose/releases/download/1.16.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
            sudo chmod +x /usr/local/bin/docker-compose
      - run:
          name: Start container and verify it's working
          command: |
            set -x
            cd ~/repo/docker
            docker-compose up --build -d
      - run:
          name: Run test
          command: |
            cd ~/repo/docker
            docker-compose run web python tests/test_therapies.py
CircleCI build log:
    connection = pool._invoke_creator(self)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/strategies.py", line 105, in connect
    return dialect.connect(*cargs, **cparams)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 393, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/usr/local/lib/python3.6/site-packages/psycopg2/__init__.py", line 130, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: Connection refused
    Is the server running on host "postgres" (172.18.0.2) and accepting
    TCP/IP connections on port 5432?
----------------------------------------------------------------------
Ran 1 test in 0.025s
FAILED (errors=1)
Exited with code 1
I think the problem is that the postgres service is not fully up when your web app starts. Based on your comment that it works after adding a sleep timer, this seems to be the case.
You can run a container called dadarek/wait-for-dependencies as a mechanism to wait for services (in your case, postgres) to be up.
Here is how you can implement it:
1). Add a new service to your docker-compose.yml
waitfordb:
  image: dadarek/wait-for-dependencies
  depends_on:
    - postgres
  command: postgres:5432
Your docker-compose.yml should now look like this:
version: '3'
services:
  waitfordb:
    image: dadarek/wait-for-dependencies
    depends_on:
      - postgres
    command: postgres:5432
  web:
    container_name: web
    build: ./web
    ports:
      - "5000:5000"
    depends_on:
      - waitfordb
      - postgres
    volumes:
      - ./web/.:/app
    tty: true
  postgres:
    container_name: postgres
    build: ./db
    ports:
      - "5432:5432"
2). Start up compose:
docker-compose run --rm waitfordb
docker-compose up -d web postgres
The result is that your web service will now wait for port 5432 to be open in your postgres container before trying to start.
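(A complementary approach, an addition rather than part of the original answer: make the test entrypoint itself retry until Postgres accepts connections, which is robust even without compose-level ordering. A minimal sketch assuming psycopg2; the DSN values are placeholders.)

import time
import psycopg2

DSN = "host=postgres port=5432 dbname=app user=postgres"  # placeholder DSN

for attempt in range(30):
    try:
        psycopg2.connect(DSN).close()
        break  # postgres is accepting connections
    except psycopg2.OperationalError:
        time.sleep(1)  # not ready yet; retry
else:
    raise SystemExit("postgres never became ready")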
