Cannot connect to the Postgres-based container - Python

I am new to Docker containers. I am trying to unit test my Flask application on CircleCI automatically, but it cannot connect to the postgres container. It works on my local computer (macOS Sierra). Let me know if you need more information to solve this issue. Thank you!
docker-compose.yml
version: '3'
services:
  web:
    container_name: web
    build: ./web
    ports:
      - "5000:5000"
    depends_on:
      - postgres
    volumes:
      - ./web/.:/app
    tty: true
  postgres:
    container_name: postgres
    build: ./db
    ports:
      - "5432:5432"
config.yml
version: 2
jobs:
  build:
    machine: true
    working_directory: ~/repo
    steps:
      - checkout
      - run:
          name: Install Docker Compose
          command: |
            sudo curl -L https://github.com/docker/compose/releases/download/1.16.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
            sudo chmod +x /usr/local/bin/docker-compose
      - run:
          name: Start container and verify it's working
          command: |
            set -x
            cd ~/repo/docker
            docker-compose up --build -d
      - run:
          name: Run test
          command: |
            cd ~/repo/docker
            docker-compose run web python tests/test_therapies.py
CircleCI build log
    connection = pool._invoke_creator(self)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/strategies.py", line 105, in connect
    return dialect.connect(*cargs, **cparams)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 393, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/usr/local/lib/python3.6/site-packages/psycopg2/__init__.py", line 130, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: Connection refused
    Is the server running on host "postgres" (172.18.0.2) and accepting
    TCP/IP connections on port 5432?

----------------------------------------------------------------------
Ran 1 test in 0.025s

FAILED (errors=1)
Exited with code 1

I think the problem is that the postgres service is not fully up when your web app starts. Based on your comment that it worked after adding a sleep timer, this seems to be the cause.
You can run a container called dadarek/wait-for-dependencies as a mechanism to wait for services (in your case, postgres) to be up.
Here is how you can implement it:
1) Add a new service to your docker-compose.yml:
waitfordb:
  image: dadarek/wait-for-dependencies
  depends_on:
    - postgres
  command: postgres:5432
Your docker-compose.yml should now look like this:
version: '3'
services:
  waitfordb:
    image: dadarek/wait-for-dependencies
    depends_on:
      - postgres
    command: postgres:5432
  web:
    container_name: web
    build: ./web
    ports:
      - "5000:5000"
    depends_on:
      - waitfordb
      - postgres
    volumes:
      - ./web/.:/app
    tty: true
  postgres:
    container_name: postgres
    build: ./db
    ports:
      - "5432:5432"
2) Start up Compose:
docker-compose run --rm waitfordb
docker-compose up -d web postgres
The result is that your web service now waits for port 5432 to be open in your postgres container before trying to start up.
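If you would rather not pull an extra image, a small polling script achieves the same effect. This is only a sketch, assuming the Postgres client tools (which provide pg_isready) are installed wherever the script runs:

#!/bin/sh
# wait-for-postgres.sh -- hypothetical helper that blocks until Postgres
# accepts connections; assumes pg_isready (postgresql-client) is available.
host="${1:-postgres}"
port="${2:-5432}"
until pg_isready -h "$host" -p "$port" > /dev/null 2>&1; do
  echo "Waiting for Postgres at $host:$port ..."
  sleep 1
done
echo "Postgres is up"

You would run it the same way as the waitfordb step above, before docker-compose up.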

Related

flask app service can't connect to postgres service within docker-compose

I am trying to run flask, postgres and nginx services with the following docker-compose:
version: '3.6'
services:
  postgres:
    image: postgres:10.5
    container_name: postgres
    hostname: postgres
    user: postgres
    ports:
      - "5432:5432"
    networks:
      - db-tier
    environment:
      CUSTOM_CONFIG: /etc/postgres/postgresql.conf
    volumes:
      - ./postgres/sql/create_tables.sql:/docker-entrypoint-initdb.d/create_tables.sql
      - ./postgres/postgresql.conf:/etc/postgres/postgresql.conf
    command: postgres -c config_file=/etc/postgres/postgresql.conf
    restart: always
  app:
    image: python/app:0.0.1
    container_name: app
    hostname: app
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - postgres
    networks:
      - app-tier
      - db-tier
    stdin_open: true
  nginx:
    image: nginx:1.22
    container_name: nginx-reverse-proxy-flask
    ports:
      - "8080:80"
    depends_on:
      - app
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    networks:
      - app-tier
networks:
  app-tier:
    driver: bridge
  db-tier:
    driver: bridge
This is what app.config["SQLALCHEMY_DATABASE_URI"] is set to: postgresql://denis:1234Five#postgres:5432/app
The error after docker-compose up is:
app | psycopg2.OperationalError: could not connect to server: Connection refused
app | Is the server running on host "postgres" (192.168.32.2) and accepting
app | TCP/IP connections on port 5432?
What could cause this type of error? I double-checked the container name of the postgres service, and it is running under the name postgres. Why doesn't the flask app "see" it?
It could be an issue with no proper startup probe in your postgres container and docker-compose.yml.
For reference on how to set one up, see Docker - check if postgres is ready.
So after postgres starts as a container, Docker starts your app, but postgres inside the container is not ready yet, and you get your error.
This issue was resolved with pg_isready, by adding the following lines to the postgres service:
healthcheck:
  test: ["CMD-SHELL", "sh -c 'pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}'"]
  interval: 10s
  timeout: 3s
  retries: 3
Taken as a solution from Safe ways to specify postgres parameters for healthchecks in docker compose.
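Note that a healthcheck by itself only changes the container's reported status; for Compose to actually delay the app, depends_on needs the long form. A minimal sketch, assuming a Compose file version that supports condition: (2.1, or the newer Compose specification):

app:
  # Sketch: long-form depends_on so app waits for the postgres healthcheck.
  depends_on:
    postgres:
      condition: service_healthy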

Error -2 connecting to redis://redis:6379:6379. Name or service not known

I was hoping to get some insight into what I am missing. I am currently trying to run a docker-compose config with Python (walrus as the db wrapper) and a Redis image, but I keep receiving the same error:
redis.exceptions.ConnectionError: Error -2 connecting to redis://redis:6379. Name or service not known.
I tried different Stack Overflow solutions to fix this, but nothing is working.
Here is the related docker-compose config:
version: '3.3'
services:
  redis:
    image: redis:latest
    container_name: redis
    ports:
      - "6379:6379"
    command: ["redis-server"]
    entrypoint: redis-server --appendonly yes
  consumers:
    build: ./consumers
    container_name: consumers
    environment:
      - REDIS_HOST=redis://redis
    command: "./run.sh"
    depends_on:
      - redis
    restart: always
    tty: true
networks:
  default:
    driver: bridge
Dockerfile:
FROM python:3.10
WORKDIR /consumers
# Copy Dependencies
COPY requirements.txt requirements.txt
COPY run.sh .
# Install Dependencies
RUN pip install -r requirements.txt
COPY . .
ENV REDIS_HOST=redis://redis
RUN chmod a+x run.sh
# Run executable consumer.py
CMD [ "./run.sh"]
And the connection to Redis in Python, using walrus:
rdb = Database(host=os.getenv("REDIS_HOST", "localhost"), port=6379)
Locally, without Docker, the setup works fine. Any direction on this would be really appreciated.
Thank you!
The following configuration made it work: I removed the entrypoint, created a new custom network, and exposed the port, and REDIS_HOST was changed to redis, i.e. the container name. All of these together made it work; when trying only one of them, the problem persisted.
version: '3.5'
services:
  redis:
    image: redis:latest
    container_name: redis
    ports:
      - "6379:6379"
    expose:
      - "6379"
    command: ["redis-server"]
    networks:
      - connections
  consumers-g1:
    build: ./consumers
    container_name: consumers-g1
    environment:
      - REDIS_HOST=redis
    command: "./run.sh"
    expose:
      - "6379"
    networks:
      - connections
    restart: always
    tty: true
networks:
  connections:
    name: connections
    driver: bridge
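The root cause is the original REDIS_HOST value: walrus's Database is a subclass of redis-py's Redis client, whose host parameter expects a bare hostname, not a redis:// URL, so redis://redis could never resolve. A minimal sketch of the corrected connection (REDIS_PORT is an assumed extra variable, not part of the original setup):

# Sketch: connect with a bare hostname; "redis" is the Compose service name.
import os
from walrus import Database

rdb = Database(
    host=os.getenv("REDIS_HOST", "localhost"),  # e.g. "redis", not "redis://redis"
    port=int(os.getenv("REDIS_PORT", "6379")),  # assumed helper variable
)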

Properly migrating Postgres database to Docker/Django/Heroku/Postgres

I have a Django project hosted on an IIS server with a PostgreSQL database that I am migrating to a Docker/Heroku project. I have found a few good resources online, but no complete success yet. I have tried the dumpdata/loaddata functions but always run into constraint errors, missing relations, or content-type errors. I would like to just dump the whole database and then restore the whole thing into Docker. Here is my docker-compose:
version: "3.7"
services:
db:
image: postgres
volumes:
- 'postgres:/var/lib/postgresql/data'
ports:
- "5432:5432"
environment:
- POSTGRES_NAME=${DATABASE_NAME}
- POSTGRES_USER=${DATABASE_USER}
- POSTGRES_PASSWORD=${DATABASE_PASSWORD}
- POSTGRES_DB=${DATABASE_NAME}
networks:
- hello-world
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- '.:/code'
ports:
- "8000:8000"
env_file:
- .env
depends_on:
- db
networks:
- hello-world
networks:
hello-world:
driver: bridge
volumes:
postgres:
driver: local
I was actually able to resolve this with the following command (where <dbname> is the target database):
docker exec -i postgres pg_restore --verbose --clean --no-acl --no-owner -h localhost -U postgres -d <dbname> < ./latest.dump
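For reference, a sketch of the matching dump step on the source machine; pg_restore reads the custom-format archive produced by pg_dump -Fc, and the host, user, and database names here are placeholders:

# Sketch: create a custom-format dump that pg_restore can consume.
# The -h/-U/-d values are placeholders for the source database.
pg_dump -Fc --no-acl --no-owner -h localhost -U postgres -d source_db > latest.dump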

python container keeps trying to connect to mongo container via localhost

I am going to lose my mind because I've been stuck on this for two days and can't figure out why Python keeps using 127.0.0.1 instead of the host I have specified. My docker-compose snippet is:
# Use root/example as user/password credentials
version: '3.1'
services:
  mongo_service:
    image: mongo
    #command: --default-authentication-plugin=mysql_native_password
    command: mongo
    restart: always
    ports:
      - '27017:27017'
  cinemas_api:
    container_name: cinemas_api
    hostname: cinemas_api
    build:
      context: ./cinemas_api
      dockerfile: Dockerfile
    links:
      - mongo_service
    ports:
      - 5000:5000
    expose:
      - '5000'
    depends_on:
      - mongo_service
  booking_api:
    container_name: booking_api
    hostname: booking_api
    build:
      context: ./booking_api
      dockerfile: Dockerfile
    ports:
      - 5050:5000
    depends_on:
      - mongo_service
networks:
  db_net:
    external: true
#docker-compose -f docker-compose.yaml up --build
Then in cinemas.py, I try to connect:
client = MongoClient('mongodb://mongo_service:27017/test')
However, I get the error:
Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused

How can one docker container modify a file of another docker container?

My docker-compose file is as follows:
version: '3'
services:
  db:
    image: mongo:4.2
    container_name: mongo-db
    restart: always
    environment:
      MONGO_INITDB_DATABASE: VMcluster
    ports:
      - "16006:27017"
    volumes:
      - ./initdb.js:/docker-entrypoint-initdb.d/initdb.js
  web:
    build:
      context: .
      dockerfile: Dockerfile_Web
    command: python manage.py runserver 0.0.0.0:8000
    container_name: cluster-monitor-web
    volumes:
      - .:/vmCluster_service
    ports:
      - "9900:8000"
    depends_on:
      - db
  cronjobs:
    build:
      context: .
      dockerfile: Dockerfile_Cron
    command: ["cron", "-f"]
    container_name: cluster-monitor-cron
I want to implement a feature where the user can update the crontab from the Django web app. I'm done with the Django part, i.e. the Python code for the backend, but the Django web container is not able to access the crontab container.
How can I make the Django web container access the crontab container and update the crontab?
I'm using the python-crontab module in Django, which throws this error:
"[Errno 2] No such file or directory: '/usr/bin/crontab': '/usr/bin/crontab'"
