I know this question has been asked a million times, and I've read as many of the answers as I can find. They all seem to come to one conclusion (db hostname is the container service name).
I got it to work in my actual code base, but it started failing when I added an ffmpeg install to the Dockerfile. Nothing else changed; just installing FFmpeg via apt-get install -y ffmpeg would cause my Python code to get the connection refused message. If I removed the ffmpeg install line, then my code would connect to the db just fine, although re-running the container would still trigger the dreaded connection refused error.
So I created a quick sample app so I could post here and try to get some thoughts on what's going on. But now this sample code won't connect to the db no matter what I do.
So here goes - And thanks in advance for any help:
myapp.py
# import ffmpeg
import psycopg2

if __name__ == "__main__":
    print("Starting app...")
    # probe = ffmpeg.probe("131698249.mp4")
    # print(probe)
    try:
        connection = psycopg2.connect(
            user="docker", password="docker", host="db", port="5432", database="docker")
        cursor = connection.cursor()
        postgreSQL_select_Query = "select * from test_table"
        cursor.execute(postgreSQL_select_Query)
        print("Selecting rows from table using cursor.fetchall")
        records = cursor.fetchall()
        print("Print each row and its column values")
        for row in records:
            print(row)
        cursor.close()
        connection.close()
    except (Exception, psycopg2.Error) as error:
        print("Error while fetching data from PostgreSQL", error)
Dockerfile
# Base image (assumed)
FROM python:3
WORKDIR /usr/src/app
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
COPY . .
CMD ["python", "myapp.py"]
docker-compose.yml
version: '3.8'
services:
  db:
    container_name: pg_container
    image: postgres:14.1
    restart: always
    environment:
      POSTGRES_USER: docker
      POSTGRES_PASSWORD: docker
      POSTGRES_DB: docker
    ports:
      - "8000:5432"
    expose:
      - "5432"
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
      - pg_data:/var/lib/postgresql/data
  myapp:
    container_name: myapp
    build:
      context: .
      dockerfile: ./Dockerfile
    restart: "no"
    depends_on:
      - db
volumes:
  pg_data:
If I build and run the code: docker compose up --detach
Everything gets built and started. The database starts up and gets populated with the table/data from init.sql (not included here).
The app container starts and the code executes, but immediately fails with the Connection refused error.
However, if from my computer I run: psql -U docker -h localhost -p 8000 -d docker
it connects without any error and I can query the database as expected.
But the app in the container won't connect. If I run the container with docker run -it myapp /bin/bash and then, from inside the container, run psql -U docker -h db -p 5432 -d docker, I get the connection refused error.
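A connection refused error right at container start often just means Postgres is still initializing when the app code runs; a small retry loop makes that easy to rule out. This is only a sketch, reusing the connection parameters from myapp.py above:
import time
import psycopg2

def connect_with_retry(attempts=10, delay=2):
    # Keep retrying until Postgres accepts connections (sketch only).
    for attempt in range(1, attempts + 1):
        try:
            return psycopg2.connect(
                user="docker", password="docker",
                host="db", port="5432", database="docker")
        except psycopg2.OperationalError as error:
            print(f"Attempt {attempt} failed: {error}")
            time.sleep(delay)
    raise RuntimeError("Postgres never became reachable")
If the retry version eventually succeeds, the problem is startup ordering rather than networking.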
If anyone has any thoughts or ideas I would be so grateful. I've been wrestling with this for three days now.
Looks like I've resolved it. I was sure I'd tried this before, but regardless, adding a networks section to the docker-compose.yml seems to have fixed the issue.
I also had to do the second docker-compose up -d as suggested by David Maze's comment. But the combination of the two seems to have fixed my issue.
Here's my updated docker-compose.yml for complete clarity:
version: '3.8'
services:
  postgres-db:
    container_name: pg_container
    image: postgres:14.1
    restart: always
    environment:
      POSTGRES_USER: docker
      POSTGRES_PASSWORD: docker
      POSTGRES_DB: docker
    ports:
      - "5500:5432"
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    networks:
      - dock-db-test
  myapp:
    container_name: myapp
    build:
      context: .
      dockerfile: ./Dockerfile
    restart: "no"
    depends_on:
      - postgres-db
    networks:
      - dock-db-test
networks:
  dock-db-test:
    external: false
    name: dock-db-test
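If the underlying problem is that myapp starts before Postgres is ready to accept connections, a healthcheck plus a conditional depends_on (supported by docker compose v2) can remove the need for that second docker-compose up -d. A sketch only, using the service names from the file above:
services:
  postgres-db:
    # ... as above, plus:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U docker -d docker"]
      interval: 5s
      timeout: 5s
      retries: 10
  myapp:
    # ... as above, but wait for the db to be healthy:
    depends_on:
      postgres-db:
        condition: service_healthy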
Related
I'm running an app inside a Docker container. That app uses the Docker Postgres image to save data in a database. I need to keep a local copy of this database's data to avoid losing it if the container is somehow removed or purged, so I am using volumes inside my docker-compose.yaml file, but the local DB folder is always empty, so whenever I move or purge the container the data is lost.
docker-compose.yaml
version: "2"
services:
db:
image: postgres
volumes:
- ./data/db:/var/lib/postgresql/data
ports:
- '5433:5432'
restart: always
command: -p 5433
environment:
- POSTGRES_DB=mydata
- POSTGRES_USER=mydata
- POSTGRES_PASSWORD=mydata#
- PGDATA=/tmp
django-apache2:
build: .
container_name: rolla_django
restart: always
environment:
- POSTGRES_DB=mydata
- POSTGRES_USER=mydata
- POSTGRES_PASSWORD=mydata#
- PGDATA=/tmp
ports:
- '4002:80'
- '4003:443'
volumes:
- ./www/:/var/www/html
- ./www/demo_app/static_files:/var/www/html/demo_app/static_files
- ./www/demo_app/media:/var/www/html/demo_app/media
# command: sh -c 'python manage.py migrate && python manage.py loaddata db_backkup.json && apache2ctl -D FOREGROUND'
command: sh -c 'wait-for-it db:5433 -- python manage.py migrate && apache2ctl -D FOREGROUND'
depends_on:
- db
As you can see, I used ./data/db:/var/lib/postgresql/data, but locally the ./data/db directory is always empty!
NOTE
When I run docker volume list it shows no volumes at all.
According to your setup, the data is in /tmp: PGDATA=/tmp. Remove this and your volume mapping should work.
Also, your command -p 5433 makes Postgres listen on port 5433 inside the container, but your mapping '5433:5432' still targets container port 5432. So if you can't reach the database, it might be because of that.
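Put together, the db service with both changes applied might look roughly like this (a sketch: PGDATA dropped so the data lands in the mounted /var/lib/postgresql/data, and the container side of the port mapping changed to 5433 to match command: -p 5433):
db:
  image: postgres
  volumes:
    - ./data/db:/var/lib/postgresql/data
  ports:
    - '5433:5433'
  restart: always
  command: -p 5433
  environment:
    - POSTGRES_DB=mydata
    - POSTGRES_USER=mydata
    - POSTGRES_PASSWORD=mydata#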
I was hoping to get some insight into what I am missing. I am currently trying to run a docker-compose config with Python (walrus as the db wrapper) and a Redis image, but I keep receiving the same error:
redis.exceptions.ConnectionError: Error -2 connecting to redis://redis:6379. Name or service not known.
I tried different solutions on stack overflow to fix this but still nothing is working.
Here are the related docker-compose config:
version: '3.3'
services:
  redis:
    image: redis:latest
    container_name: redis
    ports:
      - "6379:6379"
    command: ["redis-server"]
    entrypoint: redis-server --appendonly yes
  consumers:
    build: ./consumers
    container_name: consumers
    environment:
      - REDIS_HOST=redis://redis
    command: "./run.sh"
    depends_on:
      - redis
    restart: always
    tty: true
networks:
  default:
    driver: bridge
Dockerfile:
FROM python:3.10
WORKDIR /consumers
# Copy Dependencies
COPY requirements.txt requirements.txt
COPY run.sh .
# Install Dependencies
RUN pip install -r requirements.txt
COPY . .
ENV REDIS_HOST=redis://redis
RUN chmod a+x run.sh
# Run executable consumer.py
CMD [ "./run.sh"]
And the connection in Python to Redis using walrus:
rdb = Database(host=os.getenv("REDIS_HOST", "localhost"), port=6379)
Locally without docker the setup works fine. Any direction in this case would be really appreciated.
Thank you
The following configuration made it work: I removed the entrypoint, created a new custom network, and exposed the port. REDIS_HOST was also changed to redis, i.e. the container name. All of these together made it work; while trying only one of them at a time, the problem persisted.
version: '3.5'
services:
  redis:
    image: redis:latest
    container_name: redis
    ports:
      - "6379:6379"
    expose:
      - 6379:6379
    command: ["redis-server"]
    networks:
      - connections
  consumers-g1:
    build: ./consumers
    container_name: consumers-g1
    environment:
      - REDIS_HOST=redis
    command: "./run.sh"
    expose:
      - 6379:6379
    networks:
      - connections
    restart: always
    tty: true
networks:
  connections:
    name: connections
    driver: bridge
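The hostname form matters because walrus's Database is a thin wrapper around redis-py's client and takes a plain host, not a URL, so redis://redis gets looked up literally as a DNS name and fails. A minimal sketch of the consumer side, using the same REDIS_HOST variable as above:
import os
from walrus import Database  # wraps redis.Redis

# REDIS_HOST should be a bare hostname such as "redis" (the compose
# service name), not a redis:// URL.
rdb = Database(host=os.getenv("REDIS_HOST", "localhost"), port=6379)
rdb.ping()  # raises a ConnectionError if the server is unreachable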
I have a small Python backend and a MariaDB database, separated into Docker services.
The docker-compose looks like this:
version: '3.5'
networks:
  web:
    name: web
    external: true
  wsm:
    name: wsm
    internal: true
volumes:
  wsm-partsfinder-db:
    name: wsm-partsfinder-db
services:
  wsmbackend:
    build:
      context: .
      dockerfile: ./docker/Dockerfile
    container_name: wsm-file-parts-backend
    restart: always
    depends_on:
      - wsmdb
    ports:
      - "8888:8888"
    networks:
      - web
      - wsm
  wsmdb:
    container_name: wsmdb
    image: mariadb:10.7.1
    command: --default-authentication-plugin=mariadb_native_password
    restart: unless-stopped
    environment:
      MARIADB_ROOT_PASSWORD: password
      MARIADB_USER: wsm
      MARIADB_PASSWORD: password
      MARIADB_DATABASE: wsm_parts
    volumes:
      - wsm-partsfinder-db:/var/lib/mysql
    networks:
      - wsm
    ports:
      - "4485:3306"
The Dockerfile used for the wsmbackend service looks like this:
FROM python
RUN apt-get update -y
RUN apt-get upgrade -y
COPY . /wsm
WORKDIR /wsm
RUN pip install -r requirements.txt
RUN yoyo apply --database mysql://wsm:password@wsmdb:4485/wsm_parts ./migrations
EXPOSE 8888
CMD ["/bin/sh", "-c", "python main.py"]
I got an error in yoyo apply ...
What is the issue in this case?
Thanks in advance
You are not able to run queries against the database in your build stage, because your database container is not started at that point.
The RUN statement is only executed during the build stage. You need to move it to the CMD (entrypoint), so it is executed when the container and the database are started.
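For example (a sketch only, not the poster's actual setup): inside the compose network the database is reachable as wsmdb on the container port 3306, not the published host port 4485, and the migration can be chained into the existing CMD so it runs at container start:
# Run migrations at startup, then start the app (sketch).
CMD ["/bin/sh", "-c", "yoyo apply --database mysql://wsm:password@wsmdb:3306/wsm_parts ./migrations && python main.py"]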
I am trying to prepare a docker-compose file that stands up 2 containers: Postgres and a Python app inside an Alpine image. Just consider before reading that I need to use Python inside Alpine.
My Dockerfile is:
FROM python:3
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD [ "python", "./app.py" ]
My app.py file:
import psycopg2
from config import config

def connect():
    """ Connect to the PostgreSQL database server """
    conn = None
    try:
        # read connection parameters
        params = config()
        # connect to the PostgreSQL server
        print('Connecting to the PostgreSQL database...')
        conn = psycopg2.connect(**params)
        # create a cursor
        cur = conn.cursor()
        # execute a statement
        print('PostgreSQL database version:')
        cur.execute("SELECT * FROM my_table;")
        # display the PostgreSQL database server version
        db_version = cur.fetchone()
        print(db_version)
        # close the communication with the PostgreSQL
        cur.close()
    except (Exception, psycopg2.DatabaseError) as error:
        print(error)
    finally:
        if conn is not None:
            conn.close()
            print('Database connection closed.')

if __name__ == '__main__':
    connect()
I started the Python container with this command:
docker run -it my_image app.py
I separately started the two containers (postgres and python) and got it working. However, my container runs only once; its job is just to run a select query against the PostgreSQL database.
That was the first part. My main goal is below.
For simplicity I prepared a docker-compose.yml file:
version: '3'
services:
  python:
    image: python
    build:
      context: .
      dockerfile: Dockerfile
  postgres:
    image: postgres:${TAG:-latest}
    build:
      context: .
    environment:
      POSTGRES_PASSWORD: example
    ports:
      - "5435:5432"
    networks:
      - postgres
networks:
  postgres:
My dockerfile is here
When I type docker-compose up, my postgres starts but python exits with code 0.
my_file_python_1 exited with code 0
What should I do to get a standalone, continuously running container for the Python app with docker-compose? It always runs only once. I can make it run constantly with
docker run -d -it my_image app.py
But my goal is to make it with docker-compose.
version: '3'
services:
  python:
    image: python
    build:
      context: .
      dockerfile: Dockerfile
    tty: true
  postgres:
    image: postgres:${TAG:-latest}
    build:
      context: .
    environment:
      POSTGRES_PASSWORD: example
    ports:
      - "5435:5432"
    networks:
      - postgres
networks:
  postgres:
An exit code of 0 means the container finished its execution and exited, so you have to run the process in the foreground to keep the container running. If the exit code is anything other than 0, the container is exiting because of a code issue. So try running some foreground process.
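In this setup, "a foreground process" simply means app.py should not return; for example, the existing connect() could be called in a loop so the container has something long-running to keep it alive (a sketch, assuming app.py and config.py are importable as shown above):
import time
from app import connect  # connect() as defined in app.py above

if __name__ == '__main__':
    while True:          # keep a foreground process alive so the
        connect()        # container does not exit with code 0
        time.sleep(60)   # polling interval is arbitrary (sketch only)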
Could you check whether the container keeps running if you enable the tty option (see reference) in your docker-compose.yml file?
I am trying to test my web application using a docker container, but I am not able to see it when I try to access it through my browser.
The docker compose file looks like
version: '2'
services:
  db:
    image: postgres
    volumes:
      - ~/pgdata:/var/lib/postgresql/data/pgdata
    environment:
      POSTGRES_PASSWORD: "dbpassword"
      PGDATA: "/var/lib/postgresql/data/pgdata"
    ports:
      - "5432:5432"
  web:
    build:
      context: .
      dockerfile: Dockerfile-web
    ports:
      - "5000:5000"
    volumes:
      - ./web:/web
    depends_on:
      - db
  backend:
    build:
      context: .
      dockerfile: Dockerfile-backend
    volumes:
      - ./backend:/backend
    depends_on:
      - db
The Dockerfile-web looks like:
FROM python
ADD web/requirements.txt /web/requirements.txt
ADD web/bower.json /web/bower.json
WORKDIR /web
RUN \
wget https://nodejs.org/dist/v4.4.7/node-v4.4.7-linux-x64.tar.xz && \
tar xJf node-*.tar.xz -C /usr/local --strip-components=1 && \
rm -f node-*.tar.xz
RUN npm install -g bower
RUN bower install --allow-root
RUN pip install -r requirements.txt
RUN export MYFLASKAPP_SECRET='makethewebsite'
CMD python manage.py server
The IP for my docker machine is
docker-machine ip
192.168.99.100
But when I try
http://192.168.99.100:5000/
in my browser it just says that the site cannot be reached.
It seems like it is refusing the connection.
When I hit my database in the browser, I can see the database respond in its log:
http://192.168.99.100:5432/
So I tried wget inside the container and got
$ docker exec 3bb5246a0623 wget http://localhost:5000/
--2016-07-23 05:25:16-- http://localhost:5000/
Resolving localhost (localhost)... ::1, 127.0.0.1
Connecting to localhost (localhost)|::1|:5000... failed: Connection refused.
Connecting to localhost (localhost)|127.0.0.1|:5000... connected.
HTTP request sent, awaiting response... 200 OK
Length: 34771 (34K) [text/html]
Saving to: ‘index.html.1’
0K .......... .......... .......... ... 100% 5.37M=0.006s
2016-07-23 05:25:16 (5.37 MB/s) - ‘index.html.1’ saved [34771/34771]
Anyone know how I can get my web application to show up through my browser?
I had to enable external visibility for my Flask application.
You can see it here:
Can't connect to Flask web service, connection refused
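In Flask terms, enabling external visibility means binding the development server to 0.0.0.0 instead of the default 127.0.0.1; that matches the wget output above, where the server answers on the container's loopback but cannot be reached through the published 5000:5000 mapping. The manage.py server command from the question isn't shown, so this is a plain-Flask sketch of the idea:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello"

if __name__ == "__main__":
    # Bind to all interfaces so the published port mapping can reach the
    # server; the default 127.0.0.1 only accepts in-container traffic.
    app.run(host="0.0.0.0", port=5000)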