This question already has an answer here:
Connect to a PostgreSQL database on a Docker container
(1 answer)
Closed 1 year ago.
I am fairly new to Docker and I do not know what's causing my Python script to fail when it runs on Docker.
Here is my docker-compose.yml:
version: "3.6"
services:
  app:
    build: ./app/
  db:
    build: ./database/
Here is the error:
File "/usr/local/lib/python3.9/site-packages/psycopg2/__init__.py", line 127, in connect
app_1 | conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
app_1 | sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: Connection refused
app_1 | Is the server running on host "127.0.0.1" and accepting
app_1 | TCP/IP connections on port 5432?
upon running docker-compose ps:
Name Command State Ports
------------------------------------------------------------------------------------
542132_app_final_db_1 docker-entrypoint.sh postgres Up 5432/tcp
app_1 python abc ... Exit 1
How do I solve it? Please help. I am fairly new to Docker/Docker-compose. Thanks!
I suppose you did not configure the Docker network as a host network (where containers live on the same network interface as the host machine). You either need to connect the containers via a link in the docker-compose.yml file, or you need to put the containers into a custom network (not the default one). I just read that a default network is created automatically, so you might not need the link.
Furthermore, your Python app needs to connect to the hostname of the database, i.e. the linked name.
In your case you need to take the following two measures for the standard solution:
Config connectivity between app and db
version: "3.6"
services:
  app:
    build: ./app/
    links:
      - db:database
  db:
    build: ./database/
Configure a proper DB connection in the app
In your app you need to connect to database:5432, not to localhost:5432.
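For example, the connection string in the app would point at the linked name instead of localhost. A sketch, where the user, password and database name are placeholders rather than values from the question:

```python
# "database" is the link alias from the compose file above;
# credentials and db name are placeholder assumptions.
DB_HOST = "database"  # the linked name, NOT localhost
DB_PORT = 5432        # Postgres' port inside the Docker network
DATABASE_URL = f"postgresql://postgres:password@{DB_HOST}:{DB_PORT}/mydb"
```

You would then pass DATABASE_URL to sqlalchemy.create_engine as usual.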
Alternate solutions
Use networks to create a custom network, so you have DNS resolution and can use db:5432 without the link
Use the host network (docker run --network host). In Compose this is network_mode: host
See https://docs.docker.com/compose/networking/
Related
This question has been asked already, for example:
Docker: Is the server running on host "localhost" (::1) and accepting TCP/IP connections on port 5432?
but I still can't figure out how to properly connect the application to the database.
Files:
Dockerfile
FROM python:3.10-slim
WORKDIR /app
COPY . .
RUN pip install --upgrade pip
RUN pip install "fastapi[all]" sqlalchemy psycopg2-binary
docker-compose.yml
version: '3.8'
services:
  ylab:
    container_name: ylab
    build:
      context: .
    entrypoint: >
      sh -c "uvicorn main:app --reload --host 0.0.0.0"
    ports:
      - "8000:8000"
  postgres:
    container_name: postgr
    image: postgres:15.1-alpine
    environment:
      POSTGRES_DB: "fastapi_database"
      POSTGRES_PASSWORD: "password"
    ports:
      - "5433:5432"
main.py
import fastapi as _fastapi
import sqlalchemy as _sql
import sqlalchemy.ext.declarative as _declarative
import sqlalchemy.orm as _orm

DATABASE_URL = "postgresql://postgres:password@localhost:5433/fastapi_database"
engine = _sql.create_engine(DATABASE_URL)
SessionLocal = _orm.sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = _declarative.declarative_base()

class Menu(Base):
    __tablename__ = "menu"
    id = _sql.Column(_sql.Integer, primary_key=True, index=True)
    title = _sql.Column(_sql.String, index=True)
    description = _sql.Column(_sql.String, index=True)

app = _fastapi.FastAPI()

# Create table 'menu'
Base.metadata.create_all(bind=engine)
This works if I host only the postgres database in the container and my application is running locally, but if the database and application are in their own containers, no matter how I try to change the settings, the error always comes up:
"sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at "localhost" (127.0.0.1), port 5433 failed: Connection refused
ylab | Is the server running on that host and accepting TCP/IP connections?
ylab | connection to server at "localhost" (::1), port 5433 failed: Cannot assign requested address
ylab | Is the server running on that host and accepting TCP/IP connections?"
The error comes up in
Base.metadata.create_all(bind=engine)
I also tried
DATABASE_URL = "postgresql://postgres:password@postgres:5433/fastapi_database"
but still error:
"sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at "postgres" (172.23.0.2), port 5433 failed: Connection refused
ylab | Is the server running on that host and accepting TCP/IP connections?"
There is some kind of config file or something mentioned in the answer above but I can't figure out how to manage that config.
You should update your config to reference the service name of postgres and the port the database runs on inside the container:
DATABASE_URL = "postgresql://postgres:password@postgres:5432/fastapi_database"
When your app was running locally on your machine with the database in a container, localhost:5433 worked because port 5433 on the host was mapped to 5432 inside the db container.
When you then put the app in its own container but still refer to localhost, it looks for the Postgres database inside the same container the app is in, which is not right.
When you use the right service name but with port 5433, you also get an error, since port 5433 is only mapped on the host running the containers, not from inside the containers themselves.
So what you want to do in the app container is target the database service on port 5432, as that's the port Postgres runs on inside its container.
You also probably want to look at a wait-for-it style script or healthcheck so the FastAPI app does not start until the DB is up and ready.
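A plain depends_on only waits for the container to start, not for Postgres to accept connections, so a small retry loop in the app is a common workaround. A minimal sketch; the attempt count and delay are arbitrary choices, not from the question:

```python
import time

def wait_for(connect, attempts=10, delay=1.0, retry_on=(Exception,)):
    """Call `connect` until it succeeds, retrying on failure.

    `connect` is any zero-argument callable, e.g.
    lambda: engine.connect() for a SQLAlchemy engine.
    """
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except retry_on:
            if attempt == attempts:
                raise  # still failing after the last attempt
            time.sleep(delay)
```

In main.py you could then wrap the first call that touches the database, e.g. wait_for(lambda: Base.metadata.create_all(bind=engine)).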
I have a Mosquitto MQTT broker running in Docker. I am starting it in a docker compose file. Now I am trying to connect to the broker; it was working locally. When I try to connect from a docker container it is not working, although I have changed the host/broker address from localhost to the compose service name. How can I make it work?
Here is what I have tried.
docker-compose.yml (edited):
version: '3.5'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - pgdatapg:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    networks:
      - postgres
    restart: unless-stopped
  mosquitto:
    image: eclipse-mosquitto
    networks:
      - postgres
    ports:
      - "1883:1883"
    volumes:
      - ./conf:/mosquitto/conf
      - ./data:/mosquitto/data
      - ./log:/mosquitto/log
  app:
    restart: always
    build: .
    depends_on:
      - db
    networks:
      - postgres
networks:
  postgres:
    driver: bridge
volumes:
  pgdatapg:
And part of my Python:
from paho.mqtt import client as mqtt_client  # import implied by the snippet

broker = "mosquitto"
port = 1883
topic = "py/mqtt/test"

def connect_mqtt():
    def on_connect(client, userdata, flags, rc):
        if rc == 0:
            print("Connected to MQTT Broker!")
        else:
            print("Failed to connect, return code %d" % rc)

    client = mqtt_client.Client(client_id)
    client.on_connect = on_connect
    client.connect(broker, port)
    return client
Here is the conf file
persistence true
persistence_location /mosquitto/data/
log_dest file /mosquitto/log/mosquitto.log
listener 1883
## Authentication ##
allow_anonymous false
password_file /mosquitto/conf/mosquitto.conf
I am getting the following error
| ConnectionRefusedError: [Errno 111] Connection refused
When running with docker compose, the containers started as services are by default placed on a dedicated bridge network named after the project (which defaults to the name of the directory holding the docker-compose.yml file), e.g. a network called foo_default.
https://docs.docker.com/compose/networking/
Services are only accessible from other containers connected to the same network (and from the host via whatever ports are published).
So if you only have mosquitto in the docker-compose.yml then no other containers will be able to connect to it. If you include the container that the python is running in as a service in the compose file then it will be able to connect.
You can also change the networks containers connect to in the compose file.
https://docs.docker.com/compose/networking/#specify-custom-networks
EDIT:
You have forced the mosquitto service to use the host network (network_mode: host), so it's not on the same postgres network as the app. Containers can be on multiple networks, but mosquitto should not be bound to the host network for all this to work.
EDIT2:
You are also not setting a username/password in app even though you require authentication in mosquitto.conf, and you are pointing the password file at the config file itself, which just won't work. I suggest you remove the last line of the mosquitto.conf file and set allow_anonymous true.
P.S. I suspect the mosquitto container currently isn't actually starting, due to that last line of the config file.
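Putting both suggestions together, a minimal mosquitto.conf that lets the app connect anonymously might look like this (a sketch; if you want authentication later, point password_file at a passwords file generated with mosquitto_passwd, not at the config itself):

```
persistence true
persistence_location /mosquitto/data/
log_dest file /mosquitto/log/mosquitto.log
listener 1883

allow_anonymous true
```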
I am trying to connect to a remote Oracle database from inside a docker container, but fail with the following error:
oracledb.exceptions.DatabaseError: ORA-12154: TNS:could not resolve the connect identifier specified
The connection works from my host machine, and I've added port mapping and network to my docker-compose:
ports:
- "80:80"
network_mode: "host"
I am using Python and the oracledb library for the connection; this is the code:
import oracledb

io = '/usr/lib/oracle/21/client64/lib'
oracledb.init_oracle_client(lib_dir=io)
dsn = 'DBNAME:1521/XXX'
connection = oracledb.connect(
    user="my_user",
    password="my_pwd",
    dsn=dsn
)
At this point, I don't fully understand if there is even something that I could do, or if this error originates from the database settings, and I should contact the DB admins.
Thanks!
I am trying to run integration tests (in Python) which depend on MySQL. Currently they depend on a local MySQL instance, but I want them to depend on MySQL running in Docker.
Contents of Dockerfile:
FROM continuumio/anaconda3:4.3.1
WORKDIR /opt/workdir
ADD . /opt/workdir
RUN python setup.py install
Contents of Docker Compose:
version: '2'
services:
  mysql:
    image: mysql:5.6
    container_name: test_mysql_container
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=My_Database
      - MYSQL_USER=my_user
      - MYSQL_PASSWORD=my_password
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    expose:
      - "3306"
  my_common_package:
    image: my_common_package
    depends_on:
      - mysql
    restart: always
    links:
      - mysql
volumes:
  db_data:
Now, I try to run the tests in my package using:
docker-compose run my_common_package python testsql.py
and I receive the error
pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on
'localhost' ([Errno 99] Cannot assign requested address)")
docker-compose will by default create a virtual network where all the containers/services in the compose file can reach each other by IP address. By using links, depends_on or network aliases they can reach each other by host name. In your case the host name is the service name, but this can be overridden. (see: docs)
Your script in the my_common_package container/service should then connect to mysql on port 3306 according to your setup (not localhost on port 3306).
Also note that using expose is only necessary if the Dockerfile for the service doesn't have an EXPOSE statement. The standard mysql image already has one.
If you want to map a container port to localhost you need to use ports, but only do this if it's necessary.
services:
  mysql:
    image: mysql:5.6
    container_name: test_mysql_container
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=My_Database
      - MYSQL_USER=my_user
      - MYSQL_PASSWORD=my_password
    volumes:
      - db_data:/var/lib/mysql
    ports:
      - "3306:3306"
Here we are saying that port 3306 in the mysql container should be mapped to localhost on port 3306.
Now you can connect to mysql using localhost:3306 outside of docker. For example you can try to run your testsql.py locally (NOT in a container).
Container to container communication will always happen using the host name of each container. Think of containers as virtual machines.
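The inside-vs-outside distinction can be captured in one place in the test setup. A sketch: mysql_params is a hypothetical helper, and the credential values simply mirror the compose file above.

```python
def mysql_params(inside_docker_network: bool) -> dict:
    """Connection parameters for the compose setup above.

    Inside the compose network the service name `mysql` resolves via
    Docker's DNS; outside it you go through the host-mapped port
    (which requires the 3306:3306 ports mapping shown above).
    """
    return {
        "host": "mysql" if inside_docker_network else "localhost",
        "port": 3306,
        "user": "my_user",
        "password": "my_password",
        "database": "My_Database",
    }
```

In the my_common_package container you would connect with pymysql.connect(**mysql_params(True)); when running testsql.py directly on the host, use mysql_params(False).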
You can even find the network docker-compose created using docker network list:
1b1a54630639 myproject_default bridge local
82498fd930bb bridge bridge local
.. then use docker network inspect <id> to look at the details.
Assigned IP addresses to containers can be pretty random, so the only viable way for container to container communication is using hostnames.
I have set up SSH port forwarding which listens on port 2000 on my local machine. So the service, a Neo4j server, is running on localhost:2000. Locally, I am able to connect to Neo4j with Neo4j Desktop using bolt://localhost:2000 and can look at the data.
But I am not able to connect with host bolt://localhost:2000 from inside a docker container.
I looked at the answers here
From inside of a Docker container, how do I connect to the localhost of the machine?
I added extra hosts in docker-compose.yml
flask_service:
build:
context: ./project
dockerfile: my_dockerfile
container_name: flask_container
stdin_open: true
tty: true
ports:
- 5000:5000
extra_hosts:
- "myhost:175.1.344.136"
175.1.344.136 being my host IP.
I have used both bolt://175.1.344.136:2000 and bolt://myhost:2000 inside the container, but it is not connecting. Also, I want to know which is the right way: bolt://175.1.344.136:2000 or bolt://myhost:2000?
I get an error
2021-05-25T10:53:55+0000.060 [INFO] neo4j_proxy.__init__:74 (8:Thread-11) - NEO4J endpoint: bolt://myhost:2000
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/neobolt/direct.py", line 831, in _connect
s.connect(resolved_address)
ConnectionRefusedError: [Errno 111] Connection refused
I am using macOS. Please help me resolve this.
Thanks in advance!
You need to use the container service name neo4j and the default port 7687 when connecting from another container on the same Docker host:
bolt://neo4j:7687
version: "3.8"
services:
  neo4j:
    image: neo4j:3.4.4-enterprise
    ports:
      - 7474:7474
      - 7687:7687
  my_service:
    build:
      context: .
    environment:
      DATABASE_URL: "bolt://neo4j:7687"
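On the Python side the app can assemble the same URL from the service name. A sketch; the variable names are illustrative, not from the question:

```python
NEO4J_HOST = "neo4j"  # compose service name, resolved by Docker's internal DNS
NEO4J_PORT = 7687     # bolt port inside the compose network
DATABASE_URL = f"bolt://{NEO4J_HOST}:{NEO4J_PORT}"
```

This URL matches the DATABASE_URL environment variable set in the compose file, so the app works the same whether it reads the variable or builds the string itself.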