Cannot connect to Docker PostgreSQL container with psycopg2 - Python

I am running a PostgreSQL database container with the following command:
docker run --name db -p 5432:5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -v pg
Of course I have changed 'localhost' to 'db', since I am trying to connect to this container.
When I try to connect to the container database, I get the following error:
psycopg2.OperationalError: could not translate host name "db" to address: Name or service not known
I can't use Docker Compose in this context (I know how to run it, though).
What else do I need to add to my docker command so that I can connect from Python?

Of course I have changed 'localhost' to 'db', since I am trying to connect to this container.
No, you shouldn't: your docker run command publishes port 5432 to the host machine, as specified by the flag -p 5432:5432.
So if you are trying to connect to the container from your host machine, you will use the host localhost.
I think you are confusing a single Docker container with Docker networking, where multiple containers communicate with each other, as is the case with docker-compose.
With docker-compose, when you have multiple services running, they can communicate with each other using the container names as hostnames. Similarly, if you have a network between Docker containers, they can reach each other using the container name as the host.
So if this were docker-compose, with the database running in one container and your app in another, then you would replace localhost with db.
Hope that clarifies things.

If your Python program is running on the Docker host, then you don't want to "of course" change localhost to db in your connection string, since (a) Docker doesn't change your host DNS settings (b) you're using -p to publish the service running on port 5432 to the host on port 5432.
You would only use the name db from another Docker container running in the same Docker network.
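As a minimal sketch of the host-machine case (the database name and credentials are assumptions matching the docker run command above):

import psycopg2

# Running on the Docker host: -p 5432:5432 publishes the container's port
# on the host, so "localhost" is the right hostname here. Use "db" only
# from another container attached to the same Docker network.
conn = psycopg2.connect(
    host="localhost",
    port=5432,
    user="postgres",
    password="postgres",
    dbname="postgres",  # assumed default database
)
conn.close()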

Related

Python can't connect to MySQL using Docker

I'm new to Docker, and I'm trying to use MySQL on my Mac without installing it, so I wanted to use Docker, but I keep getting this error:
sqlalchemy.exc.InterfaceError: (mysql.connector.errors.InterfaceError) 2003: Can't connect to MySQL server on 'localhost:3306' (61 Connection refused)
Reproduce the problem:
I pulled mysql/mysql-server:
docker pull mysql/mysql-server
Inside the terminal:
docker run --name=mydb -d mysql/mysql-server
Then I changed its password to 123456.
Inside the code database.py:
SQLALCHEMY_DATABASE_URL = 'mysql+mysqlconnector://mydb:123456#localhost:3306/authorize_info'
At this step I get this error:
sqlalchemy.exc.InterfaceError: (mysql.connector.errors.InterfaceError) 2003: Can't connect to MySQL server on 'localhost:3306' (61 Connection refused)
I also got this error when I changed 'localhost' to '172.17.0.1'.
Where am I wrong, and how do I fix it? Please help me.
When you run a container, it doesn't automatically map its ports to the ports on your machine. You need to tell docker what the port in the container should be mapped to on your local machine by using the -p option. If you want to be able to access port 3306 in the container by using port 3306 on your local machine, you should start the container like this
docker run --name=mydb -d -p 3306:3306 mysql/mysql-server
The first number is the port on your local machine and the second number is the port in the container.
There is one more thing: you can use the name of a container to connect if your app is on the same network as the database. If not, you should use the host IP to connect to the database.
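As a hedged sketch of the corrected connection string: note that the URL in the question uses # where the user:password@host separator should be @ (a # starts a URL fragment), and the username should be a MySQL user such as root, not the container name. The credentials here are assumptions:

from sqlalchemy import create_engine

# Assumed credentials: root / 123456 (the password set on the container).
# 'localhost:3306' works from the host because of -p 3306:3306.
SQLALCHEMY_DATABASE_URL = 'mysql+mysqlconnector://root:123456@localhost:3306/authorize_info'
engine = create_engine(SQLALCHEMY_DATABASE_URL)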

Port not exposed but still reachable on internal docker network

(I'm having the inverse of the usual problem, where a port is exposed but isn't reachable.)
In my case I have 2 containers on the same network. One is an Alpine Python running a Python Flask app. The other is a barebones Ubuntu 18.04. The services are initialised basically like this:
docker-compose.yml:
version: '3'
services:
  pythonflask:
    build: someDockerfile # from python:3.6-alpine
    restart: unless-stopped
  ubuntucontainer:
    build: someOtherDockerfile # from ubuntu:18.04
    depends_on:
      - pythonflask
    restart: unless-stopped
The Python Flask app runs on port 5000.
Notice the lack of expose: - 5000 in the docker-compose.yml file.
The problem is that I'm able to get a correct response when cURLing http://pythonflask:5000 from inside ubuntucontainer.
Steps:
$ docker exec -it ubuntucontainer /bin/bash
...and then within the container...
root@ubuntucontainer:/# curl http://pythonflask:5000/
...correctly returns my response from the Flask app.
However from my machine running docker:
$ curl http://localhost:5000/
Doesn't return anything (as expected).
As I test different ports, they get automatically exposed each time. What is doing this?
Connectivity between containers is achieved by placing the containers on the same docker network and communicating over the container ip and port (rather than the host published port). So what does expose do then?
Expose is documentation
Expose in docker is used by image creators to document the expected port that the application will listen on inside the container. With the exception of some tools and a flag in docker that uses this metadata documentation, it is not used to control access between containers or modify docker's networking. Applications may be reconfigured at runtime to listen to a different port, and you can connect to ports that have not been exposed.
For DNS lookups between containers, the network needs to be user created, not one of the default networks from docker (e.g. DNS is not enabled in the default bridge network named "bridge"). With DNS, you can lookup the container name, service name (from a compose file), and any network aliases created for that container on that network.
The other half of the equation is "publish" in docker. This creates a mapping from the host to the container to allow external access. It is implemented with a proxy process that runs on the host and forwards new connections. Because of the implementation, you can publish a port on the host even if the container is not listening on the port, though you would receive an error when trying to connect to that port in that scenario.
The lack of expose: ... just means that there is no port exposed from the service group you defined in your docker-compose.yml
Within the images you use, there are still exposed ports which are reachable from within the network that is automatically created by docker-compose.
That is why you can reach one container from within another. In addition, every container can be accessed via its service name from the docker-compose.yml on the internal network.
You should not be able to access Flask from your host (http://localhost:5000).
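A small Python probe makes the distinction concrete; this is only a sketch, run from the Docker host, using the service names from the compose file above:

import socket

def can_connect(host, port, timeout=2):
    # Returns True if a TCP connection to host:port can be opened.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# On the Docker host this should print False: the port is not published.
print(can_connect("localhost", 5000))
# Run inside ubuntucontainer, can_connect("pythonflask", 5000) should return
# True, because both containers share the network docker-compose created.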

Different errors coming from different host values for mysql config in docker project

I'm contributing to a new project, but getting info about the setup/build is difficult. I can get through these steps in the build process:
$ docker-machine create -d virtualbox dev;
$ eval $(docker-machine env dev)
$ docker-compose build
$ docker-compose up -d
The next command fails:
$ docker-compose run web /usr/local/bin/python manage.py migrate
...with this error:
(2005, "Unknown MySQL server host 'mysql' (0)")
When I change the mysql HOST from mysql to localhost, I get a new error:
(2002, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)")
I've read about each error, but the proposed solutions aren't relevant to my code (besides the suggestion of setting the HOST to localhost). Which host value is correct and what should be done about the respective error?
I'm not actually sure if mysql is running, where it should be running, and how to check its status.
I suspect that mysql is in another container, and the project container is called "web" in the docker-compose.yml.
When you change mysql to localhost, it will attempt to connect to a local MySQL inside the web container (via a Unix socket), which of course doesn't exist, because MySQL has its own container, which I suspect is called mysql in docker-compose.yml.
To view the running containers you can use sudo docker ps; if the mysql container is stopped or restarting, you can investigate using docker logs <mysql container name/ID>.
If that's the case, look for mounts in the docker-compose.yml to investigate further.
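If the Django settings use a standard DATABASES block, the fix is to point HOST at the compose service name; a sketch with assumed credentials and database name:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'appdb',      # assumed database name
        'USER': 'appuser',    # assumed credentials
        'PASSWORD': 'secret',
        'HOST': 'mysql',      # the MySQL service name from docker-compose.yml
        'PORT': '3306',
    }
}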

Accessing Docker Container on Centos Server

I've managed to deploy a Django app inside a docker container on my personal Mac using localhost with Apache. For this, I use docker-compose with the build and up commands. I'm trying to run the same Django app on a CentOS server using a docker image generated on my local machine. Apache is also running on the server on port 90.
docker run -it -d --hostname xxx.xxx.xxx -p 9090:9090 --name test idOfImage
How can I access this container with Apache using the hostname and port number in the URL? Any help would be greatly appreciated. Thanks.
From other containers, the best way to access this container is to attach both to the same network and use the container's --name as a DNS name together with the internal port (the second port from the -p option, which isn't strictly required for this case). From outside a container, or from other hosts, use the host's IP address or DNS name and the published port (the first port from the -p option).
The docker run --hostname option isn't especially useful; the only time you'd want to specify it is if you have some magic licensed software that only runs if it has a particular hostname.
Avoid localhost in a Docker context, except for the very specific case where you know you're running a process on the host system outside a container and you're trying to access a container's published port or some other service running on the host. Don't use "localhost" as a generic term; it has a very specific context-dependent meaning (every process believes it's running "on localhost").
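For example, an external client would hit the published port on the server's address; a sketch, keeping the placeholder hostname from the question:

import requests

# The first number in -p 9090:9090 is the host port, so clients outside
# Docker use the server's DNS name or IP plus that port.
resp = requests.get("http://xxx.xxx.xxx:9090/")
print(resp.status_code)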

Connect my docker to external docker

I have a standalone Redis container; now I want to connect to it from inside another Docker container. But I can't seem to connect successfully. Below is the list of Docker containers.
As you can see, my container flexapi_api_1 will try to connect to localredis, but I always get a connection timeout. When I try docker inspect localredis, I get the result shown below.
I'm not sure whether I should use the IP 172.17.0.2 or 0.0.0.0 as the host IP for Redis. Is there a way to connect my container to another external container?
You can connect from one container to another using the container name as long as the containers are connected to the same network.
Create a network and connect the container to it:
docker network create mynet
docker network connect mynet localredis
docker network connect mynet flexapi_api_1
Now flexapi_api_1 should be able to connect to Redis via the hostname localredis.
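A minimal redis-py sketch of that connection from inside flexapi_api_1, assuming Redis listens on its default port 6379:

import redis

# On the shared "mynet" network, the container name "localredis" resolves
# as a hostname; no published port is needed for container-to-container traffic.
r = redis.Redis(host="localredis", port=6379, socket_connect_timeout=5)
r.ping()  # raises redis.exceptions.ConnectionError if Redis is unreachable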
