Python can't connect to MySQL using Docker

I'm new to Docker, and right now I'm trying to use MySQL on my Mac without installing it, so I wanted to use Docker, but I keep getting this error
sqlalchemy.exc.InterfaceError: (mysql.connector.errors.InterfaceError) 2003: Can't connect to MySQL server on 'localhost:3306' (61 Connection refused)
Reproduce the problem:
I pulled mysql/mysql-server:
docker pull mysql/mysql-server
Inside the terminal:
docker run --name=mydb -d mysql/mysql-server
Then I changed its password to 123456
Inside database.py:
SQLALCHEMY_DATABASE_URL = 'mysql+mysqlconnector://mydb:123456@localhost:3306/authorize_info'
At this step I get this error:
sqlalchemy.exc.InterfaceError: (mysql.connector.errors.InterfaceError) 2003: Can't connect to MySQL server on 'localhost:3306' (61 Connection refused)
I also got this error when I changed 'localhost' to '172.17.0.1'.
Where am I going wrong, and how can I fix it? Please help me.

When you run a container, it doesn't automatically map its ports to ports on your machine. You need to tell Docker what the port in the container should be mapped to on your local machine by using the -p option. If you want to be able to access port 3306 in the container via port 3306 on your local machine, you should start the container like this:
docker run --name=mydb -d -p 3306:3306 mysql/mysql-server
The first number is the port on your local machine and the second number is the port in the container.
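Once the port is published, the connection URL from the question should work from the host with localhost (a minimal sketch; it assumes a root user with the 123456 password and the authorize_info database from the question, and that this user is allowed to connect from outside the container):
from sqlalchemy import create_engine, text

# the published port makes the container's MySQL reachable on localhost:3306
engine = create_engine('mysql+mysqlconnector://root:123456@localhost:3306/authorize_info')
with engine.connect() as conn:
    print(conn.execute(text('SELECT 1')).scalar())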

There is one more thing: you can use the name of a container as the host if your app is in the same Docker network as the database. If not, you should use the host IP to connect to the database.
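For example (a sketch; the network name and the app image are made up):
docker network create mynet
docker run --name=mydb --network=mynet -d mysql/mysql-server
docker run --name=myapp --network=mynet -d my-python-app
Inside myapp, the database is then reachable using the hostname mydb instead of an IP address.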

Related

Cannot connect to Docker PostgreSQL container with psycopg2

I am running a PostgreSQL database container with the following command:
docker run --name db -p 5432:5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -v pg
Of course I have changed the 'localhost' to 'db' since I am trying to connect with this container.
When I try to connect to the container database I get the following error:
psycopg2.OperationalError: could not translate host name "db" to address: Name or service not known
I can't use Docker Compose in this context (I know how to run it, though).
What else do I need to add to my docker command so that I can connect from Python?
Of course I have changed the 'localhost' to 'db' since I am trying to connect with this container.
No, you don't: your docker run command is publishing port 5432 to the host machine, as stated by the flag -p 5432:5432.
So if you are trying to connect to the container from your host machine, you will use the host localhost.
I think you are confusing plain Docker with Docker networking, which comes into play when multiple containers try to communicate with each other, as is the case with docker-compose.
In the case of docker-compose, when you have multiple services running, they can communicate with each other using the container names as hosts. Similarly, if you have a network between Docker containers, they can communicate with each other using the container name as the host.
So if it were docker-compose, with the database running in one container and your app in another, then you would replace localhost with db.
Hope that clarifies things.
If your Python program is running on the Docker host, then you don't want to "of course" change localhost to db in your connection string, since (a) Docker doesn't change your host DNS settings (b) you're using -p to publish the service running on port 5432 to the host on port 5432.
You would only use the name db from another Docker container running in the same Docker network.
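For example, connecting from the host to the container started above might look like this (a minimal sketch; user and password come from the docker run flags above, and the database name defaults to the user name):
import psycopg2

# 'localhost' works because -p 5432:5432 publishes the container port on the host
conn = psycopg2.connect(host='localhost', port=5432,
                        user='postgres', password='postgres', dbname='postgres')
cur = conn.cursor()
cur.execute('SELECT version()')
print(cur.fetchone())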

Cannot connect to a remote database instance from my Docker container, but can connect from my host computer

I have a problem connecting to remote database instances from a docker container.
I have a Docker container with a simple piece of Python code that connects to a remote MongoDB instance:
from pymongo import MongoClient

client = MongoClient('mongodb-example_conn_string.com')
db = client.test_db
collection = db.test_collection
print(collection.find_one())
I can run this piece of code from my host machine (a laptop running Linux Mint 20) and it prints the result as expected.
When I build a Docker image (python:3.6.10-alpine) for this script and docker run the image, I get an error message. The container is running on the host laptop.
e.g.
docker build . -t py_connection_test
docker run --rm py_connection_test run
I get this error
pymongo.errors.ServerSelectionTimeoutError: mongodb-example_conn_string.com:27017: [Errno -2] Name does not resolve, Timeout: 30s, Topology Description: <TopologyDescription id: 60106f40288b81e007fe75a8, topology_type: Single, servers: [<ServerDescription ('mongodb-example_conn_string.com', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('mongodb-example_conn_string.com:27017: [Errno -2] Name does not resolve',)>]>
The MongoDB remote instance is an internal database at work and a VPN (Using OpenVPN) is required to access it. I've used traceroute on host machine and docker container to confirm that network traffic is routed through the VPN, all seems to be fine there.
I've tried docker run with the flag
--network="host"
but the same thing happens.
I'm scratching my head at this. Why does the same connection URL not work in both cases? Is there something simple I've missed?
I've figured out the issue, thanks to Max for pointing me to look into DNS.
My problem was a faulty /etc/resolv.conf file on my host machine that the Docker container was picking up. It contained 2 nameserver entries.
In my case I could create the file /etc/docker/daemon.json on my host and add my DNS entry there for the container to pick up when run, e.g. adding the lines:
{
"dns": ["172.31.0.2"]
}
Editing or creating this file requires a Docker service restart.
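On a systemd-based Linux host, that is typically:
sudo systemctl restart docker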
I got some helpful hints from https://l-lin.github.io/post/2018/2018-09-03-docker_ubuntu_18_dns/
If you are not using DNS to specify the connection but are using a VPN to reach the resource and run into this issue, it is most likely related to a Docker network covering the IP range of your VPN; see this GitHub issue for more details.
For a temporary solution, try docker network prune. If that does not help, try killing all containers and then pruning, and if that still does not help, try a full Docker restart followed by a prune, or move on to the next step.
For a permanent (or at least longer-lasting) solution, check this guide (it will require restarting the Docker daemon).

Different errors coming from different host values for the MySQL config in a Docker project

I'm contributing to a new project, but getting info about the setup/build is difficult. I can get through these steps in the build process:
$ docker-machine create -d virtualbox dev;
$ eval $(docker-machine env dev)
$ docker-compose build
$ docker-compose up -d
The next command fails:
$ docker-compose run web /usr/local/bin/python manage.py migrate
...with this error:
(2005, "Unknown MySQL server host 'mysql' (0)")
When I change the mysql HOST from mysql to localhost, I get a new error:
(2002, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)")
I've read about each error, but the proposed solutions aren't relevant to my code (besides the suggestion of setting the HOST to localhost). Which host value is correct and what should be done about the respective error?
I'm not actually sure if mysql is running, where it should be running, and how to check its status.
I suspect that mysql is in another container, and the project container is called "web" in the docker-compose.yml.
When you change mysql to localhost, it will attempt to connect to a local MySQL inside the web container (via a Unix socket), but of course that doesn't exist, because MySQL has its own container, which I suspect is called mysql in docker-compose.yml.
To view the running containers you can use sudo docker ps; if the mysql container is stopped or restarting, you can investigate using docker logs <mysql container name/ID>.
If that's the case, try looking for mounts in the docker-compose.yml to investigate further.
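For reference, this kind of setup usually looks something like the following in docker-compose.yml (a sketch; the real service names and credentials come from the project's own file):
services:
  web:
    build: .
    depends_on:
      - mysql
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
With services named like this, web reaches the database with HOST set to mysql, because Compose makes each service name resolvable as a hostname on the shared network.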

Can I connect to a MySQL database on another server?

I have a MySQL database on a server at work. I ssh onto the server and then enter the database using mysql -u username -p, at which point the command line prompts me for my password.
I'd like to access the database remotely for some development. I see that mysql.connector is a library for connecting to MySQL databases, but can I ssh onto the server and then access the database using Python?
You can use SSH tunneling to redirect a port listening on your local machine to a port on the remote machine.
ssh -L9999:localhost:3306 me@my.work.com
This will redirect traffic from port 9999 on your machine to port 3306 on my.work.com. We gave localhost to -L since we want to tunnel to the server itself. We could also create a tunnel through your work server to some machine accessible to it only.
Now you can point your connector at port 9999 on your own machine, and the traffic is tunneled to my.work.com:3306.
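For example (a sketch; user, password and database are placeholders):
import mysql.connector

# 127.0.0.1:9999 is the local end of the SSH tunnel opened above
cnx = mysql.connector.connect(host='127.0.0.1', port=9999,
                              user='username', password='your_pass',
                              database='your_database')
print(cnx.is_connected())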
You can use the tunnel.py code from https://gist.github.com/shnjp/856179.
import mysql.connector
from tunnel import make_tunnel  # tunnel.py from the gist above

with make_tunnel('me@mywork.com:3306') as t:
    mysql.connector.connect(host='localhost',
                            database='mysql', user='root', password='PASS')
This assumes that your localhost doesn't already have an application running on port 3306. If it does, you need to use the port= argument of make_tunnel and provide a different port to use on localhost.
You can also connect from Python to the database without using any tunnel.
For that, enable the MySQL server to allow connections from the outside by uncommenting and changing the bind-address line in the /etc/mysql/my.cnf file:
bind-address = 0.0.0.0
After that, restart the server:
sudo service mysql restart
Finally, grant your user permission to access the database from the outside:
GRANT SELECT,INSERT,UPDATE,DELETE ON your_database.* TO 'your_user'@'%' IDENTIFIED BY 'your_pass';
Now you'll be able to connect from Python to the database:
import mysql.connector

ip_of_the_database = 'x.x.x.x'
cnx = mysql.connector.connect(host=ip_of_the_database,
                              user='your_user',
                              password='your_pass',
                              database='your_database')

Psycopg2 reporting pg_hba.conf error

I've run into a weird situation while trying to use PostgreSQL and Psycopg2. For some reason, every time I attempt to connect to the Postgres database via Python, I get the following error:
psycopg2.OperationalError: FATAL: no pg_hba.conf entry for host "127.0.0.1", user "steve", database "steve", SSL on
FATAL: no pg_hba.conf entry for host "127.0.0.1", user "steve", database "steve", SSL off
Naturally, I checked pg_hba.conf to see what the issue was, but everything appears to be configured correctly as far as I can see:
pg_hba.conf:
# TYPE  DATABASE  USER  CIDR-ADDRESS   METHOD
# "local" is for Unix domain socket connections only
local   all       all                  md5
# IPv4 local connections:
host    all       all   127.0.0.1/32   md5
# IPv6 local connections:
host    all       all   ::1/128        md5
In addition, I've found that I can connect to the database via psql as I would expect:
$ psql -U steve -h 127.0.0.1
...
steve=>
Anyone have any ideas as to what could be going on here? Thanks in advance!
Typical explanations include:
You are connecting to the wrong server.
Is the DB server running on the same host as Python?
You got the wrong port.
Check the server log to see whether a connection attempt arrives at all. You have to log connections for that, of course; see the config parameter log_connections.
You did not reload (SIGHUP) the server after changing pg_hba.conf - or reloaded the wrong cluster (if you have multiple DB clusters).
Use pg_ctl, or pg_ctlcluster on Debian and derivatives, for that.
Or, on modern Linux installations with systemd (incl. Debian & friends), typically:
sudo systemctl reload postgresql
Or, if there are multiple installations, check with:
sudo systemctl status postgres*
And then reload the one you want with something like:
sudo systemctl reload postgresql@14-main
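To rule out the first two possibilities (wrong server or wrong port), you can ask the server where a connection actually landed; from the psql session that does work, run:
SELECT inet_server_addr(), inet_server_port();
If the address or port differs from what your Python code uses, you have found the mismatch.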
I recently ran into this same issue and found the solution.
System:
I have an application server (with these packages installed: Python, Django, psycopg2 and the Postgres client 9.6.1, built from postgresql-9.6.1.tar.gz), with, for instance, IP address 10.0.0.1 (a private address).
And an AWS Postgres RDS server "aws_rds_host_name" (or any database IP address).
Error:
django.db.utils.OperationalError: FATAL: no pg_hba.conf entry for host "10.0.0.1", user "your_user", database "your_db", SSL off
Solution:
While installing the Postgres client 9.6.1 source package on the application server 10.0.0.1, we have to pass the argument "--with-openssl". I suggest removing the existing Postgres client and installing with the steps below.
Download the postgres client source package 9.6.1 (postgresql-9.6.1.tar.gz)
Untar the package postgresql-9.6.1.tar.gz.
./configure --prefix="your_preferred_postgres_path_if_needed" --with-openssl (this '--with-openssl' argument is important to get rid of that error)
make
make install
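Once rebuilt, a quick way to confirm the client can negotiate SSL is a connection that insists on it (a sketch; host, credentials and database are the placeholders used above):
import psycopg2

# sslmode='require' fails with a clear error if libpq was built without OpenSSL
conn = psycopg2.connect(host='aws_rds_host_name', user='your_user',
                        password='your_pass', dbname='your_db',
                        sslmode='require')
print(conn.get_dsn_parameters())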
After a successful installation, the error no longer occurred when we ran the Django project with psycopg2.
I hope this solution helps someone.
