When I try to connect to the MySQL database, my script gets stuck; it simply does nothing. I use the docker-compose.yml file shown below to run the MySQL database.
version: '3.1'

services:
  db:
    image: mysql
    # NOTE: use of "mysql_native_password" is not recommended: https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html#upgrade-caching-sha2-password
    # (this is just an example, not intended to be a production configuration)
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: database

  adminer:
    image: adminer
    restart: always
    ports:
      - 3306:8080
My PHP script for the database connection:
<?php
$conn = mysqli_connect('127.0.0.1', 'root', 'example', 'database');
if ($conn->connect_errno) {
    echo("failed");
    exit();
}
$conn->close();
?>
I have also created a Python script, which also gets stuck. Here is the code:
import mysql.connector

mydb = None
try:
    mydb = mysql.connector.connect(
        host="127.0.0.1",
        user="root",
        password="database",
        use_pure=True
    )
except Exception as e:
    print(e)

print(mydb)
Log from docker-compose:
adminer_1 | [Thu Aug 11 22:03:52 2022] [::ffff:172.18.0.1]:56018 Accepted
For testing purposes I used php8.0-cli and python3.
Do you have any idea what the reason could be?
Note that in your compose file, host port 3306 is currently mapped to the adminer container's port 8080, so your clients are talking to Adminer's web server rather than MySQL, which is why they hang (see the "Accepted" line in the adminer log above). You will need to do one of the following:
1: Expose the port on the database Docker container to allow access to it from your application. For example:
ports:
  - "3306:3306"
2: Create a container for your application that holds your script, and reach your MySQL server over a virtual network. For example:
networks:
  default:
    driver: bridge
You can then use this network for both of your services by adding the following under each one:
networks:
  - default
Here's a good resource to kick you off: https://runnable.com/docker/docker-compose-networking
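Once the db service publishes its port, a minimal sketch of a connectivity check (assuming option 1 above, i.e. ports: "3306:3306" on the db service) should return immediately instead of hanging. Note it passes password="example" to match MYSQL_ROOT_PASSWORD in the compose file; the Python script in the question passes password="database", which would still fail authentication even after the port fix:

import mysql.connector

# minimal connectivity check, assuming the db service now publishes 3306:3306
conn = mysql.connector.connect(
    host="127.0.0.1",
    user="root",
    password="example",     # matches MYSQL_ROOT_PASSWORD in the compose file
    database="database",
    connection_timeout=5,   # fail fast instead of hanging forever
)
print(conn.is_connected())
conn.close()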
Related
I have the same issue while trying to connect to a MySQL (Percona) container brought up with docker-compose.
I have this simple code:
import sqlalchemy

engine = sqlalchemy.create_engine(
    'mysql+pymysql://root:admin@127.0.0.1:3306/DB_MYAPP',
    encoding='utf8'
)
connection = engine.connect()
connection.close()
Here is the relevant part of docker-compose.yml:
mysql:
  image: 'percona:latest'
  container_name: 'mysql'
  environment:
    MYSQL_ROOT_PASSWORD: admin
    MYSQL_DATABASE: DB_MYAPP
    MYSQL_USER: test_qa
    MYSQL_PASSWORD: qa_test
  ports:
    - '3306:3306'
  volumes:
    - '/home/rolf/PycharmProject/Endshpiel/mysql/myapp_db:/docker-entrypoint-initdb.d'
The container is running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8f1556e66ccb percona:latest "/docker-entrypoint.…" 55 minutes ago Up 9 minutes 0.0.0.0:3306->3306/tcp mysql
I also tried increasing max_allowed_packet.
I can easily connect to the docker container from a Linux terminal (host: 127.0.0.1, port: 3306) and work with the database. But when I try to connect to the container with Python, I get this error:
mysql | 2022-05-24T23:17:10.428251Z 6 [Note] Aborted connection 6 to db: 'DB_MYAPP' user: 'root' host: '172.20.0.1' (Got an error reading communication packets)
endpoints.yml
tracker_store:
  type: SQL
  dialect: "mysql"  # the dialect used to interact with the db
  url: "127.0.0.1:3306"  # (optional) host of the sql db, e.g. "localhost"
  db: "rasa"  # path to your db
  username: "*****"  # username used for authentication
  password: "*****"
I am using docker-compose to run the Rasa server.
version: '3'
services:
  rasa:
    image: rasa/rasa:1.10.8-full
    expose:
      - "5432"
    ports:
      - "5005:5005"
    volumes:
      - ./:/app
    command: ["run", "--enable-api", "--cors", "*", "--debug"]
  mysql:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    volumes:
      - .data/mysql:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: "*****"
      MYSQL_ALLOW_EMPTY_PASSWORD: "No"
      MYSQL_DATABASE: rasa
    ports:
      - "3306"
  app:
    image: rasa_actions:latest
    expose:
      - "5055"
I've also tried replacing the dialect with mysql+pymysql or postgresql. The former gives a module error and the latter gives a connection error, despite the database being normally accessible.
I have read online that there is some issue with MySQL and Rasa. I need some clarification on how to use it.
Which error message do you see?
In general, we recommend using pymysql as the dialect.
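Since Rasa's SQL tracker store goes through SQLAlchemy, you can sanity-check the dialect and credentials outside of Rasa with a short script; a minimal sketch, assuming pymysql is installed and using placeholder credentials that mirror the endpoints.yml above:

import sqlalchemy

# placeholder credentials; mirror what endpoints.yml points at
engine = sqlalchemy.create_engine(
    "mysql+pymysql://root:password@127.0.0.1:3306/rasa"
)
with engine.connect() as connection:
    print("connected to", connection.engine.url.database)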
I would like to have a Python Flask application that runs with a PostgreSQL database (psycopg2). So I made this docker-compose file:
version: "3"
services:
web:
depends_on:
- database
container_name: web
build:
context: "."
dockerfile: "docker/Dockerfile.web"
ports:
- 5000:5000
volumes:
- database:/var/run/postgresql
database:
container_name: database
environment:
POSTGRES_PASSWORD: "password"
POSTGRES_USER: "user"
POSTGRES_DB: "products"
image: postgres
expose:
- 5432
volumes:
- database:/var/run/postgresql
volumes:
database:
In my app.py I try to connect to postgres like this:
conn = psycopg2.connect(database="products", user="user", password="password", host="database", port="5432")
When I run docker-compose up I get the following error:
"Is the server running on host "database" (172.21.0.2) and accepting TCP/IP connections on port 5432?"
I don't know where my mistake is.
The container "database" exposes its port 5432.
Both containers are on the same network, "web_app_default".
The socket file exists in the /var/run/postgresql directory on the "web" container.
Any ideas?
Thanks for any replies, and have a nice day.
I think what happened is that even though you set depends_on to database, that only means the web container starts after the database container starts. The first time, however, the database will generally take quite some time to initialize, and by the time your web server is up, the database is still not ready to accept connections.
There are two ways to work around the problem here:
The easy way, with no change in code: run docker-compose up -d (detached mode) and wait for the database to finish initializing. Then run docker-compose up -d again, and your web container will now be able to connect to the database.
The second way is to update the web container with restart: always, so docker-compose will keep restarting your web container until it runs successfully (i.e., until the database is ready to accept connections):
version: "3"
services:
web:
depends_on:
- database
...
restart: always
...
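A third variant is to make app.py itself retry until Postgres accepts connections; here is a minimal sketch, with a hypothetical connect_with_retry helper and the connection parameters from the question:

import time
import psycopg2

def connect_with_retry(retries=10, delay=2):
    # keep trying until the database container finishes initializing
    for attempt in range(retries):
        try:
            return psycopg2.connect(database="products", user="user",
                                    password="password", host="database",
                                    port="5432")
        except psycopg2.OperationalError as error:
            print(f"attempt {attempt + 1}/{retries} failed: {error}")
            time.sleep(delay)
    raise RuntimeError("database never became ready")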
I'm trying to get my dockerized Python script to read data from a MariaDB instance that is also dockerized.
I know this should be possible with networks or links. However, since links are deprecated (according to the Docker documentation), I'd rather not use them.
docker-compose:
version: "3.7"
services:
[...]
mariadb:
build: ./db
container_name: maria_db
expose:
- 3306
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_USER: user
MYSQL_PASSWORD: user
restart: always
networks:
- logrun_to_mariadb
[...]
logrun_engine:
build: ./logrun_engine
container_name: logrun_engine
restart: always
networks:
- logrun_to_mariadb
networks:
logrun_to_mariadb:
external: false
name: logrun_to_mariadb
The logrun_engine container executes a Python script on startup:
import mysql.connector as mariadb

class DBConnector:
    def __init__(self, dbname):
        self.mariadb_connection = mariadb.connect(host='mariadb', port='3306', user='root', password='root', database=dbname)
        self.cursor = self.mariadb_connection.cursor()

    def get_Usecases(self):
        self.cursor.execute("SELECT * FROM Test")
        tests = []
        for test in self.cursor:
            print(test)

print("Logrun-Engine running...")
test = DBConnector('test_db')
test.get_Usecases()
Whenever I run docker-compose up -d, my logrun_engine logs are full of the error message:
_mysql_connector.MySQLInterfaceError: Can't connect to MySQL server on 'mariadb' (111)
When I run the Python script locally and connect to a local MariaDB, it works with no problems, so the script should be correct.
Most answers I found for this error message say that people used localhost or 127.0.0.1 instead of the Docker container name, which I already avoid.
I tried bridged networks, host networks, links, etc., but apparently I haven't found the right thing yet.
Any idea how to connect these two containers?
OK, so I was just too impatient and didn't let MySQL start up properly before querying the database; thanks @DanielFarrel for pointing that out.
When I added a 10-second delay in the Python script before querying the database, it magically worked...
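A retry loop is usually more robust than a fixed sleep; here is a minimal sketch with a hypothetical wait_for_db helper, reusing the connection parameters from the DBConnector above (the retry count and delay are arbitrary):

import time
import mysql.connector as mariadb

def wait_for_db(dbname, retries=15, delay=2):
    # poll MariaDB until it accepts connections instead of sleeping a fixed time
    for attempt in range(retries):
        try:
            return mariadb.connect(host='mariadb', port=3306, user='root',
                                   password='root', database=dbname)
        except mariadb.Error as err:
            print(f"waiting for db ({attempt + 1}/{retries}): {err}")
            time.sleep(delay)
    raise RuntimeError("mariadb did not become ready in time")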
Sleep may be one solution. However, it can be problematic if the database comes up slowly.
As an alternative, you can use an agent that makes sure the db is up before connecting to it, similar to the solution here.
Run:
docker-compose up -d agent
After the agent is up you can be sure the db is up, and your app may run:
docker-compose up -d logrun_engine
The solution does use --links, but it can easily be modified to use Docker networks.
I am trying to prepare a docker-compose file that stands up 2 containers: Postgres, and a Python app inside an Alpine image. Please note before reading: I need to use Python inside Alpine.
My Dockerfile is:
FROM python:3
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD [ "python", "./app.py" ]
My app.py file:
import psycopg2
from config import config

def connect():
    """ Connect to the PostgreSQL database server """
    conn = None
    try:
        # read connection parameters
        params = config()

        # connect to the PostgreSQL server
        print('Connecting to the PostgreSQL database...')
        conn = psycopg2.connect(**params)

        # create a cursor
        cur = conn.cursor()

        # execute a statement
        cur.execute("SELECT * FROM my_table;")

        # fetch and display the first row of my_table
        db_version = cur.fetchone()
        print(db_version)

        # close the communication with the PostgreSQL server
        cur.close()
    except (Exception, psycopg2.DatabaseError) as error:
        print(error)
    finally:
        if conn is not None:
            conn.close()
            print('Database connection closed.')

if __name__ == '__main__':
    connect()
I started the Python container with this command:
docker run -it my_image app.py
I separately started the 2 containers (Postgres and Python) and made it work. However, my container runs only once; its job is to run a SELECT against the PostgreSQL database.
That was the first part. My main goal is below.
For simplicity, I prepared a docker-compose.yml file:
version: '3'
services:
  python:
    image: python
    build:
      context: .
      dockerfile: Dockerfile
  postgres:
    image: postgres:${TAG:-latest}
    build:
      context: .
    environment:
      POSTGRES_PASSWORD: example
    ports:
      - "5435:5432"
    networks:
      - postgres
networks:
  postgres:
My Dockerfile is shown above.
When I type docker-compose up, my Postgres container starts, but the Python one exits with code 0:
my_file_python_1 exited with code 0
What should I do to get a standalone, long-running container for Python apps with docker-compose? It always runs only once. I can make it run constantly with
docker run -d -it my_image app.py
but my goal is to do it with docker-compose.
Add tty: true to the python service so the container stays up:
version: '3'
services:
  python:
    image: python
    build:
      context: .
      dockerfile: Dockerfile
    tty: true
  postgres:
    image: postgres:${TAG:-latest}
    build:
      context: .
    environment:
      POSTGRES_PASSWORD: example
    ports:
      - "5435:5432"
    networks:
      - postgres
networks:
  postgres:
An exit code of 0 means the container exited after finishing all execution, so you have to run a process in the foreground to keep the container running. If the exit code is other than 0, the container is exiting because of a code issue. So try to run some foreground process.
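For example, a minimal sketch, assuming the connect() function from the app.py above: replace the entry-point block with a loop so a foreground process keeps running (the 30-second interval is arbitrary):

import time

if __name__ == '__main__':
    # re-run the query periodically so the process stays in the foreground
    while True:
        connect()
        time.sleep(30)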
Could you check whether the container keeps running if you enable the tty option (see reference) in your docker-compose.yml file?