I have a couple of services which query objects from the database.
Event.objects.filter
Connection.objects.filter
and other methods to retrieve different objects from MySQL database.
I come from a JVM background, and I know that to set up a JDBC connection you need a connector. A programmer can open a connection, query the database, and close the connection. A programmer can also use Hibernate, which manages connections according to its configuration. It is also possible to use pooled connections, so connections are not closed and discarded but stored in the pool until they are needed.
However, I checked my team's Python Django code, and I did not find where the db connection is configured. The only thing I found is the following, which does not configure connections:
# Database
# https://docs.djangoproject.com/en/1.11/ref/settings/#databases
try:
    import database_password
except ImportError:
    database_password = None  # fallback handling not shown in the original snippet

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydb',
        'USER': 'user',
        'PASSWORD': database_password.password,
        'HOST': '10.138.67.149',
        'PORT': '3306',
    }
}
Each thread maintains its own connection. See the docs for full details.
PostgreSQL + PgBouncer (connection pooler) + Django is a common setup. I'm not sure whether there's a similar connection pooler you could use with MySQL.
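For what it's worth, Django itself also supports persistent per-thread connections via the CONN_MAX_AGE setting: each thread keeps its connection open for reuse instead of closing it after every request. This is per-thread reuse, not a shared pool, but it covers the most common reason people reach for pooling. A sketch (values are illustrative):

```python
# settings.py -- CONN_MAX_AGE is the maximum lifetime of a connection in
# seconds: 0 closes it after each request (the default), None keeps it
# open indefinitely
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydb',
        'CONN_MAX_AGE': 600,  # reuse each thread's connection for 10 minutes
    }
}
```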
Related
I'm using Peewee as my ORM and connect to a Postgres database (psycopg2) using the Playhouse extension db_url.connect. My URL is a vanilla postgresql://username:pass@host:port/dbname?options=... so I am not using pooling or anything advanced at the moment.
Sometimes when I call connect it hangs for a long time and doesn't come back. So I appended the parameter &connect_timeout=3 to my database URL, meaning to try for at most 3 seconds and fail fast with a timeout rather than hanging forever. However, I am not sure whether this argument is supported by Peewee/Playhouse/psycopg2 ... can anyone confirm?
Furthermore, where can I find all the URL parameters supported by Peewee/Playhouse/Psycopg2?
The psycopg2 doc links in turn to the libpq list of supported parameters:
https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS
connect_timeout is supported by both peewee and psycopg2:
>>> from playhouse.db_url import *
>>> db = connect('postgresql://../peewee_test?connect_timeout=3')
>>> conn = db.connection()
>>> conn.get_dsn_parameters()
{'user': 'postgres',
'passfile': '...',
'channel_binding': 'prefer',
'connect_timeout': '3', # Our connect timeout
'dbname': 'peewee_test',
'host': 'localhost',
'port': '5432',
...}
Peewee passes the parameters, including arbitrary ones like connect_timeout, back to the constructor of the DB-API Connection class.
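To see how query-string options end up as connection keyword arguments, here is a standalone sketch of the idea using only the standard library. This is a simplified stand-in for illustration, not Peewee's actual implementation:

```python
from urllib.parse import urlparse, parse_qsl

def parse_db_url(url):
    """Split a database URL into connection kwargs, similar in spirit to
    playhouse.db_url.parse (simplified stand-in, not Peewee's code)."""
    parts = urlparse(url)
    kwargs = {
        'dbname': parts.path.lstrip('/'),
        'user': parts.username,
        'password': parts.password,
        'host': parts.hostname,
        'port': parts.port,
    }
    # Query-string options such as connect_timeout are passed through
    # untouched, which is why arbitrary driver parameters work.
    kwargs.update(parse_qsl(parts.query))
    return kwargs

params = parse_db_url('postgresql://me:secret@localhost:5432/peewee_test?connect_timeout=3')
print(params['connect_timeout'])  # '3'
```

The driver (psycopg2, and ultimately libpq) is what actually interprets connect_timeout; the URL layer just forwards it.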
I'm currently connecting to my database through a hard-coded URL in my config file, but I want to be able to generate this connection dynamically so I will be able to use multiple connections in the future.
SQLALCHEMY_DATABASE_URI = 'postgresql+psycopg2://<blah>/database'
You could put the database connection parameters in an external file, e.g. connection_settings.ini
[credentials]
host=localhost
dbname=test
username=me
password=secret
and then read them with the configparser module, and create the connection url with string interpolation
import configparser

config = configparser.ConfigParser()
config.read('connection_settings.ini')
creds = config['credentials']

SQLALCHEMY_DATABASE_URI = (
    f"postgresql+psycopg2://{creds['username']}:{creds['password']}"
    f"@{creds['host']}/{creds['dbname']}"
)
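Since the question mentions wanting multiple connections later, the same INI file can hold one section per database, and a small helper can build the URI for any section. A self-contained sketch (section and key names are just examples):

```python
import configparser

def build_db_uri(config, section):
    """Build a SQLAlchemy-style URI from one section of an INI file,
    so several connections can live side by side under different names."""
    c = config[section]
    return (f"postgresql+psycopg2://{c['username']}:{c['password']}"
            f"@{c['host']}/{c['dbname']}")

config = configparser.ConfigParser()
# read_string stands in for config.read('connection_settings.ini')
config.read_string("""
[credentials]
host=localhost
dbname=test
username=me
password=secret
""")
print(build_db_uri(config, 'credentials'))  # postgresql+psycopg2://me:secret@localhost/test
```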
To check if merge migrations are required, I can run manage.py makemigrations --check or manage.py makemigrations --dry-run
However, both of those require the database to be up. If it's not up, it will error out with something like
django.db.utils.OperationalError: (2002, "Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)")
Theoretically, since a merge migration issue occurs because two migrations share the same parent, you don't need the database instance to be up to check for this condition.
I need this because I want my CI to check for this case. I can spin up a docker database but it's extra work for something that's not even logically dependent. I'm also sure there are people out there who are interested in checking for this in their CI, who don't want to deal with containerization.
Has anyone found an easy way to check for migration merge conflicts without needing a database up?
Since the goal is to run makemigrations --dry-run without a MySQL database up, the easiest workaround I came up with is to create a new settings file called makemigrations_settings.py that overrides the database to use the built-in SQLite backend.
from your_main_settings import *

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'database_name',
        'USER': 'your_mom',
        'PASSWORD': '',
        'HOST': '',
        'PORT': '',
    },
}
Then you can run
python manage.py makemigrations --check --settings yourapp.makemigrations_settings
Alternatively, you can less elegantly do something like
import sys  # needed at the top of settings.py for the argv check

if (sys.argv[0:2] == ['manage.py', 'makemigrations']
        and ('--dry-run' in sys.argv or '--check' in sys.argv)):
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': 'database_name',
            'USER': 'your_mom',
            'PASSWORD': '',
            'HOST': '',
            'PORT': '',
        }
    }
I'm using a python driver (mysql.connector) and do the following:
_db_config = {
'user': 'root',
'password': '1111111',
'host': '10.20.30.40',
'database': 'ddb'
}
_connection = mysql.connector.connect(**_db_config) # connect to a remote server
_cursor = _connection.cursor(buffered=True)
_cursor.execute("""SELECT * FROM database LIMIT 1;""")
In some cases, the call to _cursor.execute() hangs with no exception. By the way, when connecting to a local MySQL server everything seems to be fine.
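Since the hang only happens against the remote server, one quick first check is whether the host and port are reachable at all, independent of any MySQL driver. A standard-library sketch (host and port taken from the question):

```python
import socket

def can_reach(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

print(can_reach('10.20.30.40', 3306))
```

If this returns False, the problem is network-level (firewall, routing, bind-address) rather than the driver.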
The query might hang because of a large database or an unoptimized query.
You can try the following things:
optimize your query
use a different connector driver
I generally use Python 3.x, and for this I use either pymysql or pypyodbc, which are newer, actively maintained drivers. Here is a post on working with pymysql that gives a small introduction to using MySQL with Python 3.
Moving to MySQLdb (instead of mysql.connector) solved all the issues :-)
I've just spent a week on the problems recorded in this question: https://stackoverflow.com/questions/21315427/why-does-the-ca-information-need-to-be-in-a-tuple-for-mysqldb
Have now boiled it down to one problem. Here's a script that connects to the MySQL server I have on Amazon RDS:
#! /usr/bin/env python
import MySQLdb
ssl = ({'ca': '/home/James/Downloads/mysql-ssl-ca-cert-copy.pem'},)
conn = MySQLdb.connect(host='db.doopdoop.eu-west-n.rds.amazonaws.com', user='user', passwd='pass', ssl=ssl)
cursor = conn.cursor()
cursor.execute("SHOW STATUS LIKE 'Ssl_Cipher'")
print cursor.fetchone()
This gives me back ('Ssl_cipher', ''), which I gather means the connection is not encrypted. Have also tried this using Django's manage.py shell function, and got the same result. Why, then, am I getting no exceptions? Data is flowing, certainly, but apparently the security is just being ignored. Any help on where I'm going wrong here would be appreciated.
I have tried updating MySQL-Python to 1.2.5 with no success.
Possible workaround for your issue: Use a default file. It will look something like this:
[client]
host = db.doopdoop.eu-west-n.rds.amazonaws.com
user = user
password = pass
ssl-ca = /home/James/Downloads/mysql-ssl-ca-cert-copy.pem
Then change your connect() call to:
conn = MySQLdb.connect(read_default_file=options_path)
where options_path is the path to the file above. This also keeps authentication data out of your code.
Django settings will look like this:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'OPTIONS': {
            'read_default_file': '/path/to/my.cnf',
        },
    }
}
Ref: https://docs.djangoproject.com/en/dev/ref/databases/#connecting-to-the-database
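If you would rather keep the SSL settings in Django itself instead of an external defaults file, the MySQL backend forwards everything in OPTIONS straight to the driver's connect() call, so something like this should be equivalent (database name is a placeholder; the CA path is the one from the question):

```python
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydb',
        'OPTIONS': {
            # passed through to MySQLdb.connect(ssl=...)
            'ssl': {'ca': '/home/James/Downloads/mysql-ssl-ca-cert-copy.pem'},
        },
    }
}
```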