how can I check database connection to mysql in django - python

how can I do it?
I thought I could read something from the database, but that seems like too much; is there something like:
settings.DATABASES['default'].check_connection()

All you need to do is start the application; if it's not connected, it will fail. Another way is to try the following in the shell:
from django.db import connections
from django.db.utils import OperationalError

db_conn = connections['default']
try:
    c = db_conn.cursor()
except OperationalError:
    connected = False
else:
    connected = True

Run the shell:
python manage.py shell
Execute this script:
from django.db import connection
print(connection.ensure_connection())
If it prints None, everything is okay; otherwise it will raise an error if something is wrong with your database connection.

It's an old question, but it needs an updated answer:
python manage.py check --database default
If you're not using default, or if you want to test other databases listed in your settings, just name them.
It has been available since Django 3.1.
Check the documentation.
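If you're on an older Django that doesn't have check --database, a rough programmatic equivalent is to loop over the configured aliases and call ensure_connection() on each; a minimal sketch, not part of the original answer:
from django.db import connections
from django.db.utils import OperationalError

# Try each database alias configured in settings.DATABASES
for alias in connections:
    try:
        connections[alias].ensure_connection()
        print(f"{alias}: connection ok")
    except OperationalError as exc:
        print(f"{alias}: unavailable ({exc})")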

I use the following Django management command called wait_for_db:
import time

from django.db import connection
from django.db.utils import OperationalError
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    """Django command that waits for database to be available"""

    def handle(self, *args, **options):
        """Handle the command"""
        self.stdout.write('Waiting for database...')
        db_conn = None
        while not db_conn:
            try:
                connection.ensure_connection()
                db_conn = True
            except OperationalError:
                self.stdout.write('Database unavailable, waiting 1 second...')
                time.sleep(1)
        self.stdout.write(self.style.SUCCESS('Database available!'))
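Assuming the file is saved as your_app/management/commands/wait_for_db.py (the app and file names are yours to choose), you can then run it before migrations, for example in a Docker entrypoint:
python manage.py wait_for_db
python manage.py migrate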

Assuming you need this because of Docker (but it is not limited to Docker), remember this is, at the end of the day, Bash, and thus works on any *NIX system.
You will first need to be using django-environ, since it will make this a whole lot easier.
The DATABASE_URL environment variable will be used inside your Django app, and here. Your settings would look like this:
import environ

env = environ.Env()
...
DATABASES = {
    'default': env.db('DATABASE_URL'),
    'other': env.db('DATABASE_OTHER_URL'),  # for illustration purposes
}
...
Your environment variables should look something like this: (more info here)
# This works with ALL the databases django supports ie (mysql/mssql/sqlite/...)
DATABASE_URL=postgres://user:pass@name_of_box:5432/database_name
DATABASE_OTHER_URL=oracle://user:pass@/(description=(address=(host=name_of_box)(protocol=tcp)(port=1521))(connect_data=(SERVICE_NAME=EX)))
Inside your entrypoint.sh do something like this:
function database_ready() {
    # You need to pass a single argument called "environment_dsn"
    python << EOF
import sys

import environ
from django.db.utils import ConnectionHandler, DatabaseError, OperationalError

env = environ.Env()
try:
    ConnectionHandler(databases={'default': env.db('$1')})['default'].ensure_connection()
except (OperationalError, DatabaseError):
    sys.exit(-1)
sys.exit(0)
EOF
}
Then, let's say you want to wait for your main DB [Postgres in this case]; you add this inside the same entrypoint.sh, under the database_ready function:
until database_ready DATABASE_URL; do
    >&2 echo "Main DB is unavailable - sleeping"
    sleep 1
done
This will only continue if Postgres is up and running. What about Oracle? Same thing; under the code above, we add:
until database_ready DATABASE_OTHER_URL; do
    >&2 echo "Secondary DB is unavailable - sleeping"
    sleep 1
done
Doing it this way will give you a couple of advantages:
you don't need to worry about other dependencies such as client binaries and the like.
you can switch databases and not have to worry about this breaking. (code is 100% database agnostic)

Create a file your_app_name/management/commands/waitdb.py and paste in the code below.
import time

from django.core.management.base import BaseCommand
from django.db import connection
from django.db.utils import OperationalError
from django.utils.translation import ngettext


class Command(BaseCommand):
    help = 'Checks database connection'

    def add_arguments(self, parser):
        parser.add_argument(
            '--seconds',
            nargs='?',
            type=int,
            help='Number of seconds to wait before retrying',
            default=1,
        )
        parser.add_argument(
            '--retries',
            nargs='?',
            type=int,
            help='Number of retries before exiting',
            default=3,
        )

    def handle(self, *args, **options):
        wait, retries = options['seconds'], options['retries']
        current_retries = 0
        while current_retries < retries:
            current_retries += 1
            try:
                connection.ensure_connection()
                break
            except OperationalError:
                plural_time = ngettext('second', 'seconds', wait)
                self.stdout.write(
                    self.style.WARNING(
                        f'Database unavailable, retrying after {wait} {plural_time}!'
                    )
                )
                time.sleep(wait)
python manage.py waitdb --seconds 5 --retries 2
python manage.py waitdb # defaults to 1 second & 3 retries
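If you want the command to exit non-zero when the database never comes up (so a calling script or entrypoint can abort), one hypothetical variant of the retry loop raises CommandError after the last attempt; this is a sketch reusing the names from the command above, not the original code:
from django.core.management.base import CommandError

# sketch: inside handle(), replacing the while loop above
connected = False
for attempt in range(retries):
    try:
        connection.ensure_connection()
        connected = True
        break
    except OperationalError:
        time.sleep(wait)
if not connected:
    raise CommandError(f'Database still unavailable after {retries} retries')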

I had a more complicated case where I am using MongoDB behind the djongo module, plus RDS MySQL. So not only are there multiple databases, but djongo throws an SQLDecode error instead. I also had to execute and fetch to get this working:
from django.conf import settings

if settings.DEBUG:
    # Quick database check here
    from django.db import connections
    from django.db.utils import OperationalError

    dbs = settings.DATABASES.keys()
    for db in dbs:
        db_conn = connections[db]  # i.e. default
        try:
            c = db_conn.cursor()
            c.execute("""SELECT "non_existent_table"."id" FROM "non_existent_table" LIMIT 1""")
            c.fetchone()
            print("Database '{}' connection ok.".format(db))  # This case is for djongo decoding sql ok
        except OperationalError as e:
            if 'no such table' in str(e):
                print("Database '{}' connection ok.".format(db))  # This is ok, db is present
            else:
                raise  # Another type of op error
        except Exception:  # djongo sql decode error
            print("ERROR: Database {} looks to be down.".format(db))
            raise
I load this in my app __init__.py, as I want it to run on startup only once and only if DEBUG is enabled. Hope it helps!
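As a side note, the same check can also live in an AppConfig.ready() hook rather than __init__.py, since that is where Django suggests putting one-time startup code; a sketch with hypothetical app and module names:
# yourapp/apps.py  (app name and module are hypothetical)
from django.apps import AppConfig
from django.conf import settings


class YourAppConfig(AppConfig):
    name = 'yourapp'

    def ready(self):
        # Run the connection check once at startup, only in DEBUG.
        if settings.DEBUG:
            from . import startup_db_check  # hypothetical module holding the check above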

It seems Javier's answer is no longer working. Here's one I put together to perform the task of checking database availability in a Docker entrypoint, assuming you have the psycopg2 library available (you're running a Django application, for instance):
function database_ready() {
    python << EOF
import psycopg2
try:
    db = psycopg2.connect(host="$1", port="$2", dbname="$3", user="$4", password="$5")
except:
    exit(1)
exit(0)
EOF
}

until database_ready $DATABASE_HOST $DATABASE_PORT $DATABASE_NAME $DATABASE_USER $DATABASE_PASSWORD; do
    >&2 echo "Database is unavailable at $DATABASE_HOST:$DATABASE_PORT/$DATABASE_NAME - sleeping..."
    sleep 1
done

echo "Database is ready - $DATABASE_HOST:$DATABASE_PORT/$DATABASE_NAME"

Related

Output of Python script differ depending on way of activation

Hey guys,
I have an executable Python script, say get_data.py (located in project_x/src/), which works properly when started with python get_data.py. It gets data (a list of IDs which are necessary for further calculations) from a database via mysql.connector and then processes these data in parallel (via multiprocessing) using pool.map.
BUT it is supposed to be started by an .exe file (located in project_x/exec/) [EDIT: this .exe uses the PHP command exec() to directly address my Python script]. This is not working properly; it ends up in the try-except block (in wrapper_fun) catching the (unknown) error, and it does not terminate when the try-except statements are removed.
Do you have any idea what could be going wrong? I would appreciate any idea. I tried logging but there seems to be a permission problem. My idea is that the connection to the DB cannot be established and therefore there are no IDs.
def calculations(id):
    do_something...

def wrapper_fun(id):
    try:
        calculations(id)
    except Exception:
        return(False)

if __name__ == "__main__":
    import multiprocessing
    import mysql.connector
    from mysql.connector import Error

    host_name = <secret_1>
    user_name = <secret_2>
    passt = <secret_3>

    connection = None
    try:
        connection = mysql.connector.connect(
            host=host_name,
            user=user_name,
            passwd=passt
        )
    except Error as err:
        print(f"Error: '{err}'")

    d = pd.read_sql_query(query, connection, coerce_float=False)
    connection.close()
    id_s = list(d.ids)
    results = [pool.map(wrapper_fun, id_s)]
    ...

Django with Peewee Connection Pooling MySQL disconnect

I'm running a Django project with Peewee in Python 3.6 and trying to track down what's wrong with the connection pooling. I keep getting the following error on the development server (for some reason I never experience this issue on my local machine):
Lost connection to MySQL server during query
The repro steps are reliable and are:
Restart Apache on the instance.
Go to my Django page and press a button which triggers a DB operation.
Works fine.
Wait exactly 10 minutes (I've tested enough to get the exact number).
Press another button to trigger another DB operation.
Get the lost connection error above.
The code is structured such that I have all the DB operations inside an independent Python module which is imported into the Django module.
In the main class constructor I'm setting up the DB as such:
from playhouse.pool import PooledMySQLDatabase

def __init__(self, host, database, user, password, stale_timeout=300):
    self.mysql_db = PooledMySQLDatabase(host=host, database=database, user=user, password=password, stale_timeout=stale_timeout)
    db_proxy.initialize(self.mysql_db)
Every call which needs to make calls out to the DB is done like this:
def get_user_by_id(self, user_id):
    db_proxy.connect(reuse_if_open=True)
    user = (User.get(User.user_id == user_id))
    db_proxy.close()
    return {'id': user.user_id, 'first_name': user.first_name, 'last_name': user.last_name, 'email': user.email}
I looked at the wait_timeout value on the MySQL instance and its value is 3600 so that doesn't seem to be the issue (and I tried changing it anyway just to see).
Any ideas on what I could be doing wrong here?
Update:
I found that the /etc/my.cnf configuration file for MySQL has the wait-timeout value set to 600, which matches what I'm experiencing. I don't know why this value doesn't show when I run SHOW VARIABLES LIKE 'wait_timeout'; on the MySQL DB (that returns 3600), but it does seem likely the issue is coming from the wait timeout.
Given this I tried setting the stale timeout to 60, assuming that if it's less than the wait timeout it might fix the issue but it didn't make a difference.
You need to be sure you're recycling the connections properly -- that means that when a request begins you open a connection, and when the response is delivered you close the connection. The pool is most likely not recycling the connection because you're never putting it back in the pool, so it looks like it's still "in use". This can easily be done with middleware and is described here:
http://docs.peewee-orm.com/en/latest/peewee/database.html#django
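The recipe described there boils down to opening a connection when the request starts and closing it when the response is delivered; a minimal sketch of that idea (database is assumed to be your PooledMySQLDatabase instance), with a fuller, reconnect-aware version in the next answer:
def peewee_connection_middleware(get_response):
    # Open a pooled connection when the request starts, return it when done.
    def middleware(request):
        database.connect(reuse_if_open=True)
        try:
            return get_response(request)
        finally:
            if not database.is_closed():
                database.close()
    return middleware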
I finally came up with a fix which works for my case, after trying numerous ideas. It's not ideal but it works. This post on Connection pooling pointed me in the right direction.
I created a Django middleware class and configured it to be the first in the list of Django middleware.
from peewee import OperationalError
from playhouse.pool import PooledMySQLDatabase

database = PooledMySQLDatabase(None)


class PeeweeConnectionMiddleware(object):
    CONN_FAILURE_CODES = [2006, 2013]

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if database.database:  # Is DB initialized?
            response = None
            try:
                database.connect(reuse_if_open=True)
                with database.atomic() as transaction:
                    try:
                        response = self.get_response(request)
                    except:
                        transaction.rollback()
                        raise
            except OperationalError as exception:
                if exception.args[0] in self.CONN_FAILURE_CODES:
                    database.close_all()
                    database.connect()
                    response = None
                    with database.atomic() as transaction:
                        try:
                            response = self.get_response(request)
                        except:
                            transaction.rollback()
                            raise
                else:
                    raise
            finally:
                if not database.is_closed():
                    database.close()
            return response
        else:
            return self.get_response(request)
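To activate it, the class also has to be registered near the top of the MIDDLEWARE list in settings.py, as the answer mentions; the module path below is hypothetical:
MIDDLEWARE = [
    'yourapp.middleware.PeeweeConnectionMiddleware',  # hypothetical path to the class above
    # ... Django's own middleware follows ...
]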

How do I capture output after fabric exception

I'm using fabric to run a command on one server against another server. Specifically, I'm running a SQL query through the psql command line.
The fabric run() function is throwing a SystemExit exception which I can catch.
If I go to the server and run the psql command directly I am told:
psql: could not connect to server: Connection timed out
Is the server running on host "xyz.example.com" (10.16.16.66) and accepting
TCP/IP connections on port 5432?
So, I know that the command is not working but what I want is to get that text under psql so my code can be explicit about the problem.
I think that the fabric code is fine because if I change the psql command so it executes on the same server but against a different database, I get no exception and the expected answer. So the problem is that the server I'm running psql on cannot communicate with one of the database servers.
Is it possible to get the results of the psql command through fabric after fabric throws the SystemExit exception?
For reference, here's the sample code:
from __future__ import with_statement
from fabric.api import local, settings, abort, run, cd, execute, env
from fabric.contrib.console import confirm
import sys
import os

def test():
    try:
        count = run('psql blah blah blah', timeout=60)
        print('count: {}'.format(count))
    except Exception, ex:
        print('====> Exception type: %s' % ex.__class__)
        print('====> Exception: %s' % ex)
    except SystemExit, ex:
        print('====> Exception type: %s' % ex.__class__)
        print('====> Exception: %s' % ex)

def go():
    print "Working"
    env.host_string = "jobs0.onshift.com"
    execute(test)
Take a look at the settings context manager module in the fabric docs, and the succeeded and failed properties on the object returned by run
from fabric.api import *
from fabric.context_managers import settings

def test():
    with settings(warn_only=True):
        res = run('psql blah blah blah', timeout=60)
        if res.succeeded:
            print('count: {}'.format(res))

def go():
    print "Working"
    execute(test, hosts=["jobs0.onshift.com"])

SQLAlchemy error MySQL server has gone away

Error: OperationalError: (OperationalError) (2006, 'MySQL server has gone away'). I already received this error when I coded a project with Flask, but I can't understand why I am getting it now.
I have code like this (yes, if the code is small and executes fast, then there are no errors):
db_engine = create_engine('mysql://root#127.0.0.1/mind?charset=utf8', pool_size=10, pool_recycle=7200)
Base.metadata.create_all(db_engine)
Session = sessionmaker(bind=db_engine, autoflush=True)
Session = scoped_session(Session)
session = Session()
# there many classes and functions
session.close()
And this code returns the 'MySQL server has gone away' error, but only after some time, when I use pauses in my script.
The MySQL I use is from openserver.ru (it's a web server such as WAMP).
Thanks.
Looking at the mysql docs, we can see that there are a bunch of reasons why this error can occur. However, the two main reasons I've seen are:
1) The most common reason is that the connection has been dropped because it hasn't been used in more than 8 hours (default setting)
By default, the server closes the connection after eight hours if nothing has happened. You can change the time limit by setting the wait_timeout variable when you start mysqld
I'll just mention for completeness the two ways to deal with that, but they've already been mentioned in other answers:
A: I have a very long running job and so my connection is stale. To fix this, I refresh my connection:
create_engine(conn_str, pool_recycle=3600) # recycle every hour
B: I have a long running service and long periods of inactivity. To fix this I ping mysql before every call:
create_engine(conn_str, pool_pre_ping=True)
2) My packet size is too large, which should throw this error:
_mysql_exceptions.OperationalError: (1153, "Got a packet bigger than 'max_allowed_packet' bytes")
I've only seen this buried in the middle of the trace, though often you'll only see the generic _mysql_exceptions.OperationalError (2006, 'MySQL server has gone away'), so it's hard to catch, especially if logs are in multiple places.
The above doc says the max packet size is 64MB by default, but it's actually 16MB, which can be verified with SELECT @@max_allowed_packet
To fix this, decrease packet size for INSERT or UPDATE calls.
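For example, if you are bulk-inserting with SQLAlchemy Core, one way to keep each statement under max_allowed_packet is to split the rows into smaller batches; a rough sketch (table is assumed to be a sqlalchemy.Table, rows a list of parameter dicts, and the batch size is only illustrative):
def insert_in_batches(connection, table, rows, batch_size=1000):
    # Send several smaller INSERTs instead of one huge one,
    # so no single statement exceeds max_allowed_packet.
    for start in range(0, len(rows), batch_size):
        connection.execute(table.insert(), rows[start:start + batch_size])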
SQLAlchemy now has a great write-up on how you can use pinging to be pessimistic about your connection's freshness:
http://docs.sqlalchemy.org/en/latest/core/pooling.html#disconnect-handling-pessimistic
From there,
from sqlalchemy import exc
from sqlalchemy import event
from sqlalchemy.pool import Pool

@event.listens_for(Pool, "checkout")
def ping_connection(dbapi_connection, connection_record, connection_proxy):
    cursor = dbapi_connection.cursor()
    try:
        cursor.execute("SELECT 1")
    except:
        # optional - dispose the whole pool
        # instead of invalidating one at a time
        # connection_proxy._pool.dispose()

        # raise DisconnectionError - pool will try
        # connecting again up to three times before raising.
        raise exc.DisconnectionError()
    cursor.close()
And a test to make sure the above works:
from sqlalchemy import create_engine

e = create_engine("mysql://scott:tiger@localhost/test", echo_pool=True)
c1 = e.connect()
c2 = e.connect()
c3 = e.connect()
c1.close()
c2.close()
c3.close()

# pool size is now three.
print "Restart the server"
raw_input()

for i in xrange(10):
    c = e.connect()
    print c.execute("select 1").fetchall()
    c.close()
From the documentation, you can use the pool_recycle parameter:
from sqlalchemy import create_engine
e = create_engine("mysql://scott:tiger@localhost/test", pool_recycle=3600)
I just faced the same problem, which was solved with some effort. I hope my experience is helpful to others.
Following some suggestions, I used a connection pool and set pool_recycle less than wait_timeout, but it still didn't work.
Then, I realized that the global session may just keep reusing the same connection, so the connection pool didn't help. To avoid a global session, generate a new session for each request and remove it with Session.remove() after processing.
Finally, all is well.
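With Flask, that per-request cleanup is often wired into a teardown handler; a minimal sketch, assuming Session is the scoped_session from the question and app is your Flask application:
@app.teardown_appcontext
def remove_session(exception=None):
    # Give the connection back to the pool after every request.
    Session.remove()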
One more point to keep in mind is to manually push the flask application context with database initialization. This should resolve the issue.
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()
app = Flask(__name__)

with app.app_context():
    db.init_app(app)
https://docs.sqlalchemy.org/en/latest/core/pooling.html#disconnect-handling-optimistic
def sql_read(cls, sql, connection):
    """sql for read action like select
    """
    LOG.debug(sql)
    try:
        result = connection.engine.execute(sql)
        header = result.keys()
        for row in result:
            yield dict(zip(header, row))
    except OperationalError as e:
        LOG.info("recreate pool due to %s" % e)
        connection.engine.pool.recreate()
        result = connection.engine.execute(sql)
        header = result.keys()
        for row in result:
            yield dict(zip(header, row))
    except Exception as ee:
        LOG.error(ee)
        raise SqlExecuteError()

DatabaseError: current transaction is aborted, commands ignored until end of transaction block?

I got a lot of errors with the message:
"DatabaseError: current transaction is aborted, commands ignored until end of transaction block"
after changing from python-psycopg to python-psycopg2 as the Django project's database engine.
The code remains the same; I just don't know where those errors are coming from.
This is what postgres does when a query produces an error and you try to run another query without first rolling back the transaction. (You might think of it as a safety feature, to keep you from corrupting your data.)
To fix this, you'll want to figure out where in the code that bad query is being executed. It might be helpful to use the log_statement and log_min_error_statement options in your postgresql server.
To get rid of the error, roll back the last (erroneous) transaction after you've fixed your code:
from django.db import transaction
transaction.rollback()
You can use try-except to prevent the error from occurring:
from django.db import transaction, DatabaseError

try:
    a.save()
except DatabaseError:
    transaction.rollback()
Refer to the Django documentation.
In Flask you just need to write:
curs = conn.cursor()
curs.execute("ROLLBACK")
conn.commit()
P.S. Documentation goes here https://www.postgresql.org/docs/9.4/static/sql-rollback.html
So, I ran into this same issue. The problem I was having here was that my database wasn't properly synced. Simple problems always seem to cause the most angst...
To sync your django db, from within your app directory, within terminal, type:
$ python manage.py syncdb
Edit: Note that if you are using django-south, running the '$ python manage.py migrate' command may also resolve this issue.
Happy coding!
In my experience, these errors happen this way:
try:
    code_that_executes_bad_query()
    # transaction on DB is now bad
except:
    pass

# transaction on db is still bad
code_that_executes_working_query()  # raises transaction error
There's nothing wrong with the second query, but since the real error was caught, the second query is the one that raises the (much less informative) error.
Edit: this only happens if the except clause catches IntegrityError (or any other low-level database exception). If you catch something like DoesNotExist, this error will not come up, because DoesNotExist does not corrupt the transaction.
The lesson here is don't do try/except/pass.
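If you do need to catch an expected database failure, one option on Django 1.6+ is to wrap just the risky statement in transaction.atomic(): the failure is then rolled back to a savepoint and the surrounding transaction stays usable. A minimal sketch (SomeModel and its fields are placeholders, not from the answer above):
from django.db import IntegrityError, transaction

def create_or_skip(**field_values):
    try:
        # atomic() creates a savepoint here, so an IntegrityError
        # does not poison the outer transaction.
        with transaction.atomic():
            return SomeModel.objects.create(**field_values)
    except IntegrityError:
        return None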
I think the pattern priestc mentions is more likely to be the usual cause of this issue when using PostgreSQL.
However I feel there are valid uses for the pattern and I don't think this issue should be a reason to always avoid it. For example:
try:
    profile = user.get_profile()
except ObjectDoesNotExist:
    profile = make_default_profile_for_user(user)

do_something_with_profile(profile)
If you do feel OK with this pattern, but want to avoid explicit transaction handling code all over the place then you might want to look into turning on autocommit mode (PostgreSQL 8.2+): https://docs.djangoproject.com/en/dev/ref/databases/#autocommit-mode
DATABASES['default'] = {
    # .. your usual options...
    'OPTIONS': {
        'autocommit': True,
    },
}
I am unsure if there are important performance considerations (or of any other type).
Just use rollback.
Example code
try:
    cur.execute("CREATE TABLE IF NOT EXISTS test2 (id serial, qa text);")
except:
    cur.execute("rollback")
    cur.execute("CREATE TABLE IF NOT EXISTS test2 (id serial, qa text);")
You only need to run
rollback;
in PostgreSQL and that's it!
If you get this while in interactive shell and need a quick fix, do this:
from django.db import connection
connection._rollback()
originally seen in this answer
I encountered a similar behavior while running a malfunctioning transaction on the postgres terminal. Nothing went through after this, as the database was in an error state. However, as a quick fix, if you can afford to avoid a rollback, the following did the trick for me:
COMMIT;
I've just got a similar error here. I've found the answer in this link https://www.postgresqltutorial.com/postgresql-python/transaction/
client = PsqlConnection(config)
connection = client.connection
cursor = client.cursor

try:
    for query in list_of_querys:
        # query format => "INSERT INTO <database.table> VALUES (<values>)"
        cursor.execute(query)
    connection.commit()
except BaseException as e:
    connection.rollback()
Doing this, the following queries you send to PostgreSQL will not return an error.
I had a similar problem. The solution was to migrate the db (manage.py syncdb or manage.py schemamigration --auto <table name> if you use South).
In Flask shell, all I needed to do was a session.rollback() to get past this.
I have met this issue; the error comes out because the erroneous transaction hasn't been ended properly. I found the PostgreSQL Transaction Control commands here:
Transaction Control
The following commands are used to control transactions
BEGIN TRANSACTION − To start a transaction.
COMMIT − To save the changes, alternatively you can use END TRANSACTION command.
ROLLBACK − To rollback the changes.
So I use END TRANSACTION to end the erroneous transaction; the code looks like this:
for key_of_attribute, command in sql_command.items():
    cursor = connection.cursor()
    g_logger.info("execute command :%s" % (command))
    try:
        cursor.execute(command)
        rows = cursor.fetchall()
        g_logger.info("the command:%s result is :%s" % (command, rows))
        result_list[key_of_attribute] = rows
        g_logger.info("result_list is :%s" % (result_list))
    except Exception as e:
        cursor.execute('END TRANSACTION;')
        g_logger.info("error command :%s and error is :%s" % (command, e))
return result_list
I just had this error too, but it was masking another, more relevant error message where the code was trying to store a 125-character string in a 100-character column:
DatabaseError: value too long for type character varying(100)
I had to debug through the code for the above message to show up, otherwise it displays
DatabaseError: current transaction is aborted
In response to @priestc and @Sebastian, what if you do something like this?
try:
    conn.commit()
except:
    pass

cursor.execute(sql)

try:
    return cursor.fetchall()
except:
    conn.commit()
    return None
I just tried this code and it seems to work, failing silently without having to care about any possible errors, and working when the query is good.
I believe @AnujGupta's answer is correct. However the rollback can itself raise an exception which you should catch and handle:
from django.db import transaction, DatabaseError

try:
    a.save()
except DatabaseError:
    try:
        transaction.rollback()
    except transaction.TransactionManagementError:
        # Log or handle otherwise
        pass
If you find you're rewriting this code in various save() locations, you can extract-method:
import traceback

def try_rolling_back():
    try:
        transaction.rollback()
        log.warning('rolled back')  # example handling
    except transaction.TransactionManagementError:
        log.exception(traceback.format_exc())  # example handling
Finally, you can prettify it using a decorator that protects methods which use save():
from functools import wraps

def try_rolling_back_on_exception(fn):
    @wraps(fn)
    def wrapped(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except:
            traceback.print_exc()
            try_rolling_back()
    return wrapped

@try_rolling_back_on_exception
def some_saving_method():
    # ...
    model.save()
    # ...
Even if you implement the decorator above, it's still convenient to keep try_rolling_back() as an extracted method in case you need to use it manually for cases where specific handling is required, and the generic decorator handling isn't enough.
This is very strange behavior for me. I'm surprised that no one thought of savepoints. In my code the failing query was expected behavior:
from django.db import transaction

@transaction.commit_on_success
def update():
    skipped = 0
    for old_model in OldModel.objects.all():
        try:
            Model.objects.create(
                group_id=old_model.group_uuid,
                file_id=old_model.file_uuid,
            )
        except IntegrityError:
            skipped += 1
    return skipped
I have changed code this way to use savepoints:
from django.db import transaction

@transaction.commit_on_success
def update():
    skipped = 0
    sid = transaction.savepoint()
    for old_model in OldModel.objects.all():
        try:
            Model.objects.create(
                group_id=old_model.group_uuid,
                file_id=old_model.file_uuid,
            )
        except IntegrityError:
            skipped += 1
            transaction.savepoint_rollback(sid)
        else:
            transaction.savepoint_commit(sid)
    return skipped
I am using the python package psycopg2 and I got this error while querying.
I kept running just the query and then the execute function, but when I reran the connection (shown below), it resolved the issue. So rerun what is above your script, i.e. the connection, because, as someone said above, I think it lost the connection or was out of sync or something.
connection = psycopg2.connect(user="##",
                              password="##",
                              host="##",
                              port="##",
                              database="##")
cursor = connection.cursor()
It is an issue with bad SQL execution which does not allow other queries to execute until the previous one gets rolled back.
In pgAdmin 4 (version 4.24) there is a rollback option; one can try this.
You could disable transactions via "set_isolation_level(0)".
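For reference, with psycopg2 that call looks roughly like this (a minimal sketch; the connection parameters are placeholders):
import psycopg2
from psycopg2 import extensions

conn = psycopg2.connect(dbname="mydb", user="me", password="secret")  # placeholder credentials
conn.set_isolation_level(extensions.ISOLATION_LEVEL_AUTOCOMMIT)  # same as set_isolation_level(0)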
