Abort connection when database is read-only (Flask/SQLAlchemy) - python

I am facing the following issue:
We have configured failover DB nodes for our staging environment. During testing, a failover sometimes happens and Flask keeps its open connections to nodes that are now read-only -- any write operation then fails:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1277, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.7/site-packages/elasticapm/instrumentation/packages/dbapi2.py", line 210, in execute
return self._trace_sql(self.__wrapped__.execute, sql, params)
File "/usr/local/lib/python3.7/site-packages/elasticapm/instrumentation/packages/dbapi2.py", line 244, in _trace_sql
result = method(sql, params)
psycopg2.errors.ReadOnlySqlTransaction: cannot execute DELETE in a read-only transaction
I'd like to detect this somehow and close the connection to these nodes, so that any write operation succeeds. Is this possible?

You can import that error class into your module and then catch it in a try/except block:
from psycopg2.errors import ReadOnlySqlTransaction

try:
    ...  # your main stuff here
except ReadOnlySqlTransaction:
    ...  # terminate the connection
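If wrapping every write site is impractical, a more central option is SQLAlchemy's handle_error engine event: flagging the read-only error as a "disconnect" makes SQLAlchemy invalidate the failed connection and recycle the rest of the pool, so later checkouts open fresh connections (ideally to the new primary). This is only a minimal sketch, assuming you can get at the Engine (with Flask-SQLAlchemy that is db.engine inside an application context); the DSN below is a placeholder:

from psycopg2.errors import ReadOnlySqlTransaction
from sqlalchemy import create_engine, event

# Hypothetical DSN; with Flask-SQLAlchemy use your existing `db.engine` instead.
engine = create_engine("postgresql+psycopg2://user:pass@db-host/mydb")

@event.listens_for(engine, "handle_error")
def discard_read_only_connections(context):
    # Treat the read-only error like a disconnect: SQLAlchemy then invalidates
    # the current connection and recycles the rest of the pool, so the next
    # checkout has to reconnect rather than reuse a stale read-only node.
    if isinstance(context.original_exception, ReadOnlySqlTransaction):
        context.is_disconnect = True

The statement that hit the read-only node still fails and has to be rolled back and retried; the event only ensures later requests stop reusing the stale connections.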

Related

"WinError 10061" When trying to connect to Cloud SQL using Cloud SQL Python Connector

I have created a Cloud SQL instance and a new database. However, I can't seem to connect to the database using the Cloud SQL Python Connector. I have followed the sample code and steps in the documentation, but it still fails.
Error:
Traceback (most recent call last):
File "c:\Users\my_name\my_path\gl_cloudsql_mysql_fx.py", line 149, in <module>
print(bnm_data_db.isTableExist('MY_TABLE'))
File "c:\Users\my_name\my_path\gl_cloudsql_mysql_fx.py", line 81, in isTableExist
with self.__pool.connect() as db_conn:
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\engine\base.py", line 3245, in connect
return self._connection_cls(self)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\engine\base.py", line 145, in __init__
self._dbapi_connection = engine.raw_connection()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\engine\base.py", line 3269, in raw_connection
return self.pool.connect()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 455, in connect
return _ConnectionFairy._checkout(self)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 1270, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 719, in checkout
rec = pool._do_get()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\impl.py", line 168, in _do_get
with util.safe_reraise():
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\util\langhelpers.py", line 147, in __exit__
raise exc_value.with_traceback(exc_tb)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\impl.py", line 166, in _do_get
return self._create_connection()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 396, in _create_connection
return _ConnectionRecord(self)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 681, in __init__
self.__connect()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 905, in __connect
with util.safe_reraise():
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\util\langhelpers.py", line 147, in __exit__
raise exc_value.with_traceback(exc_tb)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 901, in __connect
self.dbapi_connection = connection = pool._invoke_creator(self)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 368, in <lambda>
return lambda rec: creator_fn()
File "c:\Users\my_name\my_path\gl_cloudsql_mysql_fx.py", line 27, in getconn
conn: pymysql.connections.Connection = self.__connector.connect(
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\google\cloud\sql\connector\connector.py", line 154, in connect
return connect_task.result()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\concurrent\futures\_base.py", line 458, in result
return self.__get_result()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\concurrent\futures\_base.py", line 403, in __get_result
raise self._exception
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\google\cloud\sql\connector\connector.py", line 261, in connect_async
return await asyncio.wait_for(get_connection(), timeout)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\asyncio\tasks.py", line 445, in wait_for
return fut.result()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\google\cloud\sql\connector\connector.py", line 257, in get_connection
return await self._loop.run_in_executor(None, connect_partial)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\google\cloud\sql\connector\pymysql.py", line 54, in connect
socket.create_connection((ip_address, SERVER_PROXY_PORT)),
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\socket.py", line 845, in create_connection
raise err
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\socket.py", line 833, in create_connection
sock.connect(sa)
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it
I run this Python code locally on my machine (in a Conda environment).
Here's my code:
from google.cloud.sql.connector import Connector
import sqlalchemy
import pymysql

connector = Connector()

# function to return the database connection
def getconn() -> pymysql.connections.Connection:
    conn: pymysql.connections.Connection = connector.connect(
        "my_project:my_location:my_instance",
        "pymysql",
        user="user",
        password="password",
        db="my_database",
    )
    return conn

# create connection pool
pool = sqlalchemy.create_engine(
    "mysql+pymysql://",
    creator=getconn,
)

# table-check statement
check_table_statement = sqlalchemy.text(
    "SELECT count(*) from information_schema.tables",
)

with pool.connect() as db_conn:
    result = db_conn.execute(check_table_statement)
    print(result)

connector.close()
I have tried adding my IP address to the Cloud SQL instance. However, I am not sure which one is the correct address. I have checked my IP address in multiple ways, but all of them are different:
Google
whatsmyip.org
cmd command nslookup myip.opendns.com resolver1.opendns.com
No luck with any of them.
Note:
I did not check the Allow only SSL connections box, so I reckon there is no need to include the CA certificate.
The CloudSQL instance is using Public IP.
Update:
I have added the full error.
To solve this, check the following:
Make sure you have enabled Public IP on the Cloud SQL instance.
Then, create a user with allowed hosts set to % (all IPs).
Next, ensure that a database named my_database exists by clicking the Databases button in the left navigation of the Cloud SQL dashboard.
Ensure that you are including the right service credentials in your script, with the Cloud SQL Client role (see the sketch below).
You should be able to connect. Do let me know if you face any issues, will update the answer accordingly :)
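For that last point, if you want to be explicit about which service account the Connector uses rather than relying on Application Default Credentials, here is a minimal sketch; the key file name is hypothetical and the account must hold the Cloud SQL Client role:

from google.cloud.sql.connector import Connector
from google.oauth2 import service_account

# Hypothetical key file for a service account with the Cloud SQL Client role.
credentials = service_account.Credentials.from_service_account_file(
    "sa-key.json",
    scopes=["https://www.googleapis.com/auth/sqlservice.admin"],
)
connector = Connector(credentials=credentials)
# Build getconn() and the SQLAlchemy engine exactly as in the question.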
A couple of things to try in order to resolve the ConnectionRefusedError:
Make sure the Cloud SQL Admin API is enabled within your Google Cloud Project.
Make sure the IAM principal you are using for authorization within your environment has the Cloud SQL Client Role granted to it.
Double check that your instance connection name is correct (the first argument passed to the .connect method) by verifying it on the Cloud SQL instance overview page.
If you are connecting to a Cloud SQL instance over private IP, make sure you are connecting from a machine within the VPC network and that your code is updated for private IP connections:
from google.cloud.sql.connector import Connector, IPTypes
import sqlalchemy
import pymysql

connector = Connector()

# function to return the database connection
def getconn() -> pymysql.connections.Connection:
    conn: pymysql.connections.Connection = connector.connect(
        "my_project:my_location:my_instance",
        "pymysql",
        user="user",
        password="password",
        db="my_database",
        ip_type=IPTypes.PRIVATE,
    )
    return conn
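For the public IP path, you can also rule out basic network reachability: your traceback shows the connector dialing the instance IP on the server-side proxy port 3307, so a quick check from the same machine should succeed once the Cloud SQL Admin API is enabled and nothing blocks outbound 3307. A small sketch (the IP below is a placeholder for your instance's public IP):

import socket

# Replace with your Cloud SQL instance's public IP; 3307 is the server-side
# proxy port the connector dials in the traceback.
with socket.create_connection(("INSTANCE_PUBLIC_IP", 3307), timeout=5):
    print("TCP connection to the Cloud SQL proxy port succeeded")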
Let me know if any of these help resolve your connection, otherwise I can take a deeper look into it for you :)

Querying on mysql docker container via python, throwing timeout error after few hours

Inserting via a Debezium connector into a MySQL database brought up in a Docker container.
Querying works fine for a number of hours, but after that the same query throws the exception below.
export JAVA_HOME=/tmp/tests/artifacts/java-17/jdk-17; export PATH=$PATH:/tmp/tests/artifacts/java-17/jdk-17/bin; docker exec -i mysql_be1e6a mysql --user=demo --password=demo -D demo -e "select count(k) from test_cdc_f0bf84 where uuid = 'd1e5cd6d-8f7a-457c-b2ea-880c2be52f69'"
2023-01-02 16:27:43,812:ERROR: failed to execute query MySQL rows count by uuid:
Traceback (most recent call last):
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/env/lib/python3.11/site-packages/paramiko/channel.py", line 699, in recv
out = self.in_buffer.read(nbytes, self.timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/env/lib/python3.11/site-packages/paramiko/buffered_pipe.py", line 164, in read
raise PipeTimeout()
paramiko.buffered_pipe.PipeTimeout
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/suites/cdc/abstract.py", line 667, in try_query
res = query_function()
^^^^^^^^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/suites/cdc/test_cdc.py", line 635, in <lambda>
query = lambda: self.mysql_query(
^^^^^^^^^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/suites/cdc/abstract.py", line 544, in mysql_query
result = self.ssh.exec_on_host(host, [
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/main/connection.py", line 335, in exec_on_host
return self._exec_on_host(host, commands, fetch, timeout=timeout, limit_output=limit_output)[host]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/main/connection.py", line 321, in _exec_on_host
res = list(out)
^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/env/lib/python3.11/site-packages/paramiko/file.py", line 125, in __next__
line = self.readline()
^^^^^^^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/env/lib/python3.11/site-packages/paramiko/file.py", line 291, in readline
new_data = self._read(n)
^^^^^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/env/lib/python3.11/site-packages/paramiko/channel.py", line 1361, in _read
return self.channel.recv(size)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/env/lib/python3.11/site-packages/paramiko/channel.py", line 701, in recv
raise socket.timeout()
TimeoutError
After some time I logged in to the machine manually and tried the same read; it still works fine. I am not sure what this issue means.
As explained, I am querying the database via Python and expecting it to return the row count, which it did for a while, but after that it threw a timeout error and socket error.
Querying works fine for a number of hours, but after that the same query throws the exception below.
The default value for interactive_timeout and wait_timeout is 28800 seconds (8 hours). You cannot turn the timeout off entirely, but you can raise these system variables to a much larger value in your MySQL config.
source: Configuring session timeouts
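If you would rather adjust this at runtime than in the server config, a minimal sketch (assuming pymysql is installed, the demo credentials from the question, and the container's MySQL port published on localhost) for inspecting and raising the limits for one session:

import pymysql

# Assumes the container's MySQL port is published on localhost:3306.
conn = pymysql.connect(host="127.0.0.1", user="demo", password="demo", database="demo")
try:
    with conn.cursor() as cur:
        # Inspect the current idle-timeout limit (value is in seconds).
        cur.execute("SHOW VARIABLES LIKE 'wait_timeout'")
        print(cur.fetchone())
        # Raise the limits for this session only; SET GLOBAL needs extra privileges.
        cur.execute("SET SESSION wait_timeout = 86400")
        cur.execute("SET SESSION interactive_timeout = 86400")
finally:
    conn.close()

Note that SET SESSION only affects the connection that issues it; the mysql CLI invoked through docker exec opens its own sessions, so a server-side config change is what actually applies there.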

How can a background process access the database in Flask?

A MySQL datastore is initialized in Flask (with Connexion) on service startup.
service.app.datastore = DatastoreMySQL(service.config)

class DatastoreMySQL(Datastore):
    def __init__(self, config):
        ...
        self.connection_pool = pooling.MySQLConnectionPool(
            database=self.database,
            host=self.hostname,
            username=self.username,
            password=self.password,
            pool_name="pool_name",
            pool_size=self.pool_size,
            autocommit=True
        )

    def exec_query(self, query, params=None):
        try:
            connection = self.connection_pool.get_connection()
            connection.ping(reconnect=True)
            with closing(connection.cursor(dictionary=True, buffered=True)) as cursor:
                if params:
                    cursor.execute(query, params)
                else:
                    cursor.execute(query)
        finally:
            connection.close()
The view functions use the database by passing the DB reference from current_app.
def new():
    do_something_in_db(current_app.datastore, request.get_json())

def do_something_in_db(db, data):
    db.create_new_item(data)
    ...
However, a background process (run with APScheduler) must also run do_something_in_db(); when it is passed a datastore reference, a mysql.connector.errors.OperationalError is thrown.
My understanding is that this error comes from two sources:
The server timed out and closed the connection. However, in this service the exec_query() function obtains a connection and executes right away, so there should be no reason that it times out. The monitor is also initialized at service startup with a datastore reference, but I am not sure how that can time out given that a new connection is created each time exec_query() is called.
The server dropped an incorrect or too large packet. However, there are no packets here - the process is run by a local background scheduler.
The error in full:
Job "Monitor.monitor_running_queries" raised an exception
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/mysql/connector/connection_cext.py", line 509, in cmd_query
raw_as_string=raw_as_string)
_mysql_connector.MySQLInterfaceError: MySQL server has gone away
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/k8s-service/lib/datastore/datastore_mysql.py", line 88, in exec_query
cursor.execute(query, params)
File "/usr/local/lib/python3.6/site-packages/mysql/connector/cursor_cext.py", line 276, in execute
raw_as_string=self._raw_as_string)
File "/usr/local/lib/python3.6/site-packages/mysql/connector/connection_cext.py", line 512, in cmd_query
sqlstate=exc.sqlstate)
mysql.connector.errors.DatabaseError: 2006 (HY000): MySQL server has gone away
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/apscheduler/executors/base.py", line 125, in run_job
retval = job.func(*job.args, **job.kwargs)
File "/opt/k8s-service/lib/background.py", line 60, in monitor_running_queries
self.handle_process_state(query.id, datastore, hive)
File "/opt/k8s-service/lib/background.py", line 66, in handle_process_state
query = datastore.get_item(query_id)
File "/opt/k8s-service/lib/datastore/datastore.py", line 48, in get_item
return_results=True)
File "/opt/k8s-service/lib/datastore/datastore.py", line 97, in exec_query
connection.close()
File "/usr/local/lib/python3.6/site-packages/mysql/connector/pooling.py", line 131, in close
cnx.reset_session()
File "/usr/local/lib/python3.6/site-packages/mysql/connector/connection_cext.py", line 768, in reset_session
raise errors.OperationalError("MySQL Connection not available.")
mysql.connector.errors.OperationalError: MySQL Connection not available.

Cassandra Celery python timeout happens on raw query execution using django db connection execute

My celery is configured for the Cassandra session like this:
def cassandra_init(*args, **kwargs):
    """ Initialize a clean Cassandra connection. """
    if cql_cluster is not None:
        cql_cluster.shutdown()
    if cql_session is not None:
        cql_session.shutdown()
    connection.setup([settings.DATABASES["default"]["HOST"],], settings.DATABASES["default"]["NAME"])

# Initialize worker context (only standard tasks)
worker_process_init.connect(cassandra_init)
When I execute a raw Cassandra query, a timeout happens:
from django.db import connection

cursor = connection.cursor()
total_ap = cursor.execute(
    "SELECT cpu_info FROM ap_live_stats;")
It works fine everywhere else in my Django project, but not inside the Celery tasks.
Error:
[2018-05-09 18:50:21,576: ERROR/ForkPoolWorker-5] Task apps.statistic.tasks.ap_hourly_data_migrator[77a596d4-61a2-43f4-8580-6abc6e9b5866] raised unexpected: OperationTimedOut("errors={'192.168.98.65': 'Client request timeout. See Session.execute[_async](timeout)'}, last_host=192.168.98.65",)
Traceback (most recent call last):
File "/home/vkchlt0079/virtuals/wlc-env/lib/python3.5/site-packages/celery/app/trace.py", line 374, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/vkchlt0079/virtuals/wlc-env/lib/python3.5/site-packages/celery/app/trace.py", line 629, in __protected_call__
return self.run(*args, **kwargs)
File "/home/vkchlt0079/projects/wlcd/src/web_gui/backend/django/wlcd/apps/statistic/tasks.py", line 59, in ap_hourly_data_migrator
"SELECT cpu_info FROM ap_live_stats;")
File "/home/vkchlt0079/virtuals/wlc-env/lib/python3.5/site-packages/django_cassandra_engine/utils.py", line 47, in execute
return self.cursor.execute(sql)
File "/home/vkchlt0079/virtuals/wlc-env/lib/python3.5/site-packages/django_cassandra_engine/connection.py", line 12, in execute
return self.connection.execute(*args, **kwargs)
File "/home/vkchlt0079/virtuals/wlc-env/lib/python3.5/site-packages/django_cassandra_engine/connection.py", line 86, in execute
self.session.set_keyspace(self.keyspace)
File "cassandra/cluster.py", line 2448, in cassandra.cluster.Session.set_keyspace (cassandra/cluster.c:48048)
File "cassandra/cluster.py", line 2030, in cassandra.cluster.Session.execute (cassandra/cluster.c:38536)
File "cassandra/cluster.py", line 3844, in cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:80834)
cassandra.OperationTimedOut: errors={'192.168.98.65': 'Client request timeout. See Session.execute[_async](timeout)'}, last_host=192.168.98.65
I tried to increase the timeout, but it is not working and I am not sure where it should be set.
# project/tasks.py
from celery.signals import worker_process_init
from django.db import connection

@worker_process_init.connect
def connect_db(**kwargs):
    connection.reconnect()

This will initialize the DB connection required by the Django Cassandra engine in each worker process. Reference
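Since the question also asks where the driver timeout can be raised: the OperationTimedOut comes from the cassandra-driver's per-request timeout (10 seconds by default). A heavily hedged sketch, assuming cassandra.cqlengine manages the session that django_cassandra_engine uses, is to bump default_timeout right after the connection is re-established in the worker:

from cassandra.cqlengine import connection as cql_connection
from celery.signals import worker_process_init
from django.db import connection

@worker_process_init.connect
def connect_db(**kwargs):
    connection.reconnect()
    # Assumption: cqlengine exposes the active session here; raise its
    # per-request timeout (in seconds) for queries issued on this session.
    cql_connection.get_session().default_timeout = 60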

Django-Celery worker hangs when SQL Server Database crashes using FreeTDS driver

I have a Celery worker, running on an Ubuntu 14.04 server, that reads from and writes to an MSSQL Server database using pyodbc and the FreeTDS driver. When the SQL Server goes down, the function fails as expected and the Celery worker starts trying to clean up and get ready for the next task. At this time, the worker calls Django's connection.close() method. This method appears to send a command to roll back any incomplete transactions. Since the server is down, this throws an exception that is not caught by the Celery worker. The worker then hangs and neither releases the task nor moves on to the next task.
I tried overriding the on_failure and after_return methods for the function and calling connection.close() there (as specified in other answers), but that didn't work. I suspect this is because when I call connection.close() it hits the same issue and just bubbles the exception up, or because Celery's cleanup code runs before those two methods get called.
Any ideas on how to either catch this exception before it gets to Celery, or avoid it altogether?
Below is the stack trace of the exception:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 283, in trace_task
state, retval, uuid, args, kwargs, None,
File "/var/www/cortex/corespring/tasks.py", line 13, in after_return
connections['xxx'].close()
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/init.py", line 317, in close
self.connection.close()
File "/usr/local/lib/python2.7/dist-packages/pyodbc.py", line 2642, in close
self.rollback()
File "/usr/local/lib/python2.7/dist-packages/pyodbc.py", line 2565, in rollback
check_success(self, ret)
File "/usr/local/lib/python2.7/dist-packages/pyodbc.py", line 987, in check_success
ctrl_err(SQL_HANDLE_DBC, ODBC_obj.dbc_h, ret, ODBC_obj.ansi)
File "/usr/local/lib/python2.7/dist-packages/pyodbc.py", line 965, in ctrl_err
raise DatabaseError(state,err_text)
DatabaseError: (u'08S01', u'[08S01] [FreeTDS][SQL Server]Write to the server failed')
