Python and MySQL connector - Connection timed out error

I have a Python script which connects to my database, gets all the users' hashes and their emails, then iterates over those hashes and gets some other data from the DB based on each user's hash value.
The problem is that my MySQL Python connector breaks at various points and gives me this exception:
Traceback (most recent call last):
File "/home/antonio/.local/lib/python3.9/site-packages/mysql/connector/network.py", line 509, in open_connection
self.sock.connect(sockaddr)
socket.timeout: timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/antonio/.local/lib/python3.9/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/antonio/.local/lib/python3.9/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/antonio/.local/lib/python3.9/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/antonio/.local/lib/python3.9/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/antonio/.local/lib/python3.9/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/antonio/.local/lib/python3.9/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/antonio/Desktop/PythonProjects/crypton-portfolio-api/crypto_tracking_coins/main_with_flask.py", line 701, in get_all_users_with_portfolio_and_accounting_data
user_coins_from_db = perform_db_query_fetchall('SELECT `portfolio`, `date`, `type`, `invested` FROM `users` WHERE `hash` = %s AND `fake` = "no"', (user_hash, ))
File "/home/antonio/Desktop/PythonProjects/crypton-portfolio-api/crypto_tracking_coins/helpers.py", line 6, in perform_db_query_fetchall
mydb = mysql.connector.connect(
File "/home/antonio/.local/lib/python3.9/site-packages/mysql/connector/__init__.py", line 179, in connect
return MySQLConnection(*args, **kwargs)
File "/home/antonio/.local/lib/python3.9/site-packages/mysql/connector/connection.py", line 95, in __init__
self.connect(**kwargs)
File "/home/antonio/.local/lib/python3.9/site-packages/mysql/connector/abstracts.py", line 716, in connect
self._open_connection()
File "/home/antonio/.local/lib/python3.9/site-packages/mysql/connector/connection.py", line 206, in _open_connection
self._socket.open_connection()
File "/home/antonio/.local/lib/python3.9/site-packages/mysql/connector/network.py", line 511, in open_connection
raise errors.InterfaceError(
mysql.connector.errors.InterfaceError: 2003: Can't connect to MySQL server on 'localhost:3306' (timed out)
My helpers.py file, which includes all the logic for performing SQL queries, looks like this:
import mysql.connector
from config import *

def perform_db_query_fetchall(query: str, params: tuple):
    # Initiate DB connection
    mydb = mysql.connector.connect(
        host=DatabaseConfig.host_db,
        user=DatabaseConfig.user_db,
        password=DatabaseConfig.password_db,
        database=DatabaseConfig.database_db
    )
    # Initiate DB cursor
    c = mydb.cursor()
    # Execute SQL query and get the results
    c.execute(query, params)
    results = c.fetchall()
    # Close the DB connection
    c.close()
    mydb.close()
    # Return the results
    return results

def perform_db_query_fetchone(query: str, params: tuple):
    # Initiate DB connection
    mydb = mysql.connector.connect(
        host=DatabaseConfig.host_db,
        user=DatabaseConfig.user_db,
        password=DatabaseConfig.password_db,
        database=DatabaseConfig.database_db
    )
    # Initiate DB cursor
    c = mydb.cursor()
    # Execute SQL query and get the results
    c.execute(query, params)
    results = c.fetchone()
    # Close the DB connection
    c.close()
    mydb.close()
    # Return the results
    return results

def perform_db_query_with_commit(query: str, params: tuple):
    # Initiate DB connection
    mydb = mysql.connector.connect(
        host=DatabaseConfig.host_db,
        user=DatabaseConfig.user_db,
        password=DatabaseConfig.password_db,
        database=DatabaseConfig.database_db,
        autocommit=True
    )
    # Initiate DB cursor
    c = mydb.cursor()
    # Execute SQL query
    c.execute(query, params)
    # Close the DB connection
    c.close()
    mydb.close()
    return
I'm looping with a for loop (about 1,900 iterations) and performing roughly twice as many SQL queries inside that loop.
The script never finishes the job I need it to do; it just prints the Connection Timed Out exception shown above.
Interestingly, it always crashes at a different stage. On my last run it failed around the 1600th iteration; sometimes it fails around the 30th...
Any ideas what I can do to fix it?
Thank you.

This can happen if the database server is running out of free space. In my case the database server was full.
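This can also happen when the script opens a brand-new TCP connection for every single query. Each of the helper functions above connects and disconnects once per call, so a run with ~1,900 iterations and two queries each opens roughly 3,800 connections in quick succession; that can exhaust ephemeral ports or hit the server's connection limits and surface as exactly this kind of intermittent timeout. A minimal sketch of a pooled variant (assuming the same DatabaseConfig attributes as in the question) could look like this:

import mysql.connector
from mysql.connector import pooling
from config import *

# One pool for the whole process; connections are reused
# instead of being re-established for every query.
pool = pooling.MySQLConnectionPool(
    pool_name="helpers_pool",
    pool_size=5,
    host=DatabaseConfig.host_db,
    user=DatabaseConfig.user_db,
    password=DatabaseConfig.password_db,
    database=DatabaseConfig.database_db,
)

def perform_db_query_fetchall(query: str, params: tuple):
    # Borrow a connection from the pool
    mydb = pool.get_connection()
    try:
        c = mydb.cursor()
        c.execute(query, params)
        results = c.fetchall()
        c.close()
        return results
    finally:
        # close() on a pooled connection returns it to the pool
        mydb.close()

With this, the whole run keeps at most five server connections alive instead of opening one per query.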

Related

"WinError 10061" When trying to connect to Cloud SQL using Cloud SQL Python Connector

I have created a Cloud SQL instance and a new database. However, I can't seem to connect to the database using the Cloud SQL Python Connector. I have followed the sample code and steps in the documentation, but it still fails.
Error:
Traceback (most recent call last):
File "c:\Users\my_name\my_path\gl_cloudsql_mysql_fx.py", line 149, in <module>
print(bnm_data_db.isTableExist('MY_TABLE'))
File "c:\Users\my_name\my_path\gl_cloudsql_mysql_fx.py", line 81, in isTableExist
with self.__pool.connect() as db_conn:
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\engine\base.py", line 3245, in connect
return self._connection_cls(self)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\engine\base.py", line 145, in __init__
self._dbapi_connection = engine.raw_connection()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\engine\base.py", line 3269, in raw_connection
return self.pool.connect()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 455, in connect
return _ConnectionFairy._checkout(self)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 1270, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 719, in checkout
rec = pool._do_get()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\impl.py", line 168, in _do_get
with util.safe_reraise():
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\util\langhelpers.py", line 147, in __exit__
raise exc_value.with_traceback(exc_tb)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\impl.py", line 166, in _do_get
return self._create_connection()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 396, in _create_connection
return _ConnectionRecord(self)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 681, in __init__
self.__connect()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 905, in __connect
with util.safe_reraise():
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\util\langhelpers.py", line 147, in __exit__
raise exc_value.with_traceback(exc_tb)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 901, in __connect
self.dbapi_connection = connection = pool._invoke_creator(self)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 368, in <lambda>
return lambda rec: creator_fn()
File "c:\Users\my_name\my_path\gl_cloudsql_mysql_fx.py", line 27, in getconn
conn: pymysql.connections.Connection = self.__connector.connect(
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\google\cloud\sql\connector\connector.py", line 154, in connect
return connect_task.result()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\concurrent\futures\_base.py", line 458, in result
return self.__get_result()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\concurrent\futures\_base.py", line 403, in __get_result
raise self._exception
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\google\cloud\sql\connector\connector.py", line 261, in connect_async
return await asyncio.wait_for(get_connection(), timeout)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\asyncio\tasks.py", line 445, in wait_for
return fut.result()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\google\cloud\sql\connector\connector.py", line 257, in get_connection
return await self._loop.run_in_executor(None, connect_partial)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\google\cloud\sql\connector\pymysql.py", line 54, in connect
socket.create_connection((ip_address, SERVER_PROXY_PORT)),
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\socket.py", line 845, in create_connection
raise err
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\socket.py", line 833, in create_connection
sock.connect(sa)
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it
I run this Python code locally on my machine (in a Conda environment).
Here's my code:
from google.cloud.sql.connector import Connector
import sqlalchemy
import pymysql

connector = Connector()

# function to return the database connection
def getconn() -> pymysql.connections.Connection:
    conn: pymysql.connections.Connection = connector.connect(
        "my_project:my_location:my_instance",
        "pymysql",
        user="user",
        password="password",
        db="my_database",
    )
    return conn

# create connection pool
pool = sqlalchemy.create_engine(
    "mysql+pymysql://",
    creator=getconn,
)

# insert statement
check_table_statement = sqlalchemy.text(
    "SELECT count(*) from information_schema.tables",
)

with pool.connect() as db_conn:
    result = db_conn.execute(check_table_statement)
    print(result)

connector.close()
I have tried adding my IP address to the Cloud SQL instance. However, I am not sure which one is the correct address; I have checked my IP address in multiple ways, and they all give different results:
Google
whatsmyip.org
the cmd command nslookup myip.opendns.com resolver1.opendns.com
No luck with any of them.
Note:
I did not check the Allow only SSL connections box, so I reckon there is no need to include the CA certificate.
The CloudSQL instance is using Public IP.
Update:
I have added the full error.
To solve this, check the following:
Make sure you enabled Public IP on the Cloud SQL instance.
Then, create a user with allowed hosts set to % (all IPs).
Next, ensure that a database named my_database exists by clicking the Databases button in the left navigation of the Cloud SQL dashboard.
Ensure that the service credentials you are including in your script have the Cloud SQL Client role.
You should then be able to connect. Do let me know if you face any issues, and I will update the answer accordingly :)
A couple of things for you to try in order to resolve the ConnectionRefusedError:
Make sure the Cloud SQL Admin API is enabled within your Google Cloud Project.
Make sure the IAM principal you are using for authorization within your environment has the Cloud SQL Client Role granted to it.
Double check that your instance connection name is correct (first argument being passed to the .connect method) by verifying it on the Cloud SQL instance overview page.
If you are connecting to a Cloud SQL instance Private IP make sure you are connecting from a machine within the VPC network and that your code is updated for private IP connections:
from google.cloud.sql.connector import Connector, IPTypes
import sqlalchemy
import pymysql

connector = Connector()

# function to return the database connection
def getconn() -> pymysql.connections.Connection:
    conn: pymysql.connections.Connection = connector.connect(
        "my_project:my_location:my_instance",
        "pymysql",
        user="user",
        password="password",
        db="my_database",
        ip_type=IPTypes.PRIVATE,
    )
    return conn
Let me know if any of these help resolve your connection, otherwise I can take a deeper look into it for you :)

Connecting to ClearDB in a Heroku application with mysql.connector

I am trying to connect to ClearDB on Heroku from a Python app using mysql.connector.
My code looks like this:
conn = mysql.connector.connect(
    host="clearDbHost",
    user="123qwe",
    password="123qwe",
    database="heroku_4fsdfsdf30daf8",
    port=3306,
    autocommit=True
)
curs = conn.cursor()
The connection does create the tables, but after that it disconnects, and I can't execute the curs.execute() command later in the code. I get this error:
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.10/site-packages/mysql/connector/connection_cext.py", line 535, in cmd_query
self._cmysql.query(query,
_mysql_connector.MySQLInterfaceError: Lost connection to MySQL server during query
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/twitter.py", line 855, in <module>
getMyFollowersToDatabse()
File "/app/twitter.py", line 186, in getMyFollowersToDatabse
curs.execute("INSERT INTO users (username, follow_id) VALUES ('" + user.username + "', '" + user.id + "')")
File "/app/.heroku/python/lib/python3.10/site-packages/mysql/connector/cursor_cext.py", line 269, in execute
result = self._cnx.cmd_query(stmt, raw=self._raw,
File "/app/.heroku/python/lib/python3.10/site-packages/mysql/connector/connection_cext.py", line 540, in cmd_query
raise errors.get_mysql_exception(exc.errno, msg=exc.msg,
mysql.connector.errors.OperationalError: 2013 (HY000): Lost connection to MySQL server during query
I can see in ClearDB that the database name has something like ?reconnect=true appended to it. I am not using curs.close().
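ClearDB is known to drop idle connections aggressively, so a long-running script often needs to re-establish its connection before issuing further queries; the ?reconnect=true flag only helps drivers that honour it. With mysql.connector you can do this explicitly via ping(). A minimal sketch, assuming the same conn object as above and that user is the object from the loop in the question's twitter.py:

# Re-establish the connection if the server has silently dropped it;
# ping() retries the reconnect up to 3 times, 2 seconds apart.
conn.ping(reconnect=True, attempts=3, delay=2)
curs = conn.cursor()
# Parameterized query: the driver handles quoting, which also avoids
# the SQL injection risk of concatenating user.username into the SQL.
curs.execute(
    "INSERT INTO users (username, follow_id) VALUES (%s, %s)",
    (user.username, user.id),
)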

Problems executing a MySQL query in python

I have problems executing a method that deletes a box in my program. The method first deletes everything inside the box and then deletes the box itself, to avoid conflicts with foreign keys.
Here is my configuration for the connection:
import mysql.connector

connMySQL = mysql.connector.connect(
    host='localhost',
    db='wmszf',
    user='root',
    passwd='',
)
Here is the method:
def deleteBoxComplete(idBox):
    cursor = connMySQL.cursor()
    cursor.execute('FLUSH QUERY CACHE;')
    cursor.close()
    cursor = connMySQL.cursor()
    cursor.execute(queryDelAllRefInBox(idBox))
    connMySQL.commit()
    cursor.close()
    cursor = connMySQL.cursor()
    cursor.execute(queryDeleteBox(idBox))
    connMySQL.commit()
    cursor.close()
You may notice that I flush the query cache, since my priority is to get the most up-to-date information possible.
Here is the query built by "queryDelAllRefInBox(idBox)":
DELETE FROM
picking_boxitem
WHERE
idBox_id = """+idBox+""";
Then I leave the query "queryDeleteBox(idBox)":
DELETE FROM
picking_box
WHERE
idBox = """+idBox+""";
The problem is that when the deleteBoxComplete(idBox) method executes, it sometimes abruptly closes the connection to the database. It does so seemingly arbitrarily: sometimes it happens, sometimes it doesn't. Why does this happen? How can I prevent it? Is there a good practice that would let me execute these kinds of statements more reliably?
Here is the output corresponding to the error:
Traceback (most recent call last):
File "C:\Users\USUARIO\AppData\Local\Programs\Python\Python39\lib\site-packages\django\core\handlers\exception.py", line 47, in inner
response = get_response(request)
File "C:\Users\USUARIO\AppData\Local\Programs\Python\Python39\lib\site-packages\django\core\handlers\base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "C:\Users\USUARIO\AppData\Local\Programs\Python\Python39\lib\site-packages\django\contrib\auth\decorators.py", line 21, in _wrapped_view
return view_func(request, *args, **kwargs)
File "C:\Users\USUARIO\Desktop\Projects\Produccion\wms\wms\picking\views.py", line 160, in listBoxesInPicking
boxes = getAllBoxInPicking(id)
File "C:\Users\USUARIO\Desktop\Projects\Produccion\wms\wms\MySQL\views.py", line 385, in getAllBoxInPicking
cursor = connMySQL.cursor()
File "C:\Users\USUARIO\AppData\Local\Programs\Python\Python39\lib\site-packages\mysql\connector\connection.py", line 809, in cursor
raise errors.OperationalError("MySQL Connection not available.")
mysql.connector.errors.OperationalError: MySQL Connection not available.
Other times it can come out like this:
Traceback (most recent call last):
File "C:\Users\USUARIO\AppData\Local\Programs\Python\Python39\lib\site-packages\django\core\handlers\exception.py", line 47, in inner
response = get_response(request)
File "C:\Users\USUARIO\AppData\Local\Programs\Python\Python39\lib\site-packages\django\core\handlers\base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "C:\Users\USUARIO\AppData\Local\Programs\Python\Python39\lib\site-packages\django\contrib\auth\decorators.py", line 21, in _wrapped_view
return view_func(request, *args, **kwargs)
File "C:\Users\USUARIO\Desktop\Projects\Produccion\wms\wms\picking\views.py", line 160, in listBoxesInPicking
boxes = getAllBoxInPicking(id)
File "C:\Users\USUARIO\Desktop\Projects\Produccion\wms\wms\MySQL\views.py", line 383, in getAllBoxInPicking
cursor.execute(queryGetAllBoxInPicking(idPicking))
File "C:\Users\USUARIO\AppData\Local\Programs\Python\Python39\lib\site-packages\mysql\connector\cursor.py", line 551, in execute
self._handle_result(self._connection.cmd_query(stmt))
File "C:\Users\USUARIO\AppData\Local\Programs\Python\Python39\lib\site-packages\mysql\connector\connection.py", line 490, in cmd_query
result = self._handle_result(self._send_cmd(ServerCmd.QUERY, query))
File "C:\Users\USUARIO\AppData\Local\Programs\Python\Python39\lib\site-packages\mysql\connector\connection.py", line 384, in _handle_result
elif packet[4] == 0:
IndexError: bytearray index out of range
I appreciate your collaboration.
Additional things I've tried after no response
After looking for information, I found that the MySQL engine probably performs much worse than I imagined when clearing the cache, so I chose a different configuration: instead of making a simple connection, I decided to use a connection pool so the connections are managed. The new configuration is as follows:
from django.conf import settings
from mysql.connector import Error
from mysql.connector import pooling

poolname = "mysqlpool"
varHost = 'localhost'
varUser = settings.DATABASES['default']['USER']
varPasswd = settings.DATABASES['default']['PASSWORD']
varDB = settings.DATABASES['default']['NAME']

try:
    connection_pool = pooling.MySQLConnectionPool(
        pool_name="pynative_pool",
        pool_size=10,
        pool_reset_session=True,
        host=varHost,
        database=varDB,
        user=varUser,
        password=varPasswd)
    print("Printing connection pool properties ")
    print("Connection Pool Name - ", connection_pool.pool_name)
    print("Connection Pool Size - ", connection_pool.pool_size)

    connection_object = connection_pool.get_connection()
    if connection_object.is_connected():
        db_Info = connection_object.get_server_info()
        print("Connected to MySQL database using connection pool ... MySQL Server version on ", db_Info)
        cursor = connection_object.cursor()
        cursor.execute("select database();")
        record = cursor.fetchone()
        print("Your connected to - ", record)
except Error as e:
    print("Error while connecting to MySQL using Connection pool ", e)
finally:
    if connection_object.is_connected():
        db_Info = connection_object.get_server_info()
        print("Connected to MySQL database using connection pool ... MySQL Server version on ", db_Info)
        cursor = connection_object.cursor()
        cursor.execute("select database();")
        record = cursor.fetchone()
        print("Your connected to - ", record)
However, I got the same error as before. Here is the error trace:
Internal Server Error: /picking/listReferencesInBox/179/
Traceback (most recent call last):
File "C:\Users\USUARIO\AppData\Local\Programs\Python\Python39\lib\site-packages\django\core\handlers\exception.py", line 47, in inner
response = get_response(request)
File "C:\Users\USUARIO\AppData\Local\Programs\Python\Python39\lib\site-packages\django\core\handlers\base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "C:\Users\USUARIO\AppData\Local\Programs\Python\Python39\lib\site-packages\django\contrib\auth\decorators.py", line 21, in _wrapped_view
return view_func(request, *args, **kwargs)
File "C:\Users\USUARIO\Desktop\Projects\Produccion\wms\wms\picking\views.py", line 250, in listReferencesInBox
references = getReferencesInBoxMonitor(id)
File "C:\Users\USUARIO\Desktop\Projects\Produccion\wms\wms\MySQL\views.py", line 279, in getReferencesInBoxMonitor
cursor.execute(queryGetReferencesInBoxMonitor(idBox))
File "C:\Users\USUARIO\AppData\Local\Programs\Python\Python39\lib\site-packages\mysql\connector\cursor.py", line 551, in execute
self._handle_result(self._connection.cmd_query(stmt))
File "C:\Users\USUARIO\AppData\Local\Programs\Python\Python39\lib\site-packages\mysql\connector\connection.py", line 490, in cmd_query
result = self._handle_result(self._send_cmd(ServerCmd.QUERY, query))
File "C:\Users\USUARIO\AppData\Local\Programs\Python\Python39\lib\site-packages\mysql\connector\connection.py", line 384, in _handle_result
elif packet[4] == 0:
IndexError: bytearray index out of range
The error occurs with the same frequency as before.
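As a side note on the queries themselves: building the DELETE statements by concatenating idBox into the SQL string (as queryDelAllRefInBox and queryDeleteBox do) is fragile and open to SQL injection, and sharing one bare global connection across Django requests is a common source of "MySQL Connection not available" errors. A minimal sketch of a parameterized version that borrows a dedicated connection per call (assuming the connection_pool object from the update above):

def deleteBoxComplete(idBox):
    # Borrow a dedicated connection for this request
    connMySQL = connection_pool.get_connection()
    try:
        cursor = connMySQL.cursor()
        # Parameterized deletes: child rows first, then the box itself
        cursor.execute(
            "DELETE FROM picking_boxitem WHERE idBox_id = %s", (idBox,))
        cursor.execute(
            "DELETE FROM picking_box WHERE idBox = %s", (idBox,))
        connMySQL.commit()
        cursor.close()
    except Exception:
        # Undo the partial delete if the second statement fails
        connMySQL.rollback()
        raise
    finally:
        # Return the connection to the pool
        connMySQL.close()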

Cassandra Celery python timeout happens on raw query execution using django db connection execute

My Celery worker is configured for the Cassandra session like this:
def cassandra_init(*args, **kwargs):
    """ Initialize a clean Cassandra connection. """
    if cql_cluster is not None:
        cql_cluster.shutdown()
    if cql_session is not None:
        cql_session.shutdown()
    connection.setup([settings.DATABASES["default"]["HOST"],], settings.DATABASES["default"]["NAME"])

# Initialize worker context (only standard tasks)
worker_process_init.connect(cassandra_init)
When I execute a raw Cassandra query, a timeout happens:
from django.db import connection

cursor = connection.cursor()
total_ap = cursor.execute(
    "SELECT cpu_info FROM ap_live_stats;")
It works fine everywhere else in my Django project, but not inside the Celery tasks.
Error:
[2018-05-09 18:50:21,576: ERROR/ForkPoolWorker-5] Task apps.statistic.tasks.ap_hourly_data_migrator[77a596d4-61a2-43f4-8580-6abc6e9b5866] raised unexpected: OperationTimedOut("errors={'192.168.98.65': 'Client request timeout. See Session.execute[_async](timeout)'}, last_host=192.168.98.65",)
Traceback (most recent call last):
File "/home/vkchlt0079/virtuals/wlc-env/lib/python3.5/site-packages/celery/app/trace.py", line 374, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/vkchlt0079/virtuals/wlc-env/lib/python3.5/site-packages/celery/app/trace.py", line 629, in __protected_call__
return self.run(*args, **kwargs)
File "/home/vkchlt0079/projects/wlcd/src/web_gui/backend/django/wlcd/apps/statistic/tasks.py", line 59, in ap_hourly_data_migrator
"SELECT cpu_info FROM ap_live_stats;")
File "/home/vkchlt0079/virtuals/wlc-env/lib/python3.5/site-packages/django_cassandra_engine/utils.py", line 47, in execute
return self.cursor.execute(sql)
File "/home/vkchlt0079/virtuals/wlc-env/lib/python3.5/site-packages/django_cassandra_engine/connection.py", line 12, in execute
return self.connection.execute(*args, **kwargs)
File "/home/vkchlt0079/virtuals/wlc-env/lib/python3.5/site-packages/django_cassandra_engine/connection.py", line 86, in execute
self.session.set_keyspace(self.keyspace)
File "cassandra/cluster.py", line 2448, in cassandra.cluster.Session.set_keyspace (cassandra/cluster.c:48048)
File "cassandra/cluster.py", line 2030, in cassandra.cluster.Session.execute (cassandra/cluster.c:38536)
File "cassandra/cluster.py", line 3844, in cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:80834)
cassandra.OperationTimedOut: errors={'192.168.98.65': 'Client request timeout. See Session.execute[_async](timeout)'}, last_host=192.168.98.65
I tried to increase the timeout, but it is not working, and I am not sure where it needs to be set.
# project/tasks.py
from celery.signals import worker_process_init
from django.db import connection

@worker_process_init.connect
def connect_db(**kwargs):
    connection.reconnect()

This will re-initiate the DB connection required by the Django Cassandra engine. Reference
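If the timeout still fires, note that the error in the question is a client-side cassandra-driver request timeout. As an illustrative sketch that goes beyond the answer above (the contact point comes from the question's error log; the keyspace name is a placeholder), the driver lets you raise that timeout either per session or per statement:

from cassandra.cluster import Cluster

cluster = Cluster(["192.168.98.65"])      # contact point from the error log
session = cluster.connect("my_keyspace")  # hypothetical keyspace name

# Option 1: raise the default client request timeout (seconds)
# for every statement executed through this session
session.default_timeout = 30

# Option 2: raise it for a single statement only
rows = session.execute("SELECT cpu_info FROM ap_live_stats;", timeout=30)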

Connection problems with SQLAlchemy and multiple processes

I'm using PostgreSQL and SQLAlchemy in a project that consists of a main process which launches child processes. All of these processes access the database via SQLAlchemy.
I'm experiencing repeatable connection failures: The first few child processes work correctly, but after a while a connection error is raised. Here's an MWCE:
import multiprocessing

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import sessionmaker

DB_URL = 'postgresql://user:password@localhost/database'

Base = declarative_base()

class Dummy(Base):
    __tablename__ = 'dummies'
    id = Column(Integer, primary_key=True)
    value = Column(Integer)

engine = None
Session = None
session = None

def init():
    global engine, Session, session
    engine = create_engine(DB_URL)
    Base.metadata.create_all(engine)
    Session = sessionmaker(bind=engine)
    session = Session()

def cleanup():
    session.close()
    engine.dispose()

def target(id):
    init()
    try:
        dummy = session.query(Dummy).get(id)
        dummy.value += 1
        session.add(dummy)
        session.commit()
    finally:
        cleanup()

def main():
    init()
    try:
        dummy = Dummy(value=1)
        session.add(dummy)
        session.commit()
        p = multiprocessing.Process(target=target, args=(dummy.id,))
        p.start()
        p.join()
        session.refresh(dummy)
        assert dummy.value == 2
    finally:
        cleanup()

if __name__ == '__main__':
    i = 1
    while True:
        print(i)
        main()
        i += 1
On my system (PostgreSQL 9.6, SQLAlchemy 1.1.4, psycopg2 2.6.2, Python 2.7, Ubuntu 14.04) this yields
1
2
3
4
5
6
7
8
9
10
11
Traceback (most recent call last):
File "./fork_test.py", line 64, in <module>
main()
File "./fork_test.py", line 55, in main
session.refresh(dummy)
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 1422, in refresh
only_load_props=attribute_names) is None:
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 223, in load_on_ident
return q.one()
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2756, in one
ret = self.one_or_none()
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2726, in one_or_none
ret = list(self)
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2797, in __iter__
return self._execute_and_instances(context)
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2820, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 945, in execute
return meth(self, multiparams, params)
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 263, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1053, in _execute_clauseelement
compiled_sql, distilled_params
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1189, in _execute_context
context)
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1393, in _handle_dbapi_exception
exc_info
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 202, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
context)
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 469, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
[SQL: 'SELECT dummies.id AS dummies_id, dummies.value AS dummies_value \nFROM dummies \nWHERE dummies.id = %(param_1)s'] [parameters: {'param_1': 11074}]
This is repeatable and always crashes at the same iteration.
I'm creating a new engine and session after the fork as recommended by the SQLAlchemy documentation and elsewhere. Interestingly, the following slightly different approach does not crash:
import contextlib
import multiprocessing

import sqlalchemy
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import sessionmaker

DB_URL = 'postgresql://user:password@localhost/database'

Base = declarative_base()

class Dummy(Base):
    __tablename__ = 'dummies'
    id = Column(Integer, primary_key=True)
    value = Column(Integer)

@contextlib.contextmanager
def get_session():
    engine = sqlalchemy.create_engine(DB_URL)
    Base.metadata.create_all(engine)
    Session = sessionmaker(bind=engine)
    session = Session()
    try:
        yield session
    finally:
        session.close()
        engine.dispose()

def target(id):
    with get_session() as session:
        dummy = session.query(Dummy).get(id)
        dummy.value += 1
        session.add(dummy)
        session.commit()

def main():
    with get_session() as session:
        dummy = Dummy(value=1)
        session.add(dummy)
        session.commit()
        p = multiprocessing.Process(target=target, args=(dummy.id,))
        p.start()
        p.join()
        session.refresh(dummy)
        assert dummy.value == 2

if __name__ == '__main__':
    i = 1
    while True:
        print(i)
        main()
        i += 1
Since the original code is more complex and cannot simply be switched over to the latter version, I'd like to understand why one of these works and the other doesn't.
The only obvious difference is that the crashing code uses global variables for the engine and the session; these are shared via copy-on-write with the child processes. However, since I reset them directly after the fork, I don't understand how that could be a problem.
Update
I re-ran the two code pieces with the latest SQLAlchemy (1.1.5) using both Python 2.7 and Python 3.4. On both the results are basically as described above. However, on Python 2.7 the crash of the first code piece now happens in the 13th iteration (reproducibly) while on 3.4 it already happens in the third iteration (also reproducibly). The second code piece runs without problems on both versions. Here's the traceback from 3.4:
1
2
3
Traceback (most recent call last):
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
context)
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/engine/default.py", line 470, in do_execute
cursor.execute(statement, parameters)
psycopg2.OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "fork_test.py", line 64, in <module>
main()
File "fork_test.py", line 55, in main
session.refresh(dummy)
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/orm/session.py", line 1424, in refresh
only_load_props=attribute_names) is None:
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/orm/loading.py", line 223, in load_on_ident
return q.one()
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/orm/query.py", line 2749, in one
ret = self.one_or_none()
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/orm/query.py", line 2719, in one_or_none
ret = list(self)
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/orm/query.py", line 2790, in __iter__
return self._execute_and_instances(context)
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/orm/query.py", line 2813, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 945, in execute
return meth(self, multiparams, params)
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/sql/elements.py", line 263, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1053, in _execute_clauseelement
compiled_sql, distilled_params
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1189, in _execute_context
context)
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1393, in _handle_dbapi_exception
exc_info
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/util/compat.py", line 186, in reraise
raise value.with_traceback(tb)
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
context)
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/engine/default.py", line 470, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
[SQL: 'SELECT dummies.id AS dummies_id, dummies.value AS dummies_value \nFROM dummies \nWHERE dummies.id = %(param_1)s'] [parameters: {'param_1': 3397}]
Here's the PostgreSQL log (it's the same for 2.7 and 3.4):
2017-01-18 10:59:36 UTC [22429-1] LOG: database system was shut down at 2017-01-18 10:59:35 UTC
2017-01-18 10:59:36 UTC [22429-2] LOG: MultiXact member wraparound protections are now enabled
2017-01-18 10:59:36 UTC [22428-1] LOG: database system is ready to accept connections
2017-01-18 10:59:36 UTC [22433-1] LOG: autovacuum launcher started
2017-01-18 10:59:36 UTC [22435-1] [unknown]@[unknown] LOG: incomplete startup packet
2017-01-18 11:00:10 UTC [22466-1] user@db LOG: SSL error: decryption failed or bad record mac
2017-01-18 11:00:10 UTC [22466-2] user@db LOG: could not receive data from client: Connection reset by peer
(Note that the message about the incomplete startup packet is harmless)
Quoting "How do I use engines / connections / sessions with Python multiprocessing, or os.fork()?" with added emphasis:
The SQLAlchemy Engine object refers to a connection pool of existing database connections. So when this object is replicated to a child process, the goal is to ensure that no database connections are carried over.
and
However, for the case of a transaction-active Session or Connection being shared, there’s no automatic fix for this; an application needs to ensure a new child process only initiate new Connection objects and transactions, as well as ORM Session objects.
The issue stems from the forked child process inheriting the live global session, which is holding on to a Connection. When target calls init, it overwrites the global references to engine and session, decreasing their refcounts to 0 in the child and forcing them to be finalized. (If, one way or another, you created another reference to the inherited session in the child, you would prevent it from being cleaned up; but don't do that.) After main has joined and returns to business as usual, it tries to use the now potentially finalized, or otherwise out-of-sync, connection. As to why this causes an error only after some number of iterations, I'm not sure.
The only way to handle this situation using globals the way you do is to
Close all sessions
Call engine.dispose()
before forking. This will prevent connections from leaking to the child. For example:
def main():
    global session
    init()
    try:
        dummy = Dummy(value=1)
        session.add(dummy)
        session.commit()
        dummy_id = dummy.id
        # Return the Connection to the pool
        session.close()
        # Dispose of it!
        engine.dispose()
        # ...or call your cleanup() function, which does the same
        p = multiprocessing.Process(target=target, args=(dummy_id,))
        p.start()
        p.join()
        # Start a new session
        session = Session()
        dummy = session.query(Dummy).get(dummy_id)
        assert dummy.value == 2
    finally:
        cleanup()
Your second example does not trigger finalization in the child, and so it only seems to work, though it might be as broken as the first: it is still inheriting a copy of the session and its connection defined locally in main.
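As an aside that goes beyond this answer (which predates the feature): newer SQLAlchemy releases document a dedicated pattern for this situation, calling engine.dispose(close=False) in the child right after the fork, so the child discards the inherited pool without closing the parent's underlying sockets. A minimal sketch against the first example's inherited globals:

def target(id):
    # In the child: throw away inherited pooled connections without
    # touching the parent's sockets (requires SQLAlchemy 1.4.33+)
    engine.dispose(close=False)
    session = Session()
    try:
        dummy = session.query(Dummy).get(id)
        dummy.value += 1
        session.commit()
    finally:
        session.close()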
