I'm building a web app with Tornado + SQLAlchemy, and seemingly at random I get this error:
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1024, in _handle_dbapi_exception
exc_info
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 187, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=exc_value)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 182, in reraise
raise value.with_traceback(tb)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 822, in _execute_context
conn = self._revalidate_connection()
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 239, in _revalidate_connection
"Can't reconnect until invalid "
sqlalchemy.exc.StatementError: Can't reconnect until invalid transaction is rolled back
I can't figure out how to solve this. I've wrapped every db.commit in a
try:
    self.db.commit()
except Exception:
    self.db.rollback()
Here's my Application class:
class Application(tornado.web.Application):
    [...]
    engine = create_engine(options.db_path, convert_unicode=True, echo=options.debug)
    models.init_db(engine)
    self.db = scoped_session(sessionmaker(bind=engine))
    tornado.web.Application.__init__(self, handlers, **settings)
but nothing changed.
What is the best way to configure SQLAlchemy and Tornado for a web app, the way one would with MySQL + PHP?
My approach is to roll back in on_finish; add this to your BaseHandler:
def on_finish(self):
    if self.get_status() == 500:
        self.db_session.rollback()
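For context, here's a minimal sketch of a complete BaseHandler built around that idea, assuming the Application above exposes its scoped_session as self.db; the remove() call is an extra step (not in the original snippet) that gives each request a fresh session:

import tornado.web

class BaseHandler(tornado.web.RequestHandler):
    @property
    def db_session(self):
        # the scoped_session created in Application.__init__
        return self.application.db

    def on_finish(self):
        # a failed request can leave the session in an invalid
        # transaction; roll it back so the connection is usable again
        if self.get_status() == 500:
            self.db_session.rollback()
        # release the session (and its connection) back to the pool
        self.db_session.remove()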
I remember having the same issues a while ago. There seemed to be some strange behavior related to connection pooling, and disabling pooling fixed it.
Not the best idea in general, but it worked.
Try passing poolclass=NullPool to create_engine
...
from sqlalchemy.pool import NullPool
...
engine = create_engine(options.db_path, convert_unicode=True, echo=options.debug, poolclass=NullPool)
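If you would rather keep pooling, newer SQLAlchemy releases (1.2+) also support pool_pre_ping, which tests each connection on checkout and transparently replaces stale ones. A sketch, assuming your SQLAlchemy version is recent enough:

engine = create_engine(
    options.db_path,
    convert_unicode=True,
    echo=options.debug,
    pool_pre_ping=True,  # emit a lightweight ping before each checkout
)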
I have created a Cloud SQL instance and a new database. However, I can't seem to connect to the database using the Cloud SQL Python Connector. I have followed the sample code and steps in the documentation, but it still fails.
Error:
Traceback (most recent call last):
File "c:\Users\my_name\my_path\gl_cloudsql_mysql_fx.py", line 149, in <module>
print(bnm_data_db.isTableExist('MY_TABLE'))
File "c:\Users\my_name\my_path\gl_cloudsql_mysql_fx.py", line 81, in isTableExist
with self.__pool.connect() as db_conn:
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\engine\base.py", line 3245, in connect
return self._connection_cls(self)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\engine\base.py", line 145, in __init__
self._dbapi_connection = engine.raw_connection()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\engine\base.py", line 3269, in raw_connection
return self.pool.connect()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 455, in connect
return _ConnectionFairy._checkout(self)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 1270, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 719, in checkout
rec = pool._do_get()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\impl.py", line 168, in _do_get
with util.safe_reraise():
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\util\langhelpers.py", line 147, in __exit__
raise exc_value.with_traceback(exc_tb)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\impl.py", line 166, in _do_get
return self._create_connection()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 396, in _create_connection
return _ConnectionRecord(self)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 681, in __init__
self.__connect()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 905, in __connect
with util.safe_reraise():
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\util\langhelpers.py", line 147, in __exit__
raise exc_value.with_traceback(exc_tb)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 901, in __connect
self.dbapi_connection = connection = pool._invoke_creator(self)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\sqlalchemy\pool\base.py", line 368, in <lambda>
return lambda rec: creator_fn()
File "c:\Users\my_name\my_path\gl_cloudsql_mysql_fx.py", line 27, in getconn
conn: pymysql.connections.Connection = self.__connector.connect(
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\google\cloud\sql\connector\connector.py", line 154, in connect
return connect_task.result()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\concurrent\futures\_base.py", line 458, in result
return self.__get_result()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\concurrent\futures\_base.py", line 403, in __get_result
raise self._exception
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\google\cloud\sql\connector\connector.py", line 261, in connect_async
return await asyncio.wait_for(get_connection(), timeout)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\asyncio\tasks.py", line 445, in wait_for
return fut.result()
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\google\cloud\sql\connector\connector.py", line 257, in get_connection
return await self._loop.run_in_executor(None, connect_partial)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\site-packages\google\cloud\sql\connector\pymysql.py", line 54, in connect
socket.create_connection((ip_address, SERVER_PROXY_PORT)),
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\socket.py", line 845, in create_connection
raise err
File "C:\Users\my_name\Anaconda3\envs\composer-bis-gl-env\lib\socket.py", line 833, in create_connection
sock.connect(sa)
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it
I run this Python code locally on my machine (in a Conda environment).
Here's my code:
from google.cloud.sql.connector import Connector
import sqlalchemy
import pymysql

connector = Connector()

# function to return the database connection
def getconn() -> pymysql.connections.Connection:
    conn: pymysql.connections.Connection = connector.connect(
        "my_project:my_location:my_instance",
        "pymysql",
        user="user",
        password="password",
        db="my_database",
    )
    return conn

# create connection pool
pool = sqlalchemy.create_engine(
    "mysql+pymysql://",
    creator=getconn,
)

# statement to check for tables
check_table_statement = sqlalchemy.text(
    "SELECT count(*) from information_schema.tables",
)

with pool.connect() as db_conn:
    result = db_conn.execute(check_table_statement)
    print(result)

connector.close()
I have tried adding my IP address to the Cloud SQL instance. However, I am not sure which address is the correct one; I have checked my IP address in multiple ways, and all of them are different:
Google
whatsmyip.org
cmd command nslookup myip.opendns.com resolver1.opendns.com
No luck with any of them.
Note:
I did not check the Allow only SSL connections box. So, I reckon, there is no need to include the CA certificate.
The CloudSQL instance is using Public IP.
Update:
I have added full error.
To solve this, check the following:
Make sure you have enabled Public IP on the Cloud SQL instance.
Then, create a user with allowed hosts set to % (all IPs).
Next, ensure that a database named my_database exists by clicking the Databases button in the left navigation of the Cloud SQL dashboard.
Ensure that you are including the right service credentials in your script, with the Cloud SQL Client role.
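On the last point: the Connector picks up Application Default Credentials, so one way to make sure the right service account (one holding the Cloud SQL Client role) is used is to point ADC at its key file before constructing the Connector. A sketch, with a hypothetical key path:

import os

# hypothetical path to a service-account key with the Cloud SQL Client role
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = r"C:\path\to\service-account-key.json"

from google.cloud.sql.connector import Connector

connector = Connector()  # now authenticates as that service account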
You should be able to connect. Do let me know if you face any issues, will update the answer accordingly :)
A couple of things to try in order to resolve the ConnectionRefusedError:
Make sure the Cloud SQL Admin API is enabled within your Google Cloud Project.
Make sure the IAM principal you are using for authorization within your environment has the Cloud SQL Client Role granted to it.
Double check that your instance connection name is correct (first argument being passed to the .connect method) by verifying it on the Cloud SQL instance overview page.
If you are connecting to a Cloud SQL instance over private IP, make sure you are connecting from a machine within the VPC network and that your code is updated for private IP connections:
from google.cloud.sql.connector import Connector, IPTypes
import sqlalchemy
import pymysql

connector = Connector()

# function to return the database connection
def getconn() -> pymysql.connections.Connection:
    conn: pymysql.connections.Connection = connector.connect(
        "my_project:my_location:my_instance",
        "pymysql",
        user="user",
        password="password",
        db="my_database",
        ip_type=IPTypes.PRIVATE,
    )
    return conn
Let me know if any of these help resolve your connection, otherwise I can take a deeper look into it for you :)
I am facing the following issue:
We have configured failover DB nodes for our staging environment. When testing, a failover sometimes happens and Flask keeps open connections to nodes which are now read-only; any write operation then fails:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1277, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.7/site-packages/elasticapm/instrumentation/packages/dbapi2.py", line 210, in execute
return self.trace_sql(self.wrapped_.execute, sql, params)
File "/usr/local/lib/python3.7/site-packages/elasticapm/instrumentation/packages/dbapi2.py", line 244, in _trace_sql
result = method(sql, params)
psycopg2.errors.ReadOnlySqlTransaction: cannot execute DELETE in a read-only transaction
I'd like to detect this somehow and close the connection to these nodes, so that any write operation succeeds. Is this possible?
You can import that error class into your module, and then use it in a try-except block:
from psycopg2.errors import ReadOnlySqlTransaction
try:
    ...  # your main stuff here
except ReadOnlySqlTransaction:
    ...  # terminate the connection
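When the statement runs through SQLAlchemy (as in your Flask setup), the psycopg2 error arrives wrapped in a sqlalchemy.exc.DBAPIError, with the original exception available as its .orig attribute. Here's a sketch of detecting the condition and dropping the pooled connections so that new ones go to the current primary; the engine URL and DELETE statement are hypothetical stand-ins for your own:

from psycopg2.errors import ReadOnlySqlTransaction
import sqlalchemy
from sqlalchemy.exc import DBAPIError
from sqlalchemy.orm import Session

# hypothetical DSN; replace with your staging database URL
engine = sqlalchemy.create_engine("postgresql://user:password@staging-db/app")

def delete_stale_rows(session: Session) -> None:
    try:
        # hypothetical write; any INSERT/UPDATE/DELETE behaves the same
        session.execute(sqlalchemy.text("DELETE FROM items WHERE stale"))
        session.commit()
    except DBAPIError as exc:
        session.rollback()
        if isinstance(exc.orig, ReadOnlySqlTransaction):
            # drop every pooled connection: those pointing at the now
            # read-only node are discarded, and the next checkout opens
            # a fresh connection to the current primary
            engine.dispose()
        raise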
import redis
if redis_client.get(news_digest) is None:
    num_of_news += 1
    news['digest'] = news_digest
Hey guys, I'm following an online course and I'm not familiar with Redis at all, so I literally have no idea what's going on. It seems to raise a ConnectionError, but I don't know how to fix it. Thank you guys so much.
if redis_client.get(news_digest) is None:
File "/usr/local/lib/python2.7/site-packages/redis/client.py", line 1332, in get
return self.execute_command('GET', name)
File "/usr/local/lib/python2.7/site-packages/redis/client.py", line 836, in execute_command
conn = self.connection or pool.get_connection(command_name, **options)
File "/usr/local/lib/python2.7/site-packages/redis/connection.py", line 1073, in get_connection
connection.connect()
File "/usr/local/lib/python2.7/site-packages/redis/connection.py", line 544, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 61 connecting to localhost:6379. Connection refused.
I'm running a script to check whether pymongo exceptions are successfully caught, but so far all I get are uncaught tracebacks in PyCharm. I turned the mongo daemon off to test this out. The relevant portion of the script is as follows, with the traceback after it:
import pymongo
from pymongo import MongoClient
from pymongo import errors
import os
from os.path import basename
def make_collection_name(path):
    filename = os.path.splitext(basename(path))[0]
    collection_name = filename
    if collection_name in db.collection_names():
        db[collection_name].drop()
    return collection_name

if __name__ == '__main__':
    try:
        client = MongoClient()
    except pymongo.errors.ConnectionFailure as e:
        print("Could not connect to MongoDB: %s" % e)
    except pymongo.errors.ServerSelectionTimeoutError as e:
        print("Could not connect to MongoDB: %s" % e)

    filepath = **hidden filepath**
    db = client.TESTDB
    collection_name = make_collection_name(filepath)
Instead of the exceptions being handled, I get the following traceback:
Traceback (most recent call last):
File "**hidden path**", line 278, in <module>
collection_name = make_collection_name(filepath)
File "**hidden path**", line 192, in make_collection_name
if collection_name in db.collection_names():
File "C:\Python34\lib\site-packages\pymongo\database.py", line 530, in collection_names
ReadPreference.PRIMARY) as (sock_info, slave_okay):
File "C:\Python34\lib\contextlib.py", line 59, in __enter__
return next(self.gen)
File "C:\Python34\lib\site-packages\pymongo\mongo_client.py", line 859, in _socket_for_reads
with self._get_socket(read_preference) as sock_info:
File "C:\Python34\lib\contextlib.py", line 59, in __enter__
return next(self.gen)
File "C:\Python34\lib\site-packages\pymongo\mongo_client.py", line 823, in _get_socket
server = self._get_topology().select_server(selector)
File "C:\Python34\lib\site-packages\pymongo\topology.py", line 214, in select_server
address))
File "C:\Python34\lib\site-packages\pymongo\topology.py", line 189, in select_servers
self._error_message(selector))
pymongo.errors.ServerSelectionTimeoutError: localhost:27017: [WinError 10061] No connection could be made because the target machine actively refused it
Process finished with exit code 1
Beginning in PyMongo 3 (not Python 3, PyMongo 3!), the MongoClient constructor no longer blocks while trying to connect to the MongoDB server. Instead, the first actual operation you perform will wait until the connection completes, and then throw an exception if the connection fails.
http://api.mongodb.com/python/current/migrate-to-pymongo3.html#mongoclient-connects-asynchronously
As you can see from your stack trace, the exception is thrown from db.collection_names(), not from MongoClient(). So, wrap your call to make_collection_name in try / except, not the MongoClient call.
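Applied to the script above, that looks roughly like this (serverSelectionTimeoutMS is optional and just shortens the default 30-second server-selection wait):

from pymongo import MongoClient
from pymongo import errors

client = MongoClient(serverSelectionTimeoutMS=5000)  # does not connect yet
db = client.TESTDB

try:
    # the first real operation triggers server selection and may fail
    collection_name = make_collection_name(filepath)
except errors.ServerSelectionTimeoutError as e:
    print("Could not connect to MongoDB: %s" % e)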
I'm using PostgreSQL and SQLAlchemy in a project that consists of a main process which launches child processes. All of these processes access the database via SQLAlchemy.
I'm experiencing repeatable connection failures: the first few child processes work correctly, but after a while a connection error is raised. Here's a minimal working example:
import multiprocessing

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import sessionmaker

DB_URL = 'postgresql://user:password@localhost/database'

Base = declarative_base()

class Dummy(Base):
    __tablename__ = 'dummies'
    id = Column(Integer, primary_key=True)
    value = Column(Integer)

engine = None
Session = None
session = None

def init():
    global engine, Session, session
    engine = create_engine(DB_URL)
    Base.metadata.create_all(engine)
    Session = sessionmaker(bind=engine)
    session = Session()

def cleanup():
    session.close()
    engine.dispose()

def target(id):
    init()
    try:
        dummy = session.query(Dummy).get(id)
        dummy.value += 1
        session.add(dummy)
        session.commit()
    finally:
        cleanup()

def main():
    init()
    try:
        dummy = Dummy(value=1)
        session.add(dummy)
        session.commit()
        p = multiprocessing.Process(target=target, args=(dummy.id,))
        p.start()
        p.join()
        session.refresh(dummy)
        assert dummy.value == 2
    finally:
        cleanup()

if __name__ == '__main__':
    i = 1
    while True:
        print(i)
        main()
        i += 1
On my system (PostgreSQL 9.6, SQLAlchemy 1.1.4, psycopg2 2.6.2, Python 2.7, Ubuntu 14.04) this yields
1
2
3
4
5
6
7
8
9
10
11
Traceback (most recent call last):
File "./fork_test.py", line 64, in <module>
main()
File "./fork_test.py", line 55, in main
session.refresh(dummy)
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 1422, in refresh
only_load_props=attribute_names) is None:
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 223, in load_on_ident
return q.one()
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2756, in one
ret = self.one_or_none()
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2726, in one_or_none
ret = list(self)
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2797, in __iter__
return self._execute_and_instances(context)
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2820, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 945, in execute
return meth(self, multiparams, params)
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 263, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1053, in _execute_clauseelement
compiled_sql, distilled_params
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1189, in _execute_context
context)
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1393, in _handle_dbapi_exception
exc_info
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 202, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
context)
File "/home/vagrant/latest-sqlalchemy/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 469, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
[SQL: 'SELECT dummies.id AS dummies_id, dummies.value AS dummies_value \nFROM dummies \nWHERE dummies.id = %(param_1)s'] [parameters: {'param_1': 11074}]
This is repeatable and always crashes at the same iteration.
I'm creating a new engine and session after the fork as recommended by the SQLAlchemy documentation and elsewhere. Interestingly, the following slightly different approach does not crash:
import contextlib
import multiprocessing

import sqlalchemy
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import sessionmaker

DB_URL = 'postgresql://user:password@localhost/database'

Base = declarative_base()

class Dummy(Base):
    __tablename__ = 'dummies'
    id = Column(Integer, primary_key=True)
    value = Column(Integer)

@contextlib.contextmanager
def get_session():
    engine = sqlalchemy.create_engine(DB_URL)
    Base.metadata.create_all(engine)
    Session = sessionmaker(bind=engine)
    session = Session()
    try:
        yield session
    finally:
        session.close()
        engine.dispose()

def target(id):
    with get_session() as session:
        dummy = session.query(Dummy).get(id)
        dummy.value += 1
        session.add(dummy)
        session.commit()

def main():
    with get_session() as session:
        dummy = Dummy(value=1)
        session.add(dummy)
        session.commit()
        p = multiprocessing.Process(target=target, args=(dummy.id,))
        p.start()
        p.join()
        session.refresh(dummy)
        assert dummy.value == 2

if __name__ == '__main__':
    i = 1
    while True:
        print(i)
        main()
        i += 1
Since the original code is more complex and cannot simply be switched over to the latter version, I'd like to understand why one of these works and the other doesn't.
The only obvious difference is that the crashing code uses global variables for the engine and the session; these are shared via copy-on-write with the child processes. However, since I reset them directly after the fork, I don't understand how that could be a problem.
Update
I re-ran the two code pieces with the latest SQLAlchemy (1.1.5) using both Python 2.7 and Python 3.4. On both the results are basically as described above. However, on Python 2.7 the crash of the first code piece now happens in the 13th iteration (reproducibly) while on 3.4 it already happens in the third iteration (also reproducibly). The second code piece runs without problems on both versions. Here's the traceback from 3.4:
1
2
3
Traceback (most recent call last):
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
context)
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/engine/default.py", line 470, in do_execute
cursor.execute(statement, parameters)
psycopg2.OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "fork_test.py", line 64, in <module>
main()
File "fork_test.py", line 55, in main
session.refresh(dummy)
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/orm/session.py", line 1424, in refresh
only_load_props=attribute_names) is None:
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/orm/loading.py", line 223, in load_on_ident
return q.one()
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/orm/query.py", line 2749, in one
ret = self.one_or_none()
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/orm/query.py", line 2719, in one_or_none
ret = list(self)
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/orm/query.py", line 2790, in __iter__
return self._execute_and_instances(context)
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/orm/query.py", line 2813, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 945, in execute
return meth(self, multiparams, params)
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/sql/elements.py", line 263, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1053, in _execute_clauseelement
compiled_sql, distilled_params
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1189, in _execute_context
context)
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1393, in _handle_dbapi_exception
exc_info
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/util/compat.py", line 186, in reraise
raise value.with_traceback(tb)
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
context)
File "/home/vagrant/latest-sqlalchemy-3.4/lib/python3.4/site-packages/sqlalchemy/engine/default.py", line 470, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
[SQL: 'SELECT dummies.id AS dummies_id, dummies.value AS dummies_value \nFROM dummies \nWHERE dummies.id = %(param_1)s'] [parameters: {'param_1': 3397}]
Here's the PostgreSQL log (it's the same for 2.7 and 3.4):
2017-01-18 10:59:36 UTC [22429-1] LOG: database system was shut down at 2017-01-18 10:59:35 UTC
2017-01-18 10:59:36 UTC [22429-2] LOG: MultiXact member wraparound protections are now enabled
2017-01-18 10:59:36 UTC [22428-1] LOG: database system is ready to accept connections
2017-01-18 10:59:36 UTC [22433-1] LOG: autovacuum launcher started
2017-01-18 10:59:36 UTC [22435-1] [unknown]@[unknown] LOG: incomplete startup packet
2017-01-18 11:00:10 UTC [22466-1] user@db LOG: SSL error: decryption failed or bad record mac
2017-01-18 11:00:10 UTC [22466-2] user@db LOG: could not receive data from client: Connection reset by peer
(Note that the message about the incomplete startup packet is harmless)
Quoting "How do I use engines / connections / sessions with Python multiprocessing, or os.fork()?" with added emphasis:
The SQLAlchemy Engine object refers to a connection pool of existing database connections. So when this object is replicated to a child process, the goal is to ensure that no database connections are carried over.
and
However, for the case of a transaction-active Session or Connection being shared, there’s no automatic fix for this; an application needs to ensure a new child process only initiate new Connection objects and transactions, as well as ORM Session objects.
The issue stems from the forked child process inheriting the live global session, which is holding on to a Connection. When target calls init, it overwrites the global references to engine and session, dropping their refcounts to 0 in the child and forcing them to be finalized. (If you were to create another reference to the inherited session in the child, you would prevent that cleanup, but don't do that.) After main has joined the child and returns to business as usual, it tries to use the now potentially finalized, or otherwise out-of-sync, connection. As to why this causes an error only after some number of iterations, I'm not sure.
The only way to handle this situation while using globals the way you do is to close all sessions and call engine.dispose() before forking. This will prevent connections from leaking to the child. For example:
def main():
    global session
    init()
    try:
        dummy = Dummy(value=1)
        session.add(dummy)
        session.commit()
        dummy_id = dummy.id
        # Return the Connection to the pool
        session.close()
        # Dispose of it!
        engine.dispose()
        # ...or call your cleanup() function, which does the same
        p = multiprocessing.Process(target=target, args=(dummy_id,))
        p.start()
        p.join()
        # Start a new session
        session = Session()
        dummy = session.query(Dummy).get(dummy_id)
        assert dummy.value == 2
    finally:
        cleanup()
Your second example does not trigger finalization in the child, and so it only seems to work; it might be just as broken as the first, since it still inherits a copy of the session and its connection defined locally in main.
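By the same reasoning, a safer variant of the second example (a sketch, not from the original code) would also release the parent's connection before forking:

def main():
    with get_session() as session:
        dummy = Dummy(value=1)
        session.add(dummy)
        session.commit()
        dummy_id = dummy.id
        # give the connection back and drop the pool before forking,
        # so the child cannot inherit a live socket
        session.close()
        session.get_bind().dispose()
        p = multiprocessing.Process(target=target, args=(dummy_id,))
        p.start()
        p.join()
        # the session transparently acquires a fresh connection here;
        # re-query because close() detached the old instance
        dummy = session.query(Dummy).get(dummy_id)
        assert dummy.value == 2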