I am fetching a lot of records from MongoDB, and along the way I get this error:
File "C:\database\models\mongodb.py", line 228, in __iter__
for result in self.results:
File "C:\Python27\Lib\site-packages\pymongo\cursor.py", line 814, in next
if len(self.__data) or self._refresh():
File "C:\Python27\Lib\site-packages\pymongo\cursor.py", line 776, in _refresh
limit, self.__id))
File "C:\Python27\Lib\site-packages\pymongo\cursor.py", line 720, in __send_message
self.__uuid_subtype)
File "C:\Python27\Lib\site-packages\pymongo\helpers.py", line 99, in _unpack_response
cursor_id)
pymongo.errors.OperationFailure: cursor id '866472135294727793' not valid at server
Exception KeyError: KeyError(38556896,) in <module 'threading' from 'C:\Python27\lib\threading.pyc'> ignored
What does this mean and how do I fix it? I don't know if it matters, but I did
use from gevent import monkey; monkey.patch_all() when I opened the connection.
When the cursor has been open for a long time with no operations on it, the server can time it out, which leads to this error.
You can set timeout=False in your find() query to turn the cursor timeout off.
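For illustration, a minimal sketch assuming the PyMongo 2.x API seen in the traceback (in PyMongo 3 and later the keyword is no_cursor_timeout=True); the query and the process() helper are hypothetical, and you have to close the cursor yourself once the timeout is disabled:
# Sketch: disable the server-side cursor timeout so long-running iteration does not lose the cursor.
cursor = collection.find({"some": "query"}, timeout=False)  # PyMongo 3+: no_cursor_timeout=True
try:
    for doc in cursor:
        process(doc)  # hypothetical per-document work
finally:
    cursor.close()  # the server keeps the cursor open until you close it explicitly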
Related
Inserting via a Debezium connector into a MySQL database brought up via a Docker container.
Querying works fine for some number of hours, but after that the same query throws the exception below.
export JAVA_HOME=/tmp/tests/artifacts/java-17/jdk-17; export PATH=$PATH:/tmp/tests/artifacts/java-17/jdk-17/bin; docker exec -i mysql_be1e6a mysql --user=demo --password=demo -D demo -e "select count(k) from test_cdc_f0bf84 where uuid = 'd1e5cd6d-8f7a-457c-b2ea-880c2be52f69'"
2023-01-02 16:27:43,812:ERROR: failed to execute query MySQL rows count by uuid:
Traceback (most recent call last):
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/env/lib/python3.11/site-packages/paramiko/channel.py", line 699, in recv
out = self.in_buffer.read(nbytes, self.timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/env/lib/python3.11/site-packages/paramiko/buffered_pipe.py", line 164, in read
raise PipeTimeout()
paramiko.buffered_pipe.PipeTimeout
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/suites/cdc/abstract.py", line 667, in try_query
res = query_function()
^^^^^^^^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/suites/cdc/test_cdc.py", line 635, in <lambda>
query = lambda: self.mysql_query(
^^^^^^^^^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/suites/cdc/abstract.py", line 544, in mysql_query
result = self.ssh.exec_on_host(host, [
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/main/connection.py", line 335, in exec_on_host
return self._exec_on_host(host, commands, fetch, timeout=timeout, limit_output=limit_output)[host]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/main/connection.py", line 321, in _exec_on_host
res = list(out)
^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/env/lib/python3.11/site-packages/paramiko/file.py", line 125, in __next__
line = self.readline()
^^^^^^^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/env/lib/python3.11/site-packages/paramiko/file.py", line 291, in readline
new_data = self._read(n)
^^^^^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/env/lib/python3.11/site-packages/paramiko/channel.py", line 1361, in _read
return self.channel.recv(size)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/env/lib/python3.11/site-packages/paramiko/channel.py", line 701, in recv
raise socket.timeout()
TimeoutError
After some time, I logged in to the machine manually and tried to read; it still reads fine. I am not sure what this issue means.
As explained, I tried querying the database via Python. I expected it to return the row count, which it did until a certain point, but after that it threw the timeout and socket errors above.
The default value for interactive_timeout and wait_timeout is 28800 seconds (8 hours). You can disable this behavior by setting these system variables to zero in your MySQL config.
Source: Configuring session timeouts
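Purely as an illustration (not part of the original answer), here is one way to inspect and change these timeouts from Python; the sketch assumes PyMySQL and a sufficiently privileged account, and the host/user/password are placeholders. The persistent change still belongs in your MySQL config as described above.
import pymysql

# Sketch: inspect the current timeouts and raise them globally (placeholders for credentials).
conn = pymysql.connect(host="localhost", user="root", password="secret")
with conn.cursor() as cur:
    cur.execute("SHOW VARIABLES LIKE '%_timeout'")
    print(cur.fetchall())                                  # includes wait_timeout and interactive_timeout
    cur.execute("SET GLOBAL wait_timeout = 86400")          # applies to sessions opened after this
    cur.execute("SET GLOBAL interactive_timeout = 86400")
conn.close()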
Recently, I have been very frequently encountering the error pymongo.errors.AutoReconnect. What is strange is that I never really encountered this error before and it just started happening for some reason.
Here is the full stack trace:
File "/home/ubuntu/.local/lib/python3.9/site-packages/pymongo/command_cursor.py", line 259, in next
doc = self._try_next(True)
File "/home/ubuntu/.local/lib/python3.9/site-packages/pymongo/command_cursor.py", line 270, in _try_next
self._refresh()
File "/home/ubuntu/.local/lib/python3.9/site-packages/pymongo/command_cursor.py", line 196, in _refresh
self.__send_message(
File "/home/ubuntu/.local/lib/python3.9/site-packages/pymongo/command_cursor.py", line 139, in __send_message
response = client._run_operation_with_response(
File "/home/ubuntu/.local/lib/python3.9/site-packages/pymongo/mongo_client.py", line 1342, in _run_operation_with_response
return self._retryable_read(
File "/home/ubuntu/.local/lib/python3.9/site-packages/pymongo/mongo_client.py", line 1464, in _retryable_read
return func(session, server, sock_info, slave_ok)
File "/home/ubuntu/.local/lib/python3.9/site-packages/pymongo/mongo_client.py", line 1334, in _cmd
return server.run_operation_with_response(
File "/home/ubuntu/.local/lib/python3.9/site-packages/pymongo/server.py", line 117, in run_operation_with_response
reply = sock_info.receive_message(request_id)
File "/home/ubuntu/.local/lib/python3.9/site-packages/pymongo/pool.py", line 646, in receive_message
self._raise_connection_failure(error)
File "/home/ubuntu/.local/lib/python3.9/site-packages/pymongo/pool.py", line 643, in receive_message
return receive_message(self.sock, request_id,
File "/home/ubuntu/.local/lib/python3.9/site-packages/pymongo/network.py", line 196, in receive_message
_receive_data_on_socket(sock, 16))
File "/home/ubuntu/.local/lib/python3.9/site-packages/pymongo/network.py", line 261, in _receive_data_on_socket
raise AutoReconnect("connection closed")
pymongo.errors.AutoReconnect: connection closed
This error is raised by various queries in various situations. I'm really not certain why this issue started when it wasn't a problem before.
I have tried to correct the issue in the one place where the error was most common by using exponential backoff:
retry_count = 0
while retry_count < 10:
    try:
        collection.remove({"index": {"$lt": (theTime - datetime.timedelta(minutes=60)).timestamp()}})
        break
    except:
        wait_t = 0.5 * pow(2, retry_count)
        self.myLogger.error(f"An error occurred while trying to delete entries. Retry : {retry_count}, Wait Time : {wait_t}")
        time.sleep(wait_t)
    finally:
        retry_count = retry_count + 1
So far, this appears to have worked. My question is as follows:
I have a lot of different MongoDB queries. It would be excessively tedious to track down each of them and wrap it in the exponential backoff block above. Is there a way to apply this whenever MongoDB raises this error, so that I don't have to add the code manually everywhere?
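One way to avoid copying that block around would be a retry decorator and wrapping the query functions with it; the following is only a sketch of the idea (the decorator and the example function are made up, not from the original code):
import functools
import time
from pymongo.errors import AutoReconnect

def retry_on_autoreconnect(max_retries=10, base_wait=0.5):
    # Sketch: retry a MongoDB operation with exponential backoff when AutoReconnect is raised.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for retry_count in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except AutoReconnect:
                    time.sleep(base_wait * pow(2, retry_count))
            return func(*args, **kwargs)  # final attempt; let the error propagate
        return wrapper
    return decorator

@retry_on_autoreconnect()
def delete_old_entries(collection, cutoff):
    # Hypothetical usage mirroring the snippet above.
    collection.remove({"index": {"$lt": cutoff}})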
I am facing the following issue:
We have configured failover DB nodes for our staging environment. When testing, the failover sometimes happens and Flask keeps connections open to nodes that are now read-only, so any write operation on them fails:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1277, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.7/site-packages/elasticapm/instrumentation/packages/dbapi2.py", line 210, in execute
return self.trace_sql(self.wrapped_.execute, sql, params)
File "/usr/local/lib/python3.7/site-packages/elasticapm/instrumentation/packages/dbapi2.py", line 244, in _trace_sql
result = method(sql, params)
psycopg2.errors.ReadOnlySqlTransaction: cannot execute DELETE in a read-only transaction
I'd like to detect this somehow and close the connection to these nodes, so that any write operation succeeds. Is this possible?
You can import that error class into your module, and then use it in a try-except block:
from psycopg2.errors import ReadOnlySqlTransaction

try:
    ...  # your main stuff here
except ReadOnlySqlTransaction:
    ...  # terminate the connection
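A slightly more concrete sketch for the SQLAlchemy setup in the traceback (conn, engine and delete_stmt are placeholders; disposing the engine so the pool reconnects to the new primary is one plausible reaction, not the only one):
from psycopg2.errors import ReadOnlySqlTransaction
from sqlalchemy import exc

try:
    conn.execute(delete_stmt)  # hypothetical write statement
except exc.DBAPIError as e:
    if isinstance(e.orig, ReadOnlySqlTransaction):
        conn.invalidate()   # drop this DBAPI connection from the pool
        engine.dispose()    # recycle the pool so later writes reconnect
    else:
        raise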
I am running Pyramid + Zope transaction manager + SQLAlchemy + PostgreSQL. On some occasions I have seen a StaleDataError on a Pyramid web application, in what should be a very trivial view that updates one row in the database. As the error happens outside the normal view boundary and is not repeatable, it is quite tricky to debug.
I guess this might have something to do with broken database connections or the transaction lifecycle. However, I don't know how to start debugging the system, so I am asking what could cause this and, furthermore, how one can pin down errors like this.
UPDATE statement on table 'xxx' expected to update 1 row(s); 0 were matched.
Stacktrace (most recent call last):
File "pyramid/tweens.py", line 20, in excview_tween
response = handler(request)
File "pyramid_tm/__init__.py", line 94, in tm_tween
reraise(*exc_info)
File "pyramid_tm/compat.py", line 15, in reraise
raise value
File "pyramid_tm/__init__.py", line 82, in tm_tween
manager.commit()
File "transaction/_manager.py", line 111, in commit
return self.get().commit()
File "transaction/_transaction.py", line 280, in commit
reraise(t, v, tb)
File "transaction/_compat.py", line 55, in reraise
raise value
File "transaction/_transaction.py", line 271, in commit
self._commitResources()
File "transaction/_transaction.py", line 417, in _commitResources
reraise(t, v, tb)
File "transaction/_compat.py", line 55, in reraise
raise value
File "transaction/_transaction.py", line 389, in _commitResources
rm.tpc_begin(self)
File "/srv/pyramid/trees/venv/lib/python3.4/site-packages/zope/sqlalchemy/datamanager.py", line 90, in tpc_begin
self.session.flush()
File "sqlalchemy/orm/session.py", line 2004, in flush
self._flush(objects)
File "sqlalchemy/orm/session.py", line 2122, in _flush
transaction.rollback(_capture_exception=True)
File "sqlalchemy/util/langhelpers.py", line 60, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
File "sqlalchemy/util/compat.py", line 182, in reraise
raise value
File "sqlalchemy/orm/session.py", line 2086, in _flush
flush_context.execute()
File "sqlalchemy/orm/unitofwork.py", line 373, in execute
rec.execute(self)
File "sqlalchemy/orm/unitofwork.py", line 532, in execute
uow
File "sqlalchemy/orm/persistence.py", line 170, in save_obj
mapper, table, update)
File "sqlalchemy/orm/persistence.py", line 692, in _emit_update_statements
(table.description, len(records), rows))
This is the most likely scenario:
You have two requests that each select an object and then try to update/delete it in the datastore, and you end up with a race condition.
Let's say you want to fetch an object and then update it.
If the transaction takes some time and you do not select the object with FOR UPDATE (thus locking the rows), then if the object gets deleted by the first request and the second transaction tries to issue an UPDATE on a row that is no longer present in the database, you can end up with this exception.
You can try some row locking to prevent this from happening: the subsequent transaction will wait for the first operation to finish before it gets executed.
http://docs.sqlalchemy.org/en/rel_1_0/orm/query.html?highlight=for_update#sqlalchemy.orm.query.Query.with_for_update
and
http://docs.sqlalchemy.org/en/rel_1_0/orm/query.html?highlight=with_lockmode#sqlalchemy.orm.query.Query.with_lockmode
These links describe some of the SQLAlchemy machinery you can use to resolve this.
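For example, a hedged sketch of SELECT ... FOR UPDATE through the ORM (the model, session and values are placeholders):
# Sketch: lock the row while reading it so a concurrent transaction has to wait.
obj = (
    session.query(MyModel)
    .with_for_update()              # emits SELECT ... FOR UPDATE
    .filter(MyModel.id == some_id)
    .one_or_none()
)
if obj is not None:
    obj.value = new_value           # the row stays locked until commit/rollback
    session.commit()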
Another option:
TL;DR: if you have first() in some places, you may need to remove it when you actually want to update multiple records.
db.session.query(xxx).filter_by(field=value).first()
This command expects the update to affect only one row, and it will if your table has only one record with field=value; this is particularly the case when the field is your ID.
However, if your ID is not unique, you might have multiple records with the same value.
In that case, you can update them all by removing first().
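In other words, something along these lines (a sketch; xxx, field and value come from the snippet above, new_value is a placeholder):
# Sketch: update every matching row in one statement instead of loading a single object with first().
db.session.query(xxx).filter_by(field=value).update({"field": new_value}, synchronize_session=False)
db.session.commit()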
BTW, use the following to debug your SQL queries (which wouldn't have helped this time...)
import logging
logging.basicConfig()
logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)
I am using the pandas.read_sql function with a Hive connection to extract some really large data. I have a script like this:
df = pd.read_sql(query_big, hive_connection)
df2 = pd.read_sql(query_simple, hive_connection)
The big query takes a long time, and after it is executed, Python returns the following error when trying to execute the second line:
raise NotSupportedError("Hive does not have transactions") # pragma: no cover
It seems there is something wrong with the connection.
Moreover, if I replace the second line with multiprocessing.Manager().Queue(), it returns the following error:
File "/usr/lib64/python3.6/multiprocessing/managers.py", line 662, in temp
token, exp = self._create(typeid, *args, **kwds)
File "/usr/lib64/python3.6/multiprocessing/managers.py", line 554, in _create
conn = self._Client(self._address, authkey=self._authkey)
File "/usr/lib64/python3.6/multiprocessing/connection.py", line 493, in Client
answer_challenge(c, authkey)
File "/usr/lib64/python3.6/multiprocessing/connection.py", line 732, in answer_challenge
message = connection.recv_bytes(256) # reject large message
File "/usr/lib64/python3.6/multiprocessing/connection.py", line 216, in recv_bytes
buf = self._recv_bytes(maxlength)
File "/usr/lib64/python3.6/multiprocessing/connection.py", line 407, in _recv_bytes
buf = self._recv(4)
File "/usr/lib64/python3.6/multiprocessing/connection.py", line 383, in _recv
raise EOFError
EOFError
It seems this kind of error is related to the exit handling being messed up in connection.py. Moreover, when I changed the first query to extract a smaller amount of data that doesn't take as long, everything works fine. So I assume that because the first query takes too long to execute, something is terminated improperly, which caused these two errors; they are very different in nature, but both are related to broken connections.
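If it helps to illustrate what I would try next (only a guess, assuming the connection comes from PyHive; the host, port and username are placeholders): open a fresh connection for the second query instead of reusing the one that the long query may have left in a bad state.
import pandas as pd
from pyhive import hive

# Sketch: use a fresh connection per query so a connection broken by the long query is not reused.
conn = hive.Connection(host="hive-host", port=10000, username="user")
df = pd.read_sql(query_big, conn)
conn.close()

conn = hive.Connection(host="hive-host", port=10000, username="user")
df2 = pd.read_sql(query_simple, conn)
conn.close()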