A MySQL datastore (backed by a connection pool) is initialized in a Flask (connexion) service on startup.
service.app.datastore = DatastoreMySQL(service.config)
from contextlib import closing
from mysql.connector import pooling

class DatastoreMySQL(Datastore):
    def __init__(self, config):
        ...
        self.connection_pool = pooling.MySQLConnectionPool(
            database=self.database,
            host=self.hostname,
            username=self.username,
            password=self.password,
            pool_name="pool_name",
            pool_size=self.pool_size,
            autocommit=True
        )
    def exec_query(self, query, params=None):
        try:
            connection = self.connection_pool.get_connection()
            connection.ping(reconnect=True)
            with closing(connection.cursor(dictionary=True, buffered=True)) as cursor:
                if params:
                    cursor.execute(query, params)
                else:
                    cursor.execute(query)
        finally:
            connection.close()
The view functions use the database by passing the DB reference from current_app.
def new():
    do_something_in_db(current_app.datastore, request.get_json())

def do_something_in_db(db, data):
    db.create_new_item(data)
    ...
However, a background process (run with APScheduler) must also call do_something_in_db(); when it is passed a datastore reference, a mysql.connector.errors.OperationalError is raised.
My understanding is that this error typically has one of two causes:
The server timed out and closed the connection. However, in this service exec_query() obtains a connection and executes the query immediately, so there should be no reason for it to time out. The monitor is also initialized at service startup with a datastore reference, but I am not sure how that could time out either, given that a connection is fetched from the pool on every exec_query() call.
The server dropped an incorrect or too-large packet. However, no unusually large payloads are involved here; the process is run by a local background scheduler.
The error in full:
Job "Monitor.monitor_running_queries" raised an exception
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/mysql/connector/connection_cext.py", line 509, in cmd_query
raw_as_string=raw_as_string)
_mysql_connector.MySQLInterfaceError: MySQL server has gone away
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/k8s-service/lib/datastore/datastore_mysql.py", line 88, in exec_query
cursor.execute(query, params)
File "/usr/local/lib/python3.6/site-packages/mysql/connector/cursor_cext.py", line 276, in execute
raw_as_string=self._raw_as_string)
File "/usr/local/lib/python3.6/site-packages/mysql/connector/connection_cext.py", line 512, in cmd_query
sqlstate=exc.sqlstate)
mysql.connector.errors.DatabaseError: 2006 (HY000): MySQL server has gone away
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/apscheduler/executors/base.py", line 125, in run_job
retval = job.func(*job.args, **job.kwargs)
File "/opt/k8s-service/lib/background.py", line 60, in monitor_running_queries
self.handle_process_state(query.id, datastore, hive)
File "/opt/k8s-service/lib/background.py", line 66, in handle_process_state
query = datastore.get_item(query_id)
File "/opt/k8s-service/lib/datastore/datastore.py", line 48, in get_item
return_results=True)
File "/opt/k8s-service/lib/datastore/datastore.py", line 97, in exec_query
connection.close()
File "/usr/local/lib/python3.6/site-packages/mysql/connector/pooling.py", line 131, in close
cnx.reset_session()
File "/usr/local/lib/python3.6/site-packages/mysql/connector/connection_cext.py", line 768, in reset_session
raise errors.OperationalError("MySQL Connection not available.")
mysql.connector.errors.OperationalError: MySQL Connection not available.
Related
I am inserting rows via a Debezium connector into a MySQL database brought up in a Docker container.
Querying works fine for a number of hours, but after that the same query throws the exception below.
export JAVA_HOME=/tmp/tests/artifacts/java-17/jdk-17; export PATH=$PATH:/tmp/tests/artifacts/java-17/jdk-17/bin; docker exec -i mysql_be1e6a mysql --user=demo --password=demo -D demo -e "select count(k) from test_cdc_f0bf84 where uuid = 'd1e5cd6d-8f7a-457c-b2ea-880c2be52f69'"
2023-01-02 16:27:43,812:ERROR: failed to execute query MySQL rows count by uuid:
Traceback (most recent call last):
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/env/lib/python3.11/site-packages/paramiko/channel.py", line 699, in recv
out = self.in_buffer.read(nbytes, self.timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/env/lib/python3.11/site-packages/paramiko/buffered_pipe.py", line 164, in read
raise PipeTimeout()
paramiko.buffered_pipe.PipeTimeout
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/suites/cdc/abstract.py", line 667, in try_query
res = query_function()
^^^^^^^^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/suites/cdc/test_cdc.py", line 635, in <lambda>
query = lambda: self.mysql_query(
^^^^^^^^^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/suites/cdc/abstract.py", line 544, in mysql_query
result = self.ssh.exec_on_host(host, [
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/main/connection.py", line 335, in exec_on_host
return self._exec_on_host(host, commands, fetch, timeout=timeout, limit_output=limit_output)[host]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/main/connection.py", line 321, in _exec_on_host
res = list(out)
^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/env/lib/python3.11/site-packages/paramiko/file.py", line 125, in __next__
line = self.readline()
^^^^^^^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/env/lib/python3.11/site-packages/paramiko/file.py", line 291, in readline
new_data = self._read(n)
^^^^^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/env/lib/python3.11/site-packages/paramiko/channel.py", line 1361, in _read
return self.channel.recv(size)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/workspace/stress_tests/run_test_with_universe/src/env/lib/python3.11/site-packages/paramiko/channel.py", line 701, in recv
raise socket.timeout()
TimeoutError
After some time I logged in to the machine manually and ran the same query; it still reads fine. I am not sure what this issue means.
As explained, I am querying the database from Python. I expected it to return the row count, which it did until a certain point, but after that it threw the timeout and socket errors above.
Querying works fine for a number of hours, but after that the same query throws the exception.
The default value for interactive_timeout and wait_timeout is 28800 seconds (8 hours). You can change this behavior by adjusting these system variables in your MySQL configuration.
source: Configuring session timeouts
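A quick way to confirm what the running server is actually using is to read the variables from a client session. This is a minimal sketch with placeholder credentials; the persistent fix is still setting the variables in the MySQL configuration file:
import mysql.connector

# Placeholder credentials; adjust for your environment.
cnx = mysql.connector.connect(host="localhost", user="root", password="secret")
cursor = cnx.cursor()

# Server-side idle timeouts (in seconds) after which MySQL closes a connection
# that has been sitting unused, for example in a client-side pool.
cursor.execute("SHOW VARIABLES WHERE Variable_name IN ('interactive_timeout', 'wait_timeout')")
for name, value in cursor:
    print(name, value)  # 28800 on a stock server

cursor.close()
cnx.close()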
I am facing the following issue:
We have configured failover DB nodes for our staging environment. When testing, sometimes the failover happens and Flask keeps open connections to some nodes which are now read-only -- any write operation then fails:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1277, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.7/site-packages/elasticapm/instrumentation/packages/dbapi2.py", line 210, in execute
return self.trace_sql(self.wrapped_.execute, sql, params)
File "/usr/local/lib/python3.7/site-packages/elasticapm/instrumentation/packages/dbapi2.py", line 244, in _trace_sql
result = method(sql, params)
psycopg2.errors.ReadOnlySqlTransaction: cannot execute DELETE in a read-only transaction
I'd like to detect this somehow and close the connection to these nodes, so that any write operation succeeds. Is this possible?
You can import that error class into your module, and then use it in a try-except block:
from psycopg2.errors import ReadOnlySqlTransaction

try:
    ...  # your main stuff here
except ReadOnlySqlTransaction:
    ...  # terminate the connection
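Since the traceback above goes through SQLAlchemy, the psycopg2 error usually arrives wrapped in a SQLAlchemy DBAPIError, with the original driver exception on its .orig attribute. A rough sketch of detecting it and dropping the pooled connections so the next checkout reconnects (the engine URL and query are placeholders):
from psycopg2.errors import ReadOnlySqlTransaction
from sqlalchemy import create_engine, text
from sqlalchemy.exc import DBAPIError

engine = create_engine("postgresql+psycopg2://user:pass@db-host/app")  # placeholder URL

def delete_expired_rows():
    try:
        with engine.begin() as conn:
            conn.execute(text("DELETE FROM sessions WHERE expired"))  # placeholder query
    except DBAPIError as exc:
        if isinstance(exc.orig, ReadOnlySqlTransaction):
            # Discard every pooled connection; the next checkout opens a fresh
            # connection, which should reach the writable node after failover.
            engine.dispose()
        raise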
I receive the following output:
Traceback (most recent call last):
File "/home/ec2-user/env/lib64/python3.7/site-packages/redis/connection.py", line 1192, in get_connection
raise ConnectionError('Connection has data')
redis.exceptions.ConnectionError: Connection has data
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ec2-user/env/lib64/python3.7/site-packages/eventlet/hubs/hub.py", line 457, in fire_timers
timer()
File "/home/ec2-user/env/lib64/python3.7/site-packages/eventlet/hubs/timer.py", line 58, in __call__
cb(*args, **kw)
File "/home/ec2-user/env/lib64/python3.7/site-packages/eventlet/greenthread.py", line 214, in main
result = function(*args, **kwargs)
File "crawler.py", line 53, in fetch_listing
url = dequeue_url()
File "/home/ec2-user/WebCrawler/helpers.py", line 109, in dequeue_url
return redis.spop("listing_url_queue")
File "/home/ec2-user/env/lib64/python3.7/site-packages/redis/client.py", line 2255, in spop
return self.execute_command('SPOP', name, *args)
File "/home/ec2-user/env/lib64/python3.7/site-packages/redis/client.py", line 875, in execute_command
conn = self.connection or pool.get_connection(command_name, **options)
File "/home/ec2-user/env/lib64/python3.7/site-packages/redis/connection.py", line 1197, in get_connection
raise ConnectionError('Connection not ready')
redis.exceptions.ConnectionError: Connection not ready
I couldn't find any existing issue related to this particular error. I emptied/flushed all Redis databases, so there should be no data there. I assume it has something to do with eventlet and monkey patching, but even when I put the following code right at the beginning of the file, the error still appears.
import eventlet
eventlet.monkey_patch()
What does this error mean?
Finally, I came up with the answer to my problem.
When connecting to Redis from Python, I had specified database number 0:
redis = redis.Redis(host='example.com', port=6379, db=0)
After changing the database to number 1, it worked:
redis = redis.Redis(host='example.com', port=6379, db=1)
Another way is to set protected-mode to no in /etc/redis/redis.conf. This is recommended only when running Redis locally.
I have installed Neo4j Desktop and I am able to use the Neo4j database from it. The problem is when I try to connect to the database from a Django project.
I have configured my settings.py with config.DATABASE_URL = 'bolt://neo4j:neo4j@localhost:7687', but when I run neomodel_install_labels, I get this error:
Connecting to bolt://neo4j:neo4j@localhost:7687
Traceback (most recent call last):
File "/Users/hugovillalobos/Documents/Code/AttractoraProject/AttractoraVenv/lib/python3.6/site-packages/neo4j/bolt/connection.py",
line 578, in _connect
s.connect(resolved_address)
ConnectionRefusedError: [Errno 61] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/hugovillalobos/Documents/Code/AttractoraProject/AttractoraVenv/bin/neomodel_install_labels", line 67, in <module>
main()
File "/Users/hugovillalobos/Documents/Code/AttractoraProject/AttractoraVenv/bin/neomodel_install_labels", line 62, in main
db.set_connection(bolt_url)
File "/Users/hugovillalobos/Documents/Code/AttractoraProject/AttractoraVenv/lib/python3.6/site-packages/neomodel/util.py", line 65,
in set_connection
max_pool_size=config.MAX_POOL_SIZE)
File "/Users/hugovillalobos/Documents/Code/AttractoraProject/AttractoraVenv/lib/python3.6/site-packages/neo4j/v1/api.py", line 94,
in driver
return Driver(uri, **config)
File "/Users/hugovillalobos/Documents/Code/AttractoraProject/AttractoraVenv/lib/python3.6/site-packages/neo4j/v1/api.py", line 133,
in __new__
return subclass(uri, **config)
File "/Users/hugovillalobos/Documents/Code/AttractoraProject/AttractoraVenv/lib/python3.6/site-packages/neo4j/v1/direct.py", line 7
3, in __new__
pool.release(pool.acquire())
File "/Users/hugovillalobos/Documents/Code/AttractoraProject/AttractoraVenv/lib/python3.6/site-packages/neo4j/v1/direct.py", line 4
4, in acquire
return self.acquire_direct(self.address)
File "/Users/hugovillalobos/Documents/Code/AttractoraProject/AttractoraVenv/lib/python3.6/site-packages/neo4j/bolt/connection.py",
line 453, in acquire_direct
connection = self.connector(address, self.connection_error_handler)
File "/Users/hugovillalobos/Documents/Code/AttractoraProject/AttractoraVenv/lib/python3.6/site-packages/neo4j/v1/direct.py", line 7
0, in connector
return connect(address, security_plan.ssl_context, error_handler, **config)
File "/Users/hugovillalobos/Documents/Code/AttractoraProject/AttractoraVenv/lib/python3.6/site-packages/neo4j/bolt/connection.py",
line 707, in connect
raise last_error
File "/Users/hugovillalobos/Documents/Code/AttractoraProject/AttractoraVenv/lib/python3.6/site-packages/neo4j/bolt/connection.py",
line 697, in connect
s = _connect(resolved_address, **config)
File "/Users/hugovillalobos/Documents/Code/AttractoraProject/AttractoraVenv/lib/python3.6/site-packages/neo4j/bolt/connection.py",
line 587, in _connect
raise ServiceUnavailable("Failed to establish connection to {!r} (reason {})".format(resolved_address, error.errno))
neo4j.exceptions.ServiceUnavailable: Failed to establish connection to ('::1', 7687, 0, 0) (reason 61)
I know the database is running because I can connect from neo4j desktop, and I have installed neo4j-driver and neomodel. I don't know what I am missing.
OK, I don't know whether this applies to every case, and I don't know the reason for it, but I found that it was a matter of special characters in the password. I had set my password with an exclamation mark, and when I tried to connect, it failed. I changed the password for the user in Neo4j, removing the exclamation mark, and bingo! I got a connection. Can anybody tell me why this happens?
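A plausible explanation is that the password is embedded in the bolt URI, where some characters have special meaning. One way to rule that out is to pass the credentials as a separate auth tuple instead of putting them in the URL. A minimal sketch with placeholder credentials (the import is neo4j.v1 on the older 1.x driver shown in the traceback):
from neo4j import GraphDatabase  # `from neo4j.v1 import GraphDatabase` on the 1.x driver

# Placeholder credentials; the password can safely contain characters
# such as '!' here because it is never embedded in the bolt URI.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "my!password"))
with driver.session() as session:
    print(session.run("RETURN 1").single())
driver.close()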
I'm running a script to check whether pymongo exceptions are successfully caught, but so far the only errors I get are PyCharm IDE errors. I turned the mongo daemon off to test this out. The relevant portion of the script is as follows, with the IDE errors following after it:
import pymongo
from pymongo import MongoClient
from pymongo import errors
import os
from os.path import basename
def make_collection_name(path):
    filename = os.path.splitext(basename(path))[0]
    collection_name = filename
    if collection_name in db.collection_names():
        db[collection_name].drop()
    return collection_name

if __name__ == '__main__':
    try:
        client = MongoClient()
    except pymongo.errors.ConnectionFailure as e:
        print("Could not connect to MongoDB: %s" % e)
    except pymongo.errors.ServerSelectionTimeoutError as e:
        print("Could not connect to MongoDB: %s" % e)

    filepath = **hidden filepath**
    db = client.TESTDB
    collection_name = make_collection_name(filepath)
Instead of the exceptions being handled, I get the following errors from the IDE:
Traceback (most recent call last):
File "**hidden path**", line 278, in <module>
collection_name = make_collection_name(filepath)
File "**hidden path**", line 192, in make_collection_name
if collection_name in db.collection_names():
File "C:\Python34\lib\site-packages\pymongo\database.py", line 530, in collection_names
ReadPreference.PRIMARY) as (sock_info, slave_okay):
File "C:\Python34\lib\contextlib.py", line 59, in __enter__
return next(self.gen)
File "C:\Python34\lib\site-packages\pymongo\mongo_client.py", line 859, in _socket_for_reads
with self._get_socket(read_preference) as sock_info:
File "C:\Python34\lib\contextlib.py", line 59, in __enter__
return next(self.gen)
File "C:\Python34\lib\site-packages\pymongo\mongo_client.py", line 823, in _get_socket
server = self._get_topology().select_server(selector)
File "C:\Python34\lib\site-packages\pymongo\topology.py", line 214, in select_server
address))
File "C:\Python34\lib\site-packages\pymongo\topology.py", line 189, in select_servers
self._error_message(selector))
pymongo.errors.ServerSelectionTimeoutError: localhost:27017: [WinError 10061] No connection could be made because the target machine actively refused it
Process finished with exit code 1
Beginning in PyMongo 3 (not Python 3, PyMongo 3!), the MongoClient constructor no longer blocks trying to connect to the MongoDB server. Instead, the first actual operation you do will wait until the connection completes, and then throw an exception if connection fails.
http://api.mongodb.com/python/current/migrate-to-pymongo3.html#mongoclient-connects-asynchronously
As you can see from your stack trace, the exception is thrown from db.collection_names(), not from MongoClient(). So, wrap your call to make_collection_name in try / except, not the MongoClient call.
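A rough sketch of that change, under the same script's assumptions (serverSelectionTimeoutMS is optional and only shortens how long the first operation waits before giving up):
from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

# The constructor returns immediately; the connection is only attempted by
# the first real operation, here the call that lists collection names.
client = MongoClient(serverSelectionTimeoutMS=5000)
db = client.TESTDB

try:
    names = db.collection_names()  # list_collection_names() on PyMongo >= 3.7
except ServerSelectionTimeoutError as e:
    print("Could not connect to MongoDB: %s" % e)
else:
    print(names)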