Server started closing the connection unexpectedly - Python

I have a project with 10+ parsers, and at the end it runs this code:
```python
cursor = conn.cursor()
my_file = open(r'csv\file.csv')
# Stage the CSV into a temp table, then upsert it into vhcl.
sql_statement = """
CREATE TEMP TABLE temp
(
    LIKE vhcl
)
ON COMMIT DROP;
COPY temp FROM STDIN WITH
    CSV
    HEADER
    DELIMITER AS ',';
INSERT INTO vhcl
SELECT *
FROM temp
ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name"""
cursor.copy_expert(sql=sql_statement, file=my_file)
conn.commit()
cursor.close()
```
Everything worked fine until a couple of weeks ago, when I started getting these errors:
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
I noticed that if a parser runs for less than about 10 minutes, I don't get these errors.
I tried moving the database insert into a separate function that runs after the parser finishes, but it still gives me the same error. The strange thing is that the parsers run fine on my home PC, and if I add data manually with the same function from a different file, that works fine too.
I asked whether the database had banned my IP, but it hadn't. So I have no idea why I get this error.
[PostgreSQL log screenshot]

Finally, I found a solution, though I still don't know what the problem was. It isn't a connection issue, because other parsers on the same IP and network connection work normally, and I was still able to add data with the same script from a separate project file.
My solution was to add keepalive settings to the connection:
```python
conn = psycopg2.connect(
    host=hostname,
    dbname=database,
    user=username,
    password=password,
    port=port_id,
    keepalives=1,
    keepalives_idle=30,
    keepalives_interval=10,
    keepalives_count=5)
```

You have a network problem.
Both the server and the client complain that the other side unexpectedly hung up on them, so some misconfigured network component in the middle cut the line. You have two options:
fix the network
lower tcp_keepalives_idle on the PostgreSQL server (or keepalives_idle on the client), so that the operating system sends keepalive packets frequently enough that the network doesn't consider the connection idle
You might want to read this for more details.
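For reference, a minimal sketch of the same client-side keepalive settings passed as a libpq-style DSN string instead of keyword arguments (host and credentials are placeholders):
```python
import psycopg2

# keepalives_idle on the client corresponds to tcp_keepalives_idle on
# the server side; host and credentials below are placeholders.
conn = psycopg2.connect(
    "host=db.example.com dbname=mydb user=me password=secret "
    "keepalives=1 keepalives_idle=30 keepalives_interval=10 keepalives_count=5"
)
```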

Related

mysql.connector to AWS RDS database timing out

I have an RDS database that a program I created using Python and MySQL connects to, in order to keep track of usage of the program. Any time the program is used, it adds 1 to a counter in the RDS database. Just this week the program has started throwing an error when connecting to the RDS database after about an hour of use. Previously, I could leave the software running for days without it ever timing out. Closing the software and re-opening it to re-establish the connection lets me connect for approximately another hour before it times out again.
I am connecting using the following parameters:
```python
awsConn = mysql.connector.connect(
    host='myDatabase.randomStringofChars.us-east-1.rds.amazonaws.com',
    database='myDatabase', port=3306, user='username', password='password')
```
Did something recently change with AWS/RDS? Do I need to pass a different parameter in the connection string, or should I add something to my program to re-establish the connection every so often?
Thanks
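One common mitigation (a sketch, not from the original thread): check the connection before each use and let the connector re-establish it if it has dropped. mysql-connector-python exposes this as ping(reconnect=True):
```python
import mysql.connector

def get_cursor(conn):
    # Transparently reconnect if the server or a middlebox has dropped
    # the idle connection; retry up to 3 times, 5 seconds apart.
    conn.ping(reconnect=True, attempts=3, delay=5)
    return conn.cursor()
```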

Django can't close persistent MySQL connection

Problem:
Our Django application zips large folders, which takes too long (up to 48 hours), so the Django database connection times out and throws: "MySQL server has gone away".
Description:
We have a Django==3.2.1 application whose CONN_MAX_AGE value is set to 1500 (seconds). The default wait_timeout in MySQL (MariaDB) is 8 hours.
ExportStatus table has the following attributes:
package_size
zip_time
Our application works this way:
Zip the folders
set the attribute 'package_size' of ExportStatus table after zipping and save in the database.
set the attribute 'zip_time' of ExportStatus table after zipping and save in the database.
Notice that setting these column values in the database requires a Django database connection, which has timed out by the end of the long zipping process and thus throws the "MySQL server has gone away" error.
What we have tried so far:
```python
from django.db import close_old_connections
close_old_connections()
```
This solution doesn't work.
Just after zipping, if the time taken is more than 25 minutes, we close all the connections and ensure new ones:
```python
from django.db import connections

for connection in connections.all():
    try:
        # Check whether the connection to the database is still alive.
        with connection.cursor() as c:
            c.execute("DESCRIBE auth_group;")
            c.fetchall()
    except Exception:
        connection.close()
        connection.ensure_connection()
```
When we print the length of connections.all(), it is 2. What we don't understand is how Django retains those old connections and retrieves them from the connection pool. When we close the connections from connections.all(), aren't we closing all the connections in the thread pool?
We first set package_size and then zip_time. The problem with this approach is that occasionally (not always) it throws the same error when setting zip_time. Sometimes it does seem to work: setting package_size never fails, but setting zip_time occasionally does. So our question is: if we already reset the connections after zipping, why does Django still pick up a stale connection from the pool and throw the "MySQL server has gone away" error? Is there a way to close all the old persistent connections and create new ones?
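A sketch of one possible approach (my assumption, not from the thread): skip the probe query entirely, close every connection right after the long task, and let Django lazily open a fresh one on the next ORM call. Django's close_if_unusable_or_obsolete() also honors CONN_MAX_AGE:
```python
from django.db import connections

def reset_db_connections():
    # Close dead or expired connections after a long-running task;
    # Django opens a fresh connection automatically on the next query.
    for conn in connections.all():
        conn.close_if_unusable_or_obsolete()

# Usage sketch (zip_folders and export_status are placeholders):
# zip_folders()
# reset_db_connections()
# export_status.package_size = size
# export_status.zip_time = elapsed
# export_status.save()
```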

psycopg2.OperationalError: server closed the connection unexpectedly (Airflow in AWS, connection drops on both sides)

We have an Airflow instance running in AWS Fargate. It connects to an on-premises Postgres server (on Windows) and tries to load data from a (complicated) view, using a PostgresHook. However, the task in the DAG fails in Airflow with this error:
File "/usr/local/lib/python3.7/site-packages/airflow/hooks/dbapi_hook.py", line 120, in get_records
cur.execute(sql)
psycopg2.OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
A while ago, the error occurred after some 10-15 minutes. Now it occurs sooner, after 5 minutes or even faster.
I have looked in the Postgres logs, which (confusingly) show that it was the client that closed the connection:
LOG: could not send data to client: An existing connection was forcibly closed by the remote host.
FATAL: connection to client lost
I have tried a bunch of potential solutions already.
Without Airflow
Connecting to the server outside of Airflow, using psycopg2 directly, works (even with the complicated view).
Different table
Loading data from a different table from Airflow in the cloud works, and finishes quickly too. So this "timeout" only occurs because the query takes a while.
Running the Airflow container locally
At first I could reproduce this issue, but I (think I) solved it by adding extra parameters to the Postgres connection string: keepalives=1&keepalives_idle=60&keepalives_interval=60. However, I cannot reproduce this fix for the Airflow instance in the cloud: when I add these parameters there, the error remains.
Increase timeouts
See above: I added keepalives, but I also tried to reason about other potential timeouts. I added an execution_timeout to the DAG arguments, to no avail. We also checked networking timeouts, but given the irregular pattern of the connection failures, it doesn't really look like a hard timeout...
I am at a loss here. Any suggestions?
Update: we have solved this problem with a workaround. Instead of keeping the connection open while the complex view is being queried, we turned the connection into an asynchronous connection (i.e., aconn = psycopg2.connect(database='test', async=1), from the psycopg docs). Furthermore, we turned the view into a materialized view, so we only issue a REFRESH MATERIALIZED VIEW through the asynchronous connection, and then we can simply SELECT * from the materialized view a while later, which is very fast.
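A minimal sketch of that workaround (connection details and the view name are placeholders; on Python 3.7+ the keyword is async_, since async is a reserved word):
```python
import psycopg2
from psycopg2.extras import wait_select

# Open an asynchronous connection (async connections are autocommit).
aconn = psycopg2.connect(database='test', async_=1)
wait_select(aconn)  # wait until the connection is established

# Kick off the long-running refresh; wait_select polls the socket
# instead of blocking inside execute().
acur = aconn.cursor()
acur.execute("REFRESH MATERIALIZED VIEW my_complex_view;")
wait_select(aconn)  # poll until the refresh completes
acur.close()
```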

MySQLConnector (Python): New DB connection for each query vs. one single connection

I have this problem: I'm writing some Python scripts, and while up until now I had no problems at all using a single MySQLConnector connection throughout an entire script (only closing it at the end), lately I'm having some problems.
If I create a connection at the beginning of the script, something like (ignore the security concerns, I know):
```python
db_conn = mysql.connector.connect(user='root', password='myPassword',
                                  host='127.0.0.1', database='my_db',
                                  autocommit=True)
```
and then always use it like:
```python
db_conn.cursor(buffered=True).execute(...)
```
or fetch and other methods, I will get errors like:
Failed executing the SQL query: MySQL Connection not available.
OR
Failed executing the SQL query: No result set to fetch from.
OR
OperationalError: (2013, 'Lost connection to MySQL server during query')
The code is correct; I just don't understand why this happens. Maybe it's because I'm concurrently running the same function multiple times (I tried with 2), asynchronously, so the concurrent access to the cursor causes this?
I found that someone fixed it by using a different DB connection every time (here).
I tried creating a new connection for every single query to the DB, and now there are no errors at all. It works fine, but it seems like overkill. Imagine calling the async function 10 or 100 times... there would be a lot of DB connections created. Will that cause problems? Will it run out of memory? And I guess it will also slow things down.
Is there a way to solve it by keeping the same connection for all the queries? Why does that happen?
MySQL's protocol is stateful (more like FTP than HTTP in this way). This means that if you are running multiple async threads sending and receiving packets on the same MySQL connection, the protocol can't handle it. The server and client will get confused, because messages will arrive in the wrong order.
What I mean is if different async routines are trying to use the database connection at the same time, you can easily get into trouble:
async1: sends query "select * from table1"
async2: sends query "insert into table2 ..."
async1: expects to fetch rows of its result set, but instead receives only the rows-affected count and the last insert id
It gets worse from there. For example, a query cannot execute while an existing query's result set is still open. Or even worse, you could prepare two parameterized queries and then send parameters for the wrong one.
You can use the same database connection for many queries, but DO NOT share the same connection among concurrently executing async threads. To be safe, each async routine should open its own connection; the routine that opened a given connection can then use it for multiple queries (see the sketch after this answer).
Think of it like a call center, where dozens of people each have their own phone line. They certainly should not try to share a single phone line and carry on multiple conversations! The only way that could work is if every word uttered on the phone carried some identifying information about which conversation it belonged to: "Hi, this is Mr. Smith calling about case #1234, and the answer to the question you were just asking me is..."
But MySQL's protocol doesn't do that. It assumes that each message is a continuation of the previous one, and both client and server remember what that is.
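A minimal sketch of the one-connection-per-routine pattern (credentials, the table name, and the queries are placeholders; mysql.connector is blocking, so each task runs in its own thread):
```python
import asyncio
import mysql.connector

def run_query(sql):
    # Each worker opens and closes its own connection, so concurrent
    # tasks never interleave packets on a single socket.
    conn = mysql.connector.connect(user='root', password='myPassword',
                                   host='127.0.0.1', database='my_db')
    try:
        cur = conn.cursor(buffered=True)
        cur.execute(sql)
        return cur.fetchall()
    finally:
        conn.close()

async def main():
    # Two concurrent queries, each on its own connection.
    results = await asyncio.gather(
        asyncio.to_thread(run_query, "SELECT * FROM table1"),
        asyncio.to_thread(run_query, "SELECT 1"),
    )
    print(results)

asyncio.run(main())
```
If opening a connection per task turns out to be too expensive, mysql-connector-python also ships a connection pool (mysql.connector.pooling.MySQLConnectionPool) that hands each task its own pooled connection.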

MySQL, should I stay connected or connect when needed?

I have been logging temperatures at home to a MySQL database (reading 10 sensors in total every 5 minutes) using Python, and I am wondering something...
Currently, when I first start my program, I make the usual MySQL connection, which happens only once:
```python
db = MySQLdb.connect(mysql_server, mysql_username, mysql_passwd, mysql_db)
cursor = db.cursor()
```
Then I collect the data and publish it to the database successfully. The script sleeps for 5 minutes, then collects and publishes the data again, and so on. I only connect once and never disconnect; the script just keeps looping. I only disconnect when I terminate the program.
Is this best practice? That is, should I keep the connection to the MySQL server open all the time, or should I disconnect after each insert/commit?
The reason I ask: every now and then I have to restart the script, because my MySQL server has gone offline or there is some other issue. Should I:
Keep doing what I am doing and just handle any MySQL database disconnections with a reconnect,
Put it in the crontab to collect data every five minutes and have no loop and no sleep, or
Something else?
MySQL servers are configured to handle a fixed, limited number of connections. It's not good practice to tie up a connection that you aren't using constantly, so typically you should close the connection as soon as you are done with it and reconnect only when you need it again. MySQLdb's connections are context managers, so you could use the with-statement syntax to make closing the connection automatic:
```python
connection = MySQLdb.connect(
    host=config.HOST, user=config.USER,
    passwd=config.PASS, db=config.MYDB)

with connection as cursor:
    print(cursor)
# The connection is closed for you automatically
# when Python leaves the with-suite.
```
For robustness, you might want to use try..except to handle the case where (even on the first run) connect fails to make a connection, as in the sketch below.
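A sketch of that retry idea (the retry count and delay are arbitrary; config is the same placeholder settings module used in the snippet above):
```python
import time

import MySQLdb

import config  # placeholder settings module, as in the snippet above

def get_connection(retries=3, delay=5):
    # Try to connect a few times before giving up, sleeping between
    # attempts; re-raises the last error if every attempt fails.
    for attempt in range(retries):
        try:
            return MySQLdb.connect(
                host=config.HOST, user=config.USER,
                passwd=config.PASS, db=config.MYDB)
        except MySQLdb.OperationalError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)
```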
Having said that, I would just put it in a crontab entry and dispense with sleeping.
