Should I keep a db connection open in MySQLdb? - python

I am writing a Python script that listens to the Twitter streaming API, tracks specific keywords, and inserts matching tweets into a MySQL database using MySQLdb. I am not sure which way to choose:
1. For each incoming tweet, open a db connection, insert into the db, then close the connection.
2. Open a db connection once and execute an insert for each incoming tweet, never closing the connection at all.
I expect the script to receive 1-10 tweets per second.

It kind of depends on how your script is supposed to be run, but it should close the connection at some point - at least once, when the process dies. Assuming it's a long-running process (a daemon etc.), the simplest strategy would be to use a "with" block to ensure the connection is closed, i.e.:

with MySQLdb.connect(**kw) as db:
    while some_condition():
        do_stuff_with(db)

But you'll probably need something a bit more involved, since MySQL tends to close idle connections by itself.
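For instance, a rough sketch of "more involved": keep one connection for the life of the process, but reopen it when the server has dropped it. The credentials, the tweets table, and the wait_for_next_tweet() helper here are hypothetical stand-ins, not part of any real API:

import MySQLdb

def get_connection():
    # Placeholder credentials - substitute your own.
    return MySQLdb.connect(host="localhost", user="user",
                           passwd="secret", db="twitter")

def insert_tweet(db, text):
    cur = db.cursor()
    cur.execute("INSERT INTO tweets (text) VALUES (%s)", (text,))
    db.commit()

db = get_connection()
try:
    while some_condition():
        tweet = wait_for_next_tweet()  # hypothetical: blocks until a tweet arrives
        try:
            insert_tweet(db, tweet)
        except MySQLdb.OperationalError:
            # The server closed the idle connection; reconnect and retry once.
            db = get_connection()
            insert_tweet(db, tweet)
finally:
    db.close()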

Related

Does the pre ping feature in SqlAlchemy db pools automatically reconnect and send the SQL command in case the pre ping check fails?

I want some clarification on how exactly the pre-ping feature works with SQLAlchemy db pools. Let's say I try to make a SQL query to my database through the pool. If the pool sends a pre-ping to check the connection and the connection is broken, does it handle this automatically? By handling I mean that it reconnects and then sends the SQL query? Or do I have to handle this myself in my code?
Thanks!
From the docs, yes, stale connections are handled transparently:
The calling application does not need to be concerned about organizing operations to be able to recover from stale connections checked out from the pool.
... unless:
If the database is still not available when "pre ping" runs, then the initial connect will fail and the error for failure to connect will be propagated normally. In the uncommon situation that the database is available for connections, but is not able to respond to a "ping", the "pre_ping" will try up to three times before giving up, propagating the database error last received.
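For completeness, here is a minimal sketch of enabling the feature; the connection URL and query are placeholders:

from sqlalchemy import create_engine, text

# pool_pre_ping=True makes the pool issue a lightweight ping before
# handing out each pooled connection; stale ones are replaced first.
engine = create_engine(
    "mysql+pymysql://user:secret@localhost/mydb",
    pool_pre_ping=True,
)

with engine.connect() as conn:
    # If the pooled connection had gone stale, it was already swapped
    # out before this query ran - no handling needed here.
    rows = conn.execute(text("SELECT 1")).fetchall()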

pymysql SELECT * only detecting changes made externally after instantiating a new connection

I have two applications that access the same DB. One application inserts data into a table; the other sits in a loop and waits for that data to become available. If I open a new connection, run the SELECT query, and close the connection each time, I find the data in the table without issues. But I am trying to reduce the number of connections, so I tried to leave the connection open and just loop, sending the query each time. When I do this, I do not see any of the data that was inserted into the table after the original connection was made. I understand I could just reconnect and close every time, but that is a lot of overhead if I am connecting and closing every second or two. Any ideas how to see data added to the DB by an external source with a SELECT query, without having to connect and close every time through the loop?
Do you commit your insert? And does the connection running the SELECT ever commit? With InnoDB's default REPEATABLE READ isolation, an open transaction reads from a consistent snapshot, so a long-lived connection will not see rows inserted by the other application until its own transaction ends. Committing (or enabling autocommit) on the reading connection before each SELECT starts a fresh transaction and makes the new rows visible.
That said, closing and reopening the connection is also a normal approach, and opening a connection just for the SELECT query does not generate much overhead.
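A minimal sketch of the commit-before-SELECT approach with pymysql; the credentials, the jobs table, and the handle() function are placeholders:

import time
import pymysql

conn = pymysql.connect(host="localhost", user="user",
                       password="secret", database="mydb")

while True:
    # Committing ends this connection's current transaction, so the
    # next SELECT starts a new one and sees freshly inserted rows.
    conn.commit()
    with conn.cursor() as cur:
        cur.execute("SELECT * FROM jobs WHERE processed = 0")
        rows = cur.fetchall()
    if rows:
        handle(rows)  # hypothetical processing function
    time.sleep(2)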

MySQLConnector (Python): New DB connection for each query vs. one single connection

I have this problem: I'm writing some Python scripts, and while up until now I had no problems at all using a single MySQLConnector connection throughout an entire script (only closing it at the end of the script), lately I'm having some problems.
If I create a connection at the beginning of the script, something like (ignore the security concerns, I know):
db_conn = mysql.connector.connect(user='root', password='myPassword', host='127.0.0.1', database='my_db', autocommit=True)
and then always use it like:
db_conn.cursor(buffered=True).execute(...)
or call fetch and other methods, I will get errors like:
Failed executing the SQL query: MySQL Connection not available.
OR
Failed executing the SQL query: No result set to fetch from.
OR
OperationalError: (2013, 'Lost connection to MySQL server during query')
The code is correct; I just don't understand why this happens. Maybe it's because I'm concurrently running the same function multiple times (I tried with 2), async, so maybe the concurrent access to the cursor causes this?
I found someone fixed it by using a different DB connection every time (here).
I tried creating a new connection for every single query to the DB, and now there are no errors at all. It works fine, but it seems overkill. Imagine calling the async function 10 or 100 times... there would be a lot of DB connections created. Will it cause problems? Will it run out of memory? And I guess it will also slow things down.
Is there a way to solve it by keeping the same connection for all the queries? Why does that happen?
MySQL's client/server protocol is stateful (more like FTP than HTTP in this respect). This means that if you have multiple async tasks sending and receiving packets on the same MySQL connection, the protocol can't handle that. The server and client will get confused, because messages will arrive in the wrong order.
What I mean is if different async routines are trying to use the database connection at the same time, you can easily get into trouble:
async1: sends query "select * from table1"
async2: sends query "insert into table2 ..."
async1: expects to fetch rows of a result set, but instead receives only rows-affected and the last insert id
It gets worse from there. For example, a new query cannot execute while an earlier query's result set is still open. Or even worse, you could prepare two parameterized queries and then send parameters for the wrong one.
You can use the same database connection for many queries, but DO NOT share the same connection among concurrently executing async threads. To be safe, each async routine should open its own connection. Then the thread that opened a given connection can use that connection for multiple queries.
Think of it like a call center, where dozens of people each have their own phone line. They certainly should not try to share a single phone line and carry on multiple conversations! The only way that could work is if every word uttered on the phone carried some identifying information for which conversation it belonged to: "Hi, this is Mr. Smith calling about case #1234, and the answer to the question you just asked me is..."
But MySQL's protocol doesn't do that. It assumes that each message is a continuation of the previous one, and both client and server remember what that is.
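One way to give each concurrent routine its own connection, without paying the full connect cost every time, is mysql.connector's built-in connection pool. Here is a sketch reusing the question's connection settings; the pool size and the run_query() helper are illustrative choices, not a prescribed design:

import mysql.connector.pooling

# Each caller checks out its own connection, so concurrent tasks never
# interleave packets on the same wire.
pool = mysql.connector.pooling.MySQLConnectionPool(
    pool_name="workers", pool_size=5,
    user='root', password='myPassword',
    host='127.0.0.1', database='my_db', autocommit=True)

def run_query(sql, params=()):
    conn = pool.get_connection()   # exclusive to this caller
    try:
        cur = conn.cursor(buffered=True)
        cur.execute(sql, params)
        if cur.with_rows:          # SELECT-style statement
            return cur.fetchall()
        return cur.rowcount
    finally:
        conn.close()               # returns the connection to the pool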

How to handle AWS-RDS connections within my application?

I'm looking for some input on how to handle my connection to AWS-RDS. Should I open and close the connection each time I execute a query? Should I use a lambda function, and why?
I currently have it set up so that the connection remains open and all executions are handled from there. I have no connection closes or timeouts.
conn = pymysql.connect(db=dbname, host=host, port=port,
                       user=user, password=password)
cur = conn.cursor()
I then have query executions throughout the code, like so:
cur.execute("SELECT product, amount, total " +
"FROM " + table +
" WHERE po_date BETWEEN %s AND %s",
(cur_month, next_month))
This depends on your application's needs.
Global connection - If you create the connection at the global level, you save the cost of opening a connection each time you need to access the database, but you use more memory on the database server, which has to maintain the open connection. If the application does not close the connection on exit, the database must eventually time out the idle connection and kill it. You will also need retry logic in your application to make sure the connection is still alive (see the sketch after the next option).
Connect each time - Adds the overhead of creating and closing the connection. It uses extra CPU on both the client and the db side to open and close the connection, but keeps the connection count lower.
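Here is a sketch of that retry logic for the global-connection option, reusing the connection settings from the question; pymysql's ping(reconnect=True) re-opens a dead connection in place:

import pymysql

# dbname, host, port, user, password as in the question.
conn = pymysql.connect(db=dbname, host=host, port=port,
                       user=user, password=password)

def run_query(sql, params=()):
    # Re-establish the session first if RDS has timed out the idle
    # connection; on a healthy connection this is a cheap round trip.
    conn.ping(reconnect=True)
    with conn.cursor() as cur:
        cur.execute(sql, params)
        return cur.fetchall()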
As for using Lambda, that completely depends on the application design. But I would say yes, use it when you can!
If you want to use Lambda to connect to a database, you will need to build a deployment package or a Lambda layer that includes the SQL client. Here are some links with step-by-step instructions for creating these for Python with pymysql. If needed, you can substitute another SQL client library using the same instructions.
https://geektopia.tech/post.php?blogpost=Create_Lambda_Package_Python
https://geektopia.tech/post.php?blogpost=Create_Lambda_Layer_Python

MySQL, should I stay connected or connect when needed?

I have been logging temperatures at home to a MySQL database (10 sensors in total, read every 5 minutes) using Python, and I am wondering something...
Currently, when I first run my program, I connect to MySQL in the normal way, just once:
db = MySQLdb.connect(mysql_server, mysql_username, mysql_passwd, mysql_db)
cursor = db.cursor()
Then I collect the data and publish it to the database successfully. The script sleeps for 5 minutes, then collects and publishes the data again, and so on. However, I only connect once and never disconnect; it just keeps looping. I only disconnect if I terminate the program.
Is this best practice? That is, should I keep the connection to the MySQL server open all the time, or should I disconnect after each insert/commit?
The reason I ask: every now and then I have to restart the script, because my MySQL server has gone offline or there is some other issue. Should I:
1. Keep doing what I am doing and just handle any MySQL disconnections with a reconnect,
2. Put it in the crontab to collect data every five minutes, with no loop and no sleep, or
3. Something else?
MySQL servers are configured to handle a fixed, limited number of connections. It's not good practice to tie up a connection that you are not using constantly, so typically you should close the connection as soon as you are done with it and reconnect only when you need it again. MySQLdb's connections are context managers, so you can use the with-statement syntax to make closing the connection automatic.
connection = MySQLdb.connect(
    host=config.HOST, user=config.USER,
    passwd=config.PASS, db=config.MYDB)

with connection as cursor:
    print(cursor)
    # the connection is closed for you automatically
    # when Python leaves the with-suite.
For robustness, you might want to use try...except to handle the case where connect fails to make a connection (even on the first run).
Having said that, I would just put it in a crontab entry and dispense with sleeping.
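That would look something like the sketch below: a one-shot script for a "*/5 * * * *" crontab entry that connects, inserts, and exits. The read_sensors() helper and temps table are hypothetical; the config module mirrors the snippet above:

import sys
import MySQLdb
import config

def main():
    try:
        connection = MySQLdb.connect(
            host=config.HOST, user=config.USER,
            passwd=config.PASS, db=config.MYDB)
    except MySQLdb.OperationalError as exc:
        # Server unreachable right now; exit and let cron try again in 5 minutes.
        sys.stderr.write("connect failed: %s\n" % exc)
        return 1
    try:
        with connection as cursor:
            for sensor_id, temp in read_sensors():  # hypothetical sensor reader
                cursor.execute(
                    "INSERT INTO temps (sensor, reading) VALUES (%s, %s)",
                    (sensor_id, temp))
    finally:
        connection.close()
    return 0

if __name__ == "__main__":
    sys.exit(main())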
