I initialize the MariaDB connection during webapp initialization like this:
con = MySQLdb.connect('localhost', 'user', 'pass', 'db')
Now I've found that it doesn't work, since there is a timeout on this connection.
What is the best practice for setting up and keeping a connection to the db? Increase the timeout, create a connection on each request, or something more tuned?
I recommend you build a singleton object that returns the db connection (your con). The first time it is called, it performs the MySQLdb.connect(). It then keeps a static copy of the connection to return to the caller on subsequent calls.
In order to recover from disconnects... First, note that a timeout is not the only reason for losing a connection; a network glitch could cause it. So forget about timeouts and simply plan to reconnect when needed.
One way is to do a ping to see if it's still open and, if it is not, reconnect.
Another way is to have "autoreconnect" turned on. But this has a negative impact on transactions, @variables, and other things that could bite you.
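A minimal sketch of that singleton-plus-ping idea, assuming MySQLdb and the same connection arguments as above (the helper name get_connection is just an illustration):

import MySQLdb

_conn = None  # module-level singleton connection

def get_connection():
    # Return a shared connection, reconnecting if the server dropped it.
    global _conn
    if _conn is None:
        _conn = MySQLdb.connect('localhost', 'user', 'pass', 'db')
        return _conn
    try:
        _conn.ping()  # raises OperationalError if the connection has gone away
    except MySQLdb.OperationalError:
        _conn = MySQLdb.connect('localhost', 'user', 'pass', 'db')
    return _conn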
I want some clarification on how exactly the pre-ping feature works with SQLAlchemy db pools. Let's say I try to make a SQL query to my database through the db pool. If the pool sends a pre-ping to check the connection and the connection is broken, does it handle this automatically? By handling I mean that it reconnects and then sends the SQL query? Or do I have to handle this myself in my code?
Thanks!
From the docs, yes stale connections are handled transparently:
The calling application does not need to be concerned about organizing operations to be able to recover from stale connections checked out from the pool.
... unless:
If the database is still not available when “pre ping” runs, then the initial connect will fail and the error for failure to connect will be propagated normally. In the uncommon situation that the database is available for connections, but is not able to respond to a “ping”, the “pre_ping” will try up to three times before giving up, propagating the database error last received.
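In code terms, pre-ping is turned on with the pool_pre_ping flag when creating the engine; a minimal sketch with a placeholder connection URL:

from sqlalchemy import create_engine, text

# pool_pre_ping tests each connection on checkout and transparently
# replaces it if it has gone stale
engine = create_engine(
    "mysql+pymysql://user:pass@localhost/db",  # placeholder URL
    pool_pre_ping=True,
)

with engine.connect() as conn:
    conn.execute(text("SELECT 1"))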
I have a question and I hope someone can help me.
To give you some context, imagine a loop like this:
import mysql.connector

while True:
    conn = mysql.connector.connect(**args)  # args without specifying pool_name
    cursor = conn.cursor()
    cursor.execute(something)
    conn.commit()
    cursor.close()
    # at this point, what is better:
    conn.close()
    # or
    conn.disconnect()
    # or
    conn.shutdown()
In my case, I'm using conn.close(), but after the script has been running for a long time, I always get this error:
mysql.connector.errors.OperationalError: 2013 (HY000): Lost connection to MySQL server during query
Apparently I'm exceeding the timeout of the MySQL connection, which is 8 hours by default. But looking at the loop, it creates and closes a new connection on each iteration. I'm pretty sure the cursor execution takes no more than an hour.
So the question is: doesn't the close() method close the connection? Should I use disconnect() or shutdown() instead? What are the differences between using one or the other?
I hope I've explained myself well, best regards!
There might be a problem inside your code.
Normally, close() will work every time, even if you are using a loop.
But still, try those three commands by trial and error and see what suits your code.
The docs say it clearly:
close() is a synonym for disconnect().
For a connection obtained from a connection pool, close() does not actually close it but returns it to the pool and makes it available for subsequent connection requests.
disconnect() tries to send a QUIT command and close the socket. It raises no exceptions. MySQLConnection.close() is a synonymous method name and more commonly used.
To shut down the connection without sending a QUIT command first, use shutdown().
For shutdown():
Unlike disconnect(), shutdown() closes the client connection without attempting to send a QUIT command to the server first. Thus, it will not block if the connection is disrupted for some reason such as network failure.
But I can't figure out why you get "Lost connection to MySQL server during query". You may check this discussion: Lost connection to MySQL server during query
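For what it's worth, a minimal sketch of the loop with an explicit cursor and a try/finally so the connection is closed even when execute() raises (the connection args and query are placeholders):

import mysql.connector

while True:
    conn = mysql.connector.connect(**args)  # placeholder connection args
    cursor = conn.cursor()
    try:
        cursor.execute(something)           # placeholder query
        conn.commit()
    finally:
        cursor.close()
        conn.close()  # synonym for disconnect(); if pooling is in use, it
                      # returns the connection to the pool instead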
I am using SolrClient for Python with Solr 6.6.2. It works as expected, but I cannot find anything in the documentation about closing the connection after opening it.
def getdocbyid(docidlist):
    for id in docidlist:
        solr = SolrClient('http://localhost:8983/solr', auth=("solradmin", "Admin098"))
        doc = solr.get('Collection_Test', doc_id=id)
        print(doc)
I do not know if the client closes it automatically or not. If it doesn't, wouldn't it be a problem if several connections are left open? I just want to know if there is any way to close the connection. Here is the link to the documentation:
https://solrclient.readthedocs.io/en/latest/
The connections are not kept around indefinitely. The standard timeout for any persistent HTTP connection in Jetty is five seconds, as far as I remember, so you do not have to worry about the number of connections being kept alive exploding.
The Jetty server will also just drop the connection if required, as it's not required to keep it around as a guarantee for the client. SolrClient uses a requests session internally, so it should reuse the underlying connection for subsequent queries. If you run into issues with this, you can keep a set of clients available as a pool in your application instead, then request an available client instead of creating a new one each time.
I'm however pretty sure you won't run into any issues with the default settings.
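If you do want the client-pool approach mentioned above, a minimal sketch using a queue.Queue of SolrClient instances (pool size and credentials are placeholders, reusing the question's URL and collection):

from queue import Queue
from SolrClient import SolrClient

POOL_SIZE = 4  # placeholder; size it to your expected concurrency

# pre-create a fixed number of clients and hand them out on demand
pool = Queue()
for _ in range(POOL_SIZE):
    pool.put(SolrClient('http://localhost:8983/solr',
                        auth=("solradmin", "Admin098")))

def getdocbyid(docidlist):
    client = pool.get()      # blocks until a client is free
    try:
        for doc_id in docidlist:
            print(client.get('Collection_Test', doc_id=doc_id))
    finally:
        pool.put(client)     # return the client to the pool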
My db is SQL Server on a remote machine and I am encountering quite a bit of latency.
I have a method in a controller that is structured like this:
def submitData():
    parameters = db.site.select(...)
    results = some_http_post()  # posts data to remote server
    if results:
        rec = db.status_table.insert(...)
        rec.status_table.update(...)
What tends to happen is that some_http_post() takes several seconds to get a response, and I run out of threads.
When I hit web2py with more than 6 concurrent requests to submitData, I encounter freezes in requests rather than getting a DB error.
This has the effect of stopping any further web2py requests.
I would ideally like to close the db connection before the call to some_http_post and start another db connection after it, but I don't see a simple way to do this with the DAL API. Is this possible or is there a better optimisation that I could be trying?
Connection pooling is enabled by default.
If you add "pool_size=0" to the connection string this disables connection pooling and I assume the behavior to forcibly open/close each conn. instead of leaving them open.
If you need more threads (sounds like), increase your pool_size and see what happens.
OR, yes, you can use the DAL and do a db.close() and the connection is auto-reopened on first request
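A minimal sketch of the pool_size suggestion, assuming the db object is defined in a web2py model file (the connection URI and pool size are placeholders):

# models/db.py -- in a web2py model file, DAL is already available
db = DAL('mssql://user:pass@remote-host/mydb', pool_size=10)  # placeholder URI/size

With pooling enabled, connections are recycled across requests instead of being opened and closed each time.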
Try enabling connection pooling: http://web2py.com/books/default/chapter/29/06#Connection-pooling
Sometimes in our production environment a situation occurs where the connection between a service (a Python program that uses MySQLdb) and the MySQL server is flaky: some packets are lost, some black magic happens, and .execute() on a MySQLdb.Cursor object never returns (or takes a huge amount of time to return).
This is very bad because it wastes service worker threads. Sometimes it leads to exhaustion of the worker pool, and the service stops responding at all.
So the question is: is there a way to interrupt a MySQLdb.Connection.execute operation after a given amount of time?
If the communication is such a problem, consider writing a 'proxy' that receives your SQL commands over the flaky connection and relays them to the MySQL server on a reliable channel (maybe running on the same box as the MySQL server). This way you have total control over failure detection and retrying.
You need to analyse exactly what the problem is. MySQL connections should eventually timeout if the server is gone; TCP keepalives are generally enabled. You may be able to tune the OS-level TCP timeouts.
If the database is "flaky", then you definitely need to investigate how. It seems unlikely that the database really is the problem, more likely that networking in between is.
If you are using stateful firewalls of any kind, it's possible that they're losing some of the state, causing otherwise good long-lived connections to go dead.
You might want to consider changing the idle timeout parameter in MySQL; otherwise, a long-lived, unused connection may go "stale", where the server and client both think it's still alive, but some stateful network element in between has "forgotten" about the TCP connection. An application trying to use such a "stale" connection will have a long wait before receiving an error (but it should eventually).
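One practical knob, assuming the mysqlclient fork of MySQLdb (recent versions accept connect_timeout, read_timeout, and write_timeout keyword arguments): a read timeout bounds how long execute() can block waiting on the server, so a hang becomes an OperationalError you can catch and retry. The specific values below are placeholders:

import MySQLdb

con = MySQLdb.connect(
    host='localhost', user='user', passwd='pass', db='db',
    connect_timeout=5,   # seconds to wait while establishing the connection
    read_timeout=30,     # seconds to wait for the server's reply to a query
    write_timeout=30,    # seconds to wait while sending data to the server
)

cur = con.cursor()
try:
    cur.execute("SELECT 1")  # placeholder query
except MySQLdb.OperationalError:
    # the query exceeded read_timeout or the connection broke; reconnect/retry here
    con.close()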