Realtime Database connection limit - python

Played around with a Python script and maxed out my connection limit with a loop, lol.
Any way to clear it? I've already restarted my PC.

In the case of a so-called dirty disconnect, where the client simply disappears without informing the server, the Realtime Database server depends on the socket timeout to detect that the client is gone. This may take a few minutes, but beyond that it should require no action on your side.
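If you want to avoid hitting the limit again, disconnect cleanly instead of letting connections pile up. A minimal sketch, assuming the Pyrebase client (the question doesn't name a library, and the config values are placeholders); closing the stream frees the slot on the server immediately instead of leaving it to the socket timeout:

import pyrebase

# Placeholder config; substitute your own project settings.
config = {
    "apiKey": "...",
    "authDomain": "my-app.firebaseapp.com",
    "databaseURL": "https://my-app.firebaseio.com",
    "storageBucket": "my-app.appspot.com",
}

firebase = pyrebase.initialize_app(config)
db = firebase.database()

def stream_handler(message):
    print(message["event"], message["path"], message["data"])

stream = db.child("posts").stream(stream_handler)
# ... do some work ...
stream.close()  # clean disconnect: the server releases the slot right away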

Related

How to close a SolrClient connection?

I am using SolrClient for python with Solr 6.6.2. It works as expected but I cannot find anything in the documentation for closing the connection after opening it.
from SolrClient import SolrClient

def getdocbyid(docidlist):
    for docid in docidlist:
        # A new client (and HTTP session) is created on every iteration
        solr = SolrClient('http://localhost:8983/solr',
                          auth=("solradmin", "Admin098"))
        doc = solr.get('Collection_Test', doc_id=docid)
        print(doc)
I do not know if the client closes it automatically or not. If it doesn't, wouldn't it be a problem if several connections are left open? I just want to know if there is any way to close the connection. Here is the link to the documentation:
https://solrclient.readthedocs.io/en/latest/
The connections are not kept around indefinitely. The default timeout for a persistent HTTP connection in Jetty is five seconds as far as I remember, so you do not have to worry about the number of open connections exploding.
The Jetty server will also just drop the connection if required, as it is not obliged to keep it around as a guarantee to the client. SolrClient uses a requests session internally, so it reuses the underlying connection (HTTP keep-alive) for subsequent queries. If you run into issues with this, you can keep a set of clients available as a pool in your application instead, then request an available client rather than creating a new one each time.
I'm however pretty sure you won't run into any issues with the default settings.
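If you do go the pooling route, a minimal sketch using only the standard library (the pool size is illustrative; the URL and credentials are reused from the question):

from queue import Queue

from SolrClient import SolrClient

POOL_SIZE = 4  # illustrative; size it to your worker count
pool = Queue()
for _ in range(POOL_SIZE):
    pool.put(SolrClient('http://localhost:8983/solr',
                        auth=("solradmin", "Admin098")))

def getdocbyid(docidlist):
    solr = pool.get()          # block until a client is free
    try:
        for docid in docidlist:
            print(solr.get('Collection_Test', doc_id=docid))
    finally:
        pool.put(solr)         # always return the client to the pool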

Is it standard practice to keep a FIX connection connected all day long, or relogin periodically?

I wrote a program in Python using the quickfix package which connects to a vendor via FIX. We log in in the morning, but we don't actually send messages through the connection until the end of the day. The issue is, we don't want to keep the program open for the entire day; we would rather log back in in the afternoon when we need to send the messages.
The vendor is requesting that we stay logged in for the full duration between the start and stop times specified in our configuration. That is only possible by leaving my program on for the entire day, because if I close it, the messages the vendor sends aren't registered as received by me. I don't send a logout message, though.
Is it common practice to write a program to connect via FIX and leave it running for the entire session time? Or is it acceptable to close the program, given I don't send a logout message, and reconnect at a later time in the day?
Any design or best practice advice would be helpful here.
I don't know what others have done, but I used QuickFIX with Python for years and never had any problem running my system all day, OR shutting it down periodically for whatever reason and reconnecting. In the end I wound up leaving the system connected for weeks at a time, since that allowed me to record data.
I would say that the answer to both of your questions is YES. It is common to leave it running. Also, it is acceptable to just close the program.
There can always be edge cases and idiosyncratic features of your implementation and your counterparty's, so you should seek to understand why they have asked you not to disconnect. That sounds very strange to me. Is their FIX engine not capable of something very simple and standard?
Yes, it is common to keep FIX sessions running for a long time. That should not be an issue.
You can't just shut down your program on your end, though: session-level FIX Heartbeat (35=0) messages, sent periodically (usually every 30s), are meant to keep the underlying TCP connection open and to check that both ends are still up and running properly.
From the details you gave, if your vendor (which is likely the acceptor side) requests it, it might be because they need to send you messages with no delay, as they occur.
If you (the initiator side) are not logged in at that time, they won't be able to deliver those messages, since an acceptor cannot initiate a session with you.
The vendor might monitor session logons as well, though for an acceptor that sounds odd, since acceptors simply wait for incoming connections. More likely they will monitor unexpected session drops.
All in all, it really depends on your vendor; you have to follow what they ask...
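For reference, a minimal sketch of the usual long-running setup with the quickfix Python bindings (the config file name is illustrative): start() logs on, the engine exchanges heartbeats on its own, and stop() sends a proper Logout:

import time
import quickfix as fix

class App(fix.Application):
    # Minimal callbacks; a real application would route messages here.
    def onCreate(self, sessionID): pass
    def onLogon(self, sessionID): print("logged on:", sessionID)
    def onLogout(self, sessionID): print("logged out:", sessionID)
    def toAdmin(self, message, sessionID): pass
    def fromAdmin(self, message, sessionID): pass
    def toApp(self, message, sessionID): pass
    def fromApp(self, message, sessionID): pass

settings = fix.SessionSettings("fix_settings.cfg")  # illustrative file name
app = App()
initiator = fix.SocketInitiator(app, fix.FileStoreFactory(settings),
                                settings, fix.FileLogFactory(settings))
initiator.start()          # logs on; heartbeats are handled by the engine
try:
    while True:            # stay connected for the whole session
        time.sleep(1)
finally:
    initiator.stop()       # sends Logout and closes the session cleanly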

How to get instance of database and close it? Tornado

I'm having trouble with MySQL timing out and going away after 8 hours. I am using Google App Engine as a host. My Python script uses the Tornado framework.
Right now I instantiate my MySQL db connection before any functions, right at the top of the main server script. Once I deploy that, the clock starts ticking and 8 hours or so later MySQL will go away and I will have to deploy my script again.
I haven't been using db.close() at all because I hear that restarting the database connection takes a long time. Is this true? Or is there a proper way to use db.close()?
One of my friends suggested I try getting the database instance and then closing it after each function... is that recommended, and where might I find some tutorials on that?
I'm mostly looking for resources here, but if someone wants to lay it out for me that would be awesome.
Thank you all in advance.
The connection is going away because of the wait_timeout session variable, which is "the number of seconds the server waits for activity on a noninteractive connection before closing it":
http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_wait_timeout
A good approach is to close the connection each time and create a new one when needed if you are not reusing the same connection very frequently; otherwise, you can increase the value of wait_timeout.
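A minimal sketch of that open-use-close pattern with MySQLdb (the credentials, table, and query are placeholders):

import MySQLdb

def fetch_users():
    # Open a fresh connection per request, so wait_timeout never comes into play.
    conn = MySQLdb.connect(host="localhost", user="app",
                           passwd="secret", db="mydb")
    try:
        cur = conn.cursor()
        cur.execute("SELECT id, name FROM users")
        return cur.fetchall()
    finally:
        conn.close()  # always release the connection, even on error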
Establishing a connection to a MySQL database should be quite fast and it is certainly good practice to keep the connection open only for as long as you need it.
I am not certain why your connection should be non-responsive for 8 hours - have you tried checking your settings?
The correct command in Python is connection.close().

Recover from dropped connection in redis pub/sub

I am running a client that connects to a Redis db. The client is on a WiFi connection and will drop the connection at times. Unfortunately, when this happens, the program just keeps running without throwing any kind of warning.
import redis

r = redis.StrictRedis(host=XX, password=YY)  # host/password elided in the original
ps = r.pubsub()
ps.subscribe("12345")
for items in ps.listen():
    if items['type'] == 'message':
        data = items['data']
Ideally, what I am looking for is to catch an event when the connection is lost, try to reestablish the connection, do some error correcting, and then get things back up and running. Should this be done in the Python program? Should I have an external watchdog?
Unfortunately, one has to 'ping' Redis to check whether it is available. If you try to put a value into Redis, it will raise a ConnectionError exception if the connection is lost. But the listen() generator will not close automatically when the connection is lost.
I think that hacking on Redis' connection pool could help; give it a try.
P.S. It is very insecure to connect to Redis in an untrusted network environment.
This is an old, old question, but I linked one of my own questions to it and happened to run across it again. It turned out there was a bug in the redis library that caused the client to enter an infinite loop attempting to reconnect if it lost the connection to the redis server. I debugged the issue and PR'd the change; it was merged a long time ago now. Once it surfaced, the maintainer also knew of a second location that had the same issue.
This problem shouldn't occur anymore.
To fully answer the question: I can't remember which error it is, given the time since I fixed this, but there is now a specific error returned that you can catch and reconnect on.
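A minimal reconnect loop, assuming the error in question is redis-py's ConnectionError (handle_message is a hypothetical callback standing in for your own logic):

import time
import redis

def listen_forever(host, password, channel, handle_message):
    while True:
        try:
            r = redis.StrictRedis(host=host, password=password,
                                  socket_keepalive=True)
            ps = r.pubsub()
            ps.subscribe(channel)
            for item in ps.listen():
                if item['type'] == 'message':
                    handle_message(item['data'])
        except redis.exceptions.ConnectionError:
            time.sleep(1)  # back off briefly, then reconnect and resubscribe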

MySQLdb execute timeout

Sometimes in our production environment a situation occurs where the connection between our service (a Python program that uses MySQLdb) and the MySQL server is flaky: some packets are lost, some black magic happens, and .execute() on a MySQLdb Cursor object never returns (or takes a huge amount of time to return).
This is very bad because it wastes service worker threads. Sometimes it leads to exhaustion of the worker pool, and the service stops responding at all.
So the question is: is there a way to interrupt a MySQLdb Connection.execute operation after a given amount of time?
If the communication is such a problem, consider writing a 'proxy' that receives your SQL commands over the flaky connection and relays them to the MySQL server on a reliable channel (maybe running on the same box as the MySQL server). This way you have total control over failure detection and retrying.
You need to analyse exactly what the problem is. MySQL connections should eventually time out if the server is gone; TCP keepalives are generally enabled. You may be able to tune the OS-level TCP timeouts.
If the database is "flaky", then you definitely need to investigate how. It seems unlikely that the database really is the problem, more likely that networking in between is.
If you are using (some) stateful firewalls of any kind, it's possible that they're losing some of the state, thus causing otherwise good long-lived connections to go dead.
You might want to consider changing the idle timeout parameter in MySQL; otherwise, a long-lived, unused connection may go "stale", where the server and client both think it's still alive but some stateful network element in between has "forgotten" about the TCP connection. An application trying to use such a "stale" connection will have a long wait before receiving an error (but it should receive one eventually).
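To put a hard bound on a single query, one option is the read_timeout and write_timeout connect arguments; a sketch assuming the mysqlclient fork of MySQLdb, which supports them (verify against your driver version; the credentials are placeholders):

import MySQLdb

conn = MySQLdb.connect(host="db.internal", user="svc", passwd="secret",
                       db="prod", connect_timeout=5,
                       read_timeout=10, write_timeout=10)
try:
    cur = conn.cursor()
    cur.execute("SELECT SLEEP(30)")  # blocks past read_timeout, so this raises
except MySQLdb.OperationalError:
    conn.close()  # drop the wedged connection so the worker thread is freed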
