How can I close the connection in py2neo? - python

I just want to know how to close the connection in py2neo.
graph = py2neo.Graph(password='xxxxx', host='xxxx')
I tried to use
graph.close()
but I receive the following message:
AttributeError: 'Graph' object has no attribute 'close'
Library version: py2neo==3.1.2
Regards.

There is no close method. I was wondering the same thing, and having seen no other answer, I started using netstat and tcpdump to watch the behavior of neo4j when connecting via py2neo.
Here's what I learned...
(1) It seems that neo4j (when connected to over HTTP) handles each request RESTfully, with no persistent connection as with other databases such as Postgres. This means there is actually no need for a .close() method.
(2) The downside is that you may end up accumulating connections in the TIME_WAIT state, because no 'Connection: close' header is sent. Under low load this should not be a problem; at scale, however, it will need some tuning at the operating-system level. (I'll forgo ranting about how Java programmers seem to be notorious about not cleaning up after themselves and leaving that to someone else to do; I rant about this too much on too many applications.)
Hopefully this helps. Happy Hacking!
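If you want to confirm the TIME_WAIT build-up yourself without reaching for netstat, here is a rough sketch in Python (assuming psutil is installed and neo4j is listening on its default HTTP port 7474):
import psutil

# Count sockets stuck in TIME_WAIT towards the neo4j HTTP port; run this
# while your py2neo code is making requests.
time_wait = [c for c in psutil.net_connections(kind='tcp')
             if c.status == psutil.CONN_TIME_WAIT
             and c.raddr and c.raddr.port == 7474]
print('connections in TIME_WAIT to neo4j:', len(time_wait))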

To just free up the object, I used:
del graph
So far, no issues. I did this because I didn't want a Graph and an OGM repository connection open at the same time, which doesn't appear to be an issue anyway.
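A minimal sketch of the same idea: keep the Graph inside a function scope so the reference is dropped as soon as you are done (run_query and the Cypher handling are illustrative; graph.run()/.data() are the py2neo v3 query API):
import py2neo

def run_query(cypher):
    graph = py2neo.Graph(password='xxxxx', host='xxxx')
    try:
        return graph.run(cypher).data()
    finally:
        del graph   # drop the reference; there is nothing else to close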

How to close a SolrClient connection?

I am using SolrClient for python with Solr 6.6.2. It works as expected but I cannot find anything in the documentation for closing the connection after opening it.
from SolrClient import SolrClient

def getdocbyid(docidlist):
    for doc_id in docidlist:
        solr = SolrClient('http://localhost:8983/solr', auth=("solradmin", "Admin098"))
        doc = solr.get('Collection_Test', doc_id=doc_id)
        print(doc)
I do not know if the client closes the connection automatically or not. If it doesn't, wouldn't it be a problem if several connections are left open? I just want to know if there is any way to close the connection. Here is the link to the documentation:
https://solrclient.readthedocs.io/en/latest/
The connections are not kept around indefinitely. As far as I remember, the default timeout for a persistent HTTP connection in Jetty is five seconds, so you do not have to worry about the number of kept-alive connections exploding.
The Jetty server will also just drop the connection if required, since keeping it open is not a guarantee made to the client. SolrClient uses a requests session internally, so it should reuse the connection for subsequent queries. If you run into issues with this, you can keep a set of clients available as a pool in your application instead, and request an available client rather than creating a new one each time (a rough sketch follows below).
However, I'm pretty sure you won't run into any issues with the default settings.
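If you do want the pooling approach, a rough sketch (illustrative only; the URL, credentials, and collection name are taken from the question, and POOL_SIZE is arbitrary):
import queue
from SolrClient import SolrClient

POOL_SIZE = 4
pool = queue.Queue()
for _ in range(POOL_SIZE):
    pool.put(SolrClient('http://localhost:8983/solr',
                        auth=("solradmin", "Admin098")))

def getdocbyid(docidlist):
    solr = pool.get()            # block until a client is free
    try:
        for doc_id in docidlist:
            print(solr.get('Collection_Test', doc_id=doc_id))
    finally:
        pool.put(solr)           # return the client for reuse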

Autobahn Twisted WebSocket memory leak

I am working on a websocket server and am trying to use python twisted + autobahn but I believe I am hitting a memory leak. In fact I was able to reproduce it with the echo code on https://github.com/crossbario/autobahn-python/tree/master/examples/twisted/websocket/echo
The symptom I see is that on the server side the protocol instances are never freed after connection is closed.
I have tried to examine this in various ways: the simplest was adding a print in the __del__ method; more complex was examining with pdb and gc. And yes, I have observed the memory use of the process climbing steadily as connections are made and closed over and over.
What I expect to happen is that after onClose completes, the protocol instance should go away for good. In fact I have other server implementations based on Twisted (but without Autobahn websockets) and I have confirmed that's how it works there (although I use connectionLost instead).
Does anyone have a clue what is happening?
I faced a memory overflow issue with an Autobahn websocket server that distributed realtime data to clients. The issue, however, was with clients that keep the connection open but are not able to consume the data.
This caused memory to keep accumulating on the server side. I was able to address the issue by finding the variable responsible for holding the buffered data: the transport._tempDataBuffer variable from the transport layer in Twisted. Defining a maximum size limit on the buffer and clearing it when full solved the issue for me.
I don't know if you are referring to the same issue; see if this helps.
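For reference, a rough sketch of that kind of guard (assumptions: the private _tempDataBuffer list mentioned above is what Twisted's transport uses for queued writes, which may change between Twisted versions; MAX_BUFFERED_CHUNKS is an arbitrary limit, and dropping the connection is one possible policy rather than clearing the buffer):
from autobahn.twisted.websocket import WebSocketServerProtocol
from twisted.internet import task

MAX_BUFFERED_CHUNKS = 1000   # arbitrary cap on queued write chunks

class ThrottlingProtocol(WebSocketServerProtocol):
    def onOpen(self):
        # Periodically check how much data is queued for a slow client.
        self._monitor = task.LoopingCall(self._checkBuffer)
        self._monitor.start(5.0)

    def _checkBuffer(self):
        buffered = getattr(self.transport, '_tempDataBuffer', [])
        if len(buffered) > MAX_BUFFERED_CHUNKS:
            # The client cannot keep up; drop it rather than letting the
            # server-side buffer grow without bound.
            self.dropConnection(abort=True)

    def onClose(self, wasClean, code, reason):
        if getattr(self, '_monitor', None) and self._monitor.running:
            self._monitor.stop()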

DataStax Cassandra cassandra.cluster.NoHostAvailable

I am consistently getting this error under normal conditions. I am using the Python Cassandra driver (v3.11) to connect locally with RPC enabled. The issue presents itself after a period of time. My assumption was that it was related to the maximum number of connections or queries. Any pointers on where to begin troubleshooting would be greatly appreciated.
Please check whether your nodes are really listening by opening a separate connection from, say, a cqlsh terminal; as you say it is running locally, it is probably a single node. If that connects, you might want to see how many file handles are open; maybe it is running out of them. We had a similar problem a couple of years back that was attributed to the number of available file handles.
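A quick way to do that first check from Python rather than cqlsh (assumes a single local node reachable on the driver's default port 9042):
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect()
row = session.execute("SELECT release_version FROM system.local").one()
print('connected, server version:', row.release_version)
cluster.shutdown()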

ZeroRPC auto-assign free port number

I am using ZeroRPC for a project where there may be multiple instances running on the same machine. For this reason, I need to be able to auto-assign unused port numbers. I know how to accomplish this with regular sockets using socket.bind(('', 0)) or with ZeroMQ using the bind_to_random_port method, but I cannot figure out how to do this with ZeroRPC.
Since ZeroRPC is based on ZeroMQ, it must be possible.
Any ideas?
Having read the details about ZeroRPC-python's current state, the safest option for solving the task would be to create a central LotterySINGLETON that, upon an instance's request (over <-REQ/REP->), hands out the next free port number.
This approach is isolated from any ZeroRPC-dev/mod(s) changes to its use of the otherwise stable ZeroMQ API, and gives you full control over which port numbers are pre-configured, included in, or excluded from the LotterySINGLETON's draws.
The other way around would be to try to bypass the ZeroRPC layer and ask ZeroMQ directly for the next random port, but the ZeroRPC documentation discourages bypassing its own controls imposed on the (otherwise pure) ZeroMQ framework elements (which is quite reasonable to emphasise, as doing so erodes the consistency of the ZeroRPC layer's add-on operations and services, so it should rather be "obeyed" than "challenged" by trial and error...).
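A rough sketch of one way an instance could draw a free port before handing it to zerorpc (illustrative only; HelloService is a placeholder class, and note the small race window between closing the probe socket and zerorpc binding to the port, which a real central allocator would avoid):
import socket
import zerorpc

def pick_free_port():
    probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    probe.bind(('', 0))              # let the OS choose an unused port
    port = probe.getsockname()[1]
    probe.close()
    return port

class HelloService(object):          # placeholder service class
    def hello(self, name):
        return 'Hello, %s' % name

port = pick_free_port()
server = zerorpc.Server(HelloService())
server.bind('tcp://127.0.0.1:%d' % port)   # the chosen port is now known
server.run()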
The following will let ZMQ choose a free port:
import zerorpc

s = zerorpc.Server(SomeClass())   # SomeClass: whatever service you expose
s.bind('tcp://127.0.0.1:0')       # port 0: the OS picks an unused port
The problem with this is that now you don't know which port it bound to. I managed to find the port with netstat and successfully connected to it, but that's probably not what you want to do. I made a separate question out of this: Find out bound ports of zerorpc server

MySQLdb execute timeout

Sometimes in our production environment a situation occurs where the connection between the service (a Python program that uses MySQLdb) and the MySQL server is flaky: some packets are lost, some black magic happens, and .execute() on a MySQLdb.Cursor object never returns (or takes a huge amount of time to return).
This is very bad because it wastes service worker threads. Sometimes it leads to exhaustion of the worker pool, and the service stops responding altogether.
So the question is: is there a way to interrupt a MySQLdb.Connection.execute operation after a given amount of time?
If the communication is such a problem, consider writing a 'proxy' that receives your SQL commands over the flaky connection and relays them to the MySQL server on a reliable channel (maybe running on the same box as the MySQL server). This way you have total control over failure detection and retrying.
You need to analyse exactly what the problem is. MySQL connections should eventually time out if the server is gone; TCP keepalives are generally enabled. You may be able to tune the OS-level TCP timeouts.
If the database is "flaky", then you definitely need to investigate how. It seems unlikely that the database itself is the problem; more likely the networking in between is.
If you are using (some) stateful firewalls of any kind, it's possible that they're losing some of their state, thus causing otherwise good long-lived connections to go dead.
You might want to consider changing the idle-timeout parameter in MySQL; otherwise, a long-lived, unused connection may go "stale", where the server and client both think it's still alive but some stateful network element in between has "forgotten" about the TCP connection. An application trying to use such a "stale" connection will have a long wait before receiving an error (but it should eventually get one).
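If the hang really is a network-level stall, per-connection timeouts on the client side can also turn the hang into an error. A minimal sketch, assuming the mysqlclient fork of MySQLdb, which accepts connect_timeout/read_timeout/write_timeout keyword arguments in seconds (older MySQLdb builds may not support the read/write timeouts; the host and credentials below are placeholders):
import MySQLdb

conn = MySQLdb.connect(
    host='db.example.com',       # placeholder host
    user='service',
    passwd='secret',
    db='production',
    connect_timeout=5,           # fail fast if the server is unreachable
    read_timeout=30,             # raise OperationalError if a read stalls
    write_timeout=30,
)
cur = conn.cursor()
try:
    cur.execute("SELECT 1")      # raises instead of hanging a worker thread
finally:
    cur.close()
    conn.close()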
