I am consistently getting this error under normal conditions. I am using the Python Cassandra driver (v3.11) to connect locally with RPC enabled. The issue presents itself after a period of time. My assumption was that it was related to the maximum number of connections or queries. Any pointers on where to begin troubleshooting would be greatly appreciated.
Please check whether your nodes are really listening by opening a separate connection from, say, a cqlsh terminal; as you say it is running locally, it is probably a single node. If that connects, you might want to see how many file handles are open, as it may be running out of those. We had a similar problem a couple of years back that was attributed to available file handles.
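If you want a quick check from outside the application, here is a minimal sketch (assuming a single local node on the default CQL port 9042) that opens a fresh connection with the same driver:

from cassandra.cluster import Cluster

# Connect to the local node directly, independent of the application's pool.
cluster = Cluster(["127.0.0.1"], port=9042)
try:
    session = cluster.connect()
    row = session.execute("SELECT release_version FROM system.local").one()
    print("connected, Cassandra version:", row.release_version)
finally:
    cluster.shutdown()  # release the driver's own connection pool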
I'm trying to get an RPC connection to my Bitcoin Core node to work, but no matter what I try, it keeps failing.
I'm running Windows 10 and have Bitcoin Core Qt v0.21 running.
I have tried several options to get the RPC connection to work. I tried several Docker containers like btc-rpc-explorer, but those keep failing with an ECONNREFUSED error. Worried about some IP problem with Docker, I also tried running different Python scripts (like this one: https://pypi.org/project/bitcoinrpc/), but those also raise an exception indicating that no RPC connection is possible.
So it must be my Bitcoin Core node then, right? I tried many different bitcoin.conf configurations without luck. My latest:
server=1
rpcallowip=0.0.0.0/0
rpcbind=127.0.0.1
rpcbind=0.0.0.0
rpcport=8332
rpcuser=myuser
rpcpassword=mypass
txindex=1
Just trying to open it up as much as possible.
I also tried running bitcoind on the command line instead of the bitcoin-qt GUI. The command-line output shows that it picks up the correct bitcoin.conf file, so that's okay. But what is wrong?
server=1
rpcallowip=127.0.0.1
rpcport=8332
rpcuser=myuser
rpcpassword=mypass
txindex=1
If you are making your RPC calls from localhost, this conf file should be enough.
I would bind to 0.0.0.0 only if you need to query from an external IP.
rpcallowip=0.0.0.0/0 is also insecure.
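If you want to test the RPC interface from localhost without any extra tooling, here is a minimal sketch using only the requests library (the credentials are the placeholders from the conf above, and getblockchaininfo is just a convenient read-only method):

import requests

# Bitcoin Core's JSON-RPC endpoint listens on rpcport with HTTP basic auth.
payload = {"jsonrpc": "1.0", "id": "test", "method": "getblockchaininfo", "params": []}
resp = requests.post("http://127.0.0.1:8332",
                     json=payload,
                     auth=("myuser", "mypass"),
                     timeout=10)
resp.raise_for_status()
print(resp.json()["result"]["blocks"])  # current block height if the call worked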
I just want to know how to close the connection in py2neo.
graph = py2neo.Graph(password='xxxxx', host='xxxx')
I try to use
graph.close()
But I receive the following message:
AttributeError: 'Graph' object has no attribute 'close'
Library version: py2neo==3.1.2
Regards.
There is no close method. I was wondering the same thing, and having seen no other answer, I started using netstat and tcpdump to watch the behavior of Neo4j when connecting via py2neo.
Here's what I learned...
(1) It seems that Neo4j (when connecting via HTTP) handles requests in a very RESTful way (no persistent connection as with other databases, e.g. Postgres). This means there is actually no need for a .close() method.
(2) The downside is that you may end up building a list of connections in TIME_WAIT status, because no Connection: close header is sent. Under low load this should not be a problem; at scale, however, it will need some tuning at the operating-system level. (I'll forgo how Java programmers seem to be notorious for not cleaning up after themselves and leaving it to someone else to do; I rant about this too much on too many applications.)
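If you want to see how many of those TIME_WAIT sockets you have accumulated, here is a minimal sketch (assuming the psutil package and Neo4j's default HTTP port 7474; adjust the port to your setup):

import psutil

# Count sockets in TIME_WAIT towards the Neo4j HTTP port.
# On Linux this may need root to see sockets owned by other users.
time_wait = [c for c in psutil.net_connections(kind="tcp")
             if c.status == psutil.CONN_TIME_WAIT
             and c.raddr and c.raddr.port == 7474]
print("connections in TIME_WAIT to Neo4j:", len(time_wait))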
Hopefully this helps. Happy Hacking!
To just free up the object, I used:
del graph
So far, no issues. This was because I didn't want a graph and an OGM repo connection open at the same time, which doesn't appear to be an issue anyway.
I have found this problem in Python but I was also able to reproduce it with a basic C program.
I am on CentOS 6 (also tested on 7); I have not tried other Linux distributions.
I have an application on 2 VMs. One has IP address 10.0.13.30 and the other 10.0.13.56. They share an FQDN to allow DNS-based load balancing (and high availability) using gethostbyname or getaddrinfo (which is what the Python docs suggest).
If my client application is on a different subnet (10.0.12.x for example), I have no problem: socket.gethostbyname(FQDN) randomly returns 10.0.13.30 and 10.0.13.56.
But if my client application is on the same subnet, it always returns the same entry, and it always seems to be the "closest" one: I deployed it on 10.0.13.31 and it always returns 10.0.13.30, and on 10.0.13.59 it always returns 10.0.13.56.
On these servers, CLI commands such as ping and dig return the results in a different order almost every time.
I have searched many threads and concluded that it seems to be a kind of prioritization done by glibc to improve the chances of success, but I have not found any way to disable it.
Clearly, in my case the two client VMs and the two server VMs are on VMware, connected to a single router, so I do not see why the fact that the last byte of the server's IP is closest to the last byte of the client's IP should be taken into account.
This is a replication of a problem I have at a customer site, so just moving the VMs to a different subnet is not an option for me :-( ....
Does anybody have an idea how to get correct load balancing within the same subnet? I can partially control the VM configuration, so if a setting has to be changed I can do it.
Instead of hoping that the standard library will do load balancing for you, use socket.getaddrinfo() and explicitly choose one of the resulting hosts at random. This also makes it easy to fail over to a different host if the first one you try is unavailable.
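A minimal sketch of that approach (the FQDN and port are placeholders):

import random
import socket

def connect_random(fqdn, port, timeout=3):
    # Resolve all A records, then try them in random order instead of
    # relying on the order glibc hands them back.
    infos = socket.getaddrinfo(fqdn, port, socket.AF_INET, socket.SOCK_STREAM)
    addrs = [info[4] for info in infos]
    random.shuffle(addrs)
    for addr in addrs:
        try:
            return socket.create_connection(addr, timeout=timeout)
        except OSError:
            continue  # this host is down, fail over to the next one
    raise OSError("no reachable host for %s" % fqdn)

# sock = connect_random("service.example.com", 5432)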
I'm using Locust.io to load test an application. I get random errors whose cause I am unable to pinpoint:
1)
ConnectionError(ProtocolError('Connection aborted.', BadStatusLine("''",)),)
2)
ConnectionError(ProtocolError('Connection aborted.', error(104, 'Connection reset by peer')),)
The first one happens a few times every 1,000,000 requests or so and seems to come in groups, where there will be 5-20 all at once and then it is fine. The second only happens every couple of days or so.
CPU and memory are well below maximum load on all the servers: the database server, the app server, and the machine running Locust.io.
The servers are medium-sized Linode servers running Ubuntu 14.04. The app is Django and the database is PostgreSQL. I have already increased the maximum open file limit, but I am wondering if something else on the server needs to be increased that could be causing the occasional errors.
From what I have been able to gather from searching, the error might have something to do with the Python requests library.
Any help would be greatly appreciated.
BadStatusLine is most likely a server-side issue. See for example this answer: https://stackoverflow.com/a/1767954/1591921. It could be some sort of flood/DoS protection on the server.
Connection reset by peer could also be any number of things, but it is most likely a server/network issue, not an issue on the load-generator side (perhaps connections are idle for too long, or there is a max connection age somewhere).
I don't think there is a general answer to this question; it all depends on your system under test.
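If you want to rule out purely transient client-side resets while you investigate the server, here is a minimal sketch of a plain requests session with bounded retries on connection-level failures (the URL is a placeholder; this is a diagnostic aid, not a fix for a server-side cause):

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry connection failures and a few gateway errors with a short backoff.
session = requests.Session()
retries = Retry(total=3, backoff_factor=0.5, status_forcelist=[502, 503, 504])
session.mount("http://", HTTPAdapter(max_retries=retries))
session.mount("https://", HTTPAdapter(max_retries=retries))

resp = session.get("http://app.example.com/health", timeout=10)
print(resp.status_code)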
I am getting the error OperationalError: FATAL: sorry, too many clients already when using psycopg2. I am calling the close method on my connection instance after I am done with it. I am not sure what could be causing this; it is my first experience with Python and PostgreSQL, but I have a few years' experience with PHP, ASP.NET, MySQL, and SQL Server.
EDIT: I am running this locally; if the connections are closed like they should be, then I only have one connection open at a time. I did have a GUI open to the database, but even with it closed I am getting this error. It happens very shortly after I run my program. I have a function I call that returns a connection, opened like:
psycopg2.connect(connectionString)
Thanks
Final edit:
It was my mistake: I was recursively calling the same method by accident, which opened a new connection over and over. It has been a long day...
This error means what it says: there are too many clients connected to PostgreSQL.
Questions you should ask yourself:
Are you the only one connected to this database?
Are you running a graphical IDE?
What method are you using to connect?
Are you testing queries at the same time that you are running the code?
Any of these things could be the problem. If you are the admin, you can increase the number of allowed clients, but if a program is holding connections open, that won't help for long.
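To see who is actually connected and what the server-side limit is, here is a minimal sketch using psycopg2 (the DSN is a placeholder):

import psycopg2

conn = psycopg2.connect("dbname=mydb user=myuser host=localhost")
try:
    cur = conn.cursor()
    cur.execute("SHOW max_connections;")
    print("max_connections:", cur.fetchone()[0])
    # Group current sessions so you can spot who is holding them open.
    cur.execute("SELECT usename, application_name, state, count(*) "
                "FROM pg_stat_activity GROUP BY 1, 2, 3;")
    for row in cur.fetchall():
        print(row)
finally:
    conn.close()  # always release the connection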
There are many reasons why you could have too many clients connected at the same time.
Make sure your database connection call isn't inside any kind of loop. I was getting the same error from my script until I moved my db.database() call out of my program's repeating execution loop.
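For example, a minimal sketch of opening one connection outside the loop and reusing it (the DSN and table are placeholders):

import psycopg2

conn = psycopg2.connect("dbname=mydb user=myuser host=localhost")
try:
    with conn:  # commits on success, rolls back on error
        with conn.cursor() as cur:
            for item in ["a", "b", "c"]:
                # Reuse the one connection instead of connecting per iteration.
                cur.execute("INSERT INTO items (name) VALUES (%s)", (item,))
finally:
    conn.close()  # a single close once all the work is done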
It simply means many clients are making transactions to PostgreSQL at the same time.
I was running a PostGIS container and Django in different Docker containers. In my case, restarting both the db and the Django containers solved the problem.