I'm trying to get an RPC connection to my Bitcoin Core node to work, but no matter what I try, it keeps failing.
I'm running Windows 10 and have Bitcoin Core Qt v0.21 running.
I have tried several options to get the RPC connection to work. I tried several Docker containers like btc-rpc-explorer, but those keep failing with an ECONNREFUSED error. Worried about some IP problem with Docker, I also tried running different Python scripts (like this one: https://pypi.org/project/bitcoinrpc/), but those also raise an exception indicating that no RPC connection is possible.
So it must be my Bitcoin Core node then, right? I tried many different bitcoin.conf configurations without luck. My latest:
server=1
rpcallowip=0.0.0.0/0
rpcbind=127.0.0.1
rpcbind=0.0.0.0
rpcport=8332
rpcuser=myuser
rpcpass=mypass
txindex=1
Just trying to open it up as much as possible.
I also tried running bitcoind on the command line instead of the bitcoin-qt GUI. The command-line output shows me that it picks up the correct bitcoin.conf file, so that's okay. But what is wrong?
server=1
rpcallowip=127.0.0.1
rpcport=8332
rpcuser=myuser
rpcpassword=mypass
txindex=1
If you are making your RPC calls from localhost, this conf file should be enough.
I would bind to 0.0.0.0 only if you need to query from an external IP.
rpcallowip=0.0.0.0/0 is also insecure.
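To rule out the client side, you can also poke the RPC port directly from Python. Here is a minimal sketch using the requests library; the URL, credentials, and port are placeholders and must match your bitcoin.conf:

import requests

# Bitcoin Core serves JSON-RPC over HTTP with basic auth.
# Adjust these placeholders to match your bitcoin.conf.
RPC_URL = "http://127.0.0.1:8332"
RPC_USER = "myuser"
RPC_PASS = "mypass"

payload = {"jsonrpc": "1.0", "id": "test", "method": "getblockchaininfo", "params": []}
resp = requests.post(RPC_URL, json=payload, auth=(RPC_USER, RPC_PASS), timeout=10)
print(resp.status_code)  # 401 usually means bad credentials rather than a blocked port
print(resp.json())

If this fails with a connection-refused error even when run on the node machine itself, the problem is on the bitcoind (or firewall) side, not in Docker.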
Preconditions:
I want to execute multiple dynamic commands via ssh from Python on one remote machine at a time
I couldn't find any existing modules matching my "flavour" (if you care why, see below (*) ;))
The Python scripts are running locally on an Ubuntu machine
In general, for single "one action" calls I simply do a native ssh call using subprocess.Popen, and it works fine.
But for multiple subsequent dynamic calls, I don't want to create a new ssh connection for every command, even if the remote host might allow it. I thought of the following solution:
1) Configure my local ssh on Ubuntu to use multiplexing, so that as long as a connection is open, it is reused instead of creating a new one (https://www.admin-magazin.de/News/Tipps/Mit-SSH-Multiplexing-schneller-einloggen, sorry, in German)
2) Create an ssh connection by opening it in a running background thread that itself does nothing, besides maybe a "keepalive" if necessary, and keep the connection open until it's closed (e.g. by stopping the thread). (http://sebastiandahlgren.se/2014/06/27/running-a-method-as-a-background-thread-in-python/ )
3) Still execute ssh calls simply via subprocess.Popen, but now automatically reuse the open connection thanks to the ssh multiplexing config.
Should this work, or is there a fallacy alert?
(*) What I don't want:
Most solutions/examples I found used paramiko. On my first "happy path" it worked like a charm, but the first failure test resulted in an internal AttributeError (https://github.com/paramiko/paramiko/issues/1617), and I don't want to build anything on that.
Other libs I found, e.g. http://robotframework.org/SSHLibrary/SSHLibrary.html, don't seem to have a real community using them.
pexpect... the whole "expect" concept gives me the creeps and should, in my opinion, only be used if there's absolutely no other reasonable option ;)
What you've proposed is fine, but you don't even need to keep an ssh connection running in a background thread. If you configure ControlMaster (for reusing an existing connection) and ControlPersist (for keeping the master connection open even when all other connections have closed), then new ssh connections will continue to use the shared connection (as long as they happen before the ControlPersist timeout).
This means that if you set up the ControlMaster configuration external to your code (e.g., in ~/.ssh/config), your code doesn't even need to be aware of the configuration: it can just continue to call ssh normally, and ssh will take care of reusing the connection.
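If you prefer to keep everything in the script, you can also pass the multiplexing options on the ssh command line instead of putting them in ~/.ssh/config. A rough sketch (the host and socket path are placeholders; subprocess.run with capture_output needs Python 3.7+):

import subprocess

HOST = "user@remote-host"  # placeholder
SSH = [
    "ssh",
    "-o", "ControlMaster=auto",              # reuse an existing master connection, or create one
    "-o", "ControlPath=~/.ssh/cm-%r@%h:%p",  # socket shared by all multiplexed connections
    "-o", "ControlPersist=10m",              # keep the master alive 10 minutes after the last client exits
    HOST,
]

# Only the first call pays the connection/authentication cost;
# the later ones ride on the shared master connection.
for cmd in ["hostname", "uptime", "df -h /"]:
    result = subprocess.run(SSH + [cmd], capture_output=True, text=True)
    print(result.stdout, end="")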
I am consistently getting this error under normal conditions. I am using the Python Cassandra driver (v3.11) to connect locally with RPC enabled. The issue presents itself after a period of time. My assumption was that it was related to the maximum number of connections or queries. Any pointers on where to begin troubleshooting would be greatly appreciated.
Please check whether your nodes are really listening by opening up a separate connection, say from a cqlsh terminal; as you say it is running locally, it is probably a single node. If that connects, you might want to see how many file handles are open, in case it is running out of those. We had a similar problem a couple of years back that was attributed to available file handles.
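If you want to do that first check from Python with the same driver instead of cqlsh, a minimal sketch looks like this (contact point and port are the driver defaults; adjust as needed):

from cassandra.cluster import Cluster

# Open a short-lived, separate connection just to confirm the node is reachable.
cluster = Cluster(["127.0.0.1"], port=9042)
try:
    session = cluster.connect()
    row = session.execute("SELECT release_version FROM system.local").one()
    print("Connected, Cassandra release:", row.release_version)
finally:
    cluster.shutdown()

If this hangs or raises NoHostAvailable while your long-running process is failing, open file handles (ulimit -n, or lsof against the Cassandra process) are a good next thing to look at.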
I'm using Locust.io to load test an application. I get random errors whose cause I am unable to pinpoint:
1)
ConnectionError(ProtocolError('Connection aborted.', BadStatusLine("''",)),)
2)
ConnectionError(ProtocolError('Connection aborted.', error(104, 'Connection reset by peer')),)
The first one happens a few times every 1,000,000 requests or so, and seems to happen in groups where there will be 5-20 all at once and then it is fine. The second only happens every couple of days or so.
CPU and memory usage are well below maximum on all the servers: the database server, the app server, and the machine running Locust.io.
The servers are medium-sized Linode servers running Ubuntu 14.04. The app is Django and the database is PostgreSQL. I have already increased the maximum open file limit, but I am wondering if something else on the server needs to be increased that could be leading to the occasional errors.
From what I have been able to gather from searching, the error might have something to do with the Python requests library.
Any help would be greatly appreciated.
BadStatusLine is most likely a server-side issue; see for example this answer: https://stackoverflow.com/a/1767954/1591921. It could be some sort of flood/DoS protection on the server.
Connection reset by peer could also be any number of things, but it is most likely a server/network issue, not an issue on the loadgen side (perhaps connections are idle for too long, or there is a max connection age somewhere).
I don't think there are any general answers to this question; it all depends on your system under test.
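That said, if you decide these rare errors are acceptable and just want the client side to absorb them rather than record failures, one option is a retrying HTTP session. This is a plain requests sketch, not Locust-specific, and the URL and retry counts are placeholders:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# Retry connection-level failures (refused/reset) a few times with backoff.
retry = Retry(total=3, connect=3, read=3, backoff_factor=0.5)
adapter = HTTPAdapter(max_retries=retry)
session.mount("http://", adapter)
session.mount("https://", adapter)

resp = session.get("http://your-app.example/", timeout=10)  # placeholder URL
print(resp.status_code)

Keep in mind that retrying only hides the symptom; it does not explain why the server occasionally drops connections.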
I wrote a simple Python program to play and pause the Banshee music player.
While it's working on my own machine, I have trouble doing the same to a remote computer connected to the same router (LAN).
I edited the session.conf of the remote machine to add this line:
<listen>tcp:host=localhost,port=12434</listen>
and here is my program:
import dbus

bus_obj = dbus.bus.BusConnection("tcp:host=localhost,port=12434")
proxy_object = bus_obj.get_object('org.bansheeproject.Banshee',
                                  '/org/bansheeproject/Banshee/PlayerEngine')
playerengine_iface = dbus.Interface(proxy_object,
                                    dbus_interface='org.bansheeproject.Banshee.PlayerEngine')

var = 0
while var != "3":
    var = raw_input("\nPress\n1 to play\n2 to pause\n3 to exit\n")
    if var == "1":
        print "playing..."
        playerengine_iface.Play()
    elif var == "2":
        print "pausing"
        playerengine_iface.Pause()
This is what I get when I try to execute it:
Traceback (most recent call last):
File "dbus3.py", line 4, in <module>
bus_obj=dbus.bus.BusConnection("tcp:host=localhost,port=12434")
File "/usr/lib/python2.7/dist-packages/dbus/bus.py", line 125, in __new__
bus = cls._new_for_bus(address_or_type, mainloop=mainloop)
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoServer: Failed to connect to socket "localhost:12434" Connection refused
What am I doing wrong here?
Should I edit /usr/lib/python2.7/dist-packages/dbus/bus.py?
UPDATE:
OK, here is the deal:
When I add
<listen>tcp:host=192.168.1.7,port=12434</listen>
to /etc/dbus-1/session.conf and then reboot, hoping it would start listening on reboot,
it never boots. It gets stuck on the loading screen, and occasionally a black screen with the following text flashes:
PulseAudio configured for per-user sessions
saned disabled; edit /etc/default/saned
So, when I go Ctrl+Alt+F1, change session.conf back to its original state and reboot, it boots properly.
What's all that about?
How can I make dbus daemon listen for tcp connections, without encountering problems?
I recently needed to set this up, and discovered that the trick is: order matters for the <listen> elements in session.conf. You should make sure the TCP element occurs first. Bizarre, I know, but true, at least for my case. (I see exactly the same black screen behavior if I reverse the order and put the UNIX socket <listen> element first.)
Also, prepending the TCP <listen> tag is necessary, but not sufficient. To make remote D-Bus connections via TCP work, you need to do three things:
Add a <listen> tag above the UNIX one, similar to this:
<listen>tcp:host=localhost,bind=*,port=55556,family=ipv4</listen>
<listen>unix:tmpdir=/tmp</listen>
Add a line (right below the <listen> tags is fine) that says:
<auth>ANONYMOUS</auth>
Add another line below these that says:
<allow_anonymous/>
The <auth> tag should be added in addition to any other <auth> tags that may be contained in your session.conf. In summary, your session.conf should contain a snippet that looks like this:
<listen>tcp:host=localhost,bind=*,port=55556,family=ipv4</listen>
<listen>unix:tmpdir=/tmp</listen>
<auth>ANONYMOUS</auth>
<allow_anonymous/>
After doing these three things, you should be able to connect to the session bus remotely, for example by specifying the TCP address as a remote connection in D-Feet, or from a short script like the sketch below.
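With the same python-dbus approach used in the question, a remote connection would look roughly like this (the host and port are placeholders and must match your <listen> tag):

import dbus

# 192.168.1.7:55556 is a placeholder; use the host/port from your <listen> tag.
bus = dbus.bus.BusConnection("tcp:host=192.168.1.7,port=55556")
proxy = bus.get_object("org.freedesktop.DBus", "/org/freedesktop/DBus")
iface = dbus.Interface(proxy, dbus_interface="org.freedesktop.DBus")
print(iface.ListNames())  # well-known names registered on the remote bus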
Note that, if you want to connect to the system bus, too, you need to make similar changes to /etc/dbus-1/system.conf, but specify a different TCP port, for example 55557. (Oddly enough, the element order appears not to matter in this case.)
The only weird behavior I've noticed in this configuration is that running Desktop apps with sudo (e.g., sudo gvim) tends to generate errors or fail outright saying "No D-BUS daemon running". But this is something I need to do so rarely that it hardly matters.
If you want to send to a remote machine using dbus-send, you need to set DBUS_SESSION_BUS_ADDRESS accordingly, e.g., to something like:
export DBUS_SESSION_BUS_ADDRESS=tcp:host=localhost,bind=*,port=55556,family=ipv4
This works even if the bus you want to send to is actually the system bus of the remote machine, as long as the setting matches the TCP <listen> tag in /etc/dbus-1/system.conf on the target. (Thanks to Martin Vidner for this tip. Until I stumbled across his answer to this question, I didn't believe dbus-send supported remote operation.)
UPDATE: If you're using systemd (and want to access the system bus), you might also need to add a line saying ListenStream=55557 to /lib/systemd/system/dbus.socket, like so:
[Socket]
ListenStream=/var/run/dbus/system_bus_socket
ListenStream=55557 # <-- Add this line
UPDATE2: Thanks to @altagir for pointing out that recent versions of D-Bus will enable AppArmor mediation on systems where it's available, so you may also need to add <apparmor mode="disabled"/> to session.conf/system.conf for these instructions to work.
Since D-Bus 1.6.12 (e.g. Kubuntu 13.10), your connection will also be rejected unless you add the following to your D-Bus config file (either /etc/dbus-1/mybus.conf or the interface requiring remote access, i.e. system.d/my.interface.conf):
<apparmor mode="disabled"/>
UPDATE: After struggling to create an AppArmor profile allowing the service to connect to the custom dbus-daemon, it seems the connection is always rejected due to a bug in D-Bus... So for now we MUST disable AppArmor whenever you use tcp=... The bug fix is targeted for 14.04.
I opened a bug at bugs.launchpad.net following discussion here with Tyler Hicks:
The AppArmor mediation code only has the ability to check peer labels
over UNIX domain sockets. It is most likely seeing an error when
getting the label and then refusing the connection.
Note:
The disable flag is not recognized by dbus < 1.6.12, so you need to package different versions of mydaemon.conf depending on the system, else dbus-daemon will fail on launch if AppArmor is not present... For now I used this in my CMakeLists.txt:
IF(EXISTS "/usr/sbin/apparmor_status")
install(FILES dbus_daemon-apparmordisabled.conf RENAME dbus_daemon.conf DESTINATION /etc/dbus-1/ )
ELSE (EXISTS "/usr/sbin/apparmor_status")
install(FILES dbus_daemon.conf DESTINATION /etc/dbus-1/ )
ENDIF(EXISTS "/usr/sbin/apparmor_status")
Another thanks to @Shorin, and another FYI: I had to do something like this to make mine work:
<listen>tcp:host=localhost,bind=0.0.0.0,port=55884</listen>
Note the bind=0.0.0.0: the bind=* didn't work for me, and I left out the family=ipv4 part. I'm on Ubuntu 12.04. I used netstat on the remote machine to confirm that dbus was listening on the port, and telnet from the local machine to confirm that the port was open.
netstat -plntu | grep 55884
tcp 0 0 0.0.0.0:55884 0.0.0.0:* LISTEN 707/dbus-daemon
You want to see something like 0.0.0.0:55884 and not something like 127.0.0.1:55884.
I am getting the error OperationalError: FATAL: sorry, too many clients already when using psycopg2. I am calling the close method on my connection instance after I am done with it. I am not sure what could be causing this; it is my first experience with Python and PostgreSQL, but I have a few years of experience with PHP, ASP.NET, MySQL, and SQL Server.
EDIT: I am running this locally; if the connections are closing like they should, then I only have one connection open at a time. I did have a GUI open to the database, but I still get this error even with it closed. It happens very shortly after I run my program. I have a function I call that returns a connection that is opened like:
psycopg2.connect(connectionString)
Thanks
Final Edit:
It was my mistake: I was accidentally calling the same method recursively, which was opening a new connection over and over. It has been a long day...
This error means what it says: there are too many clients connected to PostgreSQL.
Questions you should ask yourself:
Are you the only one connected to this database?
Are you running a graphical IDE?
What method are you using to connect?
Are you testing queries at the same time that you are running the code?
Any of these things could be the problem. If you are the admin, you can raise the maximum number of clients, but if a program is holding connections open, that won't help for long.
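If you can still get one connection open, you can check the numbers yourself from psycopg2; this is just a sketch with a placeholder connection string:

import psycopg2

conn = psycopg2.connect("dbname=mydb user=myuser")  # placeholder DSN
cur = conn.cursor()
cur.execute("SHOW max_connections;")
print("max_connections:", cur.fetchone()[0])
cur.execute("SELECT count(*) FROM pg_stat_activity;")
print("connections in use:", cur.fetchone()[0])
cur.close()
conn.close()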
There are many reasons why you could be having too many clients running at the same time.
Make sure your DB connection command isn't in any kind of loop. I was getting the same error from my script until I moved my db.database() call out of my program's repeating execution loop. See the sketch below for the same idea with psycopg2.
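To illustrate the point with psycopg2 (the DSN and the queries are placeholders): connecting inside the loop opens one server connection per iteration, while connecting once outside and closing it when done does not.

import psycopg2

# Problematic: psycopg2.connect(...) inside the loop opens (and never closes)
# one server connection per iteration, which quickly hits the limit.

# Better: connect once, reuse the connection, and close it when done.
conn = psycopg2.connect("dbname=mydb user=myuser")  # placeholder DSN
try:
    cur = conn.cursor()
    for item in ["a", "b", "c"]:  # placeholder work items
        cur.execute("SELECT %s;", (item,))
        print(cur.fetchone()[0])
    conn.commit()
finally:
    conn.close()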
It simply means too many clients are making transactions to PostgreSQL at the same time.
I was running a PostGIS container and Django in a different Docker container. Hence, in my case, restarting both containers solved the problem.