How to pass SSH options with Fabric?

We are trying to improve automation of some server processes; we use Fabric. I anticipate having to manage multiple hosts, which means SSH connections will have to be made to servers that haven't been SSH'd into before. When that happens, SSH asks for verification of the host key, which would break automation.
I have worked around this issue, in the same process, by passing the -o StrictHostKeyChecking=no option to an SSH command that I use to synchronize code with rsync, but I will also need it on calls made with Fabric.
Is there a way to pass ssh-specific options to Fabric, in particular the one I mentioned above?

The short answer is:
For new hosts, nothing is needed: env.reject_unknown_hosts defaults to False, so unknown hosts are accepted automatically.
For known hosts with changed keys, setting env.disable_known_hosts = True makes Fabric proceed and connect to hosts whose keys have changed.
Read ye olde docs: http://docs.fabfile.org/en/1.5/usage/ssh.html#unknown-hosts
The paramiko library is capable of loading up your known_hosts file,
and will then compare any host it connects to, with that mapping.
Settings are available to determine what happens when an unknown host
(a host whose hostname or IP is not found in known_hosts) is seen:
Reject: the host key is rejected and the connection is not made. This results in a Python exception, which will terminate your Fabric session with a message that the host is unknown.
Add: the new host key is added to the in-memory list of known hosts, the connection is made, and things continue normally. Note that this does not modify your on-disk known_hosts file!
Ask: not yet implemented at the Fabric level, this is a paramiko library option which would result in the user being prompted about the unknown key and whether to accept it.
Whether to reject or add hosts, as above, is controlled in Fabric via
the env.reject_unknown_hosts option, which is False by default for
convenience’s sake. We feel this is a valid tradeoff between
convenience and security; anyone who feels otherwise can easily modify
their fabfiles at module level to set env.reject_unknown_hosts = True.
http://docs.fabfile.org/en/1.5/usage/ssh.html#known-hosts-with-changed-keys
Known hosts with changed keys
The point of SSH’s key/fingerprint tracking is so that
man-in-the-middle attacks can be detected: if an attacker redirects
your SSH traffic to a computer under his control, and pretends to be
your original destination server, the host keys will not match. Thus,
the default behavior of SSH (and its Python implementation) is to
immediately abort the connection when a host previously recorded in
known_hosts suddenly starts sending us a different host key.
In some edge cases such as some EC2 deployments, you may want to
ignore this potential problem. Our SSH layer, at the time of writing,
doesn’t give us control over this exact behavior, but we can sidestep
it by simply skipping the loading of known_hosts – if the host list
being compared to is empty, then there’s no problem. Set
env.disable_known_hosts to True when you want this behavior; it is
False by default, in order to preserve default SSH behavior.
Warning Enabling env.disable_known_hosts will leave you wide open to
man-in-the-middle attacks! Please use with caution.
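Putting the short answer into code: a minimal fabfile sketch for Fabric 1.x, with both settings applied at module level (the task is just an example):

# fabfile.py -- minimal sketch for Fabric 1.x
from fabric.api import env, run

# Unknown hosts are added to the in-memory list automatically
# (False is already the default):
env.reject_unknown_hosts = False

# Skip loading known_hosts entirely, so changed keys are ignored.
# WARNING: this leaves you open to man-in-the-middle attacks.
env.disable_known_hosts = True

def uptime():  # example task
    run("uptime")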

Related

ssh multiplexing & python: Will it work like this?

Preconditions:
I want to execute multiple dynamic commands via ssh from Python on one remote machine at a time
I couldn't find any existing modules matching my "flavour" (if you care why, see below (*) ;))
The Python scripts run locally on an Ubuntu machine
In general, for single "one action" calls, I simply do a native ssh call using subprocess.Popen, and it works fine.
But for multiple subsequent dynamic calls, I don't want to create a new ssh connection for every command, even if the remote host might allow it. I thought of the following solution:
1) Configure my local ssh on Ubuntu to use multiplexing, so that as long as a connection is open, it is used instead of creating a new one (https://www.admin-magazin.de/News/Tipps/Mit-SSH-Multiplexing-schneller-einloggen (sorry, the article is in German))
2) Create an ssh connection in a background thread that does nothing itself, besides maybe a "keepalive" if necessary, and keep the connection open until it is explicitly closed (i.e. by stopping the thread) (http://sebastiandahlgren.se/2014/06/27/running-a-method-as-a-background-thread-in-python/)
3) Still execute ssh calls simply via subprocess.Popen, now automatically reusing the open connection thanks to the ssh multiplexing config.
Should this work, or is there a fallacy alert?
(*) What I don't want:
Most solutions/examples I found used paramiko. On my first "happy path" it worked like a charm, but the first failure test resulted in an internal AttributeError (https://github.com/paramiko/paramiko/issues/1617) and I don't want to build anything on that.
Other libs I found, e.g. http://robotframework.org/SSHLibrary/SSHLibrary.html, don't seem to have a real community using them.
pexpect... the whole "expect" concept gives me the creeps and should, in my opinion, only be used if there's absolutely no reasonable alternative ;)
What you've proposed is fine, but you don't even need to keep an ssh connection running in a background thread. If you configure ControlMaster (for reusing an existing connection) and ControlPersist (for keeping the master connection open even when all other connections have closed), then new ssh connections will continue to use the shared connection (as long as they happen before the ControlPersist timeout).
This means that if you set up the ControlMaster configuration external to your code (e.g., in ~/.ssh/ssh_config), your code doesn't even need to be aware of the configuration: it can just continue to call ssh normally, and ssh will take care of reusing the connection.
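A minimal sketch of this approach (the host, socket path, and timeout are example values; here the options are passed on the command line instead of via ~/.ssh/ssh_config):

# Reuse one multiplexed ssh connection across plain subprocess calls.
# Requires OpenSSH with ControlMaster support.
import subprocess

HOST = "user@example.com"  # placeholder host
MUX = ["-o", "ControlMaster=auto",
       "-o", "ControlPath=/tmp/ssh-mux-%r@%h:%p",
       "-o", "ControlPersist=10m"]

def ssh(command):
    # The first call opens the master connection; later calls reuse it
    # until the ControlPersist timeout expires.
    return subprocess.check_output(["ssh"] + MUX + [HOST, command])

print(ssh("uptime"))
print(ssh("hostname"))  # reuses the existing master connection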

EC2 fails to connect via FTPS, but works locally

I'm running Python 2.6.5 on EC2 and I've replaced the old ftplib with the newer one from Python 2.7 that allows importing of FTP_TLS. Yet the following hangs up on me:
from ftplib import FTP_TLS
ftp = FTP_TLS('host', 'username', 'password')
ftp.retrlines('LIST')  # times out after 15-20 min
I'm able to run these three lines successfully in a matter of seconds on my local machine, but it fails on ec2. Any idea as to why this is?
Thanks.
It certainly sounds like a problem related to whether or not you're in PASSIVE mode on your FTP connection, and whether both ends of the connection can support it.
The ftplib documentation suggests that passive mode is on by default, which is a shame, because I was going to suggest that you turn it on. Instead, I'll suggest that you call set_debuglevel so you can see the lower levels of the protocol happening and check what mode you're in. That should give you information on how to proceed. Either you're in passive mode and the other end can't deal with it properly, or (hopefully) you're not, but you should be.
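A minimal sketch of that debugging step (the connection details are the question's placeholders):

from ftplib import FTP_TLS

ftps = FTP_TLS('host', 'username', 'password')
ftps.set_debuglevel(2)   # dump each command/response on the control channel
ftps.retrlines('LIST')   # watch for PASV/PORT in the trace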
FTP and FTPS (but not SFTP) can be configured so that the server makes a backwards connection to the client for the actual transfers or so that the client makes a second forward connection to the server for the transfers. The former, especially, is prone to complications whenever network address translation is involved. Without the TLS, some firewalls can actually rewrite the FTP session traffic to make it magically work, but with TLS that's impossible due to encryption.
The fact that you are presumably authenticating and then timing out when you try to transfer data (LIST requires a second connection in one direction or the other) is the classic symptom, usually, of a setup that either needs passive mode, OR, there's this:
Connect as usual to port 21 implicitly securing* the FTP control connection before authenticating. Securing the data connection requires the user to explicitly ask for it by calling the prot_p() method.
ftps.prot_p() # switch to secure data connection
ftps.retrlines('LIST') # list directory content securely
I don't work with FTPS often, since SFTP is so much less problematic, but if you're not doing that, the far end server might not be cooperating.
*note, I suspect this sentence is trying to say that FTP_TLS "implicitly secures the FTP control connection" in contrast with the explicit securing of the data connection.
If you're still having trouble, try ruling out Amazon firewall problems. (I'm assuming you're not using a host-based firewall.)
If your EC2 instance is in a VPC then in the AWS Management Console could you:
ensure you have an internet gateway
ensure that the subnet your EC2 instance is in has a default route (0.0.0.0/0) configured pointing at the internet gateway
in the Security Group for both inbound and outbound allow All Traffic from all sources (0.0.0.0/0)
in the Network ACLs for both inbound and outbound allow All Traffic from all sources (0.0.0.0/0)
If your EC2 instance is NOT in a VPC then in the AWS Management Console could you:
in the Security Group for inbound allow All Traffic from all sources (0.0.0.0/0)
Only do this in a test environment! (obviously)
This will open your EC2 instance up to all traffic from the internet. Hopefully you'll find that your FTPS is now working. Then you can gradually reapply the security rules until you find out the cause of the problem. If it's still not working then the AWS firewall is not the cause of the problem (or you have more than one problem).

python: interact with shell command using Popen

I need to execute several shell commands using Python, but I couldn't resolve one of the problems. When I scp to another machine, it usually prompts and asks whether to add this machine to the known hosts. I want the program to input "yes" automatically, but I couldn't get it to work. My program so far looks like this:
from subprocess import Popen, PIPE, STDOUT

def auto():
    user = "abc"
    inst_dns = "example.com"
    private_key = "sample.sem"
    capFile = "/home/ubuntu/*.cap"
    temp = "%s@%s:~" % (user, inst_dns)
    scp_cmd = ["scp", "-i", private_key, capFile, temp]
    print("The scp command is: %s" % " ".join(scp_cmd))
    scpExec = Popen(scp_cmd, shell=False, stdin=PIPE, stdout=PIPE)
    # this is the place I tried to write "yes",
    # but it doesn't work
    scpExec.stdin.write("yes\n")
    scpExec.stdin.flush()
    while True:
        output = scpExec.stdout.readline()
        print("output: %s" % output)
        if output == "":
            break
If I run this program, it still prompts and asks for input. How can I respond to the prompt automatically? Thanks.
You're being prompted to add the host key to your known_hosts file because ssh is configured for StrictHostKeyChecking. From the man page:
StrictHostKeyChecking
If this flag is set to “yes”, ssh(1) will never automatically add host keys to the ~/.ssh/known_hosts
file, and refuses to connect to hosts whose host key has changed. This provides maximum protection
against trojan horse attacks, though it can be annoying when the /etc/ssh/ssh_known_hosts file is
poorly maintained or when connections to new hosts are frequently made. This option forces the user
to manually add all new hosts. If this flag is set to “no”, ssh will automatically add new host keys
to the user known hosts files. If this flag is set to “ask”, new host keys will be added to the user
known host files only after the user has confirmed that is what they really want to do, and ssh will refuse to connect to hosts whose host key has changed. The default is “ask”.
You can set StrictHostKeyChecking to "no" if you want ssh/scp to automatically accept new keys without prompting. On the command line:
scp -o StrictHostKeyChecking=no ...
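Applied to the question's code, a minimal sketch (the key, files, and host are the question's placeholders; glob expands the wildcard, since no shell does it when shell=False):

import glob
from subprocess import Popen, PIPE

# Expand the wildcard locally; with shell=False no shell will do it.
files = glob.glob("/home/ubuntu/*.cap")
scp_cmd = (["scp", "-o", "StrictHostKeyChecking=no", "-i", "sample.sem"]
           + files + ["abc@example.com:~"])
proc = Popen(scp_cmd, stdout=PIPE, stderr=PIPE)
out, err = proc.communicate()
print(out)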
You can also enable batch mode:
BatchMode
If set to “yes”, passphrase/password querying will be disabled. This option is useful in scripts and
other batch jobs where no user is present to supply the password. The argument must be “yes” or
“no”. The default is “no”.
With BatchMode=yes, ssh/scp will fail instead of prompting (which is often an improvement for scripts).
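For example, combining both options in the question's scp command (same placeholder key, files, and host):
scp -o StrictHostKeyChecking=no -o BatchMode=yes -i sample.sem /home/ubuntu/*.cap abc@example.com:~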
Best way I know to avoid being asked about fingerprint matches is to pre-populate the relevant keys in .ssh/known_hosts. In most cases, you really should already know what the remote machines' public keys are, and it is straightforward to put them in a known_hosts that ssh can find.
In the few cases where you don't, and can't, know the remote public key, then the most correct solution depends on why you don't know. If, say, you're writing software that needs to be run on arbitrary user boxes and may need to ssh on the user's behalf to other arbitrary boxes, it may be best for your software to run ssh-keyscan on its own to acquire the ostensible remote public key, let the user approve or reject it explicitly if at all possible, and if approved, append the key to known_hosts and then invoke ssh.
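A minimal sketch of the ssh-keyscan approach (the host is a placeholder; in real use, let the user verify the scanned key before trusting it):

import os
import subprocess

# Fetch the host's ostensible public keys and append them to known_hosts.
keys = subprocess.check_output(["ssh-keyscan", "example.com"])
with open(os.path.expanduser("~/.ssh/known_hosts"), "ab") as f:
    f.write(keys)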

Connecting to dbus over tcp

I wrote a simple Python program to play and pause the Banshee music player.
While it's working on my own machine, I have trouble doing it to a remote computer connected to the same router (LAN).
I edited the session.conf of the remote machine, to add this line:
<listen>tcp:host=localhost,port=12434</listen>
and here is my program:
import dbus

bus_obj = dbus.bus.BusConnection("tcp:host=localhost,port=12434")
proxy_object = bus_obj.get_object('org.bansheeproject.Banshee',
                                  '/org/bansheeproject/Banshee/PlayerEngine')
playerengine_iface = dbus.Interface(proxy_object,
                                    dbus_interface='org.bansheeproject.Banshee.PlayerEngine')
var = 0
while var != "3":
    var = raw_input("\nPress\n1 to play\n2 to pause\n3 to exit\n")
    if var == "1":
        print "playing..."
        playerengine_iface.Play()
    elif var == "2":
        print "pausing"
        playerengine_iface.Pause()
This is what I get when I try to execute it:
Traceback (most recent call last):
File "dbus3.py", line 4, in <module>
bus_obj=dbus.bus.BusConnection("tcp:host=localhost,port=12434")
File "/usr/lib/python2.7/dist-packages/dbus/bus.py", line 125, in __new__
bus = cls._new_for_bus(address_or_type, mainloop=mainloop)
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoServer: Failed to connect to socket "localhost:12434" Connection refused
What am I doing wrong here?
Should I edit /usr/lib/python2.7/dist-packages/dbus/bus.py?
UPDATE:
OK, here is the deal:
when I add
<listen>tcp:host=192.168.1.7,port=12434</listen>
to /etc/dbus-1/session.conf and reboot, hoping it would start listening on reboot,
it never boots. It gets stuck on the loading screen, and occasionally a black screen with the following text flashes:
Pulseaudio Configured For Per-user Sessions Saned Disabled;edit/etc/default/saned
So when I go Ctrl+Alt+F1, change session.conf back to its original state and reboot, it boots properly.
What's all that about?
How can I make the dbus daemon listen for TCP connections without running into these problems?
I recently needed to set this up, and discovered that the trick is: order matters for the <listen> elements in session.conf. You should make sure the TCP element occurs first. Bizarre, I know, but true, at least for my case. (I see exactly the same black screen behavior if I reverse the order and put the UNIX socket <listen> element first.)
Also, prepending the TCP <listen> tag is necessary, but not sufficient. To make remote D-Bus connections via TCP work, you need to do three things:
Add a <listen> tag above the UNIX one, similar to this:
<listen>tcp:host=localhost,bind=*,port=55556,family=ipv4</listen>
<listen>unix:tmpdir=/tmp</listen>
Add a line (right below the <listen> tags is fine) that says:
<auth>ANONYMOUS</auth>
Add another line below these that says:
<allow_anonymous/>
The <auth> tag should be added in addition to any other <auth> tags that may be contained in your session.conf. In summary, your session.conf should contain a snippet that looks like this:
<listen>tcp:host=localhost,bind=*,port=55556,family=ipv4</listen>
<listen>unix:tmpdir=/tmp</listen>
<auth>ANONYMOUS</auth>
<allow_anonymous/>
After doing these three things, you should be able to connect to the session bus remotely, for example by specifying the remote TCP address as the bus address in D-Feet.
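Tying this back to the question, a minimal client-side sketch (the IP and port are examples and must match the remote <listen> tag; with <auth>ANONYMOUS</auth> and <allow_anonymous/> in place, the connection should no longer be refused):

import dbus

bus = dbus.bus.BusConnection("tcp:host=192.168.1.7,port=55556")
proxy = bus.get_object('org.bansheeproject.Banshee',
                       '/org/bansheeproject/Banshee/PlayerEngine')
player = dbus.Interface(proxy,
                        dbus_interface='org.bansheeproject.Banshee.PlayerEngine')
player.Play()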
Note that, if you want to connect to the system bus, too, you need to make similar changes to /etc/dbus-1/system.conf, but specify a different TCP port, for example 55557. (Oddly enough, the element order appears not to matter in this case.)
The only weird behavior I've noticed in this configuration is that running Desktop apps with sudo (e.g., sudo gvim) tends to generate errors or fail outright saying "No D-BUS daemon running". But this is something I need to do so rarely that it hardly matters.
If you want to send to a remote machine using dbus-send, you need to set DBUS_SESSION_BUS_ADDRESS accordingly, e.g., to something like:
export DBUS_SESSION_BUS_ADDRESS=tcp:host=localhost,bind=*,port=55556,family=ipv4
This works even if the bus you want to send to is actually the system bus of the remote machine, as long as the setting matches the TCP <listen> tag in /etc/dbus-1/system.conf on the target. (Thanks to Martin Vidner for this tip. Until I stumbled across his answer to this question, I didn't believe dbus-send supported remote operation.)
UPDATE: If you're using systemd (and want to access the system bus), you might also need to add a line saying ListenStream=55557 to /lib/systemd/system/dbus.socket, like so:
[Socket]
ListenStream=/var/run/dbus/system_bus_socket
ListenStream=55557 # <-- Add this line
UPDATE2: Thanks to @altagir for pointing out that recent versions of D-Bus enable AppArmor mediation on systems where it's available, so you may also need to add <apparmor mode="disabled"/> to session.conf/system.conf for these instructions to work.
since dbus 1.6.12 (e.g. Kubuntu 13.10), your connection will also be rejected unless you add the following to your dbus config file (either /etc/dbus-1/mybus.conf or the interface requiring remote access, i.e. system.d/my.interface.conf):
<apparmor mode="disabled"/>
UPDATE: After struggling to create an AppArmor profile allowing the service to connect to the custom dbus-daemon, it seems the connection is always rejected due to a bug in D-Bus... So for now we MUST disable AppArmor whenever you use tcp=... The bug fix is targeted for 14.04.
I opened a bug at bugs.launchpad.net following discussion here with Tyler Hicks:
The AppArmor mediation code only has the ability to check peer labels
over UNIX domain sockets. It is most likely seeing an error when
getting the label and then refusing the connection.
Note:
the disable flag is not recognized by dbus < 1.6.12, so you need to package different versions of mydaemon.conf depending on the system; otherwise dbus-daemon will fail on launch where AppArmor support is absent... For now I used this in my CMakeLists.txt:
IF(EXISTS "/usr/sbin/apparmor_status")
install(FILES dbus_daemon-apparmordisabled.conf RENAME dbus_daemon.conf DESTINATION /etc/dbus-1/ )
ELSE (EXISTS "/usr/sbin/apparmor_status")
install(FILES dbus_daemon.conf DESTINATION /etc/dbus-1/ )
ENDIF(EXISTS "/usr/sbin/apparmor_status")
Another thanks to @Shorin, and another FYI: I had to do something like this to make mine work:
<listen>tcp:host=localhost,bind=0.0.0.0,port=55884</listen>
Note the bind=0.0.0.0 - the bind=* didn't work for me, and I left out the family=ipv4 part. I'm on Ubuntu 12.04. I used netstat on the remote machine to confirm dbus was listening on the port, and telnet from the local machine to confirm the port was open.
netstat -plntu | grep 55884
tcp 0 0 0.0.0.0:55884 0.0.0.0:* LISTEN 707/dbus-daemon
You want to see something like 0.0.0.0:55884, not something like 127.0.0.1:55884.

Python - How to use Conch to create a Virtual SSH server

I'm looking at creating a server in Python that I can run, and that will work as an SSH server. This will then let different users log in, and act as if they'd logged in normally, but with access to only one command.
I want to do this so that I can have a system where I can add users without having to create a system-wide account, so that they can then, for example, commit to a VCS branch, or similar.
While I can work out how to get Conch to serve a "custom" shell, I can't figure out how to make the SSH stream work as if it were a real one (I preferably want to limit it to /bin/bzr so that bzr+ssh will work).
It needs to be in Python (which I can get to do the authorisation), but I don't know how to do the linking to the app.
This needs to be in Python to work within the app it's designed for, and to be usable by those without access to add new users.
When you write a Conch server, you can control what happens when the client makes a shell request by implementing ISession.openShell. The Conch server will request IConchUser from your realm and then adapt the resulting avatar to ISession to call openShell on it if necessary.
ISession.openShell's job is to take the transport object passed to it and associate it with a protocol to interpret the bytes received from it and, if desired, to write bytes to it to be sent to the client.
In an unfortunate twist, the object passed to openShell which represents the transport is actually an IProcessProtocol provider. This means that you need to call makeConnection on it, passing an IProcessTransport provider. When data is received from the client, the IProcessProtocol will call writeToChild on the transport you pass to makeConnection. When you want to send data to the client, you should call childDataReceived on it.
To see the exact behavior, I suggest reading the implementation of the IProcessProtocol that is passed in. Don't depend on anything that's not part of IProcessProtocol, but seeing the implementation can make it easier to understand what's going on.
You may also want to look at the implementation of the normal shell-creation to get a sense of what you're aiming for. This will give you a clue about how to associate the stdio of the bzr child process you launch with the SSH channel.
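A minimal sketch of the openShell idea (not a complete Conch server; the class name and the bzr arguments are illustrative assumptions):

import os
from twisted.conch.interfaces import ISession
from twisted.internet import reactor
from zope.interface import implementer

@implementer(ISession)
class BzrOnlySession(object):
    def openShell(self, proto):
        # 'proto' is the IProcessProtocol described above; spawnProcess
        # calls makeConnection on it with the child's process transport,
        # wiring the bzr child's stdio to the SSH channel.
        reactor.spawnProcess(proto, "/bin/bzr",
                             ["bzr", "serve", "--inet"],
                             env=os.environ)

    def execCommand(self, proto, cmd):
        raise NotImplementedError("this sketch handles shell requests only")

    def getPty(self, term, windowSize, modes):
        pass  # no PTY is needed for bzr serve --inet

    def eofReceived(self):
        pass

    def closed(self):
        pass

Your realm would then return an avatar that is adaptable to ISession via a class like this (e.g. with registerAdapter), as described above.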
While Python really is my favorite language, I think you need not create your own server for this. If you look at the OpenSSH manual page for sshd, you'll find the "command" option for the authorized_keys file, which lets you define a specific command to run on login.
Using keys, you can use one system account to allow many users to log in; just put their public keys in the account's authorized_keys file.
We are using this to create SSH tunnels for SVN and it works just great.
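For example, an entry in the shared account's ~/.ssh/authorized_keys might look like this (key truncated; the bzr options and path are illustrative):

command="bzr serve --inet --directory=/srv/bzr --allow-writes",no-port-forwarding,no-pty,no-X11-forwarding ssh-rsa AAAA... alice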
