Preconditions:
I want to execute multiple dynamic commands via SSH from Python, on one remote machine at a time
I couldn't find any existing modules matching my "flavour" (If you care why, see below (*) ;))
The Python scripts are running locally on an Ubuntu machine
In general, for single "one-action calls" I simply do a native ssh call using subprocess.Popen, and it works fine.
But for multiple subsequent dynamic calls, I don't want to create a new ssh connection for every command, even if the remote host might allow it. I thought of the following solution:
1) Configure my local ssh on Ubuntu to use multiplexing, so that as long as a connection is open it is reused instead of a new one being created (https://www.admin-magazin.de/News/Tipps/Mit-SSH-Multiplexing-schneller-einloggen (sorry, the article is in German))
2) Create the ssh connection by opening it in a running background thread, in which nothing is actually done besides maybe a "keepalive" if necessary, and keep the connection open until it is closed (e.g. by stopping the thread). (http://sebastiandahlgren.se/2014/06/27/running-a-method-as-a-background-thread-in-python/ )
3) Still execute the ssh calls simply via subprocess.Popen, which now automatically uses the open connection thanks to the ssh multiplexing config.
Should this work, or is there a catch I'm missing?
(*)What I don't want:
Most solutions/examples I found used paramiko. On my first "happy path" it worked like a charm, but the first failure test resulted in an internal AttributeError (https://github.com/paramiko/paramiko/issues/1617), and I don't want to build anything on that.
Other libs I found, e.g. http://robotframework.org/SSHLibrary/SSHLibrary.html, don't seem to have a real community using them.
pexpect... the whole "expect" concept gives me the creeps and should, in my opinion, only be used if there is absolutely no reasonable alternative ;)
What you've proposed is fine, but you don't even need to keep an ssh connection running in a background thread. If you configure ControlMaster (for reusing an existing connection) and ControlPersist (for keeping the master connection open even when all other connections have closed), then new ssh connections will continue to use the shared connection (as long as they happen before the ControlPersist timeout).
This means that if you set up the ControlMaster configuration external to your code (e.g., in ~/.ssh/ssh_config), your code doesn't even need to be aware of the configuration: it can just continue to call ssh normally, and ssh will take care of reusing the connection.
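For illustration, here is a minimal sketch of that approach (the host name and commands are placeholders); the multiplexing options are passed explicitly via -o, so it works even without an ssh_config entry:
import subprocess

# OpenSSH multiplexing options: the first call creates the master connection,
# later calls reuse it until the ControlPersist timeout expires.
SSH_OPTS = ["-o", "ControlMaster=auto",
            "-o", "ControlPath=~/.ssh/cm-%r@%h:%p",
            "-o", "ControlPersist=10m"]

def run_remote(host, command):
    # Looks like a plain ssh call; OpenSSH transparently reuses the master.
    return subprocess.check_output(["ssh"] + SSH_OPTS + [host, command])

print(run_remote("myserver.example.com", "uptime"))
print(run_remote("myserver.example.com", "df -h /"))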
I have a Python application ("App1") that uses serial port /dev/ttyUSB0. This application is running as a Linux service. It is running very well, as it is perfectly automated for the task that I need it to perform. However, I have recently come to realize that I sometimes accidentally use the same serial port from another Python application that I am developing, which causes unwanted interference with "App1".
I did try to lock down "App1" as follows:
import fcntl, serial

ser = serial.Serial(PORT, BAUDRATE)
fcntl.lockf(ser, fcntl.LOCK_EX | fcntl.LOCK_NB)  # exclusive, non-blocking advisory lock
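For context, this advisory lock only protects the port if every other application also tries to take it; a minimal sketch of that check (PORT and BAUDRATE as above):
import fcntl
import serial

ser = serial.Serial(PORT, BAUDRATE)
try:
    fcntl.lockf(ser, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    # "App1" (or someone else) already holds the lock on this port
    raise SystemExit("%s is already in use" % PORT)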
However, in other applications I sometimes unknowingly use
ser=serial.Serial(PORT, BAUDRATE)
without first checking ser.isOpen().
In order to prevent this, I was wondering: during the times I work on other applications, is there a way for ser = serial.Serial(PORT, BAUDRATE) to notify me that the serial port is already in use when I try to access it?
A solution I came up with is to create a cron job that runs forever and essentially does the following:
fuser /dev/ttyUSB0  # list the PIDs of the processes using /dev/ttyUSB0
kill <PID of the second application shown in the output above>  # kill the second application
The above would ensure that whenever two applications use the same serial port, the second PID gets killed (I am aware there are some gaps in this logic). What do you think? If this is not a good solution, is there any way for ser = serial.Serial(PORT, BAUDRATE) to notify me that /dev/ttyUSB0 is already in use when I try to access it, or do I need to implement the logic at the driver level? Any help would be highly appreciated!
I think the simplest thing to do here is to create a different user for the "stable process", and use plain file permissions to give that user exclusive access to /dev/ttyUSB0.
With Unix groups and still plain permissions, that process can still access any other resource it needs under your main user, so it should be quite simple.
If you are not aware of them, check the documentation for the commands chown and chmod.
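For example, a sketch of the permission setup (the user name app1 is illustrative):
sudo useradd --system app1     # dedicated user for the stable process
sudo chown app1 /dev/ttyUSB0   # make it the owner of the port
sudo chmod 600 /dev/ttyUSB0    # owner-only read/write
Note that udev may reset the device's owner and mode when it is re-plugged, so a udev rule may be needed to make this permanent.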
I would like to be able to gather values such as the number of CPUs and the storage space on a server, and assign them to local variables in a Python script. I have paramiko set up, so I can SSH to remote Linux nodes, run arbitrary commands on them, and have the output returned to the script. However, many commands are very verbose (such as df -h), when all I want to assign is a single integer or value.
For the number of CPUs, there is Python functionality to get this value, such as psutil.NUM_CPUS from the psutil module, which returns an integer. However, while I can run this locally, I can't exactly execute it on the remote nodes, as they don't have a Python environment configured.
I am wondering how common it is to manually parse the output of Linux commands (such as df -h) and then grab an integer from it (similar to how shell scripts use "cut"), or whether it is somehow better to set up an environment on each remote server (or whether there is a better way).
Unfortunately it is very common to manually parse the output of Linux commands, but you shouldn't. This is a really common server-admin task and you shouldn't reinvent the wheel.
You can use something like sar to log remote stats and retrieve the reports over ssh.
http://www.ibm.com/developerworks/aix/library/au-unix-perfmonsar.html
You should also look at salt. It lets you run the same command on multiple machines and get their output.
http://www.saltstack.com/
These are some of the options but remember to keep it DRY ;)
Like @Floris, I believe the simplest way is to design your commands so that the result is simple to parse. However, parsing the result of one command is not at all uncommon; bash scripts are full of grep, sed, wc or awk commands that do exactly that.
The same approach is used by psutil itself; see how it reads /proc/cpuinfo for cpu_count. You can implement the same parsing, only reading the remote /proc/cpuinfo, or by counting the lines in the output of ls -1 /sys/bus/cpu/devices/.
Actually, the best place to get information from is /proc and /sys; they are specifically designed to make internal information easy to access from simple programs, with minimal parsing needed.
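A minimal sketch of that idea (the host name is a placeholder); grep -c already returns a single integer, so nothing needs to be parsed locally:
import subprocess

def remote_cpu_count(host):
    # count the "processor" lines in the remote /proc/cpuinfo
    out = subprocess.check_output(["ssh", host, "grep -c ^processor /proc/cpuinfo"])
    return int(out)

print(remote_cpu_count("node1.example.com"))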
If you can put your own programs or scripts on the remote machine there are a couple of things you can do:
Write a script on the remote machine that outputs just what you want, and execute that over ssh (see the sketch after this list).
Use ssh to tunnel a port on the other machine and communicate with a server on the remote machine which will respond to requests for information with the data you want over a socket.
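A sketch of the first option (the script path and its contents are illustrative): put a tiny script on the remote machine that prints exactly the values you need, one per line:
#!/bin/sh
# /usr/local/bin/stats.sh on the remote machine
nproc                              # CPU count
df -P / | awk 'NR==2 {print $4}'   # available KB on the root filesystem
The local side then only has to split the output:
import subprocess

out = subprocess.check_output(["ssh", "node1.example.com", "/usr/local/bin/stats.sh"])
cpus, free_kb = [int(x) for x in out.split()]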
Since you already have an SSH connection, I would suggest wrapping your commands with Python's sh library. It's really nice for this kind of task, and you get results really fast.
from sh import ssh
myserver = ssh.bake("myserver.com", p=1393)
print(myserver) # "/usr/bin/ssh myserver.com -p 1393"
# resolves to "/usr/bin/ssh myserver.com -p 1393 whoami"
iam2 = myserver.whoami()
We are trying to improve automation of some server processes; we use Fabric. I anticipate having to manage multiple hosts, which means that SSH connections must be made to servers that haven't been SSH'd into before. If that happens, SSH always asks for verification of the connection, which breaks automation.
I have worked around this issue, in the same process, by using the -o StrictHostKeyChecking=no option on an SSH command that I use to synchronize code with rsync, but I will also need to use it on calls made with Fabric.
Is there a way to pass ssh-specific options to Fabric, in particular the one I mentioned above?
The short answer is:
For new hosts, nothing is needed: env.reject_unknown_hosts defaults to False.
For known hosts with changed keys, setting env.disable_known_hosts = True makes Fabric proceed with connecting to hosts whose keys have changed.
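In fabfile terms, a minimal sketch of both settings (Fabric 1.x):
from fabric.api import env

# new hosts are accepted automatically; False is already the default
env.reject_unknown_hosts = False
# known hosts with changed keys: skip the known_hosts comparison entirely
env.disable_known_hosts = True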
Read ye olde docs: http://docs.fabfile.org/en/1.5/usage/ssh.html#unknown-hosts
The paramiko library is capable of loading up your known_hosts file,
and will then compare any host it connects to, with that mapping.
Settings are available to determine what happens when an unknown host
(a host whose username or IP is not found in known_hosts) is seen:
Reject: the host key is rejected and the connection is not made. This results in a Python exception, which will terminate your Fabric session with a message that the host is unknown.
Add: the new host key is added to the in-memory list of known hosts, the connection is made, and things continue normally. Note that this does not modify your on-disk known_hosts file!
Ask: not yet implemented at the Fabric level, this is a paramiko library option which would result in the user being prompted about the unknown key and whether to accept it.
Whether to reject or add hosts, as above, is controlled in Fabric via
the env.reject_unknown_hosts option, which is False by default for
convenience’s sake. We feel this is a valid tradeoff between
convenience and security; anyone who feels otherwise can easily modify
their fabfiles at module level to set env.reject_unknown_hosts = True.
http://docs.fabfile.org/en/1.5/usage/ssh.html#known-hosts-with-changed-keys
Known hosts with changed keys
The point of SSH’s key/fingerprint tracking is so that
man-in-the-middle attacks can be detected: if an attacker redirects
your SSH traffic to a computer under his control, and pretends to be
your original destination server, the host keys will not match. Thus,
the default behavior of SSH (and its Python implementation) is to
immediately abort the connection when a host previously recorded in
known_hosts suddenly starts sending us a different host key.
In some edge cases such as some EC2 deployments, you may want to
ignore this potential problem. Our SSH layer, at the time of writing,
doesn’t give us control over this exact behavior, but we can sidestep
it by simply skipping the loading of known_hosts – if the host list
being compared to is empty, then there’s no problem. Set
env.disable_known_hosts to True when you want this behavior; it is
False by default, in order to preserve default SSH behavior.
Warning Enabling env.disable_known_hosts will leave you wide open to
man-in-the-middle attacks! Please use with caution.
I wrote a simple Python program to play and pause the Banshee music player.
While it works on my own machine, I have trouble doing the same on a remote computer connected to the same router (LAN).
I edited the session.conf of the remote machine to add this line:
<listen>tcp:host=localhost,port=12434</listen>
and here is my program:
import dbus
bus_obj=dbus.bus.BusConnection("tcp:host=localhost,port=12434")
proxy_object = bus_obj.get_object('org.bansheeproject.Banshee',
                                  '/org/bansheeproject/Banshee/PlayerEngine')
playerengine_iface = dbus.Interface(proxy_object,
                                    dbus_interface='org.bansheeproject.Banshee.PlayerEngine')
var = 0
while var != "3":
    var = raw_input("\nPress\n1 to play\n2 to pause\n3 to exit\n")
    if var == "1":
        print "playing..."
        playerengine_iface.Play()
    elif var == "2":
        print "pausing"
        playerengine_iface.Pause()
This is what I get when I try to execute it:
Traceback (most recent call last):
File "dbus3.py", line 4, in <module>
bus_obj=dbus.bus.BusConnection("tcp:host=localhost,port=12434")
File "/usr/lib/python2.7/dist-packages/dbus/bus.py", line 125, in __new__
bus = cls._new_for_bus(address_or_type, mainloop=mainloop)
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoServer: Failed to connect to socket "localhost:12434" Connection refused
What am I doing wrong here? Should I edit /usr/lib/python2.7/dist-packages/dbus/bus.py?
UPDATE:
OK, here is the deal: when I add
<listen>tcp:host=192.168.1.7,port=12434</listen>
to /etc/dbus-1/session.conf and then reboot, hoping it would start listening on boot,
it never boots. It gets stuck on the loading screen, and occasionally a black screen flashes with the following text:
Pulseaudio configured for per-user sessions ... saned disabled; edit /etc/default/saned
So when I press Ctrl+Alt+F1, change session.conf back to its original state, and reboot, it boots properly.
What's all that about?
How can I make the dbus daemon listen for TCP connections without running into these problems?
I recently needed to set this up, and discovered that the trick is: order matters for the <listen> elements in session.conf. You should make sure the TCP element occurs first. Bizarre, I know, but true, at least for my case. (I see exactly the same black screen behavior if I reverse the order and put the UNIX socket <listen> element first.)
Also, prepending the TCP <listen> tag is necessary, but not sufficient. To make remote D-Bus connections via TCP work, you need to do three things:
Add a <listen> tag above the UNIX one, similar to this:
<listen>tcp:host=localhost,bind=*,port=55556,family=ipv4</listen>
<listen>unix:tmpdir=/tmp</listen>
Add a line (right below the <listen> tags is fine) that says:
<auth>ANONYMOUS</auth>
Add another line below these that says:
<allow_anonymous/>
The <auth> tag should be added in addition to any other <auth> tags that may be contained in your session.conf. In summary, your session.conf should contain a snippet that looks like this:
<listen>tcp:host=localhost,bind=*,port=55556,family=ipv4</listen>
<listen>unix:tmpdir=/tmp</listen>
<auth>ANONYMOUS</auth>
<allow_anonymous/>
After doing these three things, you should be able to connect to the session bus remotely, for example by specifying the TCP address as a custom bus address in D-Feet.
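With that in place, the client-side script from the question only needs to point at the server's address instead of localhost; a sketch (the IP and port are examples):
import dbus

# connect to the remote machine's session bus over TCP
bus_obj = dbus.bus.BusConnection("tcp:host=192.168.1.7,port=55556")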
Note that, if you want to connect to the system bus, too, you need to make similar changes to /etc/dbus-1/system.conf, but specify a different TCP port, for example 55557. (Oddly enough, the element order appears not to matter in this case.)
The only weird behavior I've noticed in this configuration is that running Desktop apps with sudo (e.g., sudo gvim) tends to generate errors or fail outright saying "No D-BUS daemon running". But this is something I need to do so rarely that it hardly matters.
If you want to send to a remote machine using dbus-send, you need to set DBUS_SESSION_BUS_ADDRESS accordingly, e.g., to something like:
export DBUS_SESSION_BUS_ADDRESS=tcp:host=localhost,bind=*,port=55556,family=ipv4
This works even if the bus you want to send to is actually the system bus of the remote machine, as long as the setting matches the TCP <listen> tag in /etc/dbus-1/system.conf on the target. (Thanks to Martin Vidner for this tip. Until I stumbled across his answer to this question, I didn't believe dbus-send supported remote operation.)
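For example, listing the names on the remote bus (ListNames is a standard D-Bus method, so nothing extra is needed on the target):
export DBUS_SESSION_BUS_ADDRESS=tcp:host=remotehost,port=55556,family=ipv4
dbus-send --session --print-reply --dest=org.freedesktop.DBus \
    /org/freedesktop/DBus org.freedesktop.DBus.ListNames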
UPDATE: If you're using systemd (and want to access the system bus), you might also need to add a line saying ListenStream=55557 to /lib/systemd/system/dbus.socket (on its own line; systemd unit files do not allow trailing comments), like so:
[Socket]
ListenStream=/var/run/dbus/system_bus_socket
ListenStream=55557
UPDATE2: Thanks to @altagir for pointing out that recent versions of D-Bus enable AppArmor mediation on systems where it's available, so you may also need to add <apparmor mode="disabled"/> to session.conf/system.conf for these instructions to work.
Since dbus 1.6.12 (e.g. Kubuntu 13.10), your connection will also be rejected unless you add the following to your dbus config file (either /etc/dbus-1/mybus.conf or the config of the interface requiring remote access, i.e. system.d/my.interface.conf):
<apparmor mode="disabled"/>
UPDATE: After struggling to create an AppArmor profile allowing the service to connect to the custom dbus-daemon, it seems the connection is always rejected due to a bug in D-Bus... So for now we MUST disable AppArmor whenever you use tcp=... The bug fix is targeted for 14.04.
I opened a bug at bugs.launchpad.net following discussion here with Tyler Hicks:
The AppArmor mediation code only has the ability to check peer labels
over UNIX domain sockets. It is most likely seeing an error when
getting the label and then refusing the connection.
Note:
the disable flag is not recognized by dbus < 1.6.12, so you need to package different versions of mydaemon.conf depending on the system; otherwise dbus-daemon will fail on launch if AppArmor is not present... For now I use this in my CMakeLists.txt:
IF(EXISTS "/usr/sbin/apparmor_status")
    install(FILES dbus_daemon-apparmordisabled.conf RENAME dbus_daemon.conf DESTINATION /etc/dbus-1/)
ELSE(EXISTS "/usr/sbin/apparmor_status")
    install(FILES dbus_daemon.conf DESTINATION /etc/dbus-1/)
ENDIF(EXISTS "/usr/sbin/apparmor_status")
Another thanks to @Shorin, and another FYI: I had to do something like this to make mine work:
<listen>tcp:host=localhost,bind=0.0.0.0,port=55884</listen>
Note the bind=0.0.0.0: bind=* didn't work for me, and I left out the family=ipv4 part. I'm on Ubuntu 12.04. I used netstat on the remote machine to confirm that dbus was listening on the port, and telnet from the local machine to confirm the port was open.
netstat -plntu | grep 55884
tcp 0 0 0.0.0.0:55884 0.0.0.0:* LISTEN 707/dbus-daemon
You want to see something like 0.0.0.0:55884, not something like 127.0.0.1:55884.
We are attempting to use the paramiko module for creating SSH tunnels on demand to arbitrary servers, for the purpose of querying remote databases. We tried the forward.py demo that ships with paramiko, but the big limitation is that there does not seem to be an easy way to close the SSH tunnel and the SSH connection once the socket server is started.
The limitation we have is that we cannot activate this from a shell and then kill the shell manually to stop the listener. We need to open the SSH connection, tunnel, perform some actions through the tunnel, close the tunnel, and close the SSH connection, all within Python.
I've seen references to a server.shutdown() method but it isn't clear how to implement it correctly.
I'm not sure what you mean by "implement it correctly" -- you just need to keep track of the server object and call shutdown on it when you want. In forward.py, the server isn't kept track of, because the last line of forward_tunnel is
ForwardServer(('', local_port), SubHander).serve_forever()
so the server object is not easily reachable any more. But you can just change that to, e.g.:
global theserver
theserver = ForwardServer(('', local_port), SubHander)
theserver.serve_forever()
and run the forward_tunnel function in a separate thread, so that the main function gets control back (while the serve_forever is running in said separate thread) and can call theserver.shutdown() whenever that's appropriate and needed.
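A minimal sketch of that flow, reusing forward.py's ForwardServer and SubHander definitions (local_port is a placeholder):
import threading

theserver = ForwardServer(('', local_port), SubHander)

worker = threading.Thread(target=theserver.serve_forever)
worker.daemon = True
worker.start()

# ... perform the database queries through the tunnel here ...

theserver.shutdown()      # makes serve_forever() return
theserver.server_close()  # release the listening socket
worker.join()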