I'm trying to write a wrapper Python script that automatically sets up port forwards to a remote host based on some parameters, and then gives me that shell. Everything works great, up until I want to exit the shell -- at which point, the session hangs and never returns me back to Python. Here's a toy example that does the same thing:
>>> import os
>>> os.system('ssh -L8080:localhost:80 fooserver.net')
user@fooserver.net password:
[fooserver.net]$ hostname
fooserver.net
[fooserver.net]$ exit
(hangs)
I believe this has something to do with the forwarded TCP port being in "TIME_WAIT" and keeping the SSH session alive until it closes, because this doesn't happen if I never request that forwarded port locally. What's the right way to handle this? Can I capture the "exit" from inside Python and then kill the os.system() pipe or something?
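One way to sidestep this (a sketch, untested against a real server; fooserver.net stands in for your host) is to split the job in two: run the port forward as its own non-interactive ssh process with -N, run the interactive shell as a separate child, and kill the tunnel from Python once the shell returns. A lingering forwarded connection then can't hold your interactive session open:

```python
import subprocess

# Forward-only ssh process: -N means "no remote command", so this never
# waits on a shell; it exists purely to hold the tunnel open.
tunnel = subprocess.Popen(
    ["ssh", "-N", "-L", "8080:localhost:80", "fooserver.net"])
try:
    # Separate interactive session; call() returns as soon as you type exit.
    subprocess.call(["ssh", "fooserver.net"])
finally:
    tunnel.terminate()  # tear down the forward no matter what sockets linger
    tunnel.wait()
```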
I'm using motion to run a rudimentary livestream. It works perfectly when I start it on the server side with:
sudo motion -c livestream.conf
This starts a video server on port 8081, and I can access it perfectly from anywhere inside my network.
The issue comes when I want to write a little script that SSHes to the server using paramiko, starts motion with the same command, and opens the default browser directly at the video stream URL. Here's the sample code:
import paramiko
import subprocess
import time

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('192.168.1.111', username='pi', password='raspberry')
ssh.exec_command('sudo motion -c livestream.conf')
time.sleep(4)
subprocess.call('xdg-open "http://192.168.1.111:8081"', shell=True)
ssh.close()
A pidof motion on the server shows that the service is running, but I can't access it! Since motion is running, I don't think this is the common sudo/paramiko problem, but I have no idea why it doesn't work.
WORKAROUND
Motion has a daemon mode. Enabling it in
/etc/default/motion
makes it start on boot, and then I can access the stream perfectly with:
subprocess.call('xdg-open "http://192.168.1.111:8081"', shell= True)
But this is not exactly what I'm looking for, because I'd like to launch (and close, but that will surely be another thread!) the daemon myself, not just access the stream.
This workaround runs
/etc/motion/motion.conf
as a daemon. I copied my motion configuration in there and everything works.
But when I try to start motion as a daemon (not on boot, with the code above), it tells me that it can't create the PID file, even though everything is done as root. I'm getting close to the answer by myself, just a little more.
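One guess at the cause (an assumption, not verified on this setup): exec_command's channel is torn down when ssh.close() runs, and motion dies with it, or loses its attachment to the channel before it can write its PID file. A common workaround is to wrap the remote command with nohup and background it so it detaches from the channel. The detach helper below is hypothetical, and it assumes the pi account can run the command without a sudo password prompt:

```python
def detach(command):
    # Wrap a remote command so it survives the SSH channel closing:
    # nohup ignores the hangup, the redirects unhook stdout/stderr from
    # the channel, and the trailing & backgrounds it on the server.
    return "nohup %s >/dev/null 2>&1 &" % command

# Usage with the paramiko client from the snippet above:
# ssh.exec_command(detach('sudo motion -c livestream.conf'))
```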
I have written a script that establishes an SSH tunnel and connects to a database over that tunnel.
Extremely simplified nutshell (obvious parameters and extra logic omitted):
sshTunnelCmd = "ssh -N -p %s -L %s:127.0.0.1:%s -i %s %s@%s" % (
    sshport, localport, remoteport, identityfile, user, server
)
args = shlex.split(sshTunnelCmd)
tunnel = subprocess.Popen(args)
time.sleep(2)
con = MySQLdb.connect(host="127.0.0.1", port=localport, user=user, passwd=pw, db=db)
## DO THE STUFF ##
con.close()
tunnel.kill()
The shell-equivalent commands are below, and I have tested both the commands and the script to work in "clean client" conditions, i.e. after a reboot.
ssh -N -p 22 -L 5000:127.0.0.1:3306 user@server
mysql --port 5000 -h 127.0.0.1 -u dbuser -p
SSH login is with keys and in ~/.ssh/config the server is configured as
Host server
    Hostname F.Q.D.N
    Port 22
    User user
    ControlMaster auto
    ControlPath ~/.ssh/master-%r@%h:%p
    ControlPersist 600
    IdentityFile ~/.ssh/id_rsa
In the ## DO THE STUFF ## section there is code that tries to connect to the database as regular users. If an exception is raised, it asks for manual input of root credentials, creates the regular users, and continues to do the stuff (ordinary queries, all tested manually and working in the Python code under clean client conditions).
ruz = raw_input('root user? ')
print (ruz)
rup = raw_input('root password? ')
print (rup)
print ("Root connecting to database.")
try:
    cxroot = MySQLdb.connect(host=host, port=port, user=ruz, passwd=rup)
    cur = cxroot.cursor()
except MySQLdb.Error, e:
    print ("Root failed, sorry.")
    print "Error %d: %s" % (e.args[0], e.args[1])
    print ("GAME OVER.")
    return -1
Under clean client, the first and some subsequent executions work well, including when I try to test the script robustness and remove the user server-side. However, at some point, it hangs in a weird way after the second raw_input in the code block above. Output:
root user? root
root
root password? s3cReTsTr1n9
-bash: line 1: s3cReTsTr1n9: command not found
The only thing I can do at this point is kill the process or hit CTRL+C, which is followed by the following traceback:
^CTraceback (most recent call last):
File "./initdb.py", line 571, in <module>
main()
File "./initdb.py", line 526, in main
connection = connectDB ('127.0.0.1', localport, dbuser, dbpw, db)
File "./initdb.py", line 128, in connectDB
rup = raw_input('root password? ')
KeyboardInterrupt
Another unexpected symptom I noticed is that keyboard input to the terminal window (I am running this in a bash terminal within Xubuntu 14.04LTS) becomes spuriously unresponsive, so I have to close the terminal tab and start a new tab. This clears keyboard input, but not script behaviour.
I have tried to search for a solution but the usual search engines are not helpful in my case, probably because I do not completely understand what is going on. I suspect that keyboard input is somehow redirected to a process, possibly the tunnel subprocess, but I cannot explain why the first raw_input works as expected and the second one does not.
I am also uncomfortable with the way I create the tunnel, so any advice for a more robust tunnel creation is welcome. Specifically, I would like to have more fine grained control over the tunnel creation, rather than waiting an arbitrary two seconds for the tunnel to be established because I have no feedback from that subprocess.
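On that last point, one option (a sketch; the helper name and the 10-second timeout are my own) is to poll the forwarded local port until something accepts a connection, instead of sleeping a fixed two seconds. Redirecting the tunnel's stdin away from the terminal is also worth trying for the raw_input symptom, since it stops the child ssh process from swallowing keystrokes meant for your script:

```python
import socket
import time

def wait_for_port(port, host="127.0.0.1", timeout=10.0):
    # Poll until something is listening on host:port, or give up.
    deadline = time.time() + timeout
    while time.time() < deadline:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.connect((host, port))
            return True
        except socket.error:
            time.sleep(0.1)
        finally:
            s.close()
    return False

# Usage with the Popen call from the question (args and localport are the
# variables from the snippet; os.devnull keeps ssh away from the terminal):
# import os, subprocess
# tunnel = subprocess.Popen(args, stdin=open(os.devnull, "rb"))
# if not wait_for_port(int(localport)):
#     raise RuntimeError("tunnel did not come up in time")
```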
Thanks for sharing your time and expertise.
There are two sections to my answer: how I'd go about diagnosing this, and how I'd go about doing this.
To begin with, I'd suggest using the prompt that's failing as an opportunity to do some exploration.
There are two approaches you could take here:
Just enter hostname (or whatever) to find out where it's running
Enter bash, or if the remote end has an X server add -X to your ssh command then type a terminal program (xterm, gnome-terminal, etc). In your new shell you can poke around to see what's going on.
If you determine it's running on the client side you could diagnose it with strace:
strace -f -o blah.log yourscript.py
... where you'd enter an easy-to-search string for the password, then search for it in blah.log. Because of the -f flag, strace follows child processes and records the PID of the process that attempted to execute your string as a command; backtracking from there, you'll probably find that PID was started with a fork from another PID, and that parent is what tried to execute it, so you should be able to investigate from there.
As for how I'd do this: I'm still fairly new to python so I would've been inclined to use perl or expect. Down the perl path you might look at:
Net::SSH::Tunnel; this is probably the first one I'd look at using.
Use open or open3 then do something hacky like:
wait for stdout on the process to have text available; you'd have to get rid of -N for that, and you'd be at the mercy of remote auto-logout.
One of the various responses to ssh-check-if-a-tunnel-is-alive
Net::SSH::Expect (eg this post, though I didn't look at his implementation so you'd have to make your own choice on that). This or the "real" expect are probably overkill but you could find a way I'm sure.
Although ruby has a gem like perl's Net::SSH::Tunnel, I don't see a pip for python. This question and this one both discuss it and they seem to indicate you're limited to either starting it as a sub-process or using paramiko.
Are you free to configure the server as you like?
Then try a VPN connection instead of SSH port forwarding. A VPN will reconnect more easily without affecting your application, so the tunnel may be more stable.
For the raw_input problem, I cannot see why it happens, but maybe the ssh command running in a shell interferes with your terminal? If you really want to integrate the SSH tunnel, you may want to look at some Python modules for handling SSH.
-bash: line 1: s3cReTsTr1n9: command not found
I got the same "-bash: command not found" error even though I was just accepting raw_input() / input(). I tried this in both Python 2.7 and 3.7.
I was trying to run a client-server program on the same Mac. I had two files, server.py and client.py. Every time, in the terminal, I first ran server.py in the background and then ran client.py.
Terminal 1: python server.py &
Terminal 2: python client.py
Each time I got the error "-bash: xxxx: command not found". xxxx here is whatever input I gave.
Finally, after spending 5 hours on this, I stopped running server.py in the background.
Terminal 1: python server.py
Terminal 2: python client.py
And voilà, it worked. raw_input and input did not give me this error again.
I am not sure if this helps, but this is the only post I found on the internet with exactly the same issue as mine, and I thought it might help someone.
So I wrote this script called py_script.py that I ran over an ssh session on a school machine:
import time
import os

while True:
    os.system("echo still_alive")
    time.sleep(60)
... by doing:
bash $ python py_script.py &
Is this going to prevent the dreaded broken pipe message from happening?
The problem is, after a period of inactivity over an ssh connection, my connection gets dropped. To prevent this, I wrote the above script, which automatically writes a message to the console so that it counts as an "action" and I don't have to press enter every 5 minutes. (I'm idle on a machine and need to run a process for a good amount of time.)
If your connection is timing out then it is more advisable to look at SSH configuration options which can keep your connection alive.
As a starter example, put the following in a file called ~/.ssh/config:
Host *
    ServerAliveInterval 20
    TCPKeepAlive yes
You can read more here.
I am writing an application that interacts with numerous systems, specifically with switches.
I am trying to implement a function that will let me retrieve logs from a specific switch using Fabric (Python).
In a real session on the switch, I would first need to run "enable" (and press the Enter key) and then run the "debug generate dump" command.
Using fabric.operations.run() I can only issue one command at a time.
Using fabric.operations.open_shell() is not an option, since I need to parse the output and also close the connection once it finishes.
Can someone assist with this?
Thanks!
Here is an example of the code:
def getSwitchLog(self, host, port, username, password):
    env.host_string = "%s:%s" % (host, port)
    env.user = username
    env.password = password
    command = 'enable \r debug generate dump'
    run(command, shell=cli, pty=True, combine_stderr=True, timeout=120)
shell=cli - because the switch does not run bash, and 'cli' is the appropriate value in this case.
\r should have sent the Enter key, essentially sending 1. enable, 2. Enter, 3. debug generate dump.
This method works if I replace run with open_shell,
but it seems run ignores \r.
I was able to achieve what I need using:
command = 'sshpass -p admin ssh admin@switchIP cli "enable" "show version"'
fabric.api.local(command, capture=True, shell=None)
However, this method is not as robust as fabric.api.run() and also requires the running node to have sshpass installed.
This is an example of the output from the switch CLI when the commands are entered interactively (from the keyboard), without Fabric:
[standalone: master] > enable
[standalone: master] # debug generate dump
[standalone: master] # debug generate dump Generated dump sysdump-SX6036-1-20130630-104051.tgz
[standalone: master] #
thanks.
So working with a session state isn't something Fabric does; every call is a new session. There are some other projects that try to get around this, one being fexpect, but since you're attempting to query a switch I don't believe that will work, because fexpect (last I knew) uploads an expect script to the remote machine and then runs it there.
What you might have better luck with, though, is pxssh from the pexpect module. It makes ssh+expect-style work simple enough. It's outside Fabric, but more likely to work for you right out of the gate, I think.
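A minimal pxssh sketch of what that could look like (untested against a real switch; the host, credentials, and prompt pattern are assumptions taken from the question's session transcript, and auto_prompt_reset=False is used because the switch CLI is not a bourne-style shell pxssh could reset). The filename-extraction helper is hypothetical, but it can be checked against the sample output in the question:

```python
import re

def extract_dump_name(output):
    # Pull the archive name out of CLI output such as
    # "Generated dump sysdump-SX6036-1-20130630-104051.tgz"
    m = re.search(r"Generated dump (\S+\.tgz)", output)
    return m.group(1) if m else None

def get_switch_dump(host, user, password):
    from pexpect import pxssh  # third-party: pip install pexpect

    s = pxssh.pxssh()
    # The switch has no bourne prompt to reset; match its own prompt instead.
    s.login(host, user, password,
            original_prompt=r"[>#]", auto_prompt_reset=False)
    s.PROMPT = r"\[standalone: master\] [>#]"  # prompt from the question
    s.sendline("enable")
    s.prompt()
    s.sendline("debug generate dump")
    s.prompt()
    output = s.before.decode() if isinstance(s.before, bytes) else s.before
    s.logout()
    return extract_dump_name(output)
```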
Or work with robot-framework's SSHLibrary, which is based on paramiko. It has a simpler API (write / read_until / read_all) to interact with your switch shell.
http://robotframework.org/SSHLibrary/latest/SSHLibrary.html