I am trying to use Python to set up a reverse SSH tunnel. Some software that starts with the system will manage it, killing or starting it based on commands it receives.
I have written a class to manage the reverse tunnel as follows:
# imports omitted for brevity
class SshProcess():
    def __init__(self):
        self.process = None

    def start(self, port):
        if self.process is not None:
            return None
        command = [
            # 'sudo',
            'ssh',
            '-R {port}:127.0.0.1:22'.format(port=port),
            '{username}@{host}'.format(username=config.USERNAME, host=config.HOST),
            '-o StrictHostKeyChecking=no'
        ]

        def threaded_popen():
            self.process = subprocess.Popen(
                (' '.join(command)),  # command, # shlex.split(command),
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
                shell=True
            )
            self.process.wait()
            logger.info('Reverse SSH to {username}@{host} has exited'.format(username=config.USERNAME, host=config.HOST))

        logger.debug('command raw: {command}'.format(command=command))
        logger.debug('command joined: {command}'.format(command=(' '.join(command))))
        self.thread = Thread(target=threaded_popen)
        self.thread.start()

    def stop(self):
        if self.process is not None:
            try:
                self.process.communicate(input="exit\n")
                self.process.terminate()
            except (ValueError, OSError) as e:
                logger.warning('Closing reverse SSH raised {error}'.format(error=e.__class__.__name__))
                logger.warning(e)
            self.process = None
        if self.thread is not None:
            self.thread.join()
Now whenever I call start I receive the following log statements:
2017-06-28 14:32:46,343 - module - DEBUG - command raw: ['ssh', '-R 4000:127.0.0.1:22', 'tich@192.168.0.88', '-o StrictHostKeyChecking=no']
2017-06-28 14:32:46,344 - module - DEBUG - command joined: ssh -R 4000:127.0.0.1:22 tich@192.168.0.88 -o StrictHostKeyChecking=no
2017-06-28 14:32:46,797 - module - INFO - Reverse SSH to tich@192.168.0.88 has exited
The issue is that the SSH tunnel exits almost immediately after starting. Running a simple pidof ssh on Linux gives no output, as if the process does not even exist.
I have also tried using communicate() after starting the process, and I can see it establish the connection and receive output. However, shortly after the function exits, the subprocess exits as well.
I have set up RSA keypairs for both the root and the regular user. Copying and pasting the command into a terminal does not produce the instant-exit bug.
The purpose is to set up a reverse SSH session so a remote user can log in, but I have not yet found an existing packaged solution that offers this functionality.
You've constructed an unusual ssh invocation. My advice is to use paramiko, a great SSH package.
On the other hand, you are only spawning a subprocess to run a Linux command, so if you prefer to keep it that way:
Install sshpass (via yum install or apt-get), then:
sshpass -p your_password ssh user@hostname
And instead of the flag you passed, change the setting in ssh_config:
vi /etc/ssh/ssh_config
Change the key below from "ask" to "no":
StrictHostKeyChecking no
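For the reverse-tunnel use case itself, here is a minimal paramiko sketch; the host, username, and port are taken from the question's logs as placeholders, key-based auth is assumed, and the relay loop follows the pattern of paramiko's rforward demo:

import select
import socket
import threading

import paramiko

def relay(channel, host, port):
    # Bridge bytes between the forwarded channel and the local sshd.
    sock = socket.create_connection((host, port))
    while True:
        r, _, _ = select.select([sock, channel], [], [])
        if sock in r:
            data = sock.recv(4096)
            if not data:
                break
            channel.sendall(data)
        if channel in r:
            data = channel.recv(4096)
            if not data:
                break
            sock.sendall(data)
    channel.close()
    sock.close()

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('192.168.0.88', username='tich')  # assumes an RSA key is set up
transport = client.get_transport()
transport.request_port_forward('', 4000)  # equivalent of ssh -R 4000:127.0.0.1:22
while True:
    channel = transport.accept(60)
    if channel is None:
        continue
    threading.Thread(target=relay, args=(channel, '127.0.0.1', 22), daemon=True).start()

Because the tunnel lives inside your own process, stopping it is just client.close(), which fits the start/stop class in the question.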
Related
I am writing a program in Python on Ubuntu to execute a command, ls -l, on a Raspberry Pi connected over the network.
Can anybody guide me on how to do that?
Sure, there are several ways to do it!
Let's say you've got a Raspberry Pi on a raspberry.lan host and your username is irfan.
subprocess
It's the standard-library Python module for running commands.
You can make it run ssh and do whatever you need on a remote server.
scrat has it covered in his answer. You definitely should do this if you don't want to use any third-party libraries.
You can also automate the password/passphrase entering using pexpect.
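A rough sketch of that pexpect approach, using the example host and user from above (password auth is assumed):

import pexpect

child = pexpect.spawn('ssh irfan@raspberry.lan ls -l')
child.expect('password:')
child.sendline('my_strong_password')
child.expect(pexpect.EOF)
print(child.before.decode())  # everything the command printed before EOF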
paramiko
paramiko is a third-party library that adds SSH-protocol support, so it can work like an SSH-client.
The example code that would connect to the server, execute and grab the results of the ls -l command would look like this:
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('raspberry.lan', username='irfan', password='my_strong_password')

stdin, stdout, stderr = client.exec_command('ls -l')
for line in stdout:
    print(line.strip('\n'))

client.close()
fabric
You can also achieve it using fabric.
Fabric is a deployment tool which executes various commands on remote servers.
It's often used to run stuff on a remote server, so you could easily put your latest version of the web application, restart a web-server and whatnot with a single command. Actually, you can run the same command on multiple servers, which is awesome!
Though it was made as a deploying and remote management tool, you still can use it to execute basic commands.
# fabfile.py
from fabric.api import *

def list_files():
    with cd('/'):  # change the directory to '/'
        result = run('ls -l')  # run the 'ls -l' command
        # you can do something with the result here,
        # though it will still be displayed in fabric itself
It's like typing cd / and ls -l in the remote server, so you'll get the list of directories in your root folder.
Then run in the shell:
fab list_files
It will prompt you for a server address:
No hosts found. Please specify (single) host string for connection: irfan@raspberry.lan
A quick note: You can also assign a username and a host right in a fab command:
fab list_files -U irfan -H raspberry.lan
Or you could put a host into the env.hosts variable in your fabfile. Here's how to do it.
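With Fabric 1.x that looks roughly like this (host string is the example one from above):

# fabfile.py
from fabric.api import cd, env, run

env.hosts = ['irfan@raspberry.lan']

def list_files():
    with cd('/'):
        run('ls -l')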
Then you'll be prompted for an SSH password:
[irfan@raspberry.lan] run: ls -l
[irfan@raspberry.lan] Login password for 'irfan':
And then the command will run successfully.
[irfan@raspberry.lan] out: total 84
[irfan@raspberry.lan] out: drwxr-xr-x 2 root root 4096 Feb 9 05:54 bin
[irfan@raspberry.lan] out: drwxr-xr-x 3 root root 4096 Dec 19 08:19 boot
...
Simple example from here:
import subprocess
import sys

HOST = "www.example.org"
# Ports are handled in ~/.ssh/config since we use OpenSSH
COMMAND = "uname -a"

ssh = subprocess.Popen(["ssh", "%s" % HOST, COMMAND],
                       shell=False,
                       stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE)
result = ssh.stdout.readlines()
if result == []:
    error = ssh.stderr.readlines()
    print("ERROR: %s" % error, file=sys.stderr)
else:
    print(result)
It does exactly what you want: connects over ssh, executes command, returns output. No third party library needed.
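On Python 3.7+, a sketch of the same idea with subprocess.run is a little tidier (same host and command; capture_output requires 3.7):

import subprocess

ssh = subprocess.run(["ssh", "www.example.org", "uname -a"],
                     capture_output=True, text=True)
if ssh.returncode != 0:
    print("ERROR:", ssh.stderr)
else:
    print(ssh.stdout)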
You may use the method below with the built-in ssh command on Linux/Unix.
import os
os.system('ssh username@ip bash < local_script.sh >> /local/path/output.txt 2>&1')
os.system('ssh username@ip python < local_program.py >> /local/path/output.txt 2>&1')
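If you'd rather avoid the shell redirection strings in os.system, a rough subprocess equivalent might look like this (same placeholder host and paths):

import subprocess

with open('local_script.sh') as script, open('/local/path/output.txt', 'ab') as out:
    subprocess.run(['ssh', 'username@ip', 'bash'], stdin=script, stdout=out, stderr=out)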
The paramiko module can be used to run multiple commands by invoking a shell. Here I created a class that invokes an SSH shell:
import logging
import select
import sys
import time

import paramiko

logger = logging.getLogger(__name__)

class ShellHandler:
    def __init__(self, host, user, psw):
        logger.debug("Initialising instance of ShellHandler host:{0}".format(host))
        try:
            self.ssh = paramiko.SSHClient()
            self.ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            self.ssh.connect(host, username=user, password=psw, port=22)
            self.channel = self.ssh.invoke_shell()
        except Exception:
            logger.error("Error creating ssh connection to {0}".format(host))
            logger.error("Exiting ShellHandler")
            return
        self.psw = psw
        self.stdin = self.channel.makefile('wb')
        self.stdout = self.channel.makefile('r')
        self.host = host
        time.sleep(2)
        while not self.channel.recv_ready():
            time.sleep(2)
        # Drain the login banner and the first prompt.
        self.initialprompt = ""
        while self.channel.recv_ready():
            rl, wl, xl = select.select([self.stdout.channel], [], [], 0.0)
            if len(rl) > 0:
                tmp = self.stdout.channel.recv(24)
                self.initialprompt = self.initialprompt + str(tmp.decode())

    def __del__(self):
        self.ssh.close()
        logger.info("closed connection to {0}".format(self.host))

    def execute(self, cmd):
        cmd = cmd.strip('\n')
        self.stdin.write(cmd + '\n')
        # self.stdin.write(self.psw + '\n')
        self.stdin.flush()
        time.sleep(1)
        while not self.stdout.channel.recv_ready():
            time.sleep(2)
            logger.debug("Waiting for recv_ready")
        output = ""
        while self.channel.recv_ready():
            rl, wl, xl = select.select([self.stdout.channel], [], [], 0.0)
            if len(rl) > 0:
                tmp = self.stdout.channel.recv(24)
                output = output + str(tmp.decode())
        return output
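A hypothetical usage sketch (host, user, and password are placeholders):

handler = ShellHandler('raspberry.lan', 'irfan', 'my_strong_password')
print(handler.initialprompt)       # login banner and first prompt
print(handler.execute('ls -l'))    # run commands in the same shell
print(handler.execute('uname -a'))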
If creating a different shell each time does not matter to you, then you can use a method like the one below.
    def run_cmd(self, cmd):
        try:
            cmd = cmd + '\n'
            # self.ssh.settimeout(60)
            stdin, stdout, stderr = self.ssh.exec_command(cmd)
            while not stdout.channel.eof_received:
                time.sleep(3)
                logger.debug("Waiting for eof_received")
            out = ""
            while stdout.channel.recv_ready():
                err = stderr.read()
                if err:
                    print("Error: ", self.host, str(err))
                    return False
                out = out + stdout.read().decode()
            if out:
                return out
        except Exception:
            error = sys.exc_info()
            logger.error(error)
            return False
I've just started using the psycopg2 module to access a local PostgreSQL server, and I'd like a way to programmatically check whether the server is already started, so my program can handle the error when I try to start the server:
psqldir = 'C:/Program Files/PostgreSQL/9.2/bin' # Windows
username = os.environ.get('USERNAME')
dbname = 'mydb'
os.chdir(psqldir)
os.system('pg_ctl start -D C:/mydatadir') # Error here if server already started
conn = psycopg2.connect('dbname=' + dbname + ' user=' + username)
cur = conn.cursor()
I experimented a little, and it seems that this returns a "0" or a "3", which would solve my problem, but I didn't find anything in the PostgreSQL/psycopg2 manuals confirming that this is documented behavior:
server_state = os.system('pg_ctl status -D C:/mydatadir')
What's the best way? Thanks!
From the pg_ctl documentation:
-w
Wait for the startup or shutdown to complete. Waiting is the default option for shutdowns, but not startups. When waiting for startup, pg_ctl repeatedly attempts to connect to the server. When waiting for shutdown, pg_ctl waits for the server to remove its PID file. pg_ctl returns an exit code based on the success of the startup or shutdown.
You will also find pg_ctl status useful:
status mode checks whether a server is running in the specified data directory. If it is, the PID and the command line options that were used to invoke it are displayed. If the server is not running, the process returns an exit status of 3.
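So a minimal check might look like this (data directory as in the question; it assumes pg_ctl is on the PATH, and exit status 3 meaning "not running" is per the documentation above):

import subprocess

status = subprocess.call(['pg_ctl', 'status', '-D', 'C:/mydatadir'])
if status == 0:
    print('server is running')
elif status == 3:
    print('server is not running')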
Try using the pgrep command.
import subprocess

def postgres_is_running():
    proc = subprocess.Popen(["pgrep -u postgres -f -- -D"], stdout=subprocess.PIPE, shell=True)
    (out, err) = proc.communicate()
    try:
        return int(out) > 0
    except Exception:
        return False
I need to create an ssh tunnel, then do something, then tear the tunnel down.
I have been trying to do it like this:
def runCmd(self, cmd):
    args = shlex.split(cmd)
    return subprocess.Popen(args)
def openTunnel(self):
    cmd = 'ssh -f -N -L 1313:localhost:1313 userid@server.com'
    self.TunnelObj = self.runCmd(cmd)
That creates my Tunnel.
I can then do the stuff I need to do. Now I want to tear down the tunnel.
def closeSocket(self):
    print('\nClosing Tunnel\n')
    if self.TunnelObj.returncode is None:
        print('\nabout to kill\n')
        self.TunnelObj.kill()
But the tunnel is still open. An ssh session still exists, and the port is still assigned.
How can I shut this tunnel down?
Part of the problem is that with -f, ssh forks into the background, so the process held in self.TunnelObj exits immediately and is not the tunnel itself. Try omitting the -f flag so you hold the tunnel process directly; a sketch follows.
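Here is what that might look like, reusing the question's command minus -f (host string is a placeholder):

import shlex
import subprocess

tunnel = subprocess.Popen(shlex.split('ssh -N -L 1313:localhost:1313 userid@server.com'))
# ... do the stuff that needs the tunnel ...
tunnel.terminate()  # now this kills the actual ssh tunnel process
tunnel.wait()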
Another option would be to look at the paramiko library and this question.
I'm using a simple pexpect script to ssh to a remote machine and grab a value returned by a command.
Is there any way, in pexpect or on the ssh side, to ignore the Unix greeting?
That is, from
child = pexpect.spawn('/usr/bin/ssh %s@%s' % (rem_user, host))
child.expect('[pP]assword: ', timeout=5)
child.sendline(spass)
child.expect([pexpect.TIMEOUT, prompt])
child.before = '0'
child.sendline('%s' % cmd2exec)
child.expect([pexpect.EOF, prompt])

# Collected data processing
result = child.before

# logging on to the machine returns a lot of garbage; the executed command's output is at the 57th position
print(result.split('\r\n')[57])
result = result.split('\r\n')[57]
How can I simply get the returned value, ignoring the "Last successful login" and "(c) Copyright" stuff, without having to worry about the value's exact position?
Thanks!
If you have access to the server to which you are logging in, you can try creating a file named .hushlogin in the home directory. The presence of this file silences the standard MOTD greeting and similar stuff.
Alternatively, try ssh -T, which will disable terminal allocation entirely; you won't get a shell prompt, but you may still issue commands and read the response.
There is also a similar thread on ServerFault which may be of some use to you.
If the command isn't interactive, you can just run ssh HOST COMMAND to run the command without all the login excitement happening at all. If the command is interactive, you can frequently use the ssh -t option (ssh -t HOST COMMAND) to force pseudo-tty allocation and trick the remote process to think that it's running attached to a TTY.
I have used paramiko to automate ssh connections, and I have found it useful. It can deal with greetings and silent execution.
http://www.lag.net/paramiko/
Hey there, you can kill all that noise by using the sys module and a small class:
import logging
import sys

class StreamToLogger(object):
    """
    Fake file-like stream object that redirects writes to a logger instance.
    """
    def __init__(self, logger, log_level=logging.INFO):
        self.logger = logger
        self.log_level = log_level
        self.linebuf = ''

    def write(self, buf):
        for line in buf.rstrip().splitlines():
            self.logger.log(self.log_level, line.rstrip())

# Redirect stdout and stderr to loggers.
stdout_logger = logging.getLogger('STDOUT')
sl = StreamToLogger(stdout_logger, logging.INFO)
sys.stdout = sl

stderr_logger = logging.getLogger('STDERR')
sl = StreamToLogger(stderr_logger, logging.ERROR)
sys.stderr = sl
Can't remember where I found that snippet, but it works for me :)