Paramiko: nest ssh session to another machine while preserving paramiko functionality (ProxyJump) - python

I'm trying to use paramiko to bounce an SSH session via netcat:
MyLocalMachine ----||----> MiddleMachine --(netcat)--> AnotherMachine
 ('localhost')  (firewall)   ('1.1.1.1')                ('2.2.2.2')
- There is no direct connection from MyLocalMachine to AnotherMachine.
- The SSH server on MiddleMachine will not accept any attempt to open a direct-tcpip channel to AnotherMachine.
- I can't use SSH keys; I can only connect with a given username and password.
- I can't use sshpass.
- I can't use PExpect.
- I want to connect automatically.
- I want to preserve all of paramiko's functionality.
I can achieve this partially using the following code:
cli = paramiko.SSHClient()
cli.set_missing_host_key_policy(paramiko.AutoAddPolicy())
proxy = paramiko.ProxyCommand('ssh user@1.1.1.1 nc 2.2.2.2 22')
cli.connect(hostname='2.2.2.2', username='user', password='pass', sock=proxy)
The thing is, because ProxyCommand uses subprocess.Popen to run the given command, it asks me for the password ad hoc, from user input (it also requires the OS on MyLocalMachine to have ssh installed, which isn't always the case).
Since ProxyCommand's methods (recv, send) are simple bindings to the appropriate Popen methods, I was wondering whether it would be possible to trick the paramiko client into using another client's session as the proxy.

Update 15.05.18: added the missing code (copy-paste gods haven't been favorable to me).
TL;DR: I managed to do it using a simple exec_command call and a class that pretends to be a sock.
To summarize:
- This solution uses no port other than 22. If you can manually connect to the machine by nesting ssh clients, it will work. It requires neither port forwarding nor configuration changes.
- It works without prompting for a password (everything is automatic).
- It nests ssh sessions while preserving paramiko functionality.
- You can nest sessions as many times as you want.
- It requires netcat (nc) installed on the proxy host, although anything that provides basic netcat functionality (moving data between a socket and stdin/stdout) will work.
So, here be the solution:
The masquerader
The following code defines a class that can be used in place of paramiko.ProxyCommand. It supplies all the methods that a standard socket object does. The __init__ method of this class takes the 3-tuple that exec_command() normally returns:
Note: I tested it extensively, but you shouldn't take anything for granted. It is a hack.
import paramiko
import time
import socket
from select import select


class ParaProxy(paramiko.proxy.ProxyCommand):
    """Pretends to be a socket, forwarding data through an exec_command channel."""

    def __init__(self, stdin, stdout, stderr):
        self.stdin = stdin
        self.stdout = stdout
        self.stderr = stderr
        self.timeout = None
        self.channel = stdin.channel

    def send(self, content):
        try:
            self.stdin.write(content)
        except IOError as exc:
            raise socket.error("Error: {}".format(exc))
        return len(content)

    def recv(self, size):
        try:
            buffer = b''
            start = time.time()
            while len(buffer) < size:
                select_timeout = self._calculate_remaining_time(start)
                ready, _, _ = select([self.stdout.channel], [], [],
                                     select_timeout)
                if ready and self.stdout.channel is ready[0]:
                    buffer += self.stdout.read(size - len(buffer))
        except socket.timeout:
            if not buffer:
                raise
        except IOError:
            return b''
        return buffer

    def _calculate_remaining_time(self, start):
        # Returns how much of self.timeout is left, or None for no timeout.
        if self.timeout is not None:
            elapsed = time.time() - start
            if elapsed >= self.timeout:
                raise socket.timeout()
            return self.timeout - elapsed
        return None

    def close(self):
        self.stdin.close()
        self.stdout.close()
        self.stderr.close()
        self.channel.close()
The usage
The following shows how I used the above class to solve my problem:
# Connecting to MiddleMachine and executing netcat
mid_cli = paramiko.SSHClient()
mid_cli.set_missing_host_key_policy(paramiko.AutoAddPolicy())
mid_cli.connect(hostname='1.1.1.1', username='user', password='pass')
io_tuple = mid_cli.exec_command('nc 2.2.2.2 22')
# Instantiate the 'masquerader' class
proxy = ParaProxy(*io_tuple)
# Connecting to AnotherMachine and executing... anything...
end_cli = paramiko.SSHClient()
end_cli.set_missing_host_key_policy(paramiko.AutoAddPolicy())
end_cli.connect(hostname='2.2.2.2', username='user', password='pass', sock=proxy)
end_cli.exec_command('echo THANK GOD FINALLY')
Et voilà.

Better to post this as a proposed answer: you can do the following.
The code is untested and will not work as-is, since it is very incomplete. I would recommend checking this amazing tutorial for reference: http://www.revsys.com/writings/quicktips/ssh-tunnel.html
From the middle machine:
ssh -f user@anothermachine -L 2000:localhost:22 -N
From the local machine:
paramiko.connect(middlemachine, 2000)
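For completeness, a minimal sketch of what the local-machine side might look like under that setup; it assumes the forwarded port 2000 on MiddleMachine is reachable from MyLocalMachine (which the firewall in the question may well prevent), and it uses AnotherMachine's credentials, since the forward terminates at AnotherMachine's SSH server:
import paramiko

cli = paramiko.SSHClient()
cli.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# Port 2000 on MiddleMachine is forwarded to AnotherMachine's port 22,
# so this login actually lands on AnotherMachine.
cli.connect(hostname='1.1.1.1', port=2000, username='user', password='pass')
cli.exec_command('echo hello')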

Related

Writing an external program to interface with wpa_supplicant

I need to interact directly with wpa_supplicant from Python. As I understand it one can connect to wpa_supplicant using Unix sockets and wpa_supplicant control interface (https://w1.fi/wpa_supplicant/devel/ctrl_iface_page.html).
I wrote a simple program that sends a PING command:
import socket

CTRL_SOCKETS = "/home/victor/Research/wpa_supplicant_python/supplicant_conf"
INTERFACE = "wlx84c9b281aa80"
SOCKETFILE = "{}/{}".format(CTRL_SOCKETS, INTERFACE)

s = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
s.connect(SOCKETFILE)
s.send(b'PING')
while 1:
    data = s.recv(1024)
    if data:
        print(repr(data))
But when I run it, wpa_supplicant reports an error:
wlx84c9b281aa80: ctrl_iface sendto failed: 107 - Transport endpoint is not connected
Could someone please provide an example of how you would do a 'scan' and then print the 'scan_results'?
Apparently, the type of socket that wpa_supplicant uses (UNIX datagram) does not provide any way for the server to reply. There are a few ways to get around that. wpa_supplicant in particular seems to support replies through a separate socket (found at a path appended at the end of each message).
Weirdly enough, this seems to be a relatively common practice in Linux: /dev/log seems to work in the same way.
Here's a program that does what you asked for:
import socket, os
from time import sleep

def sendAndReceive(outmsg, csock, ssock_filename):
    '''Sends outmsg to wpa_supplicant and returns the reply'''
    # The return socket object can be used to send the data
    # as long as the address is provided.
    csock.sendto(str.encode(outmsg), ssock_filename)
    (bytes, address) = csock.recvfrom(4096)
    inmsg = bytes.decode('utf-8')
    return inmsg

wpasock_file = '/var/run/wpa_supplicant/wlp3s0'
retsock_file = '/tmp/return_socket'
if os.path.exists(retsock_file):
    os.remove(retsock_file)

retsock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
retsock.bind(retsock_file)

replyToScan = sendAndReceive('SCAN', retsock, wpasock_file)
print(f'SCAN: {replyToScan}')
sleep(5)
replyToScanResults = sendAndReceive('SCAN_RESULTS', retsock, wpasock_file)
print(f'SCAN_RESULTS: {replyToScanResults}')

retsock.close()
os.remove(retsock_file)

Pass input/variables to command/script over SSH using Python Paramiko

I am having issues passing responses to a bash script on a remote server over SSH.
I am writing a program in Python 3.6.5 that will SSH to a remote Linux server.
On this remote Linux server there is a bash script I am running which requires user input. For whatever reason I cannot pass user input from my original Python program over SSH and have it answer the bash script's input prompts.
main.py
from tkinter import *
import SSH

hostname = 'xxx'
username = 'xxx'
password = 'xxx'

class Connect:
    def module(self):
        name = input()
        connection = SSH.SSH(hostname, username, password)
        connection.sendCommand(
            'cd xx/{}/xxxxx/ && source .cshrc && ./xxx/xxxx/xxxx/xxxxx'.format(path))
SSH.py
from paramiko import client

class SSH:
    client = None

    def __init__(self, address, username, password):
        print("Login info sent.")
        print("Connecting to server.")
        self.client = client.SSHClient()  # Create a new SSH client
        self.client.set_missing_host_key_policy(client.AutoAddPolicy())
        self.client.connect(
            address, username=username, password=password, look_for_keys=False)  # connect

    def sendCommand(self, command):
        print("Sending your command")
        # Check if connection was made previously
        if self.client:
            stdin, stdout, stderr = self.client.exec_command(command)
            while not stdout.channel.exit_status_ready():
                # Print stdout data when available
                if stdout.channel.recv_ready():
                    # Retrieve the first 1024 bytes
                    alldata = stdout.channel.recv(1024)
                    while stdout.channel.recv_ready():
                        # Retrieve the next 1024 bytes
                        alldata += stdout.channel.recv(1024)
                    # Print as string with utf8 encoding
                    print(str(alldata, "utf8"))
        else:
            print("Connection not opened.")
The final /xxxxxx in class Connect is the remote script that is launched.
It will open a text prompt awaiting a response, in a format such as:
What is your name:
and I cannot seem to find a way to properly pass the response to the script from my main.py file within the class Connect.
Every way I have tried to pass name as an argument or a variable, the answer seems to just disappear (likely because it is printed at the Linux prompt and not within the bash script).
I think using a read_until-style function to look for the : at the end of the question may work.
Suggestions?
Write the input that your command needs to stdin:
stdin, stdout, stderr = self.client.exec_command(command)
stdin.write(name + '\n')
stdin.flush()
(You will of course need to propagate the name variable from module to sendCommand, but I assume you know how to do that part).
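For illustration, here is one way that wiring could look: sendCommand grows an optional responses parameter (a name invented here) whose items are written to stdin in order, assuming the remote script reads its answers line by line:
def sendCommand(self, command, responses=None):
    stdin, stdout, stderr = self.client.exec_command(command)
    # Feed each prepared answer to the remote script's prompts.
    for answer in (responses or []):
        stdin.write(answer + '\n')
    stdin.flush()
    print(stdout.read().decode('utf8'))
The caller would then pass the value collected in module, e.g. connection.sendCommand('./script', responses=[name]).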

How to feed information to a Python daemon?

I have a Python daemon running on a Linux system.
I would like to feed information such as "Bob", "Alice", etc. and have the daemon print "Hello Bob." and "Hello Alice" to a file.
This has to be asynchronous. The Python daemon has to wait for information and print it whenever it receives something.
What would be the best way to achieve this?
I was thinking about a named pipe or the Queue library but there could be better solutions.
Here is how you can do it with a fifo:
# receiver.py
import os
import sys
import atexit

# Set up the FIFO
thefifo = 'comms.fifo'
os.mkfifo(thefifo)

# Make sure to clean up after ourselves
def cleanup():
    os.remove(thefifo)
atexit.register(cleanup)

# Go into the reading loop
while True:
    with open(thefifo, 'r') as fifo:
        for line in fifo:
            print "Hello", line.strip()
You can use it like this from a shell session:
$ python receiver.py &
$ echo "Alice" >> comms.fifo
Hello Alice
$ echo "Bob" >> comms.fifo
Hello Bob
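If you would rather send the names from another Python script instead of a shell, the equivalent of those echo commands is just writing to the FIFO like a file:
# Python-side sender, equivalent to: echo "Alice" >> comms.fifo
with open('comms.fifo', 'w') as fifo:
    fifo.write('Alice\n')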
There are several options:
1) If the daemon should accept messages from other systems, make the daemon an RPC server - Use xmlrpc/jsonrpc.
2) If it is all local, you can use either TCP sockets or Named PIPEs.
3) If there will be a huge set of clients connecting concurrently, you can use select.epoll.
Python has a built-in RPC library (using XML for data encoding). The documentation is well written, and there is a complete example there:
https://docs.python.org/2.7/library/xmlrpclib.html
(Python 2.7) or
https://docs.python.org/3.3/library/xmlrpc.server.html#module-xmlrpc.server
(Python 3.3)
That may be worth considering.
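As a rough Python 3 sketch of that approach (the greet method, port, and log file name are made up for the example):
from xmlrpc.server import SimpleXMLRPCServer

def greet(name):
    # The daemon's actual work goes here.
    with open('greetings.log', 'a') as f:
        f.write('Hello {}.\n'.format(name))
    return 'ok'

server = SimpleXMLRPCServer(('localhost', 8000))
server.register_function(greet)
server.serve_forever()
A client would then call it with xmlrpc.client.ServerProxy('http://localhost:8000').greet('Bob').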
Everyone has mentioned FIFOs (that's named pipes in Linux terminology) and XML-RPC, but if you are learning these things right now, you should check out TCP/UDP/Unix sockets as well, since they are platform independent (at least, TCP/UDP sockets are). You can check this tutorial for a working example, or the Python documentation if you want to go deeper in that direction. They are also useful to know since most modern communication platforms (XML-RPC, SOAP, REST) are built on these basics.
There are a few mechanisms you could use, but everything boils down to IPC (inter-process communication).
The actual mechanism you will use depends on the details of what you want to achieve; a good solution, though, would be to use something like zmq.
Check the following example on pub/sub on zmq
http://learning-0mq-with-pyzmq.readthedocs.org/en/latest/pyzmq/patterns/pubsub.html
also this
http://learning-0mq-with-pyzmq.readthedocs.org/en/latest/pyzmq/multisocket/zmqpoller.html
for the non-blocking way.
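To give a feel for the pyzmq variant, here is a bare-bones SUB socket that could sit inside the daemon; the port is arbitrary and the empty subscription string means every message is delivered:
import zmq

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.bind('tcp://127.0.0.1:5555')
sub.setsockopt_string(zmq.SUBSCRIBE, '')  # subscribe to everything
while True:
    name = sub.recv_string()
    print('Hello', name)
A sender would connect a PUB socket to the same address and call send_string('Bob').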
I'm not good at Python, so I would like to share
Universal inter-process communication
nc, a.k.a. netcat, is a server-client program that allows sending data such as text and files over the network.
Advantages of nc:
- Very easy to use
- IPC even between different programming languages
- Built in on most Linux OSes
Example
On the daemon:
nc -l 1234 > output.txt
From another program or shell/terminal/script:
echo HELLO | nc 127.0.0.1 1234
nc can be driven from Python using a system-command function (for example os.system or subprocess) and by reading its stdout.
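A sketch of that idea using subprocess rather than os.system; note that the exact listen flags vary between netcat flavours (traditional netcat wants nc -l -p 1234):
import subprocess

# Listen on port 1234 and greet every line that arrives.
proc = subprocess.Popen(['nc', '-l', '1234'], stdout=subprocess.PIPE)
for line in proc.stdout:
    print('Hello', line.decode().strip())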
Why not use signals?
I am not a Python programmer, but presumably you can register a signal handler within your daemon and then signal it from the terminal. Just use SIGUSR1 or SIGHUP or similar.
This is the usual method used to rotate logfiles and the like.
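A hedged sketch of the signal idea: the handler re-reads a spool file (the path is made up here) whenever the daemon receives SIGUSR1, so something like echo Bob >> /tmp/names.txt && kill -USR1 <pid> would trigger a greeting:
import signal

def on_usr1(signum, frame):
    # Hypothetical input file that other processes append names to.
    with open('/tmp/names.txt') as f:
        for name in f:
            print('Hello', name.strip())

signal.signal(signal.SIGUSR1, on_usr1)
while True:
    signal.pause()  # sleep until a signal arrives (Unix only)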
One solution could be to use the asynchat library, which simplifies calls between a server and a client.
Here is an example you could use (adapted from this site).
In daemon.py, a ChatServer object is created. Each time a connection is made, a ChatHandler object (inheriting from asynchat.async_chat) is created. This object collects data and fills it into self.buffer.
When a special string called the terminator is encountered, the data is considered complete and the method found_terminator is called. It is in this method that you write your own code.
In sender.py, you create a ChatClient object inheriting from asynchat.async_chat, set up the connection in the constructor, define the terminator (in case the server answers!) and call the push method to send your data. You must append the terminator string to your data so the server knows when it can stop reading.
daemon.py:
import asynchat
import asyncore
import socket

# Terminator string can be changed here
TERMINATOR = '\n'

class ChatHandler(asynchat.async_chat):
    def __init__(self, sock):
        asynchat.async_chat.__init__(self, sock=sock)
        self.set_terminator(TERMINATOR)
        self.buffer = []

    def collect_incoming_data(self, data):
        self.buffer.append(data)

    def found_terminator(self):
        msg = ''.join(self.buffer)
        # Change here what the daemon is supposed to do when a message is retrieved
        print 'Hello', msg
        self.buffer = []

class ChatServer(asyncore.dispatcher):
    def __init__(self, host, port):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.bind((host, port))
        self.listen(5)

    def handle_accept(self):
        pair = self.accept()
        if pair is not None:
            sock, addr = pair
            print 'Incoming connection from %s' % repr(addr)
            handler = ChatHandler(sock)

server = ChatServer('localhost', 5050)
print 'Serving on localhost:5050'
asyncore.loop()
sender.py:
import asynchat
import asyncore
import socket
import threading

# Terminator string can be changed here
TERMINATOR = '\n'

class ChatClient(asynchat.async_chat):
    def __init__(self, host, port):
        asynchat.async_chat.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.connect((host, port))
        self.set_terminator(TERMINATOR)
        self.buffer = []

    def collect_incoming_data(self, data):
        pass

    def found_terminator(self):
        pass

client = ChatClient('localhost', 5050)

# Data sent from here
client.push("Bob" + TERMINATOR)
client.push("Alice" + TERMINATOR)

How to achieve tcpflow functionality (follow tcp stream) purely within python

I am writing a tool in Python (platform is Linux); one of its tasks is to capture a live TCP stream and apply a function to each line. Currently I'm using:
import subprocess

proc = subprocess.Popen(
    ['sudo', 'tcpflow', '-C', '-i', interface, '-p', 'src', 'host', ip],
    stdout=subprocess.PIPE)
for line in iter(proc.stdout.readline, ''):
    do_something(line)
This works quite well (with the appropriate entry in /etc/sudoers), but I would like to avoid calling an external program.
So far I have looked into the following possibilities:
- flowgrep: a Python tool which looks just like what I need, BUT it uses pynids internally, which is 7 years old and seems pretty much abandoned. There is no pynids package for my Gentoo system, and it ships with a patched version of libnids which I couldn't compile without further tweaking.
- scapy: this is a packet manipulation program/library for Python; I'm not sure whether TCP stream reassembly is supported.
- pypcap or pylibpcap, as wrappers for libpcap. Again, libpcap is for packet capturing, whereas I need stream reassembly, which is not possible according to this question.
Before I dive deeper into any of these libraries, I would like to know whether someone has a working code snippet (this seems like a rather common problem). I would also be grateful for advice about the right way to go.
Thanks
Jon Oberheide has led efforts to maintain pynids, which is fairly up to date at:
http://jon.oberheide.org/pynids/
So, this might permit you to further explore flowgrep. Pynids itself handles stream reconstruction rather elegantly. See http://monkey.org/~jose/presentations/pysniff04.d/ for some good examples.
Just as a follow-up: I abandoned the idea of monitoring the stream on the TCP layer. Instead I wrote a proxy in Python and let the connection I want to monitor (an HTTP session) go through this proxy. The result is more stable and does not need root privileges to run. This solution depends on pymiproxy.
This goes into a standalone program, e.g. helper_proxy.py
from multiprocessing.connection import Listener
import StringIO
from httplib import HTTPResponse
import threading
import time
from miproxy.proxy import RequestInterceptorPlugin, ResponseInterceptorPlugin, AsyncMitmProxy

class FakeSocket(StringIO.StringIO):
    def makefile(self, *args, **kw):
        return self

class Interceptor(RequestInterceptorPlugin, ResponseInterceptorPlugin):
    conn = None

    def do_request(self, data):
        # do whatever you need with sent data here, I'm only interested in responses
        return data

    def do_response(self, data):
        if Interceptor.conn:  # if the listener is connected, send the response to it
            response = HTTPResponse(FakeSocket(data))
            response.begin()
            Interceptor.conn.send(response.read())
        return data

def main():
    proxy = AsyncMitmProxy()
    proxy.register_interceptor(Interceptor)
    ProxyThread = threading.Thread(target=proxy.serve_forever)
    ProxyThread.daemon = True
    ProxyThread.start()
    print "Proxy started."
    address = ('localhost', 6000)  # family is deduced to be 'AF_INET'
    listener = Listener(address, authkey='some_secret_password')
    while True:
        Interceptor.conn = listener.accept()
        print "Accepted Connection from", listener.last_accepted
        try:
            Interceptor.conn.recv()
        except:
            time.sleep(1)
        finally:
            Interceptor.conn.close()

if __name__ == '__main__':
    main()
Start it with python helper_proxy.py. This creates a proxy listening for HTTP connections on port 8080 and for another Python program on port 6000. Once the other Python program connects on that port, the helper proxy sends it all HTTP replies. This way the helper proxy can keep running, maintaining the HTTP connection, while the listener can be restarted for debugging.
Here is how the listener works, e.g. listener.py:
from multiprocessing.connection import Client

def main():
    address = ('localhost', 6000)
    conn = Client(address, authkey='some_secret_password')
    while True:
        print conn.recv()

if __name__ == '__main__':
    main()
This will just print all the replies. Now point your browser to the proxy running on port 8080 and establish the HTTP connection you want to monitor.

Paramiko SSH Tunnel Shutdown Issue

I'm working on a Python script to query a few remote databases over an established SSH tunnel every so often. I'm fairly familiar with the paramiko library, so that was my route of choice. I'd prefer to keep this entirely in Python, so I can use paramiko to deal with keys, as well as use Python to start, control, and shut down the SSH tunnels.
There have been a few related questions around here about this topic, but most of them seemed to have incomplete answers. My solution below is hacked together from the solutions I've found so far.
Now for the problem: I'm able to create the first tunnel quite easily (in a separate thread) and do my DB/Python stuff, but when attempting to close the tunnel, the localhost won't release the local port I bound to. Below, I've included my source and the relevant netstat data through each step of the process.
#!/usr/bin/python

import select
import SocketServer
import sys
import paramiko
from threading import Thread
import time

class ForwardServer(SocketServer.ThreadingTCPServer):
    daemon_threads = True
    allow_reuse_address = True

class Handler(SocketServer.BaseRequestHandler):
    def handle(self):
        try:
            chan = self.ssh_transport.open_channel('direct-tcpip',
                                                   (self.chain_host, self.chain_port),
                                                   self.request.getpeername())
        except Exception, e:
            print('Incoming request to %s:%d failed: %s' % (self.chain_host, self.chain_port, repr(e)))
            return
        if chan is None:
            print('Incoming request to %s:%d was rejected by the SSH server.' % (self.chain_host, self.chain_port))
            return
        print('Connected! Tunnel open %r -> %r -> %r' % (self.request.getpeername(), chan.getpeername(), (self.chain_host, self.chain_port)))
        while True:
            r, w, x = select.select([self.request, chan], [], [])
            if self.request in r:
                data = self.request.recv(1024)
                if len(data) == 0:
                    break
                chan.send(data)
            if chan in r:
                data = chan.recv(1024)
                if len(data) == 0:
                    break
                self.request.send(data)
        chan.close()
        self.request.close()
        print('Tunnel closed from %r' % (self.request.getpeername(),))

class DBTunnel():
    def __init__(self, ip):
        self.c = paramiko.SSHClient()
        self.c.load_system_host_keys()
        self.c.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        self.c.connect(ip, username='someuser')
        self.trans = self.c.get_transport()

    def startTunnel(self):
        class SubHandler(Handler):
            chain_host = '127.0.0.1'
            chain_port = 5432
            ssh_transport = self.c.get_transport()

        def ThreadTunnel():
            global t
            t = ForwardServer(('', 3333), SubHandler)
            t.serve_forever()

        Thread(target=ThreadTunnel).start()

    def stopTunnel(self):
        t.shutdown()
        self.trans.close()
        self.c.close()
Although I will end up using a stopTunnel()-type method, I realize that code isn't entirely correct; it is more an experiment in trying to get the tunnel to shut down properly and test my results.
When I first create the DBTunnel object and call startTunnel(), netstat yields the following:
tcp4 0 0 *.3333 *.* LISTEN
tcp4 0 0 MYIP.36316 REMOTE_HOST.22 ESTABLISHED
tcp4 0 0 127.0.0.1.5432 *.* LISTEN
Once I call stopTunnel(), or even delete the DBTunnel object itself, I'm left with this connection until I exit Python altogether, at which point what I assume to be the garbage collector takes care of it:
tcp4 0 0 *.3333 *.* LISTEN
It would be nice to figure out why this open socket hangs around independent of the DBTunnel object, and how to close it properly from within my script. If I try to bind a different connection to a different IP using the same local port before completely exiting Python (TIME_WAIT is not the issue), I get the infamous bind error 48, address in use. Thanks in advance :)
It appears SocketServer's shutdown method isn't properly shutting down/closing the socket. With the changes below, I retain access to the server object and access its socket directly to close it. Note that socket.close() works in my case, but others might want socket.shutdown() followed by socket.close() if other resources are accessing that socket.
(Ref: socket.shutdown vs socket.close)
def ThreadTunnel():
    self.t = ForwardServer(('127.0.0.1', 3333), SubHandler)
    self.t.serve_forever()
Thread(target=ThreadTunnel).start()

def stopTunnel(self):
    self.t.shutdown()
    self.trans.close()
    self.c.close()
    self.t.socket.close()
Note that you don't have to do the SubHandler hack shown in the demo code. That comment is wrong: handlers do have access to their server's data; inside a handler you can use self.server.instance_data.
If you use the following code, then in your Handler you would use:
self.server.chain_host
self.server.chain_port
self.server.ssh_transport
class ForwardServer(SocketServer.ThreadingTCPServer):
    daemon_threads = True
    allow_reuse_address = True

    def __init__(self, connection, handler, chain_host, chain_port, ssh_transport):
        SocketServer.ThreadingTCPServer.__init__(self, connection, handler)
        self.chain_host = chain_host
        self.chain_port = chain_port
        self.ssh_transport = ssh_transport

...

server = ForwardServer(('', local_port), Handler,
                       remote_host, remote_port, transport)
server.serve_forever()
You may want to add some synchronization between the spawned thread and the caller so that you don't try to use the tunnel before it is ready. Something like:
from threading import Event

def startTunnel(self):
    class SubHandler(Handler):
        chain_host = '127.0.0.1'
        chain_port = 5432
        ssh_transport = self.c.get_transport()

    mysignal = Event()
    mysignal.clear()

    def ThreadTunnel():
        global t
        t = ForwardServer(('', 3333), SubHandler)
        mysignal.set()
        t.serve_forever()

    Thread(target=ThreadTunnel).start()
    mysignal.wait()
You can also try sshtunnel. It has two ways to close a tunnel: .stop() if you want to wait until the end of all active connections, or .stop(force=True) to close all active connections immediately.
If you don't want to use it, you can check the source code for this logic here: https://github.com/pahaz/sshtunnel/blob/090a1c1/sshtunnel.py#L1423-L1456
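A short sketch of the sshtunnel equivalent of the code above, with keyword arguments as documented in the project's README (verify against your installed version):
from sshtunnel import SSHTunnelForwarder

server = SSHTunnelForwarder(
    'remote_host',
    ssh_username='someuser',
    remote_bind_address=('127.0.0.1', 5432),
    local_bind_address=('127.0.0.1', 3333),
)
server.start()
# ... query the database through 127.0.0.1:3333 ...
server.stop()              # waits for active connections to finish
# server.stop(force=True)  # or drop them immediately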
