Provide remote shell for Python script

I want a convenient, simple way to connect to my running Python script remotely (via file sockets, TCP, or whatever) to get a remote interactive shell.
I thought that this would be easy via IPython or so. However, I didn't really find any good example. I tried to start IPython.embed_kernel(), but that is blocking, so I tried to run it in another thread. That had many strange side effects on the rest of my script, and I don't want any side effects (no replacement of sys.stdout, sys.stderr, sys.excepthook, or whatever). It also didn't work: I could not connect. I found this related bug report and this code snippet, which suggest using mock.patch('signal.signal'), but that also didn't work. Besides, why would I need that at all? I also don't want IPython to register any signal handlers.
There are also hacks such as pyringe and my own pydbattach that attach to some running Python instance, but they seem too hacky.
Maybe QdbRemotePythonDebugger can help me?

My current solution is to set up an IPython ZMQ kernel. I don't just use
IPython.embed_kernel()
because that has many side effects, such as messing around with sys.stdout, sys.stderr, sys.excepthook, signal.signal, etc., and I don't want these side effects. Also, embed_kernel() is blocking and doesn't really work out of the box in a separate thread (see here).
So I came up with this code, which is far too complicated in my opinion. (That is why I created a feature request here.)
def initIPythonKernel():
    # You can remotely connect to this kernel. See the output on stdout.
    try:
        import IPython.kernel.zmq.ipkernel
        from IPython.kernel.zmq.ipkernel import Kernel
        from IPython.kernel.zmq.heartbeat import Heartbeat
        from IPython.kernel.zmq.session import Session
        from IPython.kernel import write_connection_file
        import zmq
        from zmq.eventloop import ioloop
        from zmq.eventloop.zmqstream import ZMQStream
        IPython.kernel.zmq.ipkernel.signal = lambda sig, f: None  # Overwrite.
    except ImportError, e:
        print "IPython import error, cannot start IPython kernel. %s" % e
        return
    import atexit
    import os
    import socket
    import logging
    import threading
    # Do in mainthread to avoid history sqlite DB errors at exit.
    # https://github.com/ipython/ipython/issues/680
    assert isinstance(threading.currentThread(), threading._MainThread)
    try:
        connection_file = "kernel-%s.json" % os.getpid()
        def cleanup_connection_file():
            try:
                os.remove(connection_file)
            except (IOError, OSError):
                pass
        atexit.register(cleanup_connection_file)
        logger = logging.Logger("IPython")
        logger.addHandler(logging.NullHandler())
        session = Session(username=u'kernel')
        context = zmq.Context.instance()
        ip = socket.gethostbyname(socket.gethostname())
        transport = "tcp"
        addr = "%s://%s" % (transport, ip)
        shell_socket = context.socket(zmq.ROUTER)
        shell_port = shell_socket.bind_to_random_port(addr)
        iopub_socket = context.socket(zmq.PUB)
        iopub_port = iopub_socket.bind_to_random_port(addr)
        control_socket = context.socket(zmq.ROUTER)
        control_port = control_socket.bind_to_random_port(addr)
        hb_ctx = zmq.Context()
        heartbeat = Heartbeat(hb_ctx, (transport, ip, 0))
        hb_port = heartbeat.port
        heartbeat.start()
        shell_stream = ZMQStream(shell_socket)
        control_stream = ZMQStream(control_socket)
        kernel = Kernel(session=session,
                        shell_streams=[shell_stream, control_stream],
                        iopub_socket=iopub_socket,
                        log=logger)
        write_connection_file(connection_file,
                              shell_port=shell_port, iopub_port=iopub_port,
                              control_port=control_port, hb_port=hb_port,
                              ip=ip)
        print "To connect another client to this IPython kernel, use:", \
              "ipython console --existing %s" % connection_file
    except Exception, e:
        print "Exception while initializing IPython ZMQ kernel. %s" % e
        return
    def ipython_thread():
        kernel.start()
        try:
            ioloop.IOLoop.instance().start()
        except KeyboardInterrupt:
            pass
    thread = threading.Thread(target=ipython_thread, name="IPython kernel")
    thread.daemon = True
    thread.start()
Note that this code is outdated now. I have made a package here which should contain a more recent version, and which can be installed via pip.
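For reference, on recent IPython/Jupyter versions the kernel machinery lives in the ipykernel package instead of IPython.kernel. A minimal sketch (assuming ipykernel is installed; note it is still blocking and still installs its own handlers, so it does not avoid the side effects discussed above):
import os
from ipykernel.embed import embed_kernel

def main():
    data = {"answer": 42}  # some local state you want to inspect remotely
    # Writes kernel-<pid>.json to the Jupyter runtime dir and blocks.
    # Connect from another terminal with:
    #   jupyter console --existing kernel-<pid>.json
    embed_kernel(local_ns=data)

if __name__ == "__main__":
    print("PID: %s" % os.getpid())
    main()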
There are other alternatives for attaching to a running CPython process without having prepared it beforehand. These usually use the OS debugging capabilities (or gdb/lldb) to attach to the native CPython process and then inject some code or just analyze the native CPython thread stacks.
pyringe
pyrasite
pystuck
pdb-clone
Here are other alternatives where you prepare your Python script beforehand to listen on some (TCP/file) socket to provide an interface for remote debugging and/or just a Python shell / REPL.
winpdb (cross platform) remote debugger
PyCharm IDE remote debugger,
doc
PyDev IDE remote debugger
Twisted Conch Manhole,
official example,
lothar.com example,
lysator.liu.se example,
related StackOverflow question,
blog.futurefoundries.com (2013)
very simple manhole, has also some overview over related projects
ispyd
Eric IDE
Trepan (based on pydb)
rpdb
rconsole (part of rfoo)
Some overviews and collected code examples:
(QGIS) Example code for PyDev, Winpdb, Eric
Python Wiki: Python debugging tools,
Python Wiki: Python debuggers
(This overview is from here.)

Related

Multiple server login using Putty with Pywinauto python script

How can I log in to multiple servers using pywinauto? I am using the program below to open PuTTY and run a command. It works for a single server when I define it like this:
app = Application().Start(cmd_line='C:\Program Files\PuTTY\putty.exe -ssh user@10.55.167.4')
But when I pass the servers in a for loop to do the same task for the other server, it does not work.
from pywinauto.application import Application
import time

server = ["10.55.167.4", "10.56.127.23"]
for inser in server:
    print(inser)
    app = Application().Start(cmd_line='C:\Program Files\PuTTY\putty.exe -ssh user@inser')
    putty = app.PuTTY
    putty.Wait('ready')
    time.sleep(1)
    putty.TypeKeys("aq@wer")
    putty.TypeKeys("{ENTER}")
    putty.TypeKeys("ls")
    putty.TypeKeys("{ENTER}")
pywinauto is a Python module to automate graphical user interface (GUI) operations, for example to simulate mouse clicks and everything else humans do on GUIs to interact with the computer. SSH is a remote access protocol designed primarily for the command line, with excellent programmatic support in Python. PuTTY is a little GUI tool to manage SSH and Telnet connections. (Incidentally, the immediate bug in your loop is that inser is never interpolated into the cmd_line string, so every iteration tries to connect to the literal host name "inser".) Although, based on a quick check, I think it is possible to read the console output (you mean in cmd.exe?) with pywinauto, your approach is unnecessarily complicated: you don't need to use a GUI tool designed for human interaction to access a protocol with excellent command line and programmatic library support. I suggest you use paramiko, which gives a very simple and convenient interface to SSH:
import paramiko

def ssh_connect(server, **kwargs):
    client = paramiko.client.SSHClient()
    client.load_system_host_keys()
    # note: here you can give other parameters
    # like user, port, password, key file, etc.
    # also you can wrap the call below into
    # try/except; you can expect many kinds of
    # errors here, if the address is unreachable,
    # times out, authentication fails, etc.
    client.connect(server, **kwargs)
    return client

servers = ['10.55.167.4', '10.56.127.23']

# connect to all servers
clients = dict((server, ssh_connect(server)) for server in servers)

# execute commands on the servers by the clients, 2 examples:
stdin, stdout, stderr = clients['10.55.167.4'].exec_command('pwd')
print(stdout.read())
# b'/home/denes\n'
stdin, stdout, stderr = clients['10.56.127.23'].exec_command('rm foobar')
print(stderr.read())
# b"rm: cannot remove 'foobar': No such file or directory\n"

# once you finished close the connections
_ = [cl.close() for cl in clients.values()]
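If the servers need credentials or a non-default port, note that ssh_connect passes any keyword arguments straight through to client.connect, so (with hypothetical values) the connection step could look like:
clients = dict(
    (server, ssh_connect(server, username='user', password='secret', port=22))
    for server in servers)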

Paramiko: Error with windows file paths when transfering file over SFTP

I'm currently working on a server-client setup in which I have two separate server scripts. One Python script is responsible for running an SSH listener with Paramiko, and that script runs on one machine. I have another server script specifically acting as an SFTP server on another, separate machine, within the same range and subnet as the first one.
My client code is running on a windows 10 system. Both servers are running in unix environments (macOS and Ubuntu 16.04 respectively).
The SFTP server that I am running is aptly titled sftpserver, and is available at https://github.com/rspivak/sftpserver/.
The below code is actually the entirety of my client.py as it stands, minus the import statements.
key = paramiko.RSAKey.from_private_key_file('testkey.key')

transport = paramiko.Transport(('192.168.1.116', 10000))
transport.connect(username='root', password='toor', pkey=key)

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('192.168.1.107', username='root', password='toor')
chan = client.get_transport().open_session()
chan.send("Hey man! I'm connected!")
print(chan.recv(1024))

def sftp(localpath, name):
    try:
        sftp = paramiko.SFTPClient.from_transport(transport)
        sftp.put(localpath, '/root/uploads/' + name)
        sftp.close()
        transport.close()
        return "<+> Done uploading"
    except Exception as e:
        return str(e)

while True:
    command = chan.recv(1024).decode()
    ipdb.set_trace()  # <-- debugging purposes only
    if 'grab' in command:
        _, path, name = command.split(' ')
        chan.send(sftp(path, name))
    else:
        try:
            CMD = subprocess.check_output(command, shell=True)
            chan.send(CMD)
        except Exception as e:
            chan.send(str(e))
client.close()
Executing the grab command in my script looks like this:
grab C:\Users\xxx\testing.txt testing.txt
Now, if I write a path exactly like that (with the backslashes), it appends a second backslash after each one. So the path I supplied now looks like C:\\Users\\xxx\\testing.txt, and this is what I imagine is causing me to receive File not found errors. Thanks to pdb I was able to find this issue, but I am unsure how to continue. In all honesty, I am completely unsure whether this problem is paramiko related or some weird Python behavior that I haven't encountered yet.
Also, sorry for no stack trace. I'll try to obtain one if possible, but I'm a bit pressed for time right this second.
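A note on the doubled backslashes: pdb/ipdb display values using Python's repr, which escapes backslashes, so C:\\Users\\xxx\\testing.txt is usually just the display form of a string that really contains single backslashes:
>>> path = 'C:\\Users\\xxx\\testing.txt'
>>> path           # the debugger shows the repr, with escaped backslashes
'C:\\Users\\xxx\\testing.txt'
>>> print(path)    # the actual value has single backslashes
C:\Users\xxx\testing.txt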

Remote executing of program via xterm run using paramiko python ssh library

Flow of the program is:
Connect to OpenSSH server on Linux machine using Paramiko library
Open X11 session
Run xterm executable
Run some other program (e.g. Firefox) by typing executable name in the terminal and running it.
I would be grateful if someone could explain how to make some executable run in a terminal opened by the following code, and provide sample source code (source):
import select
import sys
import paramiko
import Xlib.support.connect as xlib_connect
import os
import socket
import subprocess

# run xming
XmingProc = subprocess.Popen("C:/Program Files (x86)/Xming/Xming.exe :0 -clipboard -multiwindow")

ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(SSHServerIP, SSHServerPort, username=user, password=pwd)
transport = ssh_client.get_transport()

channelOppositeEdges = {}
local_x11_display = xlib_connect.get_display(os.environ['DISPLAY'])
inputSockets = []

def x11_handler(channel, (src_addr, src_port)):
    local_x11_socket = xlib_connect.get_socket(*local_x11_display[:3])
    inputSockets.append(local_x11_socket)
    inputSockets.append(channel)
    channelOppositeEdges[local_x11_socket.fileno()] = channel
    channelOppositeEdges[channel.fileno()] = local_x11_socket
    transport._queue_incoming_channel(channel)

session = transport.open_session()
inputSockets.append(session)
session.request_x11(handler=x11_handler)
session.exec_command('xterm')
transport.accept()

while not session.exit_status_ready():
    readable, writable, exceptional = select.select(inputSockets, [], [])
    if len(transport.server_accepts) > 0:
        transport.accept()
    for sock in readable:
        if sock is session:
            while session.recv_ready():
                sys.stdout.write(session.recv(4096))
            while session.recv_stderr_ready():
                sys.stderr.write(session.recv_stderr(4096))
        else:
            try:
                data = sock.recv(4096)
                counterPartSocket = channelOppositeEdges[sock.fileno()]
                counterPartSocket.sendall(data)
            except socket.error:
                inputSockets.remove(sock)
                inputSockets.remove(counterPartSocket)
                del channelOppositeEdges[sock.fileno()]
                del channelOppositeEdges[counterPartSocket.fileno()]
                sock.close()
                counterPartSocket.close()

print 'Exit status:', session.recv_exit_status()
while session.recv_ready():
    sys.stdout.write(session.recv(4096))
while session.recv_stderr_ready():
    sys.stdout.write(session.recv_stderr(4096))
session.close()
XmingProc.terminate()
XmingProc.wait()
I was thinking about running the program in a child thread, while the thread running the xterm waits for the child to terminate.
Well, this is a bit of a hack, but hey.
What you can do on the remote end is the following: inside the xterm, you run netcat, listen for any data coming in on some port, and pipe whatever you get into bash. It's not quite the same as typing it into xterm directly, but it's almost as good as typing it into bash directly, so I hope it'll get you a bit closer to your goal. If you really want to interact with xterm directly, you might want to read this.
For example:
terminal 1:
% nc -l 3333 | bash
terminal 2 (type echo hi here):
% nc localhost 3333
echo hi
Now you should see hi pop out of the first terminal. Now try it with xterm&. It worked for me.
Here's how you can automate this in Python. You may want to add some code that enables the server to tell the client when it's ready, rather than using the silly time.sleeps.
import select
import sys
import paramiko
import Xlib.support.connect as xlib_connect
import os
import socket
import subprocess
# for connecting to netcat running remotely
from multiprocessing import Process
import time
# data
import getpass

SSHServerPort = 22
SSHServerIP = "localhost"

# get username/password interactively, or use some other method..
user = getpass.getuser()
pwd = getpass.getpass("enter pw for '" + user + "': ")

NETCAT_PORT = 3333

FIREFOX_CMD = "/path/to/firefox &"
#FIREFOX_CMD = "xclock&"  # or this :)

def run_stuff_in_xterm():
    time.sleep(5)
    s = socket.socket(socket.AF_INET6 if ":" in SSHServerIP else socket.AF_INET, socket.SOCK_STREAM)
    s.connect((SSHServerIP, NETCAT_PORT))
    s.send("echo \"Hello there! Are you watching?\"\n")
    s.send(FIREFOX_CMD + "\n")
    time.sleep(30)
    s.send("echo bye bye\n")
    time.sleep(2)
    s.close()

# run xming
XmingProc = subprocess.Popen("C:/Program Files (x86)/Xming/Xming.exe :0 -clipboard -multiwindow")

ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(SSHServerIP, SSHServerPort, username=user, password=pwd)
transport = ssh_client.get_transport()

channelOppositeEdges = {}
local_x11_display = xlib_connect.get_display(os.environ['DISPLAY'])
inputSockets = []

def x11_handler(channel, (src_addr, src_port)):
    local_x11_socket = xlib_connect.get_socket(*local_x11_display[:3])
    inputSockets.append(local_x11_socket)
    inputSockets.append(channel)
    channelOppositeEdges[local_x11_socket.fileno()] = channel
    channelOppositeEdges[channel.fileno()] = local_x11_socket
    transport._queue_incoming_channel(channel)

session = transport.open_session()
inputSockets.append(session)
session.request_x11(handler=x11_handler)
session.exec_command("xterm -e \"nc -l 0.0.0.0 %d | /bin/bash\"" % NETCAT_PORT)

p = Process(target=run_stuff_in_xterm)
transport.accept()
p.start()

while not session.exit_status_ready():
    readable, writable, exceptional = select.select(inputSockets, [], [])
    if len(transport.server_accepts) > 0:
        transport.accept()
    for sock in readable:
        if sock is session:
            while session.recv_ready():
                sys.stdout.write(session.recv(4096))
            while session.recv_stderr_ready():
                sys.stderr.write(session.recv_stderr(4096))
        else:
            try:
                data = sock.recv(4096)
                counterPartSocket = channelOppositeEdges[sock.fileno()]
                counterPartSocket.sendall(data)
            except socket.error:
                inputSockets.remove(sock)
                inputSockets.remove(counterPartSocket)
                del channelOppositeEdges[sock.fileno()]
                del channelOppositeEdges[counterPartSocket.fileno()]
                sock.close()
                counterPartSocket.close()

p.join()
print 'Exit status:', session.recv_exit_status()
while session.recv_ready():
    sys.stdout.write(session.recv(4096))
while session.recv_stderr_ready():
    sys.stdout.write(session.recv_stderr(4096))
session.close()
XmingProc.terminate()
XmingProc.wait()
I tested this on a Mac, so I commented out the XmingProc bits and used /Applications/Firefox.app/Contents/MacOS/firefox as FIREFOX_CMD (and xclock).
The above isn't exactly a secure setup, as anyone connecting to the port at the right time could run arbitrary code on your remote server, but it sounds like you're planning to use this for testing purposes anyway. If you want to improve the security, you could make netcat bind to 127.0.0.1 rather than 0.0.0.0, set up an ssh tunnel (run ssh -L3333:localhost:3333 username@remote-host.com to tunnel all traffic received locally on port 3333 to remote-host.com:3333), and let Python connect to ("localhost", 3333).
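Concretely, that hardened variant could look like this (a sketch; the host name is a placeholder):
# remote side: make netcat listen on the loopback interface only
session.exec_command("xterm -e \"nc -l 127.0.0.1 %d | /bin/bash\"" % NETCAT_PORT)
# local side, in a separate shell: forward local port 3333 to the remote loopback
#   ssh -L 3333:localhost:3333 username@remote-host.com
# and in run_stuff_in_xterm(), connect through the tunnel instead:
s.connect(("localhost", NETCAT_PORT))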
Now you can combine this with selenium for browser automation:
Follow the instructions from this page, i.e. download the selenium standalone server jar file, put it into /path/to/some/place (on the server), and pip install -U selenium (again, on the server).
Next, put the following code into selenium-example.py in /path/to/some/place:
#!/usr/bin/env python
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.keys import Keys
import time

browser = webdriver.Firefox()  # Get local session of firefox
browser.get("http://www.yahoo.com")  # Load page
assert "Yahoo" in browser.title
elem = browser.find_element_by_name("p")  # Find the query box
elem.send_keys("seleniumhq" + Keys.RETURN)
time.sleep(0.2)  # Let the page load, will be added to the API
try:
    browser.find_element_by_xpath("//a[contains(@href,'http://docs.seleniumhq.org')]")
except NoSuchElementException:
    assert 0, "can't find seleniumhq"
browser.close()
and change the firefox command:
FIREFOX_CMD="cd /path/to/some/place && python selenium-example.py"
And watch firefox do a Yahoo search. You might also want to increase the time.sleep.
If you want to run more programs, you can do things like this before or after running firefox:
# start up xclock, wait for some time to pass, kill it.
s.send("xclock&\n")
time.sleep(1)
s.send("XCLOCK_PID=$!\n") # stash away the process id (into a bash variable)
time.sleep(30)
s.send("echo \"killing $XCLOCK_PID\"\n")
s.send("kill $XCLOCK_PID\n\n")
time.sleep(5)
If you want to perform general X11 application control, I think you might need to write similar "driver applications", albeit using different libraries. You might want to search for "x11 send {mouse|keyboard} events" to find more general approaches. That brings up these questions, but I'm sure there's lots more.
If the remote end isn't responding instantaneously, you might want to sniff your network traffic in Wireshark, and check whether or not TCP is batching up the data, rather than sending it line by line (the \n seems to help here, but I guess there's no guarantee). If this is the case, you might be out of luck, but nothing is impossible. I hope you don't need to go that far though ;-)
One more note: if you need to communicate with CLI programs' STDIN/STDOUT, you might want to look at expect scripting (e.g. using pexpect, or for simple cases you might be able to use subprocess.Popen.communicate, http://docs.python.org/2/library/subprocess.html#subprocess.Popen.communicate).
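For instance, a minimal pexpect sketch (the spawned program and its prompt are hypothetical) looks like:
import pexpect

child = pexpect.spawn('python some_cli_program.py')  # hypothetical program
child.expect('Enter your name: ')                    # wait for its prompt
child.sendline('Alice')                              # answer it
child.expect(pexpect.EOF)                            # wait for it to finish
print child.before                                   # everything it printed before EOF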

Twisted web service - sql connection drops

I am working on a web service with Twisted that is responsible for calling up several packages I had previously used on the command line. The routines these packages handle were being prototyped on their own but now are ready to be integrated into our webservice.
In short, I have several different modules that all create a mysql connection property internally in their original command line forms. Take this for example:
class searcher:
    def __init__(self, lat, lon, radius):
        self.conn = getConnection()[1]
        self.con = self.conn.cursor()
        self.mgo = getConnection(True)
        self.lat = lat
        self.lon = lon
        self.radius = radius
        self.profsinrange()
        self.cache = memcache.Client(["173.220.194.84:11211"])
The getConnection function is just a helper that returns a mongo or mysql cursor respectively. Again, this is all prototypical :)
The problem I am experiencing is that, when implemented as a continuously running server using Twisted's WSGI resource, the SQL connection created in __init__ times out, and subsequent requests don't seem to regenerate it. Example code for a small server app:
from twisted.web import server
from twisted.web.wsgi import WSGIResource
from twisted.python.threadpool import ThreadPool
from twisted.internet import reactor
from twisted.application import service, strports
import cgi
import gnengine
import nn

wsgiThreadPool = ThreadPool()
wsgiThreadPool.start()

# ensuring that it will be stopped when the reactor shuts down
reactor.addSystemEventTrigger('after', 'shutdown', wsgiThreadPool.stop)

def application(environ, start_response):
    start_response('200 OK', [('Content-type', 'text/plain')])
    params = cgi.parse_qs(environ['QUERY_STRING'])
    try:
        lat = float(params['lat'][0])
        lon = float(params['lon'][0])
        radius = int(params['radius'][0])
        query_terms = params['query']
        s = gnengine.searcher(lat, lon, radius)
        query_terms = ' '.join(query_terms)
        json = s.query(query_terms)
        return [json]
    except Exception, e:
        return [str(e), str(params)]
    return ['error']

wsgiAppAsResource = WSGIResource(reactor, wsgiThreadPool, application)

# Hooks for twistd
application = service.Application('Twisted.web.wsgi Hello World Example')
server = strports.service('tcp:8080', server.Site(wsgiAppAsResource))
server.setServiceParent(application)
The first few requests work fine, but after MySQL's wait_timeout expires, the dreaded error 2006 "MySQL server has gone away" surfaces. It had been my understanding that every request to the Twisted WSGI resource would run the application function, thereby regenerating the searcher object and re-leasing the connection. If this isn't the case, how can I make the requests be processed that way? Is this kind of Twisted deployment not transactional in this sense? Thanks!
EDIT: Per request, here is the prototype helper function calling up the connection:
def getConnection(mong=False):
    if mong == False:
        connection = mysql.connect(host=db_host,
                                   user=db_user,
                                   passwd=db_pass,
                                   db=db,
                                   cursorclass=mysql.cursors.DictCursor)
        cur = connection.cursor()
        return (cur, connection)
    else:
        return pymongo.Connection('173.220.194.84', 27017).gonation_test
I was developing a piece of software with Twisted where I had to utilize a constant MySQL database connection. I ran into this problem, and despite digging through the Twisted documentation extensively and posting a few questions, I was unable to find a proper solution. There is a boolean parameter you can pass when instantiating the adbapi.ConnectionPool class; however, it never seemed to work and I kept getting the error regardless. What I am guessing the reconnect boolean represents, though, is the destruction of the connection object when an SQL disconnect does occur.
adbapi.ConnectionPool("MySQLdb", cp_reconnect=True, host="", user="", passwd="", db="")
I have not tested this, but I will re-post some results when I do; if anyone else has, please share.
When I was developing the script I was using Twisted 8.2.0 (I haven't touched Twisted in a while), and back then the framework had no explicit keep-alive method, so I developed a ping/keepalive extension employing the event-driven paradigm Twisted builds upon, in conjunction with the MySQLdb module's ping() method (see the code comment).
As I was typing this response, however, I did look around the current Twisted documentation and was still unable to find an explicit keep-alive method or parameter. My guess is that this is because Twisted itself does not have database connectivity libraries/classes; it uses the methods available to Python and provides an indirect layer of interfacing with those modules, with some exposure for direct calls to the database library being used. This is accomplished by using the adbapi.runWithConnection method.
Here is the module I wrote under Twisted 8.2.0 and Python 2.6; you can set the intervals between pings. What the script does is: every 20 minutes it pings the database, and if the ping fails, it attempts to reconnect every 60 seconds. I must warn that the script does NOT handle a sudden/dropped connection; you can handle that through addErrback whenever you run a query through Twisted, at least that's how I did it. I have noticed that when the database connection drops, you only find out once you execute a query and the event raises an errback, and at that point you deal with it. Basically, if I don't run a query for 10 minutes and the database disconnects me, my application will not notice in real time; it will only realize the connection has dropped when it runs the next query. So the database could have disconnected us 1 minute after the first query, or 5, or 9, etc.
I guess this sort of goes back to the original idea I stated: Twisted utilizes Python's own libraries or 3rd-party libraries for database connectivity, and because of that, some things are handled a bit differently.
from twisted.enterprise import adbapi
from twisted.internet import reactor, defer, task

class sqlClass:
    def __init__(self, db_pointer):
        self.dbpool = db_pointer
        self.dbping = task.LoopingCall(self.dbping)
        self.dbping.start(1200)  # 20 minutes = 1200 seconds; i found out that if MySQL socket is idled for 20 minutes or longer, MySQL itself disconnects the session for security reasons; i do believe you can change that in the configuration of the database server itself but it may not be recommended.
        self.reconnect = False
        print "database ping initiated"

    def dbping(self):
        def ping(conn):
            conn.ping()  # what happens here is that twisted allows us to access methods from the MySQLdb module that python possesses; i chose to use the native command instead of sending null commands to the database.
        pingdb = self.dbpool.runWithConnection(ping)
        pingdb.addCallback(self.dbactive)
        pingdb.addErrback(self.dbout)
        print "pinging database"

    def dbactive(self, data):
        if data == None and self.reconnect == True:
            self.dbping.stop()
            self.reconnect = False
            self.dbping.start(1200)  # 20 minutes = 1200 seconds
            print "Reconnected to database!"
        elif data == None:
            print "database is active"

    def dbout(self, deferr):
        #print deferr
        if self.reconnect == False:
            self.dbreconnect()
        elif self.reconnect == True:
            print "Unable to reconnect to database"
        print "unable to ping MySQL database!"

    def dbreconnect(self, *data):
        self.dbping.stop()
        self.reconnect = True
        #self.dbping = task.LoopingCall(self.dbping)
        self.dbping.start(60)  # 60

if __name__ == "__main__":
    db = sqlClass(adbapi.ConnectionPool("MySQLdb", cp_reconnect=True, host="", user="", passwd="", db=""))
    reactor.callLater(2, db.dbping)
    reactor.run()
let me know how it works out for you :)
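As a footnote, the per-query error handling I mentioned above would look roughly like this (a minimal sketch; the pool settings and handler names are only for illustration):
from twisted.enterprise import adbapi

dbpool = adbapi.ConnectionPool("MySQLdb", cp_reconnect=True,
                               host="", user="", passwd="", db="")

def on_result(rows):
    print "query ok:", rows

def on_query_error(failure):
    # fires if the query fails, e.g. because the connection was dropped
    print "query failed:", failure.getErrorMessage()

d = dbpool.runQuery("SELECT 1")
d.addCallbacks(on_result, on_query_error)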

Best way to run remote commands thru ssh in Twisted?

I have a twisted application which now needs to monitor processes running on several boxes. The way I do it manually is 'ssh and ps'; now I'd like my twisted application to do the same. I have 2 options.
Use paramiko or leverage the power of twisted.conch
I really want to use twisted.conch, but my research led me to believe that it's primarily intended for creating SSH servers and SSH clients. However, my requirement is a simple remoteExecute(some_cmd).
I was able to figure out how to do this using paramiko, but I didn't want to stick paramiko in my twisted app before looking at how to do this using twisted.conch.
Code snippets using twisted on how to run remote_cmds using ssh would be highly appreciated. Thanks.
Followup - Happily, the ticket I referenced below is now resolved. The simpler API will be included in the next release of Twisted. The original answer is still a valid way to use Conch and may reveal some interesting details about what's going on, but from Twisted 13.1 on, if you just want to run a command and handle its I/O, this simpler interface will work.
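For reference, a minimal sketch of that newer interface (assuming Twisted 13.1+; the credentials are hypothetical, and host key checking is left at the Conch defaults):
import sys
from twisted.internet import reactor
from twisted.internet.defer import Deferred
from twisted.internet.protocol import Factory, Protocol
from twisted.conch.endpoints import SSHCommandClientEndpoint

class StdoutEcho(Protocol):
    def dataReceived(self, bytes):
        sys.stdout.write(bytes)

    def connectionLost(self, reason):
        self.factory.finished.callback(None)

endpoint = SSHCommandClientEndpoint.newConnection(
    reactor, b'/bin/ls', b'user', 'localhost', port=22,
    password=b'secret')  # hypothetical credentials

factory = Factory()
factory.protocol = StdoutEcho
factory.finished = Deferred()

d = endpoint.connect(factory)
d.addCallback(lambda proto: factory.finished)  # wait for the command to finish
d.addBoth(lambda ignored: reactor.stop())
reactor.run()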
It takes an unfortunately large amount of code to execute a command on an SSH server using the Conch client APIs. Conch makes you deal with a lot of different layers, even if you just want sensible boring default behavior. However, it's certainly possible. Here's some code which I've been meaning to finish and add to Twisted to simplify this case:
import sys, os
from zope.interface import implements

from twisted.python.failure import Failure
from twisted.python.log import err
from twisted.internet.error import ConnectionDone
from twisted.internet.defer import Deferred, succeed, setDebugging
from twisted.internet.interfaces import IStreamClientEndpoint
from twisted.internet.protocol import Factory, Protocol

from twisted.conch.ssh.common import NS
from twisted.conch.ssh.channel import SSHChannel
from twisted.conch.ssh.transport import SSHClientTransport
from twisted.conch.ssh.connection import SSHConnection
from twisted.conch.client.default import SSHUserAuthClient
from twisted.conch.client.options import ConchOptions

# setDebugging(True)

class _CommandTransport(SSHClientTransport):
    _secured = False

    def verifyHostKey(self, hostKey, fingerprint):
        return succeed(True)

    def connectionSecure(self):
        self._secured = True
        command = _CommandConnection(
            self.factory.command,
            self.factory.commandProtocolFactory,
            self.factory.commandConnected)
        userauth = SSHUserAuthClient(
            os.environ['USER'], ConchOptions(), command)
        self.requestService(userauth)

    def connectionLost(self, reason):
        if not self._secured:
            self.factory.commandConnected.errback(reason)

class _CommandConnection(SSHConnection):
    def __init__(self, command, protocolFactory, commandConnected):
        SSHConnection.__init__(self)
        self._command = command
        self._protocolFactory = protocolFactory
        self._commandConnected = commandConnected

    def serviceStarted(self):
        channel = _CommandChannel(
            self._command, self._protocolFactory, self._commandConnected)
        self.openChannel(channel)

class _CommandChannel(SSHChannel):
    name = 'session'

    def __init__(self, command, protocolFactory, commandConnected):
        SSHChannel.__init__(self)
        self._command = command
        self._protocolFactory = protocolFactory
        self._commandConnected = commandConnected

    def openFailed(self, reason):
        self._commandConnected.errback(reason)

    def channelOpen(self, ignored):
        self.conn.sendRequest(self, 'exec', NS(self._command))
        self._protocol = self._protocolFactory.buildProtocol(None)
        self._protocol.makeConnection(self)

    def dataReceived(self, bytes):
        self._protocol.dataReceived(bytes)

    def closed(self):
        self._protocol.connectionLost(
            Failure(ConnectionDone("ssh channel closed")))

class SSHCommandClientEndpoint(object):
    implements(IStreamClientEndpoint)

    def __init__(self, command, sshServer):
        self._command = command
        self._sshServer = sshServer

    def connect(self, protocolFactory):
        factory = Factory()
        factory.protocol = _CommandTransport
        factory.command = self._command
        factory.commandProtocolFactory = protocolFactory
        factory.commandConnected = Deferred()
        d = self._sshServer.connect(factory)
        d.addErrback(factory.commandConnected.errback)
        return factory.commandConnected

class StdoutEcho(Protocol):
    def dataReceived(self, bytes):
        sys.stdout.write(bytes)
        sys.stdout.flush()

    def connectionLost(self, reason):
        self.factory.finished.callback(None)

def copyToStdout(endpoint):
    echoFactory = Factory()
    echoFactory.protocol = StdoutEcho
    echoFactory.finished = Deferred()
    d = endpoint.connect(echoFactory)
    d.addErrback(echoFactory.finished.errback)
    return echoFactory.finished

def main():
    from twisted.python.log import startLogging
    from twisted.internet import reactor
    from twisted.internet.endpoints import TCP4ClientEndpoint

    # startLogging(sys.stdout)

    sshServer = TCP4ClientEndpoint(reactor, "localhost", 22)
    commandEndpoint = SSHCommandClientEndpoint("/bin/ls", sshServer)

    d = copyToStdout(commandEndpoint)
    d.addErrback(err, "ssh command / copy to stdout failed")
    d.addCallback(lambda ignored: reactor.stop())
    reactor.run()

if __name__ == '__main__':
    main()
Some things to note about it:
It uses the new endpoint APIs introduced in Twisted 10.1. It's possible to do this directly on reactor.connectTCP, but I did it as an endpoint to make it more useful; endpoints can be swapped easily without the code that actually asks for a connection knowing.
It does no host key verification at all! _CommandTransport.verifyHostKey is where you would implement that. Take a look at twisted/conch/client/default.py for some hints about what kinds of things you might want to do.
It takes $USER to be the remote username, which you may want to make a parameter.
It probably only works with key authentication. If you want to enable password authentication, you probably need to subclass SSHUserAuthClient and override getPassword to do something.
Almost all of the layers of SSH and Conch are visible here:
_CommandTransport is at the bottom, a plain old protocol that implements the SSH transport protocol. It creates a...
_CommandConnection which implements the SSH connection negotiation parts of the protocol. Once that completes, a...
_CommandChannel is used to talk to a newly opened SSH channel. _CommandChannel does the actual exec to launch your command. Once the channel is opened it creates an instance of...
StdoutEcho, or whatever other protocol you supply. This protocol will get the output from the command you execute, and can write to the command's stdin.
See http://twistedmatrix.com/trac/ticket/4698 for progress in Twisted on supporting this with less code.
