I have a Twisted application which now needs to monitor processes running on several boxes. Manually, the way I do it is 'ssh and ps'; now I'd like my Twisted application to do the same. I have two options:
Use paramiko or leverage the power of twisted.conch
I really want to use twisted.conch, but my research led me to believe that it's primarily intended for creating SSH servers and SSH clients. However, my requirement is a simple remoteExecute(some_cmd).
I was able to figure out how to do this using paramiko, but I didn't want to stick paramiko in my Twisted app before looking at how to do this using twisted.conch.
Code snippets using Twisted on how to run remote commands over SSH would be highly appreciated. Thanks.
Followup - Happily, the ticket I referenced below is now resolved. The simpler API will be included in the next release of Twisted. The original answer is still a valid way to use Conch and may reveal some interesting details about what's going on, but from Twisted 13.1 on, if you just want to run a command and handle its I/O, this simpler interface will work.
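For example, with the endpoint API that shipped in Twisted 13.1 (twisted.conch.endpoints.SSHCommandClientEndpoint), running a command looks roughly like the sketch below; the host, username, and password are placeholders you would replace with your own:

from twisted.internet import reactor
from twisted.internet.protocol import Factory, Protocol
from twisted.conch.endpoints import SSHCommandClientEndpoint

# Placeholder host and credentials; substitute your own.
endpoint = SSHCommandClientEndpoint.newConnection(
    reactor, b"/bin/ls", b"user", "localhost", port=22, password=b"secret")
# Connect any protocol you like to handle the command's I/O.
endpoint.connect(Factory.forProtocol(Protocol))
reactor.run()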
It takes an unfortunately large amount of code to execute a command on an SSH server using the Conch client APIs. Conch makes you deal with a lot of different layers, even if you just want sensible boring default behavior. However, it's certainly possible. Here's some code which I've been meaning to finish and add to Twisted to simplify this case:
import sys, os

from zope.interface import implements

from twisted.python.failure import Failure
from twisted.python.log import err
from twisted.internet.error import ConnectionDone
from twisted.internet.defer import Deferred, succeed, setDebugging
from twisted.internet.interfaces import IStreamClientEndpoint
from twisted.internet.protocol import Factory, Protocol

from twisted.conch.ssh.common import NS
from twisted.conch.ssh.channel import SSHChannel
from twisted.conch.ssh.transport import SSHClientTransport
from twisted.conch.ssh.connection import SSHConnection
from twisted.conch.client.default import SSHUserAuthClient
from twisted.conch.client.options import ConchOptions

# setDebugging(True)

class _CommandTransport(SSHClientTransport):
    _secured = False

    def verifyHostKey(self, hostKey, fingerprint):
        return succeed(True)

    def connectionSecure(self):
        self._secured = True
        command = _CommandConnection(
            self.factory.command,
            self.factory.commandProtocolFactory,
            self.factory.commandConnected)
        userauth = SSHUserAuthClient(
            os.environ['USER'], ConchOptions(), command)
        self.requestService(userauth)

    def connectionLost(self, reason):
        if not self._secured:
            self.factory.commandConnected.errback(reason)

class _CommandConnection(SSHConnection):
    def __init__(self, command, protocolFactory, commandConnected):
        SSHConnection.__init__(self)
        self._command = command
        self._protocolFactory = protocolFactory
        self._commandConnected = commandConnected

    def serviceStarted(self):
        channel = _CommandChannel(
            self._command, self._protocolFactory, self._commandConnected)
        self.openChannel(channel)

class _CommandChannel(SSHChannel):
    name = 'session'

    def __init__(self, command, protocolFactory, commandConnected):
        SSHChannel.__init__(self)
        self._command = command
        self._protocolFactory = protocolFactory
        self._commandConnected = commandConnected

    def openFailed(self, reason):
        self._commandConnected.errback(reason)

    def channelOpen(self, ignored):
        self.conn.sendRequest(self, 'exec', NS(self._command))
        self._protocol = self._protocolFactory.buildProtocol(None)
        self._protocol.makeConnection(self)

    def dataReceived(self, bytes):
        self._protocol.dataReceived(bytes)

    def closed(self):
        self._protocol.connectionLost(
            Failure(ConnectionDone("ssh channel closed")))

class SSHCommandClientEndpoint(object):
    implements(IStreamClientEndpoint)

    def __init__(self, command, sshServer):
        self._command = command
        self._sshServer = sshServer

    def connect(self, protocolFactory):
        factory = Factory()
        factory.protocol = _CommandTransport
        factory.command = self._command
        factory.commandProtocolFactory = protocolFactory
        factory.commandConnected = Deferred()
        d = self._sshServer.connect(factory)
        d.addErrback(factory.commandConnected.errback)
        return factory.commandConnected

class StdoutEcho(Protocol):
    def dataReceived(self, bytes):
        sys.stdout.write(bytes)
        sys.stdout.flush()

    def connectionLost(self, reason):
        self.factory.finished.callback(None)

def copyToStdout(endpoint):
    echoFactory = Factory()
    echoFactory.protocol = StdoutEcho
    echoFactory.finished = Deferred()
    d = endpoint.connect(echoFactory)
    d.addErrback(echoFactory.finished.errback)
    return echoFactory.finished

def main():
    from twisted.python.log import startLogging
    from twisted.internet import reactor
    from twisted.internet.endpoints import TCP4ClientEndpoint

    # startLogging(sys.stdout)

    sshServer = TCP4ClientEndpoint(reactor, "localhost", 22)
    commandEndpoint = SSHCommandClientEndpoint("/bin/ls", sshServer)

    d = copyToStdout(commandEndpoint)
    d.addErrback(err, "ssh command / copy to stdout failed")
    d.addCallback(lambda ignored: reactor.stop())
    reactor.run()

if __name__ == '__main__':
    main()
Some things to note about it:
It uses the new endpoint APIs introduced in Twisted 10.1. It's possible to do this directly with reactor.connectTCP, but I did it as an endpoint to make it more useful; endpoints can be swapped easily, without the code that actually asks for a connection knowing about it.
It does no host key verification at all! _CommandTransport.verifyHostKey is where you would implement that. Take a look at twisted/conch/client/default.py for some hints about what kinds of things you might want to do.
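As a minimal illustration (my sketch, not part of the code above), a pinned-fingerprint check could replace the accept-everything version; EXPECTED_FINGERPRINT is a hypothetical constant holding your server's known key fingerprint:

from twisted.internet.defer import succeed, fail
from twisted.conch.error import ConchError

EXPECTED_FINGERPRINT = 'aa:bb:cc:dd:...'  # hypothetical placeholder

class _VerifyingCommandTransport(_CommandTransport):
    def verifyHostKey(self, hostKey, fingerprint):
        # Accept only the single host key we were configured to expect.
        if fingerprint == EXPECTED_FINGERPRINT:
            return succeed(True)
        return fail(ConchError('host key fingerprint mismatch'))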
It takes $USER to be the remote username, which you may want to be a parameter.
It probably only works with key authentication. If you want to enable password authentication, you probably need to subclass SSHUserAuthClient and override getPassword to do something.
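For instance, a sketch of such a subclass (untested; the password is a placeholder) might look like:

from twisted.internet.defer import succeed
from twisted.conch.client.default import SSHUserAuthClient

class PasswordAuthClient(SSHUserAuthClient):
    # Hypothetical subclass: answer password prompts with a fixed secret
    # instead of reading from the terminal.
    def getPassword(self, prompt=None):
        return succeed('my-secret-password')  # placeholder credential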
Almost all of the layers of SSH and Conch are visible here:
_CommandTransport is at the bottom, a plain old protocol that implements the SSH transport protocol. It creates a...
_CommandConnection which implements the SSH connection negotiation parts of the protocol. Once that completes, a...
_CommandChannel is used to talk to a newly opened SSH channel. _CommandChannel does the actual exec to launch your command. Once the channel is opened it creates an instance of...
StdoutEcho, or whatever other protocol you supply. This protocol will get the output from the command you execute, and can write to the command's stdin.
See http://twistedmatrix.com/trac/ticket/4698 for progress in Twisted on supporting this with less code.
Related
I'm working on an asynchronous communication script that will act as a middleman between a React Native app and another agent. To do this I used Python with D-Bus to implement the communication between the two.
To implement this we created two processes, one for the BLE and one for the communication with the agent. In cases where the agent replies immediately (with a non-blocking call) the communication always works as intended. For the case where we attach to a signal to continuously monitor an update status, this error occurs most of the time, at random points during the process:
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
I have tested both the BLE process and the agent process separately and they work as intended.
I currently suspect that it could be related to messages "crashing" on the system bus or to some race condition, but we are unsure how to validate that.
Any advice on what could be causing this issue or how I could avoid it?
For completeness I've attached a simplified version of the class that handles the communication with the agent.
import multiprocessing
from enum import Enum

import dbus
import dbus.mainloop.glib
from dbus.proxies import ProxyObject
from gi.repository import GLib
from omegaconf import DictConfig

dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)

class ClientUpdateStatus(Enum):
    SUCCESS = 0
    PENDING = 1
    IN_PROGRESS = 2
    FAILED = 3

class DBUSManager:
    GLIB_LOOP = GLib.MainLoop()
    COMMUNICATION_QUEUE = multiprocessing.Queue()

    def __init__(self, config: DictConfig) -> None:
        dbus_system_bus = dbus.SystemBus()
        dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
        self._config = config
        self._dbus_object = dbus_system_bus.get_object(self._config['dbus_interface'],
                                                       self._config['dbus_object_path'], introspect=False)

    def get_version(self) -> str:
        version = self._dbus_object.GetVersion("clientSimulator", dbus_interface=self._config['dbus_interface'])
        return version

    def check_for_update(self) -> str:
        update_version = self._dbus_object.CheckForUpdate("clientSimulator",
                                                          dbus_interface=self._config['dbus_interface'])
        return update_version

    def run_update(self) -> ClientUpdateStatus:
        raw_status = self._dbus_object.ExecuteUpdate(dbus_interface=self._config['dbus_interface'])
        update_status = ClientUpdateStatus(raw_status)

        # Launch listening process
        signal_update_proc = multiprocessing.Process(target=DBUSManager.start_listener_process,
                                                     args=(self._dbus_object, self._config['dbus_interface'],))
        signal_update_proc.start()

        while True:
            raw_status = DBUSManager.COMMUNICATION_QUEUE.get()
            update_status = ClientUpdateStatus(raw_status)
            if ClientUpdateStatus.SUCCESS == update_status:
                break

        signal_update_proc.join()
        return update_status

    @staticmethod
    def start_listener_process(dbus_object: ProxyObject, dbus_interface: str) -> None:
        dbus_object.connect_to_signal("UpdateStatusChanged", DBUSManager.status_change_handler,
                                      dbus_interface=dbus_interface)
        # Launch loop to acquire signals
        DBUSManager.GLIB_LOOP.run()  # This listening loop exits on GLIB_LOOP.quit()

    @staticmethod
    def status_change_handler(new_status: int) -> None:
        DBUSManager.COMMUNICATION_QUEUE.put(new_status)
        if ClientUpdateStatus.SUCCESS == ClientUpdateStatus(new_status):
            DBUSManager.GLIB_LOOP.quit()
At this stage I would recommend running dbus-monitor to see whether the agent and the BLE process are reacting to requests properly.
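For example, to watch just the status signal on the system bus (the member name comes from the code above):

dbus-monitor --system "type='signal',member='UpdateStatusChanged'"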
Maybe to help others in the future: I haven't found the solution, but at least a way to work around it. We have encountered that problem at various places in our project, and what generally helped (don't ask me why) was to always re-instantiate all needed D-Bus objects. So instead of having a single class that holds a system bus variable self._system_bus = dbus.SystemBus() or an interface variable self._manager_interface = dbus.Interface(proxy_object, "org.freedesktop.DBus.ObjectManager"), we would always re-instantiate them.
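As an illustrative sketch of that workaround (mirroring the class above; untested), each call builds a fresh private bus connection and proxy instead of caching them in __init__:

def get_version(self) -> str:
    # Workaround sketch: fresh private bus + proxy per call instead of cached ones.
    bus = dbus.SystemBus(private=True)
    dbus_object = bus.get_object(self._config['dbus_interface'],
                                 self._config['dbus_object_path'], introspect=False)
    return dbus_object.GetVersion("clientSimulator",
                                  dbus_interface=self._config['dbus_interface'])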
If somebody knows what the problem is, I'm happy to hear it.
I implemented a Twisted SSH server to test a component that uses fabric to run commands on a remote machine via SSH. I found this example, but I don't understand how I have to implement the execCommand method to be compatible with fabric. Here is my implementation of the SSH server:
from pathlib import Path

from twisted.conch import avatar, recvline
from twisted.conch.insults import insults
from twisted.conch.interfaces import ISession
from twisted.conch.ssh import factory, keys, session
from twisted.cred import checkers, portal
from twisted.internet import reactor
from zope.interface import implementer

SSH_KEYS_FOLDER = Path(__file__).parent.parent / "resources" / "ssh_keys"

@implementer(ISession)
class SSHDemoAvatar(avatar.ConchUser):
    def __init__(self, username: str):
        avatar.ConchUser.__init__(self)
        self.username = username
        self.channelLookup.update({b"session": session.SSHSession})

    def openShell(self, protocol):
        pass

    def getPty(self, terminal, windowSize, attrs):
        return None

    def execCommand(self, protocol: session.SSHSessionProcessProtocol, cmd: bytes):
        protocol.write("Some text to return")
        protocol.session.conn.sendEOF(protocol.session)

    def eofReceived(self):
        pass

    def closed(self):
        pass

@implementer(portal.IRealm)
class SSHDemoRealm(object):
    def requestAvatar(self, avatarId, _, *interfaces):
        return interfaces[0], SSHDemoAvatar(avatarId), lambda: None

def getRSAKeys():
    with open(SSH_KEYS_FOLDER / "ssh_key") as private_key_file:
        private_key = keys.Key.fromString(data=private_key_file.read())
    with open(SSH_KEYS_FOLDER / "ssh_key.pub") as public_key_file:
        public_key = keys.Key.fromString(data=public_key_file.read())
    return public_key, private_key

if __name__ == "__main__":
    sshFactory = factory.SSHFactory()
    sshFactory.portal = portal.Portal(SSHDemoRealm())
    users = {
        "admin": b"aaa",
        "guest": b"bbb",
    }
    sshFactory.portal.registerChecker(checkers.InMemoryUsernamePasswordDatabaseDontUse(**users))
    pubKey, privKey = getRSAKeys()
    sshFactory.publicKeys = {b"ssh-rsa": pubKey}
    sshFactory.privateKeys = {b"ssh-rsa": privKey}
    reactor.listenTCP(22222, sshFactory)
    reactor.run()
Trying to execute a command via fabric yields the following output:
[paramiko.transport ][INFO ] Connected (version 2.0, client Twisted_22.4.0)
[paramiko.transport ][INFO ] Authentication (password) successful!
Some text to return
This looks promising, but the program execution hangs after this line. Do I have to close the connection from the server side after executing the command? How do I implement that properly?
In a traditional UNIX server, it's still the server's responsibility to close the connection if it was told to execute a command; it's up to the server's discretion when and how to do this.
I believe you just want to change protocol.session.conn.sendEOF(protocol.session) to protocol.loseConnection(). I apologize for not testing this myself to be sure; setting up an environment to properly test a full-size Conch setup like this is a bit tedious (and the example isn't self-contained, requiring key generation, moduli, etc.).
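Applied to the server above, the change would look roughly like this (untested, per the caveat above):

def execCommand(self, protocol: session.SSHSessionProcessProtocol, cmd: bytes):
    protocol.write("Some text to return")
    # Close the connection from the server side once the output is written,
    # instead of only sending EOF on the session channel.
    protocol.loseConnection()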
I want to make a little update script for a piece of software that runs on a Raspberry Pi and works like a local server. It should connect to a master server on the web to get software updates and also to verify the license of the software.
For that I set up two Python scripts. I want these to connect via a TLS socket. Then the client checks the server certificate and the server checks if it's one of the authorized clients. I found a solution for this using Twisted on this page.
Now there is one problem left. I want to know which client (depending on the certificate) is establishing the connection. Is there a way to do this in Python 3 with Twisted?
I'd be happy with any answer.
In a word: yes, this is quite possible, and all the necessary stuff is ported to Python 3 - I tested all the following under Python 3.4 on my Mac and it seems to work fine.
The short answer is "use twisted.internet.ssl.Certificate.peerFromTransport", but given that a lot of set-up is required to get to the point where that is possible, I've constructed a fully working example that you should be able to try out and build upon.
For posterity, you'll first need to generate a few client certificates all signed by the same CA. You've probably already done this, but so others can understand the answer and try it out on their own (and so I could test my answer myself ;-)), they'll need some code like this:
# newcert.py

from twisted.python.filepath import FilePath
from twisted.internet.ssl import PrivateCertificate, KeyPair, DN

def getCAPrivateCert():
    privatePath = FilePath(b"ca-private-cert.pem")
    if privatePath.exists():
        return PrivateCertificate.loadPEM(privatePath.getContent())
    else:
        caKey = KeyPair.generate(size=4096)
        caCert = caKey.selfSignedCert(1, CN="the-authority")
        privatePath.setContent(caCert.dumpPEM())
        return caCert

def clientCertFor(name):
    signingCert = getCAPrivateCert()
    clientKey = KeyPair.generate(size=4096)
    csr = clientKey.requestObject(DN(CN=name), "sha1")
    clientCert = signingCert.signRequestObject(
        csr, serialNumber=1, digestAlgorithm="sha1")
    return PrivateCertificate.fromCertificateAndKeyPair(clientCert, clientKey)

if __name__ == '__main__':
    import sys
    name = sys.argv[1]
    pem = clientCertFor(name.encode("utf-8")).dumpPEM()
    FilePath(name.encode("utf-8") + b".client.private.pem").setContent(pem)
With this program, you can create a few certificates like so:
$ python newcert.py a
$ python newcert.py b
Now you should have a few files you can use:
$ ls -1 *.pem
a.client.private.pem
b.client.private.pem
ca-private-cert.pem
Then you'll want a client which uses one of these certificates, and sends some data:
# tlsclient.py

from twisted.python.filepath import FilePath
from twisted.internet.endpoints import SSL4ClientEndpoint
from twisted.internet.ssl import (
    PrivateCertificate, Certificate, optionsForClientTLS)
from twisted.internet.defer import Deferred, inlineCallbacks
from twisted.internet.task import react
from twisted.internet.protocol import Protocol, Factory

class SendAnyData(Protocol):
    def connectionMade(self):
        self.deferred = Deferred()
        self.transport.write(b"HELLO\r\n")

    def connectionLost(self, reason):
        self.deferred.callback(None)

@inlineCallbacks
def main(reactor, name):
    pem = FilePath(name.encode("utf-8") + b".client.private.pem").getContent()
    caPem = FilePath(b"ca-private-cert.pem").getContent()
    clientEndpoint = SSL4ClientEndpoint(
        reactor, u"localhost", 4321,
        optionsForClientTLS(u"the-authority", Certificate.loadPEM(caPem),
                            PrivateCertificate.loadPEM(pem)),
    )
    proto = yield clientEndpoint.connect(Factory.forProtocol(SendAnyData))
    yield proto.deferred

import sys
react(main, sys.argv[1:])
And finally, a server which can distinguish between them:
# whichclient.py

from twisted.python.filepath import FilePath
from twisted.internet.endpoints import SSL4ServerEndpoint
from twisted.internet.ssl import PrivateCertificate, Certificate
from twisted.internet.defer import Deferred
from twisted.internet.task import react
from twisted.internet.protocol import Protocol, Factory

class ReportWhichClient(Protocol):
    def dataReceived(self, data):
        peerCertificate = Certificate.peerFromTransport(self.transport)
        print(peerCertificate.getSubject().commonName.decode('utf-8'))
        self.transport.loseConnection()

def main(reactor):
    pemBytes = FilePath(b"ca-private-cert.pem").getContent()
    certificateAuthority = Certificate.loadPEM(pemBytes)
    myCertificate = PrivateCertificate.loadPEM(pemBytes)
    serverEndpoint = SSL4ServerEndpoint(
        reactor, 4321, myCertificate.options(certificateAuthority)
    )
    serverEndpoint.listen(Factory.forProtocol(ReportWhichClient))
    return Deferred()

react(main, [])
For simplicity's sake we'll just re-use the CA's own certificate for the server, but in a more realistic scenario you'd obviously want a more appropriate certificate.
You can now run whichclient.py in one window, then python tlsclient.py a; python tlsclient.py b in another window, and see whichclient.py print out a and then b respectively, identifying the clients by the commonName field in their certificate's subject.
The one caveat here is that you might initially want to put that call to Certificate.peerFromTransport into a connectionMade method; that won't work.
Twisted does not presently have a callback for "TLS handshake complete"; hopefully it will eventually, but until it does, you have to wait until you've received some authenticated data from the peer to be sure the handshake has completed. For almost all applications this is fine, since by the time you have received instructions to do anything (download updates, in your case) the peer must already have sent the certificate.
I am using the following script from the Twisted tutorial (with slight modification):
from twisted.application import internet, service
from twisted.internet import reactor, protocol, defer
from twisted.protocols import basic
from twisted.web import client

class FingerProtocol(basic.LineReceiver):
    def lineReceived(self, user):
        d = self.factory.getUser(user)

        def onError(err):
            return "Internal server error"
        d.addErrback(onError)

        def writeResponse(message):
            self.transport.write(message + "\r\n")
            self.transport.loseConnection()
        d.addCallback(writeResponse)

class FingerFactory(protocol.ServerFactory):
    protocol = FingerProtocol

    def __init__(self, prefix):
        self.prefix = prefix

    def getUser(self, user):
        return client.getPage(self.prefix + user)

application = service.Application('finger', uid=1, gid=1)
factory = FingerFactory(prefix="http://livejournal.com/~")
internet.TCPServer(7979, factory).setServiceParent(
    service.IServiceCollection(application))
which I save as finger_daemon.tac and run with
twistd -y finger_daemon.tac \
-l /home/me/twisted/finger.log \
--pidfile=/home/me/twisted/finger.pid
but of course it won't bind to 79, since it's a privileged port. I also tried running with sudo; no difference there.
I then tried changing the TCPServer port to 7979 and connecting to the running daemon with
telnet 127.0.0.1 7979
and I get a Connection Refused error. What's going on here, specifically? How is daemonizing supposed to work in Twisted?
When I run this code, I see the following log message:
2013-10-02 23:50:34-0700 [-] failed to set uid/gid 1/1 (are you root?) -- exiting.
and then twistd exits. So you'd need to run sudo twistd, and that adds a whole bunch of Python path management problems...
Why are you setting the uid and gid arguments? Are you trying to run it as the daemon user? You don't need to do that in order to daemonize. Just removing the uid=1, gid=1 arguments to service.Application makes it work for me.
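In other words, roughly this (the rest of the .tac file stays the same):

application = service.Application('finger')  # no uid/gid: keep running as the invoking user
factory = FingerFactory(prefix="http://livejournal.com/~")
internet.TCPServer(7979, factory).setServiceParent(
    service.IServiceCollection(application))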
I'm new to Python and currently researching its viability as a SOAP server. I currently have a very rough application that uses the blocking MySQL API, but would like to try Twisted's adbapi. I've successfully used Twisted adbapi in regular Twisted code using reactors, but can't seem to make it work with the code below, which uses the ZSI framework. It's not returning anything from MySQL. Has anyone ever used Twisted adbapi with ZSI?
import os
import sys
from dpac_server import *
from ZSI.twisted.wsgi import (SOAPApplication,
                              soapmethod,
                              SOAPHandlerChainFactory)
from twisted.enterprise import adbapi
import MySQLdb

def _soapmethod(op):
    op_request = GED("http://www.example.org/dpac/", op).pyclass
    op_response = GED("http://www.example.org/dpac/", op + "Response").pyclass
    return soapmethod(op_request.typecode, op_response.typecode, operation=op, soapaction=op)

class DPACServer(SOAPApplication):
    factory = SOAPHandlerChainFactory

    @_soapmethod('GetIPOperation')
    def soap_GetIPOperation(self, request, response, **kw):
        dbpool = adbapi.ConnectionPool("MySQLdb", '127.0.0.1', 'def_user', 'def_pwd', 'def_db', cp_reconnect=True)

        def _dbSPGeneric(txn, cmts):
            txn.execute("call def_db.getip(%s)", (cmts, ))
            return txn.fetchall()

        def dbSPGeneric(cmts):
            return dbpool.runInteraction(_dbSPGeneric, cmts)

        def returnResults(results):
            response.Result = results

        def showError(msg):
            response.Error = msg

        response.Result = ""
        response.Error = ""

        d = dbSPGeneric(request.Cmts)
        d.addCallbacks(returnResults, showError)
        return request, response

def main():
    from wsgiref.simple_server import make_server
    from ZSI.twisted.wsgi import WSGIApplication
    application = WSGIApplication()
    httpd = make_server('127.0.0.1', 8080, application)
    application['dpac'] = DPACServer()
    print "listening..."
    httpd.serve_forever()

if __name__ == '__main__':
    main()
The code you posted creates a new ConnectionPool per (some kind of) request and it never stops the pool. This means you'll eventually run out of resources and you won't be able to service any more requests. "Eventually" is probably after one or two or three requests.
If you never get any responses perhaps this isn't the problem you've encountered. It will be a problem at some point though.
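A minimal fix for that part (a sketch, keeping your names) is to create the pool and the interaction helpers once at module level and reuse them from every request:

from twisted.enterprise import adbapi

# One pool for the whole process, created at import time.
dbpool = adbapi.ConnectionPool("MySQLdb", '127.0.0.1', 'def_user', 'def_pwd',
                               'def_db', cp_reconnect=True)

def _dbSPGeneric(txn, cmts):
    txn.execute("call def_db.getip(%s)", (cmts, ))
    return txn.fetchall()

def dbSPGeneric(cmts):
    # Reuse the shared pool instead of building a new one per request.
    return dbpool.runInteraction(_dbSPGeneric, cmts)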
On closer inspection, I wonder if this code even runs the Twisted reactor at all. On first read, I thought you were using some ZSI Twisted integration to run your server. Now I see that you're using wsgiref.simple_server. I am moderately confident that this won't work.
You're already using Twisted, use Twisted's WSGI server instead.
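Here's a sketch of serving the same WSGI application with Twisted instead of wsgiref (I haven't tested this against ZSI, but these are standard Twisted APIs):

from twisted.internet import reactor
from twisted.web.server import Site
from twisted.web.wsgi import WSGIResource
from ZSI.twisted.wsgi import WSGIApplication

application = WSGIApplication()
application['dpac'] = DPACServer()

# The WSGI app runs in Twisted's thread pool; the reactor serves HTTP.
resource = WSGIResource(reactor, reactor.getThreadPool(), application)
reactor.listenTCP(8080, Site(resource))
reactor.run()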
Beyond that, verify that ZSI executes your callbacks in the correct thread. The default for WSGI applications is to run in a non-reactor thread. Twisted APIs are not thread-safe, so if ZSI doesn't do something to correct for this, you'll have bugs introduced by using non-thread-safe APIs in threads.
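One common pattern for that situation - sketched here with a hypothetical handler name, and not verified against ZSI - is twisted.internet.threads.blockingCallFromThread, which runs a Deferred-returning call on the reactor thread and blocks the calling (non-reactor) thread until the result arrives:

from twisted.internet import reactor
from twisted.internet.threads import blockingCallFromThread

def soap_GetIPOperation_threadsafe(self, request, response, **kw):
    # Schedule the adbapi interaction on the reactor thread and wait here,
    # in the WSGI worker thread, for its result.
    try:
        response.Result = blockingCallFromThread(reactor, dbSPGeneric, request.Cmts)
        response.Error = ""
    except Exception as e:
        response.Result = ""
        response.Error = str(e)
    return request, response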