I've started a client/server project at work using Twisted (I'm a newcomer, so not much experience). I probably set things up in the wrong way/order, because now I'm a little stuck with a daemon server (using twistd --python to launch it).
I'm wondering whether I have to re-implement the server as a standard process in order to use it in my unittest module.
Here's part of the code that launches the server as a daemon in the server module (you'll probably recognize parts of krondo's articles in this):
class TwistedHawkService(service.Service):
    def startService(self):
        ''''''
        service.Service.startService(self)
        log.msg('TwistedHawkService running ...')


# Configuration
port = 10000
iface = 'localhost'

topService = service.MultiService()

thService = TwistedHawkService()
thService.setServiceParent(topService)

factory = ReceiverFactory(thService)
tcpService = internet.TCPServer(port, factory, interface=iface)
tcpService.setServiceParent(topService)

application = service.Application("TwistedHawkService")
topService.setServiceParent(application)
I tried copy/pasting the configuration part into the setUp method:
from twisted.application import internet, service
from twisted.trial.unittest import TestCase

from mfxTwistedHawk.client import mfxTHClient
from mfxTwistedHawk.server import mfxTHServer


class RequestTestCase(TestCase):
    def setUp(self):
        # Configuration
        port = 10000
        iface = 'localhost'

        self.topService = service.MultiService()

        thService = mfxTHServer.TwistedHawkService()
        thService.setServiceParent(self.topService)

        factory = mfxTHServer.ReceiverFactory(thService)
        tcpService = internet.TCPServer(port, factory, interface=iface)
        tcpService.setServiceParent(self.topService)

        application = service.Application("TwistedHawkService")
        self.topService.setServiceParent(application)

    def test_connection(self):
        mfxTHClient.requestMain('someRequest')
... but of course running unittest.py with trial doesn't start the server as a daemon, so my client can't reach it.
Any advice on how to set things up would be appreciated.
Thanks!
Edit:
Managed to make everything work with this and this, but I still feel unsure about the whole thing:
def setUp(self):
    # Configuration
    port = 10000
    iface = 'localhost'

    service = mfxTHServer.TwistedHawkService()
    factory = mfxTHServer.ReceiverFactory(service)
    self.server = reactor.listenTCP(port, factory, interface=iface)
Is it OK to have a daemon implementation for production and a standard process for unit tests?
Is it OK to have a daemon implementation for production and a standard process for unit tests?
Your unit test isn't for Twisted's daemonization functionality. It's for the custom application/protocol/server/whatever functionality that you implemented. In general, in a unit test, you want to involve as little code as possible. So in general, it's quite okay, and even preferable, to have your unit tests not daemonize. In fact, you probably want to write some unit tests that don't even listen on a real TCP port, but just call methods on your service, factory, and protocol classes.
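For example, here is a sketch of a protocol-level test that uses trial's in-memory transport and never touches a real TCP port. It assumes your ReceiverFactory builds a protocol that reacts to the bytes of a request; adjust the input and the assertion to your actual protocol.

from twisted.trial.unittest import TestCase
from twisted.test.proto_helpers import StringTransport

from mfxTwistedHawk.server import mfxTHServer


class ReceiverProtocolTestCase(TestCase):
    def setUp(self):
        thService = mfxTHServer.TwistedHawkService()
        factory = mfxTHServer.ReceiverFactory(thService)
        # Build a protocol instance and hook it to an in-memory transport,
        # so no listening port and no real networking is involved at all.
        self.proto = factory.buildProtocol(('127.0.0.1', 0))
        self.transport = StringTransport()
        self.proto.makeConnection(self.transport)

    def test_someRequest(self):
        # Feed the bytes a client would send and inspect what was written back.
        self.proto.dataReceived('someRequest\r\n')
        self.assertNotEqual(self.transport.value(), '')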
How can I manage my RabbitMQ connection in a Pyramid app?
I would like to re-use a connection to the queue throughout the web application's lifetime. Currently I am opening/closing a connection to the queue for every publish call.
But I can't find any "global" services definition in Pyramid. Any help appreciated.
Pyramid does not need a "global services definition" because you can trivially do that in plain Python:
db.py:
connection = None

def connect(url):
    global connection
    connection = FooBarBaz(url)
your startup file (__init__.py)
from db import connect

if __name__ == '__main__':
    connect(DB_CONNSTRING)
elsewhere:
from db import connection
...
connection.do_stuff(foo, bar, baz)
Having a global (any global) is going to cause problems if you ever run your app in a multi-threaded environment, but is perfectly fine if you run multiple processes, so it's not a huge restriction. If you need to work with threads the recipe can be extended to use thread-local variables. Here's another example which also connects lazily, when the connection is needed the first time.
db.py:
import threading

connections = threading.local()

def get_connection():
    if not hasattr(connections, 'this_thread_connection'):
        connections.this_thread_connection = FooBarBaz(DB_STRING)
    return connections.this_thread_connection
elsewhere:
from db import get_connection
get_connection().do_stuff(foo, bar, baz)
Another common problem with long-living connections is that the application won't auto-recover if, say, you restart RabbitMQ while your application is running. You'll need to somehow detect dead connections and reconnect.
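For example, a hedged sketch extending the thread-local recipe above; is_usable() is a placeholder for whatever liveness check your client library offers (an is_open flag, a heartbeat, etc.), so adapt it to the real API.

import threading

connections = threading.local()

def is_usable(conn):
    # Placeholder: replace with a real liveness check for your library.
    return getattr(conn, 'is_open', True)

def get_connection():
    conn = getattr(connections, 'this_thread_connection', None)
    if conn is None or not is_usable(conn):
        conn = FooBarBaz(DB_STRING)  # (re)connect lazily
        connections.this_thread_connection = conn
    return conn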
It looks like you can attach objects to the request with add_request_method.
Here's a little example app using that method to make one and only one connection to a socket on startup, then make the connection available to each request:
from wsgiref.simple_server import make_server
from pyramid.config import Configurator
from pyramid.response import Response


def index(request):
    return Response('I have a persistent connection: {} with id {}'.format(
        repr(request.conn).replace("<", "&lt;"),
        id(request.conn),
    ))


def add_connection():
    import socket
    s = socket.socket()
    s.connect(("google.com", 80))
    print("I should run only once")

    def inner(request):
        return s
    return inner


if __name__ == '__main__':
    config = Configurator()
    config.add_route('index', '/')
    config.add_view(index, route_name='index')
    config.add_request_method(add_connection(), 'conn', reify=True)
    app = config.make_wsgi_app()
    server = make_server('0.0.0.0', 8080, app)
    server.serve_forever()
You'll need to be careful about threading / forking in this case though (each thread / process will need its own connection). Also, note that I am not very familiar with Pyramid; there may be a better way to do this.
I'm new to Python and am currently researching its viability as a SOAP server. I currently have a very rough application that uses the blocking MySQL API, but I would like to try Twisted's adbapi. I've successfully used Twisted adbapi in regular Twisted code driven by the reactor, but I can't seem to make it work with the code below, which uses the ZSI framework. It's not returning anything from MySQL. Has anyone ever used Twisted adbapi with ZSI?
import os
import sys

from dpac_server import *
from ZSI.twisted.wsgi import (SOAPApplication,
                              soapmethod,
                              SOAPHandlerChainFactory)
from twisted.enterprise import adbapi
import MySQLdb


def _soapmethod(op):
    op_request = GED("http://www.example.org/dpac/", op).pyclass
    op_response = GED("http://www.example.org/dpac/", op + "Response").pyclass
    return soapmethod(op_request.typecode, op_response.typecode,
                      operation=op, soapaction=op)


class DPACServer(SOAPApplication):
    factory = SOAPHandlerChainFactory

    @_soapmethod('GetIPOperation')
    def soap_GetIPOperation(self, request, response, **kw):
        dbpool = adbapi.ConnectionPool("MySQLdb", '127.0.0.1', 'def_user',
                                       'def_pwd', 'def_db', cp_reconnect=True)

        def _dbSPGeneric(txn, cmts):
            txn.execute("call def_db.getip(%s)", (cmts, ))
            return txn.fetchall()

        def dbSPGeneric(cmts):
            return dbpool.runInteraction(_dbSPGeneric, cmts)

        def returnResults(results):
            response.Result = results

        def showError(msg):
            response.Error = msg

        response.Result = ""
        response.Error = ""

        d = dbSPGeneric(request.Cmts)
        d.addCallbacks(returnResults, showError)

        return request, response


def main():
    from wsgiref.simple_server import make_server
    from ZSI.twisted.wsgi import WSGIApplication

    application = WSGIApplication()
    httpd = make_server('127.0.0.1', 8080, application)
    application['dpac'] = DPACServer()
    print "listening..."
    httpd.serve_forever()


if __name__ == '__main__':
    main()
The code you posted creates a new ConnectionPool per (some kind of) request and it never stops the pool. This means you'll eventually run out of resources and you won't be able to service any more requests. "Eventually" is probably after one or two or three requests.
If you never get any responses perhaps this isn't the problem you've encountered. It will be a problem at some point though.
On closer inspection, I wonder if this code even runs the Twisted reactor at all. On first read, I thought you were using some ZSI Twisted integration to run your server. Now I see that you're using wsgiref.simple_server. I am moderately confident that this won't work.
You're already using Twisted, use Twisted's WSGI server instead.
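For example, here is a rough sketch (untested, reusing the names from your code) of serving the same WSGIApplication with Twisted's own WSGI container, so the reactor is actually running and adbapi can deliver its results:

from twisted.internet import reactor
from twisted.web.server import Site
from twisted.web.wsgi import WSGIResource
from ZSI.twisted.wsgi import WSGIApplication

application = WSGIApplication()
application['dpac'] = DPACServer()

# Serve the WSGI app from Twisted's HTTP server; WSGI calls run in a
# thread pool while the reactor keeps running in the main thread.
resource = WSGIResource(reactor, reactor.getThreadPool(), application)
reactor.listenTCP(8080, Site(resource), interface='127.0.0.1')
reactor.run()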
Beyond that, verify that ZSI executes your callbacks in the correct thread. The default for WSGI applications is to run in a non-reactor thread. Twisted APIs are not thread-safe, so if ZSI doesn't do something to correct for this, you'll have bugs introduced by using un-thread-safe APIs in threads.
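As a hedged sketch of that last point, assuming dbpool and _dbSPGeneric are hoisted to module level: the handler, running in a WSGI worker thread, can hand the adbapi work to the reactor thread and block until the Deferred fires.

from twisted.internet import reactor
from twisted.internet.threads import blockingCallFromThread

def soap_GetIPOperation(self, request, response, **kw):
    def query():
        # Runs in the reactor thread and returns a Deferred.
        return dbpool.runInteraction(_dbSPGeneric, request.Cmts)
    try:
        response.Result = blockingCallFromThread(reactor, query)
        response.Error = ""
    except Exception, e:
        response.Result = ""
        response.Error = str(e)
    return request, response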
I am using txzmq and Twisted to build a listener service that will process some data through a push-pull pattern. Here's the working code:
from txzmq import ZmqFactory, ZmqEndpoint, ZmqPullConnection
from twisted.internet import reactor

zf = ZmqFactory()
endpoint = ZmqEndpoint('bind', 'tcp://*:5050')


def onPull(data):
    # do something with data
    pass


puller = ZmqPullConnection(zf, endpoint)
puller.onPull = onPull

reactor.run()
My question is: how can I wrap this code in a Twisted application service? That is, how do I wrap it into a specific service (e.g. MyService) that I can later run with:
from twisted.application.service import Application
application = Application('My listener')
service = MyService(bind_address='*', port=5050)
service.setServiceParent(application)
with the twistd runner?
IService defines what it means to be a service. Service is a base class that is often helpful when implementing a new service.
Just move your ZMQ initialization code into a startService method of an object that implements IService, perhaps a subclass of Service. If you want to do proper cleanup too, then add some cleanup code to the stopService method of that class.
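A minimal sketch, assuming the txzmq names and usage from your snippet, and assuming ZmqFactory.shutdown() is the appropriate teardown call for your txzmq version:

from twisted.application.service import Service
from txzmq import ZmqFactory, ZmqEndpoint, ZmqPullConnection


class MyService(Service):
    def __init__(self, bind_address='*', port=5050):
        self.bind_address = bind_address
        self.port = port

    def startService(self):
        Service.startService(self)
        # Create the ZMQ machinery only when the service actually starts.
        self.zf = ZmqFactory()
        endpoint = ZmqEndpoint('bind', 'tcp://%s:%d' % (self.bind_address, self.port))
        self.puller = ZmqPullConnection(self.zf, endpoint)
        self.puller.onPull = self.onPull

    def stopService(self):
        self.zf.shutdown()  # assumed teardown; adjust to your txzmq version
        return Service.stopService(self)

    def onPull(self, data):
        # do something with data
        pass

Note there is no reactor.run() here: twistd runs the reactor for you, and the socket is only bound once the service starts and released when it stops.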
I have a simple CherryPy web application, including two classes. The init code looks like this:
c = MyClass()
c.updates = AnotherClass()
app = cherrypy.tree.mount(c, '/', 'myapp.config')
c.setConfig(app.config)
c.updates.setConfig(app.config)
cherrypy.engine.start()
cherrypy.engine.block()
The setConfig method for both classes is just a line of code to store some database configuration:
def setConfig(self, conf):
    self.config = conf['Database']
The configuration file myapp.config looks like this:
[global]
server.socket_host = "0.0.0.0"
server.socket_port = 80
[/]
tools.staticdir.root = com.stuff.myapp.rootDir + '/html'
[Database]
dbtable: "mydbtable"
username: "user"
password: "pass"
When I start the lot, the application gets the database config data, and correctly serves static files from the /html directory, but it only listens on localhost on 8080. I get this on the console:
[11/Apr/2013:10:03:58] ENGINE Bus STARTING
[11/Apr/2013:10:03:58] ENGINE Started monitor thread 'Autoreloader'.
[11/Apr/2013:10:03:58] ENGINE Started monitor thread '_TimeoutMonitor'.
[11/Apr/2013:10:03:58] ENGINE Serving on 127.0.0.1:8080
[11/Apr/2013:10:03:58] ENGINE Bus STARTED
I definitely must have done something wrong. It's as if the global part of the configuration doesn't get applied. How can I fix it?
I think I figured out how to solve it. I added this line:
cherrypy.config.update('myapp.config')
after the line that says
app = cherrypy.tree.mount(c, '/', 'myapp.config')
I think the reason why my classes were getting the Database configuration is that I pass it manually with the setConfig() calls. This passes the application configuration only, not the global configuration. The mount() call apparently doesn't propagate the configuration data to the objects it mounts, as I thought it would do.
Furthermore, the update() call must be after the mount() call, or an exception is raised.
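For reference, a sketch of the resulting startup sequence (same classes and config file as above):

import cherrypy

c = MyClass()
c.updates = AnotherClass()

app = cherrypy.tree.mount(c, '/', 'myapp.config')
cherrypy.config.update('myapp.config')  # apply the [global] section (host/port) too

c.setConfig(app.config)
c.updates.setConfig(app.config)

cherrypy.engine.start()
cherrypy.engine.block()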
I'm not sure whether this is the best way to organize this code. This works for now, but better ideas are always welcome.
I have a Twisted application which now needs to monitor processes running on several boxes. The way I do it manually is 'ssh and ps'; now I'd like my Twisted application to do it. I have 2 options.
Use paramiko or leverage the power of twisted.conch
I really want to use twisted.conch, but my research led me to believe that it's primarily intended to create SSH servers and SSH clients. However, my requirement is a simple remoteExecute(some_cmd).
I was able to figure out how to do this using paramiko, but I didn't want to stick paramiko into my Twisted app before looking at how to do this using twisted.conch.
Code snippets showing how to run remote commands over SSH using Twisted would be highly appreciated. Thanks.
Followup - Happily, the ticket I referenced below is now resolved. The simpler API will be included in the next release of Twisted. The original answer is still a valid way to use Conch and may reveal some interesting details about what's going on, but from Twisted 13.1 on, if you just want to run a command and handle its I/O, this simpler interface will work.
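For reference, here is a rough sketch of that newer interface (twisted.conch.endpoints.SSHCommandClientEndpoint); the username and host are placeholders, and in practice you would pass keys=, password= or agentEndpoint= to newConnection for authentication:

from twisted.internet import reactor
from twisted.internet.defer import Deferred
from twisted.internet.protocol import Factory, Protocol
from twisted.conch.endpoints import SSHCommandClientEndpoint

class PrintOutput(Protocol):
    def connectionMade(self):
        self.finished = Deferred()
    def dataReceived(self, data):
        # Print whatever the remote command writes to stdout.
        print data,
    def connectionLost(self, reason):
        self.finished.callback(None)

endpoint = SSHCommandClientEndpoint.newConnection(
    reactor, b"/bin/ls", b"user", b"localhost", port=22)
factory = Factory()
factory.protocol = PrintOutput
d = endpoint.connect(factory)
d.addCallback(lambda proto: proto.finished)
d.addBoth(lambda ignored: reactor.stop())
reactor.run()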
It takes an unfortunately large amount of code to execute a command on an SSH server using the Conch client APIs. Conch makes you deal with a lot of different layers, even if you just want sensible, boring default behavior. However, it's certainly possible. Here's some code which I've been meaning to finish and add to Twisted to simplify this case:
import sys, os

from zope.interface import implements

from twisted.python.failure import Failure
from twisted.python.log import err
from twisted.internet.error import ConnectionDone
from twisted.internet.defer import Deferred, succeed, setDebugging
from twisted.internet.interfaces import IStreamClientEndpoint
from twisted.internet.protocol import Factory, Protocol

from twisted.conch.ssh.common import NS
from twisted.conch.ssh.channel import SSHChannel
from twisted.conch.ssh.transport import SSHClientTransport
from twisted.conch.ssh.connection import SSHConnection
from twisted.conch.client.default import SSHUserAuthClient
from twisted.conch.client.options import ConchOptions

# setDebugging(True)


class _CommandTransport(SSHClientTransport):
    _secured = False

    def verifyHostKey(self, hostKey, fingerprint):
        return succeed(True)

    def connectionSecure(self):
        self._secured = True
        command = _CommandConnection(
            self.factory.command,
            self.factory.commandProtocolFactory,
            self.factory.commandConnected)
        userauth = SSHUserAuthClient(
            os.environ['USER'], ConchOptions(), command)
        self.requestService(userauth)

    def connectionLost(self, reason):
        if not self._secured:
            self.factory.commandConnected.errback(reason)


class _CommandConnection(SSHConnection):
    def __init__(self, command, protocolFactory, commandConnected):
        SSHConnection.__init__(self)
        self._command = command
        self._protocolFactory = protocolFactory
        self._commandConnected = commandConnected

    def serviceStarted(self):
        channel = _CommandChannel(
            self._command, self._protocolFactory, self._commandConnected)
        self.openChannel(channel)


class _CommandChannel(SSHChannel):
    name = 'session'

    def __init__(self, command, protocolFactory, commandConnected):
        SSHChannel.__init__(self)
        self._command = command
        self._protocolFactory = protocolFactory
        self._commandConnected = commandConnected

    def openFailed(self, reason):
        self._commandConnected.errback(reason)

    def channelOpen(self, ignored):
        self.conn.sendRequest(self, 'exec', NS(self._command))
        self._protocol = self._protocolFactory.buildProtocol(None)
        self._protocol.makeConnection(self)

    def dataReceived(self, bytes):
        self._protocol.dataReceived(bytes)

    def closed(self):
        self._protocol.connectionLost(
            Failure(ConnectionDone("ssh channel closed")))


class SSHCommandClientEndpoint(object):
    implements(IStreamClientEndpoint)

    def __init__(self, command, sshServer):
        self._command = command
        self._sshServer = sshServer

    def connect(self, protocolFactory):
        factory = Factory()
        factory.protocol = _CommandTransport
        factory.command = self._command
        factory.commandProtocolFactory = protocolFactory
        factory.commandConnected = Deferred()
        d = self._sshServer.connect(factory)
        d.addErrback(factory.commandConnected.errback)
        return factory.commandConnected


class StdoutEcho(Protocol):
    def dataReceived(self, bytes):
        sys.stdout.write(bytes)
        sys.stdout.flush()

    def connectionLost(self, reason):
        self.factory.finished.callback(None)


def copyToStdout(endpoint):
    echoFactory = Factory()
    echoFactory.protocol = StdoutEcho
    echoFactory.finished = Deferred()
    d = endpoint.connect(echoFactory)
    d.addErrback(echoFactory.finished.errback)
    return echoFactory.finished


def main():
    from twisted.python.log import startLogging
    from twisted.internet import reactor
    from twisted.internet.endpoints import TCP4ClientEndpoint

    # startLogging(sys.stdout)

    sshServer = TCP4ClientEndpoint(reactor, "localhost", 22)
    commandEndpoint = SSHCommandClientEndpoint("/bin/ls", sshServer)

    d = copyToStdout(commandEndpoint)
    d.addErrback(err, "ssh command / copy to stdout failed")
    d.addCallback(lambda ignored: reactor.stop())
    reactor.run()


if __name__ == '__main__':
    main()
Some things to note about it:
It uses the new endpoint APIs introduced in Twisted 10.1. It's possible to do this directly on reactor.connectTCP, but I did it as an endpoint to make it more useful; endpoints can be swapped easily without the code that actually asks for a connection knowing.
It does no host key verification at all! _CommandTransport.verifyHostKey is where you would implement that. Take a look at twisted/conch/client/default.py for some hints about what kinds of things you might want to do.
It takes $USER to be the remote username, which you may want to be a parameter.
It probably only works with key authentication. If you want to enable password authentication, you probably need to subclass SSHUserAuthClient and override getPassword to do something.
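For example, a hedged sketch of that tweak; the credential is a placeholder, and you would construct this subclass instead of SSHUserAuthClient in _CommandTransport.connectionSecure():

from twisted.internet.defer import succeed
from twisted.conch.client.default import SSHUserAuthClient

class PasswordAuthClient(SSHUserAuthClient):
    def getPassword(self, prompt=None):
        # Return the password directly instead of prompting on the terminal.
        return succeed('my-secret-password')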
Almost all of the layers of SSH and Conch are visible here:
_CommandTransport is at the bottom, a plain old protocol that implements the SSH transport protocol. It creates a...
_CommandConnection which implements the SSH connection negotiation parts of the protocol. Once that completes, a...
_CommandChannel is used to talk to a newly opened SSH channel. _CommandChannel does the actual exec to launch your command. Once the channel is opened it creates an instance of...
StdoutEcho, or whatever other protocol you supply. This protocol will get the output from the command you execute, and can write to the command's stdin.
See http://twistedmatrix.com/trac/ticket/4698 for progress in Twisted on supporting this with less code.