I have some code that's monitoring some other changing files. What I would like to do is start that code, which uses ZeroMQ, with a different socket. The way I'm doing it now seems to cause assertions to fail somewhere in libzmq, since I may be reusing the same socket. How do I ensure that when I create a new process from the monitor class, the context will not be reused? That's what I think is going on; if you can tell there is some other stupidity on my part, please advise.
Here is some code:
import zmq
from zmq.eventloop import ioloop
from zmq.eventloop.zmqstream import ZMQStream

class Monitor(object):
    def __init__(self):
        self.context = zmq.Context()
        self.socket = self.context.socket(zmq.DEALER)
        self.socket.connect("tcp://127.0.0.1:5055")
        self.stream = ZMQStream(self.socket)
        self.stream.on_recv(self.somefunc)

    def initialize(self, id):
        self._id = id

    def somefunc(self, something):
        """work here and send back results if any"""
        import json
        jdecoded = json.loads(something)
        if self._id == jdecoded['_id']:
            """ good, I'm the right monitor for you """
            work = jdecoded['message']
            results = algorithm(work)
            self.socket.send(json.dumps(results))
        else:
            """let some other process deal with it, not mine"""
            pass
class Prefect(object):
    def __init__(self, id):
        self.context = zmq.Context()
        self.socket = self.context.socket(zmq.DEALER)
        self.socket.bind("tcp://127.0.0.1:5055")
        self.stream = ZMQStream(self.socket)
        self.stream.on_recv(self.check_if)
        self._id = id
        self.monitors = []

    def check_if(self, message):
        """find out from the message's id whether we have
        started a process for it previously"""
        import json
        jdecoded = json.loads(message)
        this_id = jdecoded['_id']
        if this_id in self.monitors:
            pass
        else:
            """start a new process for it; it should have its own socket"""
            new = Monitor()
            from multiprocessing import Process
            newp = Process(target=new.initialize, args=(this_id,))
            newp.start()
            self.monitors.append(this_id)  # ensure it's remembered
What is going on is that I want all the monitor processes and a single Prefect process listening on the same port, so when the Prefect sees a request it hasn't seen before, it starts a process for it. All the processes that exist should probably listen too, but ignore messages not meant for them.
As it stands, if I do this I get a crash, possibly related to concurrent access of the same ZMQ socket by something (I tried threading.Thread; it still crashes). I read somewhere that concurrent access of a ZMQ socket from different threads is not possible. How would I ensure that new processes get their own ZMQ sockets?
EDIT:
The main deal in my app is that a request comes in via a ZMQ socket, and a process (or processes) that is listening reacts to the message by:
1. If it is directed at that process, judged by the _id field, doing some reading on a file and replying, since one of the monitors matches the message's _id. If none match, then:
2. If the message's _id field is not recognized, all monitors ignore it, but the Prefect creates a process to handle that _id and all future messages to that id.
3. I want all the messages to be seen by the monitor processes as well as the Prefect process; that seems easiest.
4. All the messages are very small, on average ~4096 bytes.
5. The monitor does some non-blocking reads, and on each ioloop iteration it sends what it has found out.
More edit: the Prefect process binds now, and it will receive messages and echo them so they can be seen by the monitors. This is what I have in mind as the architecture, but it is not final.
All the messages are arriving from remote users over a browser that lets the server know what a client wants, and the server sends the message to the backend via ZMQ (I did not show this, but it is not hard), so in production they might not bind/connect to localhost.
I chose DEALER since it allows async / unlimited messages in either direction (see point 5), DEALER can bind with DEALER, and the initial request/reply can arrive from either side. The other pairing that can do this is possibly DEALER/ROUTER.
You are correct that you cannot keep using the same socket in a subprocess (multiprocessing usually uses fork to create subprocesses). In general, what this means is that you don't want to create the socket that will be used in the subprocess until after the subprocess starts.
Since, in your case, the socket is an attribute on the Monitor object, you probably don't want to create the Monitor in the main process at all. That would look something like this:
def start_monitor(this_id):
    monitor = Monitor()
    monitor.initialize(this_id)
    # run the eventloop, or this will return immediately and destroy the monitor

... inside Prefect.check_if():

    proc = Process(target=start_monitor, args=(this_id,))
    proc.start()
    self.monitors.append(this_id)
rather than your example, where the only thing the subprocess does is assign an ID and then exit, ultimately having no effect.
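To flesh out the "run the eventloop" comment, a minimal sketch (assuming pyzmq's tornado-based loop from zmq.eventloop) might look like this; the key point is that the Monitor, and therefore its context and socket, is created only inside the child process:

from zmq.eventloop import ioloop

def start_monitor(this_id):
    monitor = Monitor()                 # context/socket created in the child process
    monitor.initialize(this_id)
    ioloop.IOLoop.instance().start()    # blocks, serving this monitor's stream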
Related
I have a fake HTTP server that I use as a fixture in my testing. At some point in the test, I want to stop the server regardless of any still open connections. Clients on these open connections should get a TCP FIN.
I am aware that usually production servers need to solve different problem, that of quiescing, sometimes called graceful shutdown. This is the opposite of what I want.
With a standalone process, it is usually possible to simply get the process to quit and the OS will take care of the rest. (Forcibly killing processes is easy, while forcibly killing threads is not.) My fake server is, however, running in a thread of the test process itself, so I don't have this option (and I don't want to externalize it if there is other way around).
I investigated this issue in Python, with the HTTPServer class, where I was not able to find any solution.
I also investigated this in Go, where I was able to find the concept of Contexts, which is close to what I need, but it works the other way around: an HTTP server would propagate a Context that can be used to cancel e.g. a database lookup if a client disconnected.
Edit: looks like Go actually does what I need and has a separate graceful and nongraceful shutdown methods, with the nongraceful being net/http#Server.Close.
import http.server
import threading

server = http.server.HTTPServer(...)
thread = threading.Thread(target=server.serve_forever)
thread.start()

# a client has connected ....

server.shutdown()
# at this point I want to have the server stopped,
# without waiting for the request handling to complete
I've implemented the Go solution in Python. When new client connects, I remember the client socket, and when I want to quit, I shutdown all remembered sockets.
It seems to work.
import socket
from http.server import HTTPServer
from typing import Any, List, Tuple

class MyHTTPServer(HTTPServer):
    """Adds a method to the HTTPServer to allow it to exit gracefully"""

    def __init__(self, addr, handler_cls):
        super().__init__(addr, handler_cls)
        self._client_sockets: List[socket.socket] = []
        self.server_killed = False

    def get_request(self) -> Tuple[socket.socket, Any]:
        """Remember the client socket"""
        sock, addr = super().get_request()
        self._client_sockets.append(sock)
        return sock, addr

    def shutdown_request(self, request: socket.socket) -> None:
        """Forget the client socket"""
        self._client_sockets.remove(request)
        print(f"{self._client_sockets=}")
        super().shutdown_request(request)

    def force_disconnect_clients(self) -> None:
        """Shutdown the remembered sockets"""
        for client in self._client_sockets:
            client.shutdown(socket.SHUT_RDWR)
Usage
server = MyHTTPServer(server_addr, MyRequestHandler)

# in a new thread
while not server.server_killed:
    server.handle_request()

# ... use the server (keep in mind it can have at most one client at a time) ...

# in the main program
server.server_killed = True
server.force_disconnect_clients()
server.server_close()
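For completeness, one way the snippet above might be wired into an actual thread (the serve helper and the final join are my additions, not part of the original usage):

import threading

def serve(srv: MyHTTPServer) -> None:
    # handle one request at a time until the kill flag is set
    while not srv.server_killed:
        srv.handle_request()

thread = threading.Thread(target=serve, args=(server,))
thread.start()

# ... run the test against the fake server ...

server.server_killed = True
server.force_disconnect_clients()   # open clients get a TCP FIN
server.server_close()
thread.join()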
I'm having a weird issue with the proxy in pyzmq. Here's the code of that proxy:
import zmq
context = zmq.Context.instance()
frontend_socket = context.socket(zmq.XSUB)
frontend_socket.bind("tcp://0.0.0.0:%s" % sub_port)
backend_socket = context.socket(zmq.XPUB)
backend_socket.bind("tcp://0.0.0.0:%s" % pub_port)
zmq.proxy(frontend_socket, backend_socket)
I'm using that proxy to send messages between ~50 processes that run on 6 different machines. The total amount of topics is around 1,000, but since multiple processes can listen on the same topics, the total amount of subscriptions is around 10,000.
In normal times this works very well, messages go through the proxy correctly as long as a process publishes it and at least one other processes is subscribed to the topic. It works whether the publisher or subscriber was started first.
But at some point in time, when we start a new process (let's call it X), it starts behaving strangely. Everything that was already connected keeps working, but the new processes that we connect can only get messages to go through if the publisher is connected before the subscriber. X can be any one of the processes that normally work, and it can be from any machine, and the result is the same. When we get in this state, killing X makes everything work again, and starting it again makes it fail. If we stop other processes and then start X, it works well (so it's not related with X's code in particular).
I'm not sure if we could be reaching some limit of ZMQ? I've read examples of people that seem to have way more processes, subscriptions, etc. than us. It could be some option that we should set on the proxy, so far here are the ones we've tried without success:
Changing RCVHWM on frontend_socket
Changing SNDHWM on backend_socket
Setting XPUB_VERBOSE on backend_socket
Setting XPUB_VERBOSER on backend_socket
Here is sample code of how we publish messages to the proxy:
topic = "test"
message = {"test": "test"}
context = zmq.Context.instance()
socket = context.socket(zmq.PUB)
socket.connect("tcp://1.2.3.4:1234")
while True:
time.sleep(1)
socket.send_multipart([topic.encode(), json.dumps(message).encode()])
Here is sample code of how we subscribe to messages from the proxy:
topic = "test"
context = zmq.Context.instance()
socket = context.socket(zmq.SUB)
socket.connect("tcp://1.2.3.4:5678")
socket.subscribe(topic)
while True:
multi_part = socket.recv_multipart()
[topic, message] = multi_part
print(topic.decode(), message.decode())
Has anyone ever seen a similar issue? Is there something we can do to avoid the proxy getting in this state?
Thanks!
Make all the publishers (proxy and publish process) XPUB (+ sockopt verbose/verboser), then read from the publisher sockets in a poll loop. The first byte of the subscription message will tell you whether the message is a sub/unsub, followed by the subject/topic. If you log all of this information with timestamps, it should tell you which component is at fault (it could be any of the three) and help with a fix.
The format of the subscription messages that arrive on the publisher (XPUB) will be
Subscription [0x01][topic]
Unsubscription [0x00][topic]
Code needed
I usually work in C++, but this is the general idea in Python.
proxy
You need to create a capture socket (this acts like a network tap). You connect a ZMQ_PAIR socket to the proxy (capture) over inproc and then read the contents at the other end of the socket. As you are using XPUB/XSUB you will see the subscription messages.
zmq.proxy(frontend, backend, capture)
read the docs/examples for the python proxy.
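For illustration, a rough, untested sketch of wiring a capture tap into the proxy from the question; frontend_socket and backend_socket are as in the original code, and the tap end has to be drained from a separate thread because zmq.proxy() blocks:

import threading
import time
import zmq

context = zmq.Context.instance()

# capture socket passed to the proxy: every frame the proxy forwards,
# including the XSUB/XPUB subscription frames, is mirrored here
capture = context.socket(zmq.PAIR)
capture.bind("inproc://proxy-capture")

# the other end of the tap, read from a dedicated thread
tap = context.socket(zmq.PAIR)
tap.connect("inproc://proxy-capture")

def drain_tap():
    # log everything flowing through the proxy, with timestamps
    while True:
        frames = tap.recv_multipart()
        print(time.time(), frames)

threading.Thread(target=drain_tap, daemon=True).start()

# blocks forever, forwarding frontend <-> backend and mirroring to capture
zmq.proxy(frontend_socket, backend_socket, capture)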
publisher
In this case you need to read from the publishing socket in the same thread as you are sending on it. That's the reason I said a poll loop might be best.
This code is not tested at all.
topic = "test"
message = {"test": "test"}
context = zmq.Context.instance()
socket = context.socket(zmq.XPUB)
socket.connect("tcp://1.2.3.4:1234")
poller = zmq.Poller()
poller.register(socket, zmq.POLLIN)
timeout = 1000 #ms
while True:
socks = dict(poller.poll(timeout))
if not socks : # 1
socket.send_multipart([topic.encode(), json.dumps(message).encode()])
if socket in socks:
sub_msg = socket.recv()
# print out the message here.
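To interpret sub_msg above: per the format described earlier, the first byte distinguishes subscribe from unsubscribe, for example:

# sub_msg comes from socket.recv() on the XPUB socket
event, topic = sub_msg[0:1], sub_msg[1:]
if event == b"\x01":
    print("SUBSCRIBE", topic)
elif event == b"\x00":
    print("UNSUBSCRIBE", topic)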
I'm currently working with RabbitMQ in Python using the Pika client to create a server that handles various message types. The basic setup I have is one queue receiving all incoming messages, a routing process that directs them to the correct destinations, and several processes to handle requests and accept incoming data. This setup has been working fine, except in one specific case. When I have the RabbitMQ server running before the server processes are started and it gets a message, it correctly stores those in the incoming message queue. However, when I then try to start those processes and set up a consumer to that non-empty incoming queue with the pika.basic_consume function, the program hangs. So, at the moment if I want to start up my server processes, I have to purge all messages from the queues before it will work correctly. How do I fix this to work with nonempty queues?
Here's a sample of one of the processes, they all are set up essentially the same as this one.
import pika
from multiprocessing import Process

class Router(Process):
    def __init__(self, routing_table):
        super(Router, self).__init__()
        self.routing_table = routing_table
        self.routeQueues = {
            'r': 'registration',
            't': 'util',
            'p': 'util',
            's': 'data'
        }
        # Create a connection to the RabbitMQ server.
        self.rabbitConn = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
        self.channel = self.rabbitConn.channel()
        # Load all of the existing registered node queues
        with open('registrations/nodes.txt', 'r') as nodes:
            for line in nodes:
                info = line.strip().split(":")
                self.channel.queue_declare(info[1])
        # Declare the default queues
        queue_list = ["incoming", "registration", "util"]
        for queueName in queue_list:
            self.channel.queue_declare(queueName)
        # Start consuming things from the incoming queue
        self.channel.basic_consume(self.gotPacket, queue='incoming')

    def gotPacket(self, ch, method, params, body):
        # Does stuff. Not relevant here.
        pass

    def run(self):
        self.channel.start_consuming()
This issue was caused by the pika 0.9.13 library; upgrading to pika 0.9.14 resolves it. (Credit: eandersson.)
I'm developing a Flask/gevent WSGIserver webserver that needs to communicate (in the background) with a hardware device over two sockets using XML.
One socket is initiated by the client (my application) and I can send XML commands to the device. The device answers on a different port and sends back information that my application has to confirm. So my application has to listen to this second port.
Up until now I have issued a command, opened the second port as a server, waited for a response from the device and closed the second port.
The problem is that it's possible that the device sends multiple responses that I have to confirm. So my solution was to keep the port open and keep responding to incoming requests. However, in the end the device is done sending requests, and my application is still listening (I don't know when the device is done), thereby blocking everything else.
This seemed like a perfect use case for a thread, so that my application launches a listening server in a separate thread. Because I'm already using gevent as a WSGI server for Flask, I can use the greenlets.
The problem is, I have looked for a good example of such a thing, but all I can find is examples of multi-threading handlers for a single socket server. I don't need to handle a lot of connections on the socket server, but I need it launched in a separate thread so it can listen for and handle incoming messages while my main program can keep sending messages.
The second problem I'm running into is that in the server, I need to use some methods from my "main" class. Being relatively new to Python I'm unsure how to structure it in a way to make that possible.
import socket

class Device(object):
    def __init__(self, ...):
        self.clientsocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    def _connect_to_device(self):
        print "OPEN CONNECTION TO DEVICE"
        try:
            self.clientsocket.connect((self.ip, 5100))
        except socket.error as e:
            pass

    def _disconnect_from_device(self):
        print "CLOSE CONNECTION TO DEVICE"
        self.clientsocket.close()

    def deviceaction1(self, ...):
        # the data that is sent is an XML document that depends on the
        # parameters of this method.
        self._connect_to_device()
        self._send_data(XMLdoc)
        self._wait_for_response()
        return True

    def _send_data(self, data):
        print "SEND:"
        print(data)
        self.clientsocket.send(data)

    def _wait_for_response(self):
        print "WAITING FOR REQUESTS FROM DEVICE (CHANNEL 1)"
        self.serversocket.bind(('10.0.0.16', 5102))
        self.serversocket.listen(5)  # listen for answer, maximum 5 connections
        connection, address = self.serversocket.accept()
        # the data is of a specific length I can calculate
        data = connection.recv(4096)
        if len(data) > 0:
            self._process_response(data)
        self.serversocket.close()

    def _process_response(self, data):
        print "RECEIVED:"
        print(data)
        # here is some code that processes the incoming data and
        # responds to the device
        # this may or may not result in more incoming data

if __name__ == '__main__':
    machine = Device(ip="10.0.0.240")
    machine.deviceaction1(...)
This is (globally, I left out sensitive information) what I'm doing now. As you can see everything is sequential.
If anyone can provide an example of a listening server in a separate thread (preferably using greenlets) and a way to communicate from the listening server back to the spawning thread, it would be of great help.
Thanks.
EDIT:
After trying several methods, I decided to use Python's built-in select() to solve this problem. This worked, so my question regarding the use of threads is no longer relevant. Thanks to the people who provided input for their time and effort.
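For reference, a minimal sketch of the select()-based approach mentioned above (the helper name, timeout, and buffer size are purely illustrative, not the poster's actual code):

import select

def poll_for_device_requests(serversocket, handler, timeout=1.0):
    """Check the listening socket without blocking forever; if the device
    has connected, read its request and hand it to the response handler."""
    readable, _, _ = select.select([serversocket], [], [], timeout)
    for sock in readable:
        connection, address = sock.accept()
        data = connection.recv(4096)
        if data:
            handler(data)
        connection.close()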
Hope this can provide some help. In the example class, if we call the tenMessageSender function, it will fire up an async thread without blocking the main loop, and then _zmqBasedListener will start listening on a separate port for as long as that thread is alive. Whatever messages our tenMessageSender function sends will be received by the client, which responds back to the zmqBasedListener.
Server Side
import threading
import zmq
import sys

class Example:
    def __init__(self):
        self.context = zmq.Context()
        self.publisher = self.context.socket(zmq.PUB)
        self.publisher.bind('tcp://127.0.0.1:9997')
        self.subscriber = self.context.socket(zmq.SUB)
        self.thread = threading.Thread(target=self._zmqBasedListener)

    def _zmqBasedListener(self):
        self.subscriber.connect('tcp://127.0.0.1:9998')
        self.subscriber.setsockopt(zmq.SUBSCRIBE, "some_key")
        while True:
            message = self.subscriber.recv()
            print message
            sys.exit()

    def tenMessageSender(self):
        self._decideListener()
        for message in range(10):
            self.publisher.send("testid : %d: I am a task" % message)

    def _decideListener(self):
        if not self.thread.is_alive():
            print "STARTING THREAD"
            self.thread.start()
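A possible way to exercise the class above (names as in the example):

example = Example()
example.tenMessageSender()   # starts the listener thread on first call, then publishes 10 messages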
Client
import zmq

context = zmq.Context()
subscriber = context.socket(zmq.SUB)
subscriber.connect('tcp://127.0.0.1:9997')
publisher = context.socket(zmq.PUB)
publisher.bind('tcp://127.0.0.1:9998')
subscriber.setsockopt(zmq.SUBSCRIBE, "testid")
count = 0
print "Listener"
while True:
    message = subscriber.recv()
    print message
    publisher.send('some_key : Message received %d' % count)
    count += 1
Instead of a thread you can use a greenlet, etc., as in the sketch below.
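For instance, a self-contained sketch of the greenlet variant (my assumption: gevent and pyzmq's zmq.green module are available; the port and subscription key mirror the example above):

import gevent
import zmq.green as zmq   # gevent-cooperative drop-in for the blocking zmq module

def listener():
    context = zmq.Context()
    subscriber = context.socket(zmq.SUB)
    subscriber.connect('tcp://127.0.0.1:9998')
    subscriber.setsockopt_string(zmq.SUBSCRIBE, u"some_key")
    while True:
        # recv() yields to other greenlets instead of blocking the whole process
        print(subscriber.recv())

glet = gevent.spawn(listener)
gevent.sleep(0)   # give the greenlet a chance to start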
I have been asked to write a class that connects to a server, asynchronously sends the server various commands, and then provides the returned data to the client. I've been asked to do this in Python, which is a new language to me. I started digging around and found the Twisted framework which offers some very nice abstractions (Protocol, ProtocolFactory, Reactor) that do a lot of the things that I would have to do if I would roll my own socket-based app. It seems like the right choice given the problem that I have to solve.
I've looked through numerous examples on the web (mostly Krondo), but I still haven't seen a good example of creating a client that sends multiple commands across the wire while maintaining the connection it creates. The server (over which I have no control), in this case, doesn't disconnect after it sends the response. So, what's the proper way to design the client so that I can tickle the server in various ways?
Right now I do this:
from twisted.internet import reactor
from twisted.internet.protocol import Factory, Protocol

class TestProtocol(Protocol):
    def connectionMade(self):
        self.transport.write(self.factory.message)

class TestProtocolFactory(Factory):
    message = ''

    def setMessage(self, msg):
        self.message = msg

def main():
    f = TestProtocolFactory()
    f.setMessage("my message")
    reactor.connectTCP(...)
    reactor.run()
What I really want to do is call self.transport.write(...) via the reactor (really, call TestProtocolFactory::setMessage() on-demand from another thread of execution), not just when the connection is made.
Depends. Here are some possibilities:
I'm assuming
Approach 1. You have a list of commands to send to the server, and for some reason can't send them all at once. In that case, send a new one as the previous answer returns:
class proto(parentProtocol):
    def stringReceived(self, data):
        self.handle_server_response(data)
        next_command = self.command_queue.pop()
        # do stuff
Approach 2. What you send to the server is based on what the server sends you:
class proto(parentProtocol):
    def stringReceived(self, data):
        if data == "this":
            self.sendString("that")
        elif data == "foo":
            self.sendString("bar")
        # and so on
Approach 3. You don't care what the server sends you; you just want to periodically send some commands:
class proto(parentProtocol):
    def callback(self):
        next_command = self.command_queue.pop()
        # do stuff

    def connectionMade(self):
        from twisted.internet import task
        self.task_id = task.LoopingCall(self.callback)
        self.task_id.start(1.0)
Approach 4: Your edit now mentions triggering from another thread. Feel free to check the twisted documentation to find out if proto.sendString is threadsafe. You may be able to call it directly, but I don't know. Approach 3 is threadsafe though. Just fill the queue (which is threadsafe) from another thread.
Basically you can store any amount of state in your protocol; it will stay around until you are done. Then you either send commands to the server in response to its messages to you, or you set up some scheduling to do your stuff. Or both.
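For the "another thread" case specifically, the standard thread-safe entry point into Twisted is reactor.callFromThread, which schedules the call on the reactor thread. A minimal sketch, assuming proto is your connected protocol instance:

from twisted.internet import reactor

# Safe to call from any non-reactor thread: the write happens on the reactor thread.
reactor.callFromThread(proto.sendString, b"my command")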
You may want to use a Service.
Services are pieces of functionality within a Twisted app which are started and stopped, and are nice abstractions for other parts of your code to interact with. For example, in this case you might have a SayStuffToServerService (I know, terrible name, but without knowing more about its job it was the best I could do here :) ) that exposed something like this:
class SayStuffToServerService:
    def __init__(self, host, port):
        # this is the host and port to connect to
        pass

    def sendToServer(self, whatToSend):
        # send some line to the remote server
        pass

    def startService(self):
        # call me before using the service. starts outgoing connection efforts.
        pass

    def stopService(self):
        # clean reactor shutdowns should call this method. stops outgoing
        # connection efforts.
        pass
(That might be all the interface you need, but it should be fairly clear where you can add things to this.)
The startService() and stopService() methods here are just what Twisted's Services expose. And helpfully, there is a premade Twisted Service which acts like a TCP client and takes care of all the reactor stuff for you. It's twisted.application.internet.TCPClient, which takes arguments for a remote host and port, along with a ProtocolFactory to take care of handling the actual connection attempt.
Here is the SayStuffToServerService, implemented as a subclass of TCPClient:
from twisted.application import internet

class SayStuffToServerService(internet.TCPClient):
    factoryclass = SayStuffToServerProtocolFactory

    def __init__(self, host, port):
        self.factory = self.factoryclass()
        internet.TCPClient.__init__(self, host, port, self.factory)

    def sendToServer(self, whatToSend):
        # we'll do stuff here
        pass
(See below for the SayStuffToServerProtocolFactory.)
Using this Service architecture is convenient in a lot of ways; you can group Services together in one container, so that they all get stopped and started as one when you have different parts of your app that you want active. It may make good sense to implement other parts of your app as separate Services. You can set Services as child services to application, the magic name that twistd looks for in order to know how to initialize, daemonize, and shut down your app. Actually, yes, let's add some code to do that now.
from twisted.application import service
...
application = service.Application('say-stuff')
sttss = SayStuffToServerService('localhost', 65432)
sttss.setServiceParent(service.IServiceCollection(application))
That's all. Now when you run this module under twistd (i.e., for debugging, twistd -noy saystuff.py), that application will be started under the right reactor, and it will in turn start the SayStuffToServerService, which will start a connection effort to localhost:65432, which will use the service's factory attribute to set up the connection and the Protocol. You don't need to call reactor.run() or attach things to the reactor yourself anymore.
So we haven't implemented SayStuffToServerProtocolFactory yet. Since it sounds like you would prefer that your client reconnect if it has lost the connection (so that callers of sendToServer can usually just assume that there's a working connection), I'm going to put this protocol factory on top of ReconnectingClientFactory.
from twisted.internet import protocol

class SayStuffToServerProtocolFactory(protocol.ReconnectingClientFactory):
    _my_live_proto = None
    protocol = SayStuffToServerProtocol
This is a pretty nice minimal definition, which will keep trying to make outgoing TCP connections to the host and port we specified, and instantiate a SayStuffToServerProtocol each time. When we fail to connect, this class will do nice, well-behaved exponential backoff so that your network doesn't get hammered (you can set a maximum wait time). It will be the responsibility of the Protocol to assign to _my_live_proto and call this factory's resetDelay() method, so that exponential backoff will continue to work as expected. And here is that Protocol now:
class SayStuffToServerProtocol(basic.LineReceiver):
    def connectionMade(self):
        # if there are things you need to do on connecting to ensure the
        # connection is "all right" (maybe authenticate?) then do that
        # before calling:
        self.factory.resetDelay()
        self.factory._my_live_proto = self

    def connectionLost(self, reason):
        self.factory._my_live_proto = None
        del self.factory

    def sayStuff(self, stuff):
        self.sendLine(stuff)

    def lineReceived(self, line):
        # do whatever you want to do with incoming lines. often it makes sense
        # to have a queue of Deferreds on a protocol instance like this, and
        # each incoming response gets sent to the next queued Deferred (which
        # may have been pushed on the queue after sending some outgoing
        # message in sayStuff(), or whatever).
        pass
This is implemented on top of twisted.protocols.basic.LineReceiver, but would work as well with any other sort of Protocol, in case your protocol isn't line-oriented.
The only thing left is hooking up the Service to the right Protocol instance. This is why the Factory keeps a _my_live_proto attribute, which should be set when a connection is successfully made, and cleared (set to None) when that connection is lost. Here's the new implementation of SayStuffToServerService.sendToServer:
class NotConnectedError(Exception):
    pass

class SayStuffToServerService(internet.TCPClient):
    ...
    def sendToServer(self, whatToSend):
        if self.factory._my_live_proto is None:
            # define here whatever behavior is appropriate when there is no
            # current connection (in case the client can't connect or
            # reconnect)
            raise NotConnectedError
        self.factory._my_live_proto.sayStuff(whatToSend)
And now to tie it all together in one place:
from twisted.application import internet, service
from twisted.internet import protocol
from twisted.protocols import basic

class SayStuffToServerProtocol(basic.LineReceiver):
    def connectionMade(self):
        # if there are things you need to do on connecting to ensure the
        # connection is "all right" (maybe authenticate?) then do that
        # before calling:
        self.factory.resetDelay()
        self.factory._my_live_proto = self

    def connectionLost(self, reason):
        self.factory._my_live_proto = None
        del self.factory

    def sayStuff(self, stuff):
        self.sendLine(stuff)

    def lineReceived(self, line):
        # do whatever you want to do with incoming lines. often it makes sense
        # to have a queue of Deferreds on a protocol instance like this, and
        # each incoming response gets sent to the next queued Deferred (which
        # may have been pushed on the queue after sending some outgoing
        # message in sayStuff(), or whatever).
        pass

class SayStuffToServerProtocolFactory(protocol.ReconnectingClientFactory):
    _my_live_proto = None
    protocol = SayStuffToServerProtocol

class NotConnectedError(Exception):
    pass

class SayStuffToServerService(internet.TCPClient):
    factoryclass = SayStuffToServerProtocolFactory

    def __init__(self, host, port):
        self.factory = self.factoryclass()
        internet.TCPClient.__init__(self, host, port, self.factory)

    def sendToServer(self, whatToSend):
        if self.factory._my_live_proto is None:
            # define here whatever behavior is appropriate when there is no
            # current connection (in case the client can't connect or
            # reconnect)
            raise NotConnectedError
        self.factory._my_live_proto.sayStuff(whatToSend)

application = service.Application('say-stuff')
sttss = SayStuffToServerService('localhost', 65432)
sttss.setServiceParent(service.IServiceCollection(application))
Hopefully that gives enough of a framework with which to start. There is sometimes a lot of plumbing to do to handle client disconnections just the way you want, or to handle out-of-order responses from the server, or handle various sorts of timeout, canceling pending requests, allowing multiple pooled connections, etc, etc, but this should help.
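As a rough illustration of the "queue of Deferreds" idea mentioned in the lineReceived comment (this is my sketch, not part of the answer above): each outgoing line pushes a Deferred, and each incoming line fires the oldest one.

from collections import deque

from twisted.internet import defer
from twisted.protocols import basic

class QueuedLineProtocol(basic.LineReceiver):
    def connectionMade(self):
        self._pending = deque()

    def sayStuff(self, stuff):
        d = defer.Deferred()
        self._pending.append(d)   # responses are assumed to arrive in send order
        self.sendLine(stuff)
        return d

    def lineReceived(self, line):
        if self._pending:
            self._pending.popleft().callback(line)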
The Twisted framework is event-based; by nature, its methods are all called asynchronously, and results are obtained via Deferred objects.
The framework's nature is appropriate for protocol development; you just have to shift your mindset away from traditional sequential programming. The Protocol class is like a finite state machine with events such as: connection made, connection lost, data received.
You can convert your client code into an FSM and then it will easily fit into the Protocol class.
Below is a rough example of what I want to express. It's a bit rough, but it's what I can provide now:
from twisted.internet.protocol import Protocol

# example states (values are illustrative)
I_AM_LIVING = 0
WAITING_DEAD = 1

class SyncTransport(Protocol):
    # protocol
    def dataReceived(self, data):
        print 'receive data', data

    def connectionMade(self):
        print 'i made a sync connection, wow'
        self.transport.write('x')
        self.state = I_AM_LIVING

    def connectionLost(self, reason):
        print 'i lost my sync connection, sigh'

    def send(self, data):
        if self.state == I_AM_LIVING:
            if data == 'x':
                self.transport.write('y')
            if data == 'Y':
                self.transport.write('z')
                self.state = WAITING_DEAD
        if self.state == WAITING_DEAD:
            self.transport.loseConnection()