I am writing a client that needs to establish several independent communication channels with a server, each on its own port, by sending and receiving a series of messages. I know how to do this with plain sockets: give each channel its own socket and call send and recv on it. I need to make this work in Twisted, and found potentially useful interfaces including Factory and ProcessProtocol. However, the Protocol interfaces do not provide a method to send messages. Is ProcessProtocol a good choice for my task, and how do I make it send messages?
In case you don't know about it, I'd like to give a shout out to the excellent Twisted finger tutorial that goes through the library at a good pace but with enough detail that you know what's going on.
To directly answer your question, though, I'd say you're on the right track with Protocol and (Client)Factory. I think the cleanest way to do what you're looking for (assuming you need to connect to different ports because they're outputs for different data) would be to make a factory/protocol pair for each port you want to connect to/handle, and then use an external class to handle the application logic aggregating all of them. Generally you wouldn't want your application logic mixed deeply with your networking logic.
A simple example: (note the use of self.transport.write to send data)
from twisted.internet import reactor
from twisted.internet.protocol import Protocol, ClientFactory
from sys import stdout

from foobar_application import CustomAppObject


class FooProtocol(Protocol):
    def connectionMade(self):
        # Use self.transport.write to send data to the server
        self.transport.write('Hello server this is the Foo protocol.')
        self.factory.do_app_logic()


class FooFactory(ClientFactory):
    protocol = FooProtocol

    def __init__(self, app_object=None):
        self.app = app_object

    def do_app_logic(self):
        self.app.do_something()


class BarProtocol(Protocol):
    def dataReceived(self, data):
        stdout.write('Received data from server using the Bar protocol.')
        self.factory.do_fancy_logic(data)


class BarFactory(ClientFactory):
    protocol = BarProtocol

    def __init__(self, app_object=None):
        self.app = app_object

    def do_fancy_logic(self, data):
        self.app.do_something_else(data)


logic_obj = CustomAppObject()

# Connect to the two services (replace 'localhost' with your server's host).
reactor.connectTCP('localhost', 8888, FooFactory(app_object=logic_obj))
reactor.connectTCP('localhost', 9999, BarFactory(app_object=logic_obj))
reactor.run()
You might also want to look at the 'Writing Clients' docs on the Twisted site.
I'm using a SocketServer.ThreadingTCPServer to serve socket connections to clients. This provides an interface where users can connect, type commands and get responses. That part I have working well.
However, in some cases I need a separate thread to broadcast a message to all connected clients. I can't figure out how to do this because there is no way to pass arguments to the class instantiated by ThreadingTCPServer. I don't know how to gather a list of socket connections that have been created.
Consider the example here. How could I access the socket created in the MyTCPHandler class from the __main__ thread?
You should not write to the same TCP socket from multiple threads. The writes may be interleaved if you do ("Hello" and "World" may become "HelWloorld").
That being said, you can create a global list to contain references to all the server objects (who would register themselves in __init__()). The question is, what to do with this list? One idea would be to use a queue or pipe to send the broadcast data to each server object, and have the server objects look in that queue for the "extra" broadcast data to send each time their handle() method is invoked.
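A rough sketch of that queue idea might look like this (untested; the handler layout and names are illustrative, not taken from your code):
import queue
import socketserver

broadcast_queues = []  # one Queue per connected client, shared across threads


def broadcast(message):
    # called from your separate broadcasting thread; pass bytes, e.g. b"hello\r\n"
    for q in list(broadcast_queues):
        q.put(message)


class MyTCPHandler(socketserver.BaseRequestHandler):
    def setup(self):
        self.broadcast_queue = queue.Queue()
        broadcast_queues.append(self.broadcast_queue)

    def finish(self):
        broadcast_queues.remove(self.broadcast_queue)

    def handle(self):
        # ... the normal command/response loop goes here; after handling each
        # command, drain anything other threads have queued for this client ...
        try:
            while True:
                self.request.sendall(self.broadcast_queue.get_nowait())
        except queue.Empty:
            pass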
Alternatively, you could use the Twisted networking library, which is more flexible and will let you avoid threading altogether - usually a superior alternative.
Here is what I've come up with. It isn't thread safe yet, but that shouldn't be a hard fix:
When the socket is accepted:
if not hasattr(self.server, 'socketlist'):
    self.server.socketlist = dict()
thread_id = threading.current_thread().ident
self.server.socketlist[thread_id] = self.request
When the socket closes:
del self.server.socketlist[thread_id]
When I want to write to all sockets:
def broadcast(self, message):
    if hasattr(self._server, 'socketlist'):
        for socket in self._server.socketlist.values():
            socket.sendall(message + "\r\n")
It seems to be working well and isn't as messy as I thought it might end up being.
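For the thread-safety part, a minimal fix would probably be to guard the dict with a single lock shared by the handlers and the broadcaster, along these lines (a sketch mirroring the snippets above; names are illustrative):
import socketserver
import threading

socketlist_lock = threading.Lock()  # guards server.socketlist everywhere it is touched


class MyTCPHandler(socketserver.BaseRequestHandler):
    def setup(self):
        # same registration as above, but under the lock
        with socketlist_lock:
            if not hasattr(self.server, 'socketlist'):
                self.server.socketlist = dict()
            self.server.socketlist[threading.current_thread().ident] = self.request

    def finish(self):
        with socketlist_lock:
            self.server.socketlist.pop(threading.current_thread().ident, None)


def broadcast(server, message):
    # hold the lock while iterating so handlers can't mutate the dict mid-loop
    with socketlist_lock:
        for sock in getattr(server, 'socketlist', {}).values():
            sock.sendall((message + "\r\n").encode())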
I've been working on a project that involves sending information to a public server (to demonstrate how key-exchange schemes work) and then sending it on to a specific client. There are only two clients.
I'm hoping to get pushed in the right direction on how to get information from client(1) to the server, then have the server redirect that information to client(2). I've messed with the code somewhat, getting comfortable with how to send and receive information from the server, but I have no idea (~2 hours of research so far) how to differentiate between clients and send information to a specific client.
My current server code (pretty much unchanged from the python3 docs):
import socketserver


class MyTCPHandler(socketserver.BaseRequestHandler):
    """
    The RequestHandler class for our server.

    It is instantiated once per connection to the server, and must
    override the handle() method to implement communication to the
    client.
    """

    def handle(self):
        # self.request is the TCP socket connected to the client
        self.data = self.request.recv(1024).strip()
        print("{} wrote:".format(self.client_address[0]))
        print(self.data)
        # just send back the same data, but upper-cased
        self.request.sendall(self.data.upper())


if __name__ == "__main__":
    HOST, PORT = "localhost", 9999

    # Create the server, binding to localhost on port 9999
    server = socketserver.TCPServer((HOST, PORT), MyTCPHandler)

    # Activate the server; this will keep running until you
    # interrupt the program with Ctrl-C
    server.serve_forever()
My client code (pretty much unchanged from the python3 docs):
import socket
import time

data = "matt is ok"


def contactserver(data):
    HOST, PORT = "localhost", 9999

    # Create a socket (SOCK_STREAM means a TCP socket)
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    # Connect to server and send data
    sock.connect((HOST, PORT))
    sock.sendall(bytes(data, "utf-8"))

    # Receive data from the server and shut down
    received = str(sock.recv(1024), "utf-8")

    print("Sent: {}".format(data))
    print("Received: {}".format(received))
    return format(received)


while True:
    k = contactserver('banana')
    time.sleep(1)
    print(k)
First, a base socketserver.TCPServer can't even talk to two clients at the same time. As the docs explain:
These four classes process requests synchronously; each request must be completed before the next request can be started.
As the same paragraph tells you, you can solve that problem by using a forking or threading mix-in. That's pretty easy.
But there's a bigger problem. A threaded socketserver server creates a separate, completely independent, object for each connected client, and has no means of communicating between them, or even letting them find out about each other. So, what can you do?
You can always build it yourself. Put some kind of shared data somewhere, and some kind of synchronization on it, and all of the threads can talk to each other the same way any threads can, socketserver or otherwise.
For your design, a queue has all the magic built in for everything we need: client 1 can put a message on the queue (whether client 2 has shown up yet or not), and client 2 can get a message off the same queue (automatically waiting around if the message isn't there yet), and it's all automatically synchronized.
The big question is: how does the server know who's client 1 and who's client 2? Unless you want to switch based on address and port, or add some kind of "login" mechanism, the only rule I can think of is that whoever connects first is client 1, whoever connects second is client 2, and anyone who connects after that, who cares, they don't belong here. So, we can use a simple shared flag with a Lock on it.
Putting it all together:
import queue
import socketserver
import threading


class MyTCPHandler(socketserver.BaseRequestHandler):
    q = queue.Queue()
    got_first = False
    got_first_lock = threading.Lock()

    def handle(self):
        with MyTCPHandler.got_first_lock:
            if MyTCPHandler.got_first:
                first = False
            else:
                first = True
                MyTCPHandler.got_first = True
        if first:
            self.data = self.request.recv(1024).strip()
            print("{} wrote:".format(self.client_address[0]))
            print(self.data)
            # just send back the same data, but upper-cased
            self.request.sendall(self.data.upper())
            # and also queue it up for client 2
            MyTCPHandler.q.put(self.data)
        else:
            # get the message off the queue, waiting if necessary
            self.data = MyTCPHandler.q.get()
            self.request.sendall(self.data)


# The threading mix-in goes on the server class, not the handler:
class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    pass
If you want to build a more complicated chat server, where everyone talks to everyone… well, that gets a bit more complicated, and you're stretching socketserver even farther beyond its intended limits.
I would suggest either (a) dropping to a lower level and writing a threaded or multiplexing server manually, or (b) going to a higher-level, more-powerful framework that can more easily handle interdependent clients.
The stdlib comes with a few alternatives for writing servers, but all of them suck except for asyncio—which is great, but unfortunately brand new (it requires 3.4, which is still in beta, or can be installed as a back-port for 3.3). If you don't want to skate on the bleeding edge, there are some great third-party choices like twisted or gevent. All of these options have a higher learning curve than socketserver, but that's only to be expected from something much more flexible and powerful.
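For what it's worth, here is roughly what the same first-client/second-client relay looks like with asyncio's stream API (written against the present-day API, which has grown since this was written; names and buffer sizes are just illustrative):
import asyncio


async def main():
    relay_queue = asyncio.Queue()   # carries client 1's message to client 2
    arrivals = 0                    # first connection is "client 1"

    async def handle(reader, writer):
        nonlocal arrivals
        arrivals += 1
        if arrivals == 1:
            data = await reader.read(1024)
            writer.write(data.upper())      # echo back upper-cased, as before
            await relay_queue.put(data)     # and hold it for client 2
        else:
            data = await relay_queue.get()  # waits until client 1 has sent
            writer.write(data)
        await writer.drain()
        writer.close()

    server = await asyncio.start_server(handle, "localhost", 9999)
    async with server:
        await server.serve_forever()


asyncio.run(main())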
I've been trying to wrap my mind around how to get Twisted to perform, for lack of a better word, "interactive" client/server behavior.
I managed to put together a pair of Protocol and ClientFactory classes that do connect to a service, and perform an immediate query/response (see: connectionMade -> self.queryStatus). This succeeds as expected and prints the server's response from the Factory class.
My problem now is that I'll have outside events that must cause data to be sent, while always listening for potential incoming data. But once the reactor.run() loop is going, I'm not sure how the rest of my application is meant to trigger a data send.
I've tried a few different approaches since, but this is the simplest approach that did handle the recv part as described:
from twisted.internet import reactor
from twisted.internet.protocol import ClientFactory
from twisted.protocols.basic import LineReceiver


class myListenerProtocol(LineReceiver):
    delimiter = '\n'

    def connectionMade(self):
        print("Connected to: %s" % self.transport.getPeer())
        self.queryStatus(1)

    def dataReceived(self, data):
        print("Receiving Data from %s" % self.transport.getPeer())
        ...
        self.commandReceived(self.myData)

    def commandReceived(self, myData):
        self.factory.commandReceived(myData)

    def connectionLost(self, reason):
        print("Disconnected.")

    def queryStatus(self, CommandValue):
        ...
        strSend = CommandValue  # or some such
        self.transport.write(strSend)


class mySocketFactory(ClientFactory):
    protocol = myListenerProtocol

    def __init__(self):
        pass

    def buildProtocol(self, address):
        proto = ClientFactory.buildProtocol(self, address)
        return proto

    def commandReceived(self, myData):
        print(myData)
        reactor.stop()  # It won't normally stop after recv

    def clientConnectionFailed(self, connector, reason):
        print("Connection failed.")
        reactor.stop()


def main():
    f = mySocketFactory()
    reactor.connectTCP("10.10.10.1", 1234, f)
    reactor.run()
I imagine this is pretty straight-forward, but countless hours into numerous examples and documentation have left me without a good understanding of how I'm meant to deal with this scenario.
My problem now is that I'll have outside events that must cause data to be sent, while always listening for potential incoming data. But once the reactor.run() loop is going, I'm not sure how the rest of my application is meant to trigger a data send.
"Outside events"? Like what? Data arriving on a connection? Great, having the reactor running means you'll actually be able to handle that data.
Or maybe someone is clicking a button in a GUI? Try one of the GUI integration reactors - again, you can't handle those events until you have a reactor running.
You're probably getting stuck because you think your main function should do reactor.run() and then go on to do other things. This isn't how it works. When you write an event-driven program, you define all of your event sources and then let the event loop call your handlers when events arrive on those sources.
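For example, if the "outside event" is simply time passing, you register a timer with the reactor before starting it; the same shape works for GUI callbacks or other connections. This is only a sketch: the factory subclass, the stashed connection variable, and the message are made up for illustration, and mySocketFactory is the factory from the code above.
from twisted.internet import reactor, task

connection = None  # filled in by the factory once the client connects


class TrackingFactory(mySocketFactory):
    def buildProtocol(self, address):
        global connection
        connection = mySocketFactory.buildProtocol(self, address)
        return connection


def send_query():
    # this runs inside the reactor loop, so writing here is safe
    if connection is not None:
        connection.transport.write("status?\n")


task.LoopingCall(send_query).start(5.0)   # the "outside event": every 5 seconds
reactor.connectTCP("10.10.10.1", 1234, TrackingFactory())
reactor.run()                             # the loop now drives everything above
If the outside events originate in another thread (say, a blocking GUI or CLI loop), hand them to the reactor with reactor.callFromThread rather than touching the transport from that thread.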
Well, there are many approaches to that, and the best one really depends on the context of your application, so I won't detail one particular way of doing it here, but rather link you to something I read recently on Hacker News:
http://www.devmusings.com/blog/2013/05/23/python-concurrency/
and a good use-case example, though it may not apply to what you're working on (or you may have read it already):
http://eflorenzano.com/blog/2008/11/17/writing-markov-chain-irc-bot-twisted-and-python/
BTW, you may also want to have a look at gevent or tornado, which are good at handling this kind of thing.
If your other "events" come from a GUI toolkit (like GTK or Qt), be really careful of the GIL; and even if you just want command-line events, you'll need threads and will still have to be careful about it.
Finally, if you want more interaction, you may as well write different kinds of "peers" for your server that interact with the different use cases you're working on (one client that connects to a GUI, another with a CLI, another with a database, another with a SaaS API, etc.).
In other words, if your design is not working, try changing your perspective!
Hello
I'm new to Twisted, but I have some questions after reading the manual:
1. How do I use different protocols with different reactors in one program? (For example, txNetTools has its own reactor, and the built-in IRC support in Twisted uses the reactor from twisted.internet.)
2. How do I start more than one client at a time? (many clients pinging other remote hosts) http://bazaar.launchpad.net/~oubiwann/txnet/trunk/view/head:/sandbox/ping.py
3. How do I pass data from one protocol to another (in one program)? I want to use data from a database in a protocol. (For example, every 5 minutes, get hosts from the database and create ping clients.)
My task is simple: create clients for several different protocols, connecting to a large number of servers.
Well, for the third question at least, are you talking about using protocols of different classes or multiple protocol instances of the same class? Protocol instances can communicate with each other by having the factory that creates them store their data, like the following:
class p(Protocol):
    factory = None
    ...


class f(Factory):
    protocol = p
    data = None

    def buildProtocol(self, addr):
        returnValue = p()
        returnValue.factory = self
        return returnValue
From there you can save data to self.factory.data from within a protocol instance, and any other protocol instance can access it. I hope that answered your question.
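For example, a fleshed-out version of the protocol above might look like this (the method bodies are just illustrative, not part of the pattern itself):
from twisted.internet.protocol import Protocol


class p(Protocol):
    factory = None

    def dataReceived(self, data):
        # visible to every other protocol instance built by the same factory
        self.factory.data = data

    def connectionMade(self):
        # read whatever some other connection stored earlier
        if self.factory.data is not None:
            self.transport.write(self.factory.data)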
How do I use different protocols with different reactors in one program?
You don't. There is only one reactor per process, and it can handle as many connections as you want it to. The vast majority of libraries don't provide a reactor, and the reactor provided by txNetTools is optional. The only thing it provides is this method:
def listenICMP(self, port, protocol, interface="", maxPacketSize=8192):
    p = icmp.Port(port, protocol, interface, maxPacketSize, self)
    p.startListening()
    return p
If you want to use another reactor, then you can just instantiate an icmp.Port yourself.
How do I start more than one client at a time?
The same way you start one, but repeated. For example, here are ten concurrent pingers (incorporating the answer to the first question):
for i in range(10):
    p = icmp.Port(0, Pinger(), reactor=reactor)
    p.startListening()

reactor.run()
chameco gives a fine answer to the last question.
I am currently working on exposing data from a legacy system over the web. I have a (legacy) server application that sends and receives data over UDP. The software uses UDP to send sequential updates to a given set of variables in (near) real-time (updates every 5-10 ms). Thus, I do not need to capture all UDP data -- it is sufficient that the latest update is retrieved.
In order to expose this data over the web, I am considering building a lightweight web server that reads/write UDP data and exposes this data over HTTP.
As I am experienced with Python, I am considering using it.
The question is the following: how can I (continuously) read data from UDP and send snapshots of it over TCP/HTTP on-demand with Python? So basically, I am trying to build a kind of "UDP2HTTP" adapter to interface with the legacy app so that I wouldn't need to touch the legacy code.
A solution that is WSGI compliant would be much preferred. Of course any tips are very welcome and MUCH appreciated!
Twisted would be very suitable here. It supports many protocols (UDP, HTTP), and its asynchronous nature makes it possible to stream UDP data directly to HTTP without shooting yourself in the foot with (blocking) threading code. It also supports WSGI.
Here's a quick "proof of concept" app using the Twisted framework. This assumes that the legacy UDP service is listening on localhost:8000, will start sending UDP data in response to a datagram containing "Send me data", and that the data is three 32-bit integers. Additionally, it will respond to an HTTP GET / on port 2080.
You could start this with twistd -noy example.py:
example.py
from twisted.internet import protocol, defer
from twisted.application import service
from twisted.python import log
from twisted.web import resource, server as webserver
import struct


class legacyProtocol(protocol.DatagramProtocol):
    def startProtocol(self):
        self.transport.connect(self.service.legacyHost, self.service.legacyPort)
        self.sendMessage("Send me data")

    def stopProtocol(self):
        # Assume the transport is closed, do any tidying that you need to.
        return

    def datagramReceived(self, datagram, addr):
        # Inspect the datagram payload, do sanity checking.
        try:
            val1, val2, val3 = struct.unpack("!iii", datagram)
        except struct.error, err:
            # Problem unpacking data: log and ignore
            log.err()
            return
        self.service.update_data(val1, val2, val3)

    def sendMessage(self, message):
        self.transport.write(message)


class legacyValues(resource.Resource):
    def __init__(self, service):
        resource.Resource.__init__(self)
        self.service = service
        self.putChild("", self)

    def render_GET(self, request):
        data = "\n".join(["<li>%s</li>" % x for x in self.service.get_data()])
        return """<html><head><title>Legacy Data</title>
<body><h1>Data</h1><ul>
%s
</ul></body></html>""" % (data,)


class protocolGatewayService(service.Service):
    def __init__(self, legacyHost, legacyPort):
        self.legacyHost = legacyHost
        self.legacyPort = legacyPort
        self.udpListeningPort = None
        self.httpListeningPort = None
        self.lproto = None
        self.reactor = None
        self.data = [1, 2, 3]

    def startService(self):
        # called by application handling
        if not self.reactor:
            from twisted.internet import reactor
            self.reactor = reactor
        self.reactor.callWhenRunning(self.startStuff)

    def stopService(self):
        # called by application handling
        defers = []
        if self.udpListeningPort:
            defers.append(defer.maybeDeferred(self.udpListeningPort.loseConnection))
        if self.httpListeningPort:
            defers.append(defer.maybeDeferred(self.httpListeningPort.stopListening))
        return defer.DeferredList(defers)

    def startStuff(self):
        # UDP legacy stuff
        proto = legacyProtocol()
        proto.service = self
        self.udpListeningPort = self.reactor.listenUDP(0, proto)
        # Website
        factory = webserver.Site(legacyValues(self))
        self.httpListeningPort = self.reactor.listenTCP(2080, factory)

    def update_data(self, *args):
        self.data[:] = args

    def get_data(self):
        return self.data


application = service.Application('LegacyGateway')
services = service.IServiceCollection(application)
s = protocolGatewayService('127.0.0.1', 8000)
s.setServiceParent(services)
Afterthought
This isn't a WSGI design. The idea would be to run this program daemonized, with its HTTP port bound to a local IP, and have Apache or a similar web server proxy requests to it. It could be refactored for WSGI, but it was quicker to knock up this way and easier to debug.
The software uses UDP to send sequential updates to a given set of variables in (near) real-time (updates every 5-10 ms). Thus, I do not need to capture all UDP data -- it is sufficient that the latest update is retrieved.
What you must do is this.
Step 1.
Build a Python app that collects the UDP data and caches it into a file. Create the file using XML, CSV or JSON notation.
This runs independently as some kind of daemon. This is your listener or collector.
Write the file to a directory from which it can be trivially downloaded by Apache or some other web server. Choose names and directory paths wisely and you're done.
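A minimal sketch of such a collector, assuming the three-integer datagrams from the earlier example and a made-up port and output path:
import json
import socket
import struct

UDP_PORT = 8000                                # whatever port the legacy app sends to
SNAPSHOT_PATH = "/var/www/data/snapshot.json"  # somewhere your web server can see

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", UDP_PORT))

while True:
    datagram, addr = sock.recvfrom(8192)
    try:
        val1, val2, val3 = struct.unpack("!iii", datagram)
    except struct.error:
        continue                               # ignore malformed packets
    # only the latest update matters, so just overwrite the snapshot
    with open(SNAPSHOT_PATH, "w") as f:
        json.dump({"val1": val1, "val2": val2, "val3": val3}, f)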
Done.
If you want fancier results, you can do more. You don't need to, since you're already done.
Step 2.
Build a web application that allows someone to request this data being accumulated by the UDP listener or collector.
Use a web framework like Django for this. Write as little as possible. Django can serve flat files created by your listener.
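A view that just re-serves the collector's snapshot could be as small as this (assuming the JSON file from the Step 1 sketch; URL wiring omitted):
import json

from django.http import JsonResponse


def latest(request):
    # re-serve whatever the collector last wrote
    with open("/var/www/data/snapshot.json") as f:
        return JsonResponse(json.load(f))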
You're done. Again.
Some folks think relational databases are important. If so, you can do this. Even though you're already done.
Step 3.
Modify your data collection to create a database that the Django ORM can query. This requires some learning and some adjusting to get a tidy, simple ORM model.
Then write your final Django application to serve the UDP data being collected by your listener and loaded into your Django database.
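The ORM model itself can stay small; the field names below are assumptions based on the three-integer example above:
from django.db import models


class Reading(models.Model):
    val1 = models.IntegerField()
    val2 = models.IntegerField()
    val3 = models.IntegerField()
    received_at = models.DateTimeField(auto_now_add=True)
The collector would then insert a Reading per update (or keep updating a single row), and the Django view would simply query the latest one.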