Hello
I'm new to Twisted, but I have some questions after reading the manual:
1. How do I use different protocols with different reactors in one program? (For example, txNetTools has its own reactor, and the built-in IRC support in Twisted uses the reactor from twisted.internet.)
2. How do I start more than one client at the same time? (Many clients, each pinging a different remote host.) http://bazaar.launchpad.net/~oubiwann/txnet/trunk/view/head:/sandbox/ping.py
3. How do I pass data from one protocol to another (in the same program)? I want to use data from a database in a protocol (for example, fetch hosts from the database every 5 minutes and create ping clients for them).
My task is simple: create clients for several different protocols that connect to a large number of servers.
Well, for the third question at least, are you talking about using protocols of different classes or multiple protocol instances of the same class? Protocol instances can communicate with each other by having the factory that creates them store their shared data, like the following:
from twisted.internet.protocol import Protocol, Factory

class p(Protocol):
    factory = None
    ...

class f(Factory):
    protocol = p
    data = None

    def buildProtocol(self, addr):
        returnValue = p()
        returnValue.factory = self
        return returnValue
From there you can save data to self.factory.data from within a protocol instance, and any other protocol instance can access it. I hope that answered your question.
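To illustrate (a minimal sketch reusing the names above), a protocol instance might stash incoming data on the factory, where every other instance built by that factory can read it:

class p(Protocol):
    factory = None

    def dataReceived(self, data):
        # stored on the shared factory; any other instance of p built
        # by the same f can read it back as self.factory.data
        self.factory.data = data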
How do I use different protocols with different reactors in one program?
You don't. There is only one reactor per process, and it can handle as many connections as you want it to. The vast majority of libraries don't provide a reactor, and the reactor provided by txNetTools is optional. The only thing it provides is this method:
def listenICMP(self, port, protocol, interface="", maxPacketSize=8192):
    p = icmp.Port(port, protocol, interface, maxPacketSize, self)
    p.startListening()
    return p
If you want to use another reactor, then you can just instantiate an icmp.Port yourself.
How do I start more than one client at the same time?
The same way you start one, but repeated. For example, here are ten concurrent pingers (incorporating the answer to the first question):
for i in range(10):
    p = icmp.Port(0, Pinger(), reactor=reactor)
    p.startListening()
reactor.run()
chameco gives a fine answer to the last question.
Generally in Twisted (Python), you define some listeners, connections, or looping operations, add them to the reactor, and then call reactor.run(). Is there any way to add new connections from within other event handlers once the reactor is already running? Say I want to have a server, and this server then spawns other clients, each with its own dataReceived handling.
Thanks
You can create as many client connections as you'd like to a particular server. Disregarding code quality/design patterns, client connections can be made anywhere within your code.
from twisted.internet import protocol, reactor

class SomeClientFactory(protocol.ClientFactory):
    # minimal placeholder client factory for the outgoing connections
    protocol = protocol.Protocol

class SomeProtocol(protocol.Protocol):
    def dataReceived(self, data):
        # This is what I believe you're asking about:
        # new outgoing connections started from inside an event handler
        for x in range(5):
            reactor.connectTCP('localhost', 8000, SomeClientFactory())

class SomeServerFactory(protocol.Factory):
    def buildProtocol(self, addr):
        return SomeProtocol()

reactor.listenTCP(8000, SomeServerFactory())
reactor.run()
I'm using a SocketServer.ThreadingTCPServer to serve socket connections to clients. This provides an interface where users can connect, type commands and get responses. That part I have working well.
However, in some cases I need a separate thread to broadcast a message to all connected clients. I can't figure out how to do this because there is no way to pass arguments to the class instantiated by ThreadingTCPServer. I don't know how to gather a list of socket connections that have been created.
Consider the example here. How could I access the socket created in the MyTCPHandler class from the __main__ thread?
You should not write to the same TCP socket from multiple threads. The writes may be interleaved if you do ("Hello" and "World" may become "HelWloorld").
That being said, you can create a global list to contain references to all the server objects (which would register themselves in __init__()). The question is, what to do with this list? One idea would be to use a queue or pipe to send the broadcast data to each server object, and have the server objects look in that queue for the "extra" broadcast data to send each time their handle() method is invoked.
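A rough sketch of that registry-plus-queue idea (names like handlers, outbox, and BroadcastHandler are mine, not part of SocketServer; treat this as an illustration rather than a drop-in solution):

import Queue
import socket
import threading
import SocketServer

handlers = []                      # registry of live handler objects
handlers_lock = threading.Lock()

def broadcast(message):
    # called from any thread; each handler sends from its own thread,
    # so no two threads ever write to the same socket
    with handlers_lock:
        for h in handlers:
            h.outbox.put(message)

class BroadcastHandler(SocketServer.BaseRequestHandler):
    def setup(self):
        self.outbox = Queue.Queue()
        with handlers_lock:
            handlers.append(self)

    def handle(self):
        self.request.settimeout(1.0)          # wake up periodically to check the outbox
        while True:
            try:
                while True:                   # flush any queued broadcast data
                    self.request.sendall(self.outbox.get_nowait())
            except Queue.Empty:
                pass
            try:
                data = self.request.recv(1024)
                if not data:
                    break                     # client closed the connection
                # handle the client's command here
            except socket.timeout:
                continue                      # nothing received; check the outbox again

    def finish(self):
        with handlers_lock:
            handlers.remove(self)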
Alternatively, you could use the Twisted networking library, which is more flexible and will let you avoid threading altogether - usually a superior alternative.
Here is what I've come up with. It isn't thread safe yet, but that shouldn't be a hard fix:
When the socket is accepted:
if not hasattr(self.server, 'socketlist'):
    self.server.socketlist = dict()
thread_id = threading.current_thread().ident
self.server.socketlist[thread_id] = self.request
When the socket closes:
del self.server.socketlist[thread_id]
When I want to write to all sockets:
def broadcast(self, message):
    if hasattr(self._server, 'socketlist'):
        for socket in self._server.socketlist.values():
            socket.sendall(message + "\r\n")
It seems to be working well and isn't as messy as I thought it might end up being.
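To make it thread safe, one option (just a sketch; socketlist_lock is my own name, not part of SocketServer) is to guard every access to socketlist with a single lock:

import threading

socketlist_lock = threading.Lock()

# when the socket is accepted
with socketlist_lock:
    if not hasattr(self.server, 'socketlist'):
        self.server.socketlist = dict()
    self.server.socketlist[threading.current_thread().ident] = self.request

# when the socket closes
with socketlist_lock:
    del self.server.socketlist[threading.current_thread().ident]

# when broadcasting: copy the values under the lock, then send outside it
def broadcast(self, message):
    with socketlist_lock:
        targets = list(self._server.socketlist.values())
    for sock in targets:
        sock.sendall(message + "\r\n")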
I am writing a client that needs to establish several independent communication channels with a server, each on its own port, through a series of sent and received messages. I know how to do this with plain sockets using send and recv, by giving each communication channel its own socket and doing send and recv on that socket. I need to make this work in Twisted, and found potentially useful interfaces including Factory and ProcessProtocol. However, the Protocol interfaces do not provide a method to send messages. Is ProcessProtocol a good choice for my task, and how do I make ProcessProtocol send messages?
In case you don't know about it, I'd like to give a shout out to the excellent Twisted finger tutorial that goes through the library at a good pace but with enough detail that you know what's going on.
To directly answer your question, though, I'd say you're on the right track with Protocol and (Client)Factory. I think the cleanest way to do what you're looking for (assuming you need to connect to different ports because they're outputs for different data) would be to make a factory/protocol pair for each port you want to connect to/handle, and then use an external class to handle the application logic aggregating all of them. Generally you wouldn't want your application logic mixed deeply with your networking logic.
A simple example: (note the use of self.transport.write to send data)
from twisted.internet.protocol import Protocol, ClientFactory
from twisted.internet import reactor
from sys import stdout
from foobar_application import CustomAppObject

class FooProtocol(Protocol):
    def connectionMade(self):
        # Use self.transport.write to send data to the server
        self.transport.write('Hello server this is the Foo protocol.')
        self.factory.do_app_logic()

class FooFactory(ClientFactory):
    protocol = FooProtocol

    def __init__(self, app_object=None):
        self.app = app_object

    def do_app_logic(self):
        self.app.do_something()

class BarProtocol(Protocol):
    def dataReceived(self, data):
        stdout.write('Received data from server using the Bar protocol.')
        self.factory.do_fancy_logic(data)

class BarFactory(ClientFactory):
    protocol = BarProtocol

    def __init__(self, app_object=None):
        self.app = app_object

    def do_fancy_logic(self, data):
        self.app.do_something_else(data)

logic_obj = CustomAppObject()
# These are client factories, so use connectTCP (the host is just a placeholder).
reactor.connectTCP('localhost', 8888, FooFactory(app_object=logic_obj))
reactor.connectTCP('localhost', 9999, BarFactory(app_object=logic_obj))
reactor.run()
You might also want to look at the 'Writing Clients' docs on the Twisted site.
I want to add a timeout to individual connections within my request handler for a server using the SocketServer module.
Let me start by saying this is the first time I'm attempting to do network programming using Python. I've sub-classed SocketServer.BaseRequestHandler and SocketServer.ThreadingTCPServer & SocketServer.TCPServer and managed to create two classes with some basic threaded TCP functionality.
However, I would like my incoming connections to time out. Trying to override any of the built-in SocketServer timeout values and methods does not work, as the documentation says those work only with forking servers. I have managed to create a timer thread that fires after X seconds, but because of the blocking recv call within the handler thread this is of no use: I would be forced to kill the thread, and that is something I really want to avoid.
So it is my understanding that I need an asyncore implementation, where I get notified and read a certain amount of data. In the event that no data is sent over a period of, let's say, 5 seconds, I want to close that connection (I know how to do that cleanly).
I have found a few examples of using asyncore with sockets, but none using SocketServer. So, how can I combine asyncore with ThreadingTCPServer?
Is it possible?
Has anyone done it?
You can also set a timeout on the recv call, like this:
sock.settimeout(1.0)
Since you use SocketServer, you will have to find the underlying socket somewhere in the SocketServer. Please note that SocketServer will create the socket for you, so there is no need to do that yourself.
You will probably have defined a RequestHandler to go with your SocketServer. It should look something like this:
import socket
import SocketServer

class RequestHandler(SocketServer.BaseRequestHandler):
    def setup(self):
        # the socket is called request in the request handler
        self.request.settimeout(1.0)

    def handle(self):
        while True:
            try:
                data = self.request.recv(1024)
                if not data:
                    break  # connection is closed
                else:
                    pass   # do your thing
            except socket.timeout:
                pass       # handle timeout
Greetings, Forum.
I'm working on a program in Python that uses Twisted to manage networking. The basis of this program is a TCP service that is to listen for connections on multiple ports. However, instead of using one Twisted factory to handle a protocol object for each port, I am trying to use a separate factory for each port. The reason for this is to force a separation among the groups of clients connecting to the different ports.
Unfortunately, it appears that this architecture isn't quite working: clients that connect to one port appear to be available to all the factories (e.g., the protocol class used by each factory includes a 'self.factory.clients.append(self)' statement; instead of adding a given client to just the factory for a particular port, the client is added to all factories), and whenever I shut down service on one port the listeners on all ports also stop.
I've been working with Twisted for a short while, and fear I simply don't fully understand how its factory classes are managed.
My question is: is it simply not possible to have multiple, simultaneous instances of the same factory and same protocol in use across different ports (without these instances stepping on each other's toes)?
You can definitely do what you want -- it's hard to tell what you're doing wrong without seeing your code, but I'd bet you have clients = [] in your factory class instead of
self.clients = []
in your factory class's __init__ method.
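In other words (a minimal sketch with placeholder class names), the clients list has to be created per instance in __init__ rather than shared as a class attribute:

from twisted.internet.protocol import Factory

# Buggy: a class attribute, so one list is shared by every factory instance.
class SharedClientsFactory(Factory):
    clients = []

# Correct: each factory instance gets its own list.
class PerPortFactory(Factory):
    def __init__(self):
        self.clients = []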