Twisted generating new clients for reactor - python

Generally in Twisted (Python), you define some listeners, connections, or looping operations, add them to the reactor, and then call reactor.run(). Is there any way to add new connections from within other event handlers? Say I want to have a server, and this server spawns other clients, each with its own data-received handling.
Thanks

You can create as many client connections as you'd like to a particular server. Disregarding code quality/design patterns, client connections can be made anywhere within your code.
from twisted.internet import protocol, reactor

class SomeClientFactory(protocol.ClientFactory):
    # Minimal client-side factory so the example runs; its protocol does nothing special.
    protocol = protocol.Protocol

class SomeProtocol(protocol.Protocol):
    def dataReceived(self, data):
        # This is what I believe you're asking about:
        # spawning new client connections from inside another protocol's callback.
        for x in range(5):
            reactor.connectTCP('localhost', 8000, SomeClientFactory())

class SomeServerFactory(protocol.Factory):
    def buildProtocol(self, addr):
        return SomeProtocol()

reactor.listenTCP(8000, SomeServerFactory())
reactor.run()

Related

Accessing Sockets with Python SocketServer.ThreadingTCPServer

I'm using a SocketServer.ThreadingTCPServer to serve socket connections to clients. This provides an interface where users can connect, type commands and get responses. That part I have working well.
However, in some cases I need a separate thread to broadcast a message to all connected clients. I can't figure out how to do this because there is no way to pass arguments to the class instantiated by ThreadingTCPServer. I don't know how to gather a list of socket connections that have been created.
Consider the example here. How could I access the socket created in the MyTCPHandler class from the __main__ thread?
You should not write to the same TCP socket from multiple threads. The writes may be interleaved if you do ("Hello" and "World" may become "HelWloorld").
That being said, you can create a global list to contain references to all the server objects (which would register themselves in __init__()). The question is, what to do with this list? One idea would be to use a queue or pipe to send the broadcast data to each server object, and have the server objects look in that queue for the "extra" broadcast data to send each time their handle() method is invoked.
Alternatively, you could use the Twisted networking library, which is more flexible and will let you avoid threading altogether - usually a superior alternative.
Here is what I've come up with. It isn't thread safe yet, but that shouldn't be a hard fix:
When the socket is accepted:
if not hasattr(self.server, 'socketlist'):
    self.server.socketlist = dict()
thread_id = threading.current_thread().ident
self.server.socketlist[thread_id] = self.request
When the socket closes:
del self.server.socketlist[thread_id]
When I want to write to all sockets:
def broadcast(self, message):
    if hasattr(self._server, 'socketlist'):
        for socket in self._server.socketlist.values():
            socket.sendall(message + "\r\n")
It seems to be working well and isn't as messy as I thought it might end up being.
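For reference, here is a minimal sketch of how those pieces might fit together in one handler, with a lock added for thread safety; the BroadcastHandler name, the module-level lock, and the standalone broadcast() function are my own placeholders, not part of the code above.
import threading
import SocketServer

socketlist_lock = threading.Lock()

class BroadcastHandler(SocketServer.BaseRequestHandler):
    def setup(self):
        # Register this connection's socket under the current thread id.
        with socketlist_lock:
            if not hasattr(self.server, 'socketlist'):
                self.server.socketlist = dict()
            self.server.socketlist[threading.current_thread().ident] = self.request

    def finish(self):
        # Unregister the socket when the connection goes away.
        with socketlist_lock:
            self.server.socketlist.pop(threading.current_thread().ident, None)

def broadcast(server, message):
    # Called from any thread that holds a reference to the server instance.
    with socketlist_lock:
        for sock in getattr(server, 'socketlist', {}).values():
            sock.sendall(message + "\r\n")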

Application of Twisted Python in TCP/IP connections?

I am writing a client that needs to establish several independent communication channels with a server, each on its own unique port, through a series of sent and received messages. I know how to do this using socket send and recv, by giving each communication channel its own socket and doing send and recv on that socket. I need to make this work in Twisted, and found potentially useful interfaces including Factory and ProcessProtocol. However, the Protocol interfaces do not provide a method to send messages. Is ProcessProtocol a good choice for my task, and how do I make ProcessProtocol send messages?
In case you don't know about it, I'd like to give a shout out to the excellent Twisted finger tutorial that goes through the library at a good pace but with enough detail that you know what's going on.
To directly answer your question, though, I'd say you're on the right track with Protocol and (Client)Factory. I think the cleanest way to do what you're looking for (assuming you need to connect to different ports because they're outputs for different data) would be to make a factory/protocol pair for each port you want to connect to/handle, and then use an external class to handle the application logic aggregating all of them. Generally you wouldn't want your application logic mixed deeply with your networking logic.
A simple example: (note the use of self.transport.write to send data)
from twisted.internet import reactor
from twisted.internet.protocol import Protocol, ClientFactory
from sys import stdout
from foobar_application import CustomAppObject  # placeholder module for your application logic

class FooProtocol(Protocol):
    def connectionMade(self):
        # Use self.transport.write to send data to the server
        self.transport.write('Hello server this is the Foo protocol.')
        self.factory.do_app_logic()

class FooFactory(ClientFactory):
    protocol = FooProtocol
    def __init__(self, app_object=None):
        self.app = app_object
    def do_app_logic(self):
        self.app.do_something()

class BarProtocol(Protocol):
    def dataReceived(self, data):
        stdout.write('Received data from server using the Bar protocol.')
        self.factory.do_fancy_logic(data)

class BarFactory(ClientFactory):
    protocol = BarProtocol
    def __init__(self, app_object=None):
        self.app = app_object
    def do_fancy_logic(self, data):
        self.app.do_something_else(data)

logic_obj = CustomAppObject()
# These factories are ClientFactory subclasses connecting out to a server,
# so connectTCP (not listenTCP) is the call to use; 'localhost' is a placeholder host.
reactor.connectTCP('localhost', 8888, FooFactory(app_object=logic_obj))
reactor.connectTCP('localhost', 9999, BarFactory(app_object=logic_obj))
reactor.run()
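For completeness, foobar_application is just a stand-in for your own application code; a hypothetical CustomAppObject could be as simple as:
class CustomAppObject(object):
    # Hypothetical aggregator for the application logic shared by both factories.
    def do_something(self):
        print 'Foo connection established.'

    def do_something_else(self, data):
        print 'Bar received: %r' % (data,)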
You might also want to look at the 'Writing Clients' docs on the Twisted site.

SocketServer ThreadingTCPServer & Asyncore Dispatcher

I want to add a timeout to individual connections within my request handler for a server using the SocketServer module.
Let me start by saying this is the first time I'm attempting network programming in Python. I've subclassed SocketServer.BaseRequestHandler, SocketServer.ThreadingTCPServer, and SocketServer.TCPServer and managed to create two classes with some basic threaded TCP functionality.
However, I would like my incoming connections to time out. Trying to override any of the built-in SocketServer timeout values and methods does not work, as the documentation says that only works with a forking server. I have managed to create a timer thread that fires after X seconds, but because of the blocking recv call within the handler thread, it is of no use: I would be forced to kill the thread, which is something I really want to avoid.
So my understanding is that I need an asyncore implementation, where I get notified and read a certain amount of data. If no data is sent over a period of, let's say, 5 seconds, I want to close that connection (I know how to do that cleanly).
I have found a few examples of using asyncore with sockets, but none using SocketServer. So, how can I combine asyncore and ThreadingTCPServer?
Is it possible?
Has anyone done it?
You can also set a timeout on the recv call, like this:
sock.settimeout(1.0)
Since you use SocketServer, you will have to find the underlying socket somewhere in the SocketServer. Please note that SocketServer will create the socket for you, so there is no need to do that yourself.
You will probably have defined a RequestHandler to go with your SocketServer. It should look something like this:
import socket
import SocketServer

class RequestHandler(SocketServer.BaseRequestHandler):
    def setup(self):
        # the socket is called request in the request handler
        self.request.settimeout(1.0)
    def handle(self):
        while True:
            try:
                data = self.request.recv(1024)
                if not data:
                    break  # connection is closed
                else:
                    pass  # do your thing
            except socket.timeout:
                pass  # handle timeout
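To tie that together, a minimal sketch of serving the handler above with a ThreadingTCPServer might look like this (the address and port are placeholders):
import SocketServer

# RequestHandler is the class defined above.
server = SocketServer.ThreadingTCPServer(('localhost', 9999), RequestHandler)
server.serve_forever()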

twisted and more clients in script

Hello
I'm new to Twisted, but I have some questions after reading the manual:
1. How do I use different protocols with different reactors in one program? (For example, txNetTools has its own reactor, and Twisted's built-in IRC support uses the reactor from twisted.internet.)
2. How do I start more than one client at a time? (Many clients, each pinging a different remote host.) http://bazaar.launchpad.net/~oubiwann/txnet/trunk/view/head:/sandbox/ping.py
3. How do I pass data from one protocol to another (in one program)? I want to use data from a database in a protocol. (For example, fetch hosts from the database every 5 minutes and create ping clients for them.)
My task is simple: create clients for several different protocols that talk to a large number of servers.
Well, for the third question at least, are you talking about using protocols of different classes or multiple protocol instances of the same class? Protocol instances can communicate between each other by having the factory that creates them store their data, like the following:
from twisted.internet.protocol import Protocol, Factory

class p(Protocol):
    factory = None
    ...

class f(Factory):
    protocol = p
    data = None
    def buildProtocol(self, addr):
        returnValue = p()
        returnValue.factory = self
        return returnValue
From there you can save data to self.factory.data from within a protocol instance, and any other protocol instance can access it. I hope that answered your question.
How do I use different protocols with different reactors in one program?
You don't. There is only one reactor per process, and it can handle as many connections as you want it to. The vast majority of libraries don't provide a reactor, and the reactor provided by txNetTools is optional. The only thing it provides is this method:
def listenICMP(self, port, protocol, interface="", maxPacketSize=8192):
    p = icmp.Port(port, protocol, interface, maxPacketSize, self)
    p.startListening()
    return p
If you want to use another reactor, then you can just instantiate an icmp.Port yourself.
How do I start more than one client at a time?
The same way you start one, but repeated. For example, here are ten concurrent pingers (incorporating the answer to the first question):
for i in range(10):
    p = icmp.Port(0, Pinger(), reactor=reactor)
    p.startListening()
reactor.run()
chameco gives a fine answer to the last question.
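For the periodic part of the third question (pulling hosts from a database every few minutes and creating ping clients for them), one option is Twisted's task.LoopingCall. The sketch below assumes hypothetical get_hosts_from_database() and make_ping_client() helpers standing in for your own code:
from twisted.internet import reactor, task

def refresh_clients():
    # Hypothetical helpers: fetch the current host list and start a pinger for each.
    for host in get_hosts_from_database():
        make_ping_client(host)

# Run refresh_clients every 300 seconds (5 minutes) on the reactor, no threads needed.
task.LoopingCall(refresh_clients).start(300)
reactor.run()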

Tornado WebSocket Question

Finally decided to go with Tornado as a WebSocket server, but I have a question about how it's implemented.
After following a basic tutorial on creating a working server, I ended up with this:
#!/usr/bin/env python
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado.web import Application
from tornado.websocket import WebSocketHandler

class Handler(WebSocketHandler):
    def open(self):
        print "New connection opened."

    def on_message(self, message):
        print message

    def on_close(self):
        print "Connection closed."

print "Server started."
HTTPServer(Application([("/", Handler)])).listen(1024)
IOLoop.instance().start()
It works great and all, but I was wondering if the other modules (tornado.httpserver, tornado.ioloop, and tornado.web) are actually needed to run the server.
It's not a huge issue having them, but I just wanted to make sure there wasn't a better way to do whatever they do (I haven't covered those modules at all yet).
tornado.httpserver :
A non-blocking, single-threaded HTTP server.
Typical applications have little direct interaction with the HTTPServer class.
HTTPServer is a very basic connection handler. Beyond parsing the HTTP request body and headers, the only HTTP semantics implemented in HTTPServer is HTTP/1.1 keep-alive connections.
tornado.ioloop :
An I/O event loop for non-blocking sockets.
So the IOLoop is what drives all of the non-blocking sockets; among other things, it can be used to set a timeout on a response.
In general, methods on RequestHandler and elsewhere in tornado are not thread-safe. In particular, methods such as write(), finish(), and flush() must only be called from the main thread. If you use multiple threads it is important to use IOLoop.add_callback to transfer control back to the main thread before finishing the request.
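As a quick illustration of that last point, here is a minimal sketch (the background-work function and the handler name are made up for the example) of doing slow work in a thread and then using IOLoop.add_callback to write the result back from the main thread:
import threading
from tornado.ioloop import IOLoop
from tornado.websocket import WebSocketHandler

class EchoUpperHandler(WebSocketHandler):
    def on_message(self, message):
        # Do the slow work in a background thread...
        threading.Thread(target=self._work, args=(message,)).start()

    def _work(self, message):
        result = message.upper()  # stand-in for something slow
        # ...but hand the write back to the IOLoop's thread.
        IOLoop.instance().add_callback(lambda: self.write_message(result))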
tornado.web :
Provides the RequestHandler and Application classes, plus additional tools and optimizations for taking advantage of the Tornado non-blocking web server.
Among other things, this module provides:
Entry points (hooks for subclass initialization)
Input handling
Output handling
Cookies
I hope this covers the modules you asked about.
Yes, they're needed, because you're using something from each module/package you import. If you imported something at the top of your source and never used it again anywhere in the following code, then of course you wouldn't need it, but in this case every import is used.
