I'm trying to make a simple TCP server using Twisted that can do some interaction between different client connections. The main code is below:
#!/usr/bin/env python
from twisted.internet import protocol, reactor
from time import ctime

# global variables
PORT = 22334
connlist = {}  # store all the connections
ids = {}       # map the from-to relationships


class TSServerProtocol(protocol.Protocol):

    def dataReceived(self, data):
        # the client input looks like "from_id|to_id"
        from_id, to_id = data.split('|')
        if self.haveConn(from_id):  # store new connections' information
            pass
        else:
            self.setConn(from_id)
        self.setIds(from_id, to_id)
        if to_id in self.csids.keys():
            # if the to_id target is found, push a message to it
            # (doesn't work as expected)
            self.connlist[to_id].transport.write(
                "you get a message now! from %s \n" % from_id)

    def setConn(self, sid):
        connlist[sid] = self

    # some other functions


factory = protocol.Factory()
factory.protocol = TSServerProtocol
print 'waiting for connection...'
reactor.listenTCP(PORT, factory)
reactor.run()
As the comments mention, when a new client connection comes in, I store its connection handle in the global variable connlist, which looks like
connlist = {a_from_id: a_conObj, b_from_id: b_conObj, ...}
I also parse the input and map its from-to information in ids. Then I check whether a key in ids matches the current "to_id"; if one does, I get the connection handle with connlist[to_id] and push a message to that target connection. But it doesn't work: the message only shows up on the same connection. I hope someone can point me in the right direction.
Thanks!
Each time a TCP connection is made, Twisted will create a unique instance of TSServerProtocol to handle that connection, so each TSServerProtocol instance only ever sees one connection. Normally this is what you want, but Factories can be extended to do the connection tracking you're attempting here. Specifically, you can subclass Factory and override its buildProtocol() method to track the instances of TSServerProtocol it creates. The interrelationships between all the classes in Twisted take a little time to learn and get used to. In particular, this piece of the standard Twisted documentation should be your best friend for the next while ;-)
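For illustration, here is a minimal sketch of that approach, assuming messages still look like "from_id|to_id"; the TrackingFactory name and its connections/instances attributes are made up for this sketch, not part of your code:

from twisted.internet import protocol, reactor

class TSServerProtocol(protocol.Protocol):
    def dataReceived(self, data):
        from_id, to_id = data.strip().split('|')
        # register this connection under its from_id on the shared factory
        self.factory.connections[from_id] = self
        target = self.factory.connections.get(to_id)
        if target is not None:
            target.transport.write("you get a message now! from %s\n" % from_id)

class TrackingFactory(protocol.Factory):
    protocol = TSServerProtocol

    def __init__(self):
        self.connections = {}   # from_id -> protocol instance
        self.instances = []     # every protocol this factory has built

    def buildProtocol(self, addr):
        proto = protocol.Factory.buildProtocol(self, addr)  # sets proto.factory = self
        self.instances.append(proto)
        return proto

reactor.listenTCP(22334, TrackingFactory())
reactor.run()

Because every protocol instance shares the one factory, a message arriving on one connection can be written to the transport of another.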
I implemented a basic SOCKS4 client with socket, but my Twisted translation isn't coming along too well. Here's my current code:
import struct
import socket

from twisted.python.failure import Failure
from twisted.internet import reactor
from twisted.internet.defer import Deferred
from twisted.internet.protocol import Protocol, ClientFactory


class Socks4Client(Protocol):

    VERSION = 4
    HOST = "0.0.0.0"
    PORT = 80
    REQUESTS = {
        "CONNECT": 1,
        "BIND": 2
    }
    RESPONSES = {
        90: "request granted",
        91: "request rejected or failed",
        92: "request rejected because SOCKS server cannot connect to identd on the client",
        93: "request rejected because the client program and identd report different user-ids"
    }

    def __init__(self):
        self.buffer = ""

    def connectionMade(self):
        self.connect(self.HOST, self.PORT)

    def dataReceived(self, data):
        self.buffer += data
        if len(self.buffer) == 8:
            self.validateResponse(self.buffer)

    def connect(self, host, port):
        data = struct.pack("!BBH", self.VERSION, self.REQUESTS["CONNECT"], port)
        data += socket.inet_aton(host)
        data += "\x00"
        self.transport.write(data)

    def validateResponse(self, data):
        version, result_code = struct.unpack("!BB", data[1:3])
        if version != 4:
            self.factory.protocolError(Exception("invalid version"))
        elif result_code == 90:
            self.factory.deferred.callback(self.RESPONSES[result_code])
        elif result_code in self.RESPONSES:
            self.factory.protocolError(Exception(self.RESPONSES[result_code]))
        else:
            self.factory.protocolError(Exception())
        self.transport.abortConnection()


class Socks4Factory(ClientFactory):

    protocol = Socks4Client

    def __init__(self, deferred):
        self.deferred = deferred

    def clientConnectionFailed(self, connector, reason):
        self.deferred.errback(reason)

    def clientConnectionLost(self, connector, reason):
        print "Connection lost:", reason

    def protocolError(self, reason):
        self.deferred.errback(reason)


def result(result):
    print "Success:", result


def error(reason):
    print "Error:", reason


if __name__ == "__main__":
    d = Deferred()
    d.addCallbacks(result, error)
    factory = Socks4Factory(d)
    reactor.connectTCP('127.0.0.1', 1080, factory)
    reactor.run()
I have a feeling that I'm abusing Deferred. Is this the right way to send results from my client?
I've read a few tutorials, looked at the documentation, and read through most of the protocols bundled with Twisted, but I still can't figure it out: what exactly is a ClientFactory for? Am I using it the right way?
clientConnectionLost gets triggered a lot. Sometimes I lose the connection and still get a successful response. How is that so? What does this mean, and should I treat it as an error?
How do I make sure that my deferred calls only one callback/errback?
Any tips are appreciated.
I have a feeling that I'm abusing Deferred. Is this the right way to send results from my client?
It's not ideal, but it's not exactly wrong either. Generally, you should try to keep the code that instantiates a Deferred as close as possible to the code that calls Deferred.callback or Deferred.errback on that Deferred. In this case, those pieces of code are quite far apart - the former is in __main__ while the latter is in a class created by a class created by code in __main__. This is sort of like the law of Demeter - the more steps between these two things, the more tightly coupled, inflexible, and fragile the software.
Consider giving Socks4Client a method that creates and returns this Deferred instance. Then, try using an endpoint to set up the connection so you can more easily call this method:
from twisted.internet.endpoints import TCP4ClientEndpoint

endpoint = TCP4ClientEndpoint(reactor, "127.0.0.1", 1080)
d = endpoint.connect(factory)

def connected(protocol):
    return protocol.waitForWhatever()

d.addCallback(connected)
d.addCallbacks(result, error)
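One possible shape for such a method, as a sketch only; the names getResult and _result are illustrative, not part of your code. The protocol creates the Deferred itself and fires it from validateResponse, so the factory no longer needs to carry it:

from twisted.internet.defer import Deferred

class Socks4Client(Protocol):
    # ... constants and the other methods from above ...

    def getResult(self):
        # the Deferred is created right next to the code that will fire it
        self._result = Deferred()
        return self._result

    def validateResponse(self, data):
        version, result_code = struct.unpack("!BB", data[1:3])
        if result_code == 90:
            self._result.callback(self.RESPONSES[result_code])
        else:
            self._result.errback(Exception(self.RESPONSES.get(result_code, "protocol error")))
        self.transport.abortConnection()

With that, connected() above would call protocol.getResult() in place of waitForWhatever(), and Socks4Factory would no longer need to be handed the Deferred at all.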
One thing to note here is that when using an endpoint, the clientConnectionFailed and clientConnectionLost methods of your factory won't be called. The endpoint takes over the former responsibility (though not the latter).
I've read a few tutorials, looked at the documentation, and read through most of the protocols bundled with Twisted, but I still can't figure it out: what exactly is a ClientFactory for? Am I using it the right way?
It's for just what you're doing. :) It creates protocol instances to use with connections. A factory is required because you might create connections to many servers (or many connections to one server). However, a lot of people have trouble with ClientFactory so more recently introduced Twisted APIs don't rely on it. For example, you could also do your connection setup as:
from twisted.internet.endpoints import TCP4ClientEndpoint, connectProtocol

endpoint = TCP4ClientEndpoint(reactor, "127.0.0.1", 1080)
d = connectProtocol(endpoint, Socks4Client())
...
ClientFactory is now out of the picture.
clientConnectionLost gets triggered a lot. Sometimes I lose the connection and still get a successful response. How is that so? What does this mean, and should I treat it as an error?
Every connection must eventually be lost. You have to decide on your own whether this is an error or not. If you have finished everything you wanted to do and you called loseConnection, it is probably not an error. Consider a connection to an HTTP server. If you have sent your request and received your response, then losing the connection is probably not a big deal. But if you have only received half the response, that's a problem.
How do I make sure that my deferred calls only one callback/errback?
If you structure your code as I described in response to your first question above, it becomes easier to do this. When the code that uses callback/errback on a Deferred is spread across large parts of your program, then it becomes harder to do this correctly.
It is just a matter of proper state tracking, though. Once you give a Deferred a result, you have to arrange to know that you shouldn't give it another one. A common idiom for this is to drop the reference to the Deferred. For example, if you are saving it as the value of an attribute on a protocol instance, then set that attribute to None when you have given the Deferred its result.
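In miniature, assuming the Deferred is stored as self._result as in the earlier sketch, that idiom looks like this:

# wherever the result (or failure) finally arrives:
d, self._result = self._result, None   # drop the reference first
if d is not None:
    d.callback(result)                 # or d.errback(failure)
# any later event finds self._result is None and knows the Deferred
# has already been given its result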
I am working on a Blackjack iPhone app that interacts with a Twisted socket to allow online gameplay. My issue at the moment is finding the right port. Let me explain.
I created a class called "Table". It holds information like a Blackjack table, like positions, players, and the card deck. One table is assigned to one Twisted socket, and one socket is assigned to one port. Right now, I am testing only ports 1025-1034.
What I want is for the app to ask each port, in ascending order, how many players are at its table. For testing, I only allow one user per table. If a table already has a user, the socket should return Table_Not_Found, but instead it returns the occupied port rather than moving on to the next port with nobody at it.
I don't think I'm handling the Table class and the search for an open table correctly. How can I find the right port? The app connects to a port; if the port is taken, it should get Table_Not_Found and then request the next available port. But in my case, the socket always returns the taken port. I can only test with my iMac and MacBook as the clients.
Bottom line: how do I search for an available table by port?
Thanks!
import time

from twisted.internet import reactor
from twisted.internet.protocol import Protocol, Factory


class Table:
    def __init__(self):
        self.players = []
        self.positions = []
        self.id = 0
        self.numberOfPlayers = 0

    def setID(self, _id):
        self.id = _id

    def setActivePlayer(self, player):
        player.countdown = 20
        while player.countdown > 0:
            print player.countdown
            time.sleep(1)
            player.countdown -= 1
            if player.countdown == 0:
                print "Out of time"
                moves.surrender(player)  # 'moves' comes from elsewhere in the app


class Socket(Protocol):
    table = Table()

    def connectionMade(self):
        #self.transport.write("""connected""")
        self.factory.clients.append(self)
        print "Clients are ", self.factory.clients

    def connectionLost(self, reason):
        self.factory.clients.remove(self)

    def dataReceived(self, data):
        #print "data is ", data
        a = data.split(':')
        if len(a) > 1:
            command = a[0]
            content = a[1]
            b = content.split(';')
            _UDID = b[0].replace('\n', '')
            if command == "Number_of_Players":
                if Socket.table.numberOfPlayers == 0:
                    msg = "%s:TableFound" % _UDID
                elif Socket.table.numberOfPlayers == 1:
                    msg = "%s:Table_Not_Found" % _UDID
                print msg
                for c in self.factory.clients:
                    c.message(msg)

    def message(self, message):
        self.transport.write(message)


NUM_TABLES = 10
factories = []

for i in range(0, NUM_TABLES):
    print i
    factory = Factory()
    factory.protocol = Socket
    factory.clients = []
    factories.append(factory)
    reactor.listenTCP(1025 + i, factory)

#print "Blackjack server started"
reactor.run()
The main problem you're having is the table = Table() in your socket class. This means that for all Socket instances ever, there is only one Table.
The quick fix is to store each Table on a Factory so that all connections to that Factory (i.e. that listening TCP port) will share a single Table instance.
This can be accomplished by removing the table = Table() line, and then modifying your for loop like so:
for i in range(0, NUM_TABLES):
    print i
    factory = Factory()
    factory.table = Table()  # <-- add this line
    factory.protocol = Socket
    factory.clients = []
    factories.append(factory)
    reactor.listenTCP(1025 + i, factory)
And then adjusting your connectionMade to start like this:
def connectionMade(self):
    self.table = self.factory.table
Now each Socket is pointing at its Factory's Table.
However, there are a number of other serious problems with this code:
You don't need, and shouldn't use, multiple ports for this protocol. Each new connection should show up and identify what blackjack game it wants to play in with a message over the protocol itself. Working over multiple ports just makes it harder for people to get through firewalls to play your game. You can use the same strategy, just setting the table attribute on the appropriate Socket instance.
You are expecting dataReceived to be called with whole messages. It won't be, and this is a FAQ that you should read in the Twisted docs. Or rather, it will be when you are testing, but then not when you deploy to the real internet. If you're doing iPhone development, you should use Network Link Conditioner to simulate real internet connections.
Since you don't seem to know how network protocol parsing works, you should use a protocol construction kit like AMP to build up your wire protocol. The API documentation includes a brief tutorial.
You're calling time.sleep. That will block the whole server. This is the wrong way to build a time-sensitive Twisted service. More importantly, Twisted won't process input while it's blocked waiting for your time.sleep to complete, so every player will always instantly surrender rather than being able to play any cards. You should use callLater instead, to schedule timed calls that change the state of the game.
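For the time.sleep point, a rough sketch of the callLater approach (the _tick name is made up here; moves.surrender is assumed to be your existing code):

from twisted.internet import reactor

class Table:
    # ... existing attributes ...

    def setActivePlayer(self, player):
        player.countdown = 20
        self._tick(player)

    def _tick(self, player):
        if player.countdown <= 0:
            print "Out of time"
            moves.surrender(player)
            return
        print player.countdown
        player.countdown -= 1
        # come back in one second without blocking the reactor
        reactor.callLater(1, self._tick, player)

Because callLater returns control to the reactor between ticks, other connections keep being serviced while the countdown runs.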
Most of the problems you're having though are object-composition problems, and not things that are super specific to Twisted or to Python. You need to draw out a map for yourself of what instances should be pointing at what other things. The important thing to understand is that when you make a call into the reactor like listenTCP or callLater, what you are setting up is a reference from the reactor to your object. There's nothing magic about it; you're just saying "later, call this method, on this object, under these circumstances". Everything flows out from there; your sockets having references to your tables, your tables having references to their players, and so on.
I have been asked to write a class that connects to a server, asynchronously sends the server various commands, and then provides the returned data to the client. I've been asked to do this in Python, which is a new language to me. I started digging around and found the Twisted framework which offers some very nice abstractions (Protocol, ProtocolFactory, Reactor) that do a lot of the things that I would have to do if I would roll my own socket-based app. It seems like the right choice given the problem that I have to solve.
I've looked through numerous examples on the web (mostly Krondo), but I still haven't seen a good example of creating a client that sends multiple commands across the wire while maintaining the connection it creates. The server (which I have no control over), in this case, doesn't disconnect after it sends the response. So, what's the proper way to design the client so that I can tickle the server in various ways?
Right now I do this:
from twisted.internet import reactor
from twisted.internet.protocol import Protocol, Factory


class TestProtocol(Protocol):
    def connectionMade(self):
        self.transport.write(self.factory.message)


class TestProtocolFactory(Factory):
    protocol = TestProtocol
    message = ''

    def setMessage(self, msg):
        self.message = msg


def main():
    f = TestProtocolFactory()
    f.setMessage("my message")
    reactor.connectTCP(...)
    reactor.run()
What I really want to do is call self.transport.write(...) via the reactor (really, call TestProtocolFactory::setMessage() on-demand from another thread of execution), not just when the connection is made.
Depends. Here are some possibilities:
I'm assuming parentProtocol here is some message-oriented base class (for example, one of the twisted.protocols.basic string receivers, which provide the stringReceived/sendString used below).
Approach 1. You have a list of commands to send to the server, and for some reason can't send them all at once. In that case, send a new one as the previous answer returns:
class proto(parentProtocol):
    def stringReceived(self, data):
        self.handle_server_response(data)
        next_command = self.command_queue.pop()
        # do stuff
Approach 2. What you send to the server is based on what the server sends you:
class proto(parentProtocol):
    def stringReceived(self, data):
        if data == "this":
            self.sendString("that")
        elif data == "foo":
            self.sendString("bar")
        # and so on
Approach 3. You don't care what the server sends you; you just want to periodically send some commands:
class proto(parentProtocol):
    def callback(self):
        next_command = self.command_queue.pop()
        # do stuff

    def connectionMade(self):
        from twisted.internet import task
        self.task_id = task.LoopingCall(self.callback)
        self.task_id.start(1.0)
Approach 4: Your edit now mentions triggering from another thread. Feel free to check the twisted documentation to find out if proto.sendString is threadsafe. You may be able to call it directly, but I don't know. Approach 3 is threadsafe though. Just fill the queue (which is threadsafe) from another thread.
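If you do go the other-thread route, the general-purpose tool for handing work to the reactor thread is reactor.callFromThread. A minimal sketch (command_queue is the same queue the LoopingCall above pops from):

from twisted.internet import reactor

def queue_command(proto, command):
    # runs the append in the reactor thread, so this is safe to call
    # from any other thread
    reactor.callFromThread(proto.command_queue.append, command)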
Basically, you can store any amount of state in your protocol; it will stay around until you are done. Then you either send commands to the server as a response to its messages to you, or you set up some scheduling to do your stuff. Or both.
You may want to use a Service.
Services are pieces of functionality within a Twisted app which are started and stopped, and are nice abstractions for other parts of your code to interact with. For example, in this case you might have a SayStuffToServerService (I know, terrible name, but without knowing more about its job it was the best I could do here :) ) that exposed something like this:
class SayStuffToServerService:
    def __init__(self, host, port):
        # this is the host and port to connect to

    def sendToServer(self, whatToSend):
        # send some line to the remote server

    def startService(self):
        # call me before using the service. starts outgoing connection efforts.

    def stopService(self):
        # clean reactor shutdowns should call this method. stops outgoing
        # connection efforts.
(That might be all the interface you need, but it should be fairly clear where you can add things to this.)
The startService() and stopService() methods here are just what Twisted's Services expose. And helpfully, there is a premade Twisted Service which acts like a TCP client and takes care of all the reactor stuff for you. It's twisted.application.internet.TCPClient, which takes arguments for a remote host and port, along with a ProtocolFactory to take care of handling the actual connection attempt.
Here is the SayStuffToServerService, implemented as a subclass of TCPClient:
from twisted.application import internet


class SayStuffToServerService(internet.TCPClient):
    factoryclass = SayStuffToServerProtocolFactory

    def __init__(self, host, port):
        self.factory = self.factoryclass()
        internet.TCPClient.__init__(self, host, port, self.factory)

    def sendToServer(self, whatToSend):
        # we'll do stuff here
(See below for the SayStuffToServerProtocolFactory.)
Using this Service architecture is convenient in a lot of ways: you can group Services together in one container, so that they all get stopped and started as one when you have different parts of your app that you want active. It may make good sense to implement other parts of your app as separate Services. You can set Services as child services to application, the magic name that twistd looks for in order to know how to initialize, daemonize, and shut down your app. Actually, yes, let's add some code to do that now.
from twisted.application import service
...
application = service.Application('say-stuff')
sttss = SayStuffToServerService('localhost', 65432)
sttss.setServiceParent(service.IServiceCollection(application))
That's all. Now when you run this module under twistd (i.e., for debugging, twistd -noy saystuff.py), that application will be started under the right reactor, and it will in turn start the SayStuffToServerService, which will start a connection effort to localhost:65432, which will use the service's factory attribute to set up the connection and the Protocol. You don't need to call reactor.run() or attach things to the reactor yourself anymore.
So we haven't implemented SayStuffToServerProtocolFactory yet. Since it sounds like you would prefer that your client reconnect if it has lost the connection (so that callers of sendToServer can usually just assume that there's a working connection), I'm going to put this protocol factory on top of ReconnectingClientFactory.
from twisted.internet import protocol


class SayStuffToServerProtocolFactory(protocol.ReconnectingClientFactory):
    _my_live_proto = None
    protocol = SayStuffToServerProtocol
This is a pretty nice minimal definition, which will keep trying to make outgoing TCP connections to the host and port we specified, and instantiate a SayStuffToServerProtocol each time. When we fail to connect, this class will do nice, well-behaved exponential backoff so that your network doesn't get hammered (you can set a maximum wait time). It will be the responsibility of the Protocol to assign to _my_live_proto and call this factory's resetDelay() method, so that exponential backoff will continue to work as expected. And here is that Protocol now:
class SayStuffToServerProtocol(basic.LineReceiver):
    def connectionMade(self):
        # if there are things you need to do on connecting to ensure the
        # connection is "all right" (maybe authenticate?) then do that
        # before calling:
        self.factory.resetDelay()
        self.factory._my_live_proto = self

    def connectionLost(self, reason):
        self.factory._my_live_proto = None
        del self.factory

    def sayStuff(self, stuff):
        self.sendLine(stuff)

    def lineReceived(self, line):
        # do whatever you want to do with incoming lines. often it makes sense
        # to have a queue of Deferreds on a protocol instance like this, and
        # each incoming response gets sent to the next queued Deferred (which
        # may have been pushed on the queue after sending some outgoing
        # message in sayStuff(), or whatever).
        pass
This is implemented on top of twisted.protocols.basic.LineReceiver, but would work as well with any other sort of Protocol, in case your protocol isn't line-oriented.
The only thing left is hooking up the Service to the right Protocol instance. This is why the Factory keeps a _my_live_proto attribute, which should be set when a connection is successfully made, and cleared (set to None) when that connection is lost. Here's the new implementation of SayStuffToServerService.sendToServer:
class NotConnectedError(Exception):
    pass


class SayStuffToServerService(internet.TCPClient):
    ...

    def sendToServer(self, whatToSend):
        if self.factory._my_live_proto is None:
            # define here whatever behavior is appropriate when there is no
            # current connection (in case the client can't connect or
            # reconnect)
            raise NotConnectedError
        self.factory._my_live_proto.sayStuff(whatToSend)
And now to tie it all together in one place:
from twisted.application import internet, service
from twisted.internet import protocol
from twisted.protocols import basic


class SayStuffToServerProtocol(basic.LineReceiver):
    def connectionMade(self):
        # if there are things you need to do on connecting to ensure the
        # connection is "all right" (maybe authenticate?) then do that
        # before calling:
        self.factory.resetDelay()
        self.factory._my_live_proto = self

    def connectionLost(self, reason):
        self.factory._my_live_proto = None
        del self.factory

    def sayStuff(self, stuff):
        self.sendLine(stuff)

    def lineReceived(self, line):
        # do whatever you want to do with incoming lines. often it makes sense
        # to have a queue of Deferreds on a protocol instance like this, and
        # each incoming response gets sent to the next queued Deferred (which
        # may have been pushed on the queue after sending some outgoing
        # message in sayStuff(), or whatever).
        pass


class SayStuffToServerProtocolFactory(protocol.ReconnectingClientFactory):
    _my_live_proto = None
    protocol = SayStuffToServerProtocol


class NotConnectedError(Exception):
    pass


class SayStuffToServerService(internet.TCPClient):
    factoryclass = SayStuffToServerProtocolFactory

    def __init__(self, host, port):
        self.factory = self.factoryclass()
        internet.TCPClient.__init__(self, host, port, self.factory)

    def sendToServer(self, whatToSend):
        if self.factory._my_live_proto is None:
            # define here whatever behavior is appropriate when there is no
            # current connection (in case the client can't connect or
            # reconnect)
            raise NotConnectedError
        self.factory._my_live_proto.sayStuff(whatToSend)


application = service.Application('say-stuff')
sttss = SayStuffToServerService('localhost', 65432)
sttss.setServiceParent(service.IServiceCollection(application))
Hopefully that gives enough of a framework with which to start. There is sometimes a lot of plumbing to do to handle client disconnections just the way you want, or to handle out-of-order responses from the server, or handle various sorts of timeout, canceling pending requests, allowing multiple pooled connections, etc, etc, but this should help.
The Twisted framework is event-based programming; by nature, its methods are all called asynchronously, and results are obtained via Deferred objects.
This nature is appropriate for protocol development; you just have to change your mindset from traditional sequential programming. The Protocol class is like a finite state machine with events such as: connection made, connection lost, data received.
You can convert your client code into an FSM and it will then fit easily into the Protocol class.
Below is a rough example of what I want to express. A bit rough, but it is what I can provide for now:
class SyncTransport(Protocol):
    # protocol
    def dataReceived(self, data):
        print 'receive data', data

    def connectionMade(self):
        print 'i made a sync connection, wow'
        self.transport.write('x')
        self.state = I_AM_LIVING

    def connectionLost(self, reason):
        print 'i lost my sync connection, sigh'

    def send(self, data):
        if self.state == I_AM_LIVING:
            if data == 'x':
                self.transport.write('y')
            if data == 'Y':
                self.transport.write('z')
                self.state = WAITING_DEAD
        if self.state == WAITING_DEAD:
            self.transport.loseConnection()
I'm trying to write a program that listens for data (simple text messages) on some port (say TCP 6666) and then passes it on to one or more different protocols: IRC, XMPP and so on. I've tried many approaches and dug through the Internet, but I can't find an easy, working solution for such a task.
The code I am currently fighting with is here: http://pastebin.com/ri7caXih
I would like to know how, from an object like:
ircf = ircFactory('asdfasdf', '#asdf666')
I can get access to its protocol's methods, because this:
self.protocol.dupa1(msg)
returns an error about self not being passed to the active protocol object. Or maybe there is another, better, easier and more kosher way to create a single reactor with multiple protocols, have actions triggered when a message arrives on any of them, and then pass that message to the other protocols for handling/processing/sending?
Any help will be highly appreciated!
Here is sample code to read from multiple connections to port 9001 and write out to a connection on port 9000. You would need multiple "PutLine" implementations, one for XMPP, IRC, MSN, etc.
I used a global to store the output connection PutLine, but you would want to create a more complex Factory object that handles this instead (see the sketch after the testing notes below).
#!/usr/bin/env python
from twisted.internet.protocol import Protocol, Factory
from twisted.internet.endpoints import clientFromString, serverFromString
from twisted.protocols.basic import LineReceiver
from twisted.internet import reactor

queue = []
putter = None


class GetLine(LineReceiver):
    delimiter = '\n'

    def lineReceived(self, line):
        queue.append(line)
        putter.have_data()
        self.sendLine(line)


class PutLine(LineReceiver):
    def __init__(self):
        global putter
        putter = self
        print 'putline init called %s' % str(self)

    def have_data(self):
        line = queue.pop()
        self.sendLine(line)


def main():
    f = Factory()
    f.protocol = PutLine
    endpoint = clientFromString(reactor, "tcp:host=localhost:port=9000")
    endpoint.connect(f)

    f = Factory()
    f.protocol = GetLine
    endpoint2 = serverFromString(reactor, "tcp:port=9001")
    endpoint2.listen(f)

    reactor.run()


if __name__ == '__main__':
    main()
Testing:
nc -l 9000
python test.py
nc localhost 9001
Data entered from any number of nc localhost 9001 (or netcat) sessions will appear on nc -l 9000.
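As a sketch of that "more complex Factory object" idea, one option is to give both factories a reference to a small shared object instead of using a module-level global. The Bridge class and the bridge attribute are invented names for this sketch:

from twisted.internet import reactor
from twisted.internet.endpoints import clientFromString, serverFromString
from twisted.internet.protocol import Factory
from twisted.protocols.basic import LineReceiver

class Bridge(object):
    """Shared by both factories; holds the current outgoing PutLine connection."""
    def __init__(self):
        self.putter = None

class PutLine(LineReceiver):
    def connectionMade(self):
        self.factory.bridge.putter = self

    def connectionLost(self, reason):
        self.factory.bridge.putter = None

class GetLine(LineReceiver):
    delimiter = '\n'

    def lineReceived(self, line):
        putter = self.factory.bridge.putter
        if putter is not None:   # drop (or queue) lines while disconnected
            putter.sendLine(line)
        self.sendLine(line)

def main():
    bridge = Bridge()

    put_factory = Factory()
    put_factory.protocol = PutLine
    put_factory.bridge = bridge
    clientFromString(reactor, "tcp:host=localhost:port=9000").connect(put_factory)

    get_factory = Factory()
    get_factory.protocol = GetLine
    get_factory.bridge = bridge
    serverFromString(reactor, "tcp:port=9001").listen(get_factory)

    reactor.run()

if __name__ == '__main__':
    main()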
This is answered in the FAQ.
http://twistedmatrix.com/trac/wiki/FrequentlyAskedQuestions#HowdoImakeinputononeconnectionresultinoutputonanother
See doc/core/examples/chatserver.py in the Twisted source. There, hooks are added to the Protocol's connectionMade and connectionLost methods to maintain a list of connected clients, and when a message arrives the protocol iterates through that list to pass it on.
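Reduced to its core, that pattern looks roughly like this (a sketch, not the exact chatserver.py code):

from twisted.internet.protocol import Factory
from twisted.protocols.basic import LineReceiver

class Chat(LineReceiver):
    def connectionMade(self):
        # hook: remember this client on the shared factory
        self.factory.clients.append(self)

    def connectionLost(self, reason):
        self.factory.clients.remove(self)

    def lineReceived(self, line):
        # relay the incoming line to every other connected client
        for client in self.factory.clients:
            if client is not self:
                client.sendLine(line)

factory = Factory()
factory.protocol = Chat
factory.clients = []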
Here is my problem.
My primary task is to deliver the "s" object to the "handle" method in the TestRequestHandler class.
My first step was to deliver the "s" object through the "point" method to the TestServer class, but here I'm stuck. How do I deliver the "s" object to TestRequestHandler? Any suggestions?
import threading
import SocketServer
from socket import *


class TestRequestHandler(SocketServer.BaseRequestHandler):
    def __init__(self, request, client_address, server):
        SocketServer.BaseRequestHandler.__init__(self, request, client_address, server)
        return

    def setup(self):
        return SocketServer.BaseRequestHandler.setup(self)

    def handle(self):
        data = self.request.recv(1024)
        if (data):
            self.request.send(data)
            print data

    def finish(self):
        return SocketServer.BaseRequestHandler.finish(self)


class TestServer(SocketServer.TCPServer):
    def __init__(self, server_address, handler_class=TestRequestHandler):
        print "__init__"
        SocketServer.TCPServer.__init__(self, server_address, handler_class)
        return

    def point(self, obj):
        self.obj = obj
        print "point"

    def server_activate(self):
        SocketServer.TCPServer.server_activate(self)
        return

    def serve_forever(self):
        print "serve_forever"
        while True:
            self.handle_request()
        return

    def handle_request(self):
        return SocketServer.TCPServer.handle_request(self)


if __name__ == '__main__':
    s = socket(AF_INET, SOCK_STREAM)
    address = ('localhost', 6666)
    server = TestServer(address, TestRequestHandler)
    server.point(s)

    t = threading.Thread(target=server.serve_forever)  # pass the method, don't call it here
    t.setDaemon(True)
    t.start()
If I understand correctly, I think you perhaps are misunderstanding how the module works. You are already specifying an address of 'localhost:6666' for the server to bind on.
When you start the server via your call to serve_forever(), this is going to cause the server to start listening to a socket on localhost:6666.
According to the documentation, that socket is passed to your RequestHandler as the 'request' object. When data is received on the socket, your 'handle' method should be able to recv/send from/to that object using the documented socket API.
If you want a further abstraction, it looks like your RequestHandler can extend from StreamRequestHandler and read/write to the socket using file-like objects instead.
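For example, a minimal sketch of that variant, using the rfile/wfile file-like objects that StreamRequestHandler provides:

import SocketServer

class EchoHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        # self.rfile and self.wfile are file-like wrappers around the request socket
        line = self.rfile.readline()
        if line:
            self.wfile.write(line)

server = SocketServer.TCPServer(('localhost', 6666), EchoHandler)
server.serve_forever()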
The point is, there is no need for you to create an additional socket and then try to force your server to use the new one instead. Part of the value of the SocketServer module is that it manages the lifecycle of the socket for you.
On the flip side, if you want to test your server from a client's perspective, then you would want to create a socket that you can read/write your client requests on. But you would never pass this socket to your server, per se. You would probably do this in a completely separate process and test your server via IPC over the socket.
Edit based on new information
To get server A to open a socket to server B when server A receives data, one solution is simply to open a socket from inside your RequestHandler. That said, there are likely some other design concerns that you will need to address based on the requirements of your service.
For example, you may want to use a simple connection pool that say opens a few sockets to server B that server A can use like a resource. There may already be some libraries in Python that help with this.
Given your current design, your RequestHandler has access to the server as a member variable so you could do something like this:
class TestServer(SocketServer.TCPServer):
    def point(self, socketB):
        self.socketB = socketB  # hold serverB socket


class TestRequestHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024)
        if (data):
            self.request.send(data)
            print data
        self.server.socketB ...  # Do whatever with the socketB
But like I said, it may be better for you to have some sort of connection pool or other object that manages your server B socket such that your server A handler can just acquire/release the socket as incoming requests are handled.
This way you can better deal with conditions where server B breaks the socket. Your current design wouldn't be able to handle broken sockets very easily. Just some thoughts...
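As a very rough sketch of that kind of manager object (the ServerBLink name and its retry behaviour are invented for illustration, not taken from any library), something like this could sit on the server and be used from the handler:

import socket

class ServerBLink(object):
    """Owns the socket to server B and reopens it if it breaks."""
    def __init__(self, address):
        self.address = address
        self.sock = None

    def _connect(self):
        self.sock = socket.create_connection(self.address)

    def send_and_recv(self, data):
        if self.sock is None:
            self._connect()
        try:
            self.sock.sendall(data)
            return self.sock.recv(1024)
        except socket.error:
            # server B dropped the connection; reconnect once and retry
            self._connect()
            self.sock.sendall(data)
            return self.sock.recv(1024)

# in TestServer.__init__ or point():  self.serverB = ServerBLink(('localhost', 7777))
# in TestRequestHandler.handle():     reply = self.server.serverB.send_and_recv(data)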
If the value of s is set once, and not reinitialized - you could make it a class variable as opposed to an instance variable of TestServer, and then have the handler retrieve it via a class method of TestServer in the handler's constructor.
eg: TestServer._mySocket = s
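In code, that suggestion might look roughly like this (showing only the relevant parts; getSocket is an invented name):

class TestServer(SocketServer.TCPServer):
    _mySocket = None          # class variable, shared by every instance

    @classmethod
    def getSocket(cls):
        return cls._mySocket

# at startup, instead of server.point(s):
TestServer._mySocket = s

# inside TestRequestHandler (e.g. in __init__ or handle()):
#     sock = TestServer.getSocket()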
OK, my main task is this: build a listening server (server A, localhost:6666) which, on startup, opens a persistent ("hard") connection to a different server (server B, localhost:7777).
When a client sends data to server A, server A forwards the data over that persistent connection to server B, receives the answer from server B, and sends the answer back to the client.
Then again: the client sends data, server A receives it, forwards it to server B, gets the response from server B, and sends it back to the client.
And so on, round and round. The connection to server B is closed only when server A stops.
Everything above is my attempt at making this.