Here is my problem.
My primary task is to deliver the "s" object to the "handle" method of the TestRequestHandler class.
My first step was to pass the "s" object to the TestServer class through its "point" method, but here I'm stuck: how do I deliver the "s" object to TestRequestHandler? Any suggestions?
import threading
import SocketServer
from socket import *

class TestRequestHandler(SocketServer.BaseRequestHandler):
    def __init__(self, request, client_address, server):
        SocketServer.BaseRequestHandler.__init__(self, request, client_address, server)
        return

    def setup(self):
        return SocketServer.BaseRequestHandler.setup(self)

    def handle(self):
        data = self.request.recv(1024)
        if (data):
            self.request.send(data)
            print data

    def finish(self):
        return SocketServer.BaseRequestHandler.finish(self)

class TestServer(SocketServer.TCPServer):
    def __init__(self, server_address, handler_class=TestRequestHandler):
        print "__init__"
        SocketServer.TCPServer.__init__(self, server_address, handler_class)
        return

    def point(self, obj):
        self.obj = obj
        print "point"

    def server_activate(self):
        SocketServer.TCPServer.server_activate(self)
        return

    def serve_forever(self):
        print "serve_forever"
        while True:
            self.handle_request()
        return

    def handle_request(self):
        return SocketServer.TCPServer.handle_request(self)

if __name__ == '__main__':
    s = socket(AF_INET, SOCK_STREAM)
    address = ('localhost', 6666)
    server = TestServer(address, TestRequestHandler)
    server.point(s)
    t = threading.Thread(target=server.serve_forever)  # pass the method itself, don't call it here
    t.setDaemon(True)
    t.start()
    t.join()  # keep the main thread alive so the daemon thread isn't killed immediately
If I understand correctly, I think you perhaps are misunderstanding how the module works. You are already specifying an address of 'localhost:6666' for the server to bind on.
When you start the server via your call to serve_forever(), this is going to cause the server to start listening to a socket on localhost:6666.
According to the documentation, that socket is passed to your RequestHandler as the 'request' object. When data is received on the socket, your 'handle' method should be able to recv/send from/to that object using the documented socket API.
If you want a further abstraction, it looks like your RequestHandler can extend from StreamRequestHandler and read/write to the socket using file-like objects instead.
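For illustration, a minimal sketch of that StreamRequestHandler variant (my own example, not your code) might look like this, reading and writing the connection through the rfile/wfile file-like objects:

import SocketServer

class EchoLineHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        line = self.rfile.readline()   # read one line from the client
        if line:
            self.wfile.write(line)     # echo it straight back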
The point is, there is no need for you to create an additional socket and then try to force your server to use the new one instead. Part of the value of the SocketServer module is that it manages the lifecycle of the socket for you.
On the flip side, if you want to test your server from a client's perspective, then you would want to create a socket that you can read/write your client requests on. But you would never pass this socket to your server, per se. You would probably do this in a completely separate process and test your server via IPC over the socket.
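For example, a throwaway test client for poking the echo server above could be as simple as this (the address matches the one in your code):

from socket import socket, AF_INET, SOCK_STREAM

client = socket(AF_INET, SOCK_STREAM)
client.connect(('localhost', 6666))
client.send('hello')
print client.recv(1024)   # the server should echo 'hello' back
client.close()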
Edit based on new information
To get server A to open a socket to server B when server A receives data, one solution is simply to open a socket from inside your RequestHandler. That said, there are likely some other design concerns that you will need to address based on the requirements of your service.
For example, you may want to use a simple connection pool that, say, opens a few sockets to server B that server A can use like a resource. There may already be some libraries in Python that help with this.
Given your current design, your RequestHandler has access to the server as a member variable so you could do something like this:
class TestServer(SocketServer.TCPServer):
    def point(self, socketB):
        self.socketB = socketB  # hold serverB socket

class TestRequestHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024)
        if (data):
            self.request.send(data)
            print data
        self.server.socketB ...  # Do whatever with the socketB
But like I said, it may be better for you to have some sort of connection pool or other object that manages your server B socket such that your server A handler can just acquire/release the socket as incoming requests are handled.
This way you can better deal with conditions where server B breaks the socket. Your current design wouldn't be able to handle broken sockets very easily. Just some thoughts...
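To make the pool idea a bit more concrete, here is a very rough sketch of my own (not tested), using the stdlib Queue module as the acquire/release mechanism; the pool size and the replace-on-broken-socket policy are just illustrative choices:

import Queue
from socket import socket, AF_INET, SOCK_STREAM

class SocketPool(object):
    def __init__(self, address, size=4):
        self.address = address
        self.pool = Queue.Queue()
        for _ in range(size):
            self.pool.put(self._connect())

    def _connect(self):
        s = socket(AF_INET, SOCK_STREAM)
        s.connect(self.address)
        return s

    def acquire(self):
        return self.pool.get()       # blocks until a socket is free

    def release(self, s, broken=False):
        if broken:
            s.close()
            s = self._connect()      # replace a broken socket with a fresh one
        self.pool.put(s)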
If the value of s is set once and not reinitialized, you could make it a class variable as opposed to an instance variable of TestServer, and then have the handler retrieve it via a class method of TestServer in the handler's constructor.
eg: TestServer._mySocket = s
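Roughly, that idea sketched out (using the _mySocket name from above):

import SocketServer

class TestServer(SocketServer.TCPServer):
    _mySocket = None                      # class-level slot for the shared socket

class TestRequestHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        socketB = TestServer._mySocket    # fetch the shared socket
        # ... use socketB here ...

# once, at startup:
# TestServer._mySocket = s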
OK, my main task is this: build a listening server (A-server, localhost:6666) which, on startup, opens a "hard" (persistent) connection to a different server (B-server, localhost:7777).
When a customer sends data to the A-server, the A-server forwards it over that persistent connection to the B-server, receives the answer from the B-server, and sends the answer back to the customer.
Then again: the customer sends data, the A-server receives it, forwards it to the B-server, receives the B-server's answer, and sends it back to the customer.
And so on, round and round. The connection to the B-server is closed only when the A-server stops.
The code above is my test attempt at building this.
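A rough, untested sketch of that flow, building on the self.server.socketB suggestion from the answer above and reusing the TestServer class and socket import from the question (error handling and message framing deliberately omitted):

class RelayHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        while True:
            data = self.request.recv(1024)       # from the customer
            if not data:
                break
            self.server.socketB.sendall(data)    # forward over the "hard" connection
            reply = self.server.socketB.recv(1024)
            self.request.send(reply)             # answer back to the customer

if __name__ == '__main__':
    socketB = socket(AF_INET, SOCK_STREAM)
    socketB.connect(('localhost', 7777))         # the B-server
    server = TestServer(('localhost', 6666), RelayHandler)
    server.socketB = socketB                     # same effect as the answer's point()
    server.serve_forever()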
Related
I have been asked to write a class that connects to a server, asynchronously sends the server various commands, and then provides the returned data to the client. I've been asked to do this in Python, which is a new language to me. I started digging around and found the Twisted framework which offers some very nice abstractions (Protocol, ProtocolFactory, Reactor) that do a lot of the things that I would have to do if I would roll my own socket-based app. It seems like the right choice given the problem that I have to solve.
I've looked through numerous examples on the web (mostly Krondo), but I still haven't seen a good example of creating a client that sends multiple commands across the wire while maintaining the connection it creates. The server (which I have no control over), in this case, doesn't disconnect after it sends the response. So, what's the proper way to design the client so that I can tickle the server in various ways?
Right now I do this:
from twisted.internet.protocol import Protocol, Factory
from twisted.internet import reactor

class TestProtocol(Protocol):
    def connectionMade(self):
        self.transport.write(self.factory.message)

class TestProtocolFactory(Factory):
    message = ''

    def setMessage(self, msg):
        self.message = msg

def main():
    f = TestProtocolFactory()
    f.setMessage("my message")
    reactor.connectTCP(...)
    reactor.run()
What I really want to do is call self.transport.write(...) via the reactor (really, call TestProtocolFactory::setMessage() on-demand from another thread of execution), not just when the connection is made.
Depends. Here are some possibilities, depending on what I'm assuming about your situation:

Approach 1. You have a list of commands to send the server, and for some reason can't send them all at once. In that case, send a new one as each previous answer returns:
class proto(parentProtocol):
    def stringReceived(self, data):
        self.handle_server_response(data)
        next_command = self.command_queue.pop()
        # do stuff
Approach 2. What you send to the server is based on what the server sends you:
class proto(parentProtocol):
    def stringReceived(self, data):
        if data == "this":
            self.sendString("that")
        elif data == "foo":
            self.sendString("bar")
        # and so on
Approach 3. You don't care what the server sends you; you just want to periodically send some commands:
class proto(parentProtocol):
    def callback(self):
        next_command = self.command_queue.pop()
        # do stuff

    def connectionMade(self):
        from twisted.internet import task
        self.task_id = task.LoopingCall(self.callback)
        self.task_id.start(1.0)
Approach 4: Your edit now mentions triggering from another thread. Feel free to check the twisted documentation to find out if proto.sendString is threadsafe. You may be able to call it directly, but I don't know. Approach 3 is threadsafe though. Just fill the queue (which is threadsafe) from another thread.
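If you do end up triggering sends from another thread, one conventional thread-safe route (my own addition, not part of the approaches above) is reactor.callFromThread, which hands the call over to the reactor thread:

from twisted.internet import reactor

def send_from_other_thread(proto, command):
    # safe to call from any thread; the actual write happens in the reactor thread
    reactor.callFromThread(proto.sendString, command)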
Basically, you can store any amount of state in your protocol; it will stay around until you are done. Then you either send commands to the server in response to its messages to you, or you set up some scheduling to do your stuff. Or both.
You may want to use a Service.
Services are pieces of functionality within a Twisted app which are started and stopped, and are nice abstractions for other parts of your code to interact with. For example, in this case you might have a SayStuffToServerService (I know, terrible name, but without knowing more about its job it was the best I could do here :) ) that exposed something like this:
class SayStuffToServerService:
    def __init__(self, host, port):
        # this is the host and port to connect to

    def sendToServer(self, whatToSend):
        # send some line to the remote server

    def startService(self):
        # call me before using the service. starts outgoing connection efforts.

    def stopService(self):
        # clean reactor shutdowns should call this method. stops outgoing
        # connection efforts.
(That might be all the interface you need, but it should be fairly clear where you can add things to this.)
The startService() and stopService() methods here are just what Twisted's Services expose. And helpfully, there is a premade Twisted Service which acts like a TCP client and takes care of all the reactor stuff for you. It's twisted.application.internet.TCPClient, which takes arguments for a remote host and port, along with a ProtocolFactory to take care of handling the actual connection attempt.
Here is the SayStuffToServerService, implemented as a subclass of TCPClient:
from twisted.application import internet

class SayStuffToServerService(internet.TCPClient):
    factoryclass = SayStuffToServerProtocolFactory

    def __init__(self, host, port):
        self.factory = self.factoryclass()
        internet.TCPClient.__init__(self, host, port, self.factory)

    def sendToServer(self, whatToSend):
        # we'll do stuff here
(See below for the SayStuffToServerProtocolFactory.)
Using this Service architecture is convenient in a lot of ways; you can group Services together in one container, so that they all get stopped and started as one when you have different parts of your app that you want active. It may make good sense to implement other parts of your app as separate Services. You can set Services as child services to application, the magic name that twistd looks for in order to know how to initialize, daemonize, and shut down your app. Actually yes, let's add some code to do that now.
from twisted.application import service
...
application = service.Application('say-stuff')
sttss = SayStuffToServerService('localhost', 65432)
sttss.setServiceParent(service.IServiceCollection(application))
That's all. Now when you run this module under twistd (i.e., for debugging, twistd -noy saystuff.py), that application will be started under the right reactor, and it will in turn start the SayStuffToServerService, which will start a connection effort to localhost:65432, which will use the service's factory attribute to set up the connection and the Protocol. You don't need to call reactor.run() or attach things to the reactor yourself anymore.
So we haven't implemented SayStuffToServerProtocolFactory yet. Since it sounds like you would prefer that your client reconnect if it has lost the connection (so that callers of sendToServer can usually just assume that there's a working connection), I'm going to put this protocol factory on top of ReconnectingClientFactory.
from twisted.internet import protocol

class SayStuffToServerProtocolFactory(protocol.ReconnectingClientFactory):
    _my_live_proto = None
    protocol = SayStuffToServerProtocol
This is a pretty nice minimal definition, which will keep trying to make outgoing TCP connections to the host and port we specified, and instantiate a SayStuffToServerProtocol each time. When we fail to connect, this class will do nice, well-behaved exponential backoff so that your network doesn't get hammered (you can set a maximum wait time). It will be the responsibility of the Protocol to assign to _my_live_proto and call this factory's resetDelay() method, so that exponential backoff will continue to work as expected. And here is that Protocol now:
class SayStuffToServerProtocol(basic.LineReceiver):
    def connectionMade(self):
        # if there are things you need to do on connecting to ensure the
        # connection is "all right" (maybe authenticate?) then do that
        # before calling:
        self.factory.resetDelay()
        self.factory._my_live_proto = self

    def connectionLost(self, reason):
        self.factory._my_live_proto = None
        del self.factory

    def sayStuff(self, stuff):
        self.sendLine(stuff)

    def lineReceived(self, line):
        # do whatever you want to do with incoming lines. often it makes sense
        # to have a queue of Deferreds on a protocol instance like this, and
        # each incoming response gets sent to the next queued Deferred (which
        # may have been pushed on the queue after sending some outgoing
        # message in sayStuff(), or whatever).
        pass
This is implemented on top of twisted.protocols.basic.LineReceiver, but would work as well with any other sort of Protocol, in case your protocol isn't line-oriented.
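As an aside, the queue-of-Deferreds idea mentioned in the lineReceived() comment could look roughly like this (a sketch of mine, not part of the original answer):

from collections import deque
from twisted.internet import defer
from twisted.protocols import basic

class RequestResponseProtocol(basic.LineReceiver):
    def connectionMade(self):
        self.pending = deque()

    def sayStuff(self, stuff):
        d = defer.Deferred()
        self.pending.append(d)       # remember who is waiting for a reply
        self.sendLine(stuff)
        return d                     # caller adds callbacks for the eventual response

    def lineReceived(self, line):
        if self.pending:
            self.pending.popleft().callback(line)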
The only thing left is hooking up the Service to the right Protocol instance. This is why the Factory keeps a _my_live_proto attribute, which should be set when a connection is successfully made, and cleared (set to None) when that connection is lost. Here's the new implementation of SayStuffToServerService.sendToServer:
class NotConnectedError(Exception):
    pass

class SayStuffToServerService(internet.TCPClient):
    ...
    def sendToServer(self, whatToSend):
        if self.factory._my_live_proto is None:
            # define here whatever behavior is appropriate when there is no
            # current connection (in case the client can't connect or
            # reconnect)
            raise NotConnectedError
        self.factory._my_live_proto.sayStuff(whatToSend)
And now to tie it all together in one place:
from twisted.application import internet, service
from twisted.internet import protocol
from twisted.protocols import basic

class SayStuffToServerProtocol(basic.LineReceiver):
    def connectionMade(self):
        # if there are things you need to do on connecting to ensure the
        # connection is "all right" (maybe authenticate?) then do that
        # before calling:
        self.factory.resetDelay()
        self.factory._my_live_proto = self

    def connectionLost(self, reason):
        self.factory._my_live_proto = None
        del self.factory

    def sayStuff(self, stuff):
        self.sendLine(stuff)

    def lineReceived(self, line):
        # do whatever you want to do with incoming lines. often it makes sense
        # to have a queue of Deferreds on a protocol instance like this, and
        # each incoming response gets sent to the next queued Deferred (which
        # may have been pushed on the queue after sending some outgoing
        # message in sayStuff(), or whatever).
        pass

class SayStuffToServerProtocolFactory(protocol.ReconnectingClientFactory):
    _my_live_proto = None
    protocol = SayStuffToServerProtocol

class NotConnectedError(Exception):
    pass

class SayStuffToServerService(internet.TCPClient):
    factoryclass = SayStuffToServerProtocolFactory

    def __init__(self, host, port):
        self.factory = self.factoryclass()
        internet.TCPClient.__init__(self, host, port, self.factory)

    def sendToServer(self, whatToSend):
        if self.factory._my_live_proto is None:
            # define here whatever behavior is appropriate when there is no
            # current connection (in case the client can't connect or
            # reconnect)
            raise NotConnectedError
        self.factory._my_live_proto.sayStuff(whatToSend)

application = service.Application('say-stuff')
sttss = SayStuffToServerService('localhost', 65432)
sttss.setServiceParent(service.IServiceCollection(application))
Hopefully that gives enough of a framework with which to start. There is sometimes a lot of plumbing to do to handle client disconnections just the way you want, or to handle out-of-order responses from the server, or handle various sorts of timeout, canceling pending requests, allowing multiple pooled connections, etc, etc, but this should help.
The Twisted framework is event-based; by its nature, methods are all called asynchronously, and results are obtained through Deferred objects.
The framework's nature is well suited to protocol development; you just have to shift your thinking away from traditional sequential programming. The Protocol class is like a finite state machine with events such as: connection made, connection lost, data received.
You can convert your client code into an FSM and then it will fit easily into the Protocol class.
Below is a rough example of what I want to express. It's a bit rough, but it's what I can provide for now:
from twisted.internet.protocol import Protocol

# simple state markers for the FSM
I_AM_LIVING, WAITING_DEAD = range(2)

class SyncTransport(Protocol):
    # protocol
    def dataReceived(self, data):
        print 'receive data', data

    def connectionMade(self):
        print 'i made a sync connection, wow'
        self.transport.write('x')
        self.state = I_AM_LIVING

    def connectionLost(self, reason):
        print 'i lost my sync connection, sigh'

    def send(self, data):
        if self.state == I_AM_LIVING:
            if data == 'x':
                self.transport.write('y')
            if data == 'Y':
                self.transport.write('z')
                self.state = WAITING_DEAD
        if self.state == WAITING_DEAD:
            self.transport.loseConnection()  # Twisted's way to close the transport
I've just started working with the basics of Python socket networking. As an exercise in understanding, I've been trying to hash out a basic server that will ask its client for a file type and, upon receiving a string of the extension, ask for the actual file. I've found numerous tutorials online that use the asyncore library, specifically asynchat, to set up this kind of call-and-response functionality.
The most basic one I've been following can be found here (I've copied it)
http://effbot.org/librarybook/asynchat.htm
import asyncore, asynchat
import os, socket, string

PORT = 8000

class HTTPChannel(asynchat.async_chat):
    def __init__(self, server, sock, addr):
        asynchat.async_chat.__init__(self, sock)
        self.set_terminator("\r\n")
        self.request = None
        self.data = ""
        self.shutdown = 0

    def collect_incoming_data(self, data):
        self.data = self.data + data

    def found_terminator(self):
        if not self.request:
            # got the request line
            self.request = string.split(self.data, None, 2)
            if len(self.request) != 3:
                self.shutdown = 1
            else:
                self.push("HTTP/1.0 200 OK\r\n")
                self.push("Content-type: text/html\r\n")
                self.push("\r\n")
            self.data = self.data + "\r\n"
            self.set_terminator("\r\n\r\n")  # look for end of headers
        else:
            # return payload.
            self.push("<html><body><pre>\r\n")
            self.push(self.data)
            self.push("</pre></body></html>\r\n")
            self.close_when_done()

class HTTPServer(asyncore.dispatcher):
    def __init__(self, port):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.bind(("", port))
        self.listen(5)

    def handle_accept(self):
        conn, addr = self.accept()
        HTTPChannel(self, conn, addr)

#
# try it out

s = HTTPServer(PORT)
print "serving at port", PORT, "..."
asyncore.loop()  # start the event loop; without this the server never runs
My question has to do with the handle_accept method of the HTTPServer class. If a new HTTPChannel object is initialized every time a request comes in, wouldn't it be impossible in this kind of setup to create a call and response? I was thinking one could set flags for _hastype and _hasfile in the channel object, but since the accept inits it for each individual connection, the object's state is forgotten with every individual request. I realize this setup is supposed to be a basic HTTP server, but my question is: how could I edit it to set up something like what I've described? Would the server object need to inherit from asynchat itself and forego dispatcher completely? The channel object would have to have some state to know that the filetype has already been sent, and then ask for the binary of the file instead. I'm very curious to know what the cleanest possible implementation of this might look like.
Thanks a ton - I'm very new to sockets. Please let me know if I haven't been clear.
Normally the connection would be kept open after it's initially created, so all the parts of the communication from the same client go to the same HTTPChannel object - accept is only called when a new connection is created.
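Since the channel object lives for the whole connection, it can simply remember what it has seen so far. A rough sketch of mine (not tested) of the extension-then-file exchange you describe, with made-up state handling and a line-terminated "file" for simplicity:

import asynchat

class FileChannel(asynchat.async_chat):
    def __init__(self, sock):
        asynchat.async_chat.__init__(self, sock)
        self.set_terminator("\r\n")
        self.data = ""
        self.filetype = None              # remembered across messages on this connection

    def collect_incoming_data(self, data):
        self.data = self.data + data

    def found_terminator(self):
        if self.filetype is None:
            # first message: the file extension
            self.filetype = self.data
            self.push("OK, send the file\r\n")
        else:
            # second message: the file itself (a real transfer would switch to
            # set_terminator(some_byte_count) for binary data)
            self.handle_file(self.filetype, self.data)
            self.filetype = None          # ready for the next extension/file pair
        self.data = ""

    def handle_file(self, filetype, contents):
        print "got a %s file, %d bytes" % (filetype, len(contents))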
I'm writing a simple XML-RPC program in Python, something like the following:
import SimpleXMLRPCServer

def foo(data):
    # I want to get the calling client's IP address here... How can I?
    pass

server = SimpleXMLRPCServer.SimpleXMLRPCServer((host, port))
server.register_function(foo)
server.handle_request()
As can be seen above, I want to get the client's IP address inside the registered function "foo". How can I?
You may do so by subclassing the server (and possibly the handler, too). E.g.:
class MyXMLRPCServer(SimpleXMLRPCServer.SimpleXMLRPCServer):
    def process_request(self, request, client_address):
        self.client_address = client_address
        return SimpleXMLRPCServer.SimpleXMLRPCServer.process_request(
            self, request, client_address)

server = MyXMLRPCServer((host, port))
Now server.client_address gives you the desired data. Note that this direct, short coding only works for the single-threaded case (which you're using anyway by choosing the simple server in your code) -- the need to work with the handler comes in if you want to go multi-threaded.
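For example (building on the subclass above, single-threaded as noted), the registered function can simply read what process_request() stashed on the module-level server object:

def foo(data):
    ip, port = server.client_address     # set by process_request() above
    return "request from %s:%d" % (ip, port)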
I'm trying to make a simple TCP server using Twisted which can do some interaction between different client connections. The main code is as below:
#!/usr/bin/env python
from twisted.internet import protocol, reactor
from time import ctime

# global variables
PORT = 22334
connlist = {}  # store all the connections
ids = {}       # map the from-to relationships

class TSServerProtocol(protocol.Protocol):
    def dataReceived(self, data):
        # get the IDs from standard client input, which looks like "from_id|to_id"
        from_id, to_id = data.split('|')
        if self.haveConn(from_id):  # try to store new connections' information
            pass
        else:
            self.setConn(from_id)
            self.setIds(from_id, to_id)
        if to_id in self.csids.keys():
            # if the to_id target is found, push him a message. doesn't work as expected
            self.connlist[to_id].transport.write(
                "you get a message now! from %s \n" % from_id)

    def setConn(self, sid):
        connlist[sid] = self

    # some other functions

factory = protocol.Factory()
factory.protocol = TSServerProtocol
print 'waiting for connection...'
reactor.listenTCP(PORT, factory)
reactor.run()
As the comments mention, when a new client connection comes in, I store its connection handle in a global variable connlist, which looks like
connlist = {a_from_id: a_conObj, b_from_id: b_conObj, ...}
and I also parse the input and map its from-to information in ids. Then I check whether there's a key in ids that matches the current "to_id"; if there is, I get the connection handle using connlist[to_id] and push a message to the target connection. But it doesn't work: the message only shows up in the same connection. I hope someone can show me some directions on this.
Thanks!
Each time a TCP connection is made, Twisted will create a unique instance of TSServerProtocol to handle that connection. So, you'll only ever see 1 connection in TSServerProtocol. Normally, this is what you want but Factories can be extended to do the connection tracking you're attempting to do here. Specifically, you can subclass Factory and override the buildProtocol() method to track instances of TSServerProtocol. The interrelationship between all the classes in Twisted takes a little time to learn and get used to. In particular, this piece of the standard Twisted documentation should be your best friend for the next while ;-)
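A rough sketch of that factory-based tracking (the registry name and re-registration details are my own illustration, not from the answer):

from twisted.internet import protocol, reactor

class TSServerProtocol(protocol.Protocol):
    def dataReceived(self, data):
        from_id, to_id = data.strip().split('|')
        self.factory.connections[from_id] = self       # remember this client's connection
        peer = self.factory.connections.get(to_id)
        if peer is not None:
            peer.transport.write("you get a message now! from %s\n" % from_id)

class TSServerFactory(protocol.Factory):
    protocol = TSServerProtocol

    def __init__(self):
        self.connections = {}      # from_id -> live protocol instance

    def buildProtocol(self, addr):
        # Factory.buildProtocol() sets p.factory = self, which is what lets
        # every protocol instance reach the shared registry above
        return protocol.Factory.buildProtocol(self, addr)

reactor.listenTCP(22334, TSServerFactory())
reactor.run()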
I am running an HTTP server using the twisted framework. Is there any way I can "manually" ask it to process some payload? For example, if I've constructed some Ethernet frame can I ask twisted's reactor to handle it just as if it had just arrived on my network card?
You can do something like this:
from twisted.web import server
from twisted.web.resource import Resource
from twisted.internet import reactor
from twisted.internet.protocol import Protocol, ClientFactory

class SomeWebThing(Resource):
    def render_GET(self, request):
        return "hello\n"

class SomeClient(Protocol):
    def dataReceived(self, data):
        p = self.factory.site.buildProtocol(self.transport.addr)
        p.transport = self.transport
        p.dataReceived(data)

class SomeClientFactory(ClientFactory):
    protocol = SomeClient

    def __init__(self, site):
        self.site = site

if __name__ == '__main__':
    root = Resource()
    root.putChild('thing', SomeWebThing())
    site = server.Site(root)
    reactor.listenTCP(8000, site)
    factory = SomeClientFactory(site)
    reactor.connectTCP('localhost', 9000, factory)
    reactor.run()
and save it as simpleinjecter.py. If you then do (from the command line):
echo -e "GET /thing HTTP/1.1\r\n\r\n" | nc -l 9000 # runs a server, ready to send req to first client connection
python simpleinjecter.py
it should work as expected: the request from the nc server on port 9000 gets funneled as the payload into the Twisted web server, and the response comes back over the same connection.
The key lines are in SomeClient.dataReceived(). You'll need a transport object with the right methods -- in the example above, I just steal the object from the client connection. If you aren't going to do that, I imagine you'll have to make one up, as the stack will want to do things like call getPeer() on it.
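If you don't want to steal a real transport, one way to "make one up" is Twisted's StringTransport test helper, which records writes and answers getPeer()/getHost(). A rough sketch (mine, not from the answer above) of feeding a request straight into a Site:

from twisted.test import proto_helpers
from twisted.web import server
from twisted.web.resource import Resource

class Hello(Resource):
    isLeaf = True
    def render_GET(self, request):
        return "hello\n"

site = server.Site(Hello())
proto = site.buildProtocol(('127.0.0.1', 0))     # address is only informational here
transport = proto_helpers.StringTransport()
proto.makeConnection(transport)
proto.dataReceived("GET / HTTP/1.0\r\n\r\n")
print transport.value()                          # the raw HTTP response bytes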
What is the use-case?
Perhaps you want to create your own Datagram Protocol
At the base, the place where you actually implement the protocol parsing and handling is the DatagramProtocol class. This class will usually be descended from twisted.internet.protocol.DatagramProtocol. Most protocol handlers inherit either from this class or from one of its convenience children. The DatagramProtocol class receives datagrams and can send them out over the network. Received datagrams include the address they were sent from, and when sending datagrams the address to send to must be specified.
If you want to see wire-level transmissions rather than inject them, install and run WireShark, the fantastic, free packet sniffer.