I have about 200 IP-controlled power bars that I need to connect to and control over TCP sockets. So far I can connect to a single power bar and control it without issue. Where my issue lies is in how to connect to all 200 from one client, keep the connections alive (ping/pong), and send and receive commands on each.
I have researched, hopefully exhaustively, and at most all I can find is a pointer towards select or Twisted - but only for multiple clients connecting to a single server (whereas I need the reverse). All I really need is a prod in the right direction. I can create the sockets for all 200, but I cannot for the life of me figure out how to connect to each device using its IP and port (60000) and send and receive the proper messages in a non-blocking manner.
Any pointers in the general direction would be greatly appreciated. Hopefully the answer will help someone else with a similar issue to solve. Thanks.
You're learning about non-blocking patterns in Python at the right time ;) There's a plethora of ways to do it, so I'm not really surprised you're confused. You've named Twisted, which is the most mature framework, and there's also asyncio, which is built into Python 3.4+. Pick whichever one is easiest for you to learn. As you can see, they're very similar in style.
asyncio_client.py
import asyncio
from uuid import uuid4


class Echo(asyncio.Protocol):
    def __init__(self):
        self.identity = uuid4().hex

    def connection_made(self, transport):
        message = '{}: hello world'.format(self.identity)
        transport.write(message.encode())

    def data_received(self, data):
        print(data.decode())


def echo_factory():
    return Echo()


async def connect_to_server(loop):
    await loop.create_connection(echo_factory, host='127.0.0.1', port=6000)


def main():
    loop = asyncio.get_event_loop()
    loop.create_task(connect_to_server(loop))
    loop.create_task(connect_to_server(loop))
    loop.create_task(connect_to_server(loop))
    loop.run_forever()


main()
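For your actual use case, the same pattern scales to all 200 devices: create one connection per device address and let the event loop drive them concurrently. A rough sketch (reusing echo_factory from above; the address list is a placeholder I made up, and port 60000 is taken from your description):

# Hypothetical sketch: connect to every power bar concurrently.
# DEVICE_IPS is a placeholder list; substitute your real device addresses.
DEVICE_IPS = ['192.168.1.{}'.format(i) for i in range(1, 201)]

async def connect_to_all(loop):
    conns = [loop.create_connection(echo_factory, host=ip, port=60000)
             for ip in DEVICE_IPS]
    # return_exceptions=True keeps one unreachable device from killing the rest
    await asyncio.gather(*conns, return_exceptions=True)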
twisted_client.py
from uuid import uuid4

from twisted.internet import endpoints, protocol, reactor


class Echo(protocol.Protocol):
    def __init__(self):
        self.identity = uuid4().hex

    def connectionMade(self):
        message = '{}: hello world'.format(self.identity)
        self.transport.write(message.encode())

    def dataReceived(self, data):
        print(data.decode())


def connect_to_server(factory):
    return endpoints.clientFromString(reactor, 'tcp:6000:host=127.0.0.1').connect(factory)


def main():
    factory = protocol.ClientFactory.forProtocol(Echo)
    connect_to_server(factory)
    connect_to_server(factory)
    connect_to_server(factory)
    reactor.run()


main()
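The same idea scales to all 200 devices in the Twisted version too: one client endpoint per device, all driven by the single reactor. A rough sketch (reusing the imports and factory from above; the address list is a placeholder):

# Hypothetical sketch: one outgoing client endpoint per power bar.
DEVICE_IPS = ['192.168.1.{}'.format(i) for i in range(1, 201)]  # placeholder addresses

def connect_to_all(factory):
    for ip in DEVICE_IPS:
        endpoints.clientFromString(
            reactor, 'tcp:60000:host={}'.format(ip)).connect(factory)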
I have a Twisted factory that builds a protocol derived from LineReceiver, listening on a TCP socket.
I wish to limit the number of client connections to the LineReceiver to 1 and have additional client requests 'refused'.
I have seen some other hints referring to how to make clients wait (How to limit the number of simultaneous connections in Twisted), but I would like to cleanly refuse any additional connections.
My code is a simplification taken from the previous URL on limiting simultaneous connections.
Do I need to refuse connections in the factory or build a protocol and then close the connection straight away?
EDIT: Aha. I think I solved it: when a connection is received in the protocol, calling self.transport.abortConnection() if buildCount > 1 seems to work.
Any suggestions/comments on whether this is the right way to do it? Perhaps it would be nicer to have a 'connection refused' rather than having the server accept the connection and then close it straight away.
from twisted.internet import reactor
from twisted.internet.protocol import Factory
from twisted.protocols.basic import LineReceiver


class myProtocol(LineReceiver):
    def __init__(self, factory):
        self.factory = factory

    def connectionMade(self):
        if self.factory.buildCount > 1:
            self.transport.abortConnection()
        else:
            self.sendLine("connection accepted".encode())

    def connectionLost(self, reason):
        # decrement count of connected clients
        self.factory.buildCount -= 1

    def lineReceived(self, line):
        self.sendLine("echo '{0}'".format(line.decode()).encode())


class myFactory(Factory):
    def __init__(self):
        """
        Factory only runs once
        """
        self.buildCount = 0

    def buildProtocol(self, addr):
        """
        Factory runs buildProtocol for each connection
        """
        self.buildCount += 1
        return myProtocol(self)


reactor.listenTCP(9000, myFactory())
reactor.run()
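An alternative I'm wondering about is refusing in the factory instead: from what I can see in the docs, returning None from buildProtocol() makes Twisted close the new connection immediately, so no protocol is ever built for the extra client. A rough, untested sketch of that idea:

# Rough sketch (untested): refuse in the factory rather than in the protocol.
# Returning None from buildProtocol should make Twisted drop the incoming
# connection straight away, so no protocol instance is created for it.
class myFactory(Factory):
    def __init__(self):
        self.buildCount = 0

    def buildProtocol(self, addr):
        if self.buildCount >= 1:
            return None  # refuse: the new connection is closed immediately
        self.buildCount += 1
        return myProtocol(self)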
I'm working on a project with Python, Twisted and Redis, and the team decided to use txredisapi for the communication between the Python modules and Redis. This project does a lot of different things, and we need to subscribe to several channels and listen for the messages Redis sends without stopping the other functionality (i.e. asynchronously).
Can one process handle all the work and listen for the messages sent by Redis at the same time, or must we separate the code and run it in different flows?
We use the following code to listen for messages:
import txredisapi as redis


class RedisListenerProtocol(redis.SubscriberProtocol):
    def connectionMade(self):
        self.subscribe("channelName")

    def messageReceived(self, pattern, channel, message):
        print "pattern=%s, channel=%s message=%s" % (pattern, channel, message)

    def connectionLost(self, reason):
        print "lost connection:", reason


class RedisListenerFactory(redis.SubscriberFactory):
    maxDelay = 120
    continueTrying = True
    protocol = RedisListenerProtocol
We try to listen for messages with:
self.connRedisChannels = yield redis.ConnectionPool()
I'd like to know how I can specify that the connection must use the RedisListenerFactory; then I guess the function messageReceived will be fired when a message arrives.
Any suggestions, examples or corrections will be appreciated.
Thanks!
The following code solves the problem:
from twisted.internet.protocol import ClientCreator
from twisted.internet import reactor
defer = ClientCreator(reactor, RedisListenerProtocol).connectTCP(HOST, PORT)
Thanks to Philippe T. for the help.
If you want to use redis.Connection() directly, maybe you can do this first:
redis.SubscriberFactory.protocol = RedisListenerProtocol
The package makes an internal call to its factory for the connection.
Another way is to rewrite the *Connection classes and make the *Connection factory use your factory.
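Or, bypassing redis.Connection() entirely, a rough sketch of wiring the RedisListenerFactory from the question straight into the reactor (hypothetical host and port):

from twisted.internet import reactor

# Hypothetical host/port; assumes the RedisListenerFactory defined in the question.
reactor.connectTCP("localhost", 6379, RedisListenerFactory())
# only needed if the reactor isn't already running elsewhere in your app
reactor.run()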
To make the connection in another part of your code you can do something like this:
from twisted.internet.protocol import ClientCreator
from twisted.internet import reactor
# somewhere:
defer = ClientCreator(reactor, RedisListenerProtocol).connectTCP(__HOST__, __PORT__)
# the Deferred will fire with your protocol instance once the connection is made
I have been asked to write a class that connects to a server, asynchronously sends the server various commands, and then provides the returned data to the client. I've been asked to do this in Python, which is a new language to me. I started digging around and found the Twisted framework which offers some very nice abstractions (Protocol, ProtocolFactory, Reactor) that do a lot of the things that I would have to do if I would roll my own socket-based app. It seems like the right choice given the problem that I have to solve.
I've looked through numerous examples on the web (mostly Krondo), but I still haven't seen a good example of creating a client that sends multiple commands across the wire while maintaining the connection I create. The server (which I have no control over) in this case doesn't disconnect after it sends the response. So, what's the proper way to design the client so that I can tickle the server in various ways?
Right now I do this:
from twisted.internet import reactor
from twisted.internet.protocol import Protocol, Factory


class TestProtocol(Protocol):
    def connectionMade(self):
        self.transport.write(self.factory.message)


class TestProtocolFactory(Factory):
    protocol = TestProtocol
    message = ''

    def setMessage(self, msg):
        self.message = msg


def main():
    f = TestProtocolFactory()
    f.setMessage("my message")
    reactor.connectTCP(...)
    reactor.run()
What I really want to do is call self.transport.write(...) via the reactor (really, call TestProtocolFactory::setMessage() on-demand from another thread of execution), not just when the connection is made.
It depends. Here are some possibilities; I'm assuming one of the following scenarios applies.
Approach 1. You have a list of commands to send to the server, and for some reason can't send them all at once. In that case, send a new one as the previous answer returns:
class proto(parentProtocol):
    def stringReceived(self, data):
        self.handle_server_response(data)
        next_command = self.command_queue.pop()
        # do stuff
Approach 2. What you send to the server is based on what the server sends you:
class proto(parentProtocol):
    def stringReceived(self, data):
        if data == "this":
            self.sendString("that")
        elif data == "foo":
            self.sendString("bar")
        # and so on
Approach 3. You don't care what the server sends you; you just want to periodically send some commands:
class proto(parentProtocol):
    def callback(self):
        next_command = self.command_queue.pop()
        # do stuff

    def connectionMade(self):
        from twisted.internet import task
        self.task_id = task.LoopingCall(self.callback)
        self.task_id.start(1.0)
Approach 4: Your edit now mentions triggering from another thread. Check the Twisted documentation before calling proto.sendString directly from another thread - Twisted's APIs are generally not thread-safe, and the documented way to hand work to the reactor from another thread is reactor.callFromThread. Approach 3 is threadsafe though. Just fill the queue (which is threadsafe) from another thread.
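A minimal sketch of handing the send over to the reactor thread (the function and argument names here are hypothetical; it assumes the protocol instance is reachable from your worker thread):

# Hypothetical sketch: trigger a send safely from another thread.
from twisted.internet import reactor

def send_from_worker_thread(proto, command):
    # callFromThread schedules the call to run in the reactor thread,
    # which is the only thread that should touch the transport.
    reactor.callFromThread(proto.sendString, command)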
Basically you can store any amount of state in your protocol; it will stay around until you are done. Then you either send commands to the server as a response to its messages to you, or you set up some scheduling to do your stuff. Or both.
You may want to use a Service.
Services are pieces of functionality within a Twisted app which are started and stopped, and are nice abstractions for other parts of your code to interact with. For example, in this case you might have a SayStuffToServerService (I know, terrible name, but without knowing more about its job it was the best I could do here :) ) that exposed something like this:
class SayStuffToServerService:
    def __init__(self, host, port):
        # this is the host and port to connect to
        pass

    def sendToServer(self, whatToSend):
        # send some line to the remote server
        pass

    def startService(self):
        # call me before using the service. starts outgoing connection efforts.
        pass

    def stopService(self):
        # clean reactor shutdowns should call this method. stops outgoing
        # connection efforts.
        pass
(That might be all the interface you need, but it should be fairly clear where you can add things to this.)
The startService() and stopService() methods here are just what Twisted's Services expose. And helpfully, there is a premade Twisted Service which acts like a TCP client and takes care of all the reactor stuff for you. It's twisted.application.internet.TCPClient, which takes arguments for a remote host and port, along with a ProtocolFactory to take care of handling the actual connection attempt.
Here is the SayStuffToServerService, implemented as a subclass of TCPClient:
from twisted.application import internet


class SayStuffToServerService(internet.TCPClient):
    factoryclass = SayStuffToServerProtocolFactory

    def __init__(self, host, port):
        self.factory = self.factoryclass()
        internet.TCPClient.__init__(self, host, port, self.factory)

    def sendToServer(self, whatToSend):
        # we'll do stuff here
        pass
(See below for the SayStuffToServerProtocolFactory.)
Using this Service architecture is convenient in a lot of ways; you can group Services together in one container, so that they all get stopped and started as one when you have different parts of your app that you want active. It may make good sense to implement other parts of your app as separate Services. You can set Services as child services of application - the magic name that twistd looks for in order to know how to initialize, daemonize, and shut down your app. Actually, yes, let's add some code to do that now.
from twisted.application import service
...
application = service.Application('say-stuff')
sttss = SayStuffToServerService('localhost', 65432)
sttss.setServiceParent(service.IServiceCollection(application))
That's all. Now when you run this module under twistd (i.e., for debugging, twistd -noy saystuff.py), that application will be started under the right reactor, and it will in turn start the SayStuffToServerService, which will start a connection effort to localhost:65432, which will use the service's factory attribute to set up the connection and the Protocol. You don't need to call reactor.run() or attach things to the reactor yourself anymore.
So we haven't implemented SayStuffToServerProtocolFactory yet. Since it sounds like you would prefer that your client reconnect if it has lost the connection (so that callers of sendToServer can usually just assume that there's a working connection), I'm going to put this protocol factory on top of ReconnectingClientFactory.
from twisted.internet import protocol


class SayStuffToServerProtocolFactory(protocol.ReconnectingClientFactory):
    _my_live_proto = None
    protocol = SayStuffToServerProtocol
This is a pretty nice minimal definition, which will keep trying to make outgoing TCP connections to the host and port we specified, and instantiate a SayStuffToServerProtocol each time. When we fail to connect, this class will do nice, well-behaved exponential backoff so that your network doesn't get hammered (you can set a maximum wait time). It will be the responsibility of the Protocol to assign to _my_live_proto and call this factory's resetDelay() method, so that exponential backoff will continue to work as expected. And here is that Protocol now:
class SayStuffToServerProtocol(basic.LineReceiver):
    def connectionMade(self):
        # if there are things you need to do on connecting to ensure the
        # connection is "all right" (maybe authenticate?) then do that
        # before calling:
        self.factory.resetDelay()
        self.factory._my_live_proto = self

    def connectionLost(self, reason):
        self.factory._my_live_proto = None
        del self.factory

    def sayStuff(self, stuff):
        self.sendLine(stuff)

    def lineReceived(self, line):
        # do whatever you want to do with incoming lines. often it makes sense
        # to have a queue of Deferreds on a protocol instance like this, and
        # each incoming response gets sent to the next queued Deferred (which
        # may have been pushed on the queue after sending some outgoing
        # message in sayStuff(), or whatever).
        pass
This is implemented on top of twisted.protocols.basic.LineReceiver, but would work as well with any other sort of Protocol, in case your protocol isn't line-oriented.
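As an aside, here is a minimal sketch of the queue-of-Deferreds idea mentioned in that lineReceived comment. The class name and the strictly in-order reply assumption are mine, not part of Twisted:

# Hypothetical sketch: pair each outgoing line with a Deferred that fires
# when the next response line arrives (assumes replies come back in order).
from collections import deque

from twisted.internet import defer
from twisted.protocols import basic


class QueuedLineProtocol(basic.LineReceiver):
    def connectionMade(self):
        self._pending = deque()  # Deferreds waiting on a reply, oldest first

    def sayStuff(self, stuff):
        d = defer.Deferred()
        self._pending.append(d)
        self.sendLine(stuff)
        return d  # fires with the server's reply line

    def lineReceived(self, line):
        if self._pending:
            self._pending.popleft().callback(line)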
The only thing left is hooking up the Service to the right Protocol instance. This is why the Factory keeps a _my_live_proto attribute, which should be set when a connection is successfully made, and cleared (set to None) when that connection is lost. Here's the new implementation of SayStuffToServerService.sendToServer:
class NotConnectedError(Exception):
    pass


class SayStuffToServerService(internet.TCPClient):
    ...

    def sendToServer(self, whatToSend):
        if self.factory._my_live_proto is None:
            # define here whatever behavior is appropriate when there is no
            # current connection (in case the client can't connect or
            # reconnect)
            raise NotConnectedError
        self.factory._my_live_proto.sayStuff(whatToSend)
And now to tie it all together in one place:
from twisted.application import internet, service
from twisted.internet import protocol
from twisted.protocols import basic


class SayStuffToServerProtocol(basic.LineReceiver):
    def connectionMade(self):
        # if there are things you need to do on connecting to ensure the
        # connection is "all right" (maybe authenticate?) then do that
        # before calling:
        self.factory.resetDelay()
        self.factory._my_live_proto = self

    def connectionLost(self, reason):
        self.factory._my_live_proto = None
        del self.factory

    def sayStuff(self, stuff):
        self.sendLine(stuff)

    def lineReceived(self, line):
        # do whatever you want to do with incoming lines. often it makes sense
        # to have a queue of Deferreds on a protocol instance like this, and
        # each incoming response gets sent to the next queued Deferred (which
        # may have been pushed on the queue after sending some outgoing
        # message in sayStuff(), or whatever).
        pass


class SayStuffToServerProtocolFactory(protocol.ReconnectingClientFactory):
    _my_live_proto = None
    protocol = SayStuffToServerProtocol


class NotConnectedError(Exception):
    pass


class SayStuffToServerService(internet.TCPClient):
    factoryclass = SayStuffToServerProtocolFactory

    def __init__(self, host, port):
        self.factory = self.factoryclass()
        internet.TCPClient.__init__(self, host, port, self.factory)

    def sendToServer(self, whatToSend):
        if self.factory._my_live_proto is None:
            # define here whatever behavior is appropriate when there is no
            # current connection (in case the client can't connect or
            # reconnect)
            raise NotConnectedError
        self.factory._my_live_proto.sayStuff(whatToSend)


application = service.Application('say-stuff')
sttss = SayStuffToServerService('localhost', 65432)
sttss.setServiceParent(service.IServiceCollection(application))
Hopefully that gives enough of a framework with which to start. There is sometimes a lot of plumbing to do to handle client disconnections just the way you want, or to handle out-of-order responses from the server, or handle various sorts of timeout, canceling pending requests, allowing multiple pooled connections, etc, etc, but this should help.
The Twisted framework is event-based; by its nature, methods are called asynchronously and results are obtained via Deferred objects.
The framework's nature is well suited to protocol development, but you have to change your mindset from traditional sequential programming. The Protocol class is like a finite state machine with events such as: connection made, connection lost, data received.
You can convert your client code into an FSM and then it will fit easily into the Protocol class.
Below is a rough example of what I want to express. A bit rough, but it's what I can provide for now:
from twisted.internet.protocol import Protocol

# simple connection states for the finite state machine
I_AM_LIVING = 'living'
WAITING_DEAD = 'waiting_dead'


class SyncTransport(Protocol):
    # protocol
    def dataReceived(self, data):
        print 'receive data', data

    def connectionMade(self):
        print 'i made a sync connection, wow'
        self.transport.write('x')
        self.state = I_AM_LIVING

    def connectionLost(self, reason):
        print 'i lost my sync connection, sigh'

    def send(self, data):
        if self.state == I_AM_LIVING:
            if data == 'x':
                self.transport.write('y')
            if data == 'y':
                self.transport.write('z')
                self.state = WAITING_DEAD
        if self.state == WAITING_DEAD:
            self.transport.loseConnection()
I am running an HTTP server using the twisted framework. Is there any way I can "manually" ask it to process some payload? For example, if I've constructed some Ethernet frame can I ask twisted's reactor to handle it just as if it had just arrived on my network card?
You can do something like this:
from twisted.web import server
from twisted.web.resource import Resource
from twisted.internet import reactor
from twisted.internet.protocol import Protocol, ClientFactory


class SomeWebThing(Resource):
    def render_GET(self, request):
        return "hello\n"


class SomeClient(Protocol):
    def dataReceived(self, data):
        p = self.factory.site.buildProtocol(self.transport.addr)
        p.transport = self.transport
        p.dataReceived(data)


class SomeClientFactory(ClientFactory):
    protocol = SomeClient

    def __init__(self, site):
        self.site = site


if __name__ == '__main__':
    root = Resource()
    root.putChild('thing', SomeWebThing())
    site = server.Site(root)
    reactor.listenTCP(8000, site)
    factory = SomeClientFactory(site)
    reactor.connectTCP('localhost', 9000, factory)
    reactor.run()
Save it as simpleinjecter.py. If you then do (from the command line):
echo -e "GET /thing HTTP/1.1\r\n\r\n" | nc -l 9000 # runs a server, ready to send req to first client connection
python simpleinjecter.py
it should work as expected, with the request from the nc server on port 9000 getting funneled as the payload into the twisted web server, and the response coming back as expected.
The key lines are in SomeClient.dataReceived(). You'll need a transport object with the right methods - in the example above, I just steal the object from the client connection. If you aren't going to do that, I imagine you'll have to make one up, as the stack will want to do things like call getPeer() on it.
What is the use-case?
Perhaps you want to create your own Datagram Protocol
At the base, the place where you actually implement the protocol parsing and handling is the DatagramProtocol class. This class will usually be descended from twisted.internet.protocol.DatagramProtocol. Most protocol handlers inherit either from this class or from one of its convenience children. The DatagramProtocol class receives datagrams and can send them out over the network. Received datagrams include the address they were sent from, and when sending datagrams the address to send to must be specified.
If you want to see wire-level transmissions rather than inject them, install and run Wireshark, the fantastic, free packet sniffer.