How to feed information to a Python daemon?

I have a Python daemon running on a Linux system.
I would like to feed it information such as "Bob" or "Alice" and have the daemon print "Hello Bob." and "Hello Alice." to a file.
This has to be asynchronous. The Python daemon has to wait for information and print it whenever it receives something.
What would be the best way to achieve this?
I was thinking about a named pipe or the Queue module, but there may be better solutions.

Here is how you can do it with a FIFO:

# receiver.py
import os
import atexit

# Set up the FIFO
thefifo = 'comms.fifo'
os.mkfifo(thefifo)

# Make sure to clean up after ourselves
def cleanup():
    os.remove(thefifo)
atexit.register(cleanup)

# Go into the reading loop
while True:
    with open(thefifo, 'r') as fifo:
        for line in fifo:
            print "Hello", line.strip()
You can use it like this from a shell session:
$ python receiver.py &
$ echo "Alice" >> comms.fifo
Hello Alice
$ echo "Bob" >> comms.fifo
Hello Bob

There are several options:
1) If the daemon should accept messages from other systems, make the daemon an RPC server, using XML-RPC or JSON-RPC.
2) If it is all local, you can use either TCP sockets or named pipes.
3) If there will be a huge number of clients connecting concurrently, you can use select.epoll.

Python has a built-in RPC library (using XML for data encoding). The documentation is well written and there is a complete example there:
https://docs.python.org/2.7/library/xmlrpclib.html (Python 2.7) or
https://docs.python.org/3.3/library/xmlrpc.server.html#module-xmlrpc.server (Python 3.3)
That may be worth considering.
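For instance, a minimal sketch of a greeter daemon with the Python 3 module might look like this (the port number and output file name are arbitrary choices for illustration, not anything the docs prescribe):

# server.py: the daemon side
from xmlrpc.server import SimpleXMLRPCServer

def greet(name):
    # append the greeting to a file, as in the question
    with open('greetings.txt', 'a') as f:
        f.write('Hello %s.\n' % name)
    return 'ok'

server = SimpleXMLRPCServer(('localhost', 8000), logRequests=False)
server.register_function(greet)
server.serve_forever()

# client.py: feed names to the daemon
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy('http://localhost:8000')
proxy.greet('Bob')
proxy.greet('Alice')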

Everyone has mentioned FIFOs (that's "named pipes" in Linux terminology) and XML-RPC, but if you are learning these things right now, you should check out TCP/UDP/Unix sockets as well, since they are platform independent (at least, TCP/UDP sockets are). You can check this tutorial for a working example, or the Python documentation if you want to go deeper in this direction. It's also useful because most modern communication platforms (XML-RPC, SOAP, REST) are built on these basics.
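For example, a plain TCP version of the greeter daemon could look roughly like this sketch using the standard library's socketserver module (Python 3; the port is an arbitrary choice):

# tcp_receiver.py: a minimal sketch, not a hardened server
from socketserver import StreamRequestHandler, ThreadingTCPServer

class GreetHandler(StreamRequestHandler):
    def handle(self):
        # rfile lets us read the connection line by line
        for line in self.rfile:
            print('Hello', line.decode().strip())

ThreadingTCPServer.allow_reuse_address = True
server = ThreadingTCPServer(('localhost', 5000), GreetHandler)
server.serve_forever()

You can then test it with, e.g., echo Bob | nc localhost 5000.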

There are a few mechanisms you could use, but everything boils down to IPC (inter-process communication).
The actual mechanism to use depends on the details of what you want to achieve; a good option, though, would be something like zmq.
Check the following example of pub/sub with zmq:
http://learning-0mq-with-pyzmq.readthedocs.org/en/latest/pyzmq/patterns/pubsub.html
and also this one, for the non-blocking way:
http://learning-0mq-with-pyzmq.readthedocs.org/en/latest/pyzmq/multisocket/zmqpoller.html
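A minimal pub/sub sketch along the lines of those examples (Python 3; the endpoint address is an assumption, and untested against your setup):

# daemon.py: SUB side, waits for names
import zmq

context = zmq.Context()
sub = context.socket(zmq.SUB)
sub.bind('tcp://127.0.0.1:5555')
sub.setsockopt_string(zmq.SUBSCRIBE, '')  # subscribe to all messages
while True:
    print('Hello', sub.recv_string())

# sender.py: PUB side, run from another process
import time
import zmq

pub = zmq.Context().socket(zmq.PUB)
pub.connect('tcp://127.0.0.1:5555')
time.sleep(0.2)  # PUB/SUB has a "slow joiner" problem; give the connection a moment
pub.send_string('Bob')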

I'm not good at Python, so I would like to share a universal approach to inter-process communication.
nc, a.k.a. netcat, is a client-server program that allows you to send data such as text and files over the network.
Advantages of nc:
Very easy to use
IPC, even between programs written in different languages
Preinstalled on most Linux distributions
Example:
On the daemon:
nc -l 1234 > output.txt
From another program, shell/terminal/script:
echo HELLO | nc 127.0.0.1 1234
nc can be driven from Python using the system-command functions (e.g. os.system or subprocess) and reading the stdout.
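Presumably that last sentence means something like the following sketch, which shells out to nc with the subprocess module (the address and message are just examples; the -q 0 flag is the traditional-netcat way to exit once stdin is exhausted, other variants use -N instead):

# send one line to a daemon that is running "nc -l 1234 > output.txt"
import subprocess

subprocess.run(['nc', '-q', '0', '127.0.0.1', '1234'], input=b'HELLO\n')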

Why not use signals?
I am not a Python programmer, but presumably you can register a signal handler within your daemon and then signal it from the terminal. Just use SIGUSR1, SIGUSR2, SIGHUP, or similar.
This is the usual method for telling a daemon to do things like rotate its logfiles.
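A minimal sketch of what that might look like (note that a plain signal carries no payload, so this suits fixed commands like "report status" or "reopen the logfile" rather than passing names like "Bob"):

# daemon.py: react to SIGUSR1
import os
import signal
import time

def handle_usr1(signum, frame):
    # whatever the daemon should do on demand goes here
    print('got SIGUSR1')

signal.signal(signal.SIGUSR1, handle_usr1)
print('PID:', os.getpid())
while True:
    time.sleep(1)  # stand-in for the daemon's real work loop

Then trigger it from a shell with kill -USR1 <pid>.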

One solution could be to use the asynchat library, which simplifies calls between a server and a client.
Here is an example you could use (adapted from this site).
In daemon.py, a ChatServer object is created. Each time a connection is made, a ChatHandler object is created, inheriting from asynchat.async_chat. This object collects incoming data into self.buffer.
When a special string called the terminator is encountered, the data is considered complete and the found_terminator method is called. It is in this method that you write your own code.
In sender.py, you create a ChatClient object, inheriting from asynchat.async_chat, set up the connection in the constructor, define the terminator (in case the server answers!), and call the push method to send your data. You must append the terminator string to your data so that the server knows when it can stop reading.
daemon.py:

import asynchat
import asyncore
import socket

# Terminator string can be changed here
TERMINATOR = '\n'

class ChatHandler(asynchat.async_chat):
    def __init__(self, sock):
        asynchat.async_chat.__init__(self, sock=sock)
        self.set_terminator(TERMINATOR)
        self.buffer = []

    def collect_incoming_data(self, data):
        self.buffer.append(data)

    def found_terminator(self):
        msg = ''.join(self.buffer)
        # Change here what the daemon is supposed to do when a message is retrieved
        print 'Hello', msg
        self.buffer = []

class ChatServer(asyncore.dispatcher):
    def __init__(self, host, port):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.bind((host, port))
        self.listen(5)

    def handle_accept(self):
        pair = self.accept()
        if pair is not None:
            sock, addr = pair
            print 'Incoming connection from %s' % repr(addr)
            handler = ChatHandler(sock)

server = ChatServer('localhost', 5050)
print 'Serving on localhost:5050'
asyncore.loop()
sender.py:

import asynchat
import asyncore
import socket

# Terminator string can be changed here
TERMINATOR = '\n'

class ChatClient(asynchat.async_chat):
    def __init__(self, host, port):
        asynchat.async_chat.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.connect((host, port))
        self.set_terminator(TERMINATOR)
        self.buffer = []

    def collect_incoming_data(self, data):
        pass

    def found_terminator(self):
        pass

client = ChatClient('localhost', 5050)

# Data sent from here
client.push("Bob" + TERMINATOR)
client.push("Alice" + TERMINATOR)

# the asyncore loop must run for the pushed data to actually be sent
asyncore.loop()

Related

Writing an external program to interface with wpa_supplicant

I need to interact directly with wpa_supplicant from Python. As I understand it, one can connect to wpa_supplicant using Unix sockets and wpa_supplicant's control interface (https://w1.fi/wpa_supplicant/devel/ctrl_iface_page.html).
I wrote a simple program that sends a PING command:
import socket

CTRL_SOCKETS = "/home/victor/Research/wpa_supplicant_python/supplicant_conf"
INTERFACE = "wlx84c9b281aa80"
SOCKETFILE = "{}/{}".format(CTRL_SOCKETS, INTERFACE)

s = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
s.connect(SOCKETFILE)
s.send(b'PING')
while 1:
    data = s.recv(1024)
    if data:
        print(repr(data))
But when I run it, wpa_supplicant reports an error:
wlx84c9b281aa80: ctrl_iface sendto failed: 107 - Transport endpoint is not connected
Could someone please provide an example of how you would do a 'scan' and then print the 'scan_results'?
Apparently, the type of socket that wpa_supplicant uses (UNIX datagram) does not provide any way for the server to reply. There are a few ways to get around that. wpa_supplicant in particular seems to support replies through a separate socket (found at a path appended to the end of each message).
Weirdly enough, this seems to be a relatively common practice in Linux: /dev/log seems to work in the same way.
Here's a program that does what you asked for:
import socket, os
from time import sleep

def sendAndReceive(outmsg, csock, ssock_filename):
    '''Sends outmsg to wpa_supplicant and returns the reply'''
    # the return socket object can be used to send the data
    # as long as the address is provided
    csock.sendto(str.encode(outmsg), ssock_filename)
    (reply, address) = csock.recvfrom(4096)
    inmsg = reply.decode('utf-8')
    return inmsg

wpasock_file = '/var/run/wpa_supplicant/wlp3s0'
retsock_file = '/tmp/return_socket'

if os.path.exists(retsock_file):
    os.remove(retsock_file)
retsock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
retsock.bind(retsock_file)

replyToScan = sendAndReceive('SCAN', retsock, wpasock_file)
print(f'SCAN: {replyToScan}')
sleep(5)
replyToScanResults = sendAndReceive('SCAN_RESULTS', retsock, wpasock_file)
print(f'SCAN_RESULTS: {replyToScanResults}')
retsock.close()
os.remove(retsock_file)

Interacting with long-running python script

I have a long-running Python script which collects tweets from Twitter, and I would like to know how it's doing every once in a while.
Currently, I am using the signal library to catch interrupts, at which point I call my print function. Something like this:
import os
import signal
import functools

def print_info(count, signum=None, frame=None):
    # signal handlers are called with (signum, frame); accept and ignore them
    print "#Tweets:", count

# Print out the process ID so I can interrupt it for info
print 'PID:', os.getpid()

# Start listening for interrupts
signal.signal(signal.SIGUSR1, functools.partial(print_info, tweet_count))
And whenever I want my info, I open up a new terminal and issue my interrupt:
$ kill -USR1 <pid>
Is there a better way to do this? I am aware I could have my script print something at scheduled intervals, but I am more interested in knowing on demand, and potentially issuing other commands as well.
Sending a signal to the process interrupts whatever the process is doing. Below you will find an approach that uses a dedicated thread to emulate a Python console. The console is exposed as a Unix socket.
import sys
import socket
import os
import threading
from code import InteractiveConsole
from logging import getLogger

# template used to generate the socket file name
SOCK_FILE_TEMPLATE = '%(dir)s/%(prefix)s-%(pid)d.socket'

log = getLogger(__name__)

class SocketConsole(object):
    '''
    Ported from :eventlet.backdoor.SocketConsole:.
    '''
    def __init__(self, locals, conn, banner=None):  # pylint: disable=W0622
        self.locals = locals
        self.desc = _fileobject(conn)
        self.banner = banner
        self.saved = None

    def switch(self):
        self.saved = sys.stdin, sys.stderr, sys.stdout
        sys.stdin = sys.stdout = sys.stderr = self.desc

    def switch_out(self):
        sys.stdin, sys.stderr, sys.stdout = self.saved

    def finalize(self):
        self.desc = None

    def _run(self):
        try:
            console = InteractiveConsole(self.locals)
            # __builtins__ may either be the __builtin__ module or
            # __builtin__.__dict__; in the latter case typing
            # locals() at the backdoor prompt spews out lots of
            # useless stuff
            import __builtin__
            console.locals["__builtins__"] = __builtin__
            console.interact(banner=self.banner)
        except SystemExit:  # raised by quit()
            sys.exc_clear()
        finally:
            self.switch_out()
            self.finalize()

class _fileobject(socket._fileobject):
    def write(self, data):
        self._sock.sendall(data)

    def isatty(self):
        return True

    def flush(self):
        pass

    def readline(self, *a):
        return socket._fileobject.readline(self, *a).replace("\r\n", "\n")

def make_threaded_backdoor(prefix=None):
    '''
    :return: started daemon thread running :main_loop:
    '''
    socket_file_name = _get_filename(prefix)
    db_thread = threading.Thread(target=main_loop, args=(socket_file_name,))
    db_thread.setDaemon(True)
    db_thread.start()
    return db_thread

def _get_filename(prefix):
    return SOCK_FILE_TEMPLATE % {
        'dir': '/var/run',
        'prefix': prefix,
        'pid': os.getpid(),
    }

def main_loop(socket_filename):
    try:
        log.debug('Binding backdoor socket to %s', socket_filename)
        check_socket(socket_filename)
        sockobj = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sockobj.bind(socket_filename)
        sockobj.listen(5)
    except Exception, e:
        log.exception('Failed to init backdoor socket %s', e)
        return
    while True:
        conn = None
        try:
            conn, _ = sockobj.accept()
            console = SocketConsole(locals=None, conn=conn, banner=None)
            console.switch()
            console._run()
        except IOError:
            log.debug('IOError closing connection')
        finally:
            if conn:
                conn.close()

def check_socket(socket_filename):
    try:
        os.unlink(socket_filename)
    except OSError:
        if os.path.exists(socket_filename):
            raise
Example program:

make_threaded_backdoor(prefix='test')

while True:
    pass
Example session:
mmatczuk@cactus:~$ rlwrap nc -U /var/run/test-3196.socket
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> import os
>>> os.getpid()
3196
>>> quit()
mmatczuk@cactus:~$
This is a pretty robust tool that can be used to:
dump threads,
inspect process memory,
attach a debugger on demand (the pydev debugger works with both Eclipse and PyCharm),
force GC,
monkeypatch function definitions on the fly,
and even more.
I personally write the information to a file so that I have it afterwards, although this has the disadvantage of perhaps being slightly slower, because it has to write to the file every time (or every few times) it retrieves a tweet.
Anyway, if you write it to a file "output.txt", you can open up bash and either type tail output.txt for the last 10 lines printed to the file, or tail -f output.txt, which continuously updates the terminal with the lines being written to the file. If you wish to stop, just press Ctrl-C.
Here's an example long-running program that also maintains a status socket. When a client connects to the socket, the script writes some status information to the socket.
#!/usr/bin/python

import os
import random
import threading
import socket
import time
import select

val1 = 0
val2 = 0
lastupdate = 0
quit = False

# This function runs in a separate thread. When a client connects,
# we write out some basic status information, close the client socket,
# and wait for the next connection.
def connection_handler(sock):
    global val1, val2, lastupdate, quit
    while not quit:
        # We use select() with a timeout here so that we are able to catch the
        # quit flag in a timely manner.
        rlist, wlist, xlist = select.select([sock], [], [], 0.5)
        if not rlist:
            continue
        client, clientaddr = sock.accept()
        client.send('%s %s %s\n' % (lastupdate, val1, val2))
        client.close()

# This function starts the listener thread.
def start_listener():
    sock = socket.socket(socket.AF_UNIX)
    try:
        os.unlink('/var/tmp/myprog.socket')
    except OSError:
        pass
    sock.bind('/var/tmp/myprog.socket')
    sock.listen(5)
    t = threading.Thread(
        target=connection_handler,
        args=(sock,))
    t.start()

def main():
    global val1, val2, lastupdate
    start_listener()
    # Here is the part of our script that actually does "work".
    while True:
        print 'updating...'
        lastupdate = time.time()
        val1 = val1 + random.randint(1, 10)
        val2 = val2 + random.randint(100, 200)
        print 'sleeping...'
        time.sleep(5)

if __name__ == '__main__':
    try:
        main()
    except (Exception, KeyboardInterrupt, SystemExit):
        quit = True
        raise
You could write a simple Python client to connect to the socket, or you could use something like socat:

$ socat - unix:/var/tmp/myprog.socket
1403061693.06 6 152
I wrote a similar application before.
Here is what I did:
When only a few commands are needed, I just use signals, as you did, to keep things simple. By "command" I mean something that you want your application to do, such as print_info in your post.
But as the application was updated, more commands were needed, and I began to use a dedicated thread listening on a socket port (or reading a local file) to accept commands. Suppose the application needs to support print_info1, print_info2, and print_info3; then a client can connect to the target port and write print_info1 to make the application execute that command (or just write print_info1 to a local file, if you are using the file-reading mechanism, as sketched below).
With the socket-listening mechanism, the disadvantage is that it takes a bit more work to write a client for giving commands; the advantage is that you can give orders from anywhere.
With the file-reading mechanism, the disadvantage is that the thread has to check the file in a loop, which uses a bit of resource; the advantage is that giving orders is very simple (just write a string to a file) and you don't need to write a client or a socket listening server.
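A minimal sketch of the file-reading variant described above (the file name, poll interval, and command names are arbitrary):

# command_file_watcher.py: poll a local file for commands
import os
import threading
import time

COMMAND_FILE = 'commands.txt'

def watch_commands():
    while True:
        if os.path.exists(COMMAND_FILE):
            with open(COMMAND_FILE) as f:
                for command in f:
                    if command.strip() == 'print_info1':
                        print('executing print_info1')
            os.remove(COMMAND_FILE)  # consume the file so commands run only once
        time.sleep(1)  # the polling loop that costs "a bit of resource"

threading.Thread(target=watch_commands, daemon=True).start()

Giving an order is then just: echo print_info1 >> commands.txt.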
rpyc is the perfect tool for this task.
In short, you define an rpyc.Service class which exposes the commands you want, and start an rpyc server thread.
Your client then connects to your process and calls the methods that are mapped to the commands your service exposes.
It's as simple and clean as that. No need to worry about sockets, signals, or object serialization.
It has other cool features as well, for example the protocol being symmetric.
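A rough sketch of the shape this takes (the service name, port, and tweet_count variable are illustrative, not rpyc requirements):

# server side: runs inside the long-lived script
import rpyc
from rpyc.utils.server import ThreadedServer

tweet_count = 0  # hypothetical counter that your main loop would update

class InfoService(rpyc.Service):
    def exposed_print_info(self):
        # methods named exposed_* are callable by remote clients
        return '#Tweets: %d' % tweet_count

# start() blocks, so in the real script you would run it in a daemon thread
ThreadedServer(InfoService, port=18861).start()

# client side: ask the running process how it is doing
conn = rpyc.connect('localhost', 18861)
print(conn.root.print_info())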
Your question relates to inter-process communication. You can achieve this by communicating over a Unix socket or TCP port, by using shared memory, or by using a message queue or cache system such as RabbitMQ or Redis.
This post talks about using mmap to achieve shared-memory inter-process communication.
Here's how to get started with Redis and RabbitMQ; both are rather simple to set up.
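For example, with the redis-py client, a subscribe loop for the daemon looks roughly like this (the channel name is arbitrary; assumes a Redis server running on localhost):

# daemon side: block until someone publishes a name
import redis

r = redis.Redis()
p = r.pubsub()
p.subscribe('names')
for message in p.listen():
    if message['type'] == 'message':
        print('Hello', message['data'].decode())

# sender side, from any other process:
redis.Redis().publish('names', 'Bob')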

How can I write a socket server in a different thread from my main program (using gevent)?

I'm developing a Flask/gevent WSGIServer web server that needs to communicate (in the background) with a hardware device over two sockets, using XML.
One socket is initiated by the client (my application) and I can send XML commands to the device. The device answers on a different port and sends back information that my application has to confirm. So my application has to listen to this second port.
Up until now I have issued a command, opened the second port as a server, waited for a response from the device and closed the second port.
The problem is that it's possible that the device sends multiple responses that I have to confirm. So my solution was to keep the port open and keep responding to incoming requests. However, in the end the device is done sending requests, and my application is still listening (I don't know when the device is done), thereby blocking everything else.
This seemed like a perfect use case for a thread, so that my application launches a listening server in a separate thread. Because I'm already using gevent as a WSGI server for Flask, I can use the greenlets.
The problem is, I have looked for a good example of such a thing, but all I can find is examples of multi-threading handlers for a single socket server. I don't need to handle a lot of connections on the socket server, but I need it launched in a separate thread so it can listen for and handle incoming messages while my main program can keep sending messages.
The second problem I'm running into is that in the server, I need to use some methods from my "main" class. Being relatively new to Python I'm unsure how to structure it in a way to make that possible.
class Device(object):
    def __init__(self, ...):
        self.clientsocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    def _connect_to_device(self):
        print "OPEN CONNECTION TO DEVICE"
        try:
            self.clientsocket.connect((self.ip, 5100))
        except socket.error as e:
            pass

    def _disconnect_from_device(self):
        print "CLOSE CONNECTION TO DEVICE"
        self.clientsocket.close()

    def deviceaction1(self, ...):
        # the data that is sent is an XML document that depends on the
        # parameters of this method.
        self._connect_to_device()
        self._send_data(XMLdoc)
        self._wait_for_response()
        return True

    def _send_data(self, data):
        print "SEND:"
        print(data)
        self.clientsocket.send(data)

    def _wait_for_response(self):
        print "WAITING FOR REQUESTS FROM DEVICE (CHANNEL 1)"
        self.serversocket.bind(('10.0.0.16', 5102))
        self.serversocket.listen(5)  # listen for answer, maximum 5 connections
        connection, address = self.serversocket.accept()
        # the data is of a specific length I can calculate
        if len(data) > 0:
            self._process_response(data)
        self.serversocket.close()

    def _process_response(self, data):
        print "RECEIVED:"
        print(data)
        # here is some code that processes the incoming data and
        # responds to the device
        # this may or may not result in more incoming data

if __name__ == '__main__':
    machine = Device(ip="10.0.0.240")
    machine.deviceaction1(...)
This is (globally; I left out sensitive information) what I'm doing now. As you can see, everything is sequential.
If anyone can provide an example of a listening server in a separate thread (preferably using greenlets) and a way to communicate from the listening server back to the spawning thread, it would be of great help.
Thanks.
EDIT:
After trying several methods, I decided to use Python's select() to solve this problem. That worked, so my question regarding the use of threads is no longer relevant. Thanks to the people who provided input, for your time and effort.
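For reference, the select()-based shape the edit describes is roughly this: poll the listening socket with a timeout so the main loop is never stuck in accept() (the address and timeout are examples carried over from the question):

import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('10.0.0.16', 5102))
server.listen(5)

while True:
    # wait at most 0.5 s for the device to connect back
    readable, _, _ = select.select([server], [], [], 0.5)
    if readable:
        connection, address = server.accept()
        data = connection.recv(4096)
        # process_response(data) would go here
        connection.close()
    # ... meanwhile the loop is free to send new commands ...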
Hope this can provide some help. In the example class below, if we call the tenMessageSender function, it fires up an async thread without blocking the main loop, and _zmqBasedListener then listens on a separate port for as long as that thread is alive. Whatever messages tenMessageSender sends will be received by the client, which responds back to _zmqBasedListener.
Server side:

import threading
import zmq
import sys

class Example:
    def __init__(self):
        self.context = zmq.Context()
        self.publisher = self.context.socket(zmq.PUB)
        self.publisher.bind('tcp://127.0.0.1:9997')
        self.subscriber = self.context.socket(zmq.SUB)
        self.thread = threading.Thread(target=self._zmqBasedListener)

    def _zmqBasedListener(self):
        self.subscriber.connect('tcp://127.0.0.1:9998')
        self.subscriber.setsockopt(zmq.SUBSCRIBE, "some_key")
        while True:
            message = self.subscriber.recv()
            print message
        sys.exit()

    def tenMessageSender(self):
        self._decideListener()
        for message in range(10):
            self.publisher.send("testid : %d: I am a task" % message)

    def _decideListener(self):
        if not self.thread.is_alive():
            print "STARTING THREAD"
            self.thread.start()
Client:

import zmq

context = zmq.Context()
subscriber = context.socket(zmq.SUB)
subscriber.connect('tcp://127.0.0.1:9997')
publisher = context.socket(zmq.PUB)
publisher.bind('tcp://127.0.0.1:9998')
subscriber.setsockopt(zmq.SUBSCRIBE, "testid")

count = 0
print "Listener"
while True:
    message = subscriber.recv()
    print message
    publisher.send('some_key : Message received %d' % count)
    count += 1
Instead of a thread you can use a greenlet, etc.
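With gevent that swap might look like this (a sketch; note that plain blocking zmq calls won't yield to other greenlets, so you would want pyzmq's green variant or gevent monkey-patching):

import gevent
import zmq.green as zmq  # green sockets cooperate with gevent's event loop

# ... build the Example object against this zmq module, then:
example = Example()
gevent.spawn(example._zmqBasedListener)
gevent.sleep(0)  # yield so the listener greenlet gets a chance to start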

How to achieve tcpflow functionality (follow tcp stream) purely within python

I am writing a tool in Python (the platform is Linux); one of the tasks is to capture a live TCP stream and apply a function to each line. Currently I'm using:
import subprocess

proc = subprocess.Popen(['sudo', 'tcpflow', '-C', '-i', interface, '-p', 'src', 'host', ip],
                        stdout=subprocess.PIPE)
for line in iter(proc.stdout.readline, ''):
    do_something(line)
This works quite well (with the appropriate entry in /etc/sudoers), but I would like to avoid calling an external program.
So far I have looked into the following possibilities:
flowgrep: a Python tool which looks just like what I need, BUT it uses pynids internally, which is 7 years old and seems pretty much abandoned. There is no pynids package for my Gentoo system, and it ships with a patched version of libnids which I couldn't compile without further tweaking.
scapy: this is a packet manipulation program/library for Python; I'm not sure if TCP stream reassembly is supported.
pypcap or pylibpcap, as wrappers for libpcap. Again, libpcap is for packet capturing, whereas I need stream reassembly, which is not possible according to this question.
Before I dive deeper into any of these libraries, I would like to know if someone has a working code snippet (this seems like a rather common problem). I would also be grateful for advice about the right way to go.
Thanks
Jon Oberheide has led efforts to maintain pynids, which is fairly up to date, at:
http://jon.oberheide.org/pynids/
So this might permit you to further explore flowgrep. pynids itself handles stream reconstruction rather elegantly. See http://monkey.org/~jose/presentations/pysniff04.d/ for some good examples.
Just as a follow-up: I abandoned the idea to monitor the stream on the tcp layer. Instead I wrote a proxy in python and let the connection I want to monitor (a http session) connect through this proxy. The result is more stable and does not need root privileges to run. This solution depends on pymiproxy.
This goes into a standalone program, e.g. helper_proxy.py
from multiprocessing.connection import Listener
import StringIO
from httplib import HTTPResponse
import threading
import time
from miproxy.proxy import RequestInterceptorPlugin, ResponseInterceptorPlugin, AsyncMitmProxy

class FakeSocket(StringIO.StringIO):
    def makefile(self, *args, **kw):
        return self

class Interceptor(RequestInterceptorPlugin, ResponseInterceptorPlugin):
    conn = None

    def do_request(self, data):
        # do whatever you need to with the request data here; I'm only interested in responses
        return data

    def do_response(self, data):
        if Interceptor.conn:  # if the listener is connected, send the response to it
            response = HTTPResponse(FakeSocket(data))
            response.begin()
            Interceptor.conn.send(response.read())
        return data

def main():
    proxy = AsyncMitmProxy()
    proxy.register_interceptor(Interceptor)
    ProxyThread = threading.Thread(target=proxy.serve_forever)
    ProxyThread.daemon = True
    ProxyThread.start()
    print "Proxy started."
    address = ('localhost', 6000)  # family is deduced to be 'AF_INET'
    listener = Listener(address, authkey='some_secret_password')
    while True:
        Interceptor.conn = listener.accept()
        print "Accepted Connection from", listener.last_accepted
        try:
            Interceptor.conn.recv()
        except:
            time.sleep(1)
        finally:
            Interceptor.conn.close()

if __name__ == '__main__':
    main()
Start with python helper_proxy.py. This will create a proxy listening for http connections on port 8080 and listening for another python program on port 6000. Once the other python program has connected on that port, the helper proxy will send all http replies to it. This way the helper proxy can continue to run, keeping up the http connection, and the listener can be restarted for debugging.
Here is how the listener works, e.g. listener.py:
from multiprocessing.connection import Client

def main():
    address = ('localhost', 6000)
    conn = Client(address, authkey='some_secret_password')
    while True:
        print conn.recv()

if __name__ == '__main__':
    main()
This will just print all the replies. Now point your browser to the proxy running on port 8080 and establish the http connection you want to monitor.

Twisted - how to create multi protocol process and send the data between the protocols

I'm trying to write a program that listens for data (simple text messages) on some port (say TCP 6666) and then passes them to one or more different protocols: IRC, XMPP, and so on. I've tried many approaches and dug through the Internet, but I can't find an easy, working solution for such a task.
The code I am currently fighting with is here: http://pastebin.com/ri7caXih
I would like to know how, from an object like:
ircf = ircFactory('asdfasdf', '#asdf666')
I can get access to the protocol methods, because this:
self.protocol.dupa1(msg)
returns an error about self not being passed to the active protocol object. Or maybe there is another, better, easier, and more kosher way to create a single reactor with multiple protocols, have actions triggered when a message arrives on any of them, and then pass that message to the other protocols for handling/processing/sending?
Any help will be highly appreciated!
Here is sample code that reads from multiple connections on port 9001 and writes out to a connection on port 9000. You would need multiple "PutLine" implementations, one each for XMPP, IRC, MSN, etc.
I used a global to store the output connection PutLine, but you would want to create a more complex Factory object that handles this instead.
#!/usr/bin/env python

from twisted.internet.protocol import Protocol, Factory
from twisted.internet.endpoints import clientFromString, serverFromString
from twisted.protocols.basic import LineReceiver
from twisted.internet import reactor

queue = []
putter = None

class GetLine(LineReceiver):
    delimiter = '\n'

    def lineReceived(self, line):
        queue.append(line)
        putter.have_data()
        self.sendLine(line)

class PutLine(LineReceiver):
    def __init__(self):
        global putter
        putter = self
        print 'putline init called %s' % str(self)

    def have_data(self):
        line = queue.pop()
        self.sendLine(line)

def main():
    f = Factory()
    f.protocol = PutLine
    endpoint = clientFromString(reactor, "tcp:host=localhost:port=9000")
    endpoint.connect(f)
    f = Factory()
    f.protocol = GetLine
    endpoint2 = serverFromString(reactor, "tcp:port=9001")
    endpoint2.listen(f)
    reactor.run()

if __name__ == '__main__':
    main()
Testing:

nc -l 9000
python test.py
nc localhost 9001

Data entered from any number of nc localhost 9001 (or netcat) sessions will appear on nc -l 9000.
This is answered in the FAQ.
http://twistedmatrix.com/trac/wiki/FrequentlyAskedQuestions#HowdoImakeinputononeconnectionresultinoutputonanother
See doc/core/examples/chatserver.py. There, hooks have been added to the Protocol's connectionMade and connectionLost methods to maintain a list of connected clients; when a message arrives, the code iterates through all of them to pass it on.
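The pattern the FAQ and chatserver.py describe boils down to something like this sketch (the class name and port are illustrative, not taken from the example):

from twisted.internet import reactor
from twisted.internet.protocol import Factory
from twisted.protocols.basic import LineReceiver

class Relay(LineReceiver):
    def connectionMade(self):
        # maintain the list of connected clients on the shared factory
        self.factory.clients.append(self)

    def connectionLost(self, reason):
        self.factory.clients.remove(self)

    def lineReceived(self, line):
        # when a message arrives, iterate through the others and pass it on
        for client in self.factory.clients:
            if client is not self:
                client.sendLine(line)

factory = Factory()
factory.protocol = Relay
factory.clients = []
reactor.listenTCP(6666, factory)
reactor.run()

In a real multi-protocol setup, each protocol's factory would share the same client list so that a message arriving on one protocol can be forwarded to the others.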
