I'm looking for a way to do the equivalent of Perl's HTTP::Async module's next_response method
The HTTP::Async module doesn't spawn any background threads, nor does it use any callbacks. Instead, every time anyone (in my case, the main thread) calls next_response on the object, all the data that has been received by the OS so far is read (blocking, but instantaneous since it only processes data that's already been received). If this is the end of the response, then next_response returns an HTTP::Response object, otherwise it returns undef.
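In raw-socket terms, the primitive I'm describing behaves roughly like the sketch below (this is only to illustrate "read whatever the OS already has, never block"; it is not how HTTP::Async is implemented and it does no HTTP parsing):
import select

def read_available(sock, buf):
    # Drain whatever the OS has already buffered for this socket, without blocking.
    while select.select([sock], [], [], 0)[0]:   # zero timeout: pure readiness check
        chunk = sock.recv(4096)
        if not chunk:            # peer closed the connection, so the response is complete
            return buf, True
        buf += chunk
    return buf, False            # nothing more buffered; caller polls again later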
Usage of this module looks something like (pseudocode):
request = HTTP::Async(url)
do:
response = request->next_response()
if not response:
sleep 5 # or process events or whatever
while not response
# Do things with response
As far as I can see, neither Python's urllib nor http.client supports this style. As for why I want to do it in this style:
This is for an embedded Python environment where I can't spawn threads, nor have Python spawn any.
I'm restricted to a single thread that is actually the embedding application's thread. This means I cannot have any delayed callbacks either - the application decides when to let my Python code run. All I can do is request the embedding application to invoke a callback of my choosing every 50 milliseconds, say.
Is there a way to do this in Python?
For reference, this is an example of the Perl code I have right now and that I'm looking to port to Python:
httpAsync = HTTP::Async->new()
sub httpRequestAsync {
my ($url, $callback) = @_; # $callback will be called with the response text
$httpAsync->add(new HTTP::Request(GET => $url));
# create_timer causes the embedding application to call the supplied callback every 50ms
application::create_timer(50, sub {
my $timer_result = application::keep_timer;
my $response = $httpAsync->next_response;
if ($response) {
my $responseText = $response->decoded_content;
if ($responseText) {
$callback->($responseText);
}
$timer_result = application::remove_timer;
}
# Returning application::keep_timer will preserve the timer to be called again.
# Returning application::remove_timer will remove the timer.
return $timer_result;
});
}
httpRequestAsync('http://www.example.com/', sub {
my $responseText = $_[0];
application::display($responseText);
});
Edit: Given that this is for an embedded Python instance, I'll take all the alternatives I can get (part of the standard library or otherwise) as I'll have to evaluate all of them to make sure they can run under my particular constraints.
Note: If you're interested in only retrieving data when YOU ask for it, simply add a flag to handle_read and check it in the sleep block inside handle_read, so you only get data when you call your function.
#!/usr/bin/python
# -*- coding: iso-8859-15 -*-
import asyncore, errno
from socket import AF_INET, SOCK_STREAM
from time import sleep
class sender():
def __init__(self, sock_send):
self.s = sock_send
self.bufferpos = 0
self.buffer = {}
self.alive = 1
def send(self, what):
self.buffer[len(self.buffer)] = what
def writable(self):
return (len(self.buffer) > self.bufferpos)
def run(self):
while self.alive:
if self.writable():
logout = str([self.buffer[self.bufferpos]])
self.s(self.buffer[self.bufferpos])
self.bufferpos += 1
sleep(0.01)
class SOCK(asyncore.dispatcher):
def __init__(self, _s=None, config=None):
self.conf = config
        # Thread.__init__(self)  # leftover from the threaded project this was lifted from; not needed here
self._s = _s
self.inbuffer = ''
#self.buffer = ''
self.lockedbuffer = False
self.is_writable = False
self.autounlockAccounts = {}
if _s:
asyncore.dispatcher.__init__(self, _s)
self.sender = sender(self.send)
else:
asyncore.dispatcher.__init__(self)
self.create_socket(AF_INET, SOCK_STREAM)
#if self.allow_reuse_address:
            # self.set_reuse_addr()
self.bind((self.conf['SERVER'], self.conf['PORT']))
self.listen(5)
self.sender = None
        # self.start()  # also from the threaded original; run() is called explicitly below instead
def parse(self):
self.lockedbuffer = True
## Parse here
print self.inbuffer
self.inbuffer = ''
self.lockedbuffer = False
def readable(self):
return True
def handle_connect(self):
pass
def handle_accept(self):
(conn_sock, client_address) = self.accept()
if self.verify_request(conn_sock, client_address):
self.process_request(conn_sock, client_address)
def process_request(self, sock, addr):
x = SOCK(sock, config={'PARSER' : self.conf['PARSER'], 'ADDR' : addr[0], 'NAME' : 'CORE_SUB_SOCK_('+str(addr[0]) + ')'})
def verify_request(self, conn_sock, client_address):
return True
def handle_close(self):
self.close()
if self.sender:
self.sender.alive = False
def handle_read(self):
data = self.recv(8192)
while self.lockedbuffer:
sleep(0.01)
self.inbuffer += data
def writable(self):
return True
def handle_write(self):
pass
def run(self):
if not self._s:
asyncore.loop()
imap = SOCK(config={'SERVER' : '', 'PORT' : 6668})
imap.run()
while 1:
sleep(1)
Something along the lines of this?
An asyncore socket that always appends to the inbuffer when there's data to receive.
You can modify it however you want to; I just pasted a piece of code from another project that happens to be threaded :)
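If the embedding application only gives you control every 50 ms, you also don't have to hand the thread over to asyncore.loop() forever: you can pump asyncore one non-blocking pass at a time from that callback. A rough sketch (KEEP_TIMER, REMOVE_TIMER and response_is_complete() are stand-ins for whatever the embedding application and your own parsing code actually provide):
def on_timer():
    # One select() pass with a zero timeout: dispatches any pending handle_read /
    # handle_write callbacks on every open dispatcher, then returns immediately.
    asyncore.loop(timeout=0, count=1)
    if response_is_complete():      # hypothetical: check the data accumulated so far
        return REMOVE_TIMER         # tell the embedding app to stop calling us
    return KEEP_TIMER               # otherwise poll again in 50 ms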
Last attempt:
class EchoHandler(asyncore.dispatcher_with_send):
def handle_read(self):
data = self.recv(8192)
if data:
self.send(data)
Related
In this question, the asker said that we cannot use the same socket in the same process (Monitor creates a new process for each message with an unseen id), but the code does the opposite: each time a Monitor is created, it is created on the same socket. Is there any problem if we create multiple processes and connect them to the same socket using socket.bind("localhost:8888")?
import zmq
from zmq.eventloop import ioloop
from zmq.eventloop.zmqstream import ZMQStream
class Monitor(object):
    def __init__(self):
        self.context = zmq.Context()
        self.socket = self.context.socket(zmq.DEALER)
        self.socket.connect("tcp://127.0.0.1:5055")
        self.stream = ZMQStream(self.socket)
        self.stream.on_recv(self.somefunc)
    def initialize(self, id):
        self._id = id
    def somefunc(self, something):
        """work here and send back results if any"""
        import json
        jdecoded = json.loads(something)
        if self._id == jdecoded['_id']:
            """ good, I'm the right monitor for you """
            work = jdecoded['message']
            results = algorithm(work)
            self.socket.send(json.dumps(results))
        else:
            """ let some other process deal with it, not mine """
            pass
class Prefect(object):
    def __init__(self, id):
        self.context = zmq.Context()
        self.socket = self.context.socket(zmq.DEALER)
        self.socket.bind("tcp://127.0.0.1:5055")
        self.stream = ZMQStream(self.socket)
        self.stream.on_recv(self.check_if)
        self._id = id
        self.monitors = []
    def check_if(self, message):
        """find out from the message's id whether we have
        started a process for it previously"""
        import json
        jdecoded = json.loads(message)
        this_id = jdecoded['_id']
        if this_id in self.monitors:
            pass
        else:
            """ start a new process for it; it should have its own socket """
            new = Monitor()
            from multiprocessing import Process
            newp = Process(target=new.initialize, args=(this_id,))
            newp.start()
            self.monitors.append(this_id)  # ensure it's remembered
Original question: here.
I hope the title is appropriate. If not please suggest an alternative. I am working with the following Python Client Class.
import Queue
import socket
import struct
import threading
import time
class ClientCommand(object):
CONNECT, SEND, RECEIVE, CLOSE = range(4)
def __init__(self, type, data=None):
self.type = type
self.data = data
class ClientReply(object):
ERROR, SUCCESS = range(2)
def __init__(self, type, data = None):
self.type = type
self.data = data
class SocketClientThread(threading.Thread):
def __init__(self, cmd_q = Queue.Queue(), reply_q = Queue.Queue()):
super(SocketClientThread, self).__init__()
self.cmd_q = cmd_q
self.reply_q = reply_q
self.alive = threading.Event()
self.alive.set()
self.socket = None
#self.stopped = False
self.handlers = {
ClientCommand.CONNECT: self._handle_CONNECT,
ClientCommand.CLOSE: self._handle_CLOSE,
ClientCommand.SEND: self._handle_SEND,
ClientCommand.RECEIVE: self._handle_RECEIVE
}
def run(self):
while self.alive.isSet():
#while not self.stopped:
try:
cmd = self.cmd_q.get(True, 0.1)
self.handlers[cmd.type](cmd)
except Queue.Empty as e:
continue
def stop(self):
self.alive.clear()
def join(self, timeout=None):
self.alive.clear()
threading.Thread.join(self, timeout)
def _handle_CONNECT(self, cmd):
try:
self.socket = socket.socket(
socket.AF_INET, socket.SOCK_STREAM)
self.socket.connect((cmd.data[0], cmd.data[1]))
self.reply_q.put(self._success_reply())
except IOError as e:
self.reply_q.put(self._error_reply(str(e)))
def _handle_CLOSE(self, cmd):
self.socket.close()
reply = ClientReply(ClientReply.SUCCESS)
self.reply_q.put(reply)
def _handle_SEND(self, cmd):
try:
print "about to send: ", cmd.data
self.socket.sendall(cmd.data)
print "sending data"
self.reply_q.put(self._success_reply())
except IOError as e:
print "Error in sending"
self.reply_q.put(self._error_reply(str(e)))
def _handle_RECEIVE(self, cmd):
try:
#TODO Add check for len(data)
flag = True
while flag:
print "Receiving Data"
data = self._recv_n_bytes()
                if data != '':
self.reply_q.put(self._success_reply(data))
if data == "Stop":
print "Stop command"
flag = False
except IOError as e:
self.reply_q.put(self._error_reply(str(e)))
def _recv_n_bytes(self):
data = self.socket.recv(1024)
return data
    def _error_reply(self, errstr):
return ClientReply(ClientReply.ERROR, errstr)
def _success_reply(self, data = None):
return ClientReply(ClientReply.SUCCESS, data)
My main script code -
import socket
import time
import Queue
import sys
import os
from client import *
sct = SocketClientThread()
sct.start()
host = '127.0.0.1'
port = 1234
sct.cmd_q.put(ClientCommand(ClientCommand.CONNECT, (host, port)))
try:
while True:
sct.cmd_q.put(ClientCommand(ClientCommand.RECEIVE))
reply = sct.reply_q
tmp = reply.get(True)
data = tmp.data
if data != None:
if data != "step1":
                pass  # call function to print something
else:
                # call function that prints incoming data till the server stops sending data
print "Sending OK msg"
sct.cmd_q.put(ClientCommand(ClientCommand.SEND, "Hello\n"))
print "Done"
else:
print "No Data"
except:
#TODO Add better error handling than a print
print "Server down"
So here is the issue. Once the thread starts and the Receive handler is called, I get some data; if that data is not "step1", I just call a function (another script) to print it.
However, if the data is "step1", I call a function which will then continue printing whatever data the server sends next, till the server sends a "Stop" message. At this point, I break out of the "Receive Handler", and try to send an "Ok" message to the Server.
However, as soon as I break out of the "Receive Handler", it automatically calls upon that function again. So while I am trying to send back a message, the client is again waiting for data from the server. So due to the "Receiver function" being called again, the "Send function" blocks.
I can't seem to understand how to switch between receiving and sending. What is wrong with my approach here and how should I fix this? Do I need to re-write the code to have two separate threads for sending and receiving?
If you require any more details please let me know before you decide to flag my question for no reason.
However, as soon as I break out of the "Receive Handler", it
automatically calls upon that function again.
This is because you call sct.cmd_q.put(ClientCommand(ClientCommand.RECEIVE)) inside the while True loop, which runs once for every single chunk of data received. So for each chunk received before "step1", one more command to invoke the "Receive Handler" (which itself loops until "Stop") is put into the command queue, and those queued RECEIVE commands are of course executed before the SEND command. If you place the RECEIVE call before this while True loop, your approach can work.
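Concretely, the main loop could be reshaped along these lines (a sketch that keeps the names from the question; the two helper functions are elided just as they are there):
sct.cmd_q.put(ClientCommand(ClientCommand.RECEIVE))    # queued once, before the loop
while True:
    tmp = sct.reply_q.get(True)          # one reply per chunk pushed by the receive handler
    data = tmp.data
    if data is None:
        print "No Data"
    elif data != "step1":
        pass                             # call the function that prints something
    else:
        pass                             # call the function that handles the step1 stream
    if data == "Stop":
        # the receive handler's loop has finished, so the SEND below is processed next
        print "Sending OK msg"
        sct.cmd_q.put(ClientCommand(ClientCommand.SEND, "Hello\n"))
        print "Done"
        break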
The error is
if msgid != "step1":
NameError: name 'msgid' is not defined
Instead of
#TODO Add better error handling than a print
print "Server down"
it would have been better to write
raise
and you would have spotted it immediately.
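In other words, keep the handler if you like, but re-raise so the real traceback shows up:
try:
    while True:
        pass          # main loop from the question goes here
except Exception:
    raise             # re-raising surfaces the real problem (here, the NameError) immediately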
I have a problem changing the data variable in the class NetworkManagerData. Every time a request with 'SIT' comes to the server, the variables 'master_ip' and 'time_updated' are updated. I have chosen a dictionary as the container for my values because it is mutable. But every time I get a new request, it still has its old values in it.
Like:
First Request:
>>False
>>True
Second Request:
>>False
>>True
Third Request without 'SIT':
>>False
>>False
Do I have some misunderstanding of these mutables? Or are there special issues with using dictionaries in multiprocessing?
Code to start the server:
HOST, PORT = "100.0.0.1", 11880
network_manager = NetworkManagerServer((HOST, PORT), NetworkManagerHandler)
network_manager_process = multiprocessing.Process(target=network_manager.serve_forever)
network_manager_process.daemon = True
network_manager_process.start()
while True:
if '!quit' in input():
network_manager_process.terminate()
sys.exit()
Server:
from multiprocessing import Lock
import os
import socketserver
class NetworkManagerData():
def __init__(self):
self.lock = Lock()
self.data = {'master_ip': '0.0.0.0', 'time_updated': False}
class NetworkManagerServer(socketserver.ForkingMixIn, socketserver.TCPServer):
def __init__(self, nmw_server, RequestHandlerClass):
socketserver.TCPServer.__init__(self, nmw_server, RequestHandlerClass)
self.nmd = NetworkManagerData()
def finish_request(self, request, client_address):
self.RequestHandlerClass(request, client_address, self, self.nmd)
class NetworkManagerHandler(socketserver.StreamRequestHandler):
def __init__(self, request, client_address, server, nmd):
self.request = request
self.client_address = client_address
self.server = server
self.setup()
self.nmd = nmd
try:
self.handle(self.nmd)
finally:
self.finish()
def handle(self, nmd):
print(nmd.data.get('time_updated')) # <<<- False ->>>
while True:
self.data = self.rfile.readline()
if self.data:
ds = self.data.strip().decode('ASCII')
header = ds[0:3]
body = ds[4:]
if 'SIT' in header:
# ...
nmd.lock.acquire()
nmd.data['master_ip'] = self.client_address[0] # <-
nmd.data['time_updated'] = True # <-
nmd.lock.release()
# ...
print(nmd.data.get('time_updated')) # <<<- True ->>>
else:
print("Connection closed: " + self.client_address[0] + ":" +
str(self.client_address[1]))
return
Thanks!
Ok, the use of multiprocessing.Value and multiprocessing.Array has solved my problem. :)
If you pass variables that are not part of the multiprocessing library to a new process, the process only gets copies of them for its own use; there is no connection between the original and the copy. The variable is still mutable, but only within that copy.
To work on the original variable in memory you have to use multiprocessing.Array or multiprocessing.Value. There are other options, such as managers or queues, to get this done; which one to use depends on your case.
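A minimal standalone sketch of the difference (not from the question's code): the plain dict is copied into the child process, while the multiprocessing.Value lives in shared memory.
import multiprocessing

def worker(plain, shared):
    plain['time_updated'] = True     # mutates only the child's private copy
    shared.value = True              # mutates memory shared with the parent

if __name__ == '__main__':
    plain = {'time_updated': False}
    shared = multiprocessing.Value('B', False)
    p = multiprocessing.Process(target=worker, args=(plain, shared))
    p.start()
    p.join()
    print(plain['time_updated'])     # False: the parent's dict never changed
    print(shared.value)              # 1: the shared Value did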
So I changed the datamanager class:
class NetworkManagerData():
def __init__(self):
self.lock = multiprocessing.Lock()
self.master_ip = multiprocessing.Array('B', (255,255,255,255))
self.time_updated = multiprocessing.Value('B', False)
To set the IP I am using this now:
nmd.lock.acquire()
ip_array = []
for b in self.client_address[0].split('.'):
ip_array.append(int(b))
nmd.master_ip[:] = ip_array
nmd.lock.release()
To read the IP I am using this:
self.wfile.write(("GIP|%s.%s.%s.%s" %
(nmd.master_ip[0], nmd.master_ip[1], nmd.master_ip[2],
nmd.master_ip[3]) + '\n').encode('ASCII'))
Good evening. This is my first time on this site. I have been programming a Python-based user monitoring system for my work for the past three months and I am almost done with my first release. However, I have run into a problem controlling which computer I want to connect to.
If I run the two code samples in this post, I can receive the client and send commands to it from the server, but only one client at a time, and the server dictates which client I can send to and which one is next. I am certain the problem is server-side, but I am not sure how to fix it, and a Google search does not turn up anyone having tried this.
I have attached both client and server base networking code in this post.
client:
import asyncore
import socket
import sys
do_restart = False
class client(asyncore.dispatcher):
def __init__(self, host, port=8000):
serv = open("srv.conf","r")
host = serv.read()
serv.close()
asyncore.dispatcher.__init__(self)
self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
self.connect((host, port))
def writable(self):
return 0
def handle_connect(self):
pass
def handle_read(self):
data = self.recv(4096)
#Rest of code goes here
serv = open("srv.conf","r")
host = serv.read()
serv.close()
request = client(host)
asyncore.loop()
server:
import asyncore
import socket
import sys
class soc(asyncore.dispatcher):
def __init__(self, port=8000):
asyncore.dispatcher.__init__(self)
self.port = port
self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
self.bind(('', port))
self.listen(5)
def handle_accept(self):
channel, addr = self.accept()
while 1:
j = raw_input(addr)
#Rest of my code is here
server = soc(8000)
asyncore.loop()
Here is a quick and dirty idea that I threw together.
The use of raw_input has been replaced with another dispatcher that is asyncore compatible, referencing this other question here.
And I am expanding on the answer given by @user1320237 to defer each new connection to a new dispatcher.
You wanted to have a single command-line interface that can send control commands to any of the connected clients. That means you need a way to switch between them. What I have done is create a dict to keep track of the connected clients, plus a set of available commands that map to callbacks for your command line.
This example has the following:
list: list current clients
set <client>: set current client
send <msg>: send a msg to the current client
server.py
import asyncore
import socket
import sys
from weakref import WeakValueDictionary
class Soc(asyncore.dispatcher):
CMDS = {
'list': 'cmd_list',
'set': 'cmd_set_addr',
'send': 'cmd_send',
}
def __init__(self, port=8000):
asyncore.dispatcher.__init__(self)
self._conns = WeakValueDictionary()
self._current = tuple()
self.port = port
self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
self.set_reuse_addr()
self.bind(('', port))
self.listen(5)
self.cmdline = Cmdline(self.handle_input, sys.stdin)
self.cmdline.prompt()
def writable(self):
return False
def handle_input(self, i):
tokens = i.strip().split(None, 1)
cmd = tokens[0]
arg = ""
if len(tokens) > 1:
arg = tokens[1]
cbk = self.CMDS.get(cmd)
if cbk:
getattr(self, cbk)(arg)
self.cmdline.prompt(self._addr_to_key(self._current))
def handle_accept(self):
channel, addr = self.accept()
c = Conn(channel)
self._conns[self._addr_to_key(addr)] = c
def _addr_to_key(self, addr):
return ':'.join(str(i) for i in addr)
def cmd_list(self, *args):
avail = '\n'.join(self._conns.iterkeys())
print "\n%s\n" % avail
def cmd_set_addr(self, addr_str):
conn = self._conns.get(addr_str)
if conn:
self._current = conn.addr
def cmd_send(self, msg):
if self._current:
addr_str = self._addr_to_key(self._current)
conn = self._conns.get(addr_str)
if conn:
conn.buffer += msg
class Cmdline(asyncore.file_dispatcher):
def __init__(self, cbk, f):
asyncore.file_dispatcher.__init__(self, f)
self.cbk = cbk
def prompt(self, msg=''):
sys.stdout.write('%s > ' % msg)
sys.stdout.flush()
def handle_read(self):
self.cbk(self.recv(1024))
class Conn(asyncore.dispatcher):
def __init__(self, *args, **kwargs):
asyncore.dispatcher.__init__(self, *args, **kwargs)
self.buffer = ""
def writable(self):
return len(self.buffer) > 0
def handle_write(self):
self.send(self.buffer)
self.buffer = ''
def handle_read(self):
data = self.recv(4096)
print self.addr, '-', data
server = Soc(8000)
asyncore.loop()
Your main server now never blocks on stdin and always accepts new connections. The only work it does itself is the command handling, which should either be a fast operation or simply signal the connection objects to handle the message.
Usage:
# start the server
# start 2 clients
>
> list
127.0.0.1:51738
127.0.0.1:51736
> set 127.0.0.1:51736
127.0.0.1:51736 >
127.0.0.1:51736 > send foo
# client 127.0.0.1:51736 receives "foo"
To me
while 1:
j = raw_input(addr)
seems to be the problem:
you only accept a socket and then do something with it until the end.
You should create a new dispatcher for every client connecting:
class conn(asyncore.dispatcher):
...
def handle_read(self):
...
class soc(asyncore.dispatcher):
def handle_accept(self):
...
c = conn()
c.set_socket(channel)
Asyncore will call you back for every read operation possible.
Asyncore uses only one thread; this is its strength. Every dispatcher that has a socket is called one after another via those handle_* functions.
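A slightly fuller version of that sketch, so the pieces fit together (the echo handler is just a stand-in for your real per-client logic):
import asyncore
import socket

class conn(asyncore.dispatcher_with_send):
    def handle_read(self):
        data = self.recv(4096)
        if data:
            self.send(data)              # stand-in: echo back to this one client

class soc(asyncore.dispatcher):
    def __init__(self, port=8000):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.set_reuse_addr()
        self.bind(('', port))
        self.listen(5)
    def handle_accept(self):
        pair = self.accept()
        if pair is not None:
            channel, addr = pair
            conn(channel)                # one dispatcher per connected client

soc(8000)
asyncore.loop()                          # one thread drives every dispatcher's handle_* calls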
My code basically needs to start up a simple chat server with a client, where the server and the client can talk back and forth to each other. I've gotten everything implemented correctly, but I can't figure out how to shut down the server when I'm done. (I know it's ss.shutdown().)
Right now I want to end the session based on a keyword shared between the two (something like "bye"), but I don't know if I can somehow signal my SocketServer from the BaseRequestHandler to shutdown() whenever it receives that message.
Eventually, my goal is to incorporate Tkinter to make a GUI, but I wanted to get everything else to work first, and this is my first time dealing with sockets in Python.
from sys import argv, stderr
from threading import Thread
import socket
import SocketServer
import threading
import sys
class ThreadedRecv(Thread):
def __init__(self,socket):
Thread.__init__(self)
self.__socket = socket
self.__message = ''
self.__done = False
def recv(self):
while self.__message.strip() != "bye" and not self.getStatus():
self.__message = self.__socket.recv(4096)
print 'received',self.__message
self.setStatus(True)
def run(self):
self.recv()
def setStatus(self,status):
self.__done = status
def getStatus(self):
return self.__done
class ThreadedSend(Thread):
def __init__(self,socket):
Thread.__init__(self)
self.__socket = socket
self.__message = ''
self.__done = False
def send(self):
while self.__message != "bye" and not self.getStatus():
self.__message = raw_input()
self.__socket.send(self.__message)
self.setStatus(True)
def run(self):
self.send()
def setStatus(self,status):
self.__done = status
def getStatus(self):
return self.__done
class HostException(Exception):
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
class EchoServer(SocketServer.BaseRequestHandler):
def setup(self):
print self.client_address, 'is connected!'
self.request.send('Hello ' + str(self.client_address) + '\n')
self.__done = False
def handle(self):
sender = ThreadedSend(self.request)
recver = ThreadedRecv(self.request)
sender.start()
recver.start()
while 1:
if recver.getStatus():
sender.setStatus(True)
break
if sender.getStatus():
recver.setStatus(True)
break
def finish(self):
print self.client_address, 'disconnected'
self.request.send('bye client %s\n' % str(self.client_address))
self.setDone(True)
def setDone(self,done):
self.__done = done
def getDone(self):
return self.__done
def setup(arg1, arg2, arg3):
server = False
defaultPort,defaultHost = 2358,"localhost"
hosts = []
port = defaultPort
serverNames = ["TRUE","SERVER","S","YES"]
arg1 = arg1.upper()
arg2 = arg2.upper()
arg3 = arg3.upper()
if arg1 in serverNames or arg2 in serverNames or arg3 in serverNames:
server = True
try:
port = int(arg1)
if arg2 != '':
hosts.append(arg2)
except ValueError:
if arg1 != '':
hosts.append(arg1)
try:
port = int(arg2)
if arg3 != '':
hosts.append(arg3)
except ValueError:
if arg2 != '':
hosts.append(arg2)
try:
port = int(arg3)
except ValueError:
if arg3 != '':
hosts.append(arg3)
port = defaultPort
for sn in serverNames:
if sn in hosts:
hosts.remove(sn)
try:
if len(hosts) != 1:
raise HostException("Either more than one or no host "+ \
"declared. Setting host to localhost.")
except HostException as error:
print error.value, "Setting hosts to default"
return (server,defaultHost,port)
return (server,hosts[0].lower(),port)
def main():
bufsize = 4096
while len(argv[1:4]) < 3:
argv.append('')
settings = setup(*argv[1:4])
connections = (settings[1],settings[2])
print connections
if not settings[0]:
try:
mySocket = socket.socket(socket.AF_INET,\
socket.SOCK_STREAM)
except socket.error, msg:
stderr.write("[ERROR] %s\n" % msg[1])
sys.exit(1)
try:
mySocket.connect(connections)
except socket.error, msg:
stderr.write("[ERROR] %s\n" % msg[1])
sys.exit(2)
message = ""
print "Enter a message to send to the server. "+\
"Enter \"bye\" to quit."
sender = ThreadedSend(mySocket)
recver = ThreadedRecv(mySocket)
sender.start()
recver.start()
while 1:
if sender.getStatus():
recver.setStatus(True)
break
if recver.getStatus():
sender.setStatus(True)
break
else:
xserverhandler = EchoServer
serversocket = SocketServer.ThreadedTCPServer(\
connections,xserverhandler)
server_thread = Thread(target = serversocket.serve_forever)
server_thread.setDaemon(True)
server_thread.start()
# I would like to shut down this server whenever
# I get done talking to it.
"""while 1:
if xserverhandler.getDone():
print 'This is now true!'
serversocket.shutdown()
break"""
if __name__ == '__main__':
main()
Yeah, I know setup() is a terrible function right now with all the try/except blocks, but it works for now, so I was going to fix it later.
My question is basically: How can I get the server to actually end based on a message that it receives? If possible, is there a way to access the Request Handler after it's started?
Please fix your code so it works, and include some way to use it. You need to add
class ThreadedTCPServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
pass
since SocketServer doesn't actually include that class (at least not in my version of 2.6 or 2.7); it only appears as an example in the SocketServer documentation.
Please include an example of how to start/use the code. In this case to start the server you need to do:
ss.py SERVER localhost 8001
and the client as
ss.py localhost 8001
If you do that, then you can't do server_thread.setDaemon(True), because with no other non-daemon threads running the program would exit immediately.
Once that's done, the solution is to add a call (or two) to self.server.shutdown() inside your EchoServer.handle method, like:
while 1:
if recver.getStatus():
sender.setStatus(True)
self.server.shutdown()
break
However, I can't get that to work, and I think it's because I inherited things wrong, or guessed wrong in what you did.
What you should do is search for someone else who has done a chat server in Python. Using Google I found http://www.slideshare.net/didip/socket-programming-in-python and there are certainly others.
Also, if you are going to mix GUI and threaded programming then you should look into examples based on that. There are a number of hits when I searched for "tkinter chat". Also, you might want to look into twisted, which has solved a lot of these problems already.
What problems? Well, for example, you likely want an SO_REUSEADDR socket option.
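With SocketServer you normally get that by setting a class attribute on the server class (for example the ThreadedTCPServer shown above) rather than calling setsockopt yourself:
class ThreadedTCPServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
    # Sets SO_REUSEADDR before bind(), so a restarted server does not fail with
    # "Address already in use" while old connections sit in TIME_WAIT.
    allow_reuse_address = True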
A request handler object is created for each new request, so you have to store the "done" flag on the server, not on the handler. Something like the following:
class EchoServer(SocketServer.BaseRequestHandler):
...
def setDone(self):
self.server.setDone() # or even better directly self.server.shutdown()
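Putting the pieces together, here is a minimal sketch of the shutdown-on-keyword idea. It is heavily simplified from the question's code: ByeServer is just a stripped-down handler (not the EchoServer above), and it reuses the question's imports plus the ThreadedTCPServer defined earlier.
class ByeServer(SocketServer.BaseRequestHandler):
    def handle(self):
        while True:
            data = self.request.recv(4096)
            if not data or data.strip() == "bye":
                # shutdown() is safe here because serve_forever() runs in a different
                # thread than this handler (ThreadingMixIn gives each request its own).
                self.server.shutdown()
                break
            self.request.send(data)       # stand-in for the real chat handling

serversocket = ThreadedTCPServer(("localhost", 2358), ByeServer)
server_thread = Thread(target=serversocket.serve_forever)
server_thread.start()
server_thread.join()                      # returns once some client has sent "bye"
serversocket.server_close()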