zmq/zeromq recv_multipart hangs on large data - python

I'm trying to modify a zeromq example for processing background tasks and get it working. In particular, I have an xpub/xsub socket setup, and a client subscribes to the publisher to receive progress updates and results from the worker.
worker_server.py
proxy = zmq.devices.ThreadDevice(zmq.QUEUE, zmq.XSUB, zmq.XPUB)
proxy.bind_in('tcp://127.0.0.1:5002')
proxy.bind_out('tcp://127.0.0.1:5003')
proxy.start()
client.py
ctx = zmq.Context()
socket = server.create_socket(ctx, 'sub')
socket.setsockopt(zmq.SUBSCRIBE, '')
poller = zmq.Poller()
print 'polling'
poller.register(socket, zmq.POLLIN)
ready = dict(poller.poll())
print 'polling done'
if ready and ready.has_key(socket):
    job_id, code, result = socket.recv_multipart()
    return {'status': code, 'data': result}
So far, the code works for small messages; however, when the worker tries to publish a large task result (35393030 bytes), the client does not receive the message and the code hangs at ready = dict(poller.poll()). Now, I just started learning to use zmq, but isn't send_multipart supposed to chunk the messages? What is causing the client to not receive the results?
worker.py
def worker(logger_name, method, **task_kwargs):
    job_id = os.getpid()
    ctx = zmq.Context()
    socket = create_socket(ctx, 'pub')
    time.sleep(1)
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.DEBUG)
    ch = logging.StreamHandler()
    sh = WSLoggingHandler(socket, job_id)
    fh = logging.FileHandler(filename=os.path.join(tmp_folder, 'classifier.log.txt'), encoding='utf-8')
    logger.addHandler(ch)
    logger.addHandler(sh)
    logger.addHandler(fh)
    modules_arr = method.split('.')
    m = __import__(".".join(modules_arr[:-1]), globals(), locals(), -1)
    fn = getattr(m, modules_arr[-1])
    try:
        results = fn(**task_kwargs)
        print 'size of data file %s' % len(results)
        data = [
            str(job_id),
            SUCCESS_CODE,
            results
        ]
        tracker = socket.send_multipart(data)
        print 'sent!!!'
    except Exception, e:
        print traceback.format_exc()
        socket.send_multipart((
            str(job_id),
            ERROR_CODE,
            str(e)
        ))
    finally:
        socket.close()
EDIT:
Tried manually splitting the results into smaller chunks, but I haven't had success.
results = fn(**task_kwargs)
print 'size of data file %s' %len(results)
data = [
    str(job_id),
    SUCCESS_CODE,
] + [results[i: i + 20] for i in xrange(0, len(results), 20)]
print 'list size %s' %len(data)
tracker = socket.send_multipart(data)
print 'sent!!!'

From the pyzmq documentation:
https://zeromq.github.io/pyzmq/api/zmq.html#zmq.Socket.send_multipart
msg_parts : iterable
A sequence of objects to send as a multipart message. Each element can be any sendable object (Frame, bytes, buffer-providers)
The message doesn't get chunked automatically; each element in the iterable you pass in is one chunk (frame). So the way you have it set up, all of your result data is sent as a single chunk. You'll need an iterator that splits your results into appropriately sized chunks, as sketched below.
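For illustration, here is a minimal sketch of chunking a large payload into fixed-size frames before sending and reassembling it on the receiving side. The helper names and the chunk size are my own choices, not part of the question's code:

CHUNK_SIZE = 256 * 1024  # arbitrary; pick a frame size that suits your transport

def send_chunked(socket, job_id, code, payload):
    # One list element per frame; recv_multipart returns the same list of frames.
    frames = [str(job_id), code]
    frames += [payload[i:i + CHUNK_SIZE]
               for i in xrange(0, len(payload), CHUNK_SIZE)]
    socket.send_multipart(frames)

def recv_chunked(socket):
    frames = socket.recv_multipart()
    job_id, code = frames[0], frames[1]
    result = ''.join(frames[2:])  # reassemble the payload from its frames
    return job_id, code, result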


how do we know the first 4 bytes read on a tcp socket are the length of the message?

The "Sending and receiving logging events across a network" section of the python logging cookbook demonstrates how a clients can send logs via a TCP session.
Log messages are pickled and sent to the server thanks to the socket handler. The server then unpickle the messages and log them.
The code to get a message from the tcp socket is this one:
class LogRecordStreamHandler(SocketServer.StreamRequestHandler):
    """Handler for a streaming logging request.

    This basically logs the record using whatever logging policy is
    configured locally.
    """

    def handle(self):
        """
        Handle multiple requests - each expected to be a 4-byte length,
        followed by the LogRecord in pickle format. Logs the record
        according to whatever policy is configured locally.
        """
        while True:
            chunk = self.connection.recv(4)
            if len(chunk) < 4:
                break
            slen = struct.unpack('>L', chunk)[0]
            chunk = self.connection.recv(slen)
            while len(chunk) < slen:
                chunk = chunk + self.connection.recv(slen - len(chunk))
            obj = self.unPickle(chunk)
            record = logging.makeLogRecord(obj)
            self.handleLogRecord(record)

    # then, methods to handle the record, but that's not the interesting part
class LogRecordSocketReceiver(SocketServer.ThreadingTCPServer):
    """
    Simple TCP socket-based logging receiver suitable for testing.
    """

    allow_reuse_address = 1

    def __init__(self, host='localhost',
                 port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                 handler=LogRecordStreamHandler):
        SocketServer.ThreadingTCPServer.__init__(self, (host, port), handler)
        self.abort = 0
        self.timeout = 1
        self.logname = None

    def serve_until_stopped(self):
        import select
        abort = 0
        while not abort:
            rd, wr, ex = select.select([self.socket.fileno()], [], [], self.timeout)
            if rd:
                self.handle_request()
            abort = self.abort
What I don't understand in this example is: how do we know that the first 4 bytes we read from the socket constitute the length of the message? I looked at the socket and logging documentation but could not find any mention of it.
Also, the docstring implicitly states that this code is not good enough for production. What is so bad about that code?
As suggested in the comments, the answer is in the sending code (handlers.py in Python 2.7.10). I removed the docstrings/comments that were obvious or irrelevant to this question to make the code more readable.
def makePickle(self, record):
    ei = record.exc_info
    if ei:
        dummy = self.format(record)
        record.exc_info = None
    d = dict(record.__dict__)
    d['msg'] = record.getMessage()
    d['args'] = None
    s = cPickle.dumps(d, 1)
    if ei:
        record.exc_info = ei
    # slen is the length packed as a big-endian unsigned long (">L");
    # with the ">" prefix struct uses standard sizes, so it is exactly 4 bytes.
    slen = struct.pack(">L", len(s))
    # Here is where the 4-byte length is prepended to the message
    return slen + s

def emit(self, record):
    try:
        s = self.makePickle(record)
        # s is actually (length of the message) + (message)
        self.send(s)
    except (KeyboardInterrupt, SystemExit):
        raise
    except:
        self.handleError(record)

def send(self, s):
    """
    Send a pickled string to the socket.

    This function allows for partial sends which can happen when the
    network is busy.
    """
    if self.sock is None:
        self.createSocket()
    if self.sock:
        try:
            if hasattr(self.sock, "sendall"):
                self.sock.sendall(s)
            else:
                sentsofar = 0
                left = len(s)
                while left > 0:
                    sent = self.sock.send(s[sentsofar:])
                    sentsofar = sentsofar + sent
                    left = left - sent
        except socket.error:
            self.sock.close()
            self.sock = None  # so we can call createSocket next time
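To make the framing explicit, here is a minimal standalone sketch of the same length-prefix protocol (the function names are illustrative, not from the logging source):

import struct

def frame(payload):
    # Prepend the payload length as a 4-byte big-endian unsigned long.
    return struct.pack('>L', len(payload)) + payload

def read_frame(sock):
    header = sock.recv(4)
    if len(header) < 4:
        return None  # connection closed mid-header
    (length,) = struct.unpack('>L', header)
    payload = ''
    while len(payload) < length:
        chunk = sock.recv(length - len(payload))
        if not chunk:
            return None  # connection closed mid-payload
        payload += chunk
    return payload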

Asyncio imap fetch mails python3

I'm testing with the asyncio module; however, I need a hint/suggestion on how to fetch large emails in an async way.
I have a list with usernames and passwords for the mail accounts.
data = [
    {'usern': 'foo@bar.de', 'passw': 'x'},
    {'usern': 'foo2@bar.de', 'passw': 'y'},
    {'usern': 'foo3@bar.de', 'passw': 'z'} (...)
]
I thought about:
loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait([get_attachment(d) for d in data]))
loop.close()
However, the long part is to download the email attachments.
Email:
@asyncio.coroutine
def get_attachment(d):
    username = d['usern']
    password = d['passw']
    connection = imaplib.IMAP4_SSL('imap.bar.de')
    connection.login(username, password)
    connection.select()
    # list all available mails
    typ, data = connection.search(None, 'ALL')
    for num in data[0].split():
        # fetching each mail
        typ, data = connection.fetch(num, '(RFC822)')
        raw_string = data[0][1].decode('utf-8')
        msg = email.message_from_string(raw_string)
        for part in msg.walk():
            if part.get_content_maintype() == 'multipart':
                continue
            if part.get('Content-Disposition') is None:
                continue
            if part.get_filename():
                body = part.get_payload(decode=True)
                # do something with the body, async?
    connection.close()
    connection.logout()
How could I process all mails (downloading attachments) in an async way?
If you don't have an asynchronous I/O-based imap library, you can just use a concurrent.futures.ThreadPoolExecutor to do the I/O in threads. Python will release the GIL during the I/O, so you'll get true concurrency:
def init_connection(d):
    username = d['usern']
    password = d['passw']
    connection = imaplib.IMAP4_SSL('imap.bar.de')
    connection.login(username, password)
    connection.select()
    return connection

local = threading.local()  # We use this to get a different connection per thread

def do_fetch(num, d, rfc):
    try:
        connection = local.connection
    except AttributeError:
        connection = local.connection = init_connection(d)
    return connection.fetch(num, rfc)

@asyncio.coroutine
def get_attachment(d, pool):
    connection = init_connection(d)
    # list all available mails
    typ, data = connection.search(None, 'ALL')
    # Kick off asynchronous tasks for all the fetches
    loop = asyncio.get_event_loop()
    futs = [asyncio.async(loop.run_in_executor(pool, do_fetch, num, d, '(RFC822)'))
            for num in data[0].split()]
    # Process each fetch as it completes
    for fut in asyncio.as_completed(futs):
        typ, data = yield from fut
        raw_string = data[0][1].decode('utf-8')
        msg = email.message_from_string(raw_string)
        for part in msg.walk():
            if part.get_content_maintype() == 'multipart':
                continue
            if part.get('Content-Disposition') is None:
                continue
            if part.get_filename():
                body = part.get_payload(decode=True)
                # do something with the body, async?
    connection.close()
    connection.logout()

loop = asyncio.get_event_loop()
pool = ThreadPoolExecutor(max_workers=5)  # You can probably increase max_workers, because the threads are almost exclusively doing I/O.
loop.run_until_complete(asyncio.wait([get_attachment(d, pool) for d in data]))
loop.close()
This isn't quite as nice as a truly asynchronous I/O-based solution, because you've still got the overhead of creating the threads, which limits scalability and adds extra memory overhead. You also get some GIL slowdown because of all the code wrapping the actual I/O calls. Still, if you're dealing with fewer than thousands of mails, it should perform OK.
We use run_in_executor to run the blocking fetches on the ThreadPoolExecutor as part of the asyncio event loop, asyncio.async to ensure what it returns is wrapped in an asyncio.Future, and as_completed to iterate through the futures in the order they complete.
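In isolation, the pattern looks like this (a toy sketch in the same Python 3.4-era style as the answer, with a sleep standing in for the blocking fetch):

import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_io(n):
    time.sleep(n)  # stands in for a blocking fetch
    return n

@asyncio.coroutine
def main(pool):
    loop = asyncio.get_event_loop()
    futs = [loop.run_in_executor(pool, blocking_io, n) for n in (2, 1)]
    for fut in asyncio.as_completed(futs):
        result = yield from fut
        print(result)  # prints 1 then 2: completion order, not submission order

loop = asyncio.get_event_loop()
loop.run_until_complete(main(ThreadPoolExecutor(max_workers=2)))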
Edit:
It seems imaplib is not thread-safe. I've edited my answer to use thread-local storage via threading.local, which allows us to create one connection object per thread, re-used for the entire life of the thread (meaning you create only num_workers connection objects, rather than a new connection for every fetch).
I had the same need: fetching emails with Python 3, fully async. If others here are interested, I pushed an asyncio IMAP lib here: https://github.com/bamthomas/aioimaplib
You can use it like this:
import asyncio
import email  # needed for email.message_from_bytes below
from aioimaplib import aioimaplib

@asyncio.coroutine
def wait_for_new_message(host, user, password):
    imap_client = aioimaplib.IMAP4(host=host)
    yield from imap_client.wait_hello_from_server()
    yield from imap_client.login(user, password)
    yield from imap_client.select()
    asyncio.async(imap_client.idle())
    id = 0
    while True:
        msg = yield from imap_client.wait_server_push()
        print('--> received from server: %s' % msg)
        if 'EXISTS' in msg:
            id = msg.split()[0]
            imap_client.idle_done()
            break
    result, data = yield from imap_client.fetch(id, '(RFC822)')
    email_message = email.message_from_bytes(data[0])
    attachments = []
    body = ''
    for part in email_message.walk():
        if part.get_content_maintype() == 'multipart':
            continue
        if part.get_content_maintype() == 'text' and 'attachment' not in part.get('Content-Disposition', ''):
            body = part.get_payload(decode=True).decode(part.get_param('charset', 'ascii')).strip()
        else:
            attachments.append(
                {'type': part.get_content_type(), 'filename': part.get_filename(), 'size': len(part.as_bytes())})
    print('attachments : %s' % attachments)
    print('body : %s' % body)
    yield from imap_client.logout()

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(wait_for_new_message('my.imap.server', 'user', 'pass'))
Large emails with attachments are also downloaded with asyncio.

Send a string message to multiple threads

I have an IRC client that receives messages on a socket.
From this client I have created several bots that connect to other people's chat channels on Twitch. (These are authorized, not spam bots!)
Each bot is created in a separate thread that takes the channel name along with a few other parameters.
My issue is that my IRC socket can only bind to one port, and this single socket handles all the IRC messages; each message carries a #channel string as its third word, which directs it to a particular channel. These messages can be handled inside each bot, as each one knows the name of its channel.
My problem is: how do I send the string received over the socket to multiple threads?
import time
import socket
import threading
import string
import sys
import os

class IRCBetBot:
    # irc ref
    irc = None

    def __init__(self, IRCRef, playerName, channelName, currencyName):
        # assign variables
        self.irc = IRCRef
        self.channel = '#' + channelName
        self.irc.send(('JOIN ' + self.channel + '\r\n').encode("utf8"))
        # create readbuffer to hold strings from IRC
        readbuffer = ""
        # This is the main loop
        while 1:
            ##readbuffer## <- need to send message from IRC to this variable
            for line in temp:
                line = str.rstrip(line)
                line = str.split(line)
                if (len(line) >= 4) and ("PRIVMSG" == line[1]) and (self.channel == line[2]) and not ("jtv" in line[0]):
                    pass  # call function to handle user message
                if (line[0] == "PING"):
                    self.irc.send(("PONG %s\r\n" % line[0]).encode("utf8"))
def runAsThread(ircref, userName, channelName, currencyPrefix):
    print("Got to runAsThread with : " + str(userName) + " " + str(channelName) + " " + str(currencyPrefix))
    IRCBetBot(ircref, userName, channelName, currencyPrefix)
# Here we create the IRC connection
# IRC connection variables
nick = 'mybot'  # alter this value with the username used to connect to IRC eg: "username".
password = "oauth:mykey"  # alter this value with the password used to connect to IRC from the username above.
server = 'irc.twitch.tv'
port = 6667

# create IRC socket
irc = socket.socket()
irc.connect((server, port))

# sends variables for connection to twitch chat
irc.send(('PASS ' + password + '\r\n').encode("utf8"))
irc.send(('USER ' + nick + '\r\n').encode("utf8"))
irc.send(('NICK ' + nick + '\r\n').encode("utf8"))

# Array to hold all the new threads
threads = []

# authorised Channels loaded from file in real program
authorisedChannels = [["user1","#channel1","coin1"],["user2","#channel2","coin2"],["user3","#channel3","coin3"]]

for item in authorisedChannels:
    try:
        userName = item[0]
        channelName = item[1]
        currencyPrefix = item[2]
        myTuple = (irc, userName, channelName, currencyPrefix)
        thread = threading.Thread(target=runAsThread, args=myTuple)
        thread.start()
        threads.append(thread)
        time.sleep(5)  # wait to avoid too many connections to IRC at once from same IP
    except Exception as e:
        print("An error occurred while creating threads.")
        print(str(e))

# create readbuffer to hold strings from IRC
readbuffer = ""
# This is the main loop
while 1:
    readbuffer = readbuffer + irc.recv(1024).decode("utf-8")
    temp = str.split(readbuffer, "\n")
    readbuffer = temp.pop()
    #
    # Need to send readbuffer to each IRCBetBot() created in runAsThread that contains a while 1: loop to listen for strings in its __init__() method.
    #

print("Waiting...")
for thread in threads:
    thread.join()
print("Complete.")
I need to somehow get the readbuffer from the main loop into each IRCBetBot object created in separate threads? Any ideas?
Here's an example that shows how you can do this using a queue for each thread. Instead of just creating a list of threads, we create a dict of threads keyed by channel name, storing both the thread object and a queue that can be used to talk to that thread.
#!/usr/bin/python3
import threading
from queue import Queue

class IRCBetBot(threading.Thread):
    def __init__(self, q, playerName, channelName, currencyName):
        super().__init__()
        self.channel = channelName
        self.playerName = playerName
        self.currencyName = currencyName
        self.queue = q

    def run(self):
        readbuffer = ""
        while 1:
            readbuffer = self.queue.get()  # This will block until a message is sent to the queue.
            print("{} got msg {}".format(self.channel, readbuffer))

if __name__ == "__main__":
    authorisedChannels = [["user1","#channel1","coin1"],
                          ["user2","#channel2","coin2"],
                          ["user3","#channel3","coin3"]]
    threads = {}
    for item in authorisedChannels:
        try:
            userName = item[0]
            channelName = item[1]
            currencyPrefix = item[2]
            myTuple = (userName, channelName, currencyPrefix)
            q = Queue()
            thread = IRCBetBot(q, *myTuple)
            thread.start()
            threads[channelName] = (q, thread)
        except Exception as e:
            print("An error occurred while creating threads.")
            print(str(e))
    while 1:
        a = input("Input your message (channel: msg): ")
        channel, msg = a.split(":")
        threads[channel][0].put(msg)  # Sends a message using the queue object
As you can see, when messages come into the socket, we parse the channel out (which your code already does) and then just pass the message on to the appropriate queue in our thread dict; see the sketch after the sample output below.
Sample output (slightly tweaked so the output isn't scrambled due to the concurrent print calls):
dan@dantop:~$ ./test.py
Input your message (channel: msg): #channel1: hi there
#channel1 got msg hi there
Input your message (channel: msg): #channel2: another one
#channel2 got msg another one
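To connect this back to the original socket loop, here is a sketch of feeding those per-channel queues from the IRC socket. The parsing mirrors the question's code, and the irc socket and threads dict are assumed from the surrounding snippets:

readbuffer = ""
while 1:
    readbuffer = readbuffer + irc.recv(1024).decode("utf-8")
    temp = readbuffer.split("\n")
    readbuffer = temp.pop()  # keep any partial trailing line for the next recv
    for line in temp:
        words = line.rstrip().split()
        # In PRIVMSG lines the channel is the third word; route the raw
        # line to the bot that owns that channel, if we know it.
        if len(words) >= 3 and words[2] in threads:
            threads[words[2]][0].put(line)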
Well, one way to do it would be to have an array of read buffers, similar to the array of threads, with each thread waiting on data in its particular read buffer.
When data comes in, you can either pass it only to the thread it concerns, or copy it into all the read buffers and let each thread decide whether it is interested. An observer pattern would work best in that case; a sketch follows.
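A minimal sketch of that observer idea (hypothetical names; each bot registers a callback and the socket loop notifies all of them):

class IRCDispatcher:
    def __init__(self):
        self.observers = []  # callables invoked for every IRC line

    def register(self, callback):
        self.observers.append(callback)

    def notify(self, line):
        # Every bot sees every line and ignores channels it doesn't own.
        for callback in self.observers:
            callback(line)

dispatcher = IRCDispatcher()
dispatcher.register(lambda line: print("bot1 saw: " + line))
dispatcher.register(lambda line: print("bot2 saw: " + line))
dispatcher.notify(":nick PRIVMSG #channel1 :hello")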

Python, send a stop notification to a blocking loop within a thread

I've read many answers, but I have not found a proper solution.
The problem: I'm reading mixed/replace HTTP streams that will not expire or end by default.
You can try it by yourself using curl:
curl http://agent.mtconnect.org/sample\?interval\=0
So, now I'm using Python threads and requests to read data from multiple streams.
import requests
import uuid
from threading import Thread

tasks = ['http://agent.mtconnect.org/sample?interval=5000',
         'http://agent.mtconnect.org/sample?interval=10000']
thread_id = []

def http_handler(thread_id, url, flag):
    print 'Starting task %s' % thread_id
    try:
        requests_stream = requests.get(url, stream=True, timeout=2)
        for line in requests_stream.iter_lines():
            if line:
                print line
                if flag and line.endswith('</MTConnectStreams>'):
                    # Wait until XML message end is reached to receive the full message
                    break
    except requests.exceptions.RequestException as e:
        print('error: ', e)
    except BaseException as e:
        print e

if __name__ == '__main__':
    for task in tasks:
        uid = str(uuid.uuid4())
        thread_id.append(uid)
        t = Thread(target=http_handler, args=(uid, task, False), name=uid)
        t.start()
    print thread_id
    # Wait time X, or until the user does something
    # Send a flag to the desired thread to indicate its loop should stop after reaching the end
Any suggestions? What is the best solution? I don't want to kill the thread, because I would like to read to the end of the current message so that I have a full XML document.
I found a solution using the threading module and threading.Event. Maybe not the best solution, but it currently works fine.
import logging
import threading
import time
import uuid
import requests

logging.basicConfig(level=logging.DEBUG, format='(%(threadName)-10s) %(message)s', )

tasks = ['http://agent.mtconnect.org/sample?interval=5000',
         'http://agent.mtconnect.org/sample?interval=10000']
d = dict()

def http_handler(e, url):
    logging.debug('wait_for_event starting')
    message_buffer = []
    filter_namespace = True
    try:
        requests_stream = requests.get(url, stream=True, timeout=2)
        for line in requests_stream.iter_lines():
            if line:
                message_buffer.append(line)
                if e.isSet() and line.endswith('</MTConnectStreams>'):
                    logging.debug(len(message_buffer))
                    break
    except requests.exceptions.RequestException as err:
        # use a name other than "e" so the stop event is not shadowed
        print('error: ', err)
    except BaseException as err:
        print err

if __name__ == '__main__':
    logging.debug('Waiting before calling Event.set()')
    for task in tasks:
        uid = str(uuid.uuid4())
        e = threading.Event()
        d[uid] = {"stop_event": e}
        t = threading.Thread(name=uid,
                             target=http_handler,
                             args=(e, task))
        t.start()
    logging.debug('Waiting 3 seconds before calling Event.set()')
    for key in d:
        time.sleep(3)
        logging.debug(threading.enumerate())
        logging.debug(d[key])
        d[key]['stop_event'].set()
    logging.debug('bye')

python Client hangs when no data to receive from server and hangs in that thread w/o letting client send

I am trying to figure out how to get my client to send and receive data 'simultaneously', and I am using threads. My problem is that, the way it is set up here, the client waits for data from the server in the receiveFromSrv function, which runs in its own thread, and I cannot stop it when nothing will be sent. Set up the other way, the client just waits for user input, sends to the server, and then calls receiveFromSrv after the send, which doesn't allow for fluent communication, and I cannot get it to alternate automatically. How do I release the thread when the client has nothing to send, or there is no more to be received from the server?
It would get too long if I tried to explain everything I have tried. :)
Thanks.
The client:
from socket import *
from threading import *
import thread
import time
from struct import pack, unpack
from networklingo import *
#from exception import *

HOST = '192.168.0.105'
PORT = 21567
BUFFSIZE = 1024
ADDR = (HOST, PORT)

lock = thread.allocate_lock()

class TronClient:
    def __init__(self, control=None):
        self.tcpSock = socket(AF_INET, SOCK_STREAM)
        #self.tcpSock.settimeout(.2)
        self.recvBuff = []

    def connect(self):
        self.tcpSock.connect(ADDR)
        self.clientUID = self.tcpSock.recv(BUFFSIZE)
        print 'My clientUID is ', self.clientUID
        t = Thread(target=self.receiveFromSrv)  # pass the method itself, not the result of calling it
        t.setDaemon(1)
        t.start()
        print 'going to main loop'
        self.mainLoop()
        #t = Thread(target = self.mainLoop())
        #t.setName('mainLoop')
        #t.setDaemon(1)
        #t.start()

    def receiveFromSrv(self):
        RECIEVING = 1
        while RECIEVING:
            #print 'Attempting to retrieve more data'
            #lock.acquire()
            #print 'Lock Aquired in recieveFromSrv'
            #try:
            data = self.tcpSock.recv(BUFFSIZE)
            #except socket.timeout,e:
            #print 'Error recieving data, ',e
            #continue
            #print data
            if not data: continue
            header = data[:6]
            msgType, msgLength, clientID = unpack("hhh", header)
            print msgType
            print msgLength
            print clientID, '\n'
            msg = data[6:]
            while len(msg) < msgLength:
                data = self.tcpSock.recv(BUFFSIZE)
                dataLen = len(data)
                if dataLen <= msgLength:
                    msg += data
                else:
                    remLen = msgLength - len(data)  # we just need to retrieve first bit of data to complete msg
                    msg += data[:remLen]
                    self.recvBuff.append(data[remLen:])
            print msg
            #else:
            #lock.release()
            # print 'lock release in receiveFromSrv'
            #time.sleep(2)
            #RECIEVING = 0

    def disconnect(self, data=''):
        self.send(DISCONNECT_REQUEST, data)
        #self.tcpSock.close()

    def send(self, msgType, msg):
        header = pack("hhh", msgType, len(msg), self.clientUID)
        msg = header + msg
        self.tcpSock.send(msg)

    def mainLoop(self):
        while 1:
            try:
                #lock.acquire()
                #print 'lock aquired in mainLoop'
                data = raw_input('> ')
            except EOFError:  # enter key hit without any data (blank line) so ignore and continue
                continue
            #if not data or data == '': # no valid data so just continue
            #    continue
            if data == 'exit':  # client wants to disconnect, so send request to server
                self.disconnect()
                break
            else:
                self.send(TRON_CHAT, data)
            #lock.release()
            #print 'lock released in main loop'
            #self.recieveFromSrv()
            #data = self.tcpSock.recv(BUFFSIZE)
            #t = Thread(target = self.receiveFromSrv())
            #t.setDaemon(1)
            #t.start()

if __name__ == "__main__":
    cli = TronClient()
    cli.connect()
    #t = Thread(target = cli.connect())
    #t.setName('connect')
    #t.setDaemon(1)
    #t.start()
The server (uses a lock when incrementing or decrementing the number of clients):
from socket import *
from threading import *
import thread
from controller import *
from networklingo import *
from struct import pack, unpack

HOST = ''
PORT = 21567
BUFSIZE = 1024
ADDR = (HOST, PORT)

nclntlock = thread.allocate_lock()

class TronServer:
    def __init__(self, maxConnect=4, control=None):
        self.servSock = socket(AF_INET, SOCK_STREAM)
        # ensure that you can restart server quickly when it terminates
        self.servSock.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
        self.servSock.bind(ADDR)
        self.servSock.listen(maxConnect)
        # keep track of number of connected clients
        self.clientsConnected = 0
        # give each client a unique identfier for this run of server
        self.clientUID = 0
        # list of all clients to cycle through for sending
        self.allClients = {}
        # keep track of threads
        self.cliThreads = {}
        # reference back to controller
        self.controller = control
        self.recvBuff = []

    def removeClient(self, clientID, addr):
        if clientID in self.allClients.keys():
            self.allClients[clientID].close()
            print "Disconnected from", addr
            nclntlock.acquire()
            self.clientsConnected -= 1
            nclntlock.release()
            del self.allClients[clientID]
        else:
            print 'ClientID is not valid'

    def recieve(self, clientsock, addr):
        RECIEVING = 1
        # loop serving the new client
        while RECIEVING:  # while PLAYING???
            try:
                data = clientsock.recv(BUFSIZE)
            except:
                RECIEVING = 0
                continue
            # if not data: break #no data was recieved
            if data != '':
                print 'Recieved msg from client: ', data
                header = data[:6]
                msgType, msgLength, clientID = unpack("hhh", header)
                print msgType
                print msgLength
                print clientID, '\n'
                if msgType == DISCONNECT_REQUEST:  # handle disconnect request
                    self.removeClient(clientID, addr)
                else:  # pass message type and message off to controller
                    msg = data[6:]
                    while len(msg) < msgLength:
                        data = clientsock.recv(BUFSIZE)  # was self.tcpSock, which the server does not have
                        dataLen = len(data)
                        if dataLen <= msgLength:
                            msg += data
                        else:
                            remLen = msgLength - len(data)  # we just need to retrieve first bit of data to complete msg
                            msg += data[:remLen]
                            self.recvBuff.append(data[remLen:])
                    print msg
                    # echo back the same data you just recieved
                    #clientsock.sendall(data)
                    self.send(TRON_CHAT, msg, -1)  # -1 broadcasts via the ALL_PLAYERS branch of send()
        for k in self.allClients.keys():
            if self.allClients[k] == clientsock:
                self.removeClient(k, addr)
                print 'deleted after hard exit from clientID ', k
                #self.cliThreads[k].join()
                #del self.cliThreads[k]
                # then tell controller to delete player with k
                break

    def send(self, msgType, msg, clientID=-1):
        header = pack("hhh", msgType, len(msg), clientID)
        msg = header + msg
        if clientID in self.allClients:
            self.allClients[clientID].send(msg)
        elif clientID == ALL_PLAYERS:
            for k in self.allClients.keys():
                self.allClients[k].send(msg)

    def mainLoop(self):
        global nclntlock
        try:
            while self.controller != None and self.controller.state == WAITING:
                print 'awaiting connections'
                clientsock, caddy = self.servSock.accept()
                nclntlock.acquire()
                self.clientsConnected += 1
                nclntlock.release()
                print 'Client ', self.clientUID, ' connected from:', caddy
                clientsock.setblocking(0)
                clientsock.send(str(self.clientUID))
                self.allClients[self.clientUID] = clientsock
                t = Thread(target=self.recieve, args=[clientsock, caddy])
                t.setName('recieve-' + str(self.clientUID))
                self.cliThreads[self.clientUID] = t
                self.clientUID += 1
                # t.setDaemon(1)
                t.start()
        finally:
            self.servSock.close()

if __name__ == "__main__":
    serv = TronServer(control=LocalController(nPlayers=3, fWidth=70, fHeight=10))
    t = Thread(target=serv.mainLoop)  # pass the method; calling it would run the loop before the thread starts
    t.setName('mainLoop')
    # t.setDaemon(1)
    t.start()
I think you want to try and set the socket to non-blocking mode:
http://docs.python.org/library/socket.html#socket.socket.setblocking
Set blocking or non-blocking mode of the socket: if flag is 0, the socket is set to non-blocking, else to blocking mode. Initially all sockets are in blocking mode. In non-blocking mode, if a recv() call doesn't find any data, or if a send() call can't immediately dispose of the data, an error exception is raised; in blocking mode, the calls block until they can proceed. s.setblocking(0) is equivalent to s.settimeout(0); s.setblocking(1) is equivalent to s.settimeout(None).
Also, instead of using raw sockets, have you considered using the multiprocessing module? It is a higher-level abstraction for doing network I/O. The section on Pipes & Queues is specific to sending and receiving data between a client and server.
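For example, here is a minimal sketch of the timeout variant (equivalent to non-blocking mode with a deadline), where the loop can check a threading.Event stop flag between recv() attempts instead of blocking forever; the names are illustrative:

import socket

def recv_until_stopped(sock, stop_flag, bufsize=1024):
    # Poll the socket with a short timeout so the loop notices
    # stop_flag instead of blocking in recv() indefinitely.
    sock.settimeout(0.5)
    while not stop_flag.is_set():
        try:
            data = sock.recv(bufsize)
        except socket.timeout:
            continue  # nothing arrived yet; check the flag and retry
        if not data:
            break  # peer closed the connection
        yield data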
