Monitor a socket connection - python

I have a socket connection which I want to monitor; it receives market data in high bursts.

while 1:
    socket.recv()
    print('data received')

The while loop should only execute the print once every sixty seconds.

Try this:
from datetime import datetime

last = datetime.now()
while 1:
    socket.recv()
    if (datetime.now() - last).seconds >= 60:
        print("data received")
        last = datetime.now()

You want some kind of asynchronous processing here: on the one hand you want to continuously receive data, on the other hand you want to display a message every 60 seconds.
So threading would be my first idea: the foreground displays the messages while the background receives.
import threading
import time

def recv_loop(socket, end_cond):
    while True:
        socket.recv(1024)
    # assuming something ends the loop above
    end_cond[0] = True

end = [False]   # shared mutable flag so the receiver thread can signal the main loop
recv_thr = threading.Thread(target=recv_loop, args=(socket, end), daemon=True)
recv_thr.start()

while not end[0]:
    time.sleep(60)
    print('data received')
You haven't shown any interesting data in the message, so I haven't either. But as all global variables are shared, it would be trivial to display, for example, the number of bytes received since the last message.
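For instance, here is a minimal sketch of that idea, assuming socket is the already-connected socket from the question (the counter is not locked, which is fine for a rough status display):

import threading
import time

bytes_since_last = 0   # shared with the receiver thread

def recv_loop(sock):
    global bytes_since_last
    while True:
        data = sock.recv(1024)
        bytes_since_last += len(data)

threading.Thread(target=recv_loop, args=(socket,), daemon=True).start()

while True:
    time.sleep(60)
    print('received %d bytes in the last minute' % bytes_since_last)
    bytes_since_last = 0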
Alternatively, you could use select.select because it gives you a timeout: you trade the threading for slightly more involved timeout handling.
import datetime
import select

last = datetime.datetime.now()
while True:
    timeout = 60 - (datetime.datetime.now() - last).seconds
    if timeout <= 0:
        last = datetime.datetime.now()
        print("data received")
        timeout = 60
    rl, _, _ = select.select([socket], [], [], timeout)
    if len(rl) > 0:
        socket.recv(1024)

Related

Issue in calculating frames sent per second opencv

I have the following code. I am measuring the time taken to send frames from client to server, basically calculating the frame drop rate and the time taken to get a response back; the communication mode is asynchronous.
Now I am facing an issue in calculating the two metrics. For the time taken to respond I have set the delay to more than 5 seconds, because due to the network and the processing speed of the server it takes a while for the result to come back to the client, hence the longer delay. But for frames per second I need to calculate how many frames are sent from the client to the server each second. How would I calculate this in the data_rate method? The two metrics need different time delays; I can't use the same delay for both. Help is highly appreciated on how to define this in the code.
import base64
import threading
import time

import cv2
import zmq

IMAGE_FOLDER = "videoframe"
FPS = 5
SERVER_A_ADDRESS = "tcp://localhost:5555"
ENDPOINT_HANDLER_ADDRESS = "tcp://*:5553"
SERVER_A_TITLE = "SERVER A"
SERVER_B_TITLE = "SERVER B"

context = zmq.Context()
socket_server_a = context.socket(zmq.PUSH)
socket_server_endpoint = context.socket(zmq.PULL)
socket_server_a.connect(SERVER_A_ADDRESS)
socket_server_endpoint.bind(ENDPOINT_HANDLER_ADDRESS)

destination = {
    "currentSocket": socket_server_a,
    "currentServersTitle": SERVER_A_TITLE,
    "currentEndpoint": SERVER_B_TITLE,
}

running = True
endpoint_responses = 0
frame_requests = 0
filenames = [f"{IMAGE_FOLDER}/frame{i}.jpg" for i in range(1, 2522)]

def handle_endpoint_responses():
    global destination, running, endpoint_responses
    while running:
        endpoint_response = socket_server_endpoint.recv().decode()
        endpoint_responses += 1

def data_rate():
    global destination, running, endpoint_responses, frame_requests
    while running:
        before_received = endpoint_responses  ###
        time.sleep(5)
        after_received = endpoint_responses
        before_sent = frame_requests
        time.sleep(1)
        after_sent = frame_requests  ###
        print(25 * "#")
        print(f"{time.strftime('%H:%M:%S')} ( i ) : receiving model results: {round((after_received - before_received) / 5, 2)} per second.")
        print(f"{time.strftime('%H:%M:%S')} ( i ) : sending frames: {round((after_sent - before_sent) / 1, 2)} per second.")
        print(25 * "#")

def send_frame(frame, frame_requests):
    global destination, running
    try:
        frame = cv2.resize(frame, (224, 224))
        encoded, buffer = cv2.imencode('.jpg', frame)
        jpg_as_text = base64.b64encode(buffer)
        destination["currentSocket"].send(jpg_as_text)
    except Exception as Error:
        running = False

def main():
    global destination, running, frame_requests
    interval = 1 / FPS
    while running:
        for img in filenames:
            frame = cv2.imread(img)
            frame_requests += 1
            threading.Thread(target=send_frame, args=(frame, frame_requests)).start()
            time.sleep(interval)
        destination["currentSocket"].close()

if __name__ == "__main__":
    threading.Thread(target=handle_endpoint_responses).start()
    threading.Thread(target=data_rate).start()
    main()
It is not only the server time: opening the image also takes time, and using sleep with interval = 1/FPS may lead to frame drops too, i.e. producing fewer frames than possible (the same happens when playing offline). For playback, if done with sleep, the interval could be shorter and the current time could be checked in the loop: if the time is right, send the next frame; if not, just wait. The delay could also be adaptive, and it may run slightly ahead of the linear schedule in order to compensate for the transmission delay, if the goal is for the server side to display or do something with the image at the actual frame time.
I think you have to synchronize/measure the difference between the clocks of the client and the server, either with an initial handshake or as part of each transaction, and to include and log the send/receive times in the transactions/log.
That way you could measure some average delay of the transmission.
Another thing that may give hints is initially to play the images without sending them. That will show how much time cv2.imread(...) and the preprocessing take.
I.e. commenting out destination["currentSocket"].send(jpg_as_text), or adding another function without it.
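As a rough illustration of that time-based pacing, here is a minimal sketch; the play function and its send argument are hypothetical stand-ins for the question's main loop and send_frame:

import time
import cv2

def play(filenames, fps, send):
    # Send each frame at its scheduled time instead of sleeping a fixed 1/fps,
    # so the time spent in cv2.imread() and preprocessing is compensated for.
    interval = 1.0 / fps
    start = time.perf_counter()
    for i, name in enumerate(filenames):
        due = start + i * interval              # scheduled time for frame i
        frame = cv2.imread(name)                # reading itself takes time
        remaining = due - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)               # ahead of schedule: wait
        send(frame)                             # behind schedule: send immediately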

The response time is decreasing for some tasks

I've written some code which is supposed to receive images and turn them black & white. I'm measuring the response time for each task (response time = the time from when an image is received until it has been turned black & white). Here is the code:
from __future__ import print_function
import signal
signal.signal(signal.SIGINT, signal.SIG_DFL)
from select import select
import socket
from struct import pack
from struct import unpack
#from collections import deque
import commands
from PIL import Image
import time

host = commands.getoutput("hostname -I")
port = 5005
backlog = 5
BUFSIZE = 4096

queueList = []
start = []
end = []
temp = []

def processP(q):
    i = 0
    while q:
        name = q.pop(0)
        col = Image.open(name)
        gray = col.convert('L')
        bw = gray.point(lambda x: 0 if x < 128 else 255, '1')
        bw.save("%d+30.jpg" % (i+1))
        end.append(time.time())
        #print(temp)
        i = i + 1

class Receiver:
    ''' Buffer binary data from socket conn '''
    def __init__(self, conn):
        self.conn = conn
        self.buff = bytearray()

    def get(self, size):
        ''' Get size bytes from the buffer, reading
            from conn when necessary
        '''
        while len(self.buff) < size:
            data = self.conn.recv(BUFSIZE)
            if not data:
                break
            self.buff.extend(data)
        # Extract the desired bytes
        result = self.buff[:size]
        # and remove them from the buffer
        del self.buff[:size]
        return bytes(result)

    def save(self, fname):
        ''' Save the remaining bytes to file fname '''
        with open(fname, 'wb') as f:
            if self.buff:
                f.write(bytes(self.buff))
            while True:
                data = self.conn.recv(BUFSIZE)
                if not data:
                    break
                f.write(data)

def read_tcp(s):
    conn, addr = s.accept()
    print('Connected with', *addr)
    # Create a buffer for this connection
    receiver = Receiver(conn)
    # Get the length of the file name
    name_size = unpack('B', receiver.get(1))[0]
    name = receiver.get(name_size).decode()
    # Save the file
    receiver.save(name)
    conn.close()
    print('saved\n')
    queueList.append(name)
    print('name', name)
    start.append(time.time())
    if (name == "sample.jpg"):
        print('------------ok-------------')
        processP(queueList)
        print("Start: ", start)
        print('--------------------------')
        print("End: ", end)
        while start:
            temp.append(end.pop(0) - start.pop(0))
        print('****************************')
        print("Start: ", start)
        print('--------------------------')
        print("End: ", end)
        print("Temp: ", temp)
        i = 0
        while i < len(temp) - 1:
            if (temp[i] < temp[i+1]):
                print('yes')
            else:
                print('No')
            i = i + 1

def read_udp(s):
    data, addr = s.recvfrom(1024)
    print("received message:", data)

def run():
    # create tcp socket
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    try:
        tcp.bind((host, port))
    except socket.error as err:
        print('Bind failed', err)
        return
    tcp.listen(1)
    # create udp socket
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP
    udp.bind((host, port))
    print('***Socket now listening at***:', host, port)
    input = [tcp, udp]
    try:
        while True:
            inputready, outputready, exceptready = select(input, [], [])
            for s in inputready:
                if s == tcp:
                    read_tcp(s)
                elif s == udp:
                    read_udp(s)
                else:
                    print("unknown socket:", s)
    # Hit Break / Ctrl-C to exit
    except KeyboardInterrupt:
        print('\nClosing')
        raise
    tcp.close()
    udp.close()

if __name__ == '__main__':
    run()
Now, for some evaluation purposes, I send a single image many times. When I look at the response times, I see that sometimes the response time of the 8th image, for example, is greater than the response time of the 9th one.
So my question is: since the size of each image and the time needed to process it are the same (I'm sending a single image several times), why does the response time vary from image to image? Shouldn't the response time of the next image be longer than (or at least equal to) that of the previous one (for example, the response time for the 4th image > the response time for the 3rd image)?
Your list contains the actual elapsed time it took for each image processing call. This value will be influenced by many things, including the amount of load on the system at that time.
When your program is running, it does not have exclusive access to all of the resources (cpu, ram, disk) of the system it's running on. There could be dozens, hundreds or thousands of other processes being managed by the OS vying for resources. Given this, it is highly unlikely that you would ever see even the same image processed in the exact same amount of time between two runs, when you are measuring with sub-second accuracy. The amount of time it takes can (and will) go up and down with each successive call.
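To see this, it is enough to time the question's own conversion on the same image a few times; a quick sketch:

import time
from PIL import Image

# Time the same black & white conversion repeatedly on one image ("sample.jpg",
# as in the question). The per-call timings fluctuate with system load rather
# than growing monotonically.
timings = []
for _ in range(10):
    t0 = time.time()
    bw = Image.open("sample.jpg").convert('L').point(lambda x: 0 if x < 128 else 255, '1')
    timings.append(time.time() - t0)

print(["%.4f" % t for t in timings])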

Send client requests to broker only when a worker is available and broker is available

With ZeroMQ for distributed messaging, I'm using the example code provided for the Paranoid Pirate pattern in Python. I have a single client (potentially there may be more clients later), a broker and multiple workers.
I have modified the example so that the client will continue to send requests to the broker queue even when no workers are available, rather than retrying and eventually exiting. The requests get distributed to the worker(s) when workers eventually become available. In my scenario each worker varies in the amount of time it takes to process a given request.
The problem I'm seeing is that when the broker goes down (becomes unavailable), the client is unable to tell that the broker is unavailable and it continues to .send() requests. These requests get lost; only new requests are processed after the broker becomes available once again.
Client.py
from random import randint
import time
import zmq

HEARTBEAT_LIVENESS = 3
HEARTBEAT_INTERVAL = 1
INTERVAL_INIT = 1
INTERVAL_MAX = 32

# Paranoid Pirate Protocol constants
PPP_READY = "PPP_READY"          # Signals worker is ready
PPP_HEARTBEAT = "PPP_HEARTBEAT"  # Signals worker heartbeat

def worker_socket(context, poller):
    """Helper function that returns a new configured socket
    connected to the Paranoid Pirate queue"""
    worker = context.socket(zmq.DEALER)  # DEALER
    identity = "%04X-%04X" % (randint(0, 0x10000), randint(0, 0x10000))
    worker.setsockopt(zmq.IDENTITY, identity)
    poller.register(worker, zmq.POLLIN)
    worker.connect("tcp://localhost:5556")
    worker.send(PPP_READY)
    return worker

context = zmq.Context(1)
poller = zmq.Poller()
liveness = HEARTBEAT_LIVENESS
interval = INTERVAL_INIT
heartbeat_at = time.time() + HEARTBEAT_INTERVAL
worker = worker_socket(context, poller)
cycles = 0

while True:
    socks = dict(poller.poll(HEARTBEAT_INTERVAL * 1000))
    # Handle worker activity on backend
    if socks.get(worker) == zmq.POLLIN:
        # Get message
        # - 3-part envelope + content -> request
        # - 1-part HEARTBEAT -> heartbeat
        frames = worker.recv_multipart()
        if not frames:
            break  # Interrupted
        if len(frames) == 3:
            print "I: Normal reply: ", frames
            liveness = HEARTBEAT_LIVENESS
            time.sleep(4)  # Do some heavy work
            worker.send_multipart(frames)
        elif len(frames) == 1 and frames[0] == PPP_HEARTBEAT:
            # print "I: Queue heartbeat"
            liveness = HEARTBEAT_LIVENESS
        else:
            print "E: Invalid message: %s" % frames
        interval = INTERVAL_INIT
    else:
        liveness -= 1
        if liveness == 0:
            print "W: Heartbeat failure, can't reach queue"
            print ("W: Reconnecting in")
            time.sleep(interval)
            if interval < INTERVAL_MAX:
                interval *= 2
            poller.unregister(worker)
            worker.setsockopt(zmq.LINGER, 0)
            worker.close()
            worker = worker_socket(context, poller)
            liveness = HEARTBEAT_LIVENESS
    if time.time() > heartbeat_at:
        heartbeat_at = time.time() + HEARTBEAT_INTERVAL
        # print "I: Worker heartbeat"
        worker.send(PPP_HEARTBEAT)
Broker.py
from collections import OrderedDict
import time
import threading
import zmq

HEARTBEAT_LIVENESS = 3     # 3..5 is reasonable
HEARTBEAT_INTERVAL = 1.0   # Seconds

# Paranoid Pirate Protocol constants
PPP_READY = "PPP_READY"          # Signals worker is ready
PPP_HEARTBEAT = "PPP_HEARTBEAT"  # Signals worker heartbeat
PPP_BUSY = "PPP_BUSY"
PPP_FREE = "PPP_FREE"

class Worker(object):
    def __init__(self, address):
        self.address = address
        self.expiry = time.time() + HEARTBEAT_INTERVAL * HEARTBEAT_LIVENESS

class WorkerQueue(object):
    def __init__(self):
        self.queue = OrderedDict()

    def ready(self, worker):
        self.queue.pop(worker.address, None)
        self.queue[worker.address] = worker

    def purge(self):
        """Look for & kill expired workers."""
        t = time.time()
        expired = []
        for address, worker in self.queue.iteritems():
            if t > worker.expiry:  # Worker expired
                expired.append(address)
        for address in expired:
            print "W: Idle worker expired: %s" % address
            self.queue.pop(address, None)

    def next(self):
        address, worker = self.queue.popitem(False)
        return address

context = zmq.Context(1)
clcontext = zmq.Context()
frontend = context.socket(zmq.ROUTER)  # ROUTER
backend = context.socket(zmq.ROUTER)   # ROUTER
frontend.bind("tcp://*:5555")  # For clients
backend.bind("tcp://*:5556")   # For workers

poll_workers = zmq.Poller()
poll_workers.register(backend, zmq.POLLIN)

poll_both = zmq.Poller()
poll_both.register(frontend, zmq.POLLIN)
poll_both.register(backend, zmq.POLLIN)

workers = WorkerQueue()
heartbeat_at = time.time() + HEARTBEAT_INTERVAL

while True:
    if len(workers.queue) > 0:
        poller = poll_both
    else:
        poller = poll_workers
    socks = dict(poller.poll(HEARTBEAT_INTERVAL * 1000))

    # Handle worker activity on backend
    if socks.get(backend) == zmq.POLLIN:
        # Use worker address for LRU routing
        frames = backend.recv_multipart()
        if not frames:
            break
        address = frames[0]
        workers.ready(Worker(address))
        # Validate control message, or return reply to client
        msg = frames[1:]
        if len(msg) == 1:
            if msg[0] not in (PPP_READY, PPP_HEARTBEAT):
                print "E: Invalid message from worker: %s" % msg
        else:
            print ("sending: %s" % msg)
            frontend.send_multipart(msg)
        # Send heartbeats to idle workers if it's time
        if time.time() >= heartbeat_at:
            for worker in workers.queue:
                msg = [worker, PPP_HEARTBEAT]
                backend.send_multipart(msg)
            heartbeat_at = time.time() + HEARTBEAT_INTERVAL

    if socks.get(frontend) == zmq.POLLIN:
        frames = frontend.recv_multipart()
        print ("client frames: %s" % frames)
        if not frames:
            break
        frames.insert(0, workers.next())
        backend.send_multipart(frames)

    workers.purge()
Worker.py
from random import randint
import time
import zmq

HEARTBEAT_LIVENESS = 3
HEARTBEAT_INTERVAL = 1
INTERVAL_INIT = 1
INTERVAL_MAX = 32

# Paranoid Pirate Protocol constants
PPP_READY = "PPP_READY"          # Signals worker is ready
PPP_HEARTBEAT = "PPP_HEARTBEAT"  # Signals worker heartbeat

def worker_socket(context, poller):
    """Helper function that returns a new configured socket
    connected to the Paranoid Pirate queue"""
    worker = context.socket(zmq.DEALER)  # DEALER
    identity = "%04X-%04X" % (randint(0, 0x10000), randint(0, 0x10000))
    worker.setsockopt(zmq.IDENTITY, identity)
    poller.register(worker, zmq.POLLIN)
    worker.connect("tcp://localhost:5556")
    worker.send(PPP_READY)
    return worker

context = zmq.Context(1)
poller = zmq.Poller()
liveness = HEARTBEAT_LIVENESS
interval = INTERVAL_INIT
heartbeat_at = time.time() + HEARTBEAT_INTERVAL
worker = worker_socket(context, poller)
cycles = 0

while True:
    socks = dict(poller.poll(HEARTBEAT_INTERVAL * 1000))
    # Handle worker activity on backend
    if socks.get(worker) == zmq.POLLIN:
        # Get message
        # - 3-part envelope + content -> request
        # - 1-part HEARTBEAT -> heartbeat
        frames = worker.recv_multipart()
        if not frames:
            break  # Interrupted
        if len(frames) == 3:
            print "I: Normal reply: ", frames
            liveness = HEARTBEAT_LIVENESS
            time.sleep(4)  # Do some heavy work
            worker.send_multipart(frames)
        elif len(frames) == 1 and frames[0] == PPP_HEARTBEAT:
            # print "I: Queue heartbeat"
            liveness = HEARTBEAT_LIVENESS
        else:
            print "E: Invalid message: %s" % frames
        interval = INTERVAL_INIT
    else:
        liveness -= 1
        if liveness == 0:
            print "W: Heartbeat failure, can't reach queue"
            print ("W: Reconnecting in")
            time.sleep(interval)
            if interval < INTERVAL_MAX:
                interval *= 2
            poller.unregister(worker)
            worker.setsockopt(zmq.LINGER, 0)
            worker.close()
            worker = worker_socket(context, poller)
            liveness = HEARTBEAT_LIVENESS
    if time.time() > heartbeat_at:
        heartbeat_at = time.time() + HEARTBEAT_INTERVAL
        # print "I: Worker heartbeat"
        worker.send(PPP_HEARTBEAT)
Add a HeartBeat signalling to the Control Layer
If the only problem is
(cit.:) "problem I'm seeing is that when the broker goes down (becomes unavailable), the client is unable to tell that the broker is unavailable and it continues to send requests."
then simply add a trivial signalling channel (be it of a PUB/SUB or another archetype) that keeps just the last signalled, timestamped presence, using the newer API's .setsockopt( zmq.CONFLATE ); or add some even more robust, bilateral handshaking that lets the sender, right before it wants to send the next message to the consumer process, acquire a reasonable assurance that all interested parties are fit and in a state that allows the target functionality to be reached.
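A minimal sketch of the PUB/SUB variant, with an assumed extra port 5557 and a one-second beat (all names and thresholds here are illustrative, not part of the question's code):

import time
import zmq

# Broker side: publish a timestamped "I am alive" beat once per second.
def broker_presence_loop(context):
    pub = context.socket(zmq.PUB)
    pub.bind("tcp://*:5557")
    while True:
        pub.send_string(str(time.time()))
        time.sleep(1.0)

# Client side: CONFLATE keeps only the newest beat; check its age before .send().
context = zmq.Context()
sub = context.socket(zmq.SUB)
sub.setsockopt(zmq.CONFLATE, 1)             # keep just the last message
sub.setsockopt_string(zmq.SUBSCRIBE, "")
sub.connect("tcp://localhost:5557")

last_seen = [0.0]                           # newest beat timestamp heard so far

def broker_seems_alive(max_age=3.0):
    try:
        last_seen[0] = float(sub.recv_string(flags=zmq.NOBLOCK))
    except zmq.Again:
        pass                                # no new beat since the last check
    return (time.time() - last_seen[0]) < max_age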
ZeroMQ is great in this very sense: it helps integrate both the smart-signalling and the message-handling services into something like a fully distributed Finite-State-Automaton, or a network of statefully cooperating FSAs if one wishes, a truly remarkable step towards heterogeneous distributed systems (queues being just a non-core, additional means for doing this in a smart, fast and scalable manner).
One soon realises the additional risks of distributed mutual locking that have to be incorporated into professional system designs and validations. This new sort of issue can and does appear both in a naive use case with REQ/REP across error-prone delivery of messages and, in more complex ways, in larger distributed signalling/messaging FSA networks.

why are my python threads blocking each other

My current predicament is that I attempted to make a blocking web serving script non-blocking, to allow more than one download to take place at any one time, but currently it hangs and waits for the first download to complete before starting the second. Before you go out of your way to downvote this because the answer is obvious, please know that this is my first ever Python script and I am self-teaching.
In the example below I only post a single "ConnectionProcessor" because they all contain the same code.
If you need more code, please just ask.
The script has 3 dependencies:
import socket # Networking support
import signal # Signal support (server shutdown on signal receive)
import threading #to make the thing run more than one at a time
Please note that the script has been edited and quite a bit of the code is missing, but I believe that it is unrelated to the problem.
def ConnectionProcessorC(self):
    connC, AddressC = self.socket.accept()
    print("C Got connection from:", AddressC)
    DataRecivedC = connC.recv(1024)            # receive data from client
    DataRecivedC = bytes.decode(DataRecivedC)  # decode it to a string
    print(DataRecivedC)
    RequestMethod = DataRecivedC.split(' ')[0]
    print("C Method: ", RequestMethod)
    if (RequestMethod == 'GET') | (RequestMethod == 'HEAD'):
        Response_Headers = 'HTTP/1.1 200 OK\n'
        # Current_Date = time.strftime("%a, %d %b %Y %H:%M:%S", time.localtime())
        # Response_Headers += 'Date: ' + current_date + '\n'
        Response_Headers += 'Server: Moes-Python-Server\n'
        Response_Headers += 'Connection: close\n\n'  # signal that the connection will be closed after completing the request
        Server_Response = Response_Headers.encode()  # return headers for GET and HEAD
        file_handler = open('/usr/share/nginx/html/100mb.dump', 'rb')
        Response_Content = file_handler.read()  # read file content
        file_handler.close()
        URL = DataRecivedC.split(' ')
        URL = URL[1]  # get 2nd element
        #Response_Content = "<html><body><p>Charlie TEStin this stuff yeh URL:" + URL + "</p></body></html>"
        Server_Response += Response_Content
        connC.send(Server_Response)
        print("C Closing connection with client")
    else:
        print("C Unknown HTTP request method:", RequestMethod)
    connC.close()
    return

def Distrabuteconnections(self):
    A = 0
    """ Main loop awaiting connections """
    while True:
        print("Awaiting new connection")
        self.socket.listen(10)  # maximum number of queued connections; changed to try and prevent waiting for the queue to clean up after closing
        if (A == 0):
            ConnectionProcessorA = threading.Thread(target=self.ConnectionProcessorA())
            ConnectionProcessorA.start()
            A = 1
        elif (A == 1):
            ConnectionProcessorB = threading.Thread(target=self.ConnectionProcessorB())
            ConnectionProcessorB.start()
            A = 2
        else:
            ConnectionProcessorC = threading.Thread(target=self.ConnectionProcessorC())
            ConnectionProcessorC.start()
            A = 0
I think that the problem could be solved by changing while True to something that loops three times instead of once.
You should pass a reference to the method you wish to start in a thread. Instead, you are calling the method, and passing the data returned from that call to the threading.Thread() call.
In short your code should become:
if (A == 0):
    ConnectionProcessorA = threading.Thread(target=self.ConnectionProcessorA)
    ConnectionProcessorA.start()
    A = 1
elif (A == 1):
    ConnectionProcessorB = threading.Thread(target=self.ConnectionProcessorB)
    ConnectionProcessorB.start()
    A = 2
else:
    ConnectionProcessorC = threading.Thread(target=self.ConnectionProcessorC)
    ConnectionProcessorC.start()
    A = 0
Note the removal of the brackets after self.ConnectionProcessorA etc. This passes a reference to the method to start in the thread, which the threading module will call itself.
Note, it is recommended to store a reference to the thread you create so that it doesn't get garbage collected. I would thus recommend your code becomes:
if (A == 0):
    self.cpa_thread = threading.Thread(target=self.ConnectionProcessorA)
    self.cpa_thread.start()
    A = 1
elif (A == 1):
    self.cpb_thread = threading.Thread(target=self.ConnectionProcessorB)
    self.cpb_thread.start()
    A = 2
else:
    self.cpc_thread = threading.Thread(target=self.ConnectionProcessorC)
    self.cpc_thread.start()
    A = 0

python Client hangs when no data to receive from server and hangs in that thread w/o letting client send

I am trying to figure out how to get my client to send and receive data 'simultaneously', and am using threads. My problem is that, depending on how I set it up, it either waits for data from the server in the receiveFromServer function (which runs in its own thread) and cannot be stopped when nothing will be sent, or it just waits for user input, sends to the server, and only then calls receiveFromServer, which doesn't allow for fluent communication, and I cannot get it to alternate automatically. How do I release the thread when the client has nothing to send, or there is nothing more to receive from the server?
It would get too long if I tried to explain everything I have tried. :)
Thanks.
The client:
from socket import *
from threading import *
import thread
import time
from struct import pack, unpack
from networklingo import *
#from exception import *

HOST = '192.168.0.105'
PORT = 21567
BUFFSIZE = 1024
ADDR = (HOST, PORT)

lock = thread.allocate_lock()

class TronClient:
    def __init__(self, control=None):
        self.tcpSock = socket(AF_INET, SOCK_STREAM)
        #self.tcpSock.settimeout(.2)
        self.recvBuff = []

    def connect(self):
        self.tcpSock.connect(ADDR)
        self.clientUID = self.tcpSock.recv(BUFFSIZE)
        print 'My clientUID is ', self.clientUID
        t = Thread(target = self.receiveFromSrv())
        t.setDaemon(1)
        t.start()
        print 'going to main loop'
        self.mainLoop()
        #t = Thread(target = self.mainLoop())
        #t.setName('mainLoop')
        #t.setDaemon(1)
        #t.start()

    def receiveFromSrv(self):
        RECIEVING = 1
        while RECIEVING:
            #print 'Attempting to retrieve more data'
            #lock.acquire()
            #print 'Lock Aquired in recieveFromSrv'
            #try:
            data = self.tcpSock.recv(BUFFSIZE)
            #except socket.timeout,e:
            #print 'Error recieving data, ',e
            #continue
            #print data
            if not data: continue
            header = data[:6]
            msgType, msgLength, clientID = unpack("hhh", header)
            print msgType
            print msgLength
            print clientID, '\n'
            msg = data[6:]
            while len(msg) < msgLength:
                data = self.tcpSock.recv(BUFFSIZE)
                dataLen = len(data)
                if dataLen <= msgLength:
                    msg += data
                else:
                    remLen = msgLength - len(data)  # we just need to retrieve the first bit of data to complete msg
                    msg += data[:remLen]
                    self.recvBuff.append(data[remLen:])
            print msg
            #else:
            #lock.release()
            # print 'lock release in receiveFromSrv'
            #time.sleep(2)
            #RECIEVING = 0

    def disconnect(self, data=''):
        self.send(DISCONNECT_REQUEST, data)
        #self.tcpSock.close()

    def send(self, msgType, msg):
        header = pack("hhh", msgType, len(msg), self.clientUID)
        msg = header + msg
        self.tcpSock.send(msg)

    def mainLoop(self):
        while 1:
            try:
                #lock.acquire()
                #print 'lock aquired in mainLoop'
                data = raw_input('> ')
            except EOFError:  # enter key hit without any data (blank line) so ignore and continue
                continue
            #if not data or data == '':  # no valid data so just continue
            #    continue
            if data == 'exit':  # client wants to disconnect, so send request to server
                self.disconnect()
                break
            else:
                self.send(TRON_CHAT, data)
            #lock.release()
            #print 'lock released in main loop'
            #self.recieveFromSrv()
            #data = self.tcpSock.recv(BUFFSIZE)
            #t = Thread(target = self.receiveFromSrv())
            #t.setDaemon(1)
            #t.start()

if __name__ == "__main__":
    cli = TronClient()
    cli.connect()
    #t = Thread(target = cli.connect())
    #t.setName('connect')
    #t.setDaemon(1)
    #t.start()
The server (uses a lock when incrementing or decrementing number of clients):
from socket import *
from threading import *
import thread
from controller import *
from networklingo import *
from struct import pack, unpack

HOST = ''
PORT = 21567
BUFSIZE = 1024
ADDR = (HOST, PORT)

nclntlock = thread.allocate_lock()

class TronServer:
    def __init__(self, maxConnect=4, control=None):
        self.servSock = socket(AF_INET, SOCK_STREAM)
        # ensure that you can restart server quickly when it terminates
        self.servSock.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
        self.servSock.bind(ADDR)
        self.servSock.listen(maxConnect)
        # keep track of number of connected clients
        self.clientsConnected = 0
        # give each client a unique identifier for this run of server
        self.clientUID = 0
        # list of all clients to cycle through for sending
        self.allClients = {}
        # keep track of threads
        self.cliThreads = {}
        # reference back to controller
        self.controller = control
        self.recvBuff = []

    def removeClient(self, clientID, addr):
        if clientID in self.allClients.keys():
            self.allClients[clientID].close()
            print "Disconnected from", addr
            nclntlock.acquire()
            self.clientsConnected -= 1
            nclntlock.release()
            del self.allClients[clientID]
        else:
            print 'ClientID is not valid'

    def recieve(self, clientsock, addr):
        RECIEVING = 1
        # loop serving the new client
        while RECIEVING:  # while PLAYING???
            try:
                data = clientsock.recv(BUFSIZE)
            except:
                RECIEVING = 0
                continue
            # if not data: break  # no data was recieved
            if data != '':
                print 'Recieved msg from client: ', data
                header = data[:6]
                msgType, msgLength, clientID = unpack("hhh", header)
                print msgType
                print msgLength
                print clientID, '\n'
                if msgType == DISCONNECT_REQUEST:  # handle disconnect request
                    self.removeClient(clientID, addr)
                else:  # pass message type and message off to controller
                    msg = data[6:]
                    while len(msg) < msgLength:
                        data = self.tcpSock.recv(BUFSIZE)
                        dataLen = len(data)
                        if dataLen <= msgLength:
                            msg += data
                        else:
                            remLen = msgLength - len(data)  # we just need to retrieve the first bit of data to complete msg
                            msg += data[:remLen]
                            self.recvBuff.append(data[remLen:])
                    print msg
                    # echo back the same data you just recieved
                    #clientsock.sendall(data)
                    self.send(TRON_CHAT, msg, -1)  # send to client 0
        for k in self.allClients.keys():
            if self.allClients[k] == clientsock:
                self.removeClient(k, addr)
                print 'deleted after hard exit from clientID ', k
                #self.cliThreads[k].join()
                #del self.cliThreads[k]
                # then tell controller to delete player with k
                break

    def send(self, msgType, msg, clientID=-1):
        header = pack("hhh", msgType, len(msg), clientID)
        msg = header + msg
        if clientID in self.allClients:
            self.allClients[clientID].send(msg)
        elif clientID == ALL_PLAYERS:
            for k in self.allClients.keys():
                self.allClients[k].send(msg)

    def mainLoop(self):
        global nclntlock
        try:
            while self.controller != None and self.controller.state == WAITING:
                print 'awaiting connections'
                clientsock, caddy = self.servSock.accept()
                nclntlock.acquire()
                self.clientsConnected += 1
                nclntlock.release()
                print 'Client ', self.clientUID, ' connected from:', caddy
                clientsock.setblocking(0)
                clientsock.send(str(self.clientUID))
                self.allClients[self.clientUID] = clientsock
                t = Thread(target = self.recieve, args = [clientsock, caddy])
                t.setName('recieve-' + str(self.clientUID))
                self.cliThreads[self.clientUID] = t
                self.clientUID += 1
                # t.setDaemon(1)
                t.start()
        finally:
            self.servSock.close()

if __name__ == "__main__":
    serv = TronServer(control = LocalController(nPlayers = 3, fWidth = 70, fHeight = 10))
    t = Thread(target = serv.mainLoop())
    t.setName('mainLoop')
    # t.setDaemon(1)
    t.start()
I think you want to try and set the socket to non-blocking mode:
http://docs.python.org/library/socket.html#socket.socket.setblocking
Set blocking or non-blocking mode of the socket: if flag is 0, the socket is set to non-blocking, else to blocking mode. Initially all sockets are in blocking mode. In non-blocking mode, if a recv() call doesn't find any data, or if a send() call can't immediately dispose of the data, an error exception is raised; in blocking mode, the calls block until they can proceed. s.setblocking(0) is equivalent to s.settimeout(0); s.setblocking(1) is equivalent to s.settimeout(None).
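For illustration, a minimal sketch of a non-blocking receive loop; the host and port are the ones from the question's client, the rest is hypothetical:

import errno
import socket

sock = socket.create_connection(('192.168.0.105', 21567))
sock.setblocking(0)                 # recv()/send() no longer block

while True:
    try:
        data = sock.recv(1024)
        if not data:
            break                   # peer closed the connection
        print('received %d bytes' % len(data))
    except socket.error as e:
        if e.errno in (errno.EAGAIN, errno.EWOULDBLOCK):
            pass                    # nothing to read right now; do other work instead
        else:
            raise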
Also, instead of using raw sockets, have you considered using the multiprocessing module? It is a higher-level abstraction for doing network IO. The section on Pipes & Queues is specific to sending and receiving data between a client and server.
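For illustration, a minimal sketch of the Pipes approach between two local processes (not a drop-in replacement for the TCP client/server above, just the send/receive pattern):

from multiprocessing import Process, Pipe

def worker(conn):
    # Child process: echo messages back until told to stop.
    while True:
        msg = conn.recv()
        if msg == 'exit':
            break
        conn.send('echo: ' + msg)
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    parent_conn.send('hello')
    print(parent_conn.recv())       # -> 'echo: hello'
    parent_conn.send('exit')
    p.join()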
