I'm streaming video from my Raspberry Pi using picamera to a websocket, so that I can view it within my local network.
I want to write my own motion detection script from scratch, so I want to grab the first image from the video stream (which will be the plain background) and then compare each subsequent frame against it with a function that checks whether something has changed (I have written those comparison functions separately). I am not really worried about efficiency here.
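(For reference, the comparison boils down to something like the sketch below; the 30-level threshold and 500-pixel trigger are just illustrative values, not my real ones.)
import numpy as np

def monitor_changes(base, current):
    # absolute per-pixel difference between the two grayscale frames;
    # cast to int16 first so uint8 subtraction doesn't wrap around
    diff = np.abs(base.astype(np.int16) - current.astype(np.int16))
    # count pixels that changed by more than an illustrative threshold
    changed = np.count_nonzero(diff > 30)
    if changed > 500:  # illustrative trigger level
        print('Motion detected: %d pixels changed' % changed)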
MAIN ISSUE:
I want to get the data from those frames into a BytesIO object, then convert it to a 2D grayscale numpy array so I can perform operations on it, all while keeping the stream going (I have in fact reduced the frame rate to 4 fps to make it run faster on my computer).
PROBLEM ENCOUNTERED WITH THE FOLLOWING CODE:
One part of the problem I have identified is that the numbers are way off. I set my camera to a resolution of 640*480, which should give a 307,200-element numpy array of grayscale pixel data, whereas my len() computations return fewer than 100k elements.
def main():
print('Initializing camera')
base_image = io.BytesIO()
image_captured = io.BytesIO()
with picamera.PiCamera() as camera:
camera.resolution = (WIDTH, HEIGHT)
camera.framerate = FRAMERATE
camera.vflip = VFLIP # flips image rightside up, as needed
camera.hflip = HFLIP # flips image left-right, as needed
sleep(1) # camera warm-up time
print('Initializing websockets server on port %d' % WS_PORT)
WebSocketWSGIHandler.http_version = '1.1'
websocket_server = make_server(
'', WS_PORT,
server_class=WSGIServer,
handler_class=WebSocketWSGIRequestHandler,
app=WebSocketWSGIApplication(handler_cls=StreamingWebSocket))
websocket_server.initialize_websockets_manager()
websocket_thread = Thread(target=websocket_server.serve_forever)
print('Initializing HTTP server on port %d' % HTTP_PORT)
http_server = StreamingHttpServer()
http_thread = Thread(target=http_server.serve_forever)
print('Initializing broadcast thread')
output = BroadcastOutput(camera)
broadcast_thread = BroadcastThread(output.converter, websocket_server)
print('Starting recording')
camera.start_recording(output, 'yuv')
try:
print('Starting websockets thread')
websocket_thread.start()
print('Starting HTTP server thread')
http_thread.start()
print('Starting broadcast thread')
broadcast_thread.start()
time.sleep(0.5)
camera.capture(base_image, use_video_port=True, format='jpeg')
base_data = np.asarray(bytearray(base_image.read()), dtype=np.uint64)
base_img_matrix = cv2.imdecode(base_data, cv2.IMREAD_GRAYSCALE)
while True:
camera.wait_recording(1)
#insert here the code for frame analysis
camera.capture(image_captured, use_video_port=True, format='jpeg')
data_next = np.asarray(bytearray(image_captured.read()), dtype=np.uint64)
next_img_matrix = cv2.imdecode(data_next, cv2.IMREAD_GRAYSCALE)
monitor_changes(base_img_matrix, next_img_matrix)
except KeyboardInterrupt:
pass
finally:
print('Stopping recording')
camera.stop_recording()
print('Waiting for broadcast thread to finish')
broadcast_thread.join()
print('Shutting down HTTP server')
http_server.shutdown()
print('Shutting down websockets server')
websocket_server.shutdown()
print('Waiting for HTTP server thread to finish')
http_thread.join()
print('Waiting for websockets thread to finish')
websocket_thread.join()
if __name__ == '__main__':
main()
Solved. Basically the problem was all in the way I was handling the data and the BytesIO objects. First of all, I needed to use unsigned int8 (uint8) as the dtype of the buffer to decode it. Then I switched to np.frombuffer combined with getvalue() to read each buffer in its entirety: the base image is never going to change, so it will always read the same thing, and the next-frame buffer is initialized and discarded on every pass of the while loop. Also, cv2.IMREAD_GRAYSCALE can be replaced with 0 in the imdecode call.
camera.start_recording(output, 'yuv')
base_image = io.BytesIO()
try:
print('Starting websockets thread')
websocket_thread.start()
print('Starting HTTP server thread')
http_thread.start()
print('Starting broadcast thread')
broadcast_thread.start()
time.sleep(0.5)
camera.capture(base_image, use_video_port=True, format='jpeg')
base_data = np.frombuffer(base_image.getvalue(), dtype=np.uint8)
base_img_matrix = cv2.imdecode(base_data, 0)
while True:
camera.wait_recording(0.25)
image_captured = io.BytesIO()
#insert here the code for frame analysis
camera.capture(image_captured, use_video_port=True, format='jpeg')
data_next = np.frombuffer(image_captured.getvalue(), dtype=np.uint8)
next_img_matrix = cv2.imdecode(data_next, cv2.IMREAD_GRAYSCALE)
monitor_changes(base_img_matrix, next_img_matrix)
image_captured.close()
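The key detail, for anyone hitting the same thing: after camera.capture() writes into a BytesIO, the stream position is left at the end, so a plain read() returns empty bytes. Either use getvalue() or rewind first:
import io

buf = io.BytesIO()
buf.write(b'...jpeg bytes...')
print(buf.read())      # b'' -> position is at the end after writing
print(buf.getvalue())  # full contents, regardless of current position
buf.seek(0)
print(buf.read())      # also works, after rewinding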
I want to constantly read images from an OpenCV camera in Python and, from the main program, read the latest image. This is needed because of problematic hardware.
After messing around with threads and getting a very low efficiency (duh!), I'd like to switch to multiprocessing.
Here's the threading version:
class WebcamStream:
# initialization method
def __init__(self, stream_id=0):
self.stream_id = stream_id # default is 0 for main camera
# opening video capture stream
self.camera = cv2.VideoCapture(self.stream_id)
self.camera.set(cv2.CAP_PROP_FRAME_WIDTH, 3840)
self.camera.set(cv2.CAP_PROP_FRAME_HEIGHT, 2880)
if self.camera.isOpened() is False:
print("[Exiting]: Error accessing webcam stream.")
exit(0)
# reading a single frame from camera stream for initializing
_, self.frame = self.camera.read()
        # self.stopped starts as True; start() flips it to False when the thread is launched
        self.stopped = True
# thread instantiation
self.t = Thread(target=self.update, args=())
self.t.daemon = True # daemon threads run in background
# method to start thread
def start(self):
self.stopped = False
self.t.start()
# method passed to thread to read next available frame
def update(self):
while True:
if self.stopped is True:
break
_, self.frame = self.camera.read()
self.camera.release()
# method to return latest read frame
def read(self):
return self.frame
# method to stop reading frames
def stop(self):
self.stopped = True
And -
if __name__ == "__main__":
main_camera_stream = WebcamStream(stream_id=0)
main_camera_stream.start()
frame = main_camera_stream.read()
Can someone please help me translate this to multiprocessing land?
Thanks!
I've written several solutions to similar problems, but it's been a little while so here we go:
I would use shared_memory as a buffer to read frames into, which can then be read by another process. My first inclination is to initialize the camera and read frames in the child process, because that seems like it would be a "set it and forget it" kind of thing.
import numpy as np
import cv2
from multiprocessing import Process, Queue
from multiprocessing.shared_memory import SharedMemory
def produce_frames(q):
#get the first frame to calculate size of buffer
cap = cv2.VideoCapture(0)
success, frame = cap.read()
shm = SharedMemory(create=True, size=frame.nbytes)
framebuffer = np.ndarray(frame.shape, frame.dtype, buffer=shm.buf) #could also maybe use array.array instead of numpy, but I'm familiar with numpy
framebuffer[:] = frame #in case you need to send the first frame to the main process
q.put(shm) #send the buffer back to main
q.put(frame.shape) #send the array details
q.put(frame.dtype)
try:
while True:
cap.read(framebuffer)
except KeyboardInterrupt:
pass
finally:
shm.close() #call this in all processes where the shm exists
shm.unlink() #call from only one process
def consume_frames(q):
shm = q.get() #get the shared buffer
shape = q.get()
dtype = q.get()
framebuffer = np.ndarray(shape, dtype, buffer=shm.buf) #reconstruct the array
try:
while True:
cv2.imshow("window title", framebuffer)
cv2.waitKey(100)
except KeyboardInterrupt:
pass
finally:
shm.close()
if __name__ == "__main__":
q = Queue()
producer = Process(target=produce_frames, args=(q,))
producer.start()
consume_frames(q)
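One caveat with this sketch: there is no locking around the shared buffer, so the consumer can occasionally display a torn frame (partly old, partly new). If that matters, guard the cap.read(framebuffer) and imshow calls with a multiprocessing.Lock, or double-buffer with two shared memory blocks.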
import numpy as np
import cv2
import multiprocessing
import time
import random
finish_state = multiprocessing.Event()
#function that requests frames
def actions_func(frame):
while True:
time.sleep(random.randint(1,5))
cv2.imshow('requested_frame_1',frame)
time.sleep(random.randint(1,5))
cv2.imshow('requested_frame_2',frame)
if cv2.waitKey(1) & 0xFF == ord('q'): break
#function that keeps the camera always on and should return the frame value with the last image only when requested
def capture_cam():
cap = cv2.VideoCapture(1)
if (cap.isOpened() == False):
print("Unable to read camera feed")
# Default resolutions of the frame are obtained. The default resolutions are system dependent.
# We convert the resolutions from float to integer.
frame_width = int(cap.get(3))
frame_height = int(cap.get(4))
while(True):
ret, frame = cap.read()
if ret == True:
cv2.imshow('frame',frame)
if cv2.waitKey(1) & 0xFF == ord('q'): break
else:
break
def main_process(finish_state):
thr1, frame = multiprocessing.Process(target=capture_cam)
thr1.start()
thr2 = multiprocessing.Process(target=actions_func, args=(frame,))
thr2.start()
if __name__ == '__main__':
main_process(finish_state)
print("continue the code with other things after all threads/processes except the main one were closed with the loop that started them... ")
I want a webcam to be open all the time capturing images; for this I have created thread1, which is supposed to run all the time regardless of the rest of the program.
What I need is to fix this program so that it asks for frames from the function that runs permanently in thread1.
The problem is that I don't know when it will be time to ask thread1 for the last frame it captured; to represent that I put random.randint(1,5), although in reality I won't know the maximum or minimum time after which the last frame will be requested from thread1.
The truth is that I'm getting tangled up with this program, and I really don't know whether it's better to create a thread2 to make the frame requests or to keep just thread1 and make the frame requests from the main thread.
Although I say threads, they are actually parallel processes; I tried with threads, but I think it is more convenient to use processes, right?
Traceback (most recent call last):
File "request_frames_thread.py", line 58, in <module>
main_process(finish_state)
File "request_frames_thread.py", line 50, in main_process
thr1, frame = multiprocessing.Process(target=capture_cam)
TypeError: cannot unpack non-iterable Process object
I would have the main process create a full-duplex multiprocessing.Pipe instance, which returns two multiprocessing.connection.Connection instances, and pass one connection to each of your processes. These connections serve as a simple two-way vehicle for sending and receiving objects to one another. I would have the capture_cam process start a daemon thread (it will terminate when all the regular threads terminate, so it can sit in an infinite loop) that is passed one of these connections and handles requests for the latest frame, which is stored in a global variable.
The only requirement is that a frame be serializable by the pickle module.
import multiprocessing
from threading import Thread
import time
import random
import cv2  # needed for the imshow/waitKey calls below
#function that requests frames
def actions_func(conn):
try:
while True:
time.sleep(random.randint(1,5))
# Ask for latest frame by sending any message:
conn.send('frame')
frame = conn.recv() # This is the response
cv2.imshow('requested_frame_1',frame)
time.sleep(random.randint(1,5))
# Ask for latest frame by sending any message:
conn.send('frame')
frame = conn.recv() # This is the response
cv2.imshow('requested_frame_2',frame)
if cv2.waitKey(1) & 0xFF == ord('q'): break
except BrokenPipeError:
# The capture_cam process has terminated.
pass
def handle_frame_requests(conn):
try:
while True:
# Any message coming in is a request for the latest frame:
request = conn.recv()
conn.send(frame) # The frame must be pickle-able
except EOFError:
# The actions_func process has ended
# and its connection has been closed.
pass
#function that keeps the camera always on and should return the frame value with the last image only when requested
def capture_cam(conn):
global frame
frame = None
    # start daemon thread to handle frame requests:
Thread(target=handle_frame_requests, args=(conn,), daemon=True).start()
cap = cv2.VideoCapture(1)
if (cap.isOpened() == False):
print("Unable to read camera feed")
# Default resolutions of the frame are obtained. The default resolutions are system dependent.
# We convert the resolutions from float to integer.
frame_width = int(cap.get(3))
frame_height = int(cap.get(4))
while(True):
ret, frame = cap.read()
if ret == True:
cv2.imshow('frame',frame)
if cv2.waitKey(1) & 0xFF == ord('q'): break
else:
break
def main_process(finish_state):
conn1, conn2 = multiprocessing.Pipe(duplex=True)
p1 = multiprocessing.Process(target=capture_cam, args=(conn1,))
p1.start()
p2 = multiprocessing.Process(target=actions_func, args=(conn2,))
p2.start()
if __name__ == '__main__':
finish_state = multiprocessing.Event()
main_process(finish_state)
At the moment I am reading an IP camera's live image using the following code:
def livestream(self):
print("start")
stream = urlopen('http://192.168.4.1:81/stream')
bytes = b''
while True:
try:
bytes += stream.read(1024)
a = bytes.find(b'\xff\xd8')
b = bytes.find(b'\xff\xd9')
if a != -1 and b != -1:
jpg = bytes[a:b+2]
bytes = bytes[b+2:]
getliveimage = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
livestreamrotated1 = cv2.rotate(getliveimage, cv2.ROTATE_90_CLOCKWISE) #here I am rotating the image
print(type(livestreamrotated1)) #type at this point is <class 'numpy.ndarray'>
cv2.imshow('video',livestreamrotated1)
if cv2.waitKey(1) ==27: # if user hit esc
exit(0) # exit program
except Exception as e:
print(e)
print("failed at this point")
Now I want to integrate the resulting image into a Kivy GUI and get rid of the while loop, since it freezes my GUI. Unfortunately the loop is necessary to reassemble the image byte by byte. I would like to use cv2.VideoCapture instead and schedule it multiple times per second. This is not working at all; I am not able to capture the image from the live stream this way. Where am I wrong?
cap = cv2.VideoCapture('http://192.168.4.1:81/stream?dummy.jpg')
ret, frame = cap.read()
cv2.imshow('stream',frame)
I read in some other post that a file ending like "dummy.jpg" would be necessary at this point, but it is still not working; the program freezes.
Please help. Thank you in advance!
If you want to decouple your reading loop from your GUI loop you can use multithreading to separate the code. You can have a thread running your livestream function and dumping the image out to a global image variable where your GUI loop can pick it up and do whatever to it.
I can't really test out the livestream part of the code, but something like this should work. The read function is an example of how to write a generic looping function that will work with this code.
import cv2
import time
import threading
import numpy as np
from urllib.request import urlopen  # used by the livestream function below
# generic threading class
class Reader(threading.Thread):
def __init__(self, func, *args):
threading.Thread.__init__(self, target = func, args = args);
self.start();
# globals for managing shared data
g_stop_threads = False;
g_lock = threading.Lock();
g_frame = None;
# reads frames from vidcap and stores them in g_frame
def read():
# grab globals
global g_stop_threads;
global g_lock;
global g_frame;
# open vidcap
cap = cv2.VideoCapture(0);
# loop
while not g_stop_threads:
# get a frame from camera
ret, frame = cap.read();
# replace the global frame
if ret:
with g_lock:
# copy so that we can quickly drop the lock
g_frame = np.copy(frame);
# sleep so that someone else can use the lock
time.sleep(0.03); # in seconds
# your livestream func
def livestream():
# grab globals
global g_stop_threads;
global g_lock;
global g_frame;
# open stream
stream = urlopen('http://192.168.4.1:81/stream')
bytes = b''
# process stream into opencv image
while not g_stop_threads:
try:
bytes += stream.read(1024)
a = bytes.find(b'\xff\xd8')
b = bytes.find(b'\xff\xd9')
if a != -1 and b != -1:
jpg = bytes[a:b+2]
bytes = bytes[b+2:]
getliveimage = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
livestreamrotated1 = cv2.rotate(getliveimage, cv2.ROTATE_90_CLOCKWISE) #here I am rotating the image
# acquire lock and replace image
with g_lock:
g_frame = livestreamrotated1;
# sleep to allow other threads to get the lock
time.sleep(0.03); # in seconds
except Exception as e:
print(e)
print("failed at this point")
def main():
# grab globals
global g_stop_threads;
global g_lock;
global g_frame;
# start a thread
# reader = Reader(read);
reader = Reader(livestream);
# show frames from g_frame
my_frame = None;
while True:
# grab lock
with g_lock:
# show
            if g_frame is not None:
# copy # we copy here to dump the lock as fast as possible
my_frame = np.copy(g_frame);
# now we can do all the slow manipulation / gui stuff here without the lock
if my_frame is not None:
cv2.imshow("Frame", my_frame);
# break out if 'q' is pressed
if cv2.waitKey(1) == ord('q'):
break;
# stop the threads
g_stop_threads = True;
if __name__ == "__main__":
main();
I've been working on a project where I use a raspberry pi to send a live video feed to my server. This kinda works but not how I'd like it to.
The problem mainly is the speed. Right now I can send a 640x480 video stream with a speed of around 3.5 FPS and a 1920x1080 with around 0.5 FPS, which is terrible. Since I am not a professional I thought there should be a way of improving my code.
The sender (Raspberry pi):
def send_stream():
connection = True
while connection:
ret,frame = cap.read()
if ret:
# You might want to enable this while testing.
# cv2.imshow('camera', frame)
b_frame = pickle.dumps(frame)
b_size = len(b_frame)
try:
s.sendall(struct.pack("<L", b_size) + b_frame)
except socket.error:
print("Socket Error!")
connection = False
else:
print("Received no frame from camera, exiting.")
exit()
The Receiver (Server):
def recv_stream(self):
payload_size = struct.calcsize("<L")
data = b''
while True:
try:
start_time = datetime.datetime.now()
# keep receiving data until it gets the size of the msg.
while len(data) < payload_size:
data += self.connection.recv(4096)
# Get the frame size and remove it from the data.
frame_size = struct.unpack("<L", data[:payload_size])[0]
data = data[payload_size:]
# Keep receiving data until the frame size is reached.
while len(data) < frame_size:
data += self.connection.recv(32768)
# Cut the frame to the beginning of the next frame.
frame_data = data[:frame_size]
data = data[frame_size:]
frame = pickle.loads(frame_data)
frame = cv2.cvtColor(frame,cv2.COLOR_BGR2RGB)
end_time = datetime.datetime.now()
fps = 1/(end_time-start_time).total_seconds()
print("Fps: ",round(fps,2))
self.detect_motion(frame,fps)
self.current_frame = frame
except (socket.error,socket.timeout) as e:
# The timeout got reached or the client disconnected. Clean up the mess.
print("Cleaning up: ",e)
try:
self.connection.close()
except socket.error:
pass
self.is_connected = False
break
One potential reason could be I/O latency when reading frames. Since cv2.VideoCapture().read() is a blocking operation, the main program is stalled until a frame is read from the camera device and returned. A way to improve performance is to spawn another thread that handles grabbing frames in parallel, instead of relying on a single thread to grab frames in sequential order: the new thread only polls for new frames, while the main thread processes/plots the most recent frame.
Your current approach (Sequential):
Thread 1: Grab frame -> Process frame -> Plot
Proposed approach (Parallel):
Thread 1: Grab frame
from threading import Thread
import time
import cv2

cap = cv2.VideoCapture(0)
latest_frame = None

def get_frames():
    global latest_frame
    while True:
        ret, frame = cap.read()
        if ret:
            latest_frame = frame  # keep only the most recent frame
        time.sleep(.01)

thread_frames = Thread(target=get_frames, args=())
thread_frames.daemon = True
thread_frames.start()
Thread 2: Process frame -> Plot
def process_frames():
    while True:
        # Grab the most recent frame, e.g. frame = latest_frame
        # Process/plot the frame
        ...
By having separate threads, your program runs in parallel: there is always a frame ready to be processed, instead of having to wait for a frame to be read in before processing can begin.
Note: This method gives a performance boost through I/O latency reduction. It isn't a true increase in FPS so much as a dramatic reduction in latency (a frame is always available for processing; we don't need to poll the camera device and wait for the I/O to complete).
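Putting the two pieces together, a minimal self-contained sketch might look like this (camera index 0 and the 10 ms sleep are placeholder choices):
from threading import Thread
import time
import cv2

class ThreadedCamera:
    def __init__(self, src=0):
        self.cap = cv2.VideoCapture(src)
        self.frame = None
        self.stopped = False
        Thread(target=self._update, daemon=True).start()

    def _update(self):
        # grab frames as fast as the camera delivers them
        while not self.stopped:
            ret, frame = self.cap.read()
            if ret:
                self.frame = frame
            time.sleep(.01)
        self.cap.release()

    def read(self):
        # return the most recent frame without blocking on camera I/O
        return self.frame

    def stop(self):
        self.stopped = True

if __name__ == '__main__':
    cam = ThreadedCamera(0)
    while True:
        frame = cam.read()
        if frame is not None:
            cv2.imshow('frame', frame)
        if cv2.waitKey(1) == ord('q'):
            cam.stop()
            break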
After searching the internet for ages, I found a quick solution which doubled the fps (this is still way too low: 1.1 fps at 1080p). I stopped using pickle and used cv2.imencode plus base64 instead; apparently pickling the raw frame just takes a while, and the JPEG encoding also shrinks the payload considerably. Anyway, this is my new code:
The sender (Raspberry pi):
def send_stream():
global connected
connection = True
while connection:
if last_frame is not None:
# You might want to uncomment these lines while testing.
# cv2.imshow('camera', frame)
# cv2.waitKey(1)
frame = last_frame
# The old pickling method.
#b_frame = pickle.dumps(frame)
encoded, buffer = cv2.imencode('.jpg', frame)
b_frame = base64.b64encode(buffer)
b_size = len(b_frame)
print("Frame size = ",b_size)
try:
s.sendall(struct.pack("<L", b_size) + b_frame)
except socket.error:
print("Socket Error!")
connection = False
connected = False
s.close()
return "Socket Error"
else:
return "Received no frame from camera"
The Receiver (Server):
def recv_stream(self):
payload_size = struct.calcsize("<L")
data = b''
while True:
try:
start_time = datetime.datetime.now()
# keep receiving data until it gets the size of the msg.
while len(data) < payload_size:
data += self.connection.recv(4096)
# Get the frame size and remove it from the data.
frame_size = struct.unpack("<L", data[:payload_size])[0]
data = data[payload_size:]
# Keep receiving data until the frame size is reached.
while len(data) < frame_size:
data += self.connection.recv(131072)
# Cut the frame to the beginning of the next frame.
frame_data = data[:frame_size]
data = data[frame_size:]
# using the old pickling method.
# frame = pickle.loads(frame_data)
# Converting the image to be sent.
img = base64.b64decode(frame_data)
            npimg = np.frombuffer(img, dtype=np.uint8)  # np.fromstring is deprecated
frame = cv2.imdecode(npimg, 1)
frame = cv2.cvtColor(frame,cv2.COLOR_BGR2RGB)
end_time = datetime.datetime.now()
fps = 1/(end_time-start_time).total_seconds()
print("Fps: ",round(fps,2))
self.detect_motion(frame,fps)
self.current_frame = frame
except (socket.error,socket.timeout) as e:
# The timeout got reached or the client disconnected. Clean up the mess.
print("Cleaning up: ",e)
try:
self.connection.close()
except socket.error:
pass
self.is_connected = False
break
I also increased the receive chunk size, which increased the fps when sending from my local machine to itself while testing, but this didn't change anything whatsoever when using the Raspberry Pi.
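One further tweak worth trying (I haven't benchmarked it): base64 inflates the payload by roughly a third, and since the length is sent explicitly there is no need for a text-safe encoding. Sending the raw JPEG buffer from send_stream, and decoding it directly in recv_stream, should shrink the transfer:
# sender: skip the base64 step and send the raw JPEG bytes
encoded, buffer = cv2.imencode('.jpg', frame)
b_frame = buffer.tobytes()
s.sendall(struct.pack("<L", len(b_frame)) + b_frame)

# receiver: decode straight from the received bytes, no b64decode
npimg = np.frombuffer(frame_data, dtype=np.uint8)
frame = cv2.imdecode(npimg, 1)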
You can see the full code on my github: https://github.com/Ruud14/SecurityCamera
I used Python sockets to make a server on my Raspberry Pi 3 (Raspbian) and a client on my laptop (Windows 10). The server streams images to the laptop at a rate of 10 fps, and can reach 15 fps if I push it. The problem is that when I want the laptop to send back a command based on the image, the frame rate drops sharply to 3 fps. The process is like this:
Pi send img => Laptop receive img => Quick process => Send command based on process result => Pi receive command, print it => Pi send img => ...
The processing time for each frame does not cause this (0.02 s at most per frame), so currently I am at a loss as to why the frame rate drops so much. The image is quite large, at around 200 kB, and the command is only a short 3-byte string. The image is in matrix form and is pickled before sending, while the command is sent as is.
Can someone please explain to me why sending back such a short command makes the frame rate drop so much? And if possible, suggest a solution for this problem. I tried making two servers, one dedicated to sending images and one to receiving commands, but the result is the same.
Server:
import socket
import pickle
import time
import cv2
import numpy as np
from picamera.array import PiRGBArray
from picamera import PiCamera
from SendFrameInOO import PiImageServer
def main():
# initialize the server and time stamp
ImageServer = PiImageServer()
ImageServer2 = PiImageServer()
ImageServer.openServer('192.168.0.89', 50009)
ImageServer2.openServer('192.168.0.89', 50002)
# Initialize the camera object
camera = PiCamera()
camera.resolution = (320, 240)
camera.framerate = 10 # it seems this cannot go higher than 10
# unless special measures are taken, which may
# reduce image quality
camera.exposure_mode = 'sports' #reduce blur
rawCapture = PiRGBArray(camera)
# allow the camera to warmup
time.sleep(1)
# capture frames from the camera
print('<INFO> Preparing to stream video...')
timeStart = time.time()
for frame in camera.capture_continuous(rawCapture, format="bgr",
use_video_port = True):
# grab the raw NumPy array representing the image, then initialize
# the timestamp and occupied/unoccupied text
image = frame.array
imageData = pickle.dumps(image)
ImageServer.sendFrame(imageData) # send the frame data
# receive command from laptop and print it
command = ImageServer2.recvCommand()
if command == 'BYE':
print('BYE received, ending stream session...')
break
print(command)
# clear the stream in preparation for the next one
rawCapture.truncate(0)
print('<INFO> Video stream ended')
ImageServer.closeServer()
elapsedTime = time.time() - timeStart
print('<INFO> Total elapsed time is: ', elapsedTime)
if __name__ == '__main__': main()
Client:
from SupFunctions.ServerClientFunc import PiImageClient
import time
import pickle
import cv2
def main():
# Initialize
result = 'STP'
ImageClient = PiImageClient()
ImageClient2 = PiImageClient()
# Connect to server
ImageClient.connectClient('192.168.0.89', 50009)
ImageClient2.connectClient('192.168.0.89', 50002)
print('<INFO> Connection established, preparing to receive frames...')
timeStart = time.time()
# Receiving and processing frames
while(1):
# Receive and unload a frame
imageData = ImageClient.receiveFrame()
image = pickle.loads(imageData)
cv2.imshow('Frame', image)
key = cv2.waitKey(1) & 0xFF
# Exit when q is pressed
if key == ord('q'):
ImageClient.sendCommand('BYE')
break
ImageClient2.sendCommand(result)
ImageClient.closeClient()
elapsedTime = time.time() - timeStart
print('<INFO> Total elapsed time is: ', elapsedTime)
print('Press any key to exit the program')
#cv2.imshow('Picture from server', image)
cv2.waitKey(0)
if __name__ == '__main__': main()
PiImageServer and PiImageClient:
import socket
import pickle
import time
class PiImageClient:
def __init__(self):
self.s = None
self.counter = 0
def connectClient(self, serverIP, serverPort):
self.s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.s.connect((serverIP, serverPort))
def closeClient(self):
self.s.close()
def receiveOneImage(self):
imageData = b''
lenData = self.s.recv(8)
length = pickle.loads(lenData) # should be 921764 for 640x480 images
print('Data length is:', length)
while len(imageData) < length:
toRead = length-len(imageData)
imageData += self.s.recv(4096 if toRead>4096 else toRead)
#if len(imageData)%200000 <= 4096:
# print('Received: {} of {}'.format(len(imageData), length))
return imageData
def receiveFrame(self):
imageData = b''
lenData = self.s.recv(8)
length = pickle.loads(lenData)
print('Data length is:', length)
'''length = 921764 # for 640x480 images
length = 230563 # for 320x240 images'''
while len(imageData) < length:
toRead = length-len(imageData)
imageData += self.s.recv(4096 if toRead>4096 else toRead)
#if len(imageData)%200000 <= 4096:
# print('Received: {} of {}'.format(len(imageData), length))
self.counter += 1
if len(imageData) == length:
print('Successfully received frame {}'.format(self.counter))
return imageData
def sendCommand(self, command):
if len(command) != 3:
print('<WARNING> Length of command string is different from 3')
self.s.send(command.encode())
print('Command {} sent'.format(command))
class PiImageServer:
def __init__(self):
self.s = None
self.conn = None
self.addr = None
#self.currentTime = time.time()
self.currentTime = time.asctime(time.localtime(time.time()))
self.counter = 0
def openServer(self, serverIP, serverPort):
print('<INFO> Opening image server at {}:{}'.format(serverIP,
serverPort))
self.s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.s.bind((serverIP, serverPort))
self.s.listen(1)
print('Waiting for client...')
self.conn, self.addr = self.s.accept()
print('Connected by', self.addr)
def closeServer(self):
print('<INFO> Closing server...')
self.conn.close()
self.s.close()
#self.currentTime = time.time()
self.currentTime = time.asctime(time.localtime(time.time()))
print('Server closed at', self.currentTime)
def sendOneImage(self, imageData):
print('<INFO> Sending only one image...')
imageDataLen = len(imageData)
lenData = pickle.dumps(imageDataLen)
print('Sending image length')
self.conn.send(lenData)
print('Sending image data')
self.conn.send(imageData)
def sendFrame(self, frameData):
self.counter += 1
print('Sending frame ', self.counter)
frameDataLen = len(frameData)
lenData = pickle.dumps(frameDataLen)
self.conn.send(lenData)
self.conn.send(frameData)
def recvCommand(self):
commandData = self.conn.recv(3)
command = commandData.decode()
return command
I believe the problem is two-fold. First, you are serializing all activity: the server sends a complete image, then instead of continuing on to send the next image (which would better fit the definition of "streaming"), it stops, waits for all bytes of the previous image to make their way across the network to the client, then for the client to receive all bytes of the image, unpickle it and send a response, and for the response to make its own way across the wire back to the server.
Is there a reason you need them to be in lockstep like this? If not, try to parallelize the two sides. Have your server create a separate thread to listen for commands coming back (or simply use select to determine when the command socket has something to receive).
Second, you are likely being bitten by Nagle's algorithm (https://en.wikipedia.org/wiki/Nagle%27s_algorithm), which is intended to prevent sending numerous packets with small payloads (but lots of overhead) across the network. Your client-side kernel has received your three bytes of command data and buffered them, waiting for you to provide more data before it sends anything to the server (it will eventually send them anyway, after a delay). To change that, use the TCP_NODELAY socket option on the client side (see https://stackoverflow.com/a/31827588/1076479).
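For example, inside PiImageClient.connectClient from the code above, right after the socket is created:
    def connectClient(self, serverIP, serverPort):
        self.s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # disable Nagle's algorithm so the 3-byte commands are sent
        # immediately instead of being buffered by the kernel
        self.s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        self.s.connect((serverIP, serverPort))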