How can I improve my Python OpenCV video stream?

I've been working on a project where I use a Raspberry Pi to send a live video feed to my server. This kind of works, but not how I'd like it to.
The problem is mainly speed. Right now I can send a 640x480 video stream at around 3.5 FPS and a 1920x1080 stream at around 0.5 FPS, which is terrible. Since I am not a professional, I figure there should be a way to improve my code.
The sender (Raspberry Pi):
def send_stream():
    connection = True
    while connection:
        ret, frame = cap.read()
        if ret:
            # You might want to enable this while testing.
            # cv2.imshow('camera', frame)
            b_frame = pickle.dumps(frame)
            b_size = len(b_frame)
            try:
                s.sendall(struct.pack("<L", b_size) + b_frame)
            except socket.error:
                print("Socket Error!")
                connection = False
        else:
            print("Received no frame from camera, exiting.")
            exit()
The Receiver (Server):
def recv_stream(self):
    payload_size = struct.calcsize("<L")
    data = b''
    while True:
        try:
            start_time = datetime.datetime.now()
            # Keep receiving data until it gets the size of the msg.
            while len(data) < payload_size:
                data += self.connection.recv(4096)
            # Get the frame size and remove it from the data.
            frame_size = struct.unpack("<L", data[:payload_size])[0]
            data = data[payload_size:]
            # Keep receiving data until the frame size is reached.
            while len(data) < frame_size:
                data += self.connection.recv(32768)
            # Cut the frame off the beginning of the buffer.
            frame_data = data[:frame_size]
            data = data[frame_size:]
            frame = pickle.loads(frame_data)
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            end_time = datetime.datetime.now()
            fps = 1 / (end_time - start_time).total_seconds()
            print("Fps: ", round(fps, 2))
            self.detect_motion(frame, fps)
            self.current_frame = frame
        except (socket.error, socket.timeout) as e:
            # The timeout was reached or the client disconnected. Clean up the mess.
            print("Cleaning up: ", e)
            try:
                self.connection.close()
            except socket.error:
                pass
            self.is_connected = False
            break

One potential reason is I/O latency when reading frames. Since cv2.VideoCapture().read() is a blocking operation, the main program stalls until a frame is read from the camera device and returned. A way to improve performance is to spawn a separate thread that only polls for new frames in parallel, while the main thread processes/plots the most recent frame, instead of relying on a single thread to grab frames in sequential order.
Your current approach (sequential):
Thread 1: Grab frame -> Process frame -> Plot
Proposed approach (parallel):
Thread 1: Grab frame

from threading import Thread
import time

def get_frames():
    while True:
        ret, frame = cap.read()
        time.sleep(.01)

thread_frames = Thread(target=get_frames, args=())
thread_frames.daemon = True
thread_frames.start()

Thread 2: Process frame -> Plot

def process_frames():
    while True:
        # Grab the most recent frame
        # Process/plot the frame
        ...

By running the grab loop in its own thread, there will always be a frame ready to be processed, instead of having to wait for a frame to be read in before processing can begin.
Note: This method gives a performance boost from I/O latency reduction. It isn't a true increase in camera FPS so much as a dramatic reduction in latency: a frame is always available for processing, since we no longer poll the camera device and wait for the I/O to complete.
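Putting the pieces together, here is a minimal self-contained sketch of the idea (the camera index, sleep interval, and class name are my own assumptions; there is also no lock around self.frame for brevity, though a lock-guarded variant appears in an answer further down the page):

import time
from threading import Thread

import cv2

class ThreadedCapture:
    # Polls the camera on a background thread; read() always returns the latest frame.
    def __init__(self, src=0):
        self.cap = cv2.VideoCapture(src)
        self.ret, self.frame = self.cap.read()
        self.running = True
        Thread(target=self._update, daemon=True).start()

    def _update(self):
        # Grab frames as fast as the camera delivers them.
        while self.running:
            self.ret, self.frame = self.cap.read()
            time.sleep(0.01)  # yield so other threads can run

    def read(self):
        # Never blocks on camera I/O; just hands back the newest frame.
        return self.ret, self.frame

    def stop(self):
        self.running = False
        self.cap.release()

if __name__ == '__main__':
    stream = ThreadedCapture(0)
    while True:
        ret, frame = stream.read()
        if ret:
            cv2.imshow('camera', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    stream.stop()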

After searching the internet for ages, I found a quick solution which doubled the FPS (still way too low: 1.1 FPS at 1080p). What I did was stop pickling the raw frame and instead JPEG-encode it with cv2.imencode, then base64-encode the buffer; apparently sending the uncompressed pickled image is what takes so long. Anyway, this is my new code:
The sender (Raspberry Pi):
def send_stream():
    global connected
    connection = True
    while connection:
        if last_frame is not None:
            # You might want to uncomment these lines while testing.
            # cv2.imshow('camera', frame)
            # cv2.waitKey(1)
            frame = last_frame
            # The old pickling method.
            # b_frame = pickle.dumps(frame)
            encoded, buffer = cv2.imencode('.jpg', frame)
            b_frame = base64.b64encode(buffer)
            b_size = len(b_frame)
            print("Frame size = ", b_size)
            try:
                s.sendall(struct.pack("<L", b_size) + b_frame)
            except socket.error:
                print("Socket Error!")
                connection = False
                connected = False
                s.close()
                return "Socket Error"
        else:
            return "Received no frame from camera"
The Receiver (Server):
def recv_stream(self):
    payload_size = struct.calcsize("<L")
    data = b''
    while True:
        try:
            start_time = datetime.datetime.now()
            # Keep receiving data until it gets the size of the msg.
            while len(data) < payload_size:
                data += self.connection.recv(4096)
            # Get the frame size and remove it from the data.
            frame_size = struct.unpack("<L", data[:payload_size])[0]
            data = data[payload_size:]
            # Keep receiving data until the frame size is reached.
            while len(data) < frame_size:
                data += self.connection.recv(131072)
            # Cut the frame off the beginning of the buffer.
            frame_data = data[:frame_size]
            data = data[frame_size:]
            # The old pickling method.
            # frame = pickle.loads(frame_data)
            # Decode the received image.
            img = base64.b64decode(frame_data)
            npimg = np.frombuffer(img, dtype=np.uint8)  # np.fromstring is deprecated
            frame = cv2.imdecode(npimg, 1)
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            end_time = datetime.datetime.now()
            fps = 1 / (end_time - start_time).total_seconds()
            print("Fps: ", round(fps, 2))
            self.detect_motion(frame, fps)
            self.current_frame = frame
        except (socket.error, socket.timeout) as e:
            # The timeout was reached or the client disconnected. Clean up the mess.
            print("Cleaning up: ", e)
            try:
                self.connection.close()
            except socket.error:
                pass
            self.is_connected = False
            break
I also increased the receive buffer size, which increased the FPS when sending from my local machine to itself while testing, but this didn't change anything whatsoever when using the Raspberry Pi.
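One more squeeze worth trying, as a sketch under my own assumption that the roughly 33% size overhead of base64 matters here: cv2.imencode already returns a byte buffer, so it can be sent directly with the same <L length-prefix framing, and the receiver can skip b64decode:

# Sender: send the raw JPEG buffer, no base64.
ret, buffer = cv2.imencode('.jpg', frame)
b_frame = buffer.tobytes()
s.sendall(struct.pack("<L", len(b_frame)) + b_frame)

# Receiver: decode the received bytes directly.
# npimg = np.frombuffer(frame_data, dtype=np.uint8)
# frame = cv2.imdecode(npimg, 1)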
You can see the full code on my github: https://github.com/Ruud14/SecurityCamera

Related

How to have a thread/process always active and request the updated value of one of its variables every certain time interval in another thread/process

import numpy as np
import cv2
import multiprocessing
import time
import random

finish_state = multiprocessing.Event()

# function that requests frames
def actions_func(frame):
    while True:
        time.sleep(random.randint(1,5))
        cv2.imshow('requested_frame_1', frame)
        time.sleep(random.randint(1,5))
        cv2.imshow('requested_frame_2', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'): break

# function that keeps the camera always on and should return the frame value
# with the last image only when requested
def capture_cam():
    cap = cv2.VideoCapture(1)
    if (cap.isOpened() == False):
        print("Unable to read camera feed")
    # Default resolutions of the frame are obtained. The default resolutions are system dependent.
    # We convert the resolutions from float to integer.
    frame_width = int(cap.get(3))
    frame_height = int(cap.get(4))
    while(True):
        ret, frame = cap.read()
        if ret == True:
            cv2.imshow('frame', frame)
            if cv2.waitKey(1) & 0xFF == ord('q'): break
        else:
            break

def main_process(finish_state):
    thr1, frame = multiprocessing.Process(target=capture_cam)
    thr1.start()
    thr2 = multiprocessing.Process(target=actions_func, args=(frame,))
    thr2.start()

if __name__ == '__main__':
    main_process(finish_state)
    print("continue the code with other things after all threads/processes except the main one were closed with the loop that started them... ")
I want a webcam to be open all the time capturing images; for this I have created thread1, which is supposed to run all the time regardless of the rest of the program.
What I need is to fix this program so it can ask for frames from the function that always runs on thread1.
The problem is that I don't know when it will be time to ask thread1 for the last frame it captured; to represent that I put the random.randint(1,5), although in reality I won't know the maximum or minimum time at which the last frame will be requested from thread1.
The truth is that I'm getting tangled up with this program, and I really don't know if it's better to create a thread2 to do the frame requests or to just have thread1 and make the frame requests from the main thread.
Although I say threads, they are actually parallel processes; I tried threads, but I think it is more convenient to use processes, right?
Traceback (most recent call last):
  File "request_frames_thread.py", line 58, in <module>
    main_process(finish_state)
  File "request_frames_thread.py", line 50, in main_process
    thr1, frame = multiprocessing.Process(target=capture_cam)
TypeError: cannot unpack non-iterable Process object
I would have the main process create a full-duplex multiprocessing.Pipe instance, which returns two multiprocessing.connection.Connection instances, and pass one connection to each of your processes. These connections can be used as a simple two-way communication vehicle for sending and receiving objects to one another. I would have the capture_cam process start a daemon thread (it will terminate when all your regular threads terminate, so it can sit in an infinite loop) that is passed one of these connections to handle requests for the latest frame, which is stored in a global variable.
The only requirement is that a frame be serializable by the pickle module.
import multiprocessing
from threading import Thread
import time
import random

import cv2  # needed for imshow/waitKey below

# function that requests frames
def actions_func(conn):
    try:
        while True:
            time.sleep(random.randint(1,5))
            # Ask for the latest frame by sending any message:
            conn.send('frame')
            frame = conn.recv()  # This is the response
            cv2.imshow('requested_frame_1', frame)
            time.sleep(random.randint(1,5))
            # Ask for the latest frame by sending any message:
            conn.send('frame')
            frame = conn.recv()  # This is the response
            cv2.imshow('requested_frame_2', frame)
            if cv2.waitKey(1) & 0xFF == ord('q'): break
    except BrokenPipeError:
        # The capture_cam process has terminated.
        pass

def handle_frame_requests(conn):
    try:
        while True:
            # Any message coming in is a request for the latest frame:
            request = conn.recv()
            conn.send(frame)  # The frame must be pickle-able
    except EOFError:
        # The actions_func process has ended
        # and its connection has been closed.
        pass

# function that keeps the camera always on and returns the frame value
# with the last image only when requested
def capture_cam(conn):
    global frame
    frame = None
    # Start a daemon thread to handle frame requests:
    Thread(target=handle_frame_requests, args=(conn,), daemon=True).start()
    cap = cv2.VideoCapture(1)
    if (cap.isOpened() == False):
        print("Unable to read camera feed")
    # Default resolutions of the frame are obtained. The default resolutions are system dependent.
    # We convert the resolutions from float to integer.
    frame_width = int(cap.get(3))
    frame_height = int(cap.get(4))
    while(True):
        ret, frame = cap.read()
        if ret == True:
            cv2.imshow('frame', frame)
            if cv2.waitKey(1) & 0xFF == ord('q'): break
        else:
            break

def main_process(finish_state):
    conn1, conn2 = multiprocessing.Pipe(duplex=True)
    p1 = multiprocessing.Process(target=capture_cam, args=(conn1,))
    p1.start()
    p2 = multiprocessing.Process(target=actions_func, args=(conn2,))
    p2.start()

if __name__ == '__main__':
    finish_state = multiprocessing.Event()
    main_process(finish_state)
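A small addition worth considering (my assumption about the intended behavior): as written, main_process returns as soon as the workers are started, and the program keeps running only because the processes are non-daemonic. If you want the parent to block until q is pressed in both windows, join the workers at the end of main_process:

    # At the end of main_process: block until both worker processes exit.
    p1.join()
    p2.join()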

OpenCV Python IP Camera live image

At the moment I am reading an IP camera's live image using the following code:
def livestream(self):
    print("start")
    stream = urlopen('http://192.168.4.1:81/stream')
    bytes = b''
    while True:
        try:
            bytes += stream.read(1024)
            a = bytes.find(b'\xff\xd8')  # JPEG start-of-image marker
            b = bytes.find(b'\xff\xd9')  # JPEG end-of-image marker
            if a != -1 and b != -1:
                jpg = bytes[a:b+2]
                bytes = bytes[b+2:]
                getliveimage = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
                livestreamrotated1 = cv2.rotate(getliveimage, cv2.ROTATE_90_CLOCKWISE)  # here I am rotating the image
                print(type(livestreamrotated1))  # type at this point is <class 'numpy.ndarray'>
                cv2.imshow('video', livestreamrotated1)
                if cv2.waitKey(1) == 27:  # if user hit esc
                    exit(0)  # exit program
        except Exception as e:
            print(e)
            print("failed at this point")
Now I want to integrate the resulting image into a Kivy GUI and get rid of the while loop, since it freezes my GUI. Unfortunately the loop is necessary to reassemble the image byte by byte. I would like to use cv2.VideoCapture instead and schedule it multiple times per second, but this is not working at all; I am not able to capture an image from the live stream this way. Where am I wrong?
cap = cv2.VideoCapture('http://192.168.4.1:81/stream?dummy.jpg')
ret, frame = cap.read()
cv2.imshow('stream',frame)
I read in some other post that a file ending like "dummy.jpg" would be necessary at this point, but it is still not working; the program freezes.
Please help. Thank you in advance!
If you want to decouple your reading loop from your GUI loop, you can use multithreading to separate the code. Have a thread run your livestream function and dump the image into a global variable, where your GUI loop can pick it up and do whatever with it.
I can't really test the livestream part of the code, but something like this should work. The read function is an example of how to write a generic looping function that will work with this code.
import cv2
import time
import threading
import numpy as np
from urllib.request import urlopen  # needed by livestream()

# generic threading class
class Reader(threading.Thread):
    def __init__(self, func, *args):
        threading.Thread.__init__(self, target = func, args = args);
        self.start();

# globals for managing shared data
g_stop_threads = False;
g_lock = threading.Lock();
g_frame = None;

# reads frames from vidcap and stores them in g_frame
def read():
    # grab globals
    global g_stop_threads;
    global g_lock;
    global g_frame;
    # open vidcap
    cap = cv2.VideoCapture(0);
    # loop
    while not g_stop_threads:
        # get a frame from camera
        ret, frame = cap.read();
        # replace the global frame
        if ret:
            with g_lock:
                # copy so that we can quickly drop the lock
                g_frame = np.copy(frame);
        # sleep so that someone else can use the lock
        time.sleep(0.03); # in seconds

# your livestream func
def livestream():
    # grab globals
    global g_stop_threads;
    global g_lock;
    global g_frame;
    # open stream
    stream = urlopen('http://192.168.4.1:81/stream')
    bytes = b''
    # process stream into opencv image
    while not g_stop_threads:
        try:
            bytes += stream.read(1024)
            a = bytes.find(b'\xff\xd8')
            b = bytes.find(b'\xff\xd9')
            if a != -1 and b != -1:
                jpg = bytes[a:b+2]
                bytes = bytes[b+2:]
                getliveimage = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
                livestreamrotated1 = cv2.rotate(getliveimage, cv2.ROTATE_90_CLOCKWISE) # here I am rotating the image
                # acquire lock and replace image
                with g_lock:
                    g_frame = livestreamrotated1;
                # sleep to allow other threads to get the lock
                time.sleep(0.03); # in seconds
        except Exception as e:
            print(e)
            print("failed at this point")

def main():
    # grab globals
    global g_stop_threads;
    global g_lock;
    global g_frame;
    # start a thread
    # reader = Reader(read);
    reader = Reader(livestream);
    # show frames from g_frame
    my_frame = None;
    while True:
        # grab lock
        with g_lock:
            # show
            if g_frame is not None:
                # copy here to drop the lock as fast as possible
                my_frame = np.copy(g_frame);
        # now we can do all the slow manipulation / gui stuff here without the lock
        if my_frame is not None:
            cv2.imshow("Frame", my_frame);
        # break out if 'q' is pressed
        if cv2.waitKey(1) == ord('q'):
            break;
    # stop the threads
    g_stop_threads = True;

if __name__ == "__main__":
    main();

How to use pyav or opencv to decode a live stream of raw H.264 data?

The data is received over a socket, with no container: pure I/P/B frames, each beginning with a NAL header (something like 00 00 00 01). I am currently using PyAV to decode the frames, but I can only decode the data after the second PPS info (in a key frame) has been received (so the chunk of data I send to my decode thread can begin with PPS and SPS); otherwise decode() or demux() returns the error "non-existing PPS 0 referenced decode_slice_header error".
I want to feed data to a persistent decoder which remembers the previous P frame, so that after feeding one B frame the decoder returns a decoded video frame. Alternatively, some form of IO that can be opened as a container and kept fed with data by another thread.
Here is my key code:
# read thread... read until a key frame is received, then make a new
# io.BytesIO() to store the new data.
rawFrames = io.BytesIO()
while flag_get_keyFrame:
    ....
    content = socket.recv(2048)
    rawFrames.write(content)
    ....

# decode thread... decode content between two key frames
....
rawFrames.seek(0)
container = av.open(rawFrames)
for packet in container.demux():
    for frame in packet.decode():
        self.frames.append(frame)
....
My code will play the video, but with a 3-4 second delay. So I am not putting all of it here, because I know it's not actually working for what I want to achieve.
I want to play the video right after receiving the first key frame and decode the following frames as soon as they arrive. PyAV, OpenCV, FFmpeg or something else: how can I achieve my goal?
After hours of looking for an answer to this as well, I figured it out myself.
For a single thread, you can do the following:
rawData = io.BytesIO()
container = av.open(rawData, format="h264", mode='r')
cur_pos = 0
while True:
    data = await websocket.recv()
    rawData.write(data)
    rawData.seek(cur_pos)
    for packet in container.demux():
        if packet.size == 0:
            continue
        cur_pos += packet.size
        for frame in packet.decode():
            self.frames.append(frame)
That is the basic idea. I have worked out a generic version that separates the receiving thread and the decoding thread. The code will also skip frames if the CPU cannot keep up with the decoding speed, and will resume decoding from the next key frame (so you will not get the torn green-screen effect). Here is the full version of the code:
import asyncio
import av
import cv2
import io
from multiprocessing import Process, Queue, Event
import time
import websockets

def display_frame(frame, start_time, pts_offset, frame_rate):
    if frame.pts is not None:
        play_time = (frame.pts - pts_offset) * frame.time_base.numerator / frame.time_base.denominator
        if start_time is not None:
            current_time = time.time() - start_time
            time_diff = play_time - current_time
            if time_diff > 1 / frame_rate:
                return False
            if time_diff > 0:
                time.sleep(time_diff)
    img = frame.to_ndarray(format='bgr24')
    cv2.imshow('Video', img)
    return True

def get_pts(frame):
    return frame.pts

def render(terminated, data_queue):
    rawData = io.BytesIO()
    cur_pos = 0
    frames_buffer = []
    start_time = None
    pts_offset = None
    got_key_frame = False
    while not terminated.is_set():
        try:
            data = data_queue.get_nowait()
        except:
            time.sleep(0.01)
            continue
        rawData.write(data)
        rawData.seek(cur_pos)
        if cur_pos == 0:
            container = av.open(rawData, mode='r')
            original_codec_ctx = container.streams.video[0].codec_context
            codec = av.codec.CodecContext.create(original_codec_ctx.name, 'r')
        cur_pos += len(data)
        dts = None
        for packet in container.demux():
            if packet.size == 0:
                continue
            dts = packet.dts
            if pts_offset is None:
                pts_offset = packet.pts
            if not got_key_frame and packet.is_keyframe:
                got_key_frame = True
            if data_queue.qsize() > 8 and not packet.is_keyframe:
                got_key_frame = False
                continue
            if not got_key_frame:
                continue
            frames = codec.decode(packet)
            if start_time is None:
                start_time = time.time()
            frames_buffer += frames
            frames_buffer.sort(key=get_pts)
            for frame in frames_buffer:
                if display_frame(frame, start_time, pts_offset, codec.framerate):
                    frames_buffer.remove(frame)
                if cv2.waitKey(1) & 0xFF == ord('q'):
                    break
        if dts is not None:
            container.seek(25000)
        rawData.seek(cur_pos)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    terminated.set()
    cv2.destroyAllWindows()

async def receive_encoded_video(websocket, path):
    data_queue = Queue()
    terminated = Event()
    p = Process(
        target=render,
        args=(terminated, data_queue)
    )
    p.start()
    while not terminated.is_set():
        try:
            data = await websocket.recv()
        except:
            break
        data_queue.put(data)
    terminated.set()
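The snippet above only defines the websocket handler; a minimal way to actually serve it might look like this (host and port are my own placeholders, and the (websocket, path) handler signature follows the legacy websockets API used above):

if __name__ == '__main__':
    # Hypothetical entry point: serve the handler on all interfaces, port 9999.
    start_server = websockets.serve(receive_encoded_video, '0.0.0.0', 9999)
    asyncio.get_event_loop().run_until_complete(start_server)
    asyncio.get_event_loop().run_forever()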
It's normal to get a 3-4 second delay, because you are reading encoded data and decoding it on the CPU takes time.
If you have GPU hardware, you can use FFmpeg to decode H.264 on the GPU. Here is an example.
If you don't have a GPU, decoding H.264 on the CPU will always cause delays. You can use FFmpeg for more efficient decoding, but even that will only decrease the total delay by almost 10%.
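As a concrete sketch of the GPU route (my assumption: an FFmpeg build with NVIDIA cuvid support; the decoder name differs per vendor, e.g. 'h264_qsv' on Intel), PyAV lets you create a hardware decoder by name just like the software one above:

import av

# Assumption: FFmpeg was built with NVIDIA cuvid support; creation fails if not.
codec = av.codec.CodecContext.create('h264_cuvid', 'r')

# Feed demuxed packets to the hardware decoder exactly as in the CPU version:
# for packet in container.demux():
#     for frame in codec.decode(packet):
#         img = frame.to_ndarray(format='bgr24')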

Extract and analyze frames while recording using PiCamera, OpenCV

I'm streaming the video from my Raspberry Pi using PiCamera to a web socket, so that I can view it within my local network.
I want to write my own motion detection script from scratch, so I want to get the first image from the video stream (which is going to be the plain background), then compare the following frames to the first one with a function to check whether something has changed (I have written those functions separately). I am not really worrying about efficiency here.
MAIN ISSUE:
I want to get the data from those frames in a BytesIO object, then convert them to a 2D numpy array in B&W so I can perform operations, all while keeping the stream going (I have in fact reduced the frame rate to 4 per second to make it run faster on my computer).
PROBLEM ENCOUNTERED WITH THE FOLLOWING CODE:
One part of the problem that I have identified is that the numbers are way off. My settings give the camera a resolution of around 640x480 (= 307,200-element numpy array of pixel data in B&W), whereas my len() computations return less than 100k pixels.
def main():
    print('Initializing camera')
    base_image = io.BytesIO()
    image_captured = io.BytesIO()
    with picamera.PiCamera() as camera:
        camera.resolution = (WIDTH, HEIGHT)
        camera.framerate = FRAMERATE
        camera.vflip = VFLIP  # flips image rightside up, as needed
        camera.hflip = HFLIP  # flips image left-right, as needed
        sleep(1)  # camera warm-up time
        print('Initializing websockets server on port %d' % WS_PORT)
        WebSocketWSGIHandler.http_version = '1.1'
        websocket_server = make_server(
            '', WS_PORT,
            server_class=WSGIServer,
            handler_class=WebSocketWSGIRequestHandler,
            app=WebSocketWSGIApplication(handler_cls=StreamingWebSocket))
        websocket_server.initialize_websockets_manager()
        websocket_thread = Thread(target=websocket_server.serve_forever)
        print('Initializing HTTP server on port %d' % HTTP_PORT)
        http_server = StreamingHttpServer()
        http_thread = Thread(target=http_server.serve_forever)
        print('Initializing broadcast thread')
        output = BroadcastOutput(camera)
        broadcast_thread = BroadcastThread(output.converter, websocket_server)
        print('Starting recording')
        camera.start_recording(output, 'yuv')
        try:
            print('Starting websockets thread')
            websocket_thread.start()
            print('Starting HTTP server thread')
            http_thread.start()
            print('Starting broadcast thread')
            broadcast_thread.start()
            time.sleep(0.5)
            camera.capture(base_image, use_video_port=True, format='jpeg')
            base_data = np.asarray(bytearray(base_image.read()), dtype=np.uint64)
            base_img_matrix = cv2.imdecode(base_data, cv2.IMREAD_GRAYSCALE)
            while True:
                camera.wait_recording(1)
                # insert here the code for frame analysis
                camera.capture(image_captured, use_video_port=True, format='jpeg')
                data_next = np.asarray(bytearray(image_captured.read()), dtype=np.uint64)
                next_img_matrix = cv2.imdecode(data_next, cv2.IMREAD_GRAYSCALE)
                monitor_changes(base_img_matrix, next_img_matrix)
        except KeyboardInterrupt:
            pass
        finally:
            print('Stopping recording')
            camera.stop_recording()
            print('Waiting for broadcast thread to finish')
            broadcast_thread.join()
            print('Shutting down HTTP server')
            http_server.shutdown()
            print('Shutting down websockets server')
            websocket_server.shutdown()
            print('Waiting for HTTP server thread to finish')
            http_thread.join()
            print('Waiting for websockets thread to finish')
            websocket_thread.join()

if __name__ == '__main__':
    main()
Solved: basically the problem was all in the way I was handling the data and the BytesIO files. First of all, I needed to use unsigned int8 as the dtype of the buffer to decode it. Then I switched to np.frombuffer to read the files in their entirety, because the base image is not going to change, so it will always read the same thing, and the next one will be initialized and discarded in every loop iteration. Also, I can replace cv2.IMREAD_GRAYSCALE with 0 in the function.
camera.start_recording(output, 'yuv')
base_image = io.BytesIO()
try:
    print('Starting websockets thread')
    websocket_thread.start()
    print('Starting HTTP server thread')
    http_thread.start()
    print('Starting broadcast thread')
    broadcast_thread.start()
    time.sleep(0.5)
    camera.capture(base_image, use_video_port=True, format='jpeg')
    base_data = np.frombuffer(base_image.getvalue(), dtype=np.uint8)
    base_img_matrix = cv2.imdecode(base_data, 0)
    while True:
        camera.wait_recording(0.25)
        image_captured = io.BytesIO()
        # insert here the code for frame analysis
        camera.capture(image_captured, use_video_port=True, format='jpeg')
        data_next = np.frombuffer(image_captured.getvalue(), dtype=np.uint8)
        next_img_matrix = cv2.imdecode(data_next, cv2.IMREAD_GRAYSCALE)
        monitor_changes(base_img_matrix, next_img_matrix)
        image_captured.close()

Python TCP socket send receive large delay

I used Python sockets to make a server on my Raspberry Pi 3 (Raspbian) and a client on my laptop (Windows 10). The server streams images to the laptop at a rate of 10 FPS, and can reach 15 FPS if I push it. The problem is that when I want the laptop to send back a command based on the image, the frame rate drops sharply to 3 FPS. The process is like this:
Pi sends img => Laptop receives img => Quick process => Send command based on process result => Pi receives command, prints it => Pi sends img => ...
The processing time for each frame does not cause this (0.02 s at most per frame), so currently I am at a loss as to why the frame rate drops so much. The image is quite large, at around 200 kB, and the command is only a short string at 3 B. The image is in matrix form and is pickled before sending, while the command is sent as is.
Can someone please explain to me why sending back such a short command makes the frame rate drop so much? And if possible, a solution for this problem. I tried making two servers, one dedicated to sending images and one to receiving commands, but the result is the same.
Server:
import socket
import pickle
import time
import cv2
import numpy as np
from picamera.array import PiRGBArray
from picamera import PiCamera
from SendFrameInOO import PiImageServer

def main():
    # initialize the server and time stamp
    ImageServer = PiImageServer()
    ImageServer2 = PiImageServer()
    ImageServer.openServer('192.168.0.89', 50009)
    ImageServer2.openServer('192.168.0.89', 50002)
    # Initialize the camera object
    camera = PiCamera()
    camera.resolution = (320, 240)
    camera.framerate = 10  # it seems this cannot go higher than 10
                           # unless special measures are taken, which may
                           # reduce image quality
    camera.exposure_mode = 'sports'  # reduce blur
    rawCapture = PiRGBArray(camera)
    # allow the camera to warmup
    time.sleep(1)
    # capture frames from the camera
    print('<INFO> Preparing to stream video...')
    timeStart = time.time()
    for frame in camera.capture_continuous(rawCapture, format="bgr",
                                           use_video_port=True):
        # grab the raw NumPy array representing the image, then initialize
        # the timestamp and occupied/unoccupied text
        image = frame.array
        imageData = pickle.dumps(image)
        ImageServer.sendFrame(imageData)  # send the frame data
        # receive command from laptop and print it
        command = ImageServer2.recvCommand()
        if command == 'BYE':
            print('BYE received, ending stream session...')
            break
        print(command)
        # clear the stream in preparation for the next one
        rawCapture.truncate(0)
    print('<INFO> Video stream ended')
    ImageServer.closeServer()
    elapsedTime = time.time() - timeStart
    print('<INFO> Total elapsed time is: ', elapsedTime)

if __name__ == '__main__': main()
Client:
from SupFunctions.ServerClientFunc import PiImageClient
import time
import pickle
import cv2

def main():
    # Initialize
    result = 'STP'
    ImageClient = PiImageClient()
    ImageClient2 = PiImageClient()
    # Connect to server
    ImageClient.connectClient('192.168.0.89', 50009)
    ImageClient2.connectClient('192.168.0.89', 50002)
    print('<INFO> Connection established, preparing to receive frames...')
    timeStart = time.time()
    # Receiving and processing frames
    while(1):
        # Receive and unload a frame
        imageData = ImageClient.receiveFrame()
        image = pickle.loads(imageData)
        cv2.imshow('Frame', image)
        key = cv2.waitKey(1) & 0xFF
        # Exit when q is pressed
        if key == ord('q'):
            ImageClient.sendCommand('BYE')
            break
        ImageClient2.sendCommand(result)
    ImageClient.closeClient()
    elapsedTime = time.time() - timeStart
    print('<INFO> Total elapsed time is: ', elapsedTime)
    print('Press any key to exit the program')
    # cv2.imshow('Picture from server', image)
    cv2.waitKey(0)

if __name__ == '__main__': main()
PiImageServer and PiImageClient:
import socket
import pickle
import time

class PiImageClient:
    def __init__(self):
        self.s = None
        self.counter = 0

    def connectClient(self, serverIP, serverPort):
        self.s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.s.connect((serverIP, serverPort))

    def closeClient(self):
        self.s.close()

    def receiveOneImage(self):
        imageData = b''
        lenData = self.s.recv(8)
        length = pickle.loads(lenData)  # should be 921764 for 640x480 images
        print('Data length is:', length)
        while len(imageData) < length:
            toRead = length - len(imageData)
            imageData += self.s.recv(4096 if toRead > 4096 else toRead)
            # if len(imageData) % 200000 <= 4096:
            #     print('Received: {} of {}'.format(len(imageData), length))
        return imageData

    def receiveFrame(self):
        imageData = b''
        lenData = self.s.recv(8)
        length = pickle.loads(lenData)
        print('Data length is:', length)
        '''length = 921764  # for 640x480 images
        length = 230563  # for 320x240 images'''
        while len(imageData) < length:
            toRead = length - len(imageData)
            imageData += self.s.recv(4096 if toRead > 4096 else toRead)
            # if len(imageData) % 200000 <= 4096:
            #     print('Received: {} of {}'.format(len(imageData), length))
        self.counter += 1
        if len(imageData) == length:
            print('Successfully received frame {}'.format(self.counter))
        return imageData

    def sendCommand(self, command):
        if len(command) != 3:
            print('<WARNING> Length of command string is different from 3')
        self.s.send(command.encode())
        print('Command {} sent'.format(command))

class PiImageServer:
    def __init__(self):
        self.s = None
        self.conn = None
        self.addr = None
        # self.currentTime = time.time()
        self.currentTime = time.asctime(time.localtime(time.time()))
        self.counter = 0

    def openServer(self, serverIP, serverPort):
        print('<INFO> Opening image server at {}:{}'.format(serverIP,
                                                            serverPort))
        self.s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.s.bind((serverIP, serverPort))
        self.s.listen(1)
        print('Waiting for client...')
        self.conn, self.addr = self.s.accept()
        print('Connected by', self.addr)

    def closeServer(self):
        print('<INFO> Closing server...')
        self.conn.close()
        self.s.close()
        # self.currentTime = time.time()
        self.currentTime = time.asctime(time.localtime(time.time()))
        print('Server closed at', self.currentTime)

    def sendOneImage(self, imageData):
        print('<INFO> Sending only one image...')
        imageDataLen = len(imageData)
        lenData = pickle.dumps(imageDataLen)
        print('Sending image length')
        self.conn.send(lenData)
        print('Sending image data')
        self.conn.send(imageData)

    def sendFrame(self, frameData):
        self.counter += 1
        print('Sending frame ', self.counter)
        frameDataLen = len(frameData)
        lenData = pickle.dumps(frameDataLen)
        self.conn.send(lenData)
        self.conn.send(frameData)

    def recvCommand(self):
        commandData = self.conn.recv(3)
        command = commandData.decode()
        return command
I believe the problem is two-fold. First, you are serializing all activity: the server sends a complete image, then instead of continuing on to send the next image (which would better fit the definition of "streaming"), it stops and waits for all bytes of the previous image to make their way across the network to the client, for the client to receive all bytes of the image, unpickle it, and send a response, and for the response to make its way back across the wire to the server.
Is there a reason you need them to be in lockstep like this? If not, try to parallelize the two sides. Have your server create a separate thread to listen for commands coming back (or simply use select to determine when the command socket has something to receive), as sketched below.
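A minimal sketch of the select approach on the server side (reusing ImageServer2.conn from your code; the zero timeout makes the check non-blocking):

import select

# Non-blocking check: only read a command if one has already arrived.
ready, _, _ = select.select([ImageServer2.conn], [], [], 0)
if ready:
    command = ImageServer2.recvCommand()
    print(command)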
Second, you are likely being bitten by Nagle's algorithm (https://en.wikipedia.org/wiki/Nagle%27s_algorithm), which is intended to prevent sending numerous packets with small payloads (but lots of overhead) across the network. So your client-side kernel has received your three bytes of command data and buffered it, waiting for you to provide more data before sending it to the server (it will eventually send it anyway, after a delay). To change that, you would want to use the TCP_NODELAY socket option on the client side (see https://stackoverflow.com/a/31827588/1076479).
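For example, on the client's command socket (a one-line change in connectClient, shown here on a bare socket with the address from the question):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle's algorithm: small writes (like the 3-byte command)
# are sent immediately instead of being buffered by the kernel.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
s.connect(('192.168.0.89', 50002))
s.send(b'STP')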
