Video streaming from a Raspberry Pi Zero using an Arducam - Python

I am trying to stream live video from a Raspberry Pi Zero using an Arducam B0112.
I have written a function to stream video, but the frame rate is low. Can you please suggest an alternative function to stream the video and display it on localhost with a better frame rate?
The video streaming function I have is as follows:
def gen():
    """Video streaming generator function."""
    output = np.empty((240, 320, 3), dtype=np.uint8)
    while True:
        camera.capture(output, 'rgb')
        # Encode the captured RGB frame as JPEG
        ret, buffer = cv2.imencode('.jpg', output, [int(cv2.IMWRITE_JPEG_QUALITY), 85])
        frame = buffer.tobytes()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')
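One likely reason for the low frame rate is that camera.capture() goes through the slow still-image pipeline on every call. With picamera, capture_continuous() with use_video_port=True keeps the camera's video port open and lets the GPU do the JPEG encoding, which is usually much faster. A minimal sketch, assuming camera is a picamera.PiCamera instance (as the original snippet suggests):

```python
import io

def mjpeg_chunk(jpeg_bytes):
    # Wrap one JPEG frame for a multipart/x-mixed-replace response
    return (b'--frame\r\n'
            b'Content-Type: image/jpeg\r\n\r\n' + jpeg_bytes + b'\r\n')

def gen(camera):
    """Faster MJPEG generator: capture via the video port, JPEG on the GPU."""
    stream = io.BytesIO()
    # use_video_port=True avoids the still-capture path taken by capture()
    for _ in camera.capture_continuous(stream, format='jpeg',
                                       use_video_port=True):
        stream.seek(0)
        yield mjpeg_chunk(stream.read())
        stream.seek(0)
        stream.truncate()
```

Reusing one BytesIO buffer also avoids allocating a new array per frame, which matters on a Pi Zero's CPU.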

Related

Error when trying to stream video with OpenCV
def gen_frame():
    cap = cv2.VideoCapture("http://iotctu.ddns.net:8080/cgi-bin/jpeg?connect=start&framerate=10&resolution=640&quality=1&UID=23174&Language=0")
    while True:
        frame = cap.read()
        convert = cv2.imencode('.jpg', frame)[1].tobytes()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + convert + b'\r\n')
The error is

@app.route('/video_feed')
def video_feed():
    return Response(gen_frame(), mimetype='multipart/x-mixed-replace; boundary=frame')

TypeError: Expected Ptr<cv::UMat> for argument 'img'
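The traceback points at the first line inside the loop: cv2.VideoCapture.read() returns a (success, frame) pair, so `frame = cap.read()` binds the whole tuple, and handing a tuple to cv2.imencode() is exactly what raises `Expected Ptr<cv::UMat> for argument 'img'`. A sketch of the fix (OpenCV imported lazily so the snippet stands alone; `src` is whatever URL or device index you pass in):

```python
def gen_frame(src):
    import cv2  # imported inside so the module loads even without OpenCV

    cap = cv2.VideoCapture(src)
    while True:
        ret, frame = cap.read()   # unpack the (success, frame) pair
        if not ret:               # no frame: stream ended or dropped out
            break
        ok, buf = cv2.imencode('.jpg', frame)
        if ok:
            yield (b'--frame\r\n'
                   b'Content-Type: image/jpeg\r\n\r\n'
                   + buf.tobytes() + b'\r\n')
```

Checking `ret` also guards against the camera going away mid-stream instead of crashing the generator.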

Using Python and a Raspberry Pi on a mobile network for live webcam streaming without big lags?

Is it possible to write a webcam streaming script in Python for mobile networks? The main requirement is that it be as close to real time as possible, without big lags and delays. I have tried some standard UDP examples I found via Google. On my private Wi-Fi it works perfectly at 320x240 resolution.
But as soon as I switch to my LTE surf stick, where I have about 3-4 Mbit/s of upload, the picture lags extremely. It has a big delay and lots of frame drops.
I wonder why, because 3 Mbit/s should be enough...
So my guess is that I need some kind of compression? Or am I missing something essential here, and is it not even possible without a lot of buffering, which would make real time impossible?
Here is the code I use for the Raspberry:
import socket
import cv2 as cv

addr = ('myserver.xx', 1331)
buf = 512
width = 320
height = 240

cap = cv.VideoCapture(0)
cap.set(3, width)
cap.set(4, height)
cap.set(cv.CAP_PROP_FPS, 25)
cap.set(cv.CAP_PROP_FOURCC, cv.VideoWriter.fourcc('M','J','P','G'))

code = 'start'
code = ('start' + (buf - len(code)) * 'a').encode('utf-8')

if __name__ == '__main__':
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while cap.isOpened():
        ret, frame = cap.read()
        #frame = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
        if ret:
            s.sendto(code, addr)
            data = frame.tostring()
            for i in range(0, len(data), buf):
                s.sendto(data[i:i+buf], addr)
            # cv.imshow('send', frame)
            # if cv.waitKey(1) & 0xFF == ord('q'):
            #     break
        else:
            break
    # s.close()
    # cap.release()
    # cv.destroyAllWindows()
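The arithmetic explains the lag: the script sends raw BGR frames, and 320x240 pixels at 3 bytes each, 25 times a second, needs roughly 46 Mbit/s of upload, an order of magnitude more than a 3-4 Mbit/s LTE uplink. Compression is indeed the missing piece; encoding each frame with cv.imencode('.jpg', frame) before sendto() typically shrinks it by a factor of 10-50. A small sketch of the bandwidth check:

```python
def raw_bitrate_mbps(width, height, fps, bytes_per_pixel=3):
    """Upload bandwidth needed to send uncompressed frames, in Mbit/s."""
    return width * height * bytes_per_pixel * fps * 8 / 1e6

# 320x240 BGR at 25 fps needs ~46 Mbit/s: fine on Wi-Fi, hopeless on a
# 3-4 Mbit/s LTE uplink, which matches the observed behaviour.
print(raw_bitrate_mbps(320, 240, 25))
```

Note that the MJPG FOURCC set on the capture only affects how the camera delivers frames to OpenCV; cap.read() still hands back a decoded raw array, so the JPEG encoding has to happen again before the data goes on the wire.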

Problem with IP Camera Streaming with Flask Web and opencv

I have a problem streaming my IP camera to a Flask web page using OpenCV. My HTML page shows the camera streaming window, but it only captures a frame every 2-3 minutes...
I got this error: [h264 @ 0x130e6e0] error while decoding MB 14 2, bytestream -15
import cv2

camera = cv2.VideoCapture('rtsp://admin:12345@192.168.1.105:554/user=admin_password=12345_channe0_stream=0.sdp')

def gen_frames():  # generate frame by frame from camera
    while True:
        # Capture frame-by-frame
        success, frame = camera.read()  # read the camera frame
        if not success:
            break
        else:
            ret, buffer = cv2.imencode('.jpg', frame)
            frame = buffer.tobytes()
            yield (b'--frame\r\n'
                   b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')  # concat frame one by one and show result

@app.route('/video_feed')
def video_feed():
    """Video streaming route. Put this in the src attribute of an img tag."""
    return Response(gen_frames(),
                    mimetype='multipart/x-mixed-replace; boundary=frame')
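`error while decoding MB ..., bytestream ...` from the H.264 decoder usually means RTSP packets were lost in transit, which is common when FFmpeg defaults to UDP transport on a lossy network. OpenCV's FFmpeg backend reads the OPENCV_FFMPEG_CAPTURE_OPTIONS environment variable, so forcing TCP is often enough to fix it. A sketch, with the variable set before OpenCV ever opens the stream:

```python
import os

# Must be set before the capture is created so the FFmpeg backend sees it
os.environ['OPENCV_FFMPEG_CAPTURE_OPTIONS'] = 'rtsp_transport;tcp'

def open_rtsp(url):
    import cv2  # imported lazily so the snippet stands alone
    # CAP_FFMPEG makes the backend explicit; the env var switches it to TCP
    return cv2.VideoCapture(url, cv2.CAP_FFMPEG)
```

If frames still arrive only sporadically, the other usual suspect is reading the camera in the same thread as Flask; a background reader thread that keeps only the newest frame avoids the capture buffer backing up.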

Streaming Face Recognition Video and Meta-data in Flask

The task is to recognize faces in a video stream, draw bounding boxes on the video frames, and show each person's name. I need to stream both the video frames and the metadata (names) from the API. The API calls a GPU-intensive machine-learning subroutine, which can be made to return a frame and name pair as a Python tuple. To reduce computation, we tried to make a single function call per processed frame. The tuple contains a bytes-type frame and a string-type name.
How do I stream the video frames and the metadata (names) from the API?
def get_frame():
    recog = VideoFaceRecog(target="/video/m.mp4")
    while True:
        (ret, frame) = recog.cap.read()
        if not ret:
            print('end of the video file...')
            break
        frame = cv2.resize(frame, (640, 480))  # resize returns a new array; assign it
        frame, names, bounding_boxes = recog.frame_recog(frame)
        camera_frame = cv2.imencode('.jpg', frame)[1].tobytes()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + camera_frame + b'\r\n')

@app.route('/camera_feed', methods=['GET'])
def video_feed():
    return Response(stream_with_context(get_frame()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')
recog = VideoFaceRecog()

def get_frame(recog):
    cap = cv2.VideoCapture(0)
    while True:
        (ret, frame) = cap.read()
        if not ret:
            print('end of the video file...')
            break
        frame, names, bounding_boxes = recog.frame_recog(frame)
        recog.add_name(names)
        camera_frame = cv2.imencode('.jpg', frame)[1].tobytes()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + camera_frame + b'\r\n')
Serve the metadata from a separate endpoint.
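One way to wire that up: let the frame generator write the latest names into shared state, and expose them from a second JSON endpoint that the page polls alongside the MJPEG stream. VideoFaceRecog and add_name() come from the question; the NameStore below is a hypothetical stand-in for that state:

```python
import threading

class NameStore:
    """Thread-safe holder for the most recently recognized names."""
    def __init__(self):
        self._lock = threading.Lock()
        self._names = []

    def add(self, names):
        # Called from the MJPEG generator after each frame_recog() call
        with self._lock:
            self._names = list(names)

    def latest(self):
        # Called from the metadata endpoint
        with self._lock:
            return list(self._names)

store = NameStore()

# Flask side (assumed), alongside the /camera_feed route:
# @app.route('/names', methods=['GET'])
# def names():
#     return {'names': store.latest()}  # Flask serializes the dict to JSON
```

The lock matters because the generator runs in the request thread serving /camera_feed while /names is served from a different thread.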

Generators are unable to concat with bytes in Python

I have a class that gets frames from a camera using a get_frame method. In a web context, I need to add some data around each frame before streaming it to the browser. When I try to add the extra information (some bytes) to the frame, I get TypeError: can't concat bytes to generator. How do I concatenate this data?
def gen():
    camera = VideoCamera()
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')

class VideoCamera():
    def __init__(self):
        self.video = cv2.VideoCapture(0)

    def get_frame(self):
        while True:
            ret, frame = self.video.read()
            # face is the list of all faces detected in the frame, using the dlib library
            face = detector(gray, 0)
            for (J, rect) in enumerate(face):
                ret, jpeg = cv2.imencode('.jpg', frame)
                yield jpeg.tobytes()
As written, calling get_frame returns a generator, not an individual frame. You need to iterate over that generator to get individual frames, which you can then yield along with the other data.
def gen():
    camera = VideoCamera()
    for frame in camera.get_frame():
        yield b'--frame\r\nContent-Type: image/jpeg\r\n\r\n'
        yield frame
        yield b'\r\n\r\n'
