High FPS livestream over Ethernet using Python

I plan on building an ROV and I am currently working on my video feed. I will be using fiber optics for all communications, and I am tinkering with OpenCV to stream a webcam feed with Python. I might choose to use IP cameras, but I wanted to learn more about how to capture frames from a webcam in Python first. Since I didn't know what I was going to use in the end, I bought a cheap-as-they-get no-name USB webcam just to try to get everything working. This camera feed is going to be used for navigation; a separate video recorder will probably be used for recording video.
Enough about that, now to my issue. I am getting only 8 FPS when capturing frames, but I suspect that is due to the cheap webcam. The webcam is connected to a pcDuino3 Nano, which is connected to an Arduino for controlling thrusters and reading sensors. I never thought about how to use hardware for encoding and decoding images, and I don't know enough about that part yet to tell whether I can take advantage of any of the hardware.
Do you believe my webcam is the bottleneck? Is it a better idea to use an IP camera, or should I be able to get a decent FPS from a webcam connected to a pcDuino3 Nano, capturing frames with OpenCV or perhaps some other way? I tried capturing frames with Pygame, which gave me the same result; I also tried mjpg-streamer.
I'm programming in Python; this is the test I made:
import cv2, time

FPS = 0
cap = cv2.VideoCapture(0)
last = time.time()
for i in range(100):
    before = time.time()
    rval, frame = cap.read()   # grab and decode one frame
    now = time.time()
    print("cap.read() took: " + str(now - before))
    if now - last >= 1:        # one second elapsed: report and reset the counter
        print(FPS)
        last = now
        FPS = 0
    else:
        FPS += 1
cap.release()
And the output is along the lines of:
cap.read() took: 0.118262052536
cap.read() took: 0.118585824966
cap.read() took: 0.121902942657
cap.read() took: 0.116680860519
cap.read() took: 0.119271993637
cap.read() took: 0.117949008942
cap.read() took: 0.119143009186
cap.read() took: 0.122378110886
cap.read() took: 0.116139888763
8

The webcam's specifications should state its frame rate explicitly, and that will tell you definitively whether the camera itself is the bottleneck.
However, I would guess the bottleneck is the pcDuino3. Most likely it can't decode the video very fast, and that causes the low frame rate. You can run this exact code on a desktop computer to verify this. Also, I believe OpenCV and mjpg-streamer both use libjpeg to decode the JPEG frames, so their similar frame rates are not surprising.
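If you want to test the decoding theory directly, you can split cap.read() into its two halves: grab() fetches the raw frame from the driver and retrieve() decodes it into an array. This is just a minimal sketch (same camera index as in the question); if grab() is fast but retrieve() is slow, decoding is indeed the bottleneck:

import cv2, time

cap = cv2.VideoCapture(0)
for i in range(30):
    t0 = time.time()
    cap.grab()                     # fetch the raw frame from the driver
    t1 = time.time()
    rval, frame = cap.retrieve()   # decode it into a numpy array
    t2 = time.time()
    print("grab: %.4fs  retrieve: %.4fs" % (t1 - t0, t2 - t1))
cap.release()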

Related

Opencv Video Capture .read function Different result on different PCs

I have some code where I need to read a video file using OpenCV and extract the frames from that video. I am using Python and doing the following:
frames = []  # assuming frames was initialized like this earlier in the method
video = cv2.VideoCapture(video_path)
if not video.isOpened():
    self.logger.error("Error opening video from file {}".format(video_path))
ret, img = video.read()
while ret:
    frames.append(img)
    ret, img = video.read()
total_nbr_frames = len(frames)
I pass a video on one machine and get a result of 35 frames, but when I use a different machine I get 7 frames.
Another video I tried was working on the first machine (27 frames); on the other machine the video opened but I couldn't read any frames (total = 0).
What could be the reason for that? Is it hardware related? Am I missing a library?
As far as I can see, this is entirely hardware related. There's no library that will help you increase the frame read speed.
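One way to narrow it down is to compare the frame count reported by the container metadata with the number of frames OpenCV actually decodes; if they diverge on only one machine, that machine's OpenCV build is probably missing a codec. A rough sketch, reusing video_path from the question:

import cv2

video = cv2.VideoCapture(video_path)            # video_path as in the question
reported = video.get(cv2.CAP_PROP_FRAME_COUNT)  # count claimed by the container
decoded = 0
while True:
    ret, img = video.read()
    if not ret:
        break
    decoded += 1
video.release()
print("reported:", reported, "decoded:", decoded)

Comparing the output of cv2.getBuildInformation() on both machines (look at the Video I/O section) can also show whether a video backend such as FFmpeg is missing.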

Is there a way to use video capture/ read in opencv without it taking so long?

There is about a 10-20 second delay between when I run my program and when the web camera actually takes the image. Is there any way to speed up this process?
I have looked in several places and haven't found a solution.
video_capture = cv2.VideoCapture(1)
ret, frame = video_capture.read()
I just don't get why these two lines of code take so long to execute, when I can take a picture with my webcam instantly through the normal camera application.
OK, it took me a while, but the problem was solved by switching the capture API. I changed the line of code:
video_capture = cv2.VideoCapture(1)
to
video_capture = cv2.VideoCapture(1, cv2.CAP_DSHOW)
By adding this, it now works instantly, removing the delay that was present before.
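If CAP_DSHOW does not help on your machine, the same trick works with other backends; which ones are available depends on your OpenCV build, so treat this list as an example rather than something exhaustive:

import cv2, time

for backend in (cv2.CAP_DSHOW, cv2.CAP_MSMF, cv2.CAP_ANY):
    t0 = time.time()
    cap = cv2.VideoCapture(1, backend)   # same device index as above
    print("backend", backend, "opened:", cap.isOpened(),
          "in %.2fs" % (time.time() - t0))
    cap.release()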

Reducing frame-rate with Python OpenCV VideoCapture

I have a Raspberry Pi running Raspbian 9 with OpenCV 4.0.1 installed and a USB webcam attached. The Raspberry Pi is headless; I connect with ssh -X <user>@<IP>. The goal is to get a real-time video stream on my client computer.
The issue is that there is considerable lag of around 2 seconds. The stream playback is also unsteady, meaning slow and then quick again.
My guess is that SSH just cannot keep up with the camera's default 30 fps. I am therefore trying to reduce the frame rate, since I could live with a lower frame rate as long as there's no noticeable lag. My own attempts to reduce the frame rate have not worked.
My code, with the parts I tried myself to reduce the frame rate commented out:
import cv2
#import time

cap = cv2.VideoCapture(0)
#cap.set(cv2.CAP_PROP_FPS, 5)

while True:
    ret, frame = cap.read()
    #time.sleep(1)
    #cv2.waitKey(100)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
What I tried to reduce the frame rate:
I tried to set cap.set(cv2.CAP_PROP_FPS, 5) (also tried 10 and 1). If I then call print(cap.get(cv2.CAP_PROP_FPS)) it reports the frame rate I just set, but it has no effect on playback.
I tried time.sleep(1) in the while loop, but it had no effect on the video.
I tried a second cv2.waitKey(100) in the while loop, as suggested on Quora (https://qr.ae/TUSPyN), but this also had no effect.
edit 1 (time.sleep and waitKey do work):
As pointed out in the comments, time.sleep(1) and cv2.waitKey(1000) should both work, and indeed they did after all. It was necessary to put them at the end of the while loop, after cv2.imshow().
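For reference, this is roughly what the fixed loop looks like with the throttling moved to the end, after cv2.imshow() (the 1-second sleep is just an example value):

import cv2, time

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    time.sleep(1)   # throttle here, at the end of the loop, after imshow
cap.release()
cv2.destroyAllWindows()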
As pointed out in the first comment, it might be better to choose a different setup for streaming media, which is what I am looking at now to get rid of the lag.
edit 2 (xpra instead of ssh -X):
We found that even after all attempts to reduce the frame rate, ssh -X was still laggy. We found xpra to be a lot quicker: it required no lowering of the frame rate or resolution and had no noticeable lag.

Unable to get OpenCV 3.1 FPS over ~15 FPS

I have some extremely simple performance-testing code below for measuring the FPS of my webcam with OpenCV 3.1 + Python 3 on a Late 2011 MacBook Pro:
import cv2, time

cap = cv2.VideoCapture(0)
count = 0
start_time = time.perf_counter()
end_time = time.perf_counter()
while (start_time + 1) > end_time:   # run for one second
    count += 1
    cap.read()
    # Attempt to force camera FPS to be higher
    cap.set(cv2.CAP_PROP_FPS, 30)
    end_time = time.perf_counter()
print("Got count", count)
Doing no processing, not even displaying the image or moving this to another thread, I am only getting around 15 FPS.
Trying to read the camera's FPS with cap.get(cv2.CAP_PROP_FPS), I get 0.0.
Any ideas why?
I've already searched the internet a fair amount for answers, so things I've thought about:
I built OpenCV with release flags, so it shouldn't be doing extra debugging logic
Tried manually setting the FPS each frame (see above)
My FPS with other apps (e.g. Camera toy in Chrome) is 30FPS
There is no work being done in the app on the main thread, so putting the video capture logic in another thread as most other posts suggest shouldn't make a difference
** EDIT **
Additional details: it seems like the first frame I capture is quick, and subsequent frames are slower; this could be a buffering issue (i.e. the camera is paused after the first frame because a new buffer must be allocated to write to?)
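If buffering is the cause, some backends let you shrink the capture queue. This is only a hedged experiment: CAP_PROP_BUFFERSIZE is honored by some backends (e.g. FFmpeg/V4L) and silently ignored by others:

import cv2

cap = cv2.VideoCapture(0)
# ask the backend to queue at most one frame; set() returns False if unsupported
if not cap.set(cv2.CAP_PROP_BUFFERSIZE, 1):
    print("backend ignored CAP_PROP_BUFFERSIZE")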
Tweaked the code to calculate the average FPS so far after each read:
import cv2, time

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_CONVERT_RGB, False)
cap.set(cv2.CAP_PROP_FPS, 30)
start_time = time.perf_counter()
count = 0
end_time = time.perf_counter()
while True:
    count += 1
    ret, frame = cap.read()
    end_time = time.perf_counter()
    print("Reached FPS of: ", count / (end_time - start_time))
And I get the first reading at around 30 FPS, with subsequent readings slower:
Reached FPS of: 27.805818385257446
Reached FPS of: 19.736237223924398
Reached FPS of: 18.173748156583795
Reached FPS of: 17.214809956810114
Reached FPS of: 16.94737657138959
Reached FPS of: 16.73624509452099
Reached FPS of: 16.33156408530572
** EDIT **
Still no luck as of 10/20. My best guess is that there are some issues with memory transfer, since the camera itself can definitely capture at 30 FPS, based on what other apps achieve.
This isn't a complete answer, but it was too long for a comment on the original question, so I'm posting it here instead.
First, it's normal for CAP_PROP_FPS to return 0. OpenCV for Python is just a wrapper around OpenCV C++. As far as I know, this property only works for video files, not cameras, so you have to calculate the FPS yourself (as in your edit).
Second, OpenCV has a bug where it always converts the image it gets from the camera to RGB (https://github.com/opencv/opencv/issues/4831). Cameras usually deliver YUYV, and the conversion takes a lot of time. You can check all supported resolution + FPS combinations following https://trac.ffmpeg.org/wiki/Capture/Webcam. Some cameras do not support RGB output, yet OpenCV forces the conversion, yielding terrible FPS. Due to camera limitations, within the same codec, the higher the resolution, the lower the FPS; between supported codecs, the bigger the output at the same resolution, the lower the FPS. For example, my camera supports YUYV and MJPEG; at HD resolution, YUYV maxes out at 10 fps while MJPEG reaches 30 fps.
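If your camera supports MJPEG, you can ask OpenCV for it instead of the default format. Whether the set() calls actually take effect depends on the backend and camera, so this is a sketch to experiment with, not a guaranteed fix:

import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))  # request MJPEG
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
cap.set(cv2.CAP_PROP_FPS, 30)
ret, frame = cap.read()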
So, first try the ffmpeg executable to grab frames and identify where the error comes from. If ffmpeg works well, you can then use the ffmpeg library (not the executable) to get frames from your camera (OpenCV uses ffmpeg for most video I/O, including cameras).
Be aware that I have only worked with ffmpeg and OpenCV in C++, not Python; using the ffmpeg library is another long story.
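To illustrate the ffmpeg-executable route: on Linux you could list a camera's supported formats and do a test capture roughly like this (device path and values are examples; on macOS you would use -f avfoundation instead of -f v4l2):

ffmpeg -f v4l2 -list_formats all -i /dev/video0
ffmpeg -f v4l2 -input_format mjpeg -video_size 1280x720 -framerate 30 -i /dev/video0 -t 5 test.mkv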
Good luck!

How to use OpenCV to capture live video feed from multiple cameras? [duplicate]

This question already has an answer here: How to capture multiple camera streams with OpenCV? (closed as a duplicate)
I am using OpenCV 3 and Python 3.6 for my project. I want to set up multiple cameras at once and see the video feed from all of them at the same time, to do facial recognition with it. But there seems to be no good way to do this. Here is one link I followed, but nothing happened: Reading from two cameras in OpenCV at once
I have tried this blog post as well, but it only captures one image at a time from the video and cannot show a live feed:
https://www.pyimagesearch.com/2016/01/18/multiple-cameras-with-the-raspberry-pi-and-opencv/
People have previously done this with C++, but with Python it seems difficult to me.
The code below works and I've tested it. It assumes two cameras, one built-in webcam and one USB cam (adjust the VideoCapture indices if both are USB cams):
import cv2

cap1 = cv2.VideoCapture(0)
cap2 = cv2.VideoCapture(1)

while True:
    ret1, img1 = cap1.read()
    ret2, img2 = cap2.read()
    if ret1 and ret2:
        cv2.imshow('img1', img1)
        cv2.imshow('img2', img2)
    k = cv2.waitKey(100)
    if k == 27:  # press Esc to exit
        break

cap1.release()
cap2.release()
cv2.destroyAllWindows()
My experience with a Raspberry Pi and two cameras showed the limitation was the GPU on the Pi.
I used the setup tool to allocate more GPU memory (512 MB).
It would still slow down above 10 fps with two cameras.
The USB ports also restricted the video stream.
One solution is to put each camera on its own USB controller. I did this using a 4-channel PCIe card; the card must have a separate controller for each port. I'm just finishing a project where I snap images from 4 ELP USB cameras, combine the images into one, and write it to disk. I spent days trying to make it work. I found examples for two cameras that worked with my laptop camera and an external camera, but not with two external cameras. The internal camera is on a different USB controller than the external ports...
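If separate controllers are not an option, one hedged workaround for USB bandwidth limits is to request MJPEG and a lower resolution from each camera, so both streams fit on the shared bus; whether a given camera honors these settings varies:

import cv2

cap1 = cv2.VideoCapture(0)
cap2 = cv2.VideoCapture(1)
for cap in (cap1, cap2):
    cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))  # compressed frames use less bus bandwidth
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)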
