I have some code where I need to read a video file using OpenCV and extract its frames. I am using Python and doing the following:
video = cv2.VideoCapture(video_path)
if not video.isOpened():
    self.logger.error("Error opening video from file {}".format(video_path))

frames = []
ret, img = video.read()
while ret:
    frames.append(img)
    ret, img = video.read()

total_nbr_frames = len(frames)
I pass a video on one machine and get 35 frames, but when I run the same code on a different machine I get 7 frames.
Another video I tried worked on the first machine (27 frames); on the other machine the video opened, but I couldn't read any frames (total = 0).
What could be the reason for that? Is it hardware related? Am I missing a library?
As far as I can see, this is hardware related. There is no library that will increase the frame read speed for you.
Related
There is about a 10-20 second delay between when I run my program and when the web camera actually takes the image. Is there any way to speed up this process?
I have looked several places and haven't found a solution.
video_capture = cv2.VideoCapture(1)
ret, frame = video_capture.read()
I just don't get what is taking these two lines of code so long to execute when I can take a picture with my webcam instantly through the normal camera application.
OK, it took me a while, but the problem was solved by switching the capture backend. I changed the line:
video_capture = cv2.VideoCapture(1)
to
video_capture = cv2.VideoCapture(1, cv2.CAP_DSHOW)
By adding the cv2.CAP_DSHOW backend flag, the capture now opens instantly, removing the delay that was present before.
I am trying to use OpenCV to load a video file (or access a video stream from a webcam), but I'm unable to read all the frames.
The video file was captured at 60 FPS, but in my code I can only read a few frames per second. I tried the threaded video reader from imutils; it works better, but I still cannot read at the full frame rate.
In my code below, I use the threaded video reader to load the video file and resize each frame to a smaller size to reduce the processing power required.
After a frame is grabbed successfully, I will do some image processing work (in the future). But even with this boilerplate I can read at most a few (10+) frames per second, and the result stutters. Is there a way to resolve this?
import cv2
from imutils import resize
from imutils.video import VideoStream

vs = VideoStream(src="./video.MOV").start()

while True:
    frame = vs.read()
    if frame is not None:
        frame = resize(frame, 800)
        # some heavy analytics work
        cv2.imshow("Frame", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

cv2.destroyAllWindows()
vs.stop()
Experiment
I ran an experiment counting the number of frames loaded and the average time taken by each imshow call, on both my iMac and an Ubuntu machine with an Intel Core i5-7400 @ 3.00GHz and a 1080p monitor.
The video (H.264) has a duration of 1:03 min and a size of 185.7MB.
The iMac loaded only 414 frames in total, while the Ubuntu machine loaded 2826 frames.
The average time taken by imshow on both machines was 0.0003s.
Basically, you just load the video and display it, so there is no reason to get such a low FPS, except that you are using a MacBook with a Retina screen. That may be what makes the video display slow: the Retina screen has many more pixels, and it may take the compositing engine longer to render each image on screen. I suggest using an external screen.
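To check where the time actually goes, you can time the read step and the display step separately. A small harness (the function and its `(ok, frame)` / display-callable interface are my own conventions, mirroring `cv2.VideoCapture.read`):

```python
import time


def time_stages(read, display, n_frames=100):
    """Time a frame-reading callable and a display callable separately,
    since either the decoder or the rendering path can be the
    bottleneck. `read` returns (ok, frame); `display` takes a frame.
    Returns (frames_processed, total_read_time, total_display_time)."""
    read_t = show_t = 0.0
    frames = 0
    for _ in range(n_frames):
        t0 = time.perf_counter()
        ok, frame = read()
        read_t += time.perf_counter() - t0
        if not ok:
            break
        t0 = time.perf_counter()
        display(frame)
        show_t += time.perf_counter() - t0
        frames += 1
    return frames, read_t, show_t
```

With OpenCV you might call it as `time_stages(cap.read, lambda f: (cv2.imshow("Frame", f), cv2.waitKey(1)))`; if the display total dominates, the rendering path (e.g. a Retina screen) is the bottleneck rather than the decoder.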
I have a problem writing x264 video (or single frames) to a memory buffer. In OpenCV, imencode and imdecode do this task for images. But I want to save x264 video frames in memory, for lower memory usage and for sending over the internet. I am able to do this with JPEG, but a JPEG is bigger than an x264 video frame and the quality is much worse. I searched but could not find how to write a video frame to a buffer.
Here is the example code for taking frames from a webcam:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
cap.set(3, 320)  # CAP_PROP_FRAME_WIDTH
cap.set(4, 240)  # CAP_PROP_FRAME_HEIGHT
cap.set(5, 30)   # CAP_PROP_FPS

# Define the codec and create VideoWriter object
fourcc = cv2.VideoWriter_fourcc(*'x264')
out = cv2.VideoWriter('.....sample.avi', fourcc, 30.0, (320, 240))

while cap.isOpened():
    ret, frame = cap.read()
    if ret:
        cv2.imshow('frame', frame)
        out.write(frame)  # I want to write to memory, not disk
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

# Release everything when the job is finished
cap.release()
out.release()
cv2.destroyAllWindows()
Unfortunately, there is no way to do this with cv2.VideoWriter, because you cannot reach the encoded frames before out.release().
The way I found for my project was to take cap_ffmpeg_impl.hpp from D:\your_directory\opencv\sources\modules\highgui\src and feed my captured frames to that library. You can send the encoded frames via UDP or TCP/IP and decode them where they arrive with the same library. Also remember that you need to compile the right FFmpeg version to use it.
I plan on building an ROV and I am working on the video feed at the moment. I will be using fiber optics for all communications, and I am tinkering with OpenCV to stream a webcam feed with Python. I might choose IP cameras in the end, but I wanted to learn more about capturing frames from a webcam in Python first. Since I didn't know what I would eventually use, I bought a cheap no-name USB webcam just to try to get everything working. This camera feed will be used for navigation; a separate video recorder will probably be used for recording video.
Enough about that, now to my issue. I am only getting 8 FPS when capturing frames, but I suspect that is due to the cheap webcam. The webcam is connected to a pcDuino 3 Nano, which is connected to an Arduino for controlling thrusters and reading sensors. I never thought about how to use hardware for encoding and decoding images, and I don't yet know enough about that part to tell whether I can use any of this hardware for it.
Do you believe the webcam is the bottleneck? Would an IP camera be a better idea, or should I be able to get a decent FPS from a webcam connected to a pcDuino 3 Nano capturing frames with OpenCV or some other way? Capturing frames with Pygame gave me the same result, and so did mjpg-streamer.
I'm programming in Python; this is the test I made:
import cv2, time

FPS = 0
cap = cv2.VideoCapture(0)
last = time.time()

for i in range(0, 100):
    before = time.time()
    rval, frame = cap.read()
    now = time.time()
    print("cap.read() took: " + str(now - before))
    if now - last >= 1:
        print(FPS)
        last = now
        FPS = 0
    else:
        FPS += 1

cap.release()
And the output looks like this:
cap.read() took: 0.118262052536
cap.read() took: 0.118585824966
cap.read() took: 0.121902942657
cap.read() took: 0.116680860519
cap.read() took: 0.119271993637
cap.read() took: 0.117949008942
cap.read() took: 0.119143009186
cap.read() took: 0.122378110886
cap.read() took: 0.116139888763
8
The webcam's specifications should explicitly state its frame rate, and that will tell you definitively whether the camera is the bottleneck.
However, my guess is that the bottleneck is the pcDuino 3. Most likely it cannot decode the video very fast, and that causes the low frame rate. You can run this exact code on a regular computer to verify. Also, I believe OpenCV and mjpg-streamer both use libjpeg to decode the JPEG frames, so their similar frame rates are not surprising.
This question already has an answer here:
How to capture multiple camera streams with OpenCV? (1 answer)
Closed 10 months ago.
I am using OpenCV 3 and Python 3.6 for my project. I want to set up multiple cameras at a time and see the video feed from all of them at once, to do facial recognition with it. But there seems to be no good way to do this. Here is one link I followed, but nothing happens: Reading from two cameras in OpenCV at once
I have also tried this blog post, but it only captures one image at a time from each video and cannot show a live feed:
https://www.pyimagesearch.com/2016/01/18/multiple-cameras-with-the-raspberry-pi-and-opencv/
People have done this with C++ before, but with Python it seems difficult to me.
The code below works and I've tested it. It assumes two cameras, e.g. a built-in webcam and a USB camera (adjust the VideoCapture indices if both are USB cameras):
import cv2

cap1 = cv2.VideoCapture(0)
cap2 = cv2.VideoCapture(1)

while True:
    ret1, img1 = cap1.read()
    ret2, img2 = cap2.read()
    if ret1 and ret2:
        cv2.imshow('img1', img1)
        cv2.imshow('img2', img2)
    k = cv2.waitKey(100)
    if k == 27:  # press Esc to exit
        break

cap1.release()
cap2.release()
cv2.destroyAllWindows()
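In a sequential loop like this, one slow camera blocks the other. A common pattern is to give each capture its own reader thread; a minimal sketch (the class name and the `(ok, frame)` read-callable interface are my own conventions, mirroring `cv2.VideoCapture.read`):

```python
import threading


class CameraThread:
    """Continuously grab frames from one source on its own thread, so
    a slow camera never blocks the main loop. `read_fn` is anything
    that returns (ok, frame), e.g. cv2.VideoCapture(0).read."""

    def __init__(self, read_fn):
        self._read = read_fn
        self._lock = threading.Lock()
        self._frame = None
        self._running = True
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        while self._running:
            ok, frame = self._read()
            if ok:
                with self._lock:
                    self._frame = frame

    def latest(self):
        """Return the most recently grabbed frame (None before the
        first successful read)."""
        with self._lock:
            return self._frame

    def stop(self):
        self._running = False
        self._thread.join()
```

The main loop then just calls `latest()` on each `CameraThread` and shows whatever frame is newest, instead of waiting on each `read()` in turn.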
My experience with a Raspberry Pi and two cameras showed the limitation was the Pi's GPU.
I used the setup tool to allocate more GPU memory (512MB).
It would still slow down above 10 FPS with two cameras.
The USB ports also restricted the video stream.
One solution is to put each camera on its own USB controller. I did this with a 4-channel PCIe card; the card must have a separate controller for each port. I'm just finishing a project where I snap images from 4 ELP USB cameras, combine the images into one, and write it to disk. I spent days trying to make it work. I found examples for two cameras that worked with my laptop's internal camera plus one external camera, but not with two external cameras: the internal camera is on a different USB controller than the external ports.