Reducing frame-rate with Python OpenCV VideoCapture - python

I have a Raspberry Pi running Raspbian 9 with OpenCV 4.0.1 installed and a USB webcam attached. The Raspberry Pi is headless; I connect with ssh <user>@<IP> -X. The goal is to get a real-time video stream on my client computer.
The issue is a considerable lag of around 2 seconds. The playback is also unsteady, i.e. slow and then quick again.
My guess is that SSH just cannot keep up with the camera's default 30 fps. I am therefore trying to reduce the frame rate, since I could live with a lower frame rate as long as there is no noticeable lag. My own attempts to reduce the frame rate have not worked.
My code, with the parts I tried for reducing the frame rate commented out:
import cv2
#import time

cap = cv2.VideoCapture(0)
#cap.set(cv2.CAP_PROP_FPS, 5)

while True:
    ret, frame = cap.read()
    #time.sleep(1)
    #cv2.waitKey(100)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
What I tried to reduce the frame rate:
I tried cap.set(cv2.CAP_PROP_FPS, 5) (also with 10 and 1). If I then print(cap.get(cv2.CAP_PROP_FPS)), it reports the frame rate I just set, but it has no effect on playback.
I tried time.sleep(1) in the while loop, but it has no effect on the video.
I tried a second cv2.waitKey(100) in the while loop, as suggested here on Quora: https://qr.ae/TUSPyN , but this also has no effect.
edit 1 (time.sleep and waitKey indeed work):
As pointed out in the comments, time.sleep(1) and cv2.waitKey(1000) should both work, and indeed they did after all. It was necessary to put them at the end of the while loop, after cv2.imshow().
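For reference, a slightly more robust variant of that fix (my own sketch, not from the original post; the helper name pace is mine) subtracts the time already spent reading and displaying the frame, so the loop holds a target rate instead of sleeping a fixed amount:

```python
import time

def pace(prev, target_fps):
    """Sleep just long enough so successive calls run at ~target_fps.
    Returns the timestamp to pass in on the next call."""
    min_interval = 1.0 / target_fps
    elapsed = time.time() - prev
    if elapsed < min_interval:
        time.sleep(min_interval - elapsed)
    return time.time()

# In the capture loop, after cv2.imshow():
#     t = pace(t, 5)   # throttle the loop to roughly 5 fps
```

Unlike a fixed time.sleep(1), this does not slow the loop further when reading and displaying a frame already took a while.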
As pointed out in the first comment, it might be better to choose a different setup for streaming media, which is what I am looking at now to get rid of the lag.
edit 2 (xpra instead of ssh -X):
We found out that even after all attempts to reduce the frame rate, ssh -X was still laggy. xpra turned out to be a lot quicker: it required no lowering of the frame rate or resolution and had no noticeable lag.

Related

Raspberry Pi Camera shows only a black image on second execution of OpenCV Python program

I'm trying to capture video with a Raspberry Pi 3B and a Raspberry Pi camera using OpenCV for some real-time image processing. I would prefer to use only OpenCV for image capture to reduce the resources used by the program, which is why I'm trying to use native OpenCV functions rather than the PiCamera module.
The application will capture images in low-light conditions, so I'd like to set the exposure time. I've found that I can set the CAP_PROP_EXPOSURE property via the set method, i.e. camera.set(cv2.CAP_PROP_EXPOSURE, x), where x is the exposure time in ms (when run on Unix systems). Setting the exposure time like this only works if you first disable the camera's auto exposure using the CAP_PROP_AUTO_EXPOSURE property. The documentation on this is a bit lacking, but various places online say that passing 0.25 to this property disables auto exposure, and passing 0.75 re-enables it. This is where things don't work for me.
On the first run of the program (see code below), the camera works fine and I can see live images being streamed on my Pi. However, upon restarting the application, I only get a black image unless I reboot the Raspberry Pi, and by extension the camera. I have tried turning auto exposure back on before program termination, using camera.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.75), but that doesn't work. I have also tried the reverse convention, where 0.75 turns auto exposure off and 0.25 turns it back on, but that doesn't work either.
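One thing worth guarding against here is the restore call never running at all if the capture loop raises. A small sketch of mine (the manual_exposure name and the fake-camera demonstration are my own, not from the question) wraps the toggle in try/finally so auto exposure is restored even on a crash:

```python
from contextlib import contextmanager

@contextmanager
def manual_exposure(cam, auto_prop, manual_val=0.25, auto_val=0.75):
    """Switch a capture device to manual exposure on entry and restore
    auto exposure on exit, even if the body raises. `cam` only needs a
    .set(prop, value) method, so a cv2.VideoCapture instance works."""
    cam.set(auto_prop, manual_val)
    try:
        yield cam
    finally:
        cam.set(auto_prop, auto_val)
```

Used as "with manual_exposure(camera, cv2.CAP_PROP_AUTO_EXPOSURE): ..." around the loop, the restore is guaranteed to run; whether that is enough to avoid the black image on the next start is a separate question.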
The program is written for Python 3.7.3, on the latest Raspberry PI OS Buster, and openCV-python version 4.5.3.56.
My code:
import numpy as np
import cv2

camera = cv2.VideoCapture(-1)
if not camera.isOpened():
    camera.open(-1)

camera.set(cv2.CAP_PROP_FRAME_WIDTH, 1024)   # Sets width of image to 1024px as per SOST
camera.set(cv2.CAP_PROP_FRAME_HEIGHT, 1024)  # Sets height of image to 1024px as per SOST
camera.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25) # Needed to set exposure manually
camera.set(cv2.CAP_PROP_EXPOSURE, 900)       # 900ms exposure as per SOST
camera.set(cv2.CAP_PROP_FPS, (1/0.9))        # Sets FPS accordingly

while True:
    ret, frame = camera.read()
    img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', img)
    if cv2.waitKey(1) == ord('q'):
        break

camera.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.75) # Sets camera exposure back to auto
camera.release()
cv2.destroyAllWindows()
Note: If replicating the issue on Windows platforms, the x value in camera.set(cv2.CAP_PROP_EXPOSURE, x) is the negative exponent of 2. So for example, camera.set(cv2.CAP_PROP_EXPOSURE, -3) sets the exposure time to 125ms.
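The mapping in that note can be computed rather than memorized. A tiny helper (the name and sketch are mine) converts a desired exposure time in seconds to the value those Windows backends expect:

```python
import math

def windows_exposure_value(seconds):
    """On Windows backends the exposure property is the base-2 exponent
    of the exposure time in seconds: a value v means 2**v seconds, so
    -3 corresponds to 0.125 s (125 ms)."""
    return round(math.log2(seconds))
```

For example, windows_exposure_value(0.125) returns -3, matching the note above; camera.set(cv2.CAP_PROP_EXPOSURE, windows_exposure_value(0.125)) would then request a 125 ms exposure.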
Edit: Fixed the exposure time in the code from 1000ms to 900ms.

Is there a way to trigger an action camera using python

I'm trying to make a timelapse using a cheap action cam - model: hyundai cnu3000.
My first attempt was using it as a webcam, with a simple OpenCV script to grab the images:
import cv2

cap = cv2.VideoCapture(0)
# Reset resolution to max it out,
# since OpenCV defaults to 640x480
cap.set(3, 3000)  # 3 = CAP_PROP_FRAME_WIDTH
cap.set(4, 3000)  # 4 = CAP_PROP_FRAME_HEIGHT

while True:
    ret, frame = cap.read()
    # Display the resulting frame
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        # Save when the 'q' key is pressed
        cv2.imwrite("testing_webcam.jpg", frame)
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
This results in images with a resolution of 1280x720,
which is the maximum 'video recording' resolution for the camera - which makes sense, since we are streaming live to the computer (actually a Raspberry Pi, but a Windows PC worked fine as well).
Now here is the thing - surprisingly, the camera is capable of images with a much higher resolution (2592x1944),
but only if I use it manually (i.e. pressing the button, thereby saving to the SD card).
I don't mind saving to the SD card, but I was wondering if there is a way to trigger the camera without streaming, to get the higher resolution.
I tried gphoto2 with my Pi as well - as expected, it doesn't work (I did not find this model in the supported-models list):
pi@raspberrypi:~ $ gphoto2 --auto-detect
Model                          Port
----------------------------------------------------------
Mass Storage Camera            disk:/media/pi/7AFB-BDAE
pi@raspberrypi:~ $ gphoto2 --trigger-capture
*** Error ***
This camera can not trigger capture.
ERROR: Could not trigger capture.
*** Error (-6: 'Unsupported operation') ***
For debugging messages, please use the --debug option.
Debugging messages may help finding a solution to your problem.
...
...
...
Any help / pointing in a direction would be much appreciated :D

How to change frame rate FPS of an existing video using openCV python

I am trying to change the frame rate, i.e. FPS, of an existing video using the OpenCV library in Python. Below is the code I am trying to execute. Even after setting the FPS property using cv2.CAP_PROP_FPS, the video does not play faster in cv2.imshow(), and the getter still returns the old FPS value. So how do I set the FPS value higher and make the video play faster?
Used version:
python = 3.7.4 and
opencv-python - 4.1.0.25
import cv2

video = cv2.VideoCapture("yourVideoPath.mp4")
video.set(cv2.CAP_PROP_FPS, int(60))

if __name__ == '__main__':
    print("Frame rate : {0}".format(video.get(cv2.CAP_PROP_FPS)))
    while video.isOpened():
        ret1, frame2 = video.read()
        cv2.imshow("Changed", frame2)
        if cv2.waitKey(10) & 0xFF == ord('q'):  # press q to quit
            break
    video.release()
    cv2.destroyAllWindows()
If you're only trying to play the video in the displayed window, the limiting factor is not the FPS of the video but the waitKey(10) call, which makes the program pause for 10 ms between frames.
The read() method of the VideoCapture class simply returns the next frame, with no concept of waiting or frame rate. The only thing throttling this code is the waitKey(10) call, which is thus the main factor determining playback speed. To change the frame rate as seen through imshow(), you'd need to change the time spent waiting. The wait is likely the dominant factor, but not the only one, since reading a frame also takes time.
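To make that concrete, the per-frame wait that approximates a given playback rate is just the frame period in milliseconds (the helper name is mine; this ignores the time read() and imshow() themselves take):

```python
def waitkey_delay_ms(target_fps):
    """Milliseconds to pass to cv2.waitKey for ~target_fps playback.
    Clamped to at least 1, because cv2.waitKey(0) blocks until a key
    is pressed rather than waiting 0 ms."""
    return max(1, int(round(1000.0 / target_fps)))
```

With cv2.waitKey(waitkey_delay_ms(60)) the loop above would display roughly 60 frames per second instead of the roughly 100 implied by waitKey(10).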
If you're actually trying to change the playback rate of an existing file and save the result, I am unsure whether OpenCV supports this directly; it likely depends on which backend you're using, since OpenCV implements the VideoCapture class with different third-party backends. As per the documentation of VideoCapture.set(), I'd check the return value of video.set(cv2.CAP_PROP_FPS, int(60)), since the documentation suggests it returns true if the property was actually changed.
As an alternative, you could investigate something like FFmpeg, which supports this relatively easily. If you want to stick with OpenCV, I know from personal experience that you can do this with the VideoWriter class: read the video frame by frame with VideoCapture, then save it at the desired frame rate with VideoWriter. I suspect FFmpeg will likely meet your needs, however.

How to ensure all frames are read and processed with OpenCV

I am trying to use OpenCV to load a video file or access the video stream from a webcam, but I'm unable to access all frames.
The video file is captured at 60 FPS, but in my code I am only able to access a few frames per second. I tried the threaded version of the OpenCV reader from imutils. It works better, but I still cannot access all frames.
In my code below, I use the threaded video reader to load the video file and resize the frames to reduce the processing power required.
After a frame is grabbed successfully, I will do some image processing work (in the future). But even with this boilerplate I can read at most a few (10+) frames per second, and the result stutters. Is there a way to resolve this?
import cv2
from imutils import resize
from imutils.video import VideoStream

vs = VideoStream(src="./video.MOV").start()

while True:
    frame = vs.read()
    if frame is not None:
        frame = resize(frame, 800)
        # some heavy analytics work
        cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cv2.destroyAllWindows()
vs.stop()
Experiment
I ran an experiment to count the frames loaded and the average time taken by each imshow call, on both my iMac and an Ubuntu machine with an Intel Core i5-7400 @ 3.00GHz and a 1080p monitor.
The video (h264) has a duration of 1:03 min and a size of 185.7MB.
The iMac loads a total of only 414 frames, while the Ubuntu machine loads a total of 2826 frames.
The average time taken by the imshow call is 0.0003s on both machines.
Basically, you just load the video and display it, so there is no obvious reason to get such a low FPS, except that you are using a Mac with a Retina screen. That may be what makes the video display slow: the Retina screen has far more pixels, and it can take the compositing engine longer to render your image on screen. I suggest using an external screen.

High FPS livestream over ethernet using Python

I plan on building an ROV and I am currently working on the video feed. I will be using fiber optics for all communications, and I am tinkering with OpenCV to stream a webcam feed with Python. I might choose IP cameras in the end, but I wanted to learn how to capture frames from a webcam in Python first. Since I didn't know what I would eventually use, I bought a cheap-as-they-get noname USB webcam just to try to get everything working. This camera feed will be used for navigation; a separate video recorder will probably be used for recording.
Enough about that, now to my issue. I am getting only 8 FPS when capturing frames, but I suspect that is due to the cheap webcam. The webcam is connected to a pcDuino3 Nano, which is connected to an Arduino for controlling thrusters and reading sensors. I never thought about using hardware to encode and decode images, and I don't know enough about that part yet to tell whether I can take advantage of any of the hardware.
Do you believe my webcam is the bottleneck? Is it a better idea to use an IP camera, or should I be able to get a decent FPS from a webcam connected to a pcDuino3 Nano capturing frames with OpenCV, or perhaps some other way? I tried capturing frames with Pygame, which gave me the same result; I also tried mjpg-streamer.
I'm programming in Python; this is the test I made:
import cv2, time

FPS = 0
cap = cv2.VideoCapture(0)
last = time.time()

for i in range(0, 100):
    before = time.time()
    rval, frame = cap.read()
    now = time.time()
    print("cap.read() took: " + str(now - before))
    if now - last >= 1:
        print(FPS)
        last = now
        FPS = 0
    else:
        FPS += 1

cap.release()
And the result is along the lines of:
cap.read() took: 0.118262052536
cap.read() took: 0.118585824966
cap.read() took: 0.121902942657
cap.read() took: 0.116680860519
cap.read() took: 0.119271993637
cap.read() took: 0.117949008942
cap.read() took: 0.119143009186
cap.read() took: 0.122378110886
cap.read() took: 0.116139888763
8
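A simpler way to measure throughput (a generic sketch of mine, not from the question; read_frame stands in for cap.read) avoids the per-second counter-reset logic in the code above:

```python
import time

def measure_fps(read_frame, seconds=1.0):
    """Call read_frame repeatedly for `seconds` and return the
    average number of completed calls per second."""
    frames = 0
    start = time.time()
    while time.time() - start < seconds:
        read_frame()
        frames += 1
    return frames / (time.time() - start)

# With a real capture device:
#     fps = measure_fps(lambda: cap.read(), seconds=2.0)
```

Averaging over a fixed window also smooths out the jitter visible in the per-read timings above.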
The webcam's specifications should state its frame rate explicitly, and that will tell you definitively whether the camera is the bottleneck.
However, my guess is that the bottleneck is the pcDuino3: most likely it cannot decode the video very fast, and that causes the low frame rate. You can run this exact code on a regular computer to verify this. Also, I believe OpenCV and mjpg-streamer both use libjpeg to decode the JPEG frames, so their similar frame rates are not surprising.
