Trying to make a timelapse using a cheap action cam - model: Hyundai CNU3000.
My first attempt was using it as a webcam, with a simple OpenCV script to grab the images:
import cv2

cap = cv2.VideoCapture(0)
# Request an oversized resolution to max it out,
# since OpenCV has a default of 640x480
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 3000)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 3000)

while True:
    ret, frame = cap.read()
    # Display the resulting frame
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        # Save when the 'q' key is pressed
        cv2.imwrite("testing_webcam.jpg", frame)
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
This results in images with a resolution of 1280x720,
which is the maximum video-recording resolution for the camera. That makes sense, since we are streaming live to the computer (actually a Raspberry Pi, but a Windows PC worked fine as well).
Now here is the thing: surprisingly, the camera is capable of images with a much higher resolution (2592x1944),
but only if I use it manually (i.e., pressing the shutter button, thereby saving to the SD card).
I don't mind saving to the SD card, but I was wondering if there is a way to trigger the camera without streaming, so I can get the higher resolution.
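A quick way to confirm what resolution was actually negotiated is to read the properties back after requesting an oversized frame (a minimal sketch; device index 0 assumed):

import cv2

cap = cv2.VideoCapture(0)
# Ask for more than the camera can deliver; the driver clamps to its maximum.
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 3000)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 3000)
# Read back what was actually negotiated (1280x720 on this camera).
print(cap.get(cv2.CAP_PROP_FRAME_WIDTH), cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
cap.release()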
Tried gphoto2 with my Pi as well - as expected, it doesn't work (I did not find this model in the supported-camera list):
pi@raspberrypi:~ $ gphoto2 --auto-detect
Model Port
----------------------------------------------------------
Mass Storage Camera disk:/media/pi/7AFB-BDAE
pi@raspberrypi:~ $ gphoto2 --trigger-capture
*** Error ***
This camera can not trigger capture.
ERROR: Could not trigger capture.
*** Error (-6: 'Unsupported operation') ***
For debugging messages, please use the --debug option.
Debugging messages may help finding a solution to your problem.
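(For what it's worth, gphoto2 --abilities lists the operations gphoto2 believes a detected camera supports; a camera that only shows up as a mass-storage device, as in the --auto-detect output above, exposes no remote-trigger protocol for gphoto2 to drive, which matches the error.)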
...
Any help / pointing in a direction would be much appreciated :D
Related
I'm trying to capture video with a Raspberry Pi 3B and a Raspberry Pi camera, using OpenCV for some real-time image processing. I would prefer to use only OpenCV for image capture to reduce the resources used by the program, which is why I'm trying to use native OpenCV functions rather than the PiCamera module.
The application will capture images in low-light conditions, so I'd like to set the exposure time. I've found that I can set the CAP_PROP_EXPOSURE property using the set method, i.e., camera.set(cv2.CAP_PROP_EXPOSURE, x), where x is the exposure time in ms (when run on Unix systems). Setting the exposure time like this only works if you first disable the camera's auto exposure using the CAP_PROP_AUTO_EXPOSURE property. The documentation on this is a bit lacking, but I found various places online saying that passing 0.25 to this property disables auto exposure, and passing 0.75 re-enables it. This is where things don't work for me.
On the first run of the program (see code below) the camera works fine, and I can see live images being streamed to my Pi. However, upon restarting the application, I only get a black image, unless I reboot the Raspberry Pi and, by extension, the camera. I have tried turning auto exposure back on before program termination, using camera.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.75), but that doesn't work. I have also tried the reversed convention, where 0.75 should turn auto exposure off and 0.25 should turn it back on, but that doesn't work either.
The program is written for Python 3.7.3, on the latest Raspberry Pi OS Buster, with opencv-python version 4.5.3.56.
My code:
import numpy as np
import cv2

camera = cv2.VideoCapture(-1)
if not camera.isOpened():
    camera.open(-1)

camera.set(cv2.CAP_PROP_FRAME_WIDTH, 1024)   # Sets width of image to 1024px as per SOST
camera.set(cv2.CAP_PROP_FRAME_HEIGHT, 1024)  # Sets height of image to 1024px as per SOST
camera.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25) # Needed to set exposure manually
camera.set(cv2.CAP_PROP_EXPOSURE, 900)       # 900ms exposure as per SOST
camera.set(cv2.CAP_PROP_FPS, (1/0.9))        # Sets FPS accordingly

while True:
    ret, frame = camera.read()
    img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', img)
    if cv2.waitKey(1) == ord('q'):
        break

camera.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.75) # Sets camera exposure back to auto
camera.release()
cv2.destroyAllWindows()
Note: if replicating the issue on Windows platforms, the x value in camera.set(cv2.CAP_PROP_EXPOSURE, x) is the exponent of 2, i.e., the exposure time is 2^x seconds. So, for example, camera.set(cv2.CAP_PROP_EXPOSURE, -3) sets the exposure time to 2^-3 s = 125ms.
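Assuming that mapping holds, a small hypothetical helper for converting a target exposure in milliseconds to the value Windows expects:

import math

def windows_exposure_value(ms):
    # Exposure is 2^x seconds, so x = log2(ms / 1000);
    # e.g. 125 ms -> log2(0.125) = -3, matching the note above.
    return round(math.log2(ms / 1000.0))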
Edit: Fixed the exposure time in the code from 1000ms to 900ms.
I am using a Raspberry Pi V2.1 camera. I want to control the camera's exposure time, shutter speed, etc. using OpenCV. I am following the OpenCV flags for video I/O documentation. The link is here:
https://docs.opencv.org/3.4/d4/d15/group__videoio__flags__base.html
For example, I have tried setting
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25) and 0.75,
and auto exposure does seem to turn on and off. But when I try to set the exposure value manually using
cap.set(cv2.CAP_PROP_EXPOSURE, x) with x from -1 to -13 (according to some online blogs),
the camera does not respond.
The same goes for the other flags; most of them do not seem to respond at all.
I have read the online documentation and learned that the flags are camera-dependent. The OpenCV documentation, in this case, is not helpful at all.
So my question is: how can I find out which flags are useful for the Pi camera, and what are the valid values of these flags?
Thank you in advance.
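One pragmatic way to find out is to probe each property empirically: read it, try to set it, and check whether the backend accepts the call. A minimal sketch (the property list is just a sample of standard OpenCV constants; device index 0 is an assumption):

import cv2

cap = cv2.VideoCapture(0)
props = {
    'CAP_PROP_BRIGHTNESS': cv2.CAP_PROP_BRIGHTNESS,
    'CAP_PROP_CONTRAST': cv2.CAP_PROP_CONTRAST,
    'CAP_PROP_SATURATION': cv2.CAP_PROP_SATURATION,
    'CAP_PROP_GAIN': cv2.CAP_PROP_GAIN,
    'CAP_PROP_EXPOSURE': cv2.CAP_PROP_EXPOSURE,
    'CAP_PROP_AUTO_EXPOSURE': cv2.CAP_PROP_AUTO_EXPOSURE,
}
for name, prop in props.items():
    value = cap.get(prop)            # unsupported properties often report -1 or 0
    accepted = cap.set(prop, value)  # set() returns False if the backend refuses it
    print(f'{name}: value={value}, settable={accepted}')
cap.release()

On Linux, v4l2-ctl --list-ctrls (from the v4l-utils package) lists the driver-level controls and their valid ranges, which is ultimately what these OpenCV flags map onto.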
I'm no expert on this topic, but I managed to manually set the exposure for an RPi 4 with a camera v2.1.
I set CAP_PROP_AUTO_EXPOSURE to 0.75 and CAP_PROP_EXPOSURE to 0. That left me with a black frame (as expected, I guess). Increasing the exposure value gives gradually brighter images; above a value of roughly 80 it stopped getting any brighter.
This code gradually increases the exposure after each displayed frame, and works for me:
import cv2

# Open Pi Camera
cap = cv2.VideoCapture(0)

# Set auto exposure to false
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.75)

exposure = 0
while cap.isOpened():
    # Grab frame
    ret, frame = cap.read()

    # Display if there is a frame
    if ret:
        cv2.imshow('Frame', frame)
        if cv2.waitKey(25) & 0xFF == ord('q'):
            break

    # Set exposure manually
    cap.set(cv2.CAP_PROP_EXPOSURE, exposure)
    # Increase exposure for every frame that is displayed
    exposure += 0.5

# Close everything
cap.release()
cv2.destroyAllWindows()
Cheers,
Simon
I have a Raspberry Pi running Raspbian 9, with OpenCV 4.0.1 installed and a USB webcam attached. The Raspberry Pi is headless; I connect with ssh <user>@<IP> -X. The goal is to get a real-time video stream on my client computer.
The issue is that there is a considerable lag of around 2 seconds. The stream playback is also unsteady: slow, and then quick again.
My guess is that SSH just cannot keep up with the camera's default 30 fps. I am therefore trying to reduce the frame rate, since I could live with a lower frame rate as long as there is no noticeable lag. My own attempts to reduce the frame rate have not worked.
My code; the parts I tried myself to reduce the frame rate, which did not work, are commented out:
import cv2
#import time

cap = cv2.VideoCapture(0)
#cap.set(cv2.CAP_PROP_FPS, 5)

while True:
    ret, frame = cap.read()
    #time.sleep(1)
    #cv2.waitKey(100)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
What I tried to reduce the frame rate:
I tried cap.set(cv2.CAP_PROP_FPS, 5) (also 10 and 1). If I then print(cap.get(cv2.CAP_PROP_FPS)) it reports the frame rate I just set, but it has no effect on the playback.
I tried time.sleep(1) in the while loop, but it has no effect on the video.
I tried a second cv2.waitKey(100) in the while loop, as suggested on Quora (https://qr.ae/TUSPyN), but this also has no effect.
Edit 1 (time.sleep and waitKey indeed work):
As pointed out in the comments, time.sleep(1) and cv2.waitKey(1000) should both work, and indeed they did after all. It was necessary to put them at the end of the while loop, after cv2.imshow().
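For reference, a self-contained sketch of the loop with the delay moved to the end:

import time
import cv2

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    time.sleep(1)  # throttle only after the frame has been shown
cap.release()
cv2.destroyAllWindows()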
As pointed out in the first comment, it might be better to choose a different setup for streaming media, which is what I am looking at now to get rid of the lag.
Edit 2 (xpra instead of ssh -X):
We found that, even after all attempts to reduce the frame rate, ssh -X was still laggy. xpra turned out to be a lot quicker: it required no lowering of the frame rate or resolution and had no noticeable lag.
I'm using OpenCV on OS X with my external webcam (Microsoft Cinema HD Lifecam), and its performance is very low, even with the simplest camera-readout code:
import cv2

cap = cv2.VideoCapture(1)
while cap.isOpened():
    ret, frame = cap.read()
    cv2.imshow("Output", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
I tried the same webcam with Photo Booth and it works well, with high FPS. I also tried the same code with the built-in FaceTime camera of my Mac, and it worked pretty fast. So it seems like I have some kind of configuration issue in OpenCV.
Has somebody experienced something like this?
Thanks for your answers.
It seems like I could solve my problem: I just had to decrease the resolution of the camera.

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)   # property id 3
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)  # property id 4

I think Photo Booth sets the resolution automatically in order to increase the speed of the readout, whereas one has to set this manually in OpenCV. Not sure about the correctness of this explanation, though.
Try to enforce a specific reader implementation (capture backend). Options to try on macOS are cv2.CAP_QT and cv2.CAP_AVFOUNDATION; the full list is in the OpenCV videoio flags documentation (linked in an earlier question above). Note that OpenCV has to be built with support for the chosen backend.
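A minimal sketch of forcing a backend, assuming a recent OpenCV build where VideoCapture takes an API preference as its second argument (device index 1 as in the question):

import cv2

# Ask OpenCV for the AVFoundation backend explicitly instead of letting it auto-pick.
cap = cv2.VideoCapture(1, cv2.CAP_AVFOUNDATION)
if cap.isOpened():
    print(cap.getBackendName())  # confirms which backend was actually opened
cap.release()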
This question already has an answer here:
How to capture multiple camera streams with OpenCV?
I am using OpenCV 3 and Python 3.6 for my project. I want to set up multiple cameras at a time and see the video feed from all of them at once, in order to do facial recognition with it. But I have not found a good way to do this. Here is one link I followed, but nothing happens: Reading from two cameras in OpenCV at once.
I have tried this blog post as well, but it only captures one image at a time from the video and cannot show the live feed:
https://www.pyimagesearch.com/2016/01/18/multiple-cameras-with-the-raspberry-pi-and-opencv/
People have done this with C++ before, but with Python it seems difficult to me.
The code below works and I've tested it. It assumes two cameras, one built-in webcam and one USB cam; adjust the VideoCapture numbers if both are USB cams.
import cv2

cap1 = cv2.VideoCapture(0)
cap2 = cv2.VideoCapture(1)

while True:
    ret1, img1 = cap1.read()
    ret2, img2 = cap2.read()
    if ret1 and ret2:
        cv2.imshow('img1', img1)
        cv2.imshow('img2', img2)
    k = cv2.waitKey(100)
    if k == 27:  # press Esc to exit
        break

cap1.release()
cap2.release()
cv2.destroyAllWindows()
My experience with a Raspberry Pi and 2 cams showed the limitation was the GPU on the Pi.
I used the setup tool to allocate more GPU memory, 512 MB.
Even so, it would slow down above roughly 10 fps with 2 cams.
The USB ports also restricted the video stream.
One solution is to put each camera on its own USB controller. I did this using a 4-channel PCIe card; the card must have a separate controller for each port. I'm just finishing a project where I snap images from 4 ELP USB cameras, combine the images into one, and write it to disk. I spent days trying to make it work. I found examples for two cameras that worked with my laptop camera and an external camera, but not with two external cameras: the internal camera is on a different USB controller than the external ports...
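Related to that, a common software-side pattern is to give each camera its own grab thread, so a slow read on one device does not stall the others. A minimal sketch (device indices 0 and 1 are assumptions; adapt them to your setup):

import threading
import cv2

class CameraThread(threading.Thread):
    """Grabs frames continuously so each camera is drained at full rate."""
    def __init__(self, index):
        super().__init__(daemon=True)
        self.cap = cv2.VideoCapture(index)
        self.lock = threading.Lock()
        self.frame = None

    def run(self):
        while self.cap.isOpened():
            ret, frame = self.cap.read()
            if ret:
                with self.lock:
                    self.frame = frame

    def latest(self):
        # Return a copy of the most recent frame, or None before the first grab.
        with self.lock:
            return None if self.frame is None else self.frame.copy()

cams = [CameraThread(i) for i in (0, 1)]
for cam in cams:
    cam.start()

while True:
    for i, cam in enumerate(cams):
        frame = cam.latest()
        if frame is not None:
            cv2.imshow('cam%d' % i, frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

for cam in cams:
    cam.cap.release()
cv2.destroyAllWindows()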