I have a (fairly cheap) webcam which produces images that are far lighter than they should be. The camera does have brightness correction - the adjustment is obvious when moving from light to dark - but it is consistently far too bright.
I am looking for a way to reduce the brightness without iterating over the entire frame (OpenCV Python bindings on a Raspberry Pi). Does that exist? Or better, is there a standard way of sending hints to a webcam to reduce the brightness?
import cv2
# create video capture
cap = cv2.VideoCapture(0)
cv2.namedWindow("output", 1)  # namedWindow returns None in Python; no need to keep the result
while True:
    # read the frames
    _, frame = cap.read()
    cv2.imshow("output", frame)
    if cv2.waitKey(33) == 27:  # exit on Esc
        break
# Clean up everything before leaving
cv2.destroyAllWindows()
cap.release()
I forgot Raspberry Pi is just running a regular OS. What an awesome machine. Thanks for the code, which confirms that you just have a regular cv2 image.
Vectorized scaling (without looping over each pixel in Python) is straightforward. The snippet below just scales every pixel; it would be easy to add a few lines to normalize the image if it has a major offset (see the sketch after the snippet).
import numpy
#...
scale = 0.5 # whatever scale you want
frame_darker = (frame * scale).astype(numpy.uint8)
#...
Does that look like the start of what you want?
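For the normalization mentioned above, a minimal sketch using OpenCV's built-in min-max normalization (assuming frame is the usual uint8 BGR image from cap.read()):
import cv2
# Stretch the frame's actual min..max to the full 0..255 range.
frame_norm = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX)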
The standard way to adjust webcam parameters is the VideoCapture set() method (provided your camera supports the interface; most do, in my experience). This avoids the performance overhead of processing the image yourself.
VideoCapture::set
CAP_PROP_BRIGHTNESS or CAP_PROP_SATURATION (spelled CV_CAP_PROP_BRIGHTNESS / CV_CAP_PROP_SATURATION in older OpenCV versions) would appear to be what you want.
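A minimal sketch (whether the driver honors the value, and its valid range, are device-dependent):
import cv2

cap = cv2.VideoCapture(0)
# Value range varies by driver; many UVC cameras expose 0-255, others 0.0-1.0.
cap.set(cv2.CAP_PROP_BRIGHTNESS, 0.3)
print(cap.get(cv2.CAP_PROP_BRIGHTNESS))  # read back to see what was actually accepted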
Related
I'm trying to capture video with a Raspberry Pi 3B and a Raspberry Pi camera using OpenCV for some real-time image processing. I would prefer to use only OpenCV for image capture to reduce the resources used by the program, which is why I'm trying to use native OpenCV functions rather than the PiCamera module.
The application will capture images in low-light conditions, so I'd like to set the exposure time. I've found that I can set the CAP_PROP_EXPOSURE property via the set() method, i.e., camera.set(cv2.CAP_PROP_EXPOSURE, x), where x is the exposure time in ms (when run on Unix systems). Setting the exposure time like this only works if you first disable the camera's auto exposure using the CAP_PROP_AUTO_EXPOSURE property. The documentation on this is a bit lacking, but I found various places online that say passing 0.25 to this property disables auto exposure, and passing 0.75 re-enables it. This is where things don't work for me.
On the first run of the program (see code below) the camera works fine, and I can see live images being streamed to my Pi. However, upon restarting the application, I only get a black image unless I reboot the Raspberry Pi, and by extension, the camera. I have tried turning auto exposure back on before program termination, using camera.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.75), but that doesn't work. I have also tried the case where 0.75 should be passed to turn auto exposure off, and 0.25 to turn it back on, but that doesn't work either.
The program is written for Python 3.7.3, on the latest Raspberry Pi OS (Buster), with opencv-python version 4.5.3.56.
My code:
import numpy as np
import cv2
camera = cv2.VideoCapture(-1)
if not camera.isOpened():
    camera.open(-1)
camera.set(cv2.CAP_PROP_FRAME_WIDTH, 1024) # Sets width of image to 1024px as per SOST
camera.set(cv2.CAP_PROP_FRAME_HEIGHT, 1024) # Sets height of image to 1024px as per SOST
camera.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25) # Needed to set exposure manually
camera.set(cv2.CAP_PROP_EXPOSURE, 900) # 900ms exposure as per SOST
camera.set(cv2.CAP_PROP_FPS, (1/0.9)) # Sets FPS accordingly
while True:
    ret, frame = camera.read()
    img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', img)
    if cv2.waitKey(1) == ord('q'):
        break
camera.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.75) # Sets camera exposure back to auto
camera.release()
cv2.destroyAllWindows()
Note: If replicating the issue on Windows platforms, the x value in camera.set(cv2.CAP_PROP_EXPOSURE, x) is interpreted as a (negative) exponent of 2, in seconds. So, for example, camera.set(cv2.CAP_PROP_EXPOSURE, -3) sets the exposure time to 2^-3 s = 125ms.
Edit: Fixed the exposure time in the code from 1000ms to 900ms.
I want to capture 1920x1080 video from my camera, but I've run into two issues:
When I initialize a VideoCapture, it changes the width/height to 640/480
When I try to change the width/height in cv2, the image becomes messed up
Images
When setting 1920x1080 in cv2, the image becomes blue and has a glitchy bar at the bottom
cap = cv2.VideoCapture('/dev/video0')
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
Here's what's happening according to v4l2-ctl. The blue image doesn't seem to be the result of a pixel-format change (e.g. RGB to BGR).
And finally, here's an example of an image being captured at 640x480 that has the correct colouring. The only difference in the code is that the width/height is not set in cv2.
Problem:
Actually the camera you are using has 2 modes:
640x480
1920x1080
One is for the main stream, one is for the sub stream. I have also met this problem a couple of times, and here are the possible reasons why it doesn't work.
Note: I assume you have tried different ways to run at full resolution (1920x1080), such as cv2.VideoCapture(0), cv2.VideoCapture(-1), cv2.VideoCapture(1) ...
Possible reasons
The first reason could be that the camera doesn't support the resolution you desire, but in your case we see that it supports 1920x1080. So this cannot be the reason for your issue.
The second, more general, reason is that the OpenCV backend doesn't support your camera driver. Since you are using VideoCaptureProperties of OpenCV, the documentation says:
Reading / writing properties involves many layers. Some unexpected result might happens along this chain. Effective behaviour depends from device hardware, driver and API Backend.
What you can do:
In this case, if you really need to reach that resolution and make it work with OpenCV, you should use the SDK of your camera (if it has one).
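As a quick diagnostic (a minimal sketch; the device path is the one from the question), you can read the properties back after setting them, since set() can fail silently depending on the backend:
import cv2

cap = cv2.VideoCapture('/dev/video0')
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
# Even if set() returns True, the driver may not have applied the value;
# reading it back shows what the capture is actually delivering.
print(cap.get(cv2.CAP_PROP_FRAME_WIDTH), cap.get(cv2.CAP_PROP_FRAME_HEIGHT))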
I have a Python program which uses OpenCV's VideoCapture to capture webcam images (in my case a Logitech C922). It has an autofocus feature, which is great, but I don't know when refocusing is done, and that means the image I capture can be blurred (not yet in focus).
Is there any way to know when the camera has finished focusing?
Besides interacting with the camera hardware, as #ZdaR has mentioned, you can determine whether the image is sharp on every frame. If the image is sharp, the camera is most probably in focus.
There are some great answers here on determining the sharpness of an image.
In the case of a shallow depth of field (the object is sharp while the background is blurry), you can set the threshold on only some of the sharpest pixels (e.g. the sharpest 20% of pixels), since an out-of-focus or still-focusing image should be blurry altogether.
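One common sharpness metric is the variance of the Laplacian (a sketch; the threshold separating "blurry" from "sharp" is camera- and scene-dependent and needs tuning):
import cv2

def sharpness(frame):
    # Variance of the Laplacian: more edge energy means a sharper image.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# e.g. treat the camera as focused once sharpness(frame) stabilizes above your threshold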
You can set the focus manually so that the camera is focused already when you need to use the camera.
Here is the code:
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)   # set the resolution (3 is CAP_PROP_FRAME_WIDTH)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)   # (4 is CAP_PROP_FRAME_HEIGHT)
cap.set(cv2.CAP_PROP_AUTOFOCUS, 0)        # disable autofocus
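With autofocus disabled, you can then pin the focus to a fixed value. Note that the units and valid range of CAP_PROP_FOCUS are driver-dependent, so the value below is just a placeholder to tune:
cap.set(cv2.CAP_PROP_FOCUS, 30)  # driver-dependent units; adjust until your subject is sharp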
I have a rather challenging task and have spent a lot of time on it, but without satisfactory results.
The goal is to perform background subtraction for future people counting. I am doing this with Python 3 and OpenCV 3.3. I have applied cv2.createBackgroundSubtractorMOG2 but faced two main difficulties:
As the background is almost dark, and some people walking in the video are wearing dark clothes, the subtractor is sometimes unable to detect them properly and simply skips them (take a look at the image below). Converting the image from BGR to HSV made little difference, but I expect an even better result is possible.
As you can see, a man in grey clothes is not detected well. How is it possible to improve this? If there are more efficient methods, please share them; I appreciate and welcome any help! Maybe it makes sense to use a stereo camera and try to process objects using image depth?
Another question that worries me: what if a couple of people are close to each other in heavy traffic? Their regions will simply be merged and counted as a single one. What can be done in such a case?
Thanks in advance for any information!
UPD:
I performed histogram equalization on every channel of the image in HSV colorspace, but even now I am not able to capture some people whose color is close to the background color.
Here is the updated code:
import cv2
import numpy as np
import imutils
cap = cv2.VideoCapture('test4.mp4')
clahe = cv2.createCLAHE(2.0, (8,8))
history = 50
fgbg = cv2.createBackgroundSubtractorMOG2(history=history, detectShadows=True)
while cap.isOpened():
    frame = cap.read()[1]
    height, width = frame.shape[:2]  # shape is (rows, cols), i.e. (height, width)
    frame = imutils.resize(frame, width=min(500, width))
    origin = frame.copy()
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    for channel in range(3):
        hsv[:,:,channel] = clahe.apply(hsv[:,:,channel])
    fgmask = fgbg.apply(hsv, learningRate=1 / history)
    blur = cv2.medianBlur(fgmask, 5)
    cv2.imshow('mask', fgmask)
    cv2.imshow('hsv', hsv)
    cv2.imshow('origin', origin)
    k = cv2.waitKey(30) & 0xff
    if k == 27 or k == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
I believe following the steps below will solve your first problem to a large extent:
1. Preprocessing:
Image preprocessing is crucial because a computer does not see an image the way we humans perceive it. Hence, it is always advisable to look for ways to enhance the image rather than working on it directly.
For the given image, the man in a jacket appears to have almost the same color as the background. I applied histogram equalization to all three channels of the image and merged them to get the following:
The man is slightly more visible than before.
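A minimal sketch of that per-channel equalization (assuming img is a uint8 BGR image; global equalizeHist is shown, though CLAHE is a drop-in alternative):
import cv2

b, g, r = cv2.split(img)
# Equalize each channel independently, then merge back into one image.
equalized = cv2.merge([cv2.equalizeHist(b), cv2.equalizeHist(g), cv2.equalizeHist(r)])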
2. Color Space:
Your choice of the HSV color space was right. But why restrict yourself to all three channels together? I extracted the hue channel alone and got the following:
3. Fine Tuning:
Now, to the image above you will have to apply some optimal threshold, and then follow it up with a morphological erosion to get a better silhouette of the man in the frame.
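For example (a sketch: Otsu's method stands in for "some optimal threshold", and the kernel size will need tuning):
import cv2

# hue is the single-channel hue image from step 2.
_, mask = cv2.threshold(hue, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
mask = cv2.erode(mask, kernel, iterations=1)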
Note: In order to address your second problem, you can also apply some morphological operations after thresholding.
I'm using OpenCV on OS X with my external webcam (Microsoft Cinema HD Lifecam), and its performance is very low, even with the simplest camera readout code.
import cv2
cap = cv2.VideoCapture(1)
while cap.isOpened():
    ret, frame = cap.read()
    cv2.imshow("Output", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
I tried the same webcam with Photo Booth and it works well, with a high FPS. I also tried the same code with the built-in FaceTime camera of my Mac and it worked pretty fast. So it seems that I have some kind of configuration issue in OpenCV.
Has somebody ever experienced something like this?
Thanks for your answers.
It seems I was able to solve my problem: I just had to decrease the resolution of the camera.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)   # 3 is CAP_PROP_FRAME_WIDTH
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)  # 4 is CAP_PROP_FRAME_HEIGHT
I think Photo Booth sets the resolution automatically in order to increase the speed of the readout, whereas one has to set this manually in OpenCV. Not sure about the correctness of this explanation, though.
Try to enforce a specific reader implementation, see here. Options to try are CAP_QT and CAP_AVFOUNDATION; the full list is here. Note that OpenCV has to be built with support for the chosen reader implementation.
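In recent OpenCV versions you can also request the backend directly in the constructor (a sketch; device index 1 matches the question):
import cv2

# Ask for the AVFoundation backend explicitly instead of letting OpenCV choose.
cap = cv2.VideoCapture(1, cv2.CAP_AVFOUNDATION)
print(cap.getBackendName())  # confirm which backend was actually opened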