OpenCV reads blurred image while camera is refocusing - python

I have a Python program which uses OpenCV's VideoCapture to capture webcam images (in my case a Logitech C922). The camera has an autofocus feature, which is great, but I don't know when refocusing has finished, and that makes the images I capture blurred (not in focus yet).
Is there any way to know when the camera has finished focusing?

Besides interacting with the camera hardware as #ZdaR has mentioned, you can determine on every frame whether the image is sharp or not. If the image is sharp, the camera is most probably in focus.
There are some great answers here on determining the sharpness of an image.
In the case of a shallow depth of field (the object is sharp while the background is blurry), you can apply the threshold to only the sharpest pixels (e.g. the sharpest 20%), since an out-of-focus or still-focusing image should be blurry altogether.

You can set the focus manually so that the camera is already focused when you need to use it.
Here is the code:
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)   # set the resolution
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
cap.set(cv2.CAP_PROP_AUTOFOCUS, 0)        # disable autofocus

Related

OpenCV changing VideoCapture resolution causes colour issues and glitches

I want to capture 1920x1080 video from my camera but I've run into two issues:
When I initialize a VideoCapture, it changes the width/height to 640/480.
When I try to change the width/height in cv2, the image becomes messed up.
Images
When setting 1920x1080 in cv2, the image becomes blue and has a glitchy bar at the bottom
cap = cv2.VideoCapture('/dev/video0')
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
Here's what's happening according to v4l2-ctl. The blue image doesn't seem to be the result of a pixel format change (e.g. RGB to BGR).
And finally, here's an example of an image being captured at 640x480 that has the correct colouring. The only difference in the code is that width/height is not set in cv2
Problem:
Actually the camera you are using has two modes:
640x480
1920x1080
One is for the main stream and one is for the sub stream. I have also run into this problem a couple of times; here are the possible reasons why it doesn't work.
Note: I assume you tried different ways to open the camera at full resolution (1920x1080), such as cv2.VideoCapture(0), cv2.VideoCapture(-1), cv2.VideoCapture(1), ...
Possible reasons
The first reason could be that the camera doesn't support the resolution you desire, but in your case we see that it does support 1920x1080, so this cannot be the cause of your issue.
The second, more general reason is that the OpenCV backend doesn't support your camera driver. Since you are using OpenCV's VideoCaptureProperties, the documentation says:
Reading / writing properties involves many layers. Some unexpected result might happens along this chain. Effective behaviour depends from device hardware, driver and API Backend.
What you can do:
In this case, if you really need to reach that resolution and keep things compatible with OpenCV, you should use your camera's SDK (if it has one).

How to alter resolution to portrait mode values with opencv-python?

I want to change the video resolution from landscape to portrait mode for output from my laptop's built-in webcam (cv2.VideoCapture(0)). I tried rescaling the frames; it does go to portrait mode (height bigger than width) but the video is skewed/stretched. Is there a way around this? Please help. I am using OpenCV with Python.
Welcome to Stack Overflow. What you can achieve depends on the webcam you use: the resolution you want needs to be supported by your camera. This small tutorial explains it very well.
If your camera does not support the resolution you want, you have two possibilities:
Crop the image to the resolution you want.
If the maximum resolution does not cover your target, crop to the biggest resolution possible with your desired aspect ratio and then upscale it.
Be careful with upscaling; there are different interpolation methods available.

Image Processing: Bad Quality of Disparity Image with OpenCV

I want to create a disparity image using two images from low resolution usb cameras. I am using OpenCV 4.0.0. The frames I use are taken from a video. The results I am currently getting are very bad (see below).
Both cameras were calibrated and the calibration data was used to undistort the images. Is it because of the low resolution of the left and right images?
Left:
Right:
To have a better guess there also is an overlay of both images.
Overlay:
The values for the cv2.StereoSGBM_create() function are based on the ones of the example code that comes with OpenCV (located in OpenCV/samples/python/stereo_match.py).
I would be really thankful for any help or suggestions.
Here is my code:
import cv2
import numpy as np
import matplotlib.pyplot as plt

# convert both images to grayscale
left = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
right = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)

# set up the disparity matcher
window_size = 3
min_disp = 16
num_disp = 112 - min_disp
stereo = cv2.StereoSGBM_create(minDisparity=min_disp,
                               numDisparities=num_disp,
                               blockSize=16,
                               P1=8 * 3 * window_size ** 2,
                               P2=32 * 3 * window_size ** 2,
                               disp12MaxDiff=1,
                               uniquenessRatio=10,
                               speckleWindowSize=100,
                               speckleRange=32)

# compute disparity (SGBM returns fixed-point values scaled by 16)
dis = stereo.compute(left, right).astype(np.float32) / 16.0

# display the computed disparity image
plt.imshow(dis, 'gray')
plt.show()
Most stereo algorithms require the input images to be rectified. Rectification transforms images so that corresponding epipolar lines are corresponding horizontal lines in both images. For rectification, you need to know both intrinsic and extrinsic parameters of your cameras.
OpenCV has all the tools required to perform both calibration and rectification. If you need to perform calibration, you need to have a calibration pattern (chessboard) available as well.
In short:
Compute intrinsic camera parameters using calibrateCamera().
Use the intrinsic parameters with stereoCalibrate() to perform extrinsic calibration of the stereo pair.
Using the parameters from stereoCalibrate(), compute rectification parameters with stereoRectify().
Using the rectification parameters, calculate the maps used for rectification and undistortion with initUndistortRectifyMap().
Now your cameras are calibrated and you can perform rectification and undistortion using remap() for images taken with the camera pair (as long as the cameras do not move relative to each other). The rectified images calculated by remap() can then be used to calculate disparity images.
Additionally, I recommend checking out some relevant text book on the topic. Learning OpenCV: Computer Vision with the OpenCV Library has a very practical description of the process.
I agree with #Catree's comment and #sebasth's answer, mainly because your images are not rectified at all.
However, another issue may occur and I would like to warn you about this. I tried to leave a comment on #sebasth's answer, but I can't comment yet...
As you said you are using low-resolution USB cameras, I suspect they use rolling shutter sensors. For scenes in motion and constant change, global shutter cameras are the ideal choice.
(Search for examples of the rolling shutter effect to see the distortion it causes on moving scenes.)
Stereo matching can still work with rolling shutter cameras, but you will need to take care with camera synchronization, preferably in a controlled environment (even with little change in lighting).
Also remember to turn off the automatic camera parameters, like "White Balance" and especially "Exposure".
Best regards!

How to make shadowed part of background count as background (picture below) with OpenCV in Python?

I am very new to OpenCV (and to Stack Overflow). I'm writing a program with OpenCV which takes a picture of an object (e.g. a pen, rice, or a phone) put on paper and calculates what percentage of the picture the object makes up.
The problem I'm facing is that when I threshold the image (I tried adaptive and Otsu), the photo has some shadow around the edges:
Original image
Resulted picture
And here's my code:
import cv2
img = cv2.imread("image.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
b,g,r = cv2.split(img)
th, thresh = cv2.threshold(b, 100, 255, cv2.THRESH_BINARY|cv2.THRESH_OTSU)
cv2.imwrite("image_bl_wh.png", thresh)
I tried blurring and morphology, but couldn't make it work.
How can I make my program count those black parts around the picture as background, and is there a better and easier way to do it?
P.S. Sorry for my English grammar mistakes.
This is not a programmatic solution, but when you do automatic visual inspection it is the first thing you should try: improve your set-up. The image is simply darker around the edges, so increasing the brightness when recording the images should help.
If that's not an option, you could consider keeping an empty image for comparison. What you are trying to do is background segmentation, and there are better ways than simple color thresholding; they do, however, usually require at least one image of the background, or multiple images.
If you want a software only solution you should try an edge detector combined with morphological operators.

OpenCV darken oversaturated webcam image

I have a (fairly cheap) webcam which produces images that are far lighter than they should be. The camera does have brightness correction - the adjustments are obvious when moving from light to dark - but it is consistently far too bright.
I am looking for a way to reduce the brightness without iterating over the entire frame (OpenCV Python bindings on a Raspberry Pi). Does that exist? Or better, is there a standard way of sending hints to a webcam to reduce the brightness?
import cv2

# create video capture
cap = cv2.VideoCapture(0)
window = cv2.namedWindow("output", 1)
while True:
    # read the frames
    _, frame = cap.read()
    cv2.imshow("output", frame)
    if cv2.waitKey(33) == 27:
        break
# Clean up everything before leaving
cv2.destroyAllWindows()
cap.release()
I forgot Raspberry Pi is just running a regular OS. What an awesome machine. Thanks for the code which confirms that you just have a regular cv2 image.
Simple vectorized scaling (no per-pixel Python loop) is straightforward. The snippet below scales every pixel in a single NumPy operation; it would be easy to add a few lines to normalize the image if it has a significant offset.
import numpy
#...
scale = 0.5 # whatever scale you want
frame_darker = (frame * scale).astype(numpy.uint8)
#...
Does that look like the start of what you want?
The standard way to adjust webcam parameters is the VideoCapture set() method (provided your camera supports the interface; most do in my experience). This avoids the performance overhead of processing the image yourself.
VideoCapture::set
CAP_PROP_BRIGHTNESS or CAP_PROP_SATURATION (CV_CAP_PROP_* in older OpenCV versions) would appear to be what you want.
