Image Processing: Bad Quality of Disparity Image with OpenCV - python

I want to create a disparity image using two images from low resolution usb cameras. I am using OpenCV 4.0.0. The frames I use are taken from a video. The results I am currently getting are very bad (see below).
Both cameras were calibrated and the calibration data was used to undistort the images. Is the poor quality caused by the low resolution of the left and right images?
Left:
Right:
To have a better guess there also is an overlay of both images.
Overlay:
The values for the cv2.StereoSGBM_create() function are based on those in the example code that ships with OpenCV (located in OpenCV/samples/python/stereo_match.py).
I would be really thankful for any help or suggestions.
Here is my code:
import cv2
import numpy as np
import matplotlib.pyplot as plt

# convert both images to grayscale
left = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
right = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)
# set up the disparity matcher
window_size = 3
min_disp = 16
num_disp = 112 - min_disp
stereo = cv2.StereoSGBM_create(minDisparity=min_disp,
                               numDisparities=num_disp,
                               blockSize=16,
                               P1=8 * 3 * window_size ** 2,
                               P2=32 * 3 * window_size ** 2,
                               disp12MaxDiff=1,
                               uniquenessRatio=10,
                               speckleWindowSize=100,
                               speckleRange=32
                               )
# compute the disparity map (StereoSGBM returns fixed-point values scaled by 16)
dis = stereo.compute(left, right).astype(np.float32) / 16.0
# display the computed disparity image
plt.imshow(dis, 'gray')
plt.show()

Most stereo algorithms require the input images to be rectified. Rectification transforms images so that corresponding epipolar lines are corresponding horizontal lines in both images. For rectification, you need to know both intrinsic and extrinsic parameters of your cameras.
OpenCV has all the tools required to perform both calibration and rectification. If you need to perform calibration, you need to have a calibration pattern (chessboard) available as well.
In short:
Compute intrinsic camera parameters using calibrateCamera().
Use the intrinsic parameters with stereoCalibrate() to perform extrinsic calibration of the stereo pair.
Using the parameters from stereoCalibrate(), compute the rectification transforms with stereoRectify().
Using the rectification transforms, compute the maps used for rectification and undistortion with initUndistortRectifyMap().
Now your cameras are calibrated and you can perform rectification and undistortion using remap() for images taken with the camera pair (as long as the cameras do not move relative to each other). The rectified images calculated by remap() can then be used to calculate disparity images.
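A minimal sketch of that pipeline could look like the following, assuming the chessboard corners have already been collected into objpoints, imgpoints_l and imgpoints_r, and that left and right are a new image pair from your cameras (an illustration, not drop-in code):

import cv2
import numpy as np

img_size = (640, 480)  # (width, height) of the calibration images

# 1. intrinsic calibration of each camera separately
ret_l, K_l, D_l, _, _ = cv2.calibrateCamera(objpoints, imgpoints_l, img_size, None, None)
ret_r, K_r, D_r, _, _ = cv2.calibrateCamera(objpoints, imgpoints_r, img_size, None, None)

# 2. extrinsic (stereo) calibration, keeping the intrinsics fixed
ret, K_l, D_l, K_r, D_r, R, T, E, F = cv2.stereoCalibrate(
    objpoints, imgpoints_l, imgpoints_r, K_l, D_l, K_r, D_r, img_size,
    flags=cv2.CALIB_FIX_INTRINSIC)

# 3. rectification transforms
R1, R2, P1, P2, Q, roi_l, roi_r = cv2.stereoRectify(K_l, D_l, K_r, D_r, img_size, R, T)

# 4. undistortion + rectification maps, one pair per camera
map_lx, map_ly = cv2.initUndistortRectifyMap(K_l, D_l, R1, P1, img_size, cv2.CV_32FC1)
map_rx, map_ry = cv2.initUndistortRectifyMap(K_r, D_r, R2, P2, img_size, cv2.CV_32FC1)

# 5. rectify every new image pair before feeding it to the stereo matcher
rect_l = cv2.remap(left, map_lx, map_ly, cv2.INTER_LINEAR)
rect_r = cv2.remap(right, map_rx, map_ry, cv2.INTER_LINEAR)
disparity = stereo.compute(rect_l, rect_r).astype(np.float32) / 16.0

The calibration and the maps only need to be computed once; per frame you repeat only the remap() and compute() calls.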
Additionally, I recommend checking out a relevant textbook on the topic. Learning OpenCV: Computer Vision with the OpenCV Library has a very practical description of the process.

I agree with @Catree's comment and @sebasth's answer, mainly because your images are not rectified at all.
However, another issue may occur and I would like to warn you about this. I tried to leave a comment on @sebasth's answer, but I can't comment yet...
Since you said you are using low-resolution USB cameras, I suspect they use rolling-shutter sensors. For scenes with movement and constant change, global-shutter cameras are the better choice; this is especially relevant if you intend to use the setup on moving scenes.
So with rolling-shutter cameras you will also have to be careful about synchronization. It can still work, but you will need to take care to synchronize the two cameras, preferably in a controlled environment (even with little change in lighting).
Also remember to turn off automatic camera parameters such as "White Balance" and, especially, "Exposure".
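With OpenCV's VideoCapture this can look roughly like the sketch below. Whether the properties actually take effect, and which values mean "manual", depends on the camera driver and backend, so treat the numbers as examples only:

import cv2

cap = cv2.VideoCapture(0)  # index of one of the USB cameras (example)

# Disable automatic exposure and white balance so both cameras keep
# identical, constant settings. With V4L2 backends, many devices use
# 0.25 for manual exposure and 0.75 for automatic exposure.
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25)
cap.set(cv2.CAP_PROP_EXPOSURE, -6)          # example manual exposure value
cap.set(cv2.CAP_PROP_AUTO_WB, 0)            # turn off automatic white balance
cap.set(cv2.CAP_PROP_WB_TEMPERATURE, 4500)  # example fixed colour temperature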
Best regards!

Related

How to implement radial motion blur by Python OpenCV?

I can find motion-blur kernels for the horizontal and vertical directions, e.g. this link.
However, how can I implement a radial motion blur like the following pictures? I can find this functionality in Photoshop etc., but I cannot find any kernel reference on the web. How can I implement it with Python OpenCV? Thanks
I don't think OpenCV has something like this built-in, but DIPlib has: dip.AdaptiveGauss(). It blurs the image with a different Gaussian at every pixel. One image indicates the orientation of the Gaussian, another one indicates the scaling.
This is how I replicated your blurred image:
import diplib as dip

# read the input image
img = dip.ImageRead('rose.jpg')
# per-pixel scaling: grows linearly with the distance from the image centre
scale = dip.CreateRadiusCoordinate(img.Sizes()) / 100
# per-pixel orientation: the angle around the image centre
angle = dip.CreatePhiCoordinate(img.Sizes())
# blur each pixel with a Gaussian whose orientation and scaling come from the two parameter images
out = dip.AdaptiveGauss(img, [angle, scale], [1, 5])
dip.Show(out)
Disclaimer: I'm an author of DIPlib.

OpenCV: What can cause a mostly black stereovision disparity map?

I have been dipping my toes into OpenCV and the stereovision functions it contains, and am struggling to get good results while following instructions in both the OpenCV documentation and many articles online. Specifically, I believe that at this point I have managed to obtain a decent calibration of my cameras, a decent stereo calibration, and even a decent rectification, but when moving to create the disparity map I seem to get nonsense back.
I am using a set of self-acquired images taken with a Pentax K-3 ii camera using a Loreo Lens-in-a-cap CCD splitter which gives me "two" images taken on one CCD. I can then split the image in half (and trim some of the pixels near the overlap) to have a reliable baseline distance in world coordinates with the camera. I unfortunately have no information on the true focal length of this configuration but I would guess it is around 9cm.
I have performed camera calibration on each split-image set to get camera matrices, distance coefficients, and object and image points for use in epipolar geometry. Then, following the procedure laid out in [1,2], perform stereo calibration and rectification. I do not have the required reputation to embed images, so please click here. By my understanding, the fact that similar features in both images are similar distances to the true horizontal lines I have drawn across them means that this is a good rectification result and should be usable.
However, when I implement the following code to create the disparity map:
import cv2 as cv
import numpy as np

# Settings for cv.StereoSGBM_create
minDisparity = 1
numDisparities = 64
blockSize = 1
disp12MaxDiff = 1
uniquenessRatio = 10
speckleWindowSize = 0
speckleRange = 8
stereo = cv.StereoSGBM_create(minDisparity=minDisparity, numDisparities=numDisparities,
                              blockSize=blockSize, disp12MaxDiff=disp12MaxDiff,
                              uniquenessRatio=uniquenessRatio,
                              speckleWindowSize=speckleWindowSize, speckleRange=speckleRange)
# Calculate the disparity map
disp = stereo.compute(imgL, imgR).astype(np.float32)
# Normalize the values to spread them across the viewable range
disp = cv.normalize(disp, None, 0, 255, cv.NORM_MINMAX)
# Resize for display
disp = cv.resize(disp, (1000, 1000))
cv.imshow("disparity", disp)
cv.waitKey(0)
The result is disheartening. Intuitively, seeing a lot of black space surrounding edges which are actually fairly well-defined (such as in the chessboard pattern or near my hands) would suggest that there is very little disparity. However, it seems clear to me that the images are quite different in terms of translation, so I am a bit confused. I have been delving through the documentation and have run out of ideas. I tried reusing the code that produced the initial set of epipolar lines provided here, which seemed to work quite nicely on the original image. However, it produces epipolar lines which are certainly not horizontal. This tells me that something is wrong, but I do not understand what it could be, especially given the "visual test" I described above. I suspect I am misapplying that section of the code.
One thought I have is that I need to use an ROI to select the valid parts of the image, but I am unsure how to go about this. I think this is supported by the odd streaking behavior at the right edge of the left image post-rectification.
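For reference, this is roughly the cropping I had in mind, assuming the validPixROI1/validPixROI2 rectangles returned by cv.stereoRectify() are the right thing to use here (mtxL, distL, mtxR, distR, R, T and imgSize stand in for my calibration results):

# stereoRectify() also returns the valid pixel region of each rectified image
R1, R2, P1, P2, Q, roiL, roiR = cv.stereoRectify(mtxL, distL, mtxR, distR,
                                                 imgSize, R, T, alpha=0)
# crop both rectified images to the intersection of the two ROIs
x = max(roiL[0], roiR[0])
y = max(roiL[1], roiR[1])
w = min(roiL[0] + roiL[2], roiR[0] + roiR[2]) - x
h = min(roiL[1] + roiL[3], roiR[1] + roiR[3]) - y
imgL_valid = imgL[y:y + h, x:x + w]
imgR_valid = imgR[y:y + h, x:x + w]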
This is a link to a pastebin of all of my code, aside from the initial camera calibration which has significant runtime due to the size of the images.
I would appreciate any help that can be offered as at this point I am going a bit codeblind. I am limited to only 8 links due to my reputation, so please let me know if I can provide better images or documentation of my work.

How to overlay two live images of the same scene having multiple calibrated cameras in python

I have multiple cameras that are closely located to each other, looking at the same scene.
I can calibrate all of them (at once - currently using the openCV algorithm).
What I now want to do is to overlay, for example, the following:
Let one camera be a LIDAR depth camera, the second a grayscale camera and the third an infrared camera. I want to overlay the grayscale scene, as an image, with the depth and infrared information mapped onto the correct pixels (similar to the depth-grayscale overlays that many 3D cameras provide).
The cameras have different opening angles and resolutions.
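For concreteness, below is a rough sketch of the per-pixel reprojection I am after; depth, gray, K_d, K_g, R and t are placeholder names (depth image in metres, grayscale image, the two 3x3 intrinsic matrices, and the pose of the grayscale camera relative to the depth camera), not working code:

import numpy as np

h, w = depth.shape
u, v = np.meshgrid(np.arange(w), np.arange(h))
z = depth.ravel()
valid = z > 0

# back-project every valid depth pixel to a 3D point in the depth camera frame
x = (u.ravel()[valid] - K_d[0, 2]) * z[valid] / K_d[0, 0]
y = (v.ravel()[valid] - K_d[1, 2]) * z[valid] / K_d[1, 1]
pts = np.stack([x, y, z[valid]])                # 3 x N

# transform into the grayscale camera frame and project with its intrinsics
uvw = K_g @ (R @ pts + t.reshape(3, 1))
pix = np.round(uvw[:2] / uvw[2]).astype(int).T  # N x 2 pixel coordinates

# write the depth values into an image aligned with the grayscale pixels
aligned = np.zeros(gray.shape[:2], np.float32)
inside = (pix[:, 0] >= 0) & (pix[:, 0] < aligned.shape[1]) & \
         (pix[:, 1] >= 0) & (pix[:, 1] < aligned.shape[0])
aligned[pix[inside, 1], pix[inside, 0]] = z[valid][inside]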
I appreciate any hint or comment :-)
Cheers.

distance calculation between camera and depth pixel?

I need to calculate the distance from the camera to a depth-image pixel. I searched the internet, but I only found information and code examples for stereo images, whereas I need this for a depth image.
Here, the depth image is encoded in grayscale (0-255) with a defined range: a pixel value of 0 corresponds to 5 m and a pixel value of 255 corresponds to 500 m.
The camera's intrinsic parameters (focal length, image sensor format) and extrinsic parameters (rotation and translation matrix) are given. I need to calculate the distance for different camera orientations and rotations.
I want to do it using OpenCV in Python. Is there any specific documentation or code example regarding this?
Or is any further information necessary to do this?
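From my reading so far, I think the computation would look something like the sketch below, assuming the gray value encodes depth along the optical axis and fx, fy, cx, cy are the camera intrinsics; please correct me if this is wrong:

import math

def pixel_distance(u, v, gray_value, fx, fy, cx, cy):
    # 1. decode the gray value (0-255) into metres: 0 -> 5 m, 255 -> 500 m
    z = 5.0 + (gray_value / 255.0) * (500.0 - 5.0)
    # 2. back-project the pixel (u, v) through the pinhole model
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # 3. Euclidean distance from the camera centre to that 3D point
    return math.sqrt(x * x + y * y + z * z)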
My research is on the same topic as yours, but I have run into a problem: I use stereoCalibrate() to calibrate the binocular camera pair and found that the obtained translation vector is very different from the actual baseline distance. In addition, the parameters used in stereoCalibrate() were obtained by calibrating the two cameras individually with calibrate().

Why opencv distort the image by undistort()?

I am following the OpenCV Camera Calibration tutorial and have used about 100 images for the calibration. After obtaining the camera matrix and distortion coefficients, I use them to undistort another set of images. What I realized is that the undistorted image is heavily distorted on both sides.
One of the example images used for the camera matrix:
Using the camera matrix to undistort my experimental image gave me very unreasonable results.
Original image:
After applying undistort():
Clearly, the undistortion process only paid attention to the center of the image. How can I make it undistort the image properly?
Thank you very much!
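For reference, my undistortion step follows the tutorial and looks roughly like this (mtx and dist are the outputs of calibrateCamera(); the alpha argument of getOptimalNewCameraMatrix() controls how much of the original frame is kept):

import cv2

img = cv2.imread('experiment.jpg')  # placeholder file name
h, w = img.shape[:2]
# alpha=0 keeps only valid pixels, alpha=1 keeps the whole original frame
new_mtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
undistorted = cv2.undistort(img, mtx, dist, None, new_mtx)
# crop to the valid region reported by getOptimalNewCameraMatrix()
x, y, rw, rh = roi
undistorted = undistorted[y:y + rh, x:x + rw]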
UPDATE:
Using images that cover the field of view as much as possible helps. Here is the new result for the same image:
I have one more question: how do I know whether the calibration returns satisfactory results? The RMS reprojection error is one indicator, but it is not very robust.
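One additional check I found is the per-view reprojection error, computed with projectPoints() (a sketch; objpoints, imgpoints, rvecs, tvecs, mtx and dist are the inputs/outputs of my calibrateCamera() call):

import cv2
import numpy as np

per_view_error = []
for objp, imgp, rvec, tvec in zip(objpoints, imgpoints, rvecs, tvecs):
    projected, _ = cv2.projectPoints(objp, rvec, tvec, mtx, dist)
    per_view_error.append(cv2.norm(imgp, projected, cv2.NORM_L2) / len(projected))

print("mean per-view reprojection error:", np.mean(per_view_error))
print("worst view:", int(np.argmax(per_view_error)), "error:", max(per_view_error))

A view with a much larger error than the rest usually indicates a bad chessboard detection that can be removed before recalibrating.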
