cv2 SIFT + Brute force matching not giving good results - python

So I'm trying to overlay a thermal image with an rgb image using SIFT to match features and homography so that I can overlay them later on. The code I have works with about 50% of the thermal/rgb sets I have, but many sets, such as this one, give horrible results. I think the homography step itself is fine but fails because the matches are way off. I'll attach some code; any advice on how to tune this would be great, as I've already spent a long time trying to get this working on my own. Thanks!
import cv2
import numpy as np

# rgb and thermal are the two input images, loaded earlier
MIN_MATCH_COUNT = 10
sift = cv2.xfeatures2d.SIFT_create(sigma=1.6, contrastThreshold=0.04, edgeThreshold=15)
kp1, des1 = sift.detectAndCompute(rgb, None)
kp2, des2 = sift.detectAndCompute(thermal, None)
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)
# Lowe's ratio test
good = []
for m, n in matches:
    if m.distance < 0.8 * n.distance:
        good.append(m)
good = sorted(good, key=lambda x: x.distance)
img3 = cv2.drawMatches(rgb, kp1, thermal, kp2, good, None, flags=2)
which gives the following
Then I do homography with RANSAC on the found matches
if len(good) > MIN_MATCH_COUNT:
    src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0, maxIters=1000)
    matchesMask = mask.ravel().tolist()
    h, w, c = rgb.shape
    pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
    dst = cv2.perspectiveTransform(pts, M)
    thermal = cv2.polylines(thermal, [np.int32(dst)], True, 255, 3, cv2.LINE_AA)  # draw lines around as a box
    draw_params = dict(matchColor=(0, 255, 0),  # draw matches in green color
                       singlePointColor=None,
                       matchesMask=matchesMask,  # draw only inliers
                       flags=2)
    img3 = cv2.drawMatches(rgb, kp1, thermal, kp2, good, None, **draw_params)
resulting in this
Like I said, I think this is failing because the BFMatcher is not finding correct matches, but I am not sure why. Again, any and all help is very appreciated! I've tried using an ORB detector, converting the rgb image to grayscale, and pre-scaling the images to similar sizes, and I still get bad results.
Here is an example of a working rgb-thermal pair to demonstrate what I am trying to do.

The problem with your images is that they are so simple compared to natural images (no color, no major differences in texture, etc.) that you cannot reliably use SIFT and other techniques designed with normal photos in mind. Most of your wrong matches are actually plausible matches, since the matched patches look locally similar to each other once reduced to a descriptor.
My suggestion is to look at alternatives that match images using structural information, or to add information to the images (e.g. a rainbow height colormap, since your images can be seen as bump maps; the distance transform plus a colormap might work too, or both of those combined with edge detection as the three channels of a very weird but heterogeneous color image) and see if SIFT behaves differently.
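A minimal sketch of that last idea, assuming `rgb` and `thermal` are the images from the question's code and that `thermal` is a single-channel 8-bit image; the channel choices (equalized image, distance transform, Canny edges) and their parameters are one arbitrary combination, not something prescribed by the answer:

import cv2
import numpy as np

def enrich(gray):
    """Stack three derived views of a flat single-channel image so SIFT has more texture to latch onto."""
    # channel 1: the image itself, contrast-stretched
    eq = cv2.equalizeHist(gray)
    # channel 2: distance transform of a rough binarization
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    dist = cv2.distanceTransform(bw, cv2.DIST_L2, 5)
    dist = cv2.normalize(dist, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # channel 3: edges
    edges = cv2.Canny(gray, 50, 150)
    # note: OpenCV's SIFT converts a 3-channel input to grayscale internally, so the
    # three channels end up as a weighted blend; that is still a different (and often
    # more textured) image than the raw input, and you can also try your own single-channel mix
    return cv2.merge([eq, dist, edges])

# run SIFT on the enriched images instead of the raw ones
# rgb_enriched = enrich(cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY))
# thermal_enriched = enrich(thermal)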

Related

How to find orientation of a particular SIFT feature/description in OpenCV?

So I have a template and an image. I want to find the location and orientation of the template inside the image. I am using SIFT to find features and descriptors.
The problem is that only one feature is consistently matched correctly. Homography requires at least 4 corresponding points to work: error: (-28:Unknown error code -28) The input arrays should have at least 4 corresponding point sets to calculate Homography in function 'cv::findHomography'
Since I am working with 2D images at the same scale, the position and rotation of even one correct feature should be enough to provide the location and rotation of the template in the image.
From OpenCV Docs https://docs.opencv.org/3.4/da/df5/tutorial_py_sift_intro.html
OpenCV also provides cv.drawKeyPoints() function which draws the small
circles on the locations of keypoints. If you pass a flag,
cv.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS to it, it will draw a circle
with size of keypoint and it will even show its orientation.
However, the image I am working with is too low resolution to actually see the circles, and I need numbers that can be compared.
All the other examples of finding orientation that I can find on the internet use edge detection. However, there is no straight edge in my template whose slope could easily be calculated.
This solution could help, but my images could potentially contain other unwanted objects which would mess with minAreaRect. If there is any other solution, please let me know.
I have looked for tutorials, books, and documentation on how to crunch the numbers in 'keypoints' and 'descriptors', but I could not find any.
Perhaps I should use SURF (which is faster with 2D, same-color images), but it is not available in the latest OpenCV version.
Template to be searched
Image to be searched in
Matched
import cv2 as cv
import numpy as np
from matplotlib import pyplot as plt

# img1: template, img2: image to search in (loaded earlier)
sift = cv.SIFT_create()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
print(des1)
# BFMatcher with default params
bf = cv.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)
# Apply ratio test
good = []        # list of lists, for drawMatchesKnn
good_match = []  # flat list, for findHomography/drawMatches
for m, n in matches:
    if m.distance < .5 * n.distance:
        good.append([m])
        good_match.append(m)
print('good matches are')
print(good)
print(good_match)
# cv.drawMatchesKnn expects a list of lists as matches.
img3 = cv.drawMatchesKnn(img1, kp1, img2, kp2, good, None, flags=cv.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
plt.imshow(img3), plt.show()
src_pts = np.float32([kp1[m.queryIdx].pt for m in good_match]).reshape(-1, 1, 2)
dst_pts = np.float32([kp2[m.trainIdx].pt for m in good_match]).reshape(-1, 1, 2)
M, mask = cv.findHomography(src_pts, dst_pts, cv.RANSAC, 5.0)
matchesMask = mask.ravel().tolist()
h, w = img1.shape
pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
dst = cv.perspectiveTransform(pts, M)
img2 = cv.polylines(img2, [np.int32(dst)], True, 255, 3, cv.LINE_AA)
draw_params = dict(matchColor=(0, 255, 0),  # draw matches in green color
                   singlePointColor=None,
                   matchesMask=matchesMask,  # draw only inliers
                   flags=2)
# use the flat good_match list here so it lines up with matchesMask
img3 = cv.drawMatches(img1, kp1, img2, kp2, good_match, None, **draw_params)
plt.imshow(img3, 'gray'), plt.show()
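For the "numbers in the keypoints" the question asks about: each cv.KeyPoint already exposes position, scale, and orientation as plain attributes, so with a single trusted match you can read off an estimate of the template's location and rotation directly. A minimal sketch, assuming `good_match`, `kp1`, and `kp2` from the code above and that the best match really is correct:

best = min(good_match, key=lambda m: m.distance)  # most confident match
k_t = kp1[best.queryIdx]   # keypoint in the template
k_i = kp2[best.trainIdx]   # keypoint in the search image

print('template kp: pt=%s size=%.1f angle=%.1f' % (k_t.pt, k_t.size, k_t.angle))
print('image kp:    pt=%s size=%.1f angle=%.1f' % (k_i.pt, k_i.size, k_i.angle))

# rotation of the template inside the image, in degrees (difference of SIFT orientations)
rotation = (k_i.angle - k_t.angle) % 360
# relative scale (should be ~1.0 if both really are at the same scale)
scale = k_i.size / k_t.size
print('estimated rotation: %.1f deg, scale: %.2f, location: %s' % (rotation, scale, k_i.pt))

In practice you would average these estimates over a few of the best matches rather than trusting a single keypoint.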

OpenCV unsatisfying results when finding Homography from ORB feature detection

Even though the ORB feature matching seems quite solid and I only take the 20 best matches for cv.findHomography, the resulting polyline is terrible. Note that in the results shown in the attached image, the top-right image is a video stream, hence the variation between matched results. Is there a library that could be used to get better results, or am I making any major mistakes in my code?
# des1 & des2 are created with cv.ORB_create(10000, 1.2, nlevels=8, edgeThreshold=5)
kp2, des2 = orb.detectAndCompute(gray, None)
matches = bf.knnMatch(des1, des2, k=2)
good = []
for m, n in matches:
if m.distance < 0.75 * n.distance:
good.append(m)
matches = sorted(good, key=lambda x: x.distance)
src_pts = np.float32([kp1[m.queryIdx].pt for m in matches[:20]]).reshape(-1, 1, 2)
dst_pts = np.float32([kp2[m.trainIdx].pt for m in matches[:20]]).reshape(-1, 1, 2)
M, mask = cv.findHomography(dst_pts, src_pts, cv.RANSAC, 5.0)
matchesMask = mask.ravel().tolist()
h = src_pts.max(0)[0][1] - src_pts.min(0)[0][1]
w = src_pts.max(0)[0][0] - src_pts.min(0)[0][0]
pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
dst = cv.perspectiveTransform(pts, M)
img3 = None
img3 = cv.drawMatchesKnn(img1, kp1, gray, kp2, good, img3, flags=cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
img3 = cv.polylines(img3, [np.int32(dst)], True, (0, 0, 255), 3, cv.LINE_AA)
# Code for showing img3 would follow
There could be several problems with this setup:
The pattern itself. It has repeated squares, so there can be matches that connect different squares in the first and the second image. This can produce a lot of outliers, so the homography cannot fit in a reasonable way.
Low image quality. If the smaller image had low resolution and were a bit blurry, matching would become more difficult and more outliers could appear. (Here the image actually has higher resolution and is merely displayed at a small scale, so this point does not apply.)
The feature points are located in a small region of the image, while you project the corners of the image, which are far from those points. This makes the homography estimation very unstable, so uncertainties in the feature-point coordinates become magnified several times: even jitter of less than 1 pixel can result in projection errors of up to about 8 pixels, and it can be even worse, because the RANSAC threshold of 5.0 admits coordinates with lower precision. A rough sanity check along these lines is sketched below.
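A hedged sketch of that sanity check (not part of the original answer): look at the RANSAC inlier ratio and at how much of the frame the inliers actually cover before trusting the projected polygon. The variable names follow the question's code; the thresholds in the usage comment are arbitrary examples.

import numpy as np

def homography_quality(mask, pts, frame_shape):
    """Rough indicators of how trustworthy a RANSAC homography is.

    mask        -- the inlier mask returned by cv.findHomography
    pts         -- the matched points in that frame (N x 1 x 2 float32)
    frame_shape -- (height, width) of that frame
    """
    inliers = mask.ravel().astype(bool)
    inlier_ratio = inliers.mean()                      # e.g. < 0.5 is suspicious
    covered = pts[inliers].reshape(-1, 2)
    span = covered.max(axis=0) - covered.min(axis=0)   # bounding box of the inliers
    coverage = (span[0] / frame_shape[1]) * (span[1] / frame_shape[0])
    return inlier_ratio, coverage

# usage with the question's variables:
# ratio, coverage = homography_quality(mask, dst_pts, gray.shape[:2])
# if ratio < 0.5 or coverage < 0.1:
#     print("homography likely unreliable: inliers %.2f, coverage %.2f" % (ratio, coverage))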

Image stitching distorted warp with multiple images

I am working on a project which requires me to stitch images together. I decided to test this with buildings due to the large number of possible key points that can be calculated. I have been following several guides, but the one with the best results for 2-3 images has been this guide: https://towardsdatascience.com/image-stitching-using-opencv-817779c86a83. The way I stitch multiple images is to stitch the first two, take the output, stitch that with the third image, and so on. I am confident in the matching of descriptors for the images. But as I stitch more and more images, the previously stitched part gets pushed further and further back along the -z axis, meaning it gets distorted and smaller. The code I use to accomplish this is as follows:
import cv2
import numpy as np
import os

os.chdir('images')
# note: the second argument of imread is an IMREAD_* flag, not a color-conversion code
img_ = cv2.imread('Output.jpg', cv2.IMREAD_GRAYSCALE)
img = cv2.imread('DJI_0019.jpg', cv2.IMREAD_GRAYSCALE)
# Setting up the ORB key point detector
orb = cv2.ORB_create()
# Using ORB to compute keypoints and descriptors
kp, des = orb.detectAndCompute(img_, None)
kp2, des2 = orb.detectAndCompute(img, None)
print(len(kp))
# Setting up the BFMatcher
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=False)
matches = bf.knnMatch(des, des2, k=2)  # find the 2 best matches for each descriptor (required for the ratio test)
# Using Lowe's ratio test as suggested in the paper, at .7-.8
good = []
for m in matches:
    if m[0].distance < .8 * m[1].distance:
        good.append(m)
matches = np.asarray(good)  # matches is essentially a list of matching descriptors
# Aligning the images
if len(matches) >= 4:
    src = np.float32([kp[m.queryIdx].pt for m in matches[:, 0]]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches[:, 0]]).reshape(-1, 1, 2)
    # Creating the homography and mask
    H, masked = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print(H)
else:
    # stop here instead of continuing with an undefined H
    raise RuntimeError("Could not find 4 good matches to find homography")
dst = cv2.warpPerspective(img_, H, (img.shape[1] + 900, img.shape[0]))
dst[0:img.shape[0], 0:img.shape[1]] = img
cv2.imwrite("Output.jpg", dst)
The output of the 4th+ stitch looks like this:
As you can see, the images are getting transformed further and further in a strange way. My theory is that this happens because of the camera position and angle at which the images were taken, but I am not sure. If that is the case, are there optimal capture parameters that would produce images better suited to stitching?
Is there a way to fix this issue so that the content is kept "flush" against the x axis?
Edit: Adding source images: https://imgur.com/zycPQuV
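One thing worth trying (not part of the original post): OpenCV ships a high-level Stitcher that matches all images and refines the camera parameters jointly, instead of chaining pairwise homographies onto a growing panorama, which is what lets the drift accumulate here. A minimal sketch, where the file names are hypothetical:

import cv2

filenames = ['DJI_0018.jpg', 'DJI_0019.jpg', 'DJI_0020.jpg']  # hypothetical file names
images = [cv2.imread(f) for f in filenames]

# Stitcher handles feature matching, global adjustment, warping and blending internally
stitcher = cv2.Stitcher_create()
status, pano = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite('Output.jpg', pano)
else:
    print('stitching failed with status %d' % status)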

Removing grid in scanned/photographed medical Documents

I'm a dental student and am currently trying to write a script for analyzing and extracting handwritten digits from dental records. I already have a rough version of the script finished, but my recognition rate is pretty low. A big problem with analyzing the data is a grid that proves difficult to remove.
Scanned form that I want to analyse (white fields are for anonymity):
Empty form:
I've tried different solutions for this problem (erosion/dilation, the Hough line transform, and subtraction of the lines).
Using feature matching and subtracting an empty template currently gives me the best results.
Results:
Eroding and dilating this image gives even better results
Results:
But this needs to be recalibrated nearly every time I try it.
Do you know of a more elegant solution to my problem?
Could SURF matching give better results?
Thank you very much!
Here's my code so far:
import cv2
import numpy as np

GOOD_MATCH_PERCENT = 0.15

def match_img_to_template(input_img, template_img, MAX_FEATURES, GOOD_MATCH_PERCENT):
    # blurring of the template image
    template_img = cv2.GaussianBlur(template_img, (3, 3), cv2.BORDER_DEFAULT)
    # equalizing the histogram of the input image
    img_preprocessed = cv2.equalizeHist(input_img)
    # ORB detector
    orb = cv2.ORB_create(MAX_FEATURES)
    kp1, des1 = orb.detectAndCompute(img_preprocessed, None)
    kp2, des2 = orb.detectAndCompute(template_img, None)
    # Brute force matching
    matcher = cv2.DescriptorMatcher_create(cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING)
    matches = sorted(matcher.match(des1, des2, None), key=lambda x: x.distance)
    # Remove the not-so-good matches (applied once; the original applied the percentage twice)
    numGoodMatches = int(len(matches) * GOOD_MATCH_PERCENT)
    matches = matches[:numGoodMatches]
    # Extract the locations of the good matches
    points1 = np.zeros((len(matches), 2), dtype=np.float32)
    points2 = np.zeros((len(matches), 2), dtype=np.float32)
    for i, match in enumerate(matches):
        points1[i, :] = kp1[match.queryIdx].pt
        points2[i, :] = kp2[match.trainIdx].pt
    # Find homography
    h, mask = cv2.findHomography(points1, points2, cv2.RANSAC)
    # Use the homography to register the scan onto the template
    height, width = template_img.shape
    input_warped = cv2.warpPerspective(input_img, h, (width, height))
    ret1, input_warped_thresh = cv2.threshold(input_warped, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    diff = cv2.absdiff(template_img, input_warped_thresh)
    # plain binary threshold (ADAPTIVE_THRESH_* flags belong to cv2.adaptiveThreshold, not cv2.threshold)
    ret, diff = cv2.threshold(diff, 20, 255, cv2.THRESH_BINARY)
    diff = cv2.equalizeHist(diff)
    # Create kernels
    kernel1 = np.ones((3, 3), np.uint8)
    kernel2 = np.ones((6, 6), np.uint8)
    # erode and dilate to remove the grid
    diff_erode = cv2.erode(diff, kernel1)
    diff_dilated = cv2.dilate(diff_erode, kernel2)
    # invert diff_dilated
    diff_dilated_inv = cv2.bitwise_not(diff_dilated)
    return diff_dilated_inv
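A hedged alternative to template subtraction (not from the original post): since the grid consists of long horizontal and vertical lines while the digits are short strokes, you can often isolate the grid with long thin morphological kernels and subtract it, with no per-scan calibration. A minimal sketch, assuming `binary` is the inverted, binarized scan (ink = white); `line_len` is a guess that depends on the scan resolution:

import cv2
import numpy as np

def remove_grid(binary, line_len=40):
    """Remove long horizontal/vertical lines from a binarized image (ink = 255)."""
    # kernels much longer than any digit stroke, so the opening keeps only the grid lines
    horiz_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (line_len, 1))
    vert_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, line_len))
    horiz_lines = cv2.morphologyEx(binary, cv2.MORPH_OPEN, horiz_kernel)
    vert_lines = cv2.morphologyEx(binary, cv2.MORPH_OPEN, vert_kernel)
    grid = cv2.bitwise_or(horiz_lines, vert_lines)
    # subtract the grid, then close the small gaps it leaves in the digits
    cleaned = cv2.subtract(binary, grid)
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))
    return cleaned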

How to detect a shift between images

I am analyzing multiple images and need to be able to tell if they are shifted compared to a reference image. The purpose is to tell if the camera moved at all in between capturing images. I would ideally like to be able to correct the shift in order to still do the analysis, but at a minimum I need to be able to determine if an image is shifted and discard it if it's beyond a certain threshold.
Here are some examples of the shifts in an image I would like to detect:
I will use the first image as a reference and then compare all of the following images to it to figure out if they are shifted. The images are gray-scale (they are just displayed in color using a heat-map) and are stored in a 2-D numpy array. Any ideas how I can do this? I would prefer to use the packages I already have installed (scipy, numpy, PIL, matplotlib).
As Lukas Graf hints, you are looking for cross-correlation. It works well, if:
The scale of your images does not change considerably.
There is no rotation change in the images.
There is no significant illumination change in the images.
For plain translations cross-correlation is very good.
The simplest cross-correlation tool is scipy.signal.correlate. However, it uses the trivial method for cross-correlation, which is O(n^4) for a two-dimensional image with side length n. In practice, with your images it'll take very long.
A better tool is scipy.signal.fftconvolve, since convolution and correlation are closely related.
Something like this:
import numpy as np
import scipy.signal

def cross_image(im1, im2):
    # get rid of the color channels by performing a grayscale transform
    # the type cast into 'float' is to avoid overflows
    im1_gray = np.sum(im1.astype('float'), axis=2)
    im2_gray = np.sum(im2.astype('float'), axis=2)
    # get rid of the averages, otherwise the results are not good
    im1_gray -= np.mean(im1_gray)
    im2_gray -= np.mean(im2_gray)
    # calculate the correlation image; note the flipping of one of the images
    return scipy.signal.fftconvolve(im1_gray, im2_gray[::-1, ::-1], mode='same')
The funny-looking indexing im2_gray[::-1, ::-1] rotates it by 180° (mirrors it both horizontally and vertically). This is the difference between convolution and correlation: correlation is a convolution with the second signal mirrored.
Now if we just correlate the first (topmost) image with itself, we get:
This gives a measure of self-similarity of the image. The brightest spot is at (201, 200), which is in the center for the (402, 400) image.
The brightest spot coordinates can be found:
np.unravel_index(np.argmax(corr_img), corr_img.shape)
The linear position of the brightest pixel is returned by argmax, but it has to be converted back into the 2D coordinates with unravel_index.
Next, we try the same by correlating the first image with the second image:
The correlation image looks similar, but the best correlation has moved to (149,200), i.e. 52 pixels upwards in the image. This is the offset between the two images.
This seems to work with these simple images. However, there may be false correlation peaks, as well, and any of the problems outlined in the beginning of this answer may ruin the results.
In any case you should consider using a windowing function. The choice of function is not that important, as long as something is used. Also, if you have problems with small rotation or scale changes, try correlating several small areas against the surrounding image. That will give you different displacements at different positions of the image.
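To turn the correlation peak into the actual correction the question asks about, subtract the image centre from the peak position. A minimal sketch building on cross_image above (for plain 2-D arrays, drop the axis=2 sums inside cross_image; the 5-pixel threshold is an arbitrary example, and the sign of the correction should be verified on a known shift):

import numpy as np
import scipy.ndimage

def find_shift(reference, image):
    """Return the (dy, dx) displacement of `image` relative to `reference`."""
    corr = cross_image(reference, image)          # cross_image as defined above
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2            # the peak lands here for zero shift
    return np.array(peak) - center

# usage sketch: discard frames that moved too much, or shift them back
# dy, dx = find_shift(ref_img, img)
# if np.hypot(dy, dx) > 5:                        # example threshold in pixels
#     img = scipy.ndimage.shift(img, (dy, dx))    # check the sign convention on a known shift first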
Another way to solve it is to compute SIFT points in both images, use RANSAC to get rid of outliers, and then solve for the translation using a least-squares estimator.
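A minimal sketch of that idea, hedged as one possible implementation rather than what this answer's author had in mind (it assumes 8-bit images, so convert float heat-map arrays first, and a recent OpenCV where SIFT lives in the main module): match SIFT descriptors, let RANSAC reject outliers via cv2.estimateAffinePartial2D, and average the inlier displacements for a pure-translation estimate.

import cv2
import numpy as np

def estimate_translation(ref, img):
    """Estimate the (dx, dy) shift of img relative to ref using SIFT + RANSAC."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref, None)
    kp2, des2 = sift.detectAndCompute(img, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC rejects outlier matches; the fitted model is rotation + scale + translation
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC, ransacReprojThreshold=3.0)
    # for a pure translation, the least-squares estimate is the mean inlier displacement
    disp = (dst - src).reshape(-1, 2)[inliers.ravel().astype(bool)]
    return disp.mean(axis=0)  # (dx, dy)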
As Bharat said, another option is to use SIFT features and RANSAC:
import numpy as np
import cv2
from matplotlib import pyplot as plt

def crop_region(path, c_p):
    """
    Crop the matched region out of the input image.
    c_p: corner points
    """
    # 3 or 4 channels, as in the original
    img = cv2.imread(path, -1)
    # mask
    mask = np.zeros(img.shape, dtype=np.uint8)
    # fill the matched region
    channel_count = img.shape[2]
    ignore_mask_color = (255,) * channel_count
    cv2.fillPoly(mask, c_p, ignore_mask_color)
    # apply the mask
    matched_region = cv2.bitwise_and(img, mask)
    return matched_region

def features_matching(path_temp, path_train):
    """
    Feature matching + perspective transformation.
    """
    img1 = cv2.imread(path_temp, 0)   # template
    img2 = cv2.imread(path_train, 0)  # input image
    min_match = 10
    # SIFT detector
    sift = cv2.xfeatures2d.SIFT_create()
    # extract the keypoints and descriptors with SIFT
    kps1, des1 = sift.detectAndCompute(img1, None)
    kps2, des2 = sift.detectAndCompute(img2, None)
    FLANN_INDEX_KDTREE = 0
    index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
    search_params = dict(checks=50)
    flann = cv2.FlannBasedMatcher(index_params, search_params)
    matches = flann.knnMatch(des1, des2, k=2)
    # store all the good matches (g_match) as per Lowe's ratio test
    g_match = []
    for m, n in matches:
        if m.distance < 0.7 * n.distance:
            g_match.append(m)
    if len(g_match) > min_match:
        src_pts = np.float32([kps1[m.queryIdx].pt for m in g_match]).reshape(-1, 1, 2)
        dst_pts = np.float32([kps2[m.trainIdx].pt for m in g_match]).reshape(-1, 1, 2)
        M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
        matchesMask = mask.ravel().tolist()
        h, w = img1.shape
        pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
        dst = cv2.perspectiveTransform(pts, M)
        img2 = cv2.polylines(img2, [np.int32(dst)], True, (0, 255, 255), 3, cv2.LINE_AA)
    else:
        # without enough matches there is no region to draw or crop
        print("Not enough matches have been found! - %d/%d" % (len(g_match), min_match))
        return (None, None)
    draw_params = dict(matchColor=(0, 255, 255),
                       singlePointColor=(0, 255, 0),
                       matchesMask=matchesMask,  # only inliers
                       flags=2)
    # region corners
    cpoints = np.int32(dst)
    a, b, c = cpoints.shape
    # reshape to the standard format
    c_p = cpoints.reshape((b, a, c))
    # crop the matching region
    matching_region = crop_region(path_train, c_p)
    img3 = cv2.drawMatches(img1, kps1, img2, kps2, g_match, None, **draw_params)
    return (img3, matching_region)
