How to improve/modify parameters of Feature Matching - SIFT SURF - Opencv - python

I am trying to use feature matching to determine whether two images are similar. I have tried SIFT, SURF, and ORB, with similar results from each. As a first step I am trying to display the matches between these two images (Image1, Image2). I have tried both brute-force matching and knn matching, but with both implementations I get around 10 matches with little to no accuracy. Are the two images too different in terms of scale, transformation, and perspective to generate accurate matches? What parameters could I modify to improve performance? The matching often produces a lot of candidate matches, but only a few pass the ratio test.
import cv2
import matplotlib.pyplot as plt
import numpy as np
model = cv2.imread("path")
modelg = cv2.cvtColor(model,cv2.COLOR_BGR2GRAY)
modelg = cv2.GaussianBlur(modelg,(7,7),0)
height, width = modelg.shape[:2]
print(str(height)+ " " + str(width))
frame = cv2.imread("Path")
plt.imshow(frame)
plt.show()
#frame = cv2.imread("Path")
frame = cv2.resize(frame,(width,height))
frame = cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(frame,(7,7),0)
#blur = cv2.blur(frame,(7,7))
plt.imshow(blur,cmap='gray')
plt.show()
plt.imshow(modelg,cmap='gray')
plt.show()
# Initiate the SURF detector (SIFT and ORB were also tried, with similar results)
surf = cv2.xfeatures2d.SURF_create()
orb = cv2.ORB_create()  # ORB detector, created but not used below
# find the keypoints and descriptors with SURF
kp1, des1 = surf.detectAndCompute(modelg, None)
kp2, des2 = surf.detectAndCompute(blur, None)
#kp1, des1 = surf.detectAndCompute(,None)
#kp2, des2 = surf.detectAndCompute(blur,None)
# BFMatcher with default params
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1,des2,k=2)
print(len(matches))
# Apply ratio test
good = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        good.append([m])
print(len(good))
# cv.drawMatchesKnn expects list of lists as matches.
img3 = cv2.drawMatchesKnn(modelg,kp1,blur,kp2,good,None,flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
plt.imshow(img3),plt.show()
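
One direction to explore (a sketch only; the parameter values below are illustrative starting points, not values tuned for these particular images, and modelg/blur refer to the variables from the snippet above) is to loosen the detector thresholds so more keypoints survive, and to relax the ratio test slightly:

import cv2

# SURF: a lower hessianThreshold keeps weaker blobs, extended=True gives 128-D descriptors
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=300, nOctaves=4, extended=True)
# SIFT: a lower contrastThreshold / higher edgeThreshold admits more keypoints
sift = cv2.xfeatures2d.SIFT_create(contrastThreshold=0.02, edgeThreshold=20)
# ORB: more features and pyramid levels help with large scale differences
orb = cv2.ORB_create(nfeatures=5000, scaleFactor=1.2, nlevels=12)

kp1, des1 = surf.detectAndCompute(modelg, None)
kp2, des2 = surf.detectAndCompute(blur, None)

# FLANN-based matching (KD-tree index) with a slightly looser ratio test
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=64))
matches = flann.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.8 * n.distance]
print(len(good))

It is also worth trying the matching on the unblurred grayscale images: a 7x7 Gaussian blur removes much of the fine texture that SIFT/SURF keypoints rely on.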

Related

How do I compute similarity of two images using SIFT/evaluate SIFT results?

I want to compute similarities between two images using SIFT. I have managed to compute matches and visualize them, as seen in the image below.
I have one image of the Eiffel tower and another image of a heavily modified Eiffel tower. To me this match looks good but I don't know what metrics, equations or algorithms to use to compute the similarity or to evaluate the match.
I am using the following code to compute the matching.
import cv2
import matplotlib.pyplot as plt
# Read images
img1 = cv2.imread("eiffel_normal.jpeg")
img2 = cv2.imread("eiffel_rotated.jpeg")
#sift
sift = cv2.SIFT_create()
# Get keypoints and descriptors
keypoints_1, descriptors_1 = sift.detectAndCompute(img1, None)
keypoints_2, descriptors_2 = sift.detectAndCompute(img2, None)
#feature matching
bf = cv2.BFMatcher(cv2.NORM_L1, crossCheck=True)
matches = bf.match(descriptors_1,descriptors_2)
matches = sorted(matches, key=lambda x:x.distance)
# Visualize the results
img3 = cv2.drawMatches(img1, keypoints_1, img2, keypoints_2, matches[:30], img2, flags=2)
plt.imshow(img3)
plt.show()
I've tried:
def calculateScore(matches, key_1_len, key_2_len):
    return 100 * (matches / min(key_1_len, key_2_len))

similar_regions = [i for i in matches if i.distance < 50]
sift_score = calculateScore(len(matches), len(keypoints_1), len(keypoints_2))
sift_acc = len(similar_regions) / len(matches)
but both sift_score and sift_acc give bad results.
The evaluator must take into account scaling, rotation, and translation.
Any ideas?
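
One commonly used evaluation that is invariant to scaling, rotation, and translation is the fraction of ratio-test matches that survive a RANSAC homography fit, since the inliers are exactly the matches consistent with one global geometric transform. A minimal sketch, assuming SIFT plus cv2.findHomography is an acceptable proxy for similarity (the 0.75 ratio and the 5.0 reprojection threshold are illustrative):

import cv2
import numpy as np

def sift_similarity(img1, img2, ratio=0.75):
    """Rough similarity score in [0, 1] based on RANSAC-verified SIFT matches."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return 0.0
    # Lowe ratio test on 2-NN matches
    bf = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in bf.knnMatch(des1, des2, k=2) if m.distance < ratio * n.distance]
    if len(good) < 4:
        return 0.0
    # RANSAC keeps only matches consistent with one scale/rotation/translation
    # (and perspective), so the inlier count is robust to those transformations.
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if mask is None:
        return 0.0
    return int(mask.sum()) / min(len(kp1), len(kp2))

print(sift_similarity(img1, img2))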

Image stitching distorted warp with multiple images

I am working on a project which requires me to stitch images together. I decided to test this with buildings due to the large number of possible key points that can be calculated. I have been following several guides, but the one with the best results for 2-3 images has been this guide: https://towardsdatascience.com/image-stitching-using-opencv-817779c86a83. The way I decided to stitch multiple images is to stitch the first two, take the output, stitch that with the third image, and so on. I am confident in the matching of descriptors for the images. But as I stitch more and more images, the previously stitched part gets pushed further and further into the -z axis, meaning it gets distorted and smaller. The code I use to accomplish this is as follows:
import cv2
import numpy as np
import os
os.chdir('images')
# Note: cv2.imread takes an imread flag, not a colour conversion code,
# so cv2.IMREAD_GRAYSCALE is what was intended here.
img_ = cv2.imread('Output.jpg', cv2.IMREAD_GRAYSCALE)
img = cv2.imread('DJI_0019.jpg', cv2.IMREAD_GRAYSCALE)
#Setting up orb key point detector
orb = cv2.ORB_create()
#using orb to compute keypoints and descriptors
kp, des = orb.detectAndCompute(img_, None)
kp2, des2 = orb.detectAndCompute(img, None)
print(len(kp))
#Setting up BFmatcher
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=False)
matches = bf.knnMatch(des, des2, k=2) #Find 2 best matches for each descriptors (This is required for ratio test?)
#Using lowes ratio test as suggested in paper at .7-.8
good = []
for m in matches:
    if m[0].distance < .8 * m[1].distance:
        good.append(m)
matches = np.asarray(good) #matches is essentially a list of matching descriptors
#Aligning the images
if len(matches) >= 4:
    src = np.float32([kp[m.queryIdx].pt for m in matches[:, 0]]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches[:, 0]]).reshape(-1, 1, 2)
    # Creating the homography and mask
    H, masked = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print(H)
else:
    print("Could not find 4 good matches to find homography")

# Note: H is only defined if the branch above found enough matches
dst = cv2.warpPerspective(img_, H, (img.shape[1] + 900, img.shape[0]))
dst[0:img.shape[0], 0:img.shape[1]] = img
cv2.imwrite("Output.jpg", dst)
With the output of the 4th+ stitch looking like this:
As you can see, the images get transformed further and further in a strange way. My theory is that this happens because of the camera position and angle at which the images were taken, but I am not sure. If that is the case, are there optimal parameters that will produce the best images for stitching?
Is there a way to fix this issue so that the content is kept "flush" against the x axis?
Edit: Adding source images: https://imgur.com/zycPQuV
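
The distortion compounds because every new image is warped against the previous mosaic, so each homography error is baked into all later ones. One alternative worth trying (a sketch, not the original pipeline; all file names except DJI_0019.jpg are placeholders) is OpenCV's high-level Stitcher, which estimates the cameras jointly:

import cv2

# Placeholder file names; replace with the actual source frames.
paths = ['DJI_0018.jpg', 'DJI_0019.jpg', 'DJI_0020.jpg']
images = [cv2.imread(p) for p in paths]

# The Stitcher matches features across all images at once, refines the
# homographies with bundle adjustment, and straightens the result, which
# avoids the error that accumulates when chaining pairwise warps.
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("Output.jpg", pano)
else:
    print("Stitching failed with status %d" % status)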

Detect difference in x, y direction between 2 images using OpenCv and ORB detector

I am trying to detect whether there is any shift in the x or y direction between 2 images; one of the images is a reference image and the other is a live image coming from a camera.
The idea is to use the ORB detector to extract keypoints in the 2 images and then use BFMatcher to find good matches. After that, do further analysis by checking whether the good matches have matching keypoint coordinates in image1 and image2; if they match, we assume there is no shift. If there is an offset of, for example, 3px in the x direction across the whole set of good matches, then the image is shifted by 3px (maybe there is a better way of doing it?).
Up to now I am able to get keypoints between the 2 images, however I am not sure how to check the coordinates of those good matches in image1 and image2.
import cv2
import numpy as np
import matplotlib.pyplot as plt
import os.path
import helpers
referenceImage = None
liveImage = None
lowe_ration= 0.75
orb = cv2.ORB_create()
bfMatcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck = False)
cap = cv2.VideoCapture(1)
def compareUsingOrb():
    kp1, des1 = orb.detectAndCompute(liveImage, None)
    print("For Live Image it detecting %d keypoints" % len(kp1))
    matches = bfMatcher.knnMatch(des1, des2, k=2)
    goodMatches = []
    for m, n in matches:
        if m.distance < lowe_ration * n.distance:
            goodMatches.append([m])
    # Check good matches x, y coordinates
    img4 = cv2.drawKeypoints(referenceImage, kp2, outImage=np.array([]), color=(0, 0, 255))
    cv2.imshow("Keypoints in reference image", img4)
    img3 = cv2.drawMatchesKnn(liveImage, kp1, referenceImage, kp2, goodMatches, None, flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
    print("Found %d good matches" % (len(goodMatches)))
    cv2.imshow("Matches", img3)

if helpers.doesFileExist() == False:
    ret, frame = cap.read()
    cv2.imwrite('referenceImage.png', frame)
    referenceImage = cv2.imread('referenceImage.png')
    kp2, des2 = orb.detectAndCompute(referenceImage, None)
    print("For Reference Image it detecting %d keypoints" % len(kp2))
else:
    referenceImage = cv2.imread('referenceImage.png')
    kp2, des2 = orb.detectAndCompute(referenceImage, None)
    print("For Reference Image it detecting %d keypoints" % len(kp2))

while True and helpers.doesFileExist():
    ret, liveImage = cap.read()
    cv2.imshow("LiveImage", liveImage)
    compareUsingOrb()
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
The goal is detect if there is a shift between 2 images and if there is - then attempt to align images and do image comparison. Any tips how to achieve this using OpenCV would be appreciated.
Basically, you want to know How to get pixel coordinates from Feature Matching in OpenCV Python. Then you need some way to filter outliers. If the only difference between your images is a translation (shift) of the live image, this should be straightforward. But I'd suspect your live image might also be affected by rotation, or by a 3D transformation to some extent. If ORB finds enough features, finding the right transformation using OpenCV isn't hard.
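
As a minimal sketch of that idea (estimate_shift is a hypothetical helper, and it assumes goodMatches was built as in the question, i.e. each element is a one-element list [m]):

import cv2
import numpy as np

def estimate_shift(kp_live, kp_ref, goodMatches):
    # Pixel coordinates behind each good match: queryIdx indexes the live-image
    # keypoints, trainIdx the reference keypoints (matching the knnMatch order above).
    pts_live = np.float32([kp_live[m[0].queryIdx].pt for m in goodMatches])
    pts_ref = np.float32([kp_ref[m[0].trainIdx].pt for m in goodMatches])

    # Median displacement is a cheap outlier filter if the motion is a pure translation.
    dx, dy = np.median(pts_ref - pts_live, axis=0)

    # More robust when rotation/scale may also be present: RANSAC-fitted partial
    # affine transform; the last column of M is the translation.
    M, inliers = cv2.estimateAffinePartial2D(pts_live.reshape(-1, 1, 2),
                                             pts_ref.reshape(-1, 1, 2),
                                             method=cv2.RANSAC)
    return (dx, dy), M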

Set line width for default drawing functions in Python/OpenCV

I want to draw SIFT feature matches across two different pictures, but when the image resolution is too high the lines become barely visible (of course this is usually desired and expected behavior, and one can always zoom, but...). Is there a way to change the default line width (or pass it as a flag), or is the only way to implement a custom drawing function?
Sample code for drawing:
img1 = np.array(Image.open("im1.jpg"))
img2 = np.array(Image.open("im2.jpg"))
sift = cv2.xfeatures2d.SIFT_create()
kp1, desc1 = sift.detectAndCompute(img1, None)
kp2, desc2 = sift.detectAndCompute(img2, None)
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
# Match descriptors.
matches = bf.match(desc1,desc2)
# Sort them in the order of their distance.
matches = sorted(matches, key = lambda x: x.distance)
out = np.concatenate([img1, img2], axis=1)
# Draw first 20 matches; drawMatches returns the rendered image, so assign it.
out = cv2.drawMatches(img1, kp1, img2, kp2, matches[:20], outImg=out, flags=2)
plt.figure(figsize=(18,14))
plt.imshow(out)
plt.show()
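
If no flag exists for the line width, a small custom drawing function is straightforward. A sketch, assuming img1 and img2 are 3-channel images and reusing kp1, kp2 and matches from the snippet above (draw_matches_thick is a hypothetical helper, not an OpenCV function):

import cv2
import numpy as np

def draw_matches_thick(img1, kp1, img2, kp2, matches, thickness=4):
    # Side-by-side canvas, then one cv2.line per match with an explicit thickness.
    h = max(img1.shape[0], img2.shape[0])
    out = np.zeros((h, img1.shape[1] + img2.shape[1], 3), dtype=np.uint8)
    out[:img1.shape[0], :img1.shape[1]] = img1
    out[:img2.shape[0], img1.shape[1]:] = img2
    for m in matches:
        x1, y1 = map(int, kp1[m.queryIdx].pt)
        x2, y2 = map(int, kp2[m.trainIdx].pt)
        color = tuple(int(c) for c in np.random.randint(0, 255, 3))
        cv2.line(out, (x1, y1), (x2 + img1.shape[1], y2), color, thickness)
    return out

out = draw_matches_thick(img1, kp1, img2, kp2, matches[:20], thickness=4)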

Adding global features to SIFT features to find image similarity

I'm currently using SIFT features to find a measure of similarity between images, but I want to add more features so that the similarity measure can be improved. Right now, for a similar image the value of len(good) is around 500, and for an image of a board game versus a dog the value is around 275. What are some other features that I could look into, maybe global features? And how do I combine them with SIFT?
def feature_matching():
    img1 = cv2.imread('img1.jpeg', 0)  # queryImage
    img2 = cv2.imread('img2.jpeg', 0)  # trainImage
    # Initiate SIFT detector (cv2.SIFT_create() in newer OpenCV versions)
    sift = cv2.SIFT()
    # find the keypoints and descriptors with SIFT
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    # BFMatcher with default params
    bf = cv2.BFMatcher()
    matches = bf.knnMatch(des1, des2, k=2)
    # Apply ratio test
    good = []
    for m, n in matches:
        if m.distance < 0.75 * n.distance:
            good.append(m)
    print(len(good))
    #gray1 = cv2.cvtColor(img1,cv2.COLOR_BGR2GRAY)
    #gray2 = cv2.cvtColor(img2,cv2.COLOR_BGR2GRAY)
    # drawMatches is a custom helper here; cv2.drawMatchesKnn expects a list of lists as matches.
    img3 = drawMatches(img1, kp1, img2, kp2, good)
You could also try cross-check matching, as explained here in C++. In Python, you just need to change the following line:
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
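Note that with crossCheck=True the usual pattern is bf.match() rather than knnMatch(..., k=2): cross checking already discards ambiguous matches, and some descriptors may have fewer than two surviving candidates, which breaks the ratio-test loop. For example:

bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = bf.match(des1, des2)  # mutual nearest neighbours only, no ratio test needed
matches = sorted(matches, key=lambda m: m.distance)
print(len(matches))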
