Adding global features to SIFT features to find image similarity - python

I'm currently using SIFT features to compute a measure of similarity between images, and I want to add more features so that the similarity measure improves. Right now, for two similar images len(good) is around 500, while for two unrelated images (a board game and a dog) it is around 275. What other features could I look into, maybe global features? And how do I combine them with SIFT?
import cv2

def feature_matching():
    img1 = cv2.imread('img1.jpeg', 0)  # queryImage, read as grayscale
    img2 = cv2.imread('img2.jpeg', 0)  # trainImage, read as grayscale
    # Initiate SIFT detector (cv2.SIFT() in OpenCV 2.x; SIFT_create() in modern versions)
    sift = cv2.SIFT_create()
    # find the keypoints and descriptors with SIFT
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    # BFMatcher with default params
    bf = cv2.BFMatcher()
    matches = bf.knnMatch(des1, des2, k=2)
    # Apply Lowe's ratio test
    good = []
    for m, n in matches:
        if m.distance < 0.75 * n.distance:
            good.append(m)
    print(len(good))
    # cv2.drawMatches expects a flat list of DMatch objects
    img3 = cv2.drawMatches(img1, kp1, img2, kp2, good, None, flags=2)
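One straightforward global feature to combine with SIFT is a color histogram: it summarizes the image as a whole, which complements the purely local SIFT matches. A minimal sketch of combining the two (reusing the good list from the ratio test above; the HSV bin counts, the 500-match cap and the 50/50 weighting are illustrative assumptions to tune, not known-good values):

import cv2

# Global feature: HSV color histograms compared with correlation
# (result in [-1, 1], where 1 means identical distributions)
img1c = cv2.imread('img1.jpeg')  # color versions of the same images
img2c = cv2.imread('img2.jpeg')
hist1 = cv2.calcHist([cv2.cvtColor(img1c, cv2.COLOR_BGR2HSV)], [0, 1], None,
                     [50, 60], [0, 180, 0, 256])
hist2 = cv2.calcHist([cv2.cvtColor(img2c, cv2.COLOR_BGR2HSV)], [0, 1], None,
                     [50, 60], [0, 180, 0, 256])
cv2.normalize(hist1, hist1)
cv2.normalize(hist2, hist2)
hist_score = cv2.compareHist(hist1, hist2, cv2.HISTCMP_CORREL)

# Local feature: squash the good-match count into [0, 1] with an assumed
# cap of 500 matches (roughly what similar images produced here)
sift_score = min(len(good), 500) / 500.0

# Combined similarity: a simple weighted sum of the two signals
similarity = 0.5 * hist_score + 0.5 * sift_score
print(similarity)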

You could also try cross-check matching, as explained here for C++. In Python, you only need to change the following line:
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
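Note that cross-checking replaces the ratio test rather than stacking on top of it: with crossCheck=True, knnMatch with k=2 can return fewer than two candidates per descriptor, which breaks the for m, n in matches unpacking. The usual pattern is bf.match() instead; a sketch, reusing des1 and des2 from above:

import cv2

# crossCheck=True keeps a match only when the two descriptors are mutual
# nearest neighbours, so no ratio test is applied afterwards
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = bf.match(des1, des2)  # flat list of DMatch
matches = sorted(matches, key=lambda m: m.distance)
print(len(matches))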

Related

How to find orientation of a particular SIFT feature/description in OpenCV?

So I have a template and an image. I want to find the location and orientation of the template inside the image. I am using SIFT to find features and descriptors.
The problem is that only one feature is consistently matched correctly. Homography requires at least 4 matches to work: error: (-28:Unknown error code -28) The input arrays should have at least 4 corresponding point sets to calculate Homography in function 'cv::findHomography'
Since I am working with a 2D image (at the same scale), the position and rotation of even one correct feature should be enough to give the location and rotation of the template in the image.
From the OpenCV docs (https://docs.opencv.org/3.4/da/df5/tutorial_py_sift_intro.html):
"OpenCV also provides cv.drawKeyPoints() function which draws the small circles on the locations of keypoints. If you pass a flag, cv.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS to it, it will draw a circle with size of keypoint and it will even show its orientation."
However, the image I am working with is too low-resolution to actually see the circles, and I need numbers that can be compared.
All the other examples of finding orientation that I found online use edge detection, but there is no straight edge in my template whose slope could easily be calculated.
This solution could help, but my images may contain other unwanted objects that would throw off minAreaRect. If there is another solution, please let me know.
I have looked for tutorials, books, and documentation on how to crunch the numbers in 'keypoints' and 'descriptors', but could not find any.
Perhaps I should use SURF (which is faster for 2D, same-color images), but it is not available in the latest OpenCV version.
(Images: the template to be searched, the image to be searched in, and the matched result.)
import cv2 as cv
import numpy as np
from matplotlib import pyplot as plt

# img1 (the template) and img2 (the scene) are assumed to be loaded
# as grayscale images, e.g. with cv.imread(path, 0)

sift = cv.SIFT_create()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
# BFMatcher with default params
bf = cv.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)
# Apply the ratio test (0.5 is stricter than Lowe's suggested 0.75)
good = []        # list of lists, for drawMatchesKnn
good_match = []  # flat list, for findHomography/drawMatches
for m, n in matches:
    if m.distance < 0.5 * n.distance:
        good.append([m])
        good_match.append(m)
print('good matches are')
print(good_match)
# cv.drawMatchesKnn expects a list of lists as matches.
img3 = cv.drawMatchesKnn(img1, kp1, img2, kp2, good, None,
                         flags=cv.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
plt.imshow(img3), plt.show()
src_pts = np.float32([kp1[m.queryIdx].pt for m in good_match]).reshape(-1, 1, 2)
dst_pts = np.float32([kp2[m.trainIdx].pt for m in good_match]).reshape(-1, 1, 2)
M, mask = cv.findHomography(src_pts, dst_pts, cv.RANSAC, 5.0)
matchesMask = mask.ravel().tolist()
h, w = img1.shape
pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
dst = cv.perspectiveTransform(pts, M)
img2 = cv.polylines(img2, [np.int32(dst)], True, 255, 3, cv.LINE_AA)
draw_params = dict(matchColor=(0, 255, 0),   # draw matches in green color
                   singlePointColor=None,
                   matchesMask=matchesMask,  # draw only inliers
                   flags=2)
# cv.drawMatches expects a flat list of DMatch, so pass good_match, not good
img3 = cv.drawMatches(img1, kp1, img2, kp2, good_match, None, **draw_params)
plt.imshow(img3, 'gray'), plt.show()
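As for the orientation itself: each cv.KeyPoint already carries the numbers asked about, in its pt (position), size (scale) and angle (orientation in degrees) attributes. A minimal sketch of estimating the template's rotation from even a single surviving match, under the question's own equal-scale assumption (reusing kp1, kp2 and good_match from above):

# Each keypoint exposes .pt (x, y), .size and .angle (dominant gradient
# orientation in degrees), so one correct match gives a rotation estimate
for m in good_match:
    k1 = kp1[m.queryIdx]  # keypoint in the template
    k2 = kp2[m.trainIdx]  # matched keypoint in the scene
    rotation = (k2.angle - k1.angle) % 360  # scene rotation w.r.t. template
    print('template', k1.pt, '-> scene', k2.pt, 'rotation approx.', rotation)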

How do I compute similarity of two images using SIFT/evaluate SIFT results?

I want to compute the similarity between two images using SIFT. I have managed to compute matches and visualize them, as seen in the image below.
I have one image of the Eiffel Tower and another image of a heavily modified Eiffel Tower. To me this match looks good, but I don't know what metrics, equations or algorithms to use to compute the similarity or to evaluate the match.
I am using the following code to compute the matching.
import cv2
from matplotlib import pyplot as plt

# Read images
img1 = cv2.imread("eiffel_normal.jpeg")
img2 = cv2.imread("eiffel_rotated.jpeg")

# SIFT
sift = cv2.SIFT_create()

# Get keypoints and descriptors
keypoints_1, descriptors_1 = sift.detectAndCompute(img1, None)
keypoints_2, descriptors_2 = sift.detectAndCompute(img2, None)

# Feature matching
bf = cv2.BFMatcher(cv2.NORM_L1, crossCheck=True)
matches = bf.match(descriptors_1, descriptors_2)
matches = sorted(matches, key=lambda x: x.distance)

# Visualize the results
img3 = cv2.drawMatches(img1, keypoints_1, img2, keypoints_2, matches[:30],
                       None, flags=2)
plt.imshow(img3)
plt.show()
I've tried:

def calculateScore(matches, key_1_len, key_2_len):
    return 100 * (matches / min(key_1_len, key_2_len))

similar_regions = [i for i in matches if i.distance < 50]
sift_score = calculateScore(len(matches), len(keypoints_1), len(keypoints_2))
sift_acc = len(similar_regions) / len(matches)

but both sift_score and sift_acc give bad results.
The evaluator must take scaling, rotation and translation into account.
Any ideas?
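One option (a sketch, not a definitive metric): RANSAC homography estimation already tolerates scaling, rotation and translation, so the fraction of matches that survive as RANSAC inliers makes a reasonable similarity score. Reusing keypoints_1, keypoints_2 and matches from above (findHomography needs at least 4 matches):

import cv2
import numpy as np

src_pts = np.float32([keypoints_1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst_pts = np.float32([keypoints_2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# mask flags the matches consistent with one global geometric transform
M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)

# Similarity in [0, 1]: fraction of all matches that are geometric inliers
inlier_ratio = float(mask.sum()) / len(mask) if mask is not None else 0.0
print(inlier_ratio)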

How to improve/modify parameters of Feature Matching - SIFT SURF - OpenCV

I am trying to use feature matching to determine if two images are similar. I have tried SIFT, SURF, and ORB with similar results from each. I am first trying to display the matches between these two images (Image1, Image2). I have tried both brute-force matching and kNN matching, but with both implementations I get around 10 matches with little to no accuracy. Are the two images too different in scale, transformation, and perspective to generate accurate matches? What parameters could I modify to improve performance? The matching often produces a lot of matches, but only a few pass the ratio test.
import cv2
import matplotlib.pyplot as plt
import numpy as np

model = cv2.imread("path")
modelg = cv2.cvtColor(model, cv2.COLOR_BGR2GRAY)
modelg = cv2.GaussianBlur(modelg, (7, 7), 0)
height, width = modelg.shape[:2]
print(str(height) + " " + str(width))

frame = cv2.imread("path")
plt.imshow(frame)
plt.show()

frame = cv2.resize(frame, (width, height))
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(frame, (7, 7), 0)
plt.imshow(blur, cmap='gray')
plt.show()
plt.imshow(modelg, cmap='gray')
plt.show()

# Initiate the detectors (SURF requires opencv-contrib; ORB is unused below)
surf = cv2.xfeatures2d.SURF_create()
orb = cv2.ORB_create()

# find the keypoints and descriptors with SURF
kp1, des1 = surf.detectAndCompute(modelg, None)
kp2, des2 = surf.detectAndCompute(blur, None)

# BFMatcher with default params
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)
print(len(matches))

# Apply ratio test
good = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        good.append([m])
print(len(good))

# cv2.drawMatchesKnn expects a list of lists as matches.
img3 = cv2.drawMatchesKnn(modelg, kp1, blur, kp2, good, None,
                          flags=cv2.DRAW_MATCHES_FLAGS_NOT_DRAW_SINGLE_POINTS)
plt.imshow(img3), plt.show()
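A few knobs that are often worth adjusting (a sketch; the values shown are assumptions to experiment with, not known-good settings for these particular images):

import cv2

# SIFT exposes its detection thresholds directly: lowering contrastThreshold
# and raising edgeThreshold both yield more (but noisier) keypoints
sift = cv2.SIFT_create(nfeatures=0,             # 0 = keep all keypoints
                       contrastThreshold=0.02,  # default 0.04; lower -> more
                       edgeThreshold=20)        # default 10; higher -> more

kp1, des1 = sift.detectAndCompute(modelg, None)
kp2, des2 = sift.detectAndCompute(blur, None)

bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)

# A looser ratio (0.8 instead of 0.75) keeps more matches at the cost of
# more false positives; tune it against your own data
good = [[m] for m, n in matches if m.distance < 0.8 * n.distance]
print(len(good))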

Extracting SIFT features of image dataset to be matched

I have an image dataset and want to extract its features so they can be compared against a query image, keeping the best matches within a threshold. I am able to extract features and select the best matches between two corresponding images with the following code:
img1 = cv2.imread(r"path\of\training\image")  # raw string: backslashes are not escapes
img2 = cv2.imread(r"path\of\query\image")

# Initiate SIFT detector (cv2.xfeatures2d.SIFT_create() in older opencv-contrib builds)
sift = cv2.SIFT_create()

# find the key-points and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN parameters
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=100)  # or pass an empty dictionary
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)

# Need to draw only good matches, so create a mask
matchesMask = [[0, 0] for i in range(len(matches))]

# ratio test as per Lowe's paper
for i, (m, n) in enumerate(matches):
    if m.distance < 0.8 * n.distance:
        matchesMask[i] = [1, 0]

draw_params = dict(matchColor=(0, 255, 0),
                   singlePointColor=(255, 0, 0),
                   matchesMask=matchesMask,
                   flags=0)
img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, matches, None, **draw_params)
plt.imshow(img3), plt.show()
I want to compare the query image's features with the features of all images in the dataset, selecting the best matches in order to recognize a specific object. How can I combine all the dataset features and compare them with the query image's features? Any help is appreciated, thanks.
The first and most naive idea to solve your problem: your query image has features represented by vectors, so you find the nearest neighbours to those vectors in your feature dataset; the result should be the image with the most features near the query's features.
Basically, you have to calculate the distances between your vectors, and FAISS gives you efficient ways to do that.
I found this site that may help you: https://waltyou.github.io/Faiss-In-Project-English/. The author faced the same situation you did, and used the above approach to get through it:
"To solve this problem, you can assign multiple ids to multiple vectors of an image when building a Faiss index. In this way, after searching with multiple vectors of a picture, in the returned result, only the number of times the associated id appears can be counted, and the similarity level can be obtained."

Set line width for default drawing functions in Python/OpenCV

I want to draw SIFT feature matches across two different pictures, but when the image resolution is high the lines become barely visible (of course this is usually desired behavior, and one can always zoom, but...). Is there a way to change the default line width (or pass it as a flag), or is the only option to implement a custom drawing function?
Sample code for drawing:
import cv2
import numpy as np
from PIL import Image
from matplotlib import pyplot as plt

img1 = np.array(Image.open("im1.jpg"))
img2 = np.array(Image.open("im2.jpg"))
sift = cv2.SIFT_create()
kp1, desc1 = sift.detectAndCompute(img1, None)
kp2, desc2 = sift.detectAndCompute(img2, None)
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)

# Match descriptors.
matches = bf.match(desc1, desc2)

# Sort them in the order of their distance.
matches = sorted(matches, key=lambda x: x.distance)

out = np.concatenate([img1, img2], axis=1)

# Draw first 20 matches.
cv2.drawMatches(img1, kp1, img2, kp2, matches[:20], flags=2, outImg=out)
plt.figure(figsize=(18, 14))
plt.imshow(out)
plt.show()
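drawMatches has no thickness parameter, so the usual workaround is a small custom drawing loop. A minimal sketch, reusing img1, img2, kp1, kp2 and matches from above (and assuming the two images have the same height, as in the concatenation above):

import cv2
import numpy as np

# Rebuild the side-by-side canvas and draw each match line by hand, so the
# line thickness can be chosen freely
out = np.concatenate([img1, img2], axis=1)
offset = img1.shape[1]  # x-offset of the second image on the canvas
for m in matches[:20]:
    x1, y1 = kp1[m.queryIdx].pt  # keypoint in the left image
    x2, y2 = kp2[m.trainIdx].pt  # keypoint in the right image
    p1 = (int(round(x1)), int(round(y1)))
    p2 = (int(round(x2)) + offset, int(round(y2)))
    cv2.line(out, p1, p2, color=(0, 255, 0), thickness=4)  # thickness in px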
