Related
I understand that this is a popular question on Stack Overflow; however, I have not managed to find the best solution yet.
Background
I am trying to classify an image. I currently have 10,000 unique images that a given image can match with. For each image in my database, I only have a single image for training. So I have a DB of 10,000, and the possible output classes are also 10,000. E.g., let's say there are 10,000 unique objects and I have a single image for each.
The goal is to match an input image to the 'best' matching image in the DB.
I am currently using Python with OpenCV's SIFT implementation to identify keypoints/descriptors, then applying the standard matching methods to see which image in the DB the input image best matches.
Code
I am using the following code to iterate over my database of images, find all the keypoints/descriptors, and save those descriptors to a file. This is to save time later on.
import cv2
from numpy import savetxt
from tqdm import tqdm

for i in tqdm(range(labels.shape[0])):  # Use the length of the DB
    # Read img from DB
    img_path = 'data/' + labels['Image_Name'][i]
    img = cv2.imread(img_path)
    # Resize to ensure all images are equal for ROI
    dim = (734, 1024)
    img = cv2.resize(img, dim, interpolation=cv2.INTER_AREA)
    # Grayscale
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # ROI
    img = img[150:630, 20:700]
    # SIFT
    sift = cv2.xfeatures2d.SIFT_create()
    keypoints_1, descriptors_1 = sift.detectAndCompute(img, None)
    # Save descriptors
    path = 'data/' + labels['Image_Name'][i].replace(".jpeg", "_descriptors.csv")
    savetxt(path, descriptors_1, delimiter=',')
Then, when I am ready to classify an image, I can read in all of the descriptors. This has proven to be 30% quicker.
from numpy import loadtxt

# Array to store all of the descriptors from SIFT
descriptors = []
for i in tqdm(range(labels.shape[0])):  # Use the length of the DB
    # Read in the descriptor file
    path = 'data/' + labels['Image_Name'][i].replace(".jpeg", "_descriptors.csv")
    descriptor = loadtxt(path, delimiter=',')
    # Add to array
    descriptors.append(descriptor)
Finally, I just need to read in an image, apply the SIFT method, and then find the best match.
import numpy as np
import pandas as pd

# Calculate similarity
img = cv2.imread(PATH)
# Resize
dim = (734, 1024)
img = cv2.resize(img, dim, interpolation=cv2.INTER_AREA)
# Grayscale
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# ROI
img = img[150:630, 20:700]
# SIFT
sift = cv2.xfeatures2d.SIFT_create()
keypoints_1, descriptors_1 = sift.detectAndCompute(img, None)
# Use FLANN (faster)
index_params = dict(algorithm=0, trees=5)
search_params = dict()
flann = cv2.FlannBasedMatcher(index_params, search_params)
# Store results
scoresdf = pd.DataFrame(columns=["index", "score"])
# Find best matches in DB
for i in tqdm(range(labels.shape[0])):
    # Path of the saved descriptor file (descriptors are already loaded in memory)
    path = 'data/' + labels['Image_Name'][i].replace(".jpeg", "_descriptors.csv")
    # Get descriptors for both images to compare
    descriptors_2 = descriptors[i]
    descriptors_2 = np.float32(descriptors_2)
    # Find matches
    matches = flann.knnMatch(descriptors_1, descriptors_2, k=2)
    # Use the lower keypoint count of the two images
    number_keypoints = 0
    if len(descriptors_1) <= len(descriptors_2):
        number_keypoints = len(descriptors_1)
    else:
        number_keypoints = len(descriptors_2)
    # Find 'good' matches using Lowe's ratio test
    good_points = []
    ratio = 0.6
    for m, n in matches:
        if m.distance < ratio * n.distance:
            good_points.append(m)
    # Get similarity score
    score = len(good_points) / number_keypoints * 100
    scoresdf.loc[len(scoresdf)] = [i, score]
This all works, but it takes some time, and I would like to find a match much more quickly.
Solutions?
I have read about the bag of words (BOW) method. However, I do not know if this will work given there are 10,000 classes. Would I need to set K=10000?
Given that each descriptor is an array, is there a way to reduce my search space? Can I find the X closest arrays (descriptors) to the descriptor of my input image?
Any help would be greatly appreciated :)
Edit
Can you use a Bag of Words (BOW) method to create X clusters, then, when I read in a new image, find out which cluster it belongs to, and then use SIFT matching on the images in that cluster to find the exact match? I am struggling to find many code examples for this.
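Something like this is roughly what I have in mind (untested sketch; the cluster count is a guess, and I am assuming the descriptors list and the sift object from the code above):

import numpy as np

# Build a visual vocabulary by k-means clustering all DB descriptors.
# N_CLUSTERS does not have to equal the number of classes (10,000);
# a few hundred to a few thousand visual words is more typical.
N_CLUSTERS = 500
bow_trainer = cv2.BOWKMeansTrainer(N_CLUSTERS)
for des in descriptors:                      # the list of per-image SIFT descriptors
    bow_trainer.add(np.float32(des))
vocabulary = bow_trainer.cluster()

# Turn any image into a single N_CLUSTERS-dimensional histogram.
bow_extractor = cv2.BOWImgDescriptorExtractor(
    sift, cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict()))
bow_extractor.setVocabulary(vocabulary)

def bow_vector(gray_img):
    keypoints = sift.detect(gray_img, None)
    return bow_extractor.compute(gray_img, keypoints)  # shape (1, N_CLUSTERS)

# At query time: compare the query histogram against the precomputed DB
# histograms (e.g. np.linalg.norm or a KD-tree) to shortlist candidate images,
# then run the full SIFT + ratio-test matching only on that shortlist.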
Original image 1
Original image 2
I am trying to match two microscopy images (please see the attached file). However, the matches are horrible and the homography matrix produces an unacceptable result. Is there a way to improve this registration?
import cv2 # Imports the Open CV2 module for image manipulation.
import numpy as np # Imports the numpy module for numerical manipulation.
from tkinter import Tk # Imports tkinter for the creation of a graphic user interface.
from tkinter.filedialog import askopenfilename # Imports the filedialog window from tkinter
Tk().withdraw()
filename1 = askopenfilename(title='Select the skewed file')
Tk().withdraw()
filename2 = askopenfilename(title='Select the original file')
img1 = cv2.imread(filename1)
img2 = cv2.imread(filename2)
img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
orb = cv2.ORB_create(nfeatures=10000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matcher = cv2.DescriptorMatcher_create(cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING)
matches = matcher.match(des1, des2, None)
matches = sorted(matches, key = lambda x:x.distance)
points1 = np.zeros((len(matches), 2), dtype=np.float32)
points2 = np.zeros((len(matches), 2), dtype=np.float32)
for i, match in enumerate(matches):
    points1[i, :] = kp1[match.queryIdx].pt
    points2[i, :] = kp2[match.trainIdx].pt
h, mask = cv2.findHomography(points1, points2, cv2.RANSAC)
height, width = img2.shape
im1Reg = cv2.warpPerspective(img1, h, (width, height))
img3 = cv2.drawKeypoints(img1, kp1, None, flags=0)
img4 = cv2.drawKeypoints(img2, kp2, None, flags=0)
img5 = cv2.drawMatches(img1, kp1, img2, kp2, matches[:10], None)
img = np.dstack((im1Reg, img2, im1Reg))
cv2.imshow("Shifted", img3)
cv2.imshow("Original", img4)
cv2.imshow("Matches", img5)
cv2.imshow("Registered", im1Reg)
cv2.imshow("Merged", img)
cv2.waitKey(0)
Image showing the matches I get
(I may be wrong, since I haven't dealt with microscopy image processing and there are probably well-established ways to solve the typical problems in that area; you should investigate it if this is not a toy project.)
In my opinion you should try a different approach to your problem instead of any kind of point-feature-based image descriptor (ORB, SURF, etc.).
First of all, not all of them provide the subpixel accuracy you may need when processing microscopy images. But the main reason is the math behind these descriptors; refer to any CV book or paper.
Here is the link to the ORB descriptor paper. Notice the images the authors use for matching detected points. Good points lie on edges and corners of the image, so the method can be used to match objects with sharp, distinctive shapes.
Well-known example:
The matched points are on letters (unique shapes) and on the textured drawing.
Try to detect a plain green textbook (without any letters or anything else on its cover) with this tool and you will fail.
So I think your images are not ones that can be processed this way (the objects are not sharp in shape, not textured, and very close to each other). It would be hard even for the human eye to match similar circles (in a less obvious case, e.g. if one view were shifted left/right a little).
But what I notice at a glance is that your images are well described by the circles in them. Hough-based circle detection is much easier (both to understand and to compute) and, more importantly, it can give almost 100% accuracy on such images. You can easily work with circle count, size, position, etc.
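For reference, a minimal cv2.HoughCircles sketch (the parameters are rough guesses that will need tuning for your resolution and circle sizes, and the filename is a placeholder):

import cv2
import numpy as np

gray = cv2.imread('microscopy.png', cv2.IMREAD_GRAYSCALE)   # placeholder filename
gray = cv2.medianBlur(gray, 5)                              # reduce noise before the Hough transform

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=30, minRadius=5, maxRadius=60)

if circles is not None:
    for x, y, r in np.around(circles[0]).astype(int):
        print("circle at", (x, y), "radius", r)             # centre and radius of each circle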
That said, microscopy CV is a separate area with its own common tools, and there may be many pros and cons to using Hough or something else. But at first glance it seems a much more accurate choice than point-feature description.
I've been trying to match a scanned form with its empty template. The goal is to rotate and scale it to match the template.
Source (left), template (right)
Match (left), Homography warp (right)
The template does not contain any very specific logo, fixation cross, or rectangular frame that would conveniently help me with feature or pattern matching. Even worse, the scanned form can be skewed, altered, and contains handwritten signatures and stamps.
My approach, after unsuccessfully testing ORB feature matching, was to concentrate on the shape of the form (lines and columns).
The pictures I provide here were obtained by reconstructing the lines after line segment detection (LSD) with a certain minimum length. Most of what remains for both source and template is the document layout itself.
In the following script (which should work out of the box along with the pictures), I attempt ORB feature matching, but fail to make it work because it concentrates on edges and not on the document layout.
import cv2 # using opencv-python v3.4
import numpy as np
from imutils import resize
# aligning images using ORB descriptors, then homography warp
def align_images(im1, im2, MAX_MATCHES=5000, GOOD_MATCH_PERCENT=0.15):
    # Detect ORB features and compute descriptors.
    orb = cv2.ORB_create(MAX_MATCHES)
    keypoints1, descriptors1 = orb.detectAndCompute(im1, None)
    keypoints2, descriptors2 = orb.detectAndCompute(im2, None)
    # Match features.
    matcher = cv2.DescriptorMatcher_create(cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING)
    matches = matcher.match(descriptors1, descriptors2, None)
    # Sort matches by score
    matches.sort(key=lambda x: x.distance, reverse=False)
    # Remove not-so-good matches
    numGoodMatches = int(len(matches) * GOOD_MATCH_PERCENT)
    matches = matches[:numGoodMatches]
    # Draw top matches
    imMatches = cv2.drawMatches(im1, keypoints1, im2, keypoints2, matches, None)
    # Extract location of good matches
    points1 = np.zeros((len(matches), 2), dtype=np.float32)
    points2 = np.zeros((len(matches), 2), dtype=np.float32)
    for i, match in enumerate(matches):
        points1[i, :] = keypoints1[match.queryIdx].pt
        points2[i, :] = keypoints2[match.trainIdx].pt
    # Find homography
    h, mask = cv2.findHomography(points1, points2, cv2.RANSAC)
    # Use homography
    if len(im2.shape) == 2:
        height, width = im2.shape
    else:
        height, width, channels = im2.shape
    im1Reg = cv2.warpPerspective(im1, h, (width, height))
    return im1Reg, h, imMatches
template_fn = './stack/template.jpg'
image_fn = './stack/image.jpg'
im = cv2.imread(image_fn, cv2.IMREAD_GRAYSCALE)
template = cv2.imread(template_fn, cv2.IMREAD_GRAYSCALE)
# align images
imReg, h, matches = align_images(template,im)
# display output
cv2.imshow('im',im)
cv2.imshow('template',template)
cv2.imshow('matches',matches)
cv2.imshow('result',imReg)
cv2.waitKey(0)
cv2.destroyAllWindows()
Is there any way to make the pattern matching algorithm work on the image on the left (source)? (Another idea was to keep only the line intersections.)
Alternatively, I have been trying to do scale- and rotation-invariant pattern matching by looping over scales and rotations while keeping the maximum correlation, but it is far too resource-consuming and not very reliable.
I'm therefore looking for hints in the right direction using OpenCV.
SOLUTION
The issue was about reducing the image to what really matters: the layout.
Also, ORB was not appropriate since it is not as robust (rotation- and scale-invariant) as SIFT and AKAZE are.
I proceeded as follows (a rough code sketch of the pipeline is given after the list):
convert the images to black and white
use line segment detection and filter lines shorter than 1/60th of the width
reconstruct the image from segments (line width does not have a big impact)
(optional: resize the pictures to speed up the rest)
apply a Gaussian blur to the line reconstruction, with a kernel of about 1/25th of the width
detect and match features using the SIFT (patented) or AKAZE (free) algorithm
find a homography and warp the source picture to match the template
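A rough, untested sketch of the pipeline above (note that cv2.createLineSegmentDetector is missing from some OpenCV builds for licensing reasons, and the size ratios are the ones mentioned in the list; filenames are placeholders):

import cv2
import numpy as np

def layout_image(gray):
    """Keep only the long line segments, i.e. the document layout."""
    h, w = gray.shape
    lsd = cv2.createLineSegmentDetector(0)            # not available in every OpenCV build
    lines = lsd.detect(gray)[0]                        # (N, 1, 4) array of x1, y1, x2, y2
    canvas = np.zeros_like(gray)
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        if np.hypot(x2 - x1, y2 - y1) > w / 60:        # drop segments shorter than 1/60 of the width
            cv2.line(canvas, (int(x1), int(y1)), (int(x2), int(y2)), 255, 3)
    k = int(w / 25) | 1                                # odd Gaussian kernel, roughly 1/25 of the width
    return cv2.GaussianBlur(canvas, (k, k), 0)

src = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)
tpl = cv2.imread('template.jpg', cv2.IMREAD_GRAYSCALE)
src_layout, tpl_layout = layout_image(src), layout_image(tpl)

# Match the layout images with AKAZE (binary descriptors -> Hamming norm)
akaze = cv2.AKAZE_create()
kp1, des1 = akaze.detectAndCompute(src_layout, None)
kp2, des2 = akaze.detectAndCompute(tpl_layout, None)
matches = sorted(cv2.BFMatcher(cv2.NORM_HAMMING).match(des1, des2),
                 key=lambda m: m.distance)[:200]
p1 = np.float32([kp1[m.queryIdx].pt for m in matches])
p2 = np.float32([kp2[m.trainIdx].pt for m in matches])
H, _ = cv2.findHomography(p1, p2, cv2.RANSAC)

# Warp the original scan (not the layout image) onto the template
warped = cv2.warpPerspective(src, H, (tpl.shape[1], tpl.shape[0]))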
Matches for AKAZE
Matches for SIFT
I noted:
the layout of the template has to match, otherwise it will only stick to what it recognizes
line detection works better at higher resolution; downsizing afterwards is possible since a Gaussian blur is applied
SIFT produces more features and seems more reliable than AKAZE
I am analyzing multiple images and need to be able to tell if they are shifted compared to a reference image. The purpose is to tell if the camera moved at all in between capturing images. I would ideally like to be able to correct the shift in order to still do the analysis, but at a minimum I need to be able to determine if an image is shifted and discard it if it's beyond a certain threshold.
Here are some examples of the shifts in an image I would like to detect:
I will use the first image as a reference and then compare all of the following images to it to figure out if they are shifted. The images are gray-scale (they are just displayed in color using a heat-map) and are stored in a 2-D numpy array. Any ideas how I can do this? I would prefer to use the packages I already have installed (scipy, numpy, PIL, matplotlib).
As Lukas Graf hints, you are looking for cross-correlation. It works well, if:
The scale of your images does not change considerably.
There is no rotation change in the images.
There is no significant illumination change in the images.
For plain translations cross-correlation is very good.
The simplest cross-correlation tool is scipy.signal.correlate. However, it uses the trivial method for cross-correlation, which is O(n^4) for a two-dimensional image with side length n. In practice, with your images it'll take very long.
The better tool is scipy.signal.fftconvolve, as convolution and correlation are closely related.
Something like this:
import numpy as np
import scipy.signal
def cross_image(im1, im2):
    # get rid of the color channels by performing a grayscale transform
    # the type cast into 'float' is to avoid overflows
    im1_gray = np.sum(im1.astype('float'), axis=2)
    im2_gray = np.sum(im2.astype('float'), axis=2)
    # get rid of the averages, otherwise the results are not good
    im1_gray -= np.mean(im1_gray)
    im2_gray -= np.mean(im2_gray)
    # calculate the correlation image; note the flipping of one of the images
    return scipy.signal.fftconvolve(im1_gray, im2_gray[::-1,::-1], mode='same')
The funny-looking indexing im2_gray[::-1,::-1] rotates it by 180° (mirrors both horizontally and vertically). This is the difference between convolution and correlation: correlation is a convolution with the second signal mirrored.
Now if we just correlate the first (topmost) image with itself, we get:
This gives a measure of self-similarity of the image. The brightest spot is at (201, 200), which is in the center for the (402, 400) image.
The brightest spot coordinates can be found:
np.unravel_index(np.argmax(corr_img), corr_img.shape)
The linear position of the brightest pixel is returned by argmax, but it has to be converted back into the 2D coordinates with unravel_index.
Next, we try the same by correlating the first image with the second image:
The correlation image looks similar, but the best correlation has moved to (149,200), i.e. 52 pixels upwards in the image. This is the offset between the two images.
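Putting it together, a small usage sketch that turns the peak position into a pixel shift (the variable names are placeholders, and the sign convention depends on which image you treat as the reference):

corr = cross_image(im_reference, im_shifted)          # using the function above
peak_y, peak_x = np.unravel_index(np.argmax(corr), corr.shape)

# With mode='same', zero shift puts the peak at the image centre,
# so the displacement is the peak position minus the centre.
center_y, center_x = corr.shape[0] // 2, corr.shape[1] // 2
shift_y, shift_x = peak_y - center_y, peak_x - center_x
print("shift (rows, cols):", (shift_y, shift_x))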
This seems to work with these simple images. However, there may be false correlation peaks, as well, and any of the problems outlined in the beginning of this answer may ruin the results.
In any case you should consider using a windowing function. The choice of the function is not that important, as long as something is used. Also, if you have problems with small rotation or scale changes, try correlating several small areas against the surrounding image. That will give you different displacements at different positions of the image.
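For the windowing suggestion, a Hann window is a one-liner with numpy; something like this could be applied to the mean-subtracted images before the fftconvolve call:

def apply_hann_window(img_gray):
    # Taper the edges to zero to suppress spurious wrap-around correlation peaks
    rows, cols = img_gray.shape
    window = np.outer(np.hanning(rows), np.hanning(cols))
    return img_gray * window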
Another way to solve it is to compute sift points in both images, use RANSAC to get rid of outliers and then solve for translation using a least squares estimator.
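That could look roughly like this (an untested sketch: SIFT here needs the opencv-contrib xfeatures2d module used elsewhere on this page, the RANSAC step comes from cv2.estimateAffinePartial2D, and the least-squares translation is simply the mean displacement of the inlier matches):

import cv2
import numpy as np

def estimate_translation(img_ref, img_moved):
    # img_ref, img_moved: 8-bit grayscale images
    sift = cv2.xfeatures2d.SIFT_create()               # cv2.SIFT_create() in OpenCV >= 4.4
    kp1, des1 = sift.detectAndCompute(img_ref, None)
    kp2, des2 = sift.detectAndCompute(img_moved, None)

    # Lowe's ratio test on the raw matches
    knn = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < 0.75 * n.distance]

    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC rejects outlier correspondences
    _, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    inliers = inliers.ravel().astype(bool)

    # Least-squares translation = mean displacement of the inliers
    return (dst[inliers] - src[inliers]).mean(axis=0)   # (dx, dy)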
As Bharat said, another approach is using SIFT features and RANSAC:
import numpy as np
import cv2
from matplotlib import pyplot as plt
def crop_region(path, c_p):
    """
    This function crops the matched region in the input image
    c_p: corner points
    """
    # 3 or 4 channel as the original
    img = cv2.imread(path, -1)
    # mask
    mask = np.zeros(img.shape, dtype=np.uint8)
    # fill the match region
    channel_count = img.shape[2]
    ignore_mask_color = (255,) * channel_count
    cv2.fillPoly(mask, c_p, ignore_mask_color)
    # apply the mask
    matched_region = cv2.bitwise_and(img, mask)
    return matched_region
def features_matching(path_temp, path_train):
    """
    Function for Feature Matching + Perspective Transformation
    """
    img1 = cv2.imread(path_temp, 0)   # template
    img2 = cv2.imread(path_train, 0)  # input image
    min_match = 10
    # SIFT detector
    sift = cv2.xfeatures2d.SIFT_create()
    # extract the keypoints and descriptors with SIFT
    kps1, des1 = sift.detectAndCompute(img1, None)
    kps2, des2 = sift.detectAndCompute(img2, None)
    FLANN_INDEX_KDTREE = 0
    index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
    search_params = dict(checks=50)
    flann = cv2.FlannBasedMatcher(index_params, search_params)
    matches = flann.knnMatch(des1, des2, k=2)
    # store all the good matches (g_match) as per Lowe's ratio test
    g_match = []
    for m, n in matches:
        if m.distance < 0.7 * n.distance:
            g_match.append(m)
    if len(g_match) > min_match:
        src_pts = np.float32([kps1[m.queryIdx].pt for m in g_match]).reshape(-1, 1, 2)
        dst_pts = np.float32([kps2[m.trainIdx].pt for m in g_match]).reshape(-1, 1, 2)
        M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
        matchesMask = mask.ravel().tolist()
        h, w = img1.shape
        pts = np.float32([[0, 0], [0, h-1], [w-1, h-1], [w-1, 0]]).reshape(-1, 1, 2)
        dst = cv2.perspectiveTransform(pts, M)
        img2 = cv2.polylines(img2, [np.int32(dst)], True, (0, 255, 255), 3, cv2.LINE_AA)
    else:
        print("Not enough matches have been found! - %d/%d" % (len(g_match), min_match))
        # bail out early: 'dst' and 'matchesMask' below would be undefined
        return (None, None)
    draw_params = dict(matchColor=(0, 255, 255),
                       singlePointColor=(0, 255, 0),
                       matchesMask=matchesMask,  # only inliers
                       flags=2)
    # region corners
    cpoints = np.int32(dst)
    a, b, c = cpoints.shape
    # reshape to standard format
    c_p = cpoints.reshape((b, a, c))
    # crop matching region
    matching_region = crop_region(path_train, c_p)
    img3 = cv2.drawMatches(img1, kps1, img2, kps2, g_match, None, **draw_params)
    return (img3, matching_region)
I am just doing an example of feature detection in OpenCV. This example is shown below. It is giving me the following error
'module' object has no attribute 'drawMatches'
I have checked the OpenCV Docs and am not sure why I'm getting this error. Does anyone know why?
import numpy as np
import cv2
import matplotlib.pyplot as plt
img1 = cv2.imread('box.png',0) # queryImage
img2 = cv2.imread('box_in_scene.png',0) # trainImage
# Initiate ORB detector
orb = cv2.ORB()
# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)
# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# Match descriptors.
matches = bf.match(des1,des2)
# Draw first 10 matches.
img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10], flags=2)
plt.imshow(img3),plt.show()
Error:
Traceback (most recent call last):
File "match.py", line 22, in <module>
img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10], flags=2)
AttributeError: 'module' object has no attribute 'drawMatches'
I am late to the party as well, but I installed OpenCV 2.4.9 for Mac OS X, and the drawMatches function doesn't exist in my distribution. I've also tried the second approach with find_obj and that didn't work for me either. With that, I decided to write my own implementation of it that mimics drawMatches to the best of my ability and this is what I've produced.
I've provided my own images where one is of a camera man, and the other one is the same image but rotated by 55 degrees counterclockwise.
The basics of what I wrote: I allocate an output RGB image where the number of rows is the maximum of the two images (to accommodate placing both images in the output image) and the number of columns is simply the sum of the two. Be advised that I assume both images are grayscale.
I place each image in their corresponding spots, then run through a loop of all of the matched keypoints. I extract which keypoints matched between the two images, then extract their (x,y) coordinates. I draw circles at each of the detected locations, then draw a line connecting these circles together.
Bear in mind that the detected keypoint in the second image is with respect to its own coordinate system. If you want to place this in the final output image, you need to offset the column coordinate by the amount of columns from the first image so that the column coordinate is with respect to the coordinate system of the output image.
Without further ado:
import numpy as np
import cv2
def drawMatches(img1, kp1, img2, kp2, matches):
    """
    My own implementation of cv2.drawMatches as OpenCV 2.4.9
    does not have this function available but it's supported in
    OpenCV 3.0.0

    This function takes in two images with their associated
    keypoints, as well as a list of DMatch data structures (matches)
    that contains which keypoints matched in which images.

    An image will be produced where a montage is shown with
    the first image followed by the second image beside it.

    Keypoints are delineated with circles, while lines are connected
    between matching keypoints.

    img1,img2 - Grayscale images
    kp1,kp2 - Detected list of keypoints through any of the OpenCV keypoint
              detection algorithms
    matches - A list of matches of corresponding keypoints through any
              OpenCV keypoint matching algorithm
    """

    # Create a new output image that concatenates the two images together
    # (a.k.a) a montage
    rows1 = img1.shape[0]
    cols1 = img1.shape[1]
    rows2 = img2.shape[0]
    cols2 = img2.shape[1]

    # Create the output image
    # The rows of the output are the largest between the two images
    # and the columns are simply the sum of the two together
    # The intent is to make this a colour image, so make this 3 channels
    out = np.zeros((max([rows1,rows2]),cols1+cols2,3), dtype='uint8')

    # Place the first image to the left
    out[:rows1,:cols1] = np.dstack([img1, img1, img1])

    # Place the next image to the right of it
    out[:rows2,cols1:] = np.dstack([img2, img2, img2])

    # For each pair of points we have between both images
    # draw circles, then connect a line between them
    for mat in matches:

        # Get the matching keypoints for each of the images
        img1_idx = mat.queryIdx
        img2_idx = mat.trainIdx

        # x - columns
        # y - rows
        (x1,y1) = kp1[img1_idx].pt
        (x2,y2) = kp2[img2_idx].pt

        # Draw a small circle at both co-ordinates
        # radius 4
        # colour blue
        # thickness = 1
        cv2.circle(out, (int(x1),int(y1)), 4, (255, 0, 0), 1)
        cv2.circle(out, (int(x2)+cols1,int(y2)), 4, (255, 0, 0), 1)

        # Draw a line in between the two points
        # thickness = 1
        # colour blue
        cv2.line(out, (int(x1),int(y1)), (int(x2)+cols1,int(y2)), (255,0,0), 1)

    # Show the image
    cv2.imshow('Matched Features', out)
    cv2.waitKey(0)
    cv2.destroyWindow('Matched Features')

    # Also return the image if you'd like a copy
    return out
To illustrate that this works, here are the two images that I used:
I used OpenCV's ORB detector to detect the keypoints, and used the normalized Hamming distance as the distance measure for similarity as this is a binary descriptor. As such:
import numpy as np
import cv2
img1 = cv2.imread('cameraman.png', 0) # Original image - ensure grayscale
img2 = cv2.imread('cameraman_rot55.png', 0) # Rotated image - ensure grayscale
# Create ORB detector with 1000 keypoints with a scaling pyramid factor
# of 1.2
orb = cv2.ORB(1000, 1.2)
# Detect keypoints of original image
(kp1,des1) = orb.detectAndCompute(img1, None)
# Detect keypoints of rotated image
(kp2,des2) = orb.detectAndCompute(img2, None)
# Create matcher
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# Do matching
matches = bf.match(des1,des2)
# Sort the matches based on distance. Least distance
# is better
matches = sorted(matches, key=lambda val: val.distance)
# Show only the top 10 matches - also save a copy for use later
out = drawMatches(img1, kp1, img2, kp2, matches[:10])
This is the image I get:
To use with knnMatch from cv2.BFMatcher
I'd like to make a note: the above code only works if you assume that the matches appear in a 1D list. However, if you decide to use the knnMatch method from cv2.BFMatcher, for example, what is returned is a list of lists. Specifically, given the descriptors in img1 called des1 and the descriptors in img2 called des2, each element in the list returned from knnMatch is another list of k matches from des2 which are the closest to each descriptor in des1. Therefore, the first element from the output of knnMatch is a list of k matches from des2 which were the closest to the first descriptor found in des1. The second element from the output of knnMatch is a list of k matches from des2 which were the closest to the second descriptor found in des1, and so on.
To make the most sense of knnMatch, you must limit the total amount of neighbours to match to k=2. The reason why is because you want to use at least two matched points for each source point available to verify the quality of the match and if the quality is good enough, you'll want to use these to draw your matches and show them on the screen. You can use a very simple ratio test (credit goes to David Lowe) to ensure that for a point, we see that the distance / dissimilarity in matching to the best point is much smaller than the distance / dissimilarity in matching to the second best point. We can capture this by finding the ratio of the distance of the best matched point to the second best matched point. The ratio should be small to illustrate that a point to its best matched point is unambiguous. If the ratio is close to 1, this means that both matches are equally as "good" and thus ambiguous so we should not include these. We can think of this as an outlier rejection technique. Therefore, to turn what is returned from knnMatch to what is required with the code I wrote above, iterate through the matches, use the above ratio test and check if it passes. If it does, add the first matched keypoint to a new list.
Assuming that you created all of the variables like you did before declaring the BFMatcher instance, you'd now do this to adapt the knnMatch method for using drawMatches:
# Create matcher
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# Perform KNN matching
matches = bf.knnMatch(des1, des2, k=2)
# Apply ratio test
good = []
for m,n in matches:
    if m.distance / n.distance < 0.75: # Or you can do m.distance < 0.75 * n.distance
        # Add the match for point m to the best
        good.append(m)
# Or do a list comprehension
#good = [m for (m,n) in matches if m.distance < 0.75*n.distance]
# Now perform drawMatches
out = drawMatches(img1, kp1, img2, kp2, good)
As you iterate over the matches list, m and n are, for each descriptor in des1, its best match (m) and its second-best match (n), both from des2. If we see that the ratio is small, we'll add this best match between the two points (m) to a final list. The ratio that I have, 0.75, is a parameter that needs tuning, so if you're not getting good results, play around with this until you do. However, values between 0.7 and 0.8 are a good start.
I want to attribute the above modifications to user @ryanmeasel; the answer where these modifications were found is in his post: OpenCV Python : No drawMatchesknn function.
The drawMatches function is not part of the Python interface.
As you can see in the docs, it is only defined for C++ at the moment.
Excerpt from the docs:
C++: void drawMatches(const Mat& img1, const vector<KeyPoint>& keypoints1, const Mat& img2, const vector<KeyPoint>& keypoints2, const vector<DMatch>& matches1to2, Mat& outImg, const Scalar& matchColor=Scalar::all(-1), const Scalar& singlePointColor=Scalar::all(-1), const vector<char>& matchesMask=vector<char>(), int flags=DrawMatchesFlags::DEFAULT )
C++: void drawMatches(const Mat& img1, const vector<KeyPoint>& keypoints1, const Mat& img2, const vector<KeyPoint>& keypoints2, const vector<vector<DMatch>>& matches1to2, Mat& outImg, const Scalar& matchColor=Scalar::all(-1), const Scalar& singlePointColor=Scalar::all(-1), const vector<vector<char>>& matchesMask=vector<vector<char> >(), int flags=DrawMatchesFlags::DEFAULT )
If the function had a Python interface, you would find something like this:
Python: cv2.drawMatches(img1, keypoints1, [...])
EDIT
There actually was a commit that introduced this function 5 months ago. However, it is not (yet) in the official documentation.
Make sure you are using the newest OpenCV Version (2.4.7).
For the sake of completeness, the function's interface for OpenCV 3.0.0 will look like this:
cv2.drawMatches(img1, keypoints1, img2, keypoints2, matches1to2[, outImg[, matchColor[, singlePointColor[, matchesMask[, flags]]]]]) → outImg
I know this question has an accepted answer that is correct, but if you are using OpenCV 2.4.8 and not 3.0(-dev), a workaround could be to use some functions from the included samples found in opencv\sources\samples\python2\find_obj
import cv2
from find_obj import filter_matches,explore_match
img1 = cv2.imread('../c/box.png',0) # queryImage
img2 = cv2.imread('../c/box_in_scene.png',0) # trainImage
# Initiate ORB detector
orb = cv2.ORB()
# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)
# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING)#, crossCheck=True)
matches = bf.knnMatch(des1, trainDescriptors = des2, k = 2)
p1, p2, kp_pairs = filter_matches(kp1, kp2, matches)
explore_match('find_obj', img1,img2,kp_pairs)#cv2 shows image
cv2.waitKey()
cv2.destroyAllWindows()
This is the output image: