Bad matches between images when performing image registration - python

Original image 1
Original image 2
I am trying to match two microscopy images (please see the attached file). However, the matches are horrible and the homography matrix produces an unacceptable result. Is there a way to improve this registration?
import cv2 # Imports the Open CV2 module for image manipulation.
import numpy as np # Imports the numpy module for numerical manipulation.
from tkinter import Tk # Imports tkinter for the creation of a graphic user interface.
from tkinter.filedialog import askopenfilename # Imports the filedialog window from tkinter
Tk().withdraw()
filename1 = askopenfilename(title='Select the skewed file')
Tk().withdraw()
filename2 = askopenfilename(title='Select the original file')
img1 = cv2.imread(filename1)
img2 = cv2.imread(filename2)
img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
orb = cv2.ORB_create(nfeatures=10000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matcher = cv2.DescriptorMatcher_create(cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING)
matches = matcher.match(des1, des2, None)
matches = sorted(matches, key = lambda x:x.distance)
points1 = np.zeros((len(matches), 2), dtype=np.float32)
points2 = np.zeros((len(matches), 2), dtype=np.float32)
for i, match in enumerate(matches):
    points1[i, :] = kp1[match.queryIdx].pt
    points2[i, :] = kp2[match.trainIdx].pt
h, mask = cv2.findHomography(points1, points2, cv2.RANSAC)
height, width = img2.shape
im1Reg = cv2.warpPerspective(img1, h, (width, height))
img3 = cv2.drawKeypoints(img1, kp1, None, flags=0)
img4 = cv2.drawKeypoints(img2, kp2, None, flags=0)
img5 = cv2.drawMatches(img1, kp1, img2, kp2, matches[:10], None)
img = np.dstack((im1Reg, img2, im1Reg))
cv2.imshow("Shifted", img3)
cv2.imshow("Original", img4)
cv2.imshow("Matches", img5)
cv2.imshow("Registered", im1Reg)
cv2.imshow("Merged", img)
cv2.waitKey(0)
Image showing the matches I get

(I may be wrong, since I haven't dealt with microscopy image processing and there are probably well-established ways of solving the typical problems in that area; you should investigate them if this is not a toy project.)
In my opinion you should try a different approach to your problem instead of using any kind of point-feature-based image descriptor (ORB, SURF, etc.).
First of all, not all of them provide the subpixel accuracy you may need when processing microscopy images. But the main reason is the math behind those descriptors; refer to any CV book or paper.
Here is the link to the ORB descriptor paper. Notice the images the authors use for matching detected points. Good points are the ones on edges and corners of the image, so the method works for matching objects of sharp and distinctive shape.
Well-known example:
The matched points are on the letters (unique shapes) and the textured drawing.
Try to detect a plain green textbook (without any letters or anything else on its cover) with this tool and you will fail.
So I think your images are not the kind that can be processed this way (the objects are not sharp in shape, not textured, and very close to each other). It would be hard even for the human eye to match such similar circles (in a less obvious example, e.g. if one view were shifted left/right a little).
But what I notice at a glance is that your image can be described very well by the circles in it. Hough-based circle detection is much simpler (both to understand and to compute) and, most importantly, it can give almost 100% accuracy on such images. You can easily work with circle count, size, position, etc.
Microscopy CV is a separate area with its own common tools, and there may be many pros and cons to using Hough or anything else, but at first glance it seems a much more accurate choice than point feature descriptors.
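To illustrate the circle idea (my own minimal sketch, not the answerer's code; the file names and all HoughCircles parameters are placeholders that would need tuning for real microscopy images):
import cv2
import numpy as np

def detect_circles(gray):
    # Smooth first so the Hough transform is not thrown off by noise.
    blurred = cv2.medianBlur(gray, 5)
    # dp, minDist and the radius/threshold values below are guesses to be tuned.
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                               param1=100, param2=30, minRadius=5, maxRadius=60)
    return circles[0] if circles is not None else np.empty((0, 3))

img1 = cv2.imread('skewed.tif', cv2.IMREAD_GRAYSCALE)    # hypothetical file names
img2 = cv2.imread('original.tif', cv2.IMREAD_GRAYSCALE)
c1, c2 = detect_circles(img1), detect_circles(img2)
# If roughly the same circles are found in both images, the difference of the
# mean circle centres gives a crude estimate of a pure translation between them.
if len(c1) and len(c2):
    dx, dy = c2[:, :2].mean(axis=0) - c1[:, :2].mean(axis=0)
    print("estimated shift (dx, dy):", dx, dy)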

Related

Keypoints detection and matching between binary masks

I am trying to match keypoints using opencv (tutorial) between the images shown below.
The thing is that I am not sure if I need to adjust some parameters or if I am using the wrong method entirely. Taking only the right side of map.png did not help either.
Here is my code and also the result.
import numpy as np
import cv2
import matplotlib.pyplot as plt
img1 = cv2.imread('../map.png',0)
img2 = cv2.imread('../mask.png',0)
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1,des2)
matches = sorted(matches, key = lambda x:x.distance)
img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:20], None, flags=2)
cv2.imwrite('test.png', img3)
Feature detectors such as ORB, which you are using, are designed to match feature points between images that differ in translation, rotation and scale. They are not intended for images that differ significantly in perspective (which is your case), and therefore your approach doesn't work. Moreover, such algorithms are designed for images that are rich in texture, such as photos. In your case the features are repetitive (multiple feature points extracted from the first image, such as line endings, can be matched to a single point in the other).
In your case you should consider other features, such as those based on line intersections; see this tutorial for more information. This is only a hint, not a solution to your problem, as it is really challenging.
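As a rough sketch of that hint (my own, not taken from the linked tutorial), one could detect line segments with cv2.HoughLinesP and use their pairwise intersections as candidate feature points; all thresholds below are guesses:
import cv2
import numpy as np

def line_intersections(binary_img):
    # Detect line segments in the mask; the thresholds need tuning per image.
    edges = cv2.Canny(binary_img, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=5)
    points = []
    if lines is None:
        return points
    segs = lines[:, 0]  # each row: x1, y1, x2, y2
    for i in range(len(segs)):
        x1, y1, x2, y2 = segs[i]
        for j in range(i + 1, len(segs)):
            x3, y3, x4, y4 = segs[j]
            # Intersection of the infinite lines through the two segments;
            # skip (near-)parallel pairs, and points far outside the image
            # can be filtered out afterwards.
            denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
            if abs(denom) < 1e-6:
                continue
            px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / denom
            py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / denom
            points.append((px, py))
    return points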

Image stitching distorted wrap with multiple images

I am working on a project which requires me to stitch images together. I decided to test this with buildings due to the large number of possible key points that can be calculated. I have been following several guides, but the one with the best results for 2-3 images has been this guide: https://towardsdatascience.com/image-stitching-using-opencv-817779c86a83. The way I decided to stitch multiple images is to stitch the first two, take the output, and then stitch that with the third image, and so on. I am confident in the matching of descriptors for the images. But as I stitch more and more images, the previously stitched part gets pushed further and further into the -z axis, meaning it gets distorted and smaller. The code I use to accomplish this is as follows:
import cv2
import numpy as np
import os
os.chdir('images')
# note: imread's second argument is an imread flag; cv2.COLOR_BGR2GRAY is a
# colour-conversion code, so cv2.IMREAD_GRAYSCALE is what was intended here
img_ = cv2.imread('Output.jpg', cv2.IMREAD_GRAYSCALE)
img = cv2.imread('DJI_0019.jpg', cv2.IMREAD_GRAYSCALE)
#Setting up orb key point detector
orb = cv2.ORB_create()
#using orb to compute keypoints and descriptors
kp, des = orb.detectAndCompute(img_, None)
kp2, des2 = orb.detectAndCompute(img, None)
print(len(kp))
#Setting up BFmatcher
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=False)
matches = bf.knnMatch(des, des2, k=2) #Find the 2 best matches for each descriptor (required for the ratio test)
#Using Lowe's ratio test as suggested in the paper, at .7-.8
good = []
for m in matches:
    if m[0].distance < .8 * m[1].distance:
        good.append(m)
matches = np.asarray(good) #matches is essentially a list of matching descriptors
#Aligning the images
if len(matches) >= 4:
    src = np.float32([kp[m.queryIdx].pt for m in matches[:, 0]]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches[:, 0]]).reshape(-1, 1, 2)
    #Creating the homography and mask
    H, masked = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print(H)
else:
    print("Could not find 4 good matches to find homography")
dst = cv2.warpPerspective(img_, H, (img.shape[1] + 900, img.shape[0]))
dst[0:img.shape[0], 0:img.shape[1]] = img
cv2.imwrite("Output.jpg", dst)
With the output of the 4th+ stitch looking like this:
As you can see, the images are getting further and further transformed in a strange way. My theory for why this is happening is the camera position and angle at which the images were taken, but I am not sure. If that is the case, are there optimal parameters that will produce the best images for stitching?
Is there a way to fix this issue where the content can be pushed "flush" against the x axis?
Edit: Adding source images: https://imgur.com/zycPQuV

opencv feature matching with empty form template

I've been trying to match a scanned form with its empty template. The goal is to rotate and scale it to match the template.
Source (left), template (right)
Match (left), Homography warp (right)
The template does not contain any very specific logo, fixation cross or rectangular frame that would conveniently help me with feature or pattern matching. Even worse, the scanned form can be skewed or altered, and it contains handwritten signatures and stamps.
My approach, after unsuccessfully testing ORB feature matching, was to concentrate on the shape of the form (lines and columns).
The pictures I provide here are obtained by reconstituting lines after a segment detection (LSD) with a certain minimum size. Most of what remains for source and template is the document layout itself.
In the following script (which should work out of the box along with the pictures), I attempt to do ORB feature matching, but fail to make it work because it concentrates on edges and not on the document layout.
import cv2 # using opencv-python v3.4
import numpy as np
from imutils import resize
# aligning image using ORB descriptors, then homography warp
def align_images(im1, im2, MAX_MATCHES=5000, GOOD_MATCH_PERCENT=0.15):
    # Detect ORB features and compute descriptors.
    orb = cv2.ORB_create(MAX_MATCHES)
    keypoints1, descriptors1 = orb.detectAndCompute(im1, None)
    keypoints2, descriptors2 = orb.detectAndCompute(im2, None)
    # Match features.
    matcher = cv2.DescriptorMatcher_create(cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING)
    matches = matcher.match(descriptors1, descriptors2, None)
    # Sort matches by score
    matches.sort(key=lambda x: x.distance, reverse=False)
    # Remove not-so-good matches
    numGoodMatches = int(len(matches) * GOOD_MATCH_PERCENT)
    matches = matches[:numGoodMatches]
    # Draw top matches
    imMatches = cv2.drawMatches(im1, keypoints1, im2, keypoints2, matches, None)
    # Extract location of good matches
    points1 = np.zeros((len(matches), 2), dtype=np.float32)
    points2 = np.zeros((len(matches), 2), dtype=np.float32)
    for i, match in enumerate(matches):
        points1[i, :] = keypoints1[match.queryIdx].pt
        points2[i, :] = keypoints2[match.trainIdx].pt
    # Find homography
    h, mask = cv2.findHomography(points1, points2, cv2.RANSAC)
    # Use homography
    if len(im2.shape) == 2:
        height, width = im2.shape
    else:
        height, width, channels = im2.shape
    im1Reg = cv2.warpPerspective(im1, h, (width, height))
    return im1Reg, h, imMatches
template_fn = './stack/template.jpg'
image_fn = './stack/image.jpg'
im = cv2.imread(image_fn, cv2.IMREAD_GRAYSCALE)
template = cv2.imread(template_fn, cv2.IMREAD_GRAYSCALE)
# align images
imReg, h, matches = align_images(template,im)
# display output
cv2.imshow('im',im)
cv2.imshow('template',template)
cv2.imshow('matches',matches)
cv2.imshow('result',imReg)
cv2.waitKey(0)
cv2.destroyAllWindows()
Is there any way to make the pattern matching algorithm work on the image on the left (source)? (Another idea was to keep only the line intersections.)
Alternatively, I have been trying to do scale- and rotation-invariant pattern matching by looping over scales and rotations and keeping the maximum correlation, but it is far too resource-consuming and not very reliable.
I'm therefore looking for hints in the right direction using opencv.
SOLUTION
The issue was about reducing the image to what really matters: the layout.
Also, ORB was not appropriate, since it is not as robust (rotation- and scale-invariant) as SIFT and AKAZE are.
I proceeded as follows:
convert the images to black and white
use line segment detection and filter lines shorter than 1/60th of the width
reconstruct the image from segments (line width does not have a big impact)
(optional: resize the pictures to speed up the rest)
apply a Gaussian blur to the line reconstruction, with a kernel of about 1/25th of the width
detect and match features using the SIFT (patented) or AKAZE (free) algorithm
find a homography and warp the source picture to match the template
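A condensed sketch of those steps (my own reconstruction, not the author's exact code; cv2.createLineSegmentDetector is missing from some OpenCV builds for licensing reasons, and the 1/60 and 1/25 fractions mirror the values listed above):
import cv2
import numpy as np

def layout_image(gray, min_len_frac=1/60, blur_frac=1/25):
    # Keep only line segments longer than min_len_frac of the width, redraw
    # them on a blank canvas and blur: this isolates the document layout.
    w = gray.shape[1]
    lsd = cv2.createLineSegmentDetector()
    lines = lsd.detect(gray)[0]
    canvas = np.zeros_like(gray)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            if np.hypot(x2 - x1, y2 - y1) >= w * min_len_frac:
                cv2.line(canvas, (int(x1), int(y1)), (int(x2), int(y2)), 255, 2)
    k = max(3, int(w * blur_frac)) | 1  # Gaussian kernel size must be odd
    return cv2.GaussianBlur(canvas, (k, k), 0)

def register(src_gray, tpl_gray):
    # Assumes at least four good matches survive between the two layout images.
    a, b = layout_image(src_gray), layout_image(tpl_gray)
    akaze = cv2.AKAZE_create()
    kp1, des1 = akaze.detectAndCompute(a, None)
    kp2, des2 = akaze.detectAndCompute(b, None)
    # AKAZE's default descriptor is binary, so Hamming distance is used.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 5.0)
    h, w = tpl_gray.shape
    return cv2.warpPerspective(src_gray, H, (w, h))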
Matches for AKAZE
Matches for SIFT
I noted:
the layout of the template has to match, otherwise it will only stick to what it recognizes
line detection is better at higher resolution; downsizing afterwards is possible since a Gaussian blur is applied
SIFT produces more features and seems more reliable than AKAZE

How to detect a shift between images

I am analyzing multiple images and need to be able to tell if they are shifted compared to a reference image. The purpose is to tell if the camera moved at all in between capturing images. I would ideally like to be able to correct the shift in order to still do the analysis, but at a minimum I need to be able to determine if an image is shifted and discard it if it's beyond a certain threshold.
Here are some examples of the shifts in an image I would like to detect:
I will use the first image as a reference and then compare all of the following images to it to figure out if they are shifted. The images are gray-scale (they are just displayed in color using a heat-map) and are stored in a 2-D numpy array. Any ideas how I can do this? I would prefer to use the packages I already have installed (scipy, numpy, PIL, matplotlib).
As Lukas Graf hints, you are looking for cross-correlation. It works well, if:
The scale of your images does not change considerably.
There is no rotation change in the images.
There is no significant illumination change in the images.
For plain translations cross-correlation is very good.
The simplest cross-correlation tool is scipy.signal.correlate. However, it uses the trivial method for cross-correlation, which is O(n^4) for a two-dimensional image with side length n. In practice, with your images it will take a very long time.
The better tool is scipy.signal.fftconvolve, since convolution and correlation are closely related.
Something like this:
import numpy as np
import scipy.signal
def cross_image(im1, im2):
    # get rid of the color channels by performing a grayscale transform
    # the type cast into 'float' is to avoid overflows
    im1_gray = np.sum(im1.astype('float'), axis=2)
    im2_gray = np.sum(im2.astype('float'), axis=2)
    # get rid of the averages, otherwise the results are not good
    im1_gray -= np.mean(im1_gray)
    im2_gray -= np.mean(im2_gray)
    # calculate the correlation image; note the flipping of one of the images
    return scipy.signal.fftconvolve(im1_gray, im2_gray[::-1,::-1], mode='same')
The funny-looking indexing of im2_gray[::-1,::-1] rotates it by 180° (mirrors both horizontally and vertically). This is the difference between convolution and correlation, correlation is a convolution with the second signal mirrored.
Now if we just correlate the first (topmost) image with itself, we get:
This gives a measure of self-similarity of the image. The brightest spot is at (201, 200), which is in the center for the (402, 400) image.
The brightest spot coordinates can be found:
np.unravel_index(np.argmax(corr_img), corr_img.shape)
The linear position of the brightest pixel is returned by argmax, but it has to be converted back into the 2D coordinates with unravel_index.
Next, we try the same by correlating the first image with the second image:
The correlation image looks similar, but the best correlation has moved to (149,200), i.e. 52 pixels upwards in the image. This is the offset between the two images.
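Putting the pieces together, the offset can be computed by comparing the peak location with the image centre (a small usage sketch based on the cross_image function above; im1 is the reference and im2 the shifted image):
corr = cross_image(im1, im2)
peak_y, peak_x = np.unravel_index(np.argmax(corr), corr.shape)
# With mode='same', zero shift corresponds to a peak at the image centre.
center_y, center_x = np.array(corr.shape) // 2
dy, dx = peak_y - center_y, peak_x - center_x  # e.g. (-52, 0) for the example above
print("shift (dy, dx):", dy, dx)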
This seems to work with these simple images. However, there may be false correlation peaks, as well, and any of the problems outlined in the beginning of this answer may ruin the results.
In any case you should consider using a windowing function. The choice of the function is not that important, as long as something is used. Also, if you have problems with small rotation or scale changes, try correlating several small areas against the surrounding image. That will give you different displacements at different positions of the image.
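For instance, a 2-D Hann window can be multiplied into each grayscale image before it goes into the FFT-based correlation (a minimal sketch):
import numpy as np

def windowed(im_gray):
    # The outer product of two 1-D Hann windows tapers the image borders,
    # which suppresses spurious correlation peaks caused by the edges.
    window = np.outer(np.hanning(im_gray.shape[0]), np.hanning(im_gray.shape[1]))
    return im_gray * window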
Another way to solve it is to compute SIFT points in both images, use RANSAC to get rid of outliers, and then solve for the translation using a least-squares estimator.
As Bharat said, another approach is to use SIFT features and RANSAC:
import numpy as np
import cv2
from matplotlib import pyplot as plt
def crop_region(path, c_p):
    """
    This function crops the matched region in the input image
    c_p: corner points
    """
    # 3 or 4 channels, as in the original
    img = cv2.imread(path, -1)
    # mask
    mask = np.zeros(img.shape, dtype=np.uint8)
    # fill the matched region
    channel_count = img.shape[2]
    ignore_mask_color = (255,) * channel_count
    cv2.fillPoly(mask, c_p, ignore_mask_color)
    # apply the mask
    matched_region = cv2.bitwise_and(img, mask)
    return matched_region
def features_matching(path_temp, path_train):
    """
    Function for Feature Matching + Perspective Transformation
    """
    img1 = cv2.imread(path_temp, 0)   # template
    img2 = cv2.imread(path_train, 0)  # input image
    min_match = 10
    # SIFT detector
    sift = cv2.xfeatures2d.SIFT_create()
    # extract the keypoints and descriptors with SIFT
    kps1, des1 = sift.detectAndCompute(img1, None)
    kps2, des2 = sift.detectAndCompute(img2, None)
    FLANN_INDEX_KDTREE = 0
    index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
    search_params = dict(checks=50)
    flann = cv2.FlannBasedMatcher(index_params, search_params)
    matches = flann.knnMatch(des1, des2, k=2)
    # store all the good matches (g_match) as per Lowe's ratio
    g_match = []
    for m, n in matches:
        if m.distance < 0.7 * n.distance:
            g_match.append(m)
    if len(g_match) > min_match:
        src_pts = np.float32([kps1[m.queryIdx].pt for m in g_match]).reshape(-1, 1, 2)
        dst_pts = np.float32([kps2[m.trainIdx].pt for m in g_match]).reshape(-1, 1, 2)
        M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
        matchesMask = mask.ravel().tolist()
        h, w = img1.shape
        pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
        dst = cv2.perspectiveTransform(pts, M)
        img2 = cv2.polylines(img2, [np.int32(dst)], True, (0, 255, 255), 3, cv2.LINE_AA)
        draw_params = dict(matchColor=(0, 255, 255),
                           singlePointColor=(0, 255, 0),
                           matchesMask=matchesMask,  # only inliers
                           flags=2)
        # region corners
        cpoints = np.int32(dst)
        a, b, c = cpoints.shape
        # reshape to standard format
        c_p = cpoints.reshape((b, a, c))
        # crop matching region
        matching_region = crop_region(path_train, c_p)
        img3 = cv2.drawMatches(img1, kps1, img2, kps2, g_match, None, **draw_params)
        return (img3, matching_region)
    else:
        # without enough matches there is no homography, so return nothing
        print("Not enough matches have been found! - %d/%d" % (len(g_match), min_match))
        return (None, None)

Image stitching Python

I have to stitch two or more images together using python and openCV.
I found this code for finding keypoints and matches, but I don't know how to continue.
Help me please!
import numpy as np
import cv2
MIN_MATCH_COUNT = 10
img1 = cv2.imread('a.jpg',0) # queryImage
img2 = cv2.imread('b.jpg',0) # trainImage
# Initiate SIFT detector
sift = cv2.SIFT()  # in OpenCV 3.x this is cv2.xfeatures2d.SIFT_create(); in OpenCV >= 4.4 it is cv2.SIFT_create()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks = 50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1,des2,k=2)
# store all the good matches as per Lowe's ratio test.
good = []
for m,n in matches:
    if m.distance < 0.7*n.distance:
        good.append(m)
Your question is not very clear, but I assume you mean that you have a bunch of images and you want OpenCV to find the corresponding landmarks and then warp/scale each picture so that they can form one big image.
A method without using the Stitcher class, basically looping over pictures and determining the best-fitting one on each iteration, is documented in this GitHub code.
One approach to image stitching consists of the following steps.
Firstly, as you've already figured out, you need a feature point detector and some way to find correspondences between feature points in both images. It's typically a good idea to eliminate a lot of correspondences because they will likely contain a lot of noise. A super simple way to eliminate a lot of noise is to look for symmetry in the matches, as in the sketch below.
This is roughly what your code does up to this point.
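One simple way to enforce that symmetry (a hedged sketch reusing des1 and des2 from the question's code, with a brute-force matcher instead of FLANN) is cross-checked matching:
import cv2

# crossCheck=True keeps a match only if it is the best match in both
# directions, i.e. the correspondence is symmetric.
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)  # NORM_L2 suits SIFT descriptors
sym_matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)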
Next, to stitch images together, you need to warp one of the images to match the perspective of the other image. This is done by estimating the homography using the correspondences. Because your correspondences will still likely contain a lot of noise, we typically use RANSAC to robustly estimate the homography.
A quick google search provides many examples of this being implemented.
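As a hedged sketch of those two steps, continuing directly from the question's code (it reuses good, kp1, kp2, img1, img2 and MIN_MATCH_COUNT from above; the canvas width is a naive guess):
if len(good) >= MIN_MATCH_COUNT:
    src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # Robustly estimate the homography that maps img1 into img2's frame.
    H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    # Warp img1 onto a canvas wide enough for both images, then paste img2 over it.
    h, w = img2.shape[:2]
    panorama = cv2.warpPerspective(img1, H, (w + img1.shape[1], h))
    panorama[0:h, 0:w] = img2
    cv2.imwrite('stitched.jpg', panorama)
else:
    print("Not enough matches: %d/%d" % (len(good), MIN_MATCH_COUNT))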
