Replicate Photoshop focus stacking in Python

Hi, I am trying to focus stack a set of images. I have used Photoshop to focus stack them, but I am wondering whether there is a way to replicate Photoshop's method in Python. My understanding of the Photoshop method is that it creates a mask for each of the original images, keeping only the in-focus areas of each image, and blends those areas into the final image.
I have used the following as references: https://www.learnopencv.com/image-alignment-ecc-in-opencv-c-python/
https://github.com/spmallick/learnopencv/blob/master/ImageAlignment/image_alignment.py
https://github.com/spmallick/learnopencv/blob/master/ImageAlignment-FeatureBased/align.py
What I have tried so far is to find the homography between pairs of images, using ORB to detect the key points and descriptors for image alignment. Based on these, I warp the images before producing the final sharp image:
import cv2
import numpy as np

def findHomography(image_1_kp, image_2_kp, matches):
    # Build matched point arrays and estimate the homography with RANSAC
    image_1_points = np.zeros((len(matches), 1, 2), dtype=np.float32)
    image_2_points = np.zeros((len(matches), 1, 2), dtype=np.float32)
    for i in range(0, len(matches)):
        image_1_points[i] = image_1_kp[matches[i].queryIdx].pt
        image_2_points[i] = image_2_kp[matches[i].trainIdx].pt
    homography, mask = cv2.findHomography(image_1_points, image_2_points, cv2.RANSAC, ransacReprojThreshold=2.0)
    return homography

print("Detecting features of base image")
detector = cv2.ORB_create()
outimages = [images[0]]
image1gray = cv2.cvtColor(images[0], cv2.COLOR_BGR2GRAY)
image_1_kp, image_1_desc = detector.detectAndCompute(image1gray, None)
for i in range(1, len(images)):
    print("Aligning image {}".format(i))
    image_i_kp, image_i_desc = detector.detectAndCompute(images[i], None)
    bf = cv2.BFMatcher()
    # This returns the top two matches for each feature point (list of lists)
    pairMatches = bf.knnMatch(image_i_desc, image_1_desc, k=2)
    rawMatches = []
    for m, n in pairMatches:
        if m.distance < 0.7 * n.distance:
            rawMatches.append(m)
    sortMatches = sorted(rawMatches, key=lambda x: x.distance)
    matches = sortMatches[0:128]
    hom = findHomography(image_i_kp, image_1_kp, matches)
    newimage = cv2.warpPerspective(images[i], hom, (images[i].shape[1], images[i].shape[0]), flags=cv2.INTER_LINEAR)
    outimages.append(newimage)
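Once the frames are aligned, the masking step described above (keeping only the in-focus areas of each image) still has to be done. A minimal sketch of one common way to approximate it, assuming outimages holds the aligned BGR frames and using the absolute Laplacian response as the focus measure (my choice of measure, not necessarily what Photoshop does):

import cv2
import numpy as np

def focus_stack(aligned_images):
    # Focus measure: absolute Laplacian response of each grayscale frame
    laps = []
    for img in aligned_images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)
        laps.append(np.abs(cv2.Laplacian(blurred, cv2.CV_64F)))
    laps = np.asarray(laps)
    # For every pixel, pick the frame with the strongest focus response
    best = np.argmax(laps, axis=0)
    stacked = np.zeros_like(aligned_images[0])
    for idx, img in enumerate(aligned_images):
        mask = best == idx
        stacked[mask] = img[mask]
    return stacked

result = focus_stack(outimages)
cv2.imwrite("focus_stacked.jpg", result)

A per-pixel argmax over the Laplacian responses is crude (smoothing the index map gives cleaner seams), but it reproduces the basic idea of blending only the sharpest regions from each frame.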

Related

Image stitching distorted warp with multiple images

I am working on a project which requires me to stitch images together. I decided to test this with buildings, due to the large number of possible key points that can be calculated. I have been following several guides, but the one with the best results for 2-3 images has been this guide: https://towardsdatascience.com/image-stitching-using-opencv-817779c86a83. The way I decided to stitch multiple images is to stitch the first two, then take the output and stitch that with the third image, and so on. I am confident in the matching of descriptors for the images. But as I stitch more and more images, the previously stitched part gets pushed further and further into the -z axis, meaning it gets distorted and smaller. The code I use to accomplish this is as follows:
import cv2
import numpy as np
import os

os.chdir('images')
img_ = cv2.imread('Output.jpg', cv2.COLOR_BGR2GRAY)
img = cv2.imread('DJI_0019.jpg', cv2.COLOR_BGR2GRAY)

# Setting up ORB key point detector
orb = cv2.ORB_create()
# Using ORB to compute keypoints and descriptors
kp, des = orb.detectAndCompute(img_, None)
kp2, des2 = orb.detectAndCompute(img, None)
print(len(kp))

# Setting up BFMatcher
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=False)
matches = bf.knnMatch(des, des2, k=2)  # find the 2 best matches for each descriptor (needed for the ratio test)
# Using Lowe's ratio test as suggested in the paper, at .7-.8
good = []
for m in matches:
    if m[0].distance < .8 * m[1].distance:
        good.append(m)
matches = np.asarray(good)  # matches is essentially a list of matching descriptors

# Aligning the images
if len(matches) >= 4:
    src = np.float32([kp[m.queryIdx].pt for m in matches[:, 0]]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches[:, 0]]).reshape(-1, 1, 2)
    # Creating the homography and mask
    H, masked = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print(H)
else:
    print("Could not find 4 good matches to find homography")

dst = cv2.warpPerspective(img_, H, (img.shape[1] + 900, img.shape[0]))
dst[0:img.shape[0], 0:img.shape[1]] = img
cv2.imwrite("Output.jpg", dst)
With the output of the 4th+ stitch looking like this:
As you can see, the images are getting transformed further and further in a strange way. My theory is that this happens because of the camera position and angle at which the images were taken, but I am not sure. If that is the case, are there optimal parameters that will produce images better suited to stitching?
Is there a way to fix this issue so that the content can be pushed "flush" against the x axis?
Edit: Adding source images: https://imgur.com/zycPQuV

How to detect good features for rotationally aligning microscope images to a template

I'm working on a project to automatically rotate microscope image stacks of a fluid experiment so that they are lined up with images of the CAD template for the microfluidic chip. I am using the OpenCV package in Python for image processing. Having the correct rotational orientation is necessary so that the images can be masked properly for analysis. Our chips have markers filled with fluorescent dye that are visible in every frame. The template and a sample image look like the following (the template can be scaled to arbitrary size, but the relevant region of the images is typically ~100x100 pixels or so):
I have not been able to rotationally align the image to the CAD template. Typically, the misalignment between the CAD template and the images is less than a few degrees, which is still sufficient to interfere with analysis, so I need to be able to measure the rotational difference even if it is relatively small.
Following examples online I am using the following procedure:
Scale up the image to approximately the same size as the template using cubic interpolation (~800 x 800)
Threshold both images using Otsu's method
Find keypoints and extract descriptors using a built-in method (I've tried ORB, AKAZE, and BRIEF).
Match descriptors using a brute-force matcher with Hamming distance.
Take the best matches and use them to compute a partial affine transformation matrix
Use that matrix to infer a rotational shift, warping the one image to the other as a check.
Here's a sample of my code (borrowed in part from here):
import numpy as np
import cv2
import matplotlib.pyplot as plt

MAX_FEATURES = 500
GOOD_MATCH_PERCENT = 0.5

def alignImages(im1, im2, returnpoints=False):
    # Detect ORB features and compute descriptors.
    size1 = int(0.1 * (np.mean(np.shape(im1))))
    size2 = int(0.1 * (np.mean(np.shape(im2))))
    orb1 = cv2.ORB_create(MAX_FEATURES, edgeThreshold=size1, patchSize=size1)
    orb2 = cv2.ORB_create(MAX_FEATURES, edgeThreshold=size2, patchSize=size2)
    keypoints1, descriptors1 = orb1.detectAndCompute(im1, None)
    keypoints2, descriptors2 = orb2.detectAndCompute(im2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(descriptors1, descriptors2)
    # Sort matches by score
    matches.sort(key=lambda x: x.distance, reverse=False)
    # Remove not so good matches
    numGoodMatches = int(len(matches) * GOOD_MATCH_PERCENT)
    matches = matches[:numGoodMatches]
    # Draw top matches
    imMatches = cv2.drawMatches(im1, keypoints1, im2, keypoints2, matches, None)
    cv2.imwrite("matches.jpg", imMatches)
    # Extract location of good matches
    points1 = np.zeros((len(matches), 2), dtype=np.float32)
    points2 = np.zeros((len(matches), 2), dtype=np.float32)
    for i, match in enumerate(matches):
        points1[i, :] = keypoints1[match.queryIdx].pt
        points2[i, :] = keypoints2[match.trainIdx].pt
    # Estimate a partial affine transform (rotation, translation, uniform scale)
    M, inliers = cv2.estimateAffinePartial2D(points1, points2)
    height, width = im2.shape
    im1Reg = cv2.warpAffine(im1, M, (width, height))
    return im1Reg, M

if __name__ == "__main__":
    test_template = cv2.cvtColor(cv2.imread("test_CAD_cropped.png"), cv2.COLOR_RGB2GRAY)
    test_image = cv2.cvtColor(cv2.imread("test_CAD_cropped.png"), cv2.COLOR_RGB2GRAY)
    fx = fy = 88/923
    test_image_big = cv2.resize(test_image, (0, 0), fx=1/fx, fy=1/fy, interpolation=cv2.INTER_CUBIC)
    ret, imRef_t = cv2.threshold(test_template, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    ret, test_big_t = cv2.threshold(test_image_big, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    imReg, M = alignImages(test_big_t, imRef_t)
    fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(8, 8))
    ax[1, 0].imshow(imReg)
    ax[1, 0].set_title("Warped Image")
    ax[0, 0].imshow(imRef_t)
    ax[0, 0].set_title("Template")
    ax[0, 1].imshow(test_big_t)
    ax[0, 1].set_title("Thresholded Image")
    ax[1, 1].imshow(imRef_t - imReg)
    ax[1, 1].set_title("Diff")
    plt.show()
In this example, I get the following bad transformation because there are only 3 matching keypoints and they are all incorrect:
I find that regardless of my keypoint/descriptor parameters I tend to get too few "good" features. Is there anything I can do to pre-process my images better to get good features more reliably, or is there a better method to align my images to this template that doesn't involve keypoint matching? The specific application of this experiment means that I can't use the patented keypoint extractor/descriptors like SURF and SIFT.
A good method to align two images based on rotation, translation and scaling only is the Fourier Mellin transform.
Here is an example using the implementation in DIPlib (disclosure: I'm an author):
import diplib as dip
# load data
image = dip.ImageRead('image.png')
template = dip.ImageRead('template.png')
template = template.TensorElement(0) # this one is RGB, take any one channel
# pad the two images with zeros so they have equal sizes
sz = [max(image.Size(0), template.Size(0)), max(image.Size(1), template.Size(1))]
image = image.Pad(sz)
template = template.Pad(sz)
# match
res = dip.FourierMellinMatch2D(template, image)
# display
dip.JoinChannels((template,res,res)).Show()
However, there are many other approaches. A key thing here is that both the template and the image are quite simple, and very similar. This makes registration very easy.
For example, assuming you have the proper scaling of the template (this should not be a problem I presume), all you need to do is find the rotation and the translation. You can brute-force the rotations, simply rotating the image over a set of small angles, and matching each of the results with the template (cross-correlation). The one with the best match (largest cross-correlation value) has the appropriate rotation. If you need to have a very precise rotation estimation, you can do a second set of angles close to the best choice in the first set.
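A minimal sketch of that brute-force search, assuming image and template are equal-sized single-channel arrays (uint8 or float32) and using an FFT-based cross-correlation so the unknown translation does not matter; the function name and the angle ranges are my own choices:

import numpy as np
import cv2

def rotation_by_brute_force(image, template, angles):
    # Cross-correlate each rotated copy of the image with the template and
    # keep the angle whose correlation peak is highest.
    h, w = image.shape
    center = (w / 2, h / 2)
    best_angle, best_score = None, -np.inf
    for angle in angles:
        rot = cv2.getRotationMatrix2D(center, angle, 1.0)
        rotated = cv2.warpAffine(image, rot, (w, h))
        # FFT-based cross-correlation (handles the unknown translation)
        corr = np.fft.ifft2(np.fft.fft2(rotated) * np.conj(np.fft.fft2(template))).real
        score = corr.max()
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle

# Coarse pass, then a finer pass around the best coarse angle
coarse = rotation_by_brute_force(image, template, np.arange(-5, 5.1, 0.5))
fine = rotation_by_brute_force(image, template, np.arange(coarse - 0.5, coarse + 0.5, 0.05))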
Cross-correlation is cheap and easy to compute, and leads to high-precision translation estimates (the Fourier Mellin method makes extensive use of it). Don't just take the pixel with the largest value in the cross-correlation output: fit a parabola to the few pixels around it and use the location of the maximum of the fitted parabola. This gives sub-pixel estimates of the translation.
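A sketch of that parabola fit, assuming corr is a 2-D cross-correlation array such as the one computed inside the sketch above; the helper below is my own, fitting each axis independently through the peak and its two neighbours:

import numpy as np

def subpixel_peak_1d(y_minus, y_peak, y_plus):
    # Vertex of the parabola through three equally spaced samples,
    # returned as an offset in (-0.5, 0.5) relative to the integer peak.
    denom = y_minus - 2.0 * y_peak + y_plus
    return 0.0 if denom == 0 else 0.5 * (y_minus - y_plus) / denom

# Assumes the peak is not on the array border
peak_y, peak_x = np.unravel_index(np.argmax(corr), corr.shape)
dy = subpixel_peak_1d(corr[peak_y - 1, peak_x], corr[peak_y, peak_x], corr[peak_y + 1, peak_x])
dx = subpixel_peak_1d(corr[peak_y, peak_x - 1], corr[peak_y, peak_x], corr[peak_y, peak_x + 1])
shift = (peak_y + dy, peak_x + dx)  # sub-pixel translation estimate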

OpenCV feature matching with an empty form template

I've been trying to match a scanned form with its empty template. The goal is to rotate and scale the scan so that it matches the template.
Source (left), template (right)
Match (left), Homography warp (right)
The template does not contain any very specific logo, fixation cross or rectangular frame that would conveniently help me with feature or pattern matching. Even worse, the scanned form can be skewed and altered, and it contains handwritten signatures and stamps.
My approach, after unsuccessfully testing ORB feature matching, was to concentrate on the shape of the form (lines and columns).
The pictures I provide here were obtained by reconstructing lines after a line segment detection (LSD), keeping only segments above a certain minimum size. Most of what remains in both the source and the template is the document layout itself.
In the following script (which should work out of the box along with the pictures), I attempt ORB feature matching, but fail to make it work because it concentrates on edges rather than on the document layout.
import cv2  # using opencv-python v3.4
import numpy as np
from imutils import resize

# Aligning image using ORB descriptors, then homography warp
def align_images(im1, im2, MAX_MATCHES=5000, GOOD_MATCH_PERCENT=0.15):
    # Detect ORB features and compute descriptors.
    orb = cv2.ORB_create(MAX_MATCHES)
    keypoints1, descriptors1 = orb.detectAndCompute(im1, None)
    keypoints2, descriptors2 = orb.detectAndCompute(im2, None)
    # Match features.
    matcher = cv2.DescriptorMatcher_create(cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING)
    matches = matcher.match(descriptors1, descriptors2, None)
    # Sort matches by score
    matches.sort(key=lambda x: x.distance, reverse=False)
    # Remove not so good matches
    numGoodMatches = int(len(matches) * GOOD_MATCH_PERCENT)
    matches = matches[:numGoodMatches]
    # Draw top matches
    imMatches = cv2.drawMatches(im1, keypoints1, im2, keypoints2, matches, None)
    # Extract location of good matches
    points1 = np.zeros((len(matches), 2), dtype=np.float32)
    points2 = np.zeros((len(matches), 2), dtype=np.float32)
    for i, match in enumerate(matches):
        points1[i, :] = keypoints1[match.queryIdx].pt
        points2[i, :] = keypoints2[match.trainIdx].pt
    # Find homography
    h, mask = cv2.findHomography(points1, points2, cv2.RANSAC)
    # Use homography
    if len(im2.shape) == 2:
        height, width = im2.shape
    else:
        height, width, channels = im2.shape
    im1Reg = cv2.warpPerspective(im1, h, (width, height))
    return im1Reg, h, imMatches

template_fn = './stack/template.jpg'
image_fn = './stack/image.jpg'

im = cv2.imread(image_fn, cv2.IMREAD_GRAYSCALE)
template = cv2.imread(template_fn, cv2.IMREAD_GRAYSCALE)

# Align images
imReg, h, matches = align_images(template, im)

# Display output
cv2.imshow('im', im)
cv2.imshow('template', template)
cv2.imshow('matches', matches)
cv2.imshow('result', imReg)
cv2.waitKey(0)
cv2.destroyAllWindows()
Is there any way to make the pattern matching algorithm work on the image on the left (the source)? (Another idea was to keep only the line intersections.)
Alternatively, I have been trying scale- and rotation-invariant pattern matching with brute-force loops while keeping the maximum correlation, but it is far too resource-consuming and not very reliable.
I'm therefore looking for hints in the right direction using OpenCV.
SOLUTION
The issue was about reducing the image to what really matters: the layout.
Also, ORB was not appropriate, since it is not as robust to rotation and scale changes as SIFT and AKAZE are.
I proceeded as follows (a rough code sketch of these steps is given after the list):
convert the images to black and white
use line segment detection and filter out lines shorter than 1/60th of the image width
reconstruct the image from the segments (line width does not have a big impact)
(optional: resize the pictures to speed up the rest)
apply a Gaussian blur to the line reconstruction, with a kernel about 1/25th of the image width
detect and match features using the SIFT (patented) or AKAZE (free) algorithm
find a homography and warp the source picture to match the template
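A rough sketch of those steps with OpenCV and AKAZE; the helper name, the line thickness, the 0.75 ratio threshold and the odd-kernel rounding are my own choices, and cv2.createLineSegmentDetector is available in the opencv-python 3.4 builds used above but missing from some 4.x releases:

import cv2
import numpy as np

def layout_image(gray, min_len_ratio=1/60, blur_ratio=1/25):
    # Rebuild an image that contains only the document layout:
    # line segments longer than min_len_ratio * width, then blurred.
    w = gray.shape[1]
    lsd = cv2.createLineSegmentDetector()
    lines = lsd.detect(gray)[0]
    canvas = np.zeros_like(gray)
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        if np.hypot(x2 - x1, y2 - y1) >= min_len_ratio * w:
            cv2.line(canvas, (int(x1), int(y1)), (int(x2), int(y2)), 255, 2)
    k = int(blur_ratio * w) | 1  # Gaussian kernel size must be odd
    return cv2.GaussianBlur(canvas, (k, k), 0)

# `im` and `template` as loaded in the script above (grayscale)
src_lines = layout_image(im)
tpl_lines = layout_image(template)

# AKAZE keypoints/descriptors on the layout images, ratio-test matching
akaze = cv2.AKAZE_create()
kp1, des1 = akaze.detectAndCompute(src_lines, None)
kp2, des2 = akaze.detectAndCompute(tpl_lines, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
pts2 = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 5.0)
warped = cv2.warpPerspective(im, H, (template.shape[1], template.shape[0]))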
Matches for AKAZE
Matches for SIFT
I noted:
the layout of the template has to match that of the scanned form, otherwise the alignment will only stick to the parts it recognizes
line detection works better at higher resolution; downsizing afterwards is fine since a Gaussian blur is applied anyway
SIFT produces more features and seems more reliable than AKAZE

How do you use AKAZE in OpenCV with Python

I found an example in C++:
http://docs.opencv.org/3.0-beta/doc/tutorials/features2d/akaze_matching/akaze_matching.html
But there isn't any example in Python showing how to use this feature detector (I also couldn't find anything more in the documentation about AKAZE; there is ORB, SIFT, SURF, etc., but not what I'm looking for):
http://docs.opencv.org/3.1.0/db/d27/tutorial_py_table_of_contents_feature2d.html#gsc.tab=0
Could someone share or show me where I can find information on how to match images in Python with AKAZE?
I am not sure where to find it; the way I made it work was through this function, which uses the brute-force matcher:
import cv2

def kaze_match(im1_path, im2_path):
    # load the images and convert them to grayscale
    im1 = cv2.imread(im1_path)
    im2 = cv2.imread(im2_path)
    gray1 = cv2.cvtColor(im1, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(im2, cv2.COLOR_BGR2GRAY)

    # initialize the AKAZE descriptor, then detect keypoints and extract
    # local invariant descriptors from the image
    detector = cv2.AKAZE_create()
    (kps1, descs1) = detector.detectAndCompute(gray1, None)
    (kps2, descs2) = detector.detectAndCompute(gray2, None)
    print("keypoints: {}, descriptors: {}".format(len(kps1), descs1.shape))
    print("keypoints: {}, descriptors: {}".format(len(kps2), descs2.shape))

    # Match the features
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = bf.knnMatch(descs1, descs2, k=2)

    # Apply ratio test
    good = []
    for m, n in matches:
        if m.distance < 0.9 * n.distance:
            good.append([m])

    # cv2.drawMatchesKnn expects a list of lists as matches
    im3 = cv2.drawMatchesKnn(im1, kps1, im2, kps2, good[1:20], None, flags=2)
    cv2.imshow("AKAZE matching", im3)
    cv2.waitKey(0)
Remember that the feature vectors are binary vectors. Therefore, the similarity is based on the Hamming distance, rather than the commonly used L2 norm or Euclidean distance if you will.
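For example, the raw distance between two of the descriptors computed above can be checked with cv2.norm and the Hamming norm, which counts differing bits (a small illustration of the point, not part of the matching code):

# number of differing bits between the first descriptor of each image
d = cv2.norm(descs1[0], descs2[0], cv2.NORM_HAMMING)
print("Hamming distance:", d)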
I searched for the same tutorial and found that it is given in three alternative languages: C++, Python and Java. There are three hyperlinks for them before the start of the code area.
Try this: https://docs.opencv.org/3.4/db/d70/tutorial_akaze_matching.html

3D normalised cross-correlation in Python

I'm currently doing 2D template matching using OpenCV's MatchTemplate function called from Python. I'm looking to extend my code into 3D but can't find any existing 3D cross-correlation programs. Can anyone help out?
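A minimal sketch of one possible route (my suggestion): scikit-image's match_template performs normalised cross-correlation on 3-D volumes as well as 2-D images. The arrays below are placeholders:

import numpy as np
from skimage.feature import match_template

volume = np.random.rand(64, 64, 64)      # placeholder search volume
template = volume[20:28, 20:28, 20:28]   # placeholder 8x8x8 template

ncc = match_template(volume, template)   # 3-D normalised cross-correlation
peak = np.unravel_index(np.argmax(ncc), ncc.shape)
print("best match at", peak)             # expected near (20, 20, 20)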
Do you mean that you are currently looking for a known object somewhere in an image, that you are only able to handle that object being affine transformed (moved around on a 2D plane), and that you want to be able to handle it being perspective transformed?
You could try using a SURF or SIFT algorithm to find features in your reference and unknown images:
def GetSurfPoints(image, mask):
    surfDetector = cv2.FeatureDetector_create("SURF")
    surfExtractor = cv2.DescriptorExtractor_create("SURF")
    keyPoints = surfDetector.detect(image, mask)
    keyPoints, descriptions = surfExtractor.compute(image, keyPoints)
    return keyPoints, descriptions
Then use FLANN to find matching points (this is from one of the cv2 samples):
def MatchFlann(desc1, desc2, r_threshold=0.6):
    FLANN_INDEX_KDTREE = 1  # bug: flann enums are missing
    flann_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=4)
    flann = cv2.flann_Index(desc2, flann_params)
    idx2, dist = flann.knnSearch(desc1, 2, params={})  # bug: need to provide empty dict
    mask = dist[:, 0] / dist[:, 1] < r_threshold
    idx1 = numpy.arange(len(desc1))
    matches = numpy.int32(zip(idx1, idx2[:, 0]))
    return matches[mask]
Now if you want to, you could use FindHomography to find a transformation that aligns the two images:
referencePoints = numpy.array([keyPoints[match[0]].pt for match in matches])
newPoints = numpy.array([keyPoints[match[1]].pt for match in matches])
transformMatrix, mask = cv2.findHomography(newPoints, referencePoints, method = cv2.cv.CV_LMEDS)
You could then use WarpPerspective and that matrix to align the images. Or you could do something else with the set of matched points found earlier.
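For instance, a minimal sketch of that last warp, assuming newImage and referenceImage are the images the matched points came from:

# Warp the new image into the reference frame using the estimated homography
aligned = cv2.warpPerspective(newImage, transformMatrix,
                              (referenceImage.shape[1], referenceImage.shape[0]))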
