Implementation of Fourier transformation on an image - python

I am trying to replicate the algorithm given in a research paper on rating an image with a blur score.
Please find below the function I have created. I have added comments describing what I was trying to do at each step.
import cv2
import numpy as np

def calculate_blur(image_name):
    img_1 = cv2.imread(image_name)               # Reading the image
    img_2 = np.fft.fft2(img_1)                   # Performing a 2-dimensional FFT on the image
    img_3 = np.fft.fftshift(img_2)               # Finding Fc by shifting the origin of F to the centre
    img_4 = np.fft.ifftshift(img_3)
    af = np.abs(img_4)                           # Calculating the absolute value of the centred Fourier transform
    threshold = np.max(af) / 1000                # Calculating the threshold value, where the max is taken from the absolute value
    Th = np.sum(img_2 > threshold)               # Total number of pixels in F/img_2 whose value > threshold
    fm = Th / (img_1.shape[0] * img_1.shape[1])  # Calculating the image quality measure (fm)
    if fm > 0.05:                                # Assuming fm > 0.05 means "Not Blur" (as I inferred from the results in the paper)
        value = 'Not Blur'
    else:
        value = 'Blur'
    return fm, value
I am seeing that for close-up pictures of a face with appropriate light, the IQM score is greater than 0.05 even when the images are blurry, while for normal images (taken at an appropriate distance from the camera) it gives good results.
I am sharing 2 pictures.
This one has a score of (0.2822434750792747, 'Not Blur').
This one has a score of (0.035472916666666666, 'Blur').
I am trying to understand how exactly this works, i.e. how it decides between the two, and how to improve my function and the detection.

Your code seems to replicate the work in the paper.
Unfortunately, it is not at all this easy to determine if a picture is blurry or not. One can use this to compare multiple images of the same scene, to see which one is sharper or more blurry. If the illumination changes, or the contents of the scene change, the comparison can no longer be made.
I am not aware of any fool-proof method to distinguish an out-of-focus image if there is no in-focus image to compare it to. All these methods will fail, telling you that a perfectly in-focus image of a white wall is out of focus.
The best one can do is compare the power (square of the magnitude of the frequency components) at higher frequencies to that at lower frequencies (using, for example, band-pass filters). This will tell you if the image contains any sharp edges or not. Of course, it will tell you the image is out of focus when the scene only contains smooth transitions and no sharp edges.
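For illustration only, here is a minimal sketch of that idea, assuming a grayscale image and a crude radial split of the spectrum rather than proper band-pass filters (the function name and the cutoff value are my own placeholders):

import cv2
import numpy as np

def high_freq_power_ratio(image_name, cutoff=0.1):
    # Fraction of spectral power lying outside a low-frequency disc.
    # Higher values suggest more sharp detail.
    img = cv2.imread(image_name, cv2.IMREAD_GRAYSCALE)
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(spectrum) ** 2

    rows, cols = img.shape
    cy, cx = rows // 2, cols // 2
    y, x = np.ogrid[:rows, :cols]
    radius = cutoff * min(rows, cols)
    low_mask = (y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2

    return power[~low_mask].sum() / power.sum()

As said above, such a ratio only makes sense for ranking images of the same scene against each other: a low value can mean blur, or simply a scene with no sharp edges.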
This other Q&A has some more ideas.
Nit pick:
img_4 = np.fft.ifftshift(img_3) undoes what img_3 = np.fft.fftshift(img_2) does, so that img_4 == img_2. Nonetheless, shifting the origin in the Fourier domain does not affect any of the subsequent processing, so it is irrelevant whether one uses img_2, img_3 or img_4 in the computations.
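A quick sanity check of that claim (not from the paper, just a demonstration):

import numpy as np

f = np.fft.fft2(np.random.rand(5, 4))
# ifftshift undoes fftshift exactly, for even and odd sizes alike
assert np.allclose(np.fft.ifftshift(np.fft.fftshift(f)), f)
# shifting only rearranges values, so the set of magnitudes is unchanged
assert np.allclose(np.sort(np.abs(np.fft.fftshift(f)), axis=None),
                   np.sort(np.abs(f), axis=None))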

Related

How to find the right point correspondence between template and a rotated image?

I have a .dxf file containing a drawing (template), which is just a piece with holes. From this drawing I successfully extract the coordinates of the holes and their diameters, given as a list [[x1,y1,d1],[x2,y2,d2]...[xn,yn,dn]].
After this, I take a picture of the piece (same as the template) and, after some image processing, I obtain the coordinates of the detected holes and their contours. However, the piece in the picture can be rotated with respect to the template.
How do I find the right hole correspondence (between the coordinates of the holes in the template and the rotated coordinates of the holes in the image) so I can know which diameter corresponds to each hole in the image?
Is there any point-sorting method that can give me this correspondence?
I'm working with Python and OpenCV.
All answers will be highly appreciated. Thanks!!!
Image of Template: https://ibb.co/VVpWmKx
In the template image, contours are drawn at the same size as given in the .dxf file, which differs from the size (in pixels) of the contours of the piece taken from the camera.
Processed image taken from the camera, contours of the piece are shown: https://ibb.co/3rjCg5F
I've tried OpenCV's feature matching functions (the ORB algorithm) to get the rotation angle by which the piece in the picture was rotated with respect to the template, but I still cannot get this rotation angle. How can I get the rotation angle with image descriptors?
Is this the best approach for this problem? Are there any better methods to address it?
Considering the image of the extracted contours, you might not need something as heavy as the feature matching algorithms of the OpenCV library. One approach would be to take the outermost contour of the piece and get its cv::minAreaRect. The resulting rotated rectangle will give you the angle. Now you just have to decide whether the symmetry matches, because the piece might be flipped. That can be done in many ways as well. One of the simplest (ignoring the fact that the scale might be off) is to take the outermost contour again, fill it, and count the percentage of points that overlap with the template. The orientation with the right symmetry should match in almost all points, given that the scale of the matched piece and the template is the same.
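A minimal sketch of that approach, assuming OpenCV 4.x (where findContours returns two values) and a binary image in which the piece is the largest contour; the function name is a placeholder:

import cv2

def piece_rotation(binary_image):
    contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    outer = max(contours, key=cv2.contourArea)       # outermost piece contour
    (cx, cy), (w, h), angle = cv2.minAreaRect(outer)
    # the angle only defines the rectangle up to swapping width/height,
    # and the piece may be mirrored, so the overlap test described above
    # is still needed to resolve the remaining ambiguity
    return angle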
You should use Hu moments, which give a translation-, scale- and rotation-invariant descriptor for matching.
Hu moments are described at https://en.wikipedia.org/wiki/Image_moment and are implemented in OpenCV.
You can dig up the theory of moment invariants on the wiki page pretty easily.
To use them you can simply call:
// Calculate moments
Moments m = moments(im, false);
// Calculate Hu moments
double huMoments[7];
HuMoments(m, huMoments);
A sample set of Hu moments might look like this:
h[0] = 0.00162663
h[1] = 3.11619e-07
h[2] = 3.61005e-10
h[3] = 1.44485e-10
h[4] = -2.55279e-20
h[5] = -7.57625e-14
h[6] = 2.09098e-20
The moments usually span a large dynamic range, so they are usually combined with a log transform that compresses the range for matching:
H=log(H)
H[0] = 2.78871
H[1] = 6.50638
H[2] = 9.44249
H[3] = 9.84018
H[4] = -19.593
H[5] = -13.1205
H[6] = 19.6797
BTW, you might need to pad the template to extract the edge contour
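Since the question is about Python, here is a rough equivalent with OpenCV's Python bindings; the signed log10 transform below is one common convention, not taken verbatim from the answer above:

import cv2
import numpy as np

def log_hu_moments(binary_image_or_contour):
    # seven translation/scale/rotation-invariant Hu moments
    m = cv2.moments(binary_image_or_contour)
    hu = cv2.HuMoments(m).flatten()
    # signed log to compress the dynamic range for matching
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# For contour-to-contour matching, OpenCV can also do the comparison directly:
# score = cv2.matchShapes(contour_a, contour_b, cv2.CONTOURS_MATCH_I1, 0)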

Comparing and plotting regions of the same color over a dataset of a few hundred images

A chem student asked me for help with plotting image segmentation:
A stationary camera takes a picture of the experimental setup every second over a period of a few minutes, yielding around 300 images.
The relevant parts of the setup are two adjacent layers of differently-colored foams observed from the side: basically a 2-color sandwich shrinking from both sides, except one of the foams evaporates a bit faster.
I'd like to segment each of the images in a way that lets me plot both foam regions' "width" against time.
Here is a "diagram" :)
I want to go from here --> To here
Ideally, given a few hundred such shots, in which only the widths change, I get back an array of scalars that I can plot. (It's going to look like a harmonic series on either side of the x-axis.)
I have a bit of Python and MATLAB experience, but have never used OpenCV or the Image Processing Toolbox in MATLAB, and have actually never dealt with computer vision in general. Could you throw me a roadmap of which packages/functions to use, or which steps to take, and I'll take it from there?
I'm not sure how to address these things:
- selecting at which position along the length of the foams the algorithm measures the width (i.e. if the foams are a bit uneven), although this can be ignored;
- which library to use to segment regions of the image based on their color (some k-means shenanigans, probably) and selectively store the spatial parameters of the resulting segments;
- how to iterate the above over a number of files.
Thank you kindly in advance!
Assuming the intensity of the two foams differs after converting to grayscale (if not, just convert to another color space such as HSV or Lab and use one of its components):
img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
First, threshold your grayscale input into a few bands:
ret, thresh1 = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
ret, thresh2 = cv2.threshold(img, 27, 255, cv2.THRESH_BINARY_INV)
ret, thresh3 = cv2.threshold(img, 77, 255, cv2.THRESH_TRUNC)
ret, thresh4 = cv2.threshold(img, 97, 255, cv2.THRESH_TOZERO)
ret, thresh5 = cv2.threshold(img, 227, 255, cv2.THRESH_TOZERO_INV)
The threshold values should be tuned on your actual data; these are just examples.
Clean up the segmented images using a median filter with a radius larger than 9, since I would expect some noise. You could also use an ROI here to help remove part of the noise, but personally I'm lazy and would rather write the program to handle all cases and angles.
thresholded_image_after_smoothing = cv2.medianBlur(thresholded_image, 9)
Each band will correspond to one color (layer). Now you should have N segmented images from one source, where N is the number of layers you wish to track.
Second, use the OpenCV boundingRect function to find the location and width/height of each layer, i.e. call boundingRect on each of the thresholded-and-smoothed images.
C++: Rect boundingRect(InputArray points)
Python: cv2.boundingRect(points) → retval
Last, the rect has x, y, height and width properties. You can use a simple sort to order the layers from top to bottom based on the rect's x attribute. Run through the whole video to obtain the height-vs-time graph for each layer (identified by its x position).
Rect API
Public Attributes
_Tp height // this is what you are looking for
_Tp width
_Tp x // this tells you the position of the band
_Tp y
By plotting the corresponding heights (|AB| or |CD|) over time, you can obtain the graph you need.
A more robust way is to use a Kalman filter to track the position and height, as I would expect some bubbles to occur and interfere with the height of the layers.
To be honest, I didn't expect a chem student to be good at this. Haha, good luck.
If anything goes wrong you can find me here, or email me if I'm not watching Stack Overflow.
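Putting the steps above together, here is a rough per-frame sketch; the grayscale bands are hypothetical and would need tuning on the real images:

import cv2
import numpy as np

def layer_heights(frame, bands=((30, 120), (121, 220))):
    # Return (position, height) of each foam layer in one frame,
    # one entry per grayscale band.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    layers = []
    for lo, hi in bands:
        mask = np.where((gray >= lo) & (gray <= hi), 255, 0).astype(np.uint8)
        mask = cv2.medianBlur(mask, 9)               # clean up speckle noise
        if cv2.countNonZero(mask) == 0:              # layer fully gone in this frame
            layers.append((None, 0))
            continue
        x, y, w, h = cv2.boundingRect(cv2.findNonZero(mask))
        layers.append((y, h))                        # use y (or x) to order the layers
    return layers

Running this over every frame and plotting each layer's height against the frame index gives the height-vs-time curves described above.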
You can select a region of interest straight down the middle of the foams, a few pixels wide. If you stack these regions for each image it will show the shrink over time.
If, for example, you use a 3-pixel-wide ROI, the result for 300 images will be a 900-pixel-wide image, where the left is the start of the experiment and the right is the end. The following image can help you understand:
Though I have not fully tested it, this code should work. Note that there must only be images in the folder you reference.
import cv2
import numpy as np
import os
# path to folder that holds the images
path = '.'
# dimensions of roi
x = 0
y = 0
w = 3
h = 100
# store references to all images
all_images = os.listdir(path)
# sort images
all_images.sort()
# create empty result array
result = np.empty([h,0,3],dtype=np.uint8)
for image in all_images:
    # load image
    img = cv2.imread(path + '/' + image)
    # get the region of interest
    roi = img[y:y+h, x:x+w]
    # add the roi to the previous results
    result = np.hstack((result, roi))

# optional: save result as image
# cv2.imwrite('result.png', result)
# display result - can also plot with matplotlib
cv2.imshow('Result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Update after question edit:
If the foams have different colors, you can easily separate them by converting the image to HSV and using inRange (example). This creates a mask (a 2D array with values from 0-255, one for each pixel) that you can use to calculate the average height and to extract the parameters and area of each region.
You can find a script to help you find the HSV colors for separation in this GitHub repository.
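For illustration, a minimal sketch of that idea; the HSV bounds below are placeholders you would tune (e.g. with the linked script), and the file name is hypothetical:

import cv2
import numpy as np

img = cv2.imread('frame_0001.png')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# placeholder bounds for one foam colour
lower = np.array([20, 80, 80])
upper = np.array([35, 255, 255])
mask = cv2.inRange(hsv, lower, upper)        # 255 where the colour matches, 0 elsewhere

# per-column pixel count of the foam, then the average height of the band
column_counts = (mask > 0).sum(axis=0)
mean_height = column_counts[column_counts > 0].mean() if column_counts.any() else 0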

How to measure rotation angle of an image compared to a known template

I've succeeded using the method below, but I'm sure there must be other, more time-efficient alternatives that provide the exact angle of rotation instead of an approximation. I'll be pleased to hear your feedback.
The procedure is based on the following steps:
Import a template image (i.e.: with orientation at 0º)
Create a discrete array of the same image, each one rotated by 360º/rotate_steps compared to its nearest neighbour (i.e. 30 to 50 rotated images):
# python 3 / opencv 3

# Settings:
rotate_steps = 36
step_angle = round((360 / rotate_steps), 0)  # one image every 10º

# Rotation function
def rotate_image(image, angle):
    # ../..
    return rotated_image

# Importing a sample image and creating an n-dimensional array to store the rotated images in:
image = cv2.imread('sample_image.png')
image_array = np.zeros((image.shape[1], image.shape[0], 1), dtype='uint8')

# Rotating the sample image and saving each rotation into the array as a new channel:
while rotation_angle <= (360 - step_angle):
    angles.append(rotation_angle)
    image_array[:, :, channel] = rotate_image(image.copy(), rotation_angle)
    # ../..
So I get:
angles = [0, 10.0, 20.0, 30.0, .../..., 340.0, 350.0]
image_array = [image_1, image_2, image_3, ...] where each image_i is a different channel of a numpy array.
Retrieve the 'test_image' whose angle (relative to the sample image we previously rotated and stored in the array) I am looking for.
Run a series of cv2.matchTemplate() and cv2.minMaxLoc() calls to find which rotated image's angle best matches the 'test_image':
for i in range(len(angles)):
    res = cv2.matchTemplate(test_image, image_array[:, :, i], cv2.TM_CCOEFF_NORMED)
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
    # ../..
Finally, I pick the discretized angle whose rotated template gives the highest 'max_val' as the best match for the test image.
This has proved to work well, bearing in mind that the precision of the result depends on the number of rotated template images, and that the time taken grows as that number increases...
I'm sure there must be other, smarter alternatives based on different methods, such as generating a kind of "orientation vector" for an image and comparing just the resulting number with a previously known one from a sample template...
Your feedback will be highly appreciated.
I think your problem doesn't have an easy solution. It's in fact a registration problem, warping (in this case, rotating) an image to fit another. And it's a known difficult problem, as segmentation is.
I heard image processing researchers say that "he who masters segmentation and registration masters image processing", which might be a little bit of a hyperbole, but it gives the general idea.
Anyway, your technique is how I would have gone about it. Looking on ResearchGate (https://www.researchgate.net/post/How_can_one_determine_the_rotation_angle_between_two_images), lots of answers go your way too. The alternative would be using feature matching, but I'm not sure it would be faster than your solution.
Maybe you can have a look at OpenCV registration methods http://docs.opencv.org/trunk/db/d61/group__reg.html (the method in this link uses pixel matching and not feature matching, maybe it's faster)

calculate angle of rotation image wrt another image

I am developing an application which processes cheques for banks. However, the bank's image of a cheque can be skewed or rotated slightly, by an angle of at most 20 degrees. Before the cheque can be processed, I need to properly align this skewed image. I am stuck here.
My initial idea was to first get the straight horizontal lines using the Hough line transform in an "ideal" cheque image. Once I have the number of straight lines, I use the same technique to detect straight lines in a skewed image. If the number of lines is less than some threshold, I flag the image as skewed. Following is my attempt:
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 50)
lines = cv2.HoughLinesP(edges, 1, np.pi/180, 100, 1000, 100)
if len(lines[0]) > 2:
    pass  # image is mostly properly aligned
else:
    pass  # rotate it by some amount to align it
However, this gets me nowhere in finding the angle by which it is skewed. If I can find the angle, I can just do the following:
#say it is off by +20 degrees
deg = 20
M = cv2.getRotationMatrix2D(center, -deg, 1.0)
rotated = cv2.warpAffine(image, M, (w, h))
I then thought of getting the angle of rotation using the scalar product. But then, the scalar product of which two elements? I cannot get elements from the "bad" cheque by their coordinates in the "ideal" cheque, because its contents are skewed. So, is there any way in OpenCV by which I can, say, superimpose the "bad" image over the "ideal" one and somehow calculate the angle it is off by?
What I would do in your case is to find the check within the image using feature matching with your template check image. Then you only need to find the transformation from one to the other and deduce the angle from this.
Take a look at this OpenCV tutorial that teaches you how to do that.
EDIT:
In fact, if what you want is to have the bank check with the right orientation, the homography is the right tool for that. No need to extract an angle. Just apply it to your image (or its inverse depending on how you computed it) and you should get a beautiful check, ready for processing.
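A minimal sketch of that approach, loosely following the linked tutorial but with ORB instead of SIFT; the file names are placeholders:

import cv2
import numpy as np

template = cv2.imread('ideal_cheque.png', cv2.IMREAD_GRAYSCALE)
skewed = cv2.imread('skewed_cheque.png', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(skewed, None)
kp2, des2 = orb.detectAndCompute(template, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# warp the skewed cheque into the template's frame -- no explicit angle needed
h, w = template.shape
aligned = cv2.warpPerspective(skewed, H, (w, h))

# if you still want an angle, it can be read off the homography's rotation part
# (approximate, assuming the transform is close to a pure rotation)
angle = np.degrees(np.arctan2(H[1, 0], H[0, 0]))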

Image comparison with color noise in OpenCV?

I have this code:
def compare_frames(frame1, frame2):
    # cropping ranges of two images
    frame1, frame2 = similize(frame1, frame2)
    sc = 0
    h = numpy.zeros((300, 256, 3))
    frame1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2HSV)
    frame2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2HSV)
    bins = numpy.arange(256).reshape(256, 1)
    color = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
    for ch, col in enumerate(color):
        hist_item1 = cv2.calcHist([frame1], [ch], None, [256], [0, 255])
        hist_item2 = cv2.calcHist([frame2], [ch], None, [256], [0, 255])
        cv2.normalize(hist_item1, hist_item1, 0, 255, cv2.NORM_MINMAX)
        cv2.normalize(hist_item2, hist_item2, 0, 255, cv2.NORM_MINMAX)
        sc = sc + (cv2.compareHist(hist_item1, hist_item2, cv2.cv.CV_COMP_CORREL) / len(color))
    return sc
It works, but if an image has color noise (a darker/lighter tint) it does not work and gives a similarity of around 0.5 (I need 0.8).
Image 2 is darker than image 1.
Can you suggest a FAST comparison algorithm that ignores lighting, blur and noise on the images, or a way to modify mine?
Note:
I have a template matching algorithm too:
But it works more slowly than I need, although the similarity is 0.95.
def match_frames(frame1, frame2):
    # cropping ranges of two images
    frame1, frame2 = similize(frame1, frame2)
    result = cv2.matchTemplate(frame1, frame2, cv2.TM_CCOEFF_NORMED)
    return numpy.amax(result)
Thanks
Your question is one of the classic ones in computer vision and image processing. Many doctoral theses have been written and scores of papers in conferences and journals.
In short, direct pixel comparisons will not work in this case. A transformation of some kind is needed to take you to a different feature space. You could do something simple or complex depending on the requirements you have in mind. You could compute edges or corners. One suggestion already mentioned is FAST corner detection. This would be a good choice, as would SIFT, etc. There are many others you could use, but which one depends on how much the two images can vary and in what ways.
For example, if there is only going to be global color changes, tint, etc the approach would be different than if the images could be rotated or the object position changing in size (i.e. camera zoom).
Strictly speaking for the case you mention features such as FAST, SIFT, or even edges would work reasonably well. Check http://en.wikipedia.org/wiki/Feature_detection_%28computer_vision%29 for more information
Image patch descriptors (SIFT, SURF...) are usually monochromatic and expect black-and-white images. Thus, for any approach (point matching, frame matching...) I would advise you to change the color space to Lab or YUV first and then work on the luminance plane.
FAST is a (fast) corner detection algorithm. A corner is obviously insensitive to noise and contrast, but may be affected by blur (bad position, bad corner response for example). FAST does not include a descriptor part however, so your matching should then rely on geometric proximity. If you need a descriptor part, then you need to switch to one of the many other keypoint descriptors (SIFT, SURF, FAST + BRIEF/BRISK/ORB/FREAK...).
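As a small illustration of working on the luminance plane, the histogram comparison from the question could be restricted to the L channel of Lab; this is only a sketch, and it assumes the frames have already been cropped to the same region (e.g. with the similize helper from the question):

import cv2

def compare_frames_luminance(frame1, frame2):
    # correlate luminance histograms only, so global colour tints matter less
    l1 = cv2.split(cv2.cvtColor(frame1, cv2.COLOR_BGR2LAB))[0]
    l2 = cv2.split(cv2.cvtColor(frame2, cv2.COLOR_BGR2LAB))[0]
    h1 = cv2.calcHist([l1], [0], None, [256], [0, 256])
    h2 = cv2.calcHist([l2], [0], None, [256], [0, 256])
    cv2.normalize(h1, h1, 0, 255, cv2.NORM_MINMAX)
    cv2.normalize(h2, h2, 0, 255, cv2.NORM_MINMAX)
    return cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)

If global brightness shifts are also an issue, equalizing the L channel (cv2.equalizeHist) before building the histograms may help.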
