Image comparison with color noise in OpenCV? - python

I have this code:
def compare_frames(frame1, frame2):
    # cropping ranges of two images
    frame1, frame2 = similize(frame1, frame2)
    sc = 0
    h = numpy.zeros((300, 256, 3))
    frame1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2HSV)
    frame2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2HSV)
    bins = numpy.arange(256).reshape(256, 1)
    color = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
    for ch, col in enumerate(color):
        hist_item1 = cv2.calcHist([frame1], [ch], None, [256], [0, 255])
        hist_item2 = cv2.calcHist([frame2], [ch], None, [256], [0, 255])
        cv2.normalize(hist_item1, hist_item1, 0, 255, cv2.NORM_MINMAX)
        cv2.normalize(hist_item2, hist_item2, 0, 255, cv2.NORM_MINMAX)
        sc = sc + (cv2.compareHist(hist_item1, hist_item2, cv2.cv.CV_COMP_CORREL) / len(color))
    return sc
It works, but if an image has color noise (a darker or lighter tint) it fails and gives a similarity of about 0.5, where I need 0.8.
Image 2 is darker than image 1.
Can you suggest a fast comparison algorithm that ignores lighting, blur and noise in the images, or a way to modify the one above?
Note:
I have a template matching algorithm too:
It works more slowly than I need, although the similarity it reports is 0.95.
def match_frames(frame1, frame2):
    # cropping ranges of two images
    frame1, frame2 = similize(frame1, frame2)
    result = cv2.matchTemplate(frame1, frame2, cv2.TM_CCOEFF_NORMED)
    return numpy.amax(result)
Thanks

Your question is one of the classic ones in computer vision and image processing. Many doctoral theses have been written about it, and scores of papers published in conferences and journals.
In short, direct pixel comparisons will not work in this case. A transformation of some kind is needed to take you to a different feature space. You could do something simple or complex depending on the requirements you have in mind. You could compute edges or corners. One suggestion already mentioned is FAST corner detection. This would be a good choice, as would SIFT, etc. There are many others you could use, but it will depend on how much the two images can vary and in what ways.
For example, if there are only going to be global color changes, tint, etc., the approach would be different than if the images could be rotated or the objects could change in size (i.e. camera zoom).
Strictly speaking, for the case you mention, features such as FAST, SIFT, or even edges would work reasonably well. See http://en.wikipedia.org/wiki/Feature_detection_%28computer_vision%29 for more information.

Image patch descriptors (SIFT, SURF...) are usually monochromatic and expect black-and-white images. Thus, for any approach (point matching, frame matching...) I would advise you to change the color space to Lab or YUV first and then work on the luminance plane.
FAST is a (fast) corner detection algorithm. A corner is obviously insensitive to noise and contrast, but may be affected by blur (bad position, bad corner response for example). FAST does not include a descriptor part however, so your matching should then rely on geometric proximity. If you need a descriptor part, then you need to switch to one of the many other keypoint descriptors (SIFT, SURF, FAST + BRIEF/BRISK/ORB/FREAK...).
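To make that concrete, here is a minimal sketch of such a pipeline, assuming OpenCV 3+ (where the detector is created with cv2.ORB_create; on the 2.x API used in the question it would be cv2.ORB()). It converts both frames to Lab, keeps only the luminance plane, detects ORB keypoints and descriptors on it, and uses the fraction of cross-checked matches as a rough, lighting-insensitive similarity score. The function name and the scoring heuristic are illustrative, not part of the original answer.
import cv2
import numpy as np

def keypoint_similarity(img1, img2, max_features=500):
    # work on the luminance plane only, as suggested above
    l1 = cv2.cvtColor(img1, cv2.COLOR_BGR2LAB)[:, :, 0]
    l2 = cv2.cvtColor(img2, cv2.COLOR_BGR2LAB)[:, :, 0]

    orb = cv2.ORB_create(nfeatures=max_features)
    kp1, des1 = orb.detectAndCompute(l1, None)
    kp2, des2 = orb.detectAndCompute(l2, None)
    if des1 is None or des2 is None:
        return 0.0

    # binary descriptors -> Hamming distance; cross-check keeps only mutual best matches
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    # fraction of keypoints that found a mutual match, as a crude similarity score
    return float(len(matches)) / max(min(len(kp1), len(kp2)), 1)
Because the matching is done on keypoints of the luminance plane, a global tint or brightness change barely affects the score, unlike the per-channel histogram correlation in the question.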

Related

Implementation of Fourier transformation on an image

I am trying to replicate the algorithm given in a research paper for rating an image with a blur score.
Please find below the function I have created. I have added comments explaining what I was trying to do at each step.
def calculate_blur(image_name):
    img_1 = cv2.imread(image_name)  # Reading the image
    img_2 = np.fft.fft2(img_1)  # Performing a 2-dimensional FFT on the image
    img_3 = np.fft.fftshift(img_2)  # Finding Fc by shifting the origin of F to the centre
    img_4 = np.fft.ifftshift(img_3)
    af = np.abs(img_4)  # Calculating the absolute value of the centred Fourier transform
    threshold = np.max(af) / 1000  # Threshold, where the max value is taken from the absolute value
    Th = np.sum(img_2 > threshold)  # Total number of pixels in F/img_2 whose value > threshold
    fm = Th / (img_1.shape[0] * img_1.shape[1])  # Calculating the image quality measure (fm)
    if fm > 0.05:  # Assuming fm > 0.05 means Not Blur (as I inferred from the results given in the research paper)
        value = 'Not Blur'
    else:
        value = 'Blur'
    return fm, value
I am seeing that for close-up pictures of a face with appropriate light, the IQM score is greater than 0.05 even when the images are blurry, while for normal images (taken at an appropriate distance from the camera) it gives good results.
I am sharing 2 pictures.
This one has a score of (0.2822434750792747, 'Not Blur')
This one has a score of (0.035472916666666666, 'Blur')
I am trying to understand how exactly it works under the hood, i.e. how it decides between the two, and how I can improve my function and the detection.
Your code seems to replicate the work in the paper.
Unfortunately, it is not at all this easy to determine if a picture is blurry or not. One can use this to compare multiple images of the same scene, to see which one is sharper or more blurry. If the illumination changes, or the contents of the scene changes, the comparison can no longer be made.
I am not aware of any fool-proof method to distinguish an out-of-focus image if there is no in-focus image to compare it to. All these methods will fail, telling you that a perfectly in-focus image of a white wall is out of focus.
The best one can do is compare the power (square of the magnitude of the frequency components) at higher frequencies to that at lower frequencies (using, for example, band-pass filters). This will tell you if the image contains any sharp edges or not. Of course, it will tell you the image is out of focus when the scene only contains smooth transitions and no sharp edges.
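As a rough illustration of that idea (a sketch only, not a drop-in replacement for the paper's measure): split the Fourier power into a low-frequency and a high-frequency band with a simple radial mask and compare the two. The function name and the radius fractions below are arbitrary assumptions.
import cv2
import numpy as np

def highfreq_power_ratio(image_name, low_frac=0.1, high_frac=0.3):
    img = cv2.imread(image_name, cv2.IMREAD_GRAYSCALE)
    F = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(F) ** 2  # power spectrum

    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)  # normalised radius from the centre

    low = power[r < low_frac].sum()    # power near DC (coarse structure)
    high = power[r > high_frac].sum()  # power at higher frequencies (edges, texture)
    return high / low
A sharper image of the same scene should give a larger ratio, but as stressed above the absolute value means little across different scenes.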
This other Q&A has some more ideas.
Nit pick:
img_4 = np.fft.ifftshift(img_3) undoes what img_3 = np.fft.fftshift(img_2) does, so that img_4 == img_2. Nonetheless, shifting the origin in the Fourier domain does not affect any of the subsequent processing, so it is irrelevant whether one uses img_2, img_3 or img_4 in the computations.
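A quick way to convince yourself of that (a tiny sketch, with a random array standing in for the spectrum):
import numpy as np

F = np.fft.fft2(np.random.rand(64, 64))
assert np.allclose(np.fft.ifftshift(np.fft.fftshift(F)), F)  # the two shifts cancel exactly
# shifting only rearranges coefficients, so any statistic of the magnitudes is unchanged
t = np.abs(F).max() / 1000
assert (np.abs(F) > t).sum() == (np.abs(np.fft.fftshift(F)) > t).sum()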

Comparing and plotting regions of the same color over a dataset of a few hundred images

A chem student asked me for help with plotting image segmentation:
A stationary camera takes a picture of the experimental setup every second over a period of a few minutes, so it yields around 300 images.
The relevant parts of the setup are two adjacent layers of differently-colored foams observed from the side, basically a two-color sandwich shrinking from both sides, except that one of the foams evaporates a bit faster.
I'd like to segment each of the images in the way that would let me plot both foam regions' "width" against time.
Here is a "diagram" :)
I want to go from here --> To here
Ideally, given a few hundred of such shots, in which only the widths change, I get an array of scalars back that I can plot. (Going to look like a harmonic series on either side of the x-axis)
I have a bit of Python and MATLAB experience, but have never used OpenCV or the Image Processing Toolbox in MATLAB, and have actually never dealt with any computer vision in general. Could you give me a roadmap of which packages/functions to use, or which steps to take, and I'll take it from there?
I'm not sure how to address these things:
- selecting at which slice along the length of the layers the algorithm measures the width (i.e. if the foams are a bit uneven), although this can be ignored.
- which library to use to segment regions of the image based on their color (some k-means shenanigans, probably), and how to selectively store the spatial parameters of the resulting segments?
- how to iterate the above over a number of files.
Thank you kindly in advance!
Assuming the intensity of the two foams will be different after converting to grayscale (if not, just convert to another color space like HSV or LAB, then use one of its components):
img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
First, threshold your grayscaled input into a few bands:
ret,thresh1 = cv2.threshold(img,128,255,cv2.THRESH_BINARY)
ret,thresh2 = cv2.threshold(img,27,255,cv2.THRESH_BINARY_INV)
ret,thresh3 = cv2.threshold(img,77,255,cv2.THRESH_TRUNC)
ret,thresh4 = cv2.threshold(img,97,255,cv2.THRESH_TOZERO)
ret,thresh5 = cv2.threshold(img,227,255,cv2.THRESH_TOZERO_INV)
The threshold values should be tuned against your actual data; the ones here are just an example.
Clean up the segmented image using a median filter with a radius larger than 9, since some noise is to be expected. You can also use an ROI here to help remove part of the noise, but personally I'm lazy and just wrote the program to handle all cases and angles.
thresholded_images_after_smoothing = cv2.medianBlur(thresholded_images, 9)
Each band will correspond to one color (layer). You should now have N segmented images from one source image, where N is the number of layers you wish to track.
Second, use the OpenCV function boundingRect to find the location and width/height of each layer, i.e. run boundingRect on each sub-segmented image:
C++: Rect boundingRect(InputArray points)
Python: cv2.boundingRect(points) → retval
Last, the rect has x, y, height and width attributes. You can use a simple sort on the rect's x attribute to order the layers from top to bottom. Run through the whole video to obtain the height-vs-time graph for each x (layer id).
Rect API
Public Attributes
_Tp height  // this is what you are looking for
_Tp width
_Tp x       // this tells you the position of the band
_Tp y
By plotting the corresponding heights (|AB| or |CD|) over time, you can obtain the graph you need; a minimal sketch is given below.
A more correct way is to use a Kalman filter to track the position and height over time, as I would expect some bubbles to occur and interfere with the height of the layers.
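A minimal sketch of the steps above for a single frame. The function name and the grayscale band limits are my own placeholders, and the simple band masks stand in for the multi-threshold step described above; everything needs tuning to the real data.
import cv2
import numpy as np

def layer_heights(frame, bands=((100, 160), (170, 230))):
    """Return the bounding-rect height of each color band in one frame.
    `bands` are illustrative grayscale ranges, one (low, high) pair per foam layer."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    heights = []
    for low, high in bands:
        mask = cv2.inRange(gray, low, high)   # isolate this band
        mask = cv2.medianBlur(mask, 9)        # clean-up, as suggested above
        points = cv2.findNonZero(mask)
        if points is None:
            heights.append(0)
            continue
        x, y, w, h = cv2.boundingRect(points)  # location and extent of the layer
        heights.append(h)
    return heights
Calling this for every frame and stacking the results gives the height-vs-time (or width-vs-time) series to plot.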
To be honest, I didn't expect a chem student to be good at this. Haha, good luck!
If anything goes wrong you can find me here, or email me if I'm not watching Stack Overflow.
You can select a region of interest straight down the middle of the foams, a few pixels wide. If you stack these regions for each image it will show the shrink over time.
If for example you use 3 pixel width for the roi, the result of 300 images will be a 900 pixel wide image, where the left is the start of the experiment and the right is the end. The following image can help you understand:
Though I have not fully tested it, this code should work. Note that there must only be images in the folder you reference.
import cv2
import numpy as np
import os

# path to folder that holds the images
path = '.'

# dimensions of roi
x = 0
y = 0
w = 3
h = 100

# store references to all images
all_images = os.listdir(path)
# sort images
all_images.sort()

# create empty result array
result = np.empty([h, 0, 3], dtype=np.uint8)

for image in all_images:
    # load image
    img = cv2.imread(path + '/' + image)
    # get the region of interest
    roi = img[y:y + h, x:x + w]
    # add the roi to previous results
    result = np.hstack((result, roi))

# optional: save result as image
# cv2.imwrite('result.png', result)

# display result - can also plot with matplotlib
cv2.imshow('Result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Update after question edit:
If the foams have different colors, you can easily separate them by color by converting the image to HSV and using inRange (example). This creates a mask (a 2D array with values from 0-255, one for each pixel) that you can use to calculate the average height and to extract the parameters and area of each region.
You can find a script to help you find the HSV colors for the separation in this GitHub repository.
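A small sketch of that update (the function name and the HSV bounds below are placeholders; use the linked script to find values matching the actual foam colors):
import cv2
import numpy as np

def foam_width(frame, hsv_low=(35, 60, 60), hsv_high=(85, 255, 255)):
    """Width (in pixels) of one foam layer, isolated by color."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_low, hsv_high)  # 0/255 mask of this foam's color
    cols = np.where(mask.any(axis=0))[0]        # columns containing any masked pixel
    return 0 if cols.size == 0 else cols[-1] - cols[0] + 1
Repeating this per image and per color range gives the two width-vs-time curves to plot.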

How to detect rectangular items in image with Python

I have found a plethora of questions regarding finding "things" in images using openCV, et al. in Python but so far I have been unable to piece them together for a reliable solution to my problem.
I am attempting to use computer vision to help count tiny surface mount electronics parts. The idea is for me to dump parts onto a solid color piece of paper, snap a picture, and have the software tell me how many items are in it.
The "things" differ from one picture to the next but will always be identical in any one image. I seem to be able to manually tune the parameters for things like hue/saturation for a particular part but it tends to require tweaking every time I change to a new part.
My current, semi-functioning code is posted below:
import imutils
import numpy
import cv2
import sys


def part_area(contours, round=10):
    """Finds the mode of the contour area. The idea is that most of the parts in an image will be separated and that
    finding the most common area in the list of areas should provide a reasonable value to approximate by. The areas
    are rounded to the nearest multiple of 200 to reduce the list of options."""
    # Start with a list of all of the areas for the provided contours.
    areas = [cv2.contourArea(contour) for contour in contours]
    # Determine a threshold for the minimum amount of area as 1% of the overall range.
    threshold = (max(areas) - min(areas)) / 100
    # Trim the list of areas down to only those that exceed the threshold.
    thresholded = [area for area in areas if area > threshold]
    # Round the areas to the nearest value set by the round argument.
    rounded = [int((area + (round / 2)) / round) * round for area in thresholded]
    # Remove any areas that rounded down to zero.
    cleaned = [area for area in rounded if area != 0]
    # Count the areas with the same values.
    counts = {}
    for area in cleaned:
        if area not in counts:
            counts[area] = 0
        counts[area] += 1
    # Reduce the areas down to only those that are in groups of three or more with the same area.
    above = []
    for area, count in counts.iteritems():
        if count > 2:
            for _ in range(count):
                above.append(area)
    # Take the mean of the areas as the average part size.
    average = sum(above) / len(above)
    return average


def find_hue_mode(hsv):
    """Given an HSV image as an input, compute the mode of the list of hue values to find the most common hue in the
    image. This is used to determine the center for the background color filter."""
    pixels = {}
    for row in hsv:
        for pixel in row:
            hue = pixel[0]
            if hue not in pixels:
                pixels[hue] = 0
            pixels[hue] += 1
    counts = sorted(pixels.keys(), key=lambda key: pixels[key], reverse=True)
    return counts[0]


if __name__ == "__main__":
    # load the image and resize it to a smaller factor so that the shapes can be approximated better
    image = cv2.imread(sys.argv[1])

    # define range of blue color in HSV
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    center = find_hue_mode(hsv)
    print 'Center Hue:', center

    lower = numpy.array([center - 10, 50, 50])
    upper = numpy.array([center + 10, 255, 255])

    # Threshold the HSV image to get only blue colors
    mask = cv2.inRange(hsv, lower, upper)
    inverted = cv2.bitwise_not(mask)

    blurred = cv2.GaussianBlur(inverted, (5, 5), 0)
    edged = cv2.Canny(blurred, 50, 100)
    dilated = cv2.dilate(edged, None, iterations=1)
    eroded = cv2.erode(dilated, None, iterations=1)

    # find contours in the thresholded image and initialize the shape detector
    contours = cv2.findContours(eroded.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = contours[0] if imutils.is_cv2() else contours[1]

    # Compute the area for a single part to use when setting the threshold and calculating the number of parts within
    # a contour area.
    part_area = part_area(contours)
    # The threshold for a part's area - can't be too much smaller than the part itself.
    threshold = part_area * 0.5

    part_count = 0
    for contour in contours:
        if cv2.contourArea(contour) < threshold:
            continue

        # Sometimes parts are close enough together that they become one in the image. To battle this, the total area
        # of the contour is divided by the area of a part (derived earlier).
        part_count += int((cv2.contourArea(contour) / part_area) + 0.1)  # this 0.1 "rounds up" slightly and was determined empirically

        # Draw an approximate contour around each detected part to give the user an idea of what the tool has computed.
        epsilon = 0.1 * cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, epsilon, True)
        cv2.drawContours(image, [approx], -1, (0, 255, 0), 2)

    # Print the part count and show off the processed image.
    print 'Part Count:', part_count
    cv2.imshow("Image", image)
    cv2.waitKey(0)
Here's an example of the type of input image I am using:
or this:
And I'm currently getting results like this:
The results clearly show that the script is having trouble identifying some parts, and its true Achilles' heel seems to be when parts touch one another.
So my question/challenge is, what can I do to improve the reliability of this script?
The script is to be integrated into an existing Python tool so I am searching for a solution using Python. The solution does not need to be pure Python as I am willing to install whatever 3rd party libraries might be needed.
If the objects are all of similar types, you might have more success isolating a single example in the image and then using feature matching to detect them.
A full solution would be out of scope for Stack Overflow, but my suggestion for progress would be to first somehow find one or more "correct" examples using your current rectangle retrieval method. You could probably look for all your samples that are of the expected size, or that are accurate rectangles.
Once you have isolated a few positive examples, use some feature matching techniques to find the others. There is a lot of reading up you probably need to do on it but that is a potential solution.
A general summary is that you use your positive examples to find "features" of the object you want to detect. These "features" are generally things like corners or changes in gradient. OpenCV contains many methods you can use.
Once you have the features, there are several algorithms in OpenCV you can look at that will search the image for all matching features. You'll want one that is rotation invariant (it can detect the same features at different rotations), but you probably don't need scale invariance (detecting the same features at multiple scales).
My one concern with this method is that the items you are searching for in your images are quite small. It might be difficult to find good, consistent features to match on.
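As a rough sketch of that direction (the function name and thresholds are mine, and ORB is just one of the rotation-invariant options OpenCV offers):
import cv2

def match_part_features(template_gray, scene_gray, ratio=0.75):
    """Match features from one isolated 'good' part against the full scene image."""
    orb = cv2.ORB_create(nfeatures=1000)  # rotation-invariant keypoints + binary descriptors
    kp_t, des_t = orb.detectAndCompute(template_gray, None)
    kp_s, des_s = orb.detectAndCompute(scene_gray, None)
    if des_t is None or des_s is None:
        return [], kp_t, kp_s

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des_t, des_s, k=2)

    # Lowe's ratio test to keep only distinctive matches
    good = [m[0] for m in matches if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    return good, kp_t, kp_s
As the caveat above notes, very small parts may simply not yield enough stable keypoints for this to beat an area-based approach.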
You're tackling a 2D object recognition problem, for which there are many possible approaches. You've gone about it using background/foreground segmentation, which is OK since you have control over the scene (laying down the background paper sheet). However, this will always have fundamental limitations when the objects touch. A simple solution to your problem could be this:
1) You assume that touching objects are rare events (which is a fine assumption for your problem). Therefore you can compute the area of each segmented region and take the median of these areas, which gives a robust estimate of a single object's area. Let's call this robust estimate A (in squared pixels). This will be fine as long as fewer than 50% of the regions correspond to touching objects.
2) You then proceed to estimate the number of objects in each segmented region. Let Ai be the area of the i-th region. You compute the number of objects in each region as Ni = round(Ai / A), and then sum the Ni to get the total number of objects (see the sketch after the conditions below).
This approach will be fine as long as the following conditions are met:
A) The touching objects do not significantly overlap
B) You do not have objects lying on their sides. If you do you might be able to deal with this using two area estimates (side and flat). Better to eliminate this scenario if you can for simplicity.
C) The objects are all roughly the same distance to the camera. If this is not the case then the areas of the objects (in pixels) cannot be modelled well by a single value.
D) There are not partially visible objects at the borders of the image.
E) You ensure that only the same type of object is visible in each image.
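A minimal sketch of that counting rule, starting from contours like the ones the question already extracts (the function name and the noise cutoff are mine):
import cv2
import numpy as np

def count_parts(contours, min_area=10.0):
    """Estimate the part count from segmented contours using the median-area rule."""
    areas = [cv2.contourArea(c) for c in contours]
    areas = [a for a in areas if a > min_area]   # drop specks of noise
    if not areas:
        return 0
    A = np.median(areas)                          # robust single-part area estimate
    return int(sum(round(a / A) for a in areas))  # Ni = round(Ai / A), summed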

How to use cv2.findHomography() with Squeezed Images

I am using the Python bindings for OpenCV. I am using keypoint detection and description (i.e. SURF, SIFT, ...) to find a template image contained within a target image, but there is a catch: the template can be "squeezed" in the target image, so that its aspect ratio is different from the original template.
This does not work with findHomography(), since it assumes a simple perspective transform, which cannot have this sort of stretching.
Are there any ways to do this? I have thought about incrementally stretching the target image different amounts to change the aspect ratio, and using findHomography at each iteration, but as far as I can tell there is no way of comparing the quality of a fit (since I'm using RANSAC to find the best fit), so I can't tell at which squeeze level it fits best.
Perhaps counting the number of points that matched correctly from the RANSAC by looking at the length of the returned mask? This seems sorta gross.
This does not work with findHomography(), since it assumes a simple perspective transform, which cannot have this sort of stretching.
This is not true; even an affine warp allows stretching of the aspect ratio and even shear distortion, and homographies extend this further with non-uniform (projective) distortions. For example, the affine transformation given by the matrix
2 0 0
0 1 0
will stretch an image horizontally by a factor of two, as seen with this short program:
import cv2
import numpy as np
img = cv2.imread('lena.png')
affine_warp = np.array([[2, 0, 0], [0, 1, 0]], dtype=np.float32)
dsize = (img.shape[1]*2, img.shape[0])
warped_img = cv2.warpAffine(img, affine_warp, dsize)
cv2.imshow("2x Horizontal Stretching", warped_img)
cv2.waitKey(0)
Producing the output:
So that is not your issue. Homographies allow even stronger warping. Are you running RANSAC yourself or letting the findHomography() function decide your points via RANSAC? Please post your expected output and your current code, possibly in a new question that reflects the problems you're facing.
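On the side question of judging fit quality: counting inliers from the mask returned by findHomography in RANSAC mode is a standard way to score a RANSAC fit, not a gross hack. A minimal sketch, assuming you already have matched point arrays src_pts and dst_pts of shape (N, 1, 2):
import cv2
import numpy as np

def homography_inlier_count(src_pts, dst_pts, ransac_thresh=5.0):
    """Fit a homography with RANSAC and report how many matches it explains."""
    H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, ransac_thresh)
    if H is None:
        return None, 0
    return H, int(mask.ravel().sum())  # mask entries are 1 for inliers, 0 for outliers
Comparing this count (or the inlier ratio) across your candidate squeeze levels would tell you which one fits best.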

How to measure rotation angle of an image compared to a known template

I've succeeded at it using the method below, but I'm sure there must be more time-efficient alternatives that provide the exact angle of rotation rather than the approximation this method gives. I'll be pleased to hear your feedback.
The procedure is based on the following steps:
Import a template image (i.e.: with orientation at 0º)
Create a discrete array of the same image, each entry rotated by 360º/rotate_steps relative to its nearest neighbour (i.e. 30 to 50 rotated images)
# python 3 / opencv 3
import cv2
import numpy as np

# Settings:
rotate_steps = 36
step_angle = round((360 / rotate_steps), 0)  # one image every 10º

# Rotation function
def rotate_image(image, angle):
    # ../..
    return rotated_image

# Importing a sample image and creating an n-dimensional array to store the rotated images in:
image = cv2.imread('sample_image.png')
image_array = np.zeros((image.shape[0], image.shape[1], rotate_steps), dtype='uint8')

# Rotating the sample image and saving each rotation into the array as a new channel:
angles = []
rotation_angle = 0
channel = 0
while rotation_angle <= (360 - step_angle):
    angles.append(rotation_angle)
    image_array[:, :, channel] = rotate_image(image.copy(), rotation_angle)
    # ../..  (rotation_angle and channel are incremented here)
So I get:
angles = [0, 10.0, 20.0, 30.0, .../..., 340.0, 350.0]
image_array = [image_1, image_2, image_3, ...] where image_i is a different channel of a numpy array.
Retrieve the 'test_image' whose angle I want to find relative to the sample image previously rotated and stored in the array
Run a series of cv2.matchTemplate() and cv2.minMaxLoc() calls to find which rotated image's angle best matches the 'test_image'
for i in range(len(angles)):
    res = cv2.matchTemplate(test_image, image_array[:, :, i], cv2.TM_CCOEFF_NORMED)
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
    # ../..
Finally, I pick the discretized angle whose rotated template gives the highest 'max_val' as the angle of the test image.
This has proved to work well, bearing in mind that the result is an approximation whose precision depends on the number of rotated template images, and that the time taken grows as the number of rotated templates increases...
I'm sure there must be smarter alternatives based on different methods, such as generating a kind of "orientation vector" for an image and then comparing just the resulting number against one previously computed from the sample template...
Your feedback will be highly appreciated.
I think your problem doesn't have an easy solution. It's in fact a registration problem, warping (in this case, rotating) an image to fit another. And it's a known difficult problem, as segmentation is.
I heard image processing researchers say that "he who masters segmentation and registration masters image processing", which might be a little bit of a hyperbole, but it gives the general idea.
Anyway, your technique is how I would have gone about it. Looking on ResearchGate, https://www.researchgate.net/post/How_can_one_determine_the_rotation_angle_between_two_images, lots of answers also go your way. The alternative would be using feature matching, but I'm not sure it would be faster than your solution.
Maybe you can have a look at OpenCV registration methods http://docs.opencv.org/trunk/db/d61/group__reg.html (the method in this link uses pixel matching and not feature matching, maybe it's faster)
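For completeness, here is a rough sketch of the feature-matching alternative mentioned above, assuming OpenCV 3.2+ for cv2.ORB_create and cv2.estimateAffinePartial2D; the function name and thresholds are mine. It matches keypoints between the template and the test image and reads the rotation angle off a fitted similarity transform:
import cv2
import numpy as np

def rotation_angle(template_gray, test_gray):
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(template_gray, None)
    kp2, des2 = orb.detectAndCompute(test_gray, None)
    if des1 is None or des2 is None:
        return None

    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    matches = sorted(matches, key=lambda m: m.distance)[:100]  # keep the best matches
    if len(matches) < 3:
        return None

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # similarity transform: rotation + uniform scale + translation
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    if M is None:
        return None
    return np.degrees(np.arctan2(M[1, 0], M[0, 0]))  # rotation angle in degrees
Whether this beats the rotated-template loop in speed depends on the image size and the number of rotation steps, as noted above.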
