shape detection - python

I have tried three algorithms:
Comparison with compare_ssim (SSIM).
Difference detection with PIL (ImageChops.difference).
Image subtraction with OpenCV.
The first algorithm:
# compare_ssim is skimage's SSIM (skimage.metrics.structural_similarity in recent versions)
from skimage.metrics import structural_similarity as compare_ssim
(score, diff) = compare_ssim(img1, img2, full=True)
diff = (diff * 255).astype("uint8")
The second algorithm:
from PIL import Image, ImageChops

img1 = Image.open("canny1.jpg")
img2 = Image.open("canny2.jpg")
diff = ImageChops.difference(img1, img2)
if diff.getbbox():
    diff.show()
The third algorithm:
image3 = cv2.subtract(image1, image2)
The problem is that these algorithms are too sensitive: if the images contain different noise, they report the two images as totally different. Any ideas on how to fix that?

These pictures differ in many ways (deformation, lighting, colors, shape), and simple image processing alone cannot handle all of this.
I would recommend a higher-level method that tries to extract the geometry and color of those tubes in the form of a simple geometric graph, and then compares the graphs rather than the images.
I acknowledge that this is easier said than done, and it will only work with this particular kind of scene.
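As a very rough illustration of that direction, and only a sketch: assuming binary tube masks saved as tubes1.png and tubes2.png (hypothetical filenames), one could skeletonize each mask and compare coarse graph statistics such as total centerline length, number of endpoints and number of junctions, instead of comparing pixels:
import numpy as np
from scipy.ndimage import convolve
from skimage import io, morphology

def graph_stats(path):
    # 'path' points to a hypothetical binary tube mask
    mask = io.imread(path, as_gray=True) > 0.5
    skel = morphology.skeletonize(mask)                  # 1-pixel-wide centerlines
    # number of 8-connected neighbours at each skeleton pixel (centre subtracted)
    neighbours = convolve(skel.astype(int), np.ones((3, 3)), mode='constant') - 1
    endpoints = np.logical_and(skel, neighbours == 1).sum()
    junctions = np.logical_and(skel, neighbours >= 3).sum()
    return int(skel.sum()), int(endpoints), int(junctions)

print(graph_stats('tubes1.png'))
print(graph_stats('tubes2.png'))
Comparing such summary numbers (or running a proper graph-matching step on the skeletons) is far more tolerant to noise and small deformations than a pixel-wise difference.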

It is very difficult to help, since we don't really know which parameters you can change: can you keep your camera fixed? Will it always be just about tubes? What about the tube colors?
Nevertheless, I think what you are looking for is a framework for image registration, and I propose using SimpleElastix. It is mainly used for medical images, so you might have to get familiar with the SimpleITK library. What's interesting is that you have a lot of parameters to control the registration. You will have to look into the documentation to find out how to control a specific image frequency, the one that creates the waves and deforms the images. In the code below I did not configure it to allow enough local distortion; you'll have to find the best trade-off, but I think it should be flexible enough.
Anyway, you can get such a result with the following code. I don't know if it helps, but I hope so:
import cv2
import numpy as np
import SimpleITK as sitk

# Read both images as float for registration
fixedImage = sitk.ReadImage('1.jpg', sitk.sitkFloat32)
movingImage = sitk.ReadImage('2.jpg', sitk.sitkFloat32)

elastixImageFilter = sitk.ElastixImageFilter()

# Coarse affine pre-registration, followed by a free-form (b-spline) stage
affine_registration_parameters = sitk.GetDefaultParameterMap('affine')
affine_registration_parameters["NumberOfResolutions"] = ['6']
affine_registration_parameters["WriteResultImage"] = ['false']
affine_registration_parameters["MaximumNumberOfSamplingAttempts"] = ['4']

parameterMapVector = sitk.VectorOfParameterMap()
parameterMapVector.append(affine_registration_parameters)
parameterMapVector.append(sitk.GetDefaultParameterMap("bspline"))

elastixImageFilter.SetFixedImage(fixedImage)
elastixImageFilter.SetMovingImage(movingImage)
elastixImageFilter.SetParameterMap(parameterMapVector)
elastixImageFilter.Execute()

registeredImage = elastixImageFilter.GetResultImage()
transformParameterMap = elastixImageFilter.GetTransformParameterMap()

# Absolute difference between the registered image and the fixed image
resultImage = sitk.Subtract(registeredImage, fixedImage)
resultImageNp = np.sqrt(sitk.GetArrayFromImage(resultImage) ** 2)

cv2.imwrite('gray_1.png', sitk.GetArrayFromImage(fixedImage))
cv2.imwrite('gray_2.png', sitk.GetArrayFromImage(movingImage))
cv2.imwrite('gray_2r.png', sitk.GetArrayFromImage(registeredImage))
cv2.imwrite('gray_diff.png', resultImageNp)
Your first image resized to 256x256:
Your second image:
Your second image registered with the first one:
Here is the difference between the first and second image which could show what's different:

This is one of the classical problems of image processing, and one that does not have a universally valid answer. The possible answers depend strongly on what type of images you have and what type of information you want to extract from them and from the differences between them.
You can reduce noise in two ways:
a) Take several images of the same object, such that the object does not change. You can stack (average) the images, and the noise is reduced by the square root of the number of images.
b) Run a blur filter over the image. The more you blur, the more noise is averaged out. The noise is reduced by roughly the square root of the number of pixels you average over, but so is the detail in the images.
In both cases (a) and (b) you run the difference analysis after applying either method; a combined sketch follows below.
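A minimal sketch of (a) and (b) combined, assuming a burst of frames shot_0.png ... shot_4.png and a reference image reference.png (all hypothetical filenames):
import cv2
import numpy as np

# (a) stack several frames of the same scene; noise drops roughly as sqrt(N)
frames = [cv2.imread('shot_%d.png' % i, cv2.IMREAD_GRAYSCALE).astype(np.float32)
          for i in range(5)]
stacked = np.mean(frames, axis=0)

# (b) blur both images to average out the remaining noise
smoothed = cv2.GaussianBlur(stacked, (5, 5), 0)
reference = cv2.imread('reference.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
reference = cv2.GaussianBlur(reference, (5, 5), 0)

# difference analysis afterwards
diff = cv2.absdiff(smoothed, reference)
cv2.imwrite('diff.png', diff)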
This is probably not applicable to you, as you likely cannot obtain either: it helps if you can get hold of flatfields, which capture the inhomogeneity of the illumination and of the pixel sensitivity of your camera and allow correcting the images prior to any processing. The same goes for darkfields, which give an estimate of the camera's read-out noise and allow correcting for it.
There is also a third, more high-level option: run your object analysis first, at a detailed-enough level, and compare the results.

Related

Detecting colour difference between images in python

I have a collection of images taken with different lenses (same distortion) and I want to see if there is a difference in the colour.
I think the best way to compare is a simple average, for which I have:
import cv2
import numpy
myimg = cv2.imread('image.jpg')
avg_color_per_row = numpy.average(myimg, axis=0)
avg_color = numpy.average(avg_color_per_row, axis=0)
print(avg_color)
From: How to find the average colour of an image in Python with OpenCV?
I am struggling to find out whether I should be using squared values of RGB and how I would do this in Python. Any help is highly appreciated, as is any advice on other ways to compare the colour of identical images. I am new here, so please advise on any protocol I've missed too. Thank you.
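For reference, I think the squared (root-mean-square) variant I'm asking about would look something like this, though I'm not sure it's correct:
import cv2
import numpy
myimg = cv2.imread('image.jpg').astype(numpy.float64)
# root of the mean of squared values, per BGR channel
rms_color = numpy.sqrt(numpy.mean(myimg ** 2, axis=(0, 1)))
print(rms_color)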
That is a very broad question, and it depends on what you call a "difference of color".
Are your images supposed to be strictly (to the pixel) aligned? Your row-wise comparison seems to indicate so, but the rest of the text makes it very unrealistic (in fact, even in industrial computer vision, for example when checking parts for defects in a very controlled environment with strictly positioned parts and cameras, this is unrealistic).
Are differences of luminosity considered differences of color?
Are the image contents supposed to be identical?
...
Very generally speaking, if what you are trying to assess is the color impact of a lens on an otherwise very similar image (same lighting, same object), I would just compute color histograms and compare them.
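A minimal sketch of that comparison, assuming two images lens_a.jpg and lens_b.jpg (hypothetical filenames); cv2.HISTCMP_CORREL returns values close to 1.0 when the colour distributions match:
import cv2

img1 = cv2.imread('lens_a.jpg')
img2 = cv2.imread('lens_b.jpg')

for channel, name in enumerate(('blue', 'green', 'red')):
    # 64-bin histogram of one colour channel per image
    h1 = cv2.calcHist([img1], [channel], None, [64], [0, 256])
    h2 = cv2.calcHist([img2], [channel], None, [64], [0, 256])
    cv2.normalize(h1, h1)
    cv2.normalize(h2, h2)
    print(name, cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL))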

Image comparison by pixel neighborhoods

Given two images of the same scene with potential alignment, focus, and lighting differences plus noise, I am looking for an operation that produces an image of the difference between them that minimizes these global differences, or at least is more sensitive to structural differences than to global ones. My initial thought was that a comparison between the corresponding neighborhoods around a pixel in image A and the same pixel in image B might work.
Is this function already implemented in OpenCV or some other Python library (scipy, numpy etc)?
My musings:
A simple frame difference would tell me where absolute differences occur, but it is very brittle to alignment and lighting differences. Maybe there is a way to find the standard deviation over a pixel's neighborhood. Numpy's std only works by axis...
This seems like I want the correlation between two signals, but I don't know how to extend this to a non-repeating 2D world. scipy.signal.correlate2d seems like it might work if there were an efficient way to just pass corresponding neighborhoods to it. However, I don't have a good feel for what is going on under the hood.
A convolution of one image, where the kernel comes from corresponding locations in the other image, would give a comparison that handles noise and focus issues well, but I don't know how to use a dynamic kernel for a convolution.
If I had a library of basically identical images to compare one image to (not an original assumption, but doable), I could use a mean difference or a mixture of Gaussians. But I don't think this would help with alignment or lighting. I could align the image first and then do the comparison.
Per the comment below, I looked up skimage's SSIM (Structural Similarity Index) method, which is used to measure image degradation due to things like lossy compression and decompression. It expects two copies of the same image: one a truth source, one in a potentially degraded state. This method is soft on global bias (lighting), which is good, but sensitive to noise (by design) and especially sensitive to misalignment.
The comment also led me to MSE, which acts globally, but if iterated over an image by neighborhood it gives a good result: insensitive to bias and noise, but not to structural differences. However, it is fairly sensitive to alignment differences and very slow in Python...
import cv2
import numpy as np
from scipy import misc

# Mean Squared Error over a patch
def mse(imageA, imageB):
    return np.mean((imageA.astype("float") - imageB.astype("float")) ** 2)

face = misc.face()
nface = np.array(face)
nface[295:305, 395:415] = face[195:205, 495:515]          # discontinuous region
nface = cv2.blur(nface, (3, 3))                           # focus effects
nface = nface + (np.random.randn(*face.shape) * 10 - 5)   # noise
nface = (nface * .9 + 20).astype(int)                     # lighting

n = m = 3
output = np.zeros(face.shape[:2])
for i in range(face.shape[0]):
    for j in range(face.shape[1]):
        if n < i < face.shape[0] - n and m < j < face.shape[1] - m:
            output[i, j] = mse(nface[i-n:i+n, j-m:j+m], face[i-n:i+n, j-m:j+m])
Is this a common image processing technique? Is there a name for this, or an optimized implementation in OpenCV or Numpy?
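As a sketch of an optimized variant, the same per-neighbourhood MSE can be computed without explicit Python loops by box-filtering the squared difference image; the 7x7 window size and the simplified degradation here are arbitrary assumptions:
import numpy as np
from scipy import misc, ndimage

face = misc.face().astype(np.float64)      # scipy.datasets.face() in newer SciPy
noisy = face + np.random.randn(*face.shape) * 10   # simple degraded copy, noise only

sq_err = (noisy - face) ** 2
# mean of the squared error over a 7x7 spatial window, then over the colour channels
local_mse = ndimage.uniform_filter(sq_err, size=(7, 7, 1)).mean(axis=2)
print(local_mse.shape)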

Basic pattern recognition in binary (pixelated) image

Here is a cropped example (about 11x9 pixels) of the kind of images I will be trying to apply the algorithm on (ultimately they are all of size 28x28, but stored in memory flattened as 784-component arrays):
Basically, I want to be able to recognize when this shape appears (red lines are used to emphasize the separation of the pixels, while the surrounding black border is used to better outline the image against the white background of StackOverflow):
The orientation of it doesn't matter: it must be detected in any of its possible representations (rotations and symmetries) along the horizontal and vertical axes (so, for example, a 45° rotation shouldn't be considered, nor a diagonal symmetry: only 90°, 180°, and 270° rotations).
There are two solutions to be found in the image I first presented, though only one needs to be found (ignore the gray blur surrounding the white region):
Take this other sample (which also demonstrates that the white figures inside the images aren't always fully surrounded by black pixels):
The function should return True because the shape is present:
Now, there is obviously a simple solution to this:
Use a variable such as pattern = [[1,0,0,0],[1,1,1,1]], produce its variations, and then slide all of the variations along the image until an exact match is found at which point the whole thing just stops and returns True.
This would, however, in the worst-case scenario, take up to 8*(28-2)*(28-4)*(2*4), which is approximately 40,000 operations for a single image; that seems a bit overkill (if I did my quick calculations right).
I'm guessing one way of improving this naive approach would be to first scan the image until I find the very first white pixel, and then start looking for the pattern 4 rows and 4 columns before that point, but even that doesn't seem good enough.
Any ideas? Maybe this kind of function has already been implemented in some library? I'm looking for an implementation or an algorithm that beats my naive approach.
As a side note, while kind of a hack, I'm guessing this is the kind of problem that could be offloaded to the GPU, but I do not have much experience with that. While it wouldn't be what I'm looking for primarily, if you provide an answer, feel free to add a GPU-related note.
EDIT:
I ended up making an implementation of the accepted answer. You can see my code in this Gist.
If you have too many operations, think about how to do fewer of them.
For this problem I'd use image integrals.
If you convolve a summing kernel over the image (this is a very fast operation in the FFT domain, with just conv2/imfilter), you know that only locations where the integral is equal to 5 (in your case) are possible pattern-matching places. Checking those (even for your 4 rotations) should be computationally very fast. There cannot be more than 50 locations in your example image that fit this pattern.
My Python is not too fluent, but here is a proof of concept for your first image in MATLAB; I am sure that translating this code should not be a problem.
% get the same image you have (imgur upscaled it and made it RGB)
I=rgb2gray(imread('https://i.stack.imgur.com/l3u4A.png'));
I=imresize(I,[9 11]);
I=double(I>50);
% Integral filter definition (with your desired size)
h=ones(3,4);
% horizontal and vertical filter (because your filter is not square)
Ifiltv=imfilter(I,h);
Ifilth=imfilter(I,h');
% find the locations where integral is exactly the value you want
[xh,yh]=find(Ifilth==5);
[xv,yv]=find(Ifiltv==5);
% this is just plotting, for completeness
figure()
imshow(I,[]);
hold on
plot(yh,xh,'r.');
plot(yv,xv,'r.');
This results in 14 locations to check. My standard computer takes 230ns on average to compute both image integrals, which I would call fast.
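As a rough Python equivalent of the MATLAB sketch above (here on a small synthetic binary image rather than the downloaded one; the 3x4 summing kernel mirrors the MATLAB code):
import numpy as np
from scipy import ndimage

# small synthetic binary image containing the 5-pixel pattern
img = np.zeros((9, 11), dtype=int)
img[2, 3:7] = 1            # the four-pixel bar
img[1, 3] = 1              # the single pixel above its left end

h = np.ones((3, 4), dtype=int)   # summing kernel, as in the MATLAB code
filt_v = ndimage.convolve(img, h, mode='constant')
filt_h = ndimage.convolve(img, h.T, mode='constant')

# only windows whose sum is exactly 5 can possibly contain the 5-pixel pattern
print(np.argwhere(filt_v == 5))
print(np.argwhere(filt_h == 5))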
Also, GPU computing is not a hack :D. It's the way to go for a whole class of problems because of the enormous computing power GPUs have. E.g. convolutions on GPUs are incredibly fast.
The operation you are implementing is an operator in mathematical morphology called hit-and-miss.
It can be implemented very efficiently as a composition of two erosions. If the shape you're detecting can be decomposed into a few simple geometrical shapes (rectangles especially are quick to compute), then the operator can be even more efficient.
You'll find very efficient erosions in most image processing libraries, for example in OpenCV. OpenCV also has a hit-and-miss operator; here is a tutorial for how to use it.
As an example of what output to expect, I generated a simple test image (left), applied a hit-and-miss operator with a template that matches at exactly one place in the image (middle), and again with a template that does not match anywhere (right):
I did this in MATLAB, not Python, because I have it open and it's easiest for me to use. This is the code:
se = [1,1,1,1 % Defines the template
0,0,0,1];
img = [0,0,0,0,0,0 % Defines the test image
0,1,1,1,1,0
0,0,0,0,1,0
0,0,0,0,0,0
0,0,0,0,0,0
0,0,0,0,0,0];
img = dip_image(img,'bin');
res1 = hitmiss(img,se);
res2 = hitmiss(img,rot90(se,2));
% Quick-and-dirty display
h = dipshow([img,res1,res2]);
diptruesize(h,'tight',3000)
hold on
plot([5.5,5.5],[-0.5,5.5],'r-')
plot([11.5,11.5],[-0.5,5.5],'r-')
The code above uses the hit-and-miss operator as I implemented it in DIPimage. The same implementation is available in DIPlib's Python bindings as dip.HitAndMiss() (install with pip install diplib):
import diplib as dip
# ...
res = dip.HitAndMiss(img, se)
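For reference, here is a hedged sketch of the same check with OpenCV's hit-and-miss (cv2.morphologyEx with cv2.MORPH_HITMISS). In that convention the kernel uses 1 for required foreground, -1 for required background and 0 for don't-care; the test image is the small one from the MATLAB example above:
import cv2
import numpy as np

img = 255 * np.array([[0,0,0,0,0,0],
                      [0,1,1,1,1,0],
                      [0,0,0,0,1,0],
                      [0,0,0,0,0,0],
                      [0,0,0,0,0,0],
                      [0,0,0,0,0,0]], dtype=np.uint8)

# 1 = must be foreground, -1 = must be background
kernel = np.array([[ 1,  1,  1, 1],
                   [-1, -1, -1, 1]], dtype=int)

hits = cv2.morphologyEx(img, cv2.MORPH_HITMISS, kernel)
print(bool(hits.any()))   # True: the template occurs once in this test image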

Suggestions for detecting straight lines on a sand plate, python

I am working on a project that requires detecting lines on a plate of sand. The lines are hand-drawn by the user, so they are not exactly "straight" (see photo). And because of the sand, the lines are quite hard to distinguish.
I tried cv2.HoughLines from OpenCV but didn't achieve good results. Any suggestions on the detection method? Suggestions for improving the clarity of the lines are also welcome; I am thinking of putting a few LED lights around the plate.
Thanks
The detection method depends a lot on how much generality you require: are the exposure and contrast going to change from one image to another? Is the typical width of the lines going to change? In the following, I assume that such parameters do not vary much for your application; please correct me if I'm wrong.
I'll be using scikit-image, a common image processing package for Python. If you're not familiar with this package, documentation can be found at http://scikit-image.org/, and it is bundled with all installations of Scientific Python. The algorithms that I use are, however, also available in other tools, like OpenCV.
My solution is written below. Basically, the principle is:
First, denoise the image. Life is usually simpler after a denoising step. Here I use a total variation filter, since it results in a piecewise-constant image that is easier to threshold. I then enhance dark regions using a morphological erosion (on the gray-level image).
Then apply an adaptive threshold that varies locally in space, since the contrast varies across the image. This operation results in a binary image.
Erode the binary image to break spurious links between regions, and keep only large regions.
Compute a measure of the elongation of the regions in order to keep only the most elongated ones. Here I use the ratio of the eigenvalues of the inertia tensor.
The parameters that are the most difficult to tune are the block size for the adaptive thresholding and the minimum size of regions to keep. I also tried a Canny filter on the denoised image (skimage.filters.canny) and the results were quite good, but the edges were not always closed; you might nevertheless want to try such an edge-detection method as well.
The result is shown below:
# Import modules
import numpy as np
import matplotlib.pyplot as plt
from skimage import io, measure, morphology, restoration, filters
from skimage import img_as_float

# Open the image
im = io.imread('sand_lines.png')
im = img_as_float(im)

# Denoising: total variation filter, then erosion to enhance dark lines
tv = restoration.denoise_tv_chambolle(im, weight=0.4)
ero = morphology.erosion(tv, morphology.disk(5))

# Threshold the image with a locally adaptive threshold
# (threshold_local replaces the older threshold_adaptive)
binary = ero > filters.threshold_local(ero, 181)

# Clean the binary image: thicken the bright background to break spurious
# links between dark regions, then keep only large dark regions
binary = morphology.binary_dilation(binary, morphology.disk(8))
clean = morphology.remove_small_objects(np.logical_not(binary), 4000)
labels = measure.label(clean, background=0) + 1

# Keep only elongated regions (ratio of inertia tensor eigenvalues)
props = measure.regionprops(labels)
eigvals = np.array([prop.inertia_tensor_eigvals for prop in props])
eigvals_ratio = eigvals[:, 1] / eigvals[:, 0]
eigvals_ratio = np.concatenate(([0], eigvals_ratio))
color_regions = eigvals_ratio[labels]

# Plot the result ('nipy_spectral' is the current name of the 'spectral' colormap)
plt.figure()
plt.imshow(color_regions, cmap='nipy_spectral')
plt.show()

image matching in opencv python

I've been working on a project of recognizing a flag shown to the camera using OpenCV in Python.
I've already tried using SURF, color histogram matching, and template matching, but none of these three always returns the correct answer. What I want to know now is what would be the best solution to this problem of mine.
Example of the template images:
Here is an example of a flag shown in the camera.
What should I use if this is the kind of image that I want to recognize?
Update: code using matchTemplate
flags=["Cambodia.jpg","Laos.jpg","Malaysia.jpg","Myanmar.jpg","Philippines.jpg","Singapore.jpg","Thailand.jpg","Vietnam.jpg","Indonesia.jpg","Brunei.jpg"]
while True:
methods = 'cv2.TM_CCOEFF_NORMED'
list_of_pics=[]
for flag in flags:
template= cv2.imread(flag,0)
img = cv2.imread('philippines2.jpg',0)
# generate Gaussian pyramid for A
G = template.copy()
gpA = [G]
for i in xrange(6):
G = cv2.pyrDown(G)
gpA.append(G)
n=0
for x in gpA:
w, h = x.shape[::-1]
method = eval(methods)#
# Apply template Match
res = cv2.matchTemplate(img,x,method)
matchVal=res[0][0]
picDict={"matchVal":matchVal,"name":flag}
list_of_pics.append(picDict)
n=n+1
newlist = sorted(list_of_pics, key=operator.itemgetter('matchVal'),reverse=True)
#print newlist
matched_image=newlist[0]['name']
print matched_image
k=cv2.waitKey(10)
if (k==27):
break
cv2.destroyAllWindows()
I don't think that you can get good results from SURF/SIFT, because:
SURF/SIFT need keypoints to detect the object, but in your case you have to detect flags, and most flags are largely uniform and do not provide many keypoints.
In your webcam frame you have several things besides the flag, and those other things also contribute keypoints.
Solution: I still think that you should use OpenCV's matchTemplate(), which you have already tried, but the problem with your version is that you didn't account for the fact that matchTemplate() is not scale and orientation invariant. So the solution is to use a Gaussian pyramid and create different sizes (half, one fourth, double, etc.) of your sample flags. After getting the same flag in 2-5 different sizes, you should perform matchTemplate() between every size of the flag and the webcam frame.
Strategy:
Receive the webcam frame.
Load the image of a flag.
Using a Gaussian pyramid, create smaller and bigger versions of that flag (you don't need to store them).
Perform matchTemplate() between the webcam frame and each size of the flag.
Result: the flag for which you get the maximum correlation value is the flag present in your webcam frame (see the sketch after the note below).
REMEMBER: matchTemplate is not scale and orientation invariant, so if you rotate the flag or make it larger/smaller in the webcam frame, you won't get good results.
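A minimal sketch of that strategy (the filenames and the number of pyramid levels are assumptions), using cv2.minMaxLoc to take the best correlation per template:
import cv2

frame = cv2.imread('webcam_frame.jpg', 0)
flags = ['Philippines.jpg', 'Singapore.jpg', 'Thailand.jpg']

best_flag, best_score = None, -1.0
for name in flags:
    template = cv2.imread(name, 0)
    for _ in range(4):                                    # a few pyramid levels per flag
        if template.shape[0] <= frame.shape[0] and template.shape[1] <= frame.shape[1]:
            res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
            _, max_val, _, _ = cv2.minMaxLoc(res)
            if max_val > best_score:
                best_flag, best_score = name, max_val
        template = cv2.pyrDown(template)                  # next (smaller) level

print(best_flag, best_score)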
SURF cannot be applied to images that have no corners (when the gradient mostly goes in one direction, as in a striped flag). A color histogram of the whole object may not work either, since both of your examples have similar colors. However, if you apply histograms to different parts of the image it will work better.
What you need to do is split your training image into, say, 4 quadrants and create 4 color histograms. The testing stage then integrates these 4 back-projected histograms and checks for the right spatial order of responses. A color histogram is quite robust to rotation, scaling and perspective. It changes with illumination, so you need liberal matching thresholds; the spatial resolution from the 4 quadrants helps to ameliorate this.
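A hedged sketch of the per-quadrant comparison (hypothetical filenames; using plain histogram correlation per quadrant rather than a full back-projection stage):
import cv2

def quadrant_hists(path):
    hsv = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2HSV)
    h, w = hsv.shape[:2]
    quads = [hsv[:h//2, :w//2], hsv[:h//2, w//2:], hsv[h//2:, :w//2], hsv[h//2:, w//2:]]
    hists = []
    for q in quads:
        hist = cv2.calcHist([q], [0], None, [32], [0, 180])   # hue channel only
        cv2.normalize(hist, hist)
        hists.append(hist)
    return hists

scores = [cv2.compareHist(a, b, cv2.HISTCMP_CORREL)
          for a, b in zip(quadrant_hists('flag_template.jpg'), quadrant_hists('webcam_crop.jpg'))]
print(scores)   # one similarity score per quadrant, in the same spatial order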
For the future, I recommend studying methods in more detail to understand their applicability, rather than trying them at random.
