Python Image Processing: Measuring Layer Widths from Electron Micrograph

I have an image from an electron micrograph depicting dense and rare layers in a biological system, as shown below.
The layers in question are in the middle of the image, starting just near the label "re" and tapering up to the left. I would like to:
1) count the total number of dark/dense and light/rare layers
2) measure the width of each layer, given that the black scale bar in the bottom right is 1 micron long
I've been trying to do this in Python. If I crop the image beforehand so as to only contain parts of a few layers, such as the 3 dark and 3 light layers shown here:
I am able to count the number of layers using the code:
import numpy as np
import matplotlib.pyplot as plt
from scipy import ndimage
from PIL import Image
tap = Image.open("VDtap.png").convert('L')   # load as 8-bit grayscale
tap_a = np.array(tap)
tap_g = ndimage.gaussian_filter(tap_a, 1)    # light smoothing to suppress noise
tap_norm = (tap_g - tap_g.min())/(float(tap_g.max()) - tap_g.min())
# binarize at the midpoint, then invert so the dark layers become the foreground
tap_norm[tap_norm < 0.5] = 0
tap_norm[tap_norm >= 0.5] = 1
result = 255 - (tap_norm * 255).astype(np.uint8)
tap_labeled, count = ndimage.label(result)   # label connected dark-layer regions
plt.imshow(tap_labeled)
plt.show()
However, I'm not sure how to incorporate the scale bar and measure the widths of these layers that I have counted. Even worse, when analyzing the entire image so as to include the scale bar I am having trouble even distinguishing the layers from everything else that is going on in the image.
I would really appreciate any insight in tackling this problem. Thanks in advance.
EDIT 1:
I've made a bit of progress on this problem so far. If I crop the image beforehand so as to contain just a bit of the layers, I've been able to use the following code to get at the thicknesses of each layer.
import numpy as np
import matplotlib.pyplot as plt
from scipy import ndimage
from PIL import Image
from skimage.measure import regionprops
tap = Image.open("VDtap.png").convert('L')
tap_a = np.array(tap)
tap_g = ndimage.gaussian_filter(tap_a, 1)
tap_norm = (tap_g - tap_g.min())/(float(tap_g.max()) - tap_g.min())
tap_norm[tap_norm < 0.5] = 0
tap_norm[tap_norm >= 0.5] = 1
result = 255 - (tap_norm * 255).astype(np.uint8)
tap_labeled, count = ndimage.label(result)
props = regionprops(tap_labeled)
ds = np.array([])
for i in range(len(props)):
    if i == 0:
        ds = np.append(ds, props[i].bbox[1] - 0)
    else:
        ds = np.append(ds, props[i].bbox[1] - props[i-1].bbox[3])
    ds = np.append(ds, props[i].bbox[3] - props[i].bbox[1])
Essentially, I discovered the Python module skimage, which can take a labeled image array and return the four coordinates of a bounding box for each labeled object; the [1] and [3] positions give the x coordinates of the bounding box, so their difference yields the extent of each layer in the x-dimension. Also, the first part of the for loop (the if-else condition) is used to get the light/rare layers that precede each dark/dense layer, since only the dark layers get labeled by ndimage.label.
Unfortunately this is still not ideal. Firstly, I would like not to have to crop the image beforehand, as I intend to repeat this procedure for many such images. I've considered that perhaps the (rough) periodicity of the layers could be highlighted using some sort of filter, but I'm not sure whether such a filter exists. Secondly, the code above really only gives me the relative width of each layer - I still haven't figured out a way to incorporate the scale bar so as to get the actual widths.

I don't want to be a party-pooper, but I think your problem is harder than you first thought. I can't post a working code snippet because there are so many parts of your post that require in-depth attention. I have worked in several bio/med labs, and this work is usually done with a human tagging specific image points and a computer calculating distances. That being said, one should probably try to automate =D.
To you, the problem is a simple, yet tedious, job of getting out a ruler and making a few hundred measurements. Perfect for a computer, right? Well, yes and no. The computer has no idea how to identify any of the bands in the picture and has to be told exactly what it's looking for, and that will be tricky.
Identifying the scale bar
What do you know about the scale bars in all your images? Are they always the same number of pixels vertically and horizontally, and are they always solid black? Is there always just one bar (and what about the solid stroke of the letter "r")? My suggestion is to try a wavelet transform. Imagine the 2D analog of the function
(it probably helps to draw this function)
f(x) =  0   if |x| > 1
        1   if 0.5 < |x| < 1
       -1   if |x| < 0.5
Then when our wavelet f(x, y) is convolved over the image, the output image will have high values only where it finds the black scale bar. The length that I set to 1 here is also tunable, and tuning it will help you find the scale bar too.
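A minimal sketch of that idea, assuming the full micrograph is saved as "VDtap.png" and guessing a scale-bar size of roughly 100 x 5 pixels (both numbers, and the file name, are assumptions to tune for your images): convolve a zero-sum bar-shaped kernel over the image and look for the strongest response.
import numpy as np
from scipy import ndimage
from PIL import Image
img = np.array(Image.open("VDtap.png").convert('L'), dtype=float)
bar_len, bar_h = 100, 5                 # guessed scale-bar size in pixels
kernel = np.ones((3 * bar_h, bar_len))  # bright surround above and below...
kernel[bar_h:2 * bar_h, :] = -2         # ...dark centre; the kernel sums to zero
response = ndimage.convolve(img, kernel, mode='reflect')
row, col = np.unravel_index(np.argmax(response), response.shape)
print("candidate scale-bar centre at row %d, col %d" % (row, col))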
Finding the ridges
I'd solve the above problem first because it seems easier and sets you up for this one. I'd construct another wavelet for this one, but just as a preprocessing step. For this wavelet I'd try a 2D zero-sum box function again, but this time try to match three (or more) boxes next to each other. Also, in addition to the height and width parameters for the box, we need a spacing and tilt angle parameter. You probably don't have to get very close to the actual value, just close enough that the rest of the image blackens out.
Measuring the ridges
There are lots and lots of ways to do this, but let's use our previous step for simplicity. Take your 3-box wavelet response: it should be centered on the middle ridge, and its box "width" is then the average width of the three ridges it has captured. Probably close enough considering how slowly the widths are changing!
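As for the scale bar itself: once you know how many pixels the 1-micron bar spans (from the matched filter sketched above, or from measuring it once by hand), converting the pixel widths from your EDIT into physical widths is a single ratio. A tiny sketch with made-up numbers:
import numpy as np
bar_px = 220                            # hypothetical: measured scale-bar length in pixels
microns_per_px = 1.0 / bar_px           # the bar is 1 micron long
widths_px = np.array([14, 9, 16, 11])   # e.g. the ds array of layer widths in pixels
widths_um = widths_px * microns_per_px
print(widths_um)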
Good hunting!

Related

Automated Removal of Visually Blank Images from Datasets

I'm trying to filter out images that do not contain any (or much) visible structure from those that have a visible object in them, so I can feed them into a self-supervised neural network.
I want to keep images like this, and I want to remove images like this:
I'm converting chemical imaging data to numpy arrays containing the signal intensity as float data, then using matplotlib to generate these images. To try to filter out the blank images, I first smoothed each image by setting each pixel value to the mean of its surrounding pixels to reduce noise. Then I found the standard deviation (σ) and mean (μ) and tried to filter out the bad images based on a σ, σ/μ, or σ^2/μ threshold, the last of which somewhat worked. But if I set a single threshold that each image must exceed, such as σ^2/μ = 500, and apply it to all datasets, it removes far too many images from some and few to none from others.
Here's an example of me smoothing out the image and comparing σ^2/μ.
import numpy as np
import matplotlib.pyplot as plt
imgs = np.load("example.npy")
smoothed_image = np.empty(imgs.shape[1:])
for i, image in enumerate(imgs):
    for x in range(imgs.shape[1]):
        for y in range(imgs.shape[2]):
            # Select pixels to average
            subset = image[np.clip(x-3, 0, None):np.clip(x+4, None, image.shape[0]-1),
                           np.clip(y-3, 0, None):np.clip(y+4, None, image.shape[1]-1)]
            subset_ave = np.mean(subset)
            smoothed_image[x, y] = subset_ave
    # Show stddev^2/mean and the related image
    print(f'stddev^2/mean = {smoothed_image.std()**2 / smoothed_image.mean()}')
    plt.imshow(image)
    plt.show()
    plt.close()
I need to filter this data in an unsupervised fashion, so checking and changing the threshold for each dataset isn't an option. In addition, this process adds a significant amount of time to my data processing due to the ordering of my workflow. I tried to find other options online, but I don't think I know what to search to find information about this specific issue.
Here is some example data. Selecting any index on axis 0 (ex. images[8]) will give you a single image array.
Any suggestions on what methods I could use to filter images like this, preferably without very time consuming computation?
Thanks in advance!
My first thought is to use an aggressive threshold to suppress nearly all the noise, then simply take the sum of the image and set a threshold that way, kind of like:
image_thresh = image - 100 # where image is a numpy array and 100 would surely suppress noise, but not features
image_thresh[image_thresh<0] = 0
image_sum = np.sum( image_thresh )
Another way is to use OpenCV and look for ellipses or other blobs above a certain size; the OpenCV contour-finding functions are a good place to start on that.
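A rough sketch of that route, assuming OpenCV 4.x, an 8-bit grayscale array, and made-up values for the threshold and minimum blob area:
import cv2
import numpy as np
def has_large_blob(img, thresh_val=100, min_area=200):
    # img: 2D uint8 array; both numbers are placeholders to tune per dataset
    _, binary = cv2.threshold(img, thresh_val, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return any(cv2.contourArea(c) >= min_area for c in contours)
If the blobs of interest are elongated, calling cv2.fitEllipse on the larger contours also gives you their axes.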

How to speed up dilating a 3D region in a boolean numpy array?

I have a 3D numpy array boolean mask which has been segmented from a MRI brain volume.
Brain voxels = True. Everything else = False.
What I would like to do is to enlarge this mask such that it would encompass the surrounding tissues in the MRI volume, not just the segmented organ, perhaps a 10mm rind of non-brain all around the brain.
I tried using a 2D dilation using the skimage.morphology.dilation with a diamond filter. While this is nice and fast for a single image, I need to repeat this in multiple slices through the volume and in at least 2 planes to come even close to uniformly dilating the 3D mask.
I largely took my code from here: https://scipy-lectures.org/packages/scikit-image/index.html
typical volume shape = 512, 512, 270
# 1st pass in axial plane
(x, y, z) = np.shape(3dMask)
for slice_number in range(z):
    image_slice = 3dMask[:, :, slice_number]
    3dMask[:, :, slice_number] = morphology.binary_dilation(image_slice, morphology.diamond(30))
# repeat in coronal plane...
This works very nicely with the desired effect in each slice, but is very slow for 3D.
I can speed things up by only dilating those slices containing at least one 'True', but that inevitably leaves 100+ slices in each plane. Still slow.
In the hope that the python side looping is slowing everything down, I have looked for a 3D equivalent single function in numpy and skimage but have found nothing that I can recognise as useful.
I toyed with the idea of finding the geometric centre and simply zooming the volume by 5%, but there will necessarily be holes in the mask (the space in-between the 2 halves of the brain) which will no longer match up with the MRI volume and so is of no use...
I assume this means that I am doing it wrong as I am new to both numpy and skimage.
Is there a fast way to do this? Perhaps a 3D alternative to the 2D skimage dilation?
This question actually has a bit of subtlety, which I'll try to unpack.
The first thing to note is that most scikit-image functions actually work totally fine in 3D, including binary_dilation! So you should in an ideal world be able to do:
dilated = morphology.binary_dilation(
    mask3d, morphology.ball(radius=30)
)
I say in an ideal world because that crashes on my machine, probably because this longstanding SciPy bug prevents SciPy filters (which scikit-image uses under the hood) from working with large neighbourhood sizes.
For square- and diamond-shaped neighbourhoods, though, you do have a workaround: dilating once with a diamond of radius 30 is actually the same as dilating 30 times with a diamond of radius 1! You can do this manually in a for-loop, or you can use scipy.ndimage.binary_dilation using the iterations keyword argument. (See this issue for some discussion around this.)
from scipy import ndimage as ndi
# make a little 3D diamond:
diamond = ndi.generate_binary_structure(rank=3, connectivity=1)
# dilate 30x with it
dilated = ndi.binary_dilation(mask3d, diamond, iterations=30)
You can actually get pretty far with this strategy. For example, if your dataset doesn't have the same resolution in x, y, and z, maybe you want to dilate more, say twice as much, along x and y. You can do this in two steps:
dilated1 = ndi.binary_dilation(mask3d, diamond, iterations=15)
flat = np.copy(diamond)
flat[:, :, 0] = 0
flat[:, :, -1] = 0
dilated2 = ndi.binary_dilation(mask3d, flat, iterations=15)
Finally, note that binary dilation is equivalent to a (nonbinary) convolution followed by thresholding above 0. So I found that this also works:
from scipy import signal
b = morphology.ball(radius=30)
dilated = signal.fftconvolve(mask3d, b, mode='same') > 0
However, for this image size and on my machine, this was slower than the iterated dilation. But, it's worth keeping in mind because the performance will be different for different datasets.
As a side note, I recommend posting complete, working code in your StackOverflow questions, as explained here. In your case, np.shape(3dMask) is a syntax error since 3dMask is not a valid Python identifier! =)
I hope this helps!

How to judge if an image is part of another one in Python?

This is how I tried:
(1) use PIL.Image to open the original(say 100*100) and target(say 20*20) image and convert them into np.array;
(2) start from every pixel in the original one as a starting position, crop a 20*20 area and compare every pixel RGB with the target.
(3) If the total difference is under certain given level, then stop and output the specific starting pixel position in the original one.
The problem is that step (3) takes over 10 s, which is much too long; even step (2) takes over 0.04 s, and I'd like to optimize my program. In both steps I used for loops to iterate over the arrays; is there a more efficient way?
To compare two signals (or images) at different displacements one can use cross-correlation.
If you have the scipy package you can use 2D cross-correlation to measure how similar the two images are when you slide one image over the other.
This example is copied from the correlate2d documentation:
import numpy as np
from scipy import signal
from scipy import misc
lena = misc.lena() - misc.lena().mean()    # misc.lena() ships with older SciPy versions
template = np.copy(lena[235:295, 310:370]) # right eye
template -= template.mean()
lena = lena + np.random.randn(*lena.shape) * 50  # add noise
corr = signal.correlate2d(lena, template, boundary='symm', mode='same')
y, x = np.unravel_index(np.argmax(corr), corr.shape)  # find the match
If you don't want to use a toolbox you could implement the cross-correlation yourself.
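For example, a minimal numpy-only sketch using the FFT (assuming both inputs are 2D grayscale float arrays, with the template smaller than the image):
import numpy as np
def cross_correlate(image, template):
    image = image - image.mean()
    template = template - template.mean()
    # zero-pad the template to the image size and correlate via the FFT
    f_img = np.fft.rfft2(image)
    f_tpl = np.fft.rfft2(template, s=image.shape)
    corr = np.fft.irfft2(f_img * np.conj(f_tpl), s=image.shape)
    return corr
# corr = cross_correlate(big, small)                     # hypothetical arrays
# y, x = np.unravel_index(np.argmax(corr), corr.shape)   # most likely match position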

How can I extract this obvious event from this image?

EDIT: I have found a solution :D thanks for the help.
I've created an image processing algorithm which extracts this image from the data. It's complex, so I won't go into detail, but this image is essentially a giant numpy array (it's visualizing angular dependence of pixel intensity of an object).
I want to write a program which automatically determines when the curves switch direction. I have the data and I also have this image, but it turns out doing something meaningful with either has been tricky. Thresholding fails because there are bands of different background color. Sobel operators and Hough Transforms also do not work well for this same reason.
This is really easy for humans to see when this switch happens, but not so easy to tell a computer. Any tips? Thanks!
Edit: Thanks all, I'm now fitting lines to this image after convolution with general gaussian and skeletonization of the result. Any pointers on doing this would be appreciated :)
You can take a weighted dot product of successive columns to get a one-dimensional signal that is much easier to work with. You might be able to extract the patterns using this signal:
import numpy as np
A = np.loadtxt("img.txt")
N = A.shape[0]
L = np.logspace(1,2,N)
X = []
for c0, c1 in zip(A.T, A.T[1:]):
    x = c0.dot(c1*L) / (np.linalg.norm(c0)*np.linalg.norm(c1))
    X.append(x)
X = np.array(X)
import pylab as plt
plt.matshow(A,alpha=.5)
plt.plot(X*3-X.mean(),'k',lw=2)
plt.axis('tight')
plt.show()
This is absolutely not a complete answer to the question, but a useful observation that is too long for a comment. I'll delete if a better answer comes along.
With the help of Mark McCurry, I was able to get a good result.
Step 1: Load original image. Remove background by subtracting median of each vertical column from itself.
no_background = []
for i in range(num_frames):
    no_background.append(orig[:, i] - np.median(orig, 1))
no_background = np.array(no_background).T
Step 2: Change negative values to 0.
clipped_background = no_background.clip(min=0)
Step 3: Extract a 1D signal. Take weighted sum of the vertical columns, which relates the max intensity in a column to its position.
def exp_func(x):
    return np.dot(np.arange(len(x)), np.power(x, 10)) / (np.sum(np.power(x, 10)))
weighted_sum = np.apply_along_axis(exp_func, 0, clipped_background)
Step 4: Take the derivative of 1D signal.
conv = np.convolve([-1.,1],weighted_sum, mode='same')
pl.plot(conv)
Step 5: Determine when the derivative changes sign.
signs=np.sign(conv)
pl.plot(signs)
pl.ylim(-1.2,1.2)
Step 6: Apply median filter to above signal.
from scipy.ndimage import median_filter
filtered_signs = median_filter(signs, 5)  # pick the window size (second argument, an odd number) based on the result
pl.plot(filtered_signs)
pl.ylim(-1.2, 1.2)
Step 7: Find the indices (frame locations) of when the sign switches. Plot result.
def sign_switch(oneDarray):
    inds = []
    for ind in range(len(oneDarray) - 1):
        if (oneDarray[ind] < 0 and oneDarray[ind+1] > 0) or (oneDarray[ind] > 0 and oneDarray[ind+1] < 0):
            inds.append(ind)
    return np.array(inds)
switched_frames = sign_switch(filtered_signs)
For detecting tip positions or turning points, you might try using a corner detector on the original image (not the skeletonized one). As a corner detector the structure tensor could be applicable. The structure tensor is also useful for calculating the local orientation in an image.
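A short sketch of that suggestion using scikit-image (the Harris detector is built on the structure tensor); the sigma, min_distance and threshold_rel values are guesses you would tune on your array:
import numpy as np
from skimage.feature import corner_harris, corner_peaks
def find_turning_points(img):
    # img: the original 2D float array, not the skeletonized one
    response = corner_harris(img, sigma=3)
    coords = corner_peaks(response, min_distance=10, threshold_rel=0.1)
    return coords  # array of (row, col) candidate corner/turning-point positions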

Image comparison algorithm

I'm trying to compare images to each other to find out whether they are different. First I tried a Pearson correlation of the RGB values, which works quite well unless the pictures are a little bit shifted. So if I have two 100% identical images but one is moved a little, I get a bad correlation value.
Any suggestions for a better algorithm?
BTW, I'm talking about comparing thousands of images...
Edit:
Here is an example of my pictures (microscopic):
im1:
im2:
im3:
im1 and im2 are the same but a little bit shifted/cropped; im3 should be recognized as completely different...
Edit:
Problem is solved with the suggestions of Peter Hansen! Works very well! Thanks to all answers! Some results can be found here
http://labtools.ipk-gatersleben.de/image%20comparison/image%20comparision.pdf
A similar question was asked a year ago and has numerous responses, including one regarding pixelizing the images, which I was going to suggest as at least a pre-qualification step (as it would exclude very non-similar images quite quickly).
There are also links there to still-earlier questions which have even more references and good answers.
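That pre-qualification step might look something like this rough sketch: shrink both images to tiny thumbnails and reject pairs whose thumbnails already differ a lot (the 8x8 thumbnail size and the cutoff of 30 grey levels are arbitrary choices to tune):
import numpy as np
from PIL import Image
def roughly_similar(path1, path2, size=(8, 8), cutoff=30.0):
    t1 = np.asarray(Image.open(path1).convert('L').resize(size), dtype=float)
    t2 = np.asarray(Image.open(path2).convert('L').resize(size), dtype=float)
    return np.abs(t1 - t2).mean() < cutoff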
Here's an implementation using some of the ideas with Scipy, using your above three images (saved as im1.jpg, im2.jpg, im3.jpg, respectively). The final output shows im1 compared with itself, as a baseline, and then each image compared with the others.
>>> import scipy as sp
>>> from scipy.misc import imread
>>> from scipy.signal.signaltools import correlate2d as c2d
>>>
>>> def get(i):
... # get JPG image as Scipy array, RGB (3 layer)
... data = imread('im%s.jpg' % i)
... # convert to grey-scale using W3C luminance calc
... data = sp.inner(data, [299, 587, 114]) / 1000.0
... # normalize per http://en.wikipedia.org/wiki/Cross-correlation
... return (data - data.mean()) / data.std()
...
>>> im1 = get(1)
>>> im2 = get(2)
>>> im3 = get(3)
>>> im1.shape
(105, 401)
>>> im2.shape
(109, 373)
>>> im3.shape
(121, 457)
>>> c11 = c2d(im1, im1, mode='same') # baseline
>>> c12 = c2d(im1, im2, mode='same')
>>> c13 = c2d(im1, im3, mode='same')
>>> c23 = c2d(im2, im3, mode='same')
>>> c11.max(), c12.max(), c13.max(), c23.max()
(42105.00000000259, 39898.103896795357, 16482.883608327804, 15873.465425120798)
So note that im1 compared with itself gives a score of 42105, im2 compared with im1 is not far off that, but im3 compared with either of the others gives well under half that value. You'd have to experiment with other images to see how well this might perform and how you might improve it.
Run time is long... several minutes on my machine. I would try some pre-filtering to avoid wasting time comparing very dissimilar images, maybe with the "compare jpg file size" trick mentioned in responses to the other question, or with pixelization. The fact that you have images of different sizes complicates things, but you didn't give enough information about the extent of butchering one might expect, so it's hard to give a specific answer that takes that into account.
I have done this once with an image histogram comparison. My basic algorithm was this:
Split image into red, green and blue
Create normalized histograms for red, green and blue channel and concatenate them into a vector (r0...rn, g0...gn, b0...bn) where n is the number of "buckets", 256 should be enough
Subtract this histogram from the histogram of another image and calculate the distance
Here is some code with numpy and PIL:
# im is a PIL Image opened elsewhere
r = numpy.asarray(im.convert( "RGB", (1,0,0,0, 1,0,0,0, 1,0,0,0) ))
g = numpy.asarray(im.convert( "RGB", (0,1,0,0, 0,1,0,0, 0,1,0,0) ))
b = numpy.asarray(im.convert( "RGB", (0,0,1,0, 0,0,1,0, 0,0,1,0) ))
hr, h_bins = numpy.histogram(r, bins=256, density=True)
hg, h_bins = numpy.histogram(g, bins=256, density=True)
hb, h_bins = numpy.histogram(b, bins=256, density=True)
hist = numpy.array([hr, hg, hb]).ravel()
if you have two histograms, you can get the distance like this:
diff = hist1 - hist2
distance = numpy.sqrt(numpy.dot(diff, diff))
If the two images are identical, the distance is 0, the more they diverge, the greater the distance.
It worked quite well for photos for me but failed on graphics like texts and logos.
You really need to specify the question better, but, looking at those 5 images, the organisms all seem to be oriented the same way. If this is always the case, you can try doing a normalized cross-correlation between the two images and taking the peak value as your degree of similarity. I don't know of a normalized cross-correlation function in Python, but there is a similar fftconvolve() function and you can do the circular cross-correlation yourself:
from numpy import asarray
from numpy.fft import rfftn, irfftn
from PIL import Image
a = asarray(Image.open('c603225337.jpg').convert('L'))
b = asarray(Image.open('9b78f22f42.jpg').convert('L'))
f1 = rfftn(a)
f2 = rfftn(b)
g = f1 * f2
c = irfftn(g)
This won't work as written since the images are different sizes, and the output isn't weighted or normalized at all.
The location of the peak value of the output indicates the offset between the two images, and the magnitude of the peak indicates the similarity. There should be a way to weight/normalize it so that you can tell the difference between a good match and a poor match.
This isn't as good of an answer as I want, since I haven't figured out how to normalize it yet, but I'll update it if I figure it out, and it will give you an idea to look into.
If your problem is about shifted pixels, maybe you should compare against a frequency transform.
The FFT should be OK (numpy has an implementation for 2D matrices), but I keep hearing that wavelets are better for this kind of task ^_^
About performance: if all the images are the same size, then, if I remember well, the FFTW package creates a specialised function for each FFT input size, so you can get a nice performance boost by reusing the same code... I don't know if numpy is based on FFTW, but if it's not, maybe you could investigate that a little.
Here you have a prototype... you can play a little bit with it to see which threshold fits with your images.
from PIL import Image
import numpy
import sys
def main():
    img1 = Image.open(sys.argv[1])
    img2 = Image.open(sys.argv[2])
    if img1.size != img2.size or img1.getbands() != img2.getbands():
        return -1
    s = 0
    for band_index, band in enumerate(img1.getbands()):
        m1 = numpy.fft.fft2(numpy.array([p[band_index] for p in img1.getdata()]).reshape(*img1.size))
        m2 = numpy.fft.fft2(numpy.array([p[band_index] for p in img2.getdata()]).reshape(*img2.size))
        s += numpy.sum(numpy.abs(m1 - m2))
    print(s)
if __name__ == "__main__":
    sys.exit(main())
Another way to proceed might be blurring the images, then subtracting the pixel values of the two images. If the difference is nonzero, you can shift one of the images 1 px in each direction and compare again; if the difference is lower than in the previous step, you can keep shifting in the direction of the gradient and subtracting until the difference falls below a certain threshold or starts increasing again. That should work if the radius of the blurring kernel is larger than the shift between the images.
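A rough sketch of that blur-then-hill-climb idea, assuming two same-sized grayscale float arrays and arbitrary choices for the blur radius and step limit (note that np.roll wraps around at the edges, which is acceptable for small shifts):
import numpy as np
from scipy.ndimage import gaussian_filter
def estimate_shift(a, b, sigma=5, max_steps=50):
    a, b = gaussian_filter(a, sigma), gaussian_filter(b, sigma)
    dy = dx = 0
    best = np.abs(a - b).sum()
    for _ in range(max_steps):
        candidates = []
        for ddy, ddx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            moved = np.roll(np.roll(b, dy + ddy, axis=0), dx + ddx, axis=1)
            candidates.append((np.abs(a - moved).sum(), ddy, ddx))
        diff, ddy, ddx = min(candidates)
        if diff >= best:
            break  # no neighbouring shift improves the match any further
        best, dy, dx = diff, dy + ddy, dx + ddx
    return dy, dx, best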
Also, you can try with some of the tools that are commonly used in the photography workflow for blending multiple expositions or doing panoramas, like the Pano Tools.
I did an image processing course long ago, and I remember that when matching I normally started by making the images grayscale and then sharpening the edges so you only see edges. You (the software) can then shift and subtract the images until the difference is minimal.
If that difference is larger than the threshold you set, the images are not equal and you can move on to the next. Images with a smaller threshold can then be analyzed next.
I do think that at best you can radically thin out possible matches, but will need to personally compare possible matches to determine they're really equal.
I can't really show code as it was a long time ago, and I used Khoros/Cantata for that course.
First off, correlation is a very CPU-intensive and rather inaccurate measure of similarity. Why not just go for the sum of the squares of the differences between individual pixels?
A simple solution, if the maximum shift is limited: generate all possible shifted images and find the one that is the best match. Make sure you calculate your match variable (i.e. correlation) only over the subset of pixels that can be matched in all shifted images. Also, your maximum shift should be significantly smaller than the size of your images.
If you want to use some more advanced image processing techniques, I suggest you look at SIFT; this is a very powerful method that (theoretically, anyway) can properly match items in images independent of translation, rotation and scale.
I guess you could do something like this:
estimate the vertical/horizontal displacement of the reference image vs. the comparison image; a simple SAD (sum of absolute differences) search with motion vectors would do
shift the comparison image accordingly
compute the pearson correlation you were trying to do
Shift measurement is not difficult (a short sketch follows the note below):
Take a region (say about 32x32) in comparison image.
Shift it by x pixels in horizontal and y pixels in vertical direction.
Compute the SAD (sum of absolute difference) w.r.t. original image
Do this for several values of x and y in a small range (-10, +10)
Find the place where the difference is minimum
Pick that value as the shift motion vector
Note:
If the SAD is coming very high for all values of x and y then you can anyway assume that the images are highly dissimilar and shift measurement is not necessary.
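A compact sketch of that SAD search, assuming ref and cmp_img are same-sized grayscale float arrays; the 32x32 block and the +/-10 px range come from the steps above, while the block origin is an arbitrary example value:
import numpy as np
def estimate_motion(ref, cmp_img, block_origin=(100, 100), block=32, search=10):
    y0, x0 = block_origin
    patch = cmp_img[y0:y0 + block, x0:x0 + block]
    best_vec, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            target = ref[y0 + dy:y0 + dy + block, x0 + dx:x0 + dx + block]
            sad = np.abs(patch - target).sum()
            if sad < best_sad:
                best_vec, best_sad = (dy, dx), sad
    return best_vec, best_sad  # motion vector and its SAD; a very high SAD suggests dissimilar images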
To get the imports to work correctly on my Ubuntu 16.04 (as of April 2017), I installed python 2.7 and these:
sudo apt-get install python-dev
sudo apt-get install libtiff5-dev libjpeg8-dev zlib1g-dev libfreetype6-dev liblcms2-dev libwebp-dev tcl8.6-dev tk8.6-dev python-tk
sudo apt-get install python-scipy
sudo pip install pillow
Then I changed Snowflake's imports to these:
import scipy as sp
from scipy.ndimage import imread
from scipy.signal.signaltools import correlate2d as c2d
How awesome that Snowflake's script worked for me 8 years later!
I propose a solution based on the Jaccard index of similarity on the image histograms. See: https://en.wikipedia.org/wiki/Jaccard_index#Weighted_Jaccard_similarity_and_distance
You can compute the difference in the distribution of the pixel colors. This is indeed pretty invariant to translations.
from PIL.Image import Image
from typing import List
def jaccard_similarity(im1: Image, im2: Image) -> float:
    """Compute the similarity between two images.
    First, for each image a histogram of the pixel distribution is extracted.
    Then, the similarity between the histograms is compared using the weighted Jaccard index of similarity, defined as:
    Jsimilarity = sum(min(b1_i, b2_i)) / sum(max(b1_i, b2_i))
    where b1_i and b2_i are the ith histogram bins of images 1 and 2, respectively.
    The two images must have the same resolution and number of channels (depth).
    See: https://en.wikipedia.org/wiki/Jaccard_index
    Where it is also called Ruzicka similarity."""
    if im1.size != im2.size:
        raise Exception("Images must have the same size. Found {} and {}".format(im1.size, im2.size))
    n_channels_1 = len(im1.getbands())
    n_channels_2 = len(im2.getbands())
    if n_channels_1 != n_channels_2:
        raise Exception("Images must have the same number of channels. Found {} and {}".format(n_channels_1, n_channels_2))
    assert n_channels_1 == n_channels_2
    sum_mins = 0
    sum_maxs = 0
    hi1 = im1.histogram()  # type: List[int]
    hi2 = im2.histogram()  # type: List[int]
    # Since the two images have the same number of channels, they must have the same number of bins in the histogram.
    assert len(hi1) == len(hi2)
    for b1, b2 in zip(hi1, hi2):
        min_b = min(b1, b2)
        sum_mins += min_b
        max_b = max(b1, b2)
        sum_maxs += max_b
    jaccard_index = sum_mins / sum_maxs
    return jaccard_index
Unlike mean squared error, the Jaccard index always lies in the range [0,1], thus allowing comparisons among different image sizes.
You can then compare the two images, but only after rescaling them to the same size (otherwise the pixel counts would have to be normalized somehow). I used this:
import sys
from skincare.common.utils import jaccard_similarity
import PIL.Image
from PIL.Image import Image
file1 = sys.argv[1]
file2 = sys.argv[2]
im1 = PIL.Image.open(file1) # type: Image
im2 = PIL.Image.open(file2) # type: Image
print("Image 1: mode={}, size={}".format(im1.mode, im1.size))
print("Image 2: mode={}, size={}".format(im2.mode, im2.size))
if im1.size != im2.size:
    print("Resizing image 2 to {}".format(im1.size))
    im2 = im2.resize(im1.size, resample=PIL.Image.BILINEAR)
j = jaccard_similarity(im1, im2)
print("Jaccard similarity index = {}".format(j))
Testing on your images:
$ python CompareTwoImages.py im1.jpg im2.jpg
Image 1: mode=RGB, size=(401, 105)
Image 2: mode=RGB, size=(373, 109)
Resizing image 2 to (401, 105)
Jaccard similarity index = 0.7238955686269157
$ python CompareTwoImages.py im1.jpg im3.jpg
Image 1: mode=RGB, size=(401, 105)
Image 2: mode=RGB, size=(457, 121)
Resizing image 2 to (401, 105)
Jaccard similarity index = 0.22785529941822316
$ python CompareTwoImages.py im2.jpg im3.jpg
Image 1: mode=RGB, size=(373, 109)
Image 2: mode=RGB, size=(457, 121)
Resizing image 2 to (373, 109)
Jaccard similarity index = 0.29066426814105445
You might also consider experimenting with different resampling filters (like NEAREST or LANCZOS), as they, of course, alter the color distribution when resizing.
Additionally, consider that swapping the images changes the results, as the second image might be downsampled instead of upsampled (after all, cropping might suit your case better than rescaling).
