Edge detection for image stored in matrix - python

I represent images in the form of 2-D arrays. I have this picture:
How can I get the pixels that are directly on the boundaries of the gray region and colorize them?
I want to get the coordinates of the matrix elements in green and red separately. I have only white, black and gray regions on the matrix.

The following should hopefully be okay for your needs (or at least help). The idea is to split the image into regions using logical checks against threshold values. The edges between these regions can then be detected by using numpy's roll to shift the pixels in x and y and comparing each region with its shifted inverse to see where it changes.
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
from skimage.morphology import closing
thresh1 = 127
thresh2 = 254
#Load image
im = sp.misc.imread('jBD9j.png')
#Get threshold masks for the different regions
gryim = np.mean(im[:,:,0:2],2)
region1 = (thresh1<gryim)
region2 = (thresh2<gryim)
nregion1 = ~ region1
nregion2 = ~ region2
#Plot figure and two regions
fig, axs = plt.subplots(2,2)
axs[0,0].imshow(im)
axs[0,1].imshow(region1)
axs[1,0].imshow(region2)
#Clean up any holes, etc (not needed for simple figures here)
#region1 = sp.ndimage.morphology.binary_closing(region1)
#region1 = sp.ndimage.morphology.binary_fill_holes(region1)
#region1.astype('bool')
#region2 = sp.ndimage.morphology.binary_closing(region2)
#region2 = sp.ndimage.morphology.binary_fill_holes(region2)
#region2.astype('bool')
#Get location of edge by comparing each region to its
#inverse shifted by a few pixels
shift = -2
edgex1 = (region1 ^ np.roll(nregion1,shift=shift,axis=0))
edgey1 = (region1 ^ np.roll(nregion1,shift=shift,axis=1))
edgex2 = (region2 ^ np.roll(nregion2,shift=shift,axis=0))
edgey2 = (region2 ^ np.roll(nregion2,shift=shift,axis=1))
#Plot location of edge over image
axs[1,1].imshow(im)
axs[1,1].contour(edgex1, 2, colors='r', linewidths=2.)
axs[1,1].contour(edgey1, 2, colors='r', linewidths=2.)
axs[1,1].contour(edgex2, 2, colors='g', linewidths=2.)
axs[1,1].contour(edgey2, 2, colors='g', linewidths=2.)
plt.show()
Which gives the result shown in the fourth panel of the figure. For simplicity I've used roll with the inverse of each region. You could also roll each successive region onto the next to detect edges.
Thank you to @Kabyle for offering a reward, this is a problem that I spent a while looking for a solution to. I tried scipy skeletonize, feature.canny, the topology module and OpenCV with limited success... This way was the most robust for my case (droplet interface tracking). Hope it helps!

There is a very simple solution to this: by definition, any pixel that has both white and gray neighbors is on your "red" edge, and any pixel that has both gray and black neighbors is on the "green" edge. The lightest/darkest neighbors are returned by the maximum/minimum filters in skimage.filters.rank, and combining the masks of pixels whose darkest/lightest neighbors are gray/white (or black/gray, respectively) produces the edges.
Result:
A worked solution:
import numpy
import skimage.filters.rank
import skimage.morphology
import skimage.io
import matplotlib.pyplot as plt
# convert image to a uint8 image which only has 0, 128 and 255 values
# the source png image provided has other levels in it so it needs to be thresholded - adjust the thresholding method for your data
img_raw = skimage.io.imread('jBD9j.png', as_gray=True)
img = numpy.zeros_like(img_raw, dtype=numpy.uint8)
img[:,:] = 128
img[ img_raw < 0.25 ] = 0
img[ img_raw > 0.75 ] = 255
# define "next to" - this may be a square, diamond, etc
selem = skimage.morphology.disk(1)
# create masks for the two kinds of edges
black_gray_edges = (skimage.filters.rank.minimum(img, selem) == 0) & (skimage.filters.rank.maximum(img, selem) == 128)
gray_white_edges = (skimage.filters.rank.minimum(img, selem) == 128) & (skimage.filters.rank.maximum(img, selem) == 255)
# create a color image
img_result = numpy.dstack( [img,img,img] )
# assign colors to edge masks
img_result[ black_gray_edges, : ] = numpy.asarray( [ 0, 255, 0 ] )
img_result[ gray_white_edges, : ] = numpy.asarray( [ 255, 0, 0 ] )
plt.imshow(img_result)
plt.show()
P.S. Pixels which have both black and white neighbors, or all three colors as neighbors, fall into an undefined category. The code above doesn't color those. You need to decide how you want the output colored in those cases, but it is easy to extend the approach above to produce another mask or two for that (see the sketch after these notes).
P.S. The edges are two pixels wide. There is no getting around that without more information: the edges are between two areas, and you haven't defined which one of the two areas you want them to overlap in each case, so the only symmetrical solution is to overlap both areas by one pixel.
P.S. This counts the pixel itself as its own neighbor. An isolated white or black pixel on gray, or vice versa, will be considered as an edge (as well as all the pixels around it).
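For the undefined category, a minimal sketch of one possible extra mask (my own addition, reusing img, selem and img_result from the snippet above) could be:
# pixels that have both a black and a white neighbor (the undefined category)
black_white_edges = (skimage.filters.rank.minimum(img, selem) == 0) & (skimage.filters.rank.maximum(img, selem) == 255)
# color them e.g. blue so they stand out (an arbitrary choice)
img_result[ black_white_edges, : ] = numpy.asarray( [ 0, 0, 255 ] )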

While plonser's answer may be rather straight forward to implement, I see it failing when it comes to sharp and thin edges. Nevertheless, I suggest you use part of his approach as preconditioning.
In a second step you want to use the Marching Squares Algorithm. According to the documentation of scikit-image, it is
a special case of the marching cubes algorithm (Lorensen, William and Harvey E. Cline. Marching Cubes: A High Resolution 3D Surface Construction Algorithm. Computer Graphics (SIGGRAPH 87 Proceedings) 21(4), July 1987, pp. 163-170).
There even exists a Python implementation as part of the scikit-image package. I have been using this algorithm (my own Fortran implementation, though) successfully for edge detection of eye diagrams in communications engineering.
Ad 1: Preconditioning
Create a copy of your image and make it two-color only, e.g. black/white. The coordinates remain the same, but you make sure that the algorithm can properly make a yes/no decision independently of the values you use in your matrix representation of the image.
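A minimal sketch of such a preconditioning step (my own illustration, assuming a 2-D grayscale array im as in the other answers; the thresholds are placeholders):
import numpy as np
# keep only the region of interest (e.g. the gray area) as a strict yes/no image
binary = np.zeros_like(im, dtype=np.uint8)
binary[(im > 50) & (im < 200)] = 1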
Ad 2: Edge Detection
Wikipedia as well as various blogs provide you with a pretty elaborate description of the algorithm in various languages, so I will not go into its details. However, let me give you some practical advice:
Your image has open boundaries at the bottom. Instead of modifying the algorithm, you can artificially add another row of pixels (black or grey) to bound the white/grey areas.
The choice of the starting point is critical. If there are not too many images to be processed, I suggest you select it manually. Otherwise you will need to define rules. Since the Marching Squares Algorithm can start anywhere inside a bounded area, you could choose any pixel of a given color/value to detect the corresponding edge (it will initially start walking in one direction to find an edge).
The algorithm returns the exact 2D positions, e.g. (x/y)-tuples. You can either
iterate through the list and colorize the corresponding pixels by assigning a different value or
create a mask to select parts of your matrix and assign the value that corresponds to a different color, e.g. green or red.
Finally: Some Post-Processing
I suggested adding an artificial boundary to the image. This has two advantages:
1. The Marching Squares Algorithm works out of the box.
2. There is no need to distinguish between image boundary and the interface between two areas within the image. Just remove the artificial boundary once you are done setting the colorful edges -- this will remove the colored lines at the boundary of the image.
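A minimal sketch of the artificial-boundary idea (my own illustration, assuming a 2-D grayscale array im and using scikit-image's measure.find_contours as the marching squares implementation; the level value and padding are placeholders):
import numpy as np
from skimage import measure
# pad the image with a black row/column on each side so regions that are
# open at the image boundary become closed for the marching squares step
padded = np.pad(im, pad_width=1, mode='constant', constant_values=0)
contours = measure.find_contours(padded, 0.5)
# undo the padding offset; points that end up outside the original image
# frame came from the artificial boundary and can be discarded
contours = [c - 1 for c in contours]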

Basically, by following pyStarter's suggestion of using the marching squares algorithm from scikit-image, the desired contours can be extracted with the following code:
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
from skimage import measure
import scipy.ndimage as ndimage
from skimage.color import rgb2gray
from pprint import pprint
#Load image
im = rgb2gray(sp.misc.imread('jBD9j.png'))
n, bins_edges = np.histogram(im.flatten(),bins = 100)
# Skip the black area, and assume two distinct regions, white and grey
max_counts = np.sort(n[bins_edges[0:-1] > 0])[-2:]
thresholds = np.select(
    [max_counts[i] == n for i in range(max_counts.shape[0])],
    [bins_edges[0:-1]] * max_counts.shape[0]
)
# filter out the zero values
thresholds = thresholds[thresholds > 0]
fig, axs = plt.subplots()
# Display image
axs.imshow(im, interpolation='nearest', cmap=plt.cm.gray)
colors = ['r','g']
for i, threshold in enumerate(thresholds):
    contours = measure.find_contours(im, threshold)
    # Display all contours found for this threshold
    for n, contour in enumerate(contours):
        axs.plot(contour[:, 1], contour[:, 0], colors[i], lw=4)
axs.axis('image')
axs.set_xticks([])
axs.set_yticks([])
plt.show()
However, from your image there is no clearly defined gray region, so I took the two largest counts of intensities in the image and thresholded on those. A bit disturbing is the red region in the middle of the white region; I think this could be tweaked by adjusting the number of bins in the histogram procedure. You could also set the thresholds manually, as Ed Smith did.

Maybe there is a more elegant way to do that ...
but in case your array is a numpy array a with dimensions (N,N) (gray scale) you can do
import numpy as np
# assuming black -> 0 and white -> 1 and grey -> 0.5
black_reg = np.where(a < 0.1, a, 10)
white_reg = np.where(a > 0.9, a, 10)
xx_black,yy_black = np.gradient(black_reg)
xx_white,yy_white = np.gradient(white_reg)
# getting the coordinates
coord_green = np.argwhere(xx_black**2 + yy_black**2>0.2)
coord_red = np.argwhere(xx_white**2 + yy_white**2>0.2)
The number 0.2 is just a threshold and needs to be adjusted.
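To actually colorize the pixels from those coordinate lists, a small follow-up sketch (my own addition, reusing a, coord_green and coord_red from above):
import matplotlib.pyplot as plt
# build an RGB version of the grayscale array and paint the edge pixels
rgb = np.dstack([a, a, a])
rgb[coord_green[:, 0], coord_green[:, 1]] = [0.0, 1.0, 0.0]  # green edge
rgb[coord_red[:, 0], coord_red[:, 1]] = [1.0, 0.0, 0.0]      # red edge
plt.imshow(rgb)
plt.show()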

I think you are probably looking for an edge detection method for gray scale images. There are many ways to do that. Maybe this can help: http://en.m.wikipedia.org/wiki/Edge_detection. For differentiating edges between white and gray from edges between black and gray, try using the local average intensity.
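As a rough illustration of that last suggestion (my own sketch, not from the original answer; im is assumed to be a 2-D grayscale array scaled to [0, 1] and all thresholds are placeholders):
import numpy as np
from scipy import ndimage
# edge strength from gradients
gx, gy = np.gradient(im.astype(float))
edges = (gx**2 + gy**2) > 0.05
# local average intensity around each pixel
local_mean = ndimage.uniform_filter(im.astype(float), size=5)
# bright surroundings -> white/gray edge, dark surroundings -> black/gray edge
white_gray_edges = edges & (local_mean > 0.6)
black_gray_edges = edges & (local_mean < 0.4)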

Related

Coloring area between pixels based on thickness

I'm trying to isolate the gray matter in a brain image and color it based on the cortical thickness at each point giving a result similar to this:
Cortical thickness map based on this original: Original brain scan
So far I have segmented the white matter boundary and the gray matter boundary giving me this:
White + Gray matter segmentation
The next step is where I'm stuck.
I need to find the distance between the 2 boundaries by finding the closest white boundary pixel for each gray boundary pixel and recording the distance between them, as shown here: Distance
This can be done simply with some for loops and the Euclidean distance.
My problem is how to then color the pixels in between them/assign the distance value to the pixels between them.
import numpy as np
import matplotlib.pyplot as plt
import nibabel as nib
from skimage import filters
from skimage import morphology
t1 = nib.load('raw_map1.nii').get_fdata()
t1map = nib.load('thickness_map1.nii').get_fdata()
filt_t1 = filters.gaussian(t1,sigma=1)
plt.imshow(filt_t1[:,128,:])
#Segment the white matter surface
wm = filt_t1 > 75
plt.imshow(wm[:,128,:])
med_wm = filters.median(wm)
plt.imshow(med_wm[:,128,:])
dilw = morphology.binary_dilation(med_wm)
edge_wm = dilw.astype(float) - med_wm
plt.imshow(edge_wm[:,128,:])
#Segment the gray matter surface
gm = (filt_t1 < 75) & (filt_t1 > 45)
plt.imshow(gm[:,128,:])
med_gm = filters.median(gm)
plt.imshow(med_gm[:,128,:])
dilg = morphology.binary_dilation(med_gm)
edge_gm = dilg.astype(float) - med_gm
plt.imshow(edge_gm[:,128,:])
dilw2 = morphology.binary_dilation(edge_wm)
plt.imshow(dilw2[:,128,:])
fedge_gm = edge_gm.astype(float) - dilw2
plt.imshow(fedge_gm[:,128,:])
fedge_gm2 = fedge_gm > 0
plt.imshow(fedge_gm2[:,128,:])
#Combine both surfaces
final = fedge_gm2 + edge_wm
plt.imshow(final[:,128,:])
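For the nearest-boundary distance step described above, a minimal vectorized sketch (purely an illustration using scipy.spatial.cKDTree, reusing the arrays defined above and working on a single 2-D slice):
import numpy as np
from scipy.spatial import cKDTree
# boundary masks for one slice
wm_slice = edge_wm[:, 128, :] > 0
gm_slice = fedge_gm2[:, 128, :]
wm_coords = np.argwhere(wm_slice)  # white matter boundary pixel coordinates
gm_coords = np.argwhere(gm_slice)  # gray matter boundary pixel coordinates
# for every gray boundary pixel, Euclidean distance to the closest white boundary pixel
tree = cKDTree(wm_coords)
distances, _ = tree.query(gm_coords)
# paint the distances onto the gray boundary pixels of a thickness-map slice
thickness = np.zeros_like(wm_slice, dtype=float)
thickness[gm_coords[:, 0], gm_coords[:, 1]] = distances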
You may use DL+DiReCT: https://github.com/SCAN-NRAD/DL-DiReCT
Starting from a brain scan (T1-weighted MRI) as input, DL+DiReCT labels anatomical regions including the cortex and calculates a voxel-wise cortical thickness map (T1w_norm_thickmap.nii.gz). For every voxel inside the cortex, the intensity indicates the thickness of the cortex in mm.

Python Image segmentation and extraction

I am trying to figure out how to extract multiple objects from a greyscale image with mineral grains. I need to segment the different grains and then extract each of them to save as a separate image file.
While doing research, I have found that everyone uses skimage. I'm worried that some grains will not be extracted (mineralgrains).
Has anyone had to work on a similar problem?
I'm not entirely sure what the mineral grains are in this image, but segmenting and labeling connected components is pretty straightforward using skimage algorithms.
The simplest approach that I know of is accomplished in two major steps:
Generate a binary image. The most straightforward methods for this are thresholding-based, such as those elaborated on here: https://scikit-image.org/docs/dev/auto_examples/segmentation/plot_thresholding.html
Separating connected objects. This is often accomplished through use of a watershed algorithm as elaborated on here: https://scikit-image.org/docs/stable/auto_examples/segmentation/plot_watershed.html
Assuming that the mineral grains are the bigger objects in the example image, a first pass approach may look something like the following
import numpy as np
from scipy import ndimage as ndi
from skimage import io
from skimage.filters import gaussian, threshold_li
from skimage.morphology import remove_small_holes, remove_small_objects
from skimage.segmentation import watershed
from skimage.feature import peak_local_max
from skimage.measure import label
# read in the image
img = io.imread('fovAK.png', as_gray=True)
gauss_sigma = 1 # standard deviation for Gaussian kernel for removing a bit of noise from image
obj_min_size = 5000 # minimum # of pixels for labeled objects
min_hole_size = 5000 # The maximum area, in pixels, of a contiguous hole that will be filled
ws_footprint = 100 # size of local region within which to search for peaks for watershed
#########################
#make a binary image using threshold
#########################
img_blurred = gaussian(img, gauss_sigma) # remove a bit of noise from image
img_thresh = threshold_li(img_blurred) # use histogram based method to determine intensity threshold
img_binary = img_blurred > img_thresh # make a binary image
# make a border around the image so that holes at edge are not filled
ydims, xdims = img.shape
img_binary[0, :] = 0
img_binary[ydims-1, :] = 0
img_binary[:, 0] = 0
img_binary[:, xdims-1] = 0
img_rsh = remove_small_holes(img_binary, min_hole_size) # removes small holes in binary image
img_rso = remove_small_objects(img_rsh, obj_min_size) # removes small objects in binary image
#############################
#use watershed algorithm to separate connected objects
####################################
distance = ndi.distance_transform_edt(img_rso) # distance transform of image
coords = peak_local_max(distance, labels=img_rso, footprint=np.ones((ws_footprint, ws_footprint))) # coordinates of local maxima of the distance map
mask = np.zeros(distance.shape, dtype=bool) # mask for watershed
mask[tuple(coords.T)] = True
markers, _ = ndi.label(mask)
labels = watershed(-distance, markers, mask=img_rso)
This is absolutely a first pass on your example image and is by no means perfect. You may need to use a different thresholding algorithm or tweak other parameters but hopefully this gets you moving in the right direction.
From there you can just iterate over the labeled objects and save individual images.
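For example, iterating over the labels and saving one crop per grain could be sketched like this (my own illustration; names follow the snippet above and the filename pattern is arbitrary):
from skimage.measure import regionprops
for region in regionprops(labels):
    # bounding box of this grain in the original image
    minr, minc, maxr, maxc = region.bbox
    grain = img[minr:maxr, minc:maxc]
    # keep only the pixels belonging to this label, black out the rest
    grain = np.where(region.image, grain, 0)
    io.imsave('grain_{}.png'.format(region.label), (grain * 255).astype(np.uint8))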
There are definitely more sophisticated approaches to these kinds of problems, ranging from edge detectors and bandpass filters all the way out to ML-based methods, but hopefully this gets you moving in the right direction.

How to plot centroids on image after kmeans clustering?

I have a color image and wanted to do k-means clustering on it using OpenCV.
This is the image on which I wanted to do k-means clustering.
This is my code:
import numpy as np
import cv2
import matplotlib.pyplot as plt
image1 = cv2.imread("./triangle.jpg", 0)
Z1 = image1.reshape((-1))
Z1 = np.float32(Z1)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
K1 = 2
ret, mask, center =cv2.kmeans(Z1,K1,None,criteria,10,cv2.KMEANS_RANDOM_CENTERS)
center = np.uint8(center)
print(center)
res_image1 = center[mask.flatten()]
clustered_image1 = res_image1.reshape((image1.shape))
for c in center:
    plt.hlines(c, xmin=0, xmax=max(clustered_image1.shape[0], clustered_image1.shape[1]), lw=1.)
plt.imshow(clustered_image1)
plt.show()
This is what I get from the center variable.
[[112]
[255]]
This is the output image
My problem is that I'm unable to understand the output. I have two lists in the center variable because I wanted two classes. But why do they have only one value?
Shouldn't it be something like this (which makes sense because centroids should be points):
[[x1, y1]
[x2, y2]]
instead of this:
[[x]
[y]]
and if I read the image as a color image like this:
image1 = cv2.imread("./triangle.jpg")
Z1 = image1.reshape((-1, 3))
I get this output:
[[255 255 255]
[ 89 173 1]]
Color image output
Can someone explain to me how I can get 2d points instead of lines? Also, how do I interpret the output I got from the center variable when using the color image?
Please let me know if I'm unclear anywhere. Thanks!!
K-means clustering finds clusters of similar values. Your input is an array of color values, hence you find the colors that describe the 2 clusters. [255 255 255] is the white color, [ 89 173 1] is the green color. Similarly for [112] and [255] in the grayscale version. What you're doing is color quantization.
They are indeed the centroids, but their dimension is color, not location. Therefore you cannot plot them anywhere. Well, you can, but it looks like this:
See how the 'color location' determines to which class each pixel belongs?
This is not something you can locate in your image. What you can do is find the pixels that belong to the different clusters, and use the locations of the found pixels to determine their centroid or 'average' position.
To get the 'average' position of each color, you have to separate out the pixel coordinates according to the class/color to which they belong. In the code below I used np.where(img <= 240), where 240 is the threshold. I chose 240 for simplicity, but you could use K-Means to determine where the threshold should be (inRange() might be useful at some point). If you sum the coordinates and divide that by the number of pixels found, you'll have what I think you are looking for:
Result:
Code:
import cv2
import numpy as np
# load image as grayscale
img = cv2.imread('D21VU.jpg',0)
# get the positions of all pixels that are not full white (= triangle)
triangle_px = np.where( img <= 240)
# dividing the sum of the values by the number of pixels
# to get the average location
ty = int(sum(triangle_px[0])/len(triangle_px[0]))
tx = int(sum(triangle_px[1])/len(triangle_px[1]))
# print location and draw filled black circle
print("Triangle ({},{})".format(tx,ty))
cv2.circle(img, (tx,ty), 10,(0), -1)
# the same process, but now with only white pixels
white_px = np.where( img > 240)
wy = int(sum(white_px[0])/len(white_px[0]))
wx = int(sum(white_px[1])/len(white_px[1]))
# print location and draw white filled circle
print("White: ({},{})".format(wx,wy))
cv2.circle(img, (wx,wy), 10,(255), -1)
# display result
cv2.imshow('Result',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Here is an Imagemagick solution, since I am not proficient with OpenCV.
Basically, I convert your actual image (from your link in the comments) to binary, then use image moments to extract the centroid and other statistics.
I suspect you can do something similar in OpenCV, Skimage, or Python Wand, which is based upon Imagemagick. (See for example:
https://docs.opencv.org/3.4/d3/dc0/group__imgproc__shape.html#ga556a180f43cab22649c23ada36a8a139
https://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.moments_coords_central
https://en.wikipedia.org/wiki/Image_moment)
Input:
Your image does not have just two colors. Perhaps this image did not have kmeans clustering applied with 2 colors only. So I will do that with an Imagemagick script that I have built.
kmeans -n 2 -m 5 img.png img2.png
final colors:
count,hexcolor
99234,#65345DFF
36926,#27AD0EFF
Then I convert the two colors to black and white by simply thresholding and stretching the dynamic range to full black and white.
convert img2.png -threshold 50% -auto-level img3.png
Then I get all the image moment statistics for the white pixels, which includes the x,y centroid in pixels relative to the top left corner of the image. It also includes the equivalent ellipse major and minor axes, angle of major axis, eccentricity of the ellipse, and equivalent brightness of the ellipse, plus the 8 Hu image moments.
identify -verbose -moments img3.png
Channel moments:
Gray:
--> Centroid: 208.523,196.302 <--
Ellipse Semi-Major/Minor axis: 170.99,164.34
Ellipse angle: 140.853
Ellipse eccentricity: 0.197209
Ellipse intensity: 106.661 (0.41828)
I1: 0.00149333 (0.380798)
I2: 3.50537e-09 (0.000227937)
I3: 2.10942e-10 (0.00349771)
I4: 7.75424e-13 (1.28576e-05)
I5: 9.78445e-24 (2.69016e-09)
I6: -4.20164e-17 (-1.77656e-07)
I7: 1.61745e-24 (4.44704e-10)
I8: 9.25127e-18 (3.91167e-08)
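For reference, a rough OpenCV equivalent of the centroid step (my own sketch, not part of the ImageMagick workflow above):
import cv2
# read the (already 2-color) image as grayscale and make it strictly binary
img = cv2.imread('img2.png', 0)
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
# image moments of the white pixels give the centroid
m = cv2.moments(binary, binaryImage=True)
cx = m['m10'] / m['m00']
cy = m['m01'] / m['m00']
print('Centroid: ({:.1f}, {:.1f})'.format(cx, cy))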

Transform HSV mask into a set of points

I created a HSV mask from the image. The result is like the following:
I hope that this mask can be represented by a set of points. My original idea was to use Skimage Skeletonize to create a line and then use the sliding window to calculate the local mean for point creation.
However, skeletonize takes too long. It requires 0.4s for each frame. This is not a good idea for video processing.
Do you want the points of all True elements of the mask, or do you just want a skeleton? If the former:
import skimage as ski
from skimage import io
import numpy as np
mask = ski.io.imread('./mask.png')[:,:,0]/255
mask = mask.astype('bool')
s0,s1 = mask.shape # dimensions of mask
a0,a1 = np.arange(s0),np.arange(s1) # make two 1d coordinate arrays
coords = np.array(np.meshgrid(a0,a1)).T # cartesian product into a coordinate matrix
coords = coords[mask] # mask out the points of interest
If the latter, you can get the start and end points (from left to right) of the object in the mask in a fast way with something like
start_mat = np.stack((np.roll(mask,1,axis=1),mask),-1)
start_mask = np.fromiter(map(lambda p: np.alltrue(p==np.array([False,True])),start_mat[mask]),dtype=bool)
starts = coords[start_mask]
end_mat = np.stack((np.roll(mask,-1,axis=1),mask),-1)
end_mask = np.fromiter(map(lambda p: np.alltrue(p==np.array([False,True])),end_mat[mask]),dtype=bool)
ends = coords[end_mask]
This will give you a rough outline of the object. Outline points will be missing anywhere that the slope of the figure is 0. You may have to think of a vertical difference scheme for those areas. The same idea would work with np.roll(...,axis=0). You could just concatenate the unique points from rolling over rows to the points from rolling over columns to get the full outline.
Averaging the correct pairs to get the skeleton isn't so easy.
Here's a resultant outline. You can definitely make this faster than 0.4s:
Couldn't a simple for loop work?
Scan each horizontal line of your bitmap looking for...
the x position where black meets white = a new start point.
Then, in the same scanned line, look for the next x position where white meets black = a new end point.
Either put dots at the start/end points for an "outline" effect, or else put a dot at the "center" with dot.x = start_point + (end_point - start_point) / 2.
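A rough sketch of that scan-line idea (my own illustration, assuming mask is a boolean 2-D array):
import numpy as np
starts, ends = [], []
for y in range(mask.shape[0]):
    # transitions along the row: +1 where background meets object, -1 where it ends
    diff = np.diff(mask[y].astype(np.int8))
    for x in np.flatnonzero(diff == 1):
        starts.append((y, x + 1))  # start point of a run
    for x in np.flatnonzero(diff == -1):
        ends.append((y, x))        # end point of a run
# midpoints of matching start/end pairs would give the "center" effect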

Counting particles using image processing in python

Is there any good algorithm for detecting particles on a changing background intensity?
For example, if I have the following image:
Is there a way to count the small white particles, even with the clearly different background that appears towards the lower left?
To be a little more clear, I would like to label the image and count the particles with an algorithm that finds these particles to be significant:
I have tried many things with the PIL, cv , scipy , numpy , etc. modules.
I got some hints from this very similar SO question, and it appears at first glance that you could take a simple threshold like so:
import mahotas
import pylab
from scipy import ndimage
im = mahotas.imread('particles.jpg')
T = mahotas.thresholding.otsu(im)
labeled, nr_objects = ndimage.label(im > T)
print(nr_objects)
pylab.imshow(labeled)
but because of the changing background you get this:
I have also tried other ideas, such as a technique I found for measuring paws, which I implemented in this way:
import numpy as np
import scipy
import pylab
import pymorph
import mahotas
from scipy import ndimage
import cv
def detect_peaks(image):
    """
    Takes an image and detects the peaks using the local maximum filter.
    Returns a boolean mask of the peaks (i.e. 1 when
    the pixel's value is the neighborhood maximum, 0 otherwise)
    """
    # define an 8-connected neighborhood
    neighborhood = ndimage.morphology.generate_binary_structure(2,2)
    #apply the local maximum filter; all pixels of maximal value
    #in their neighborhood are set to 1
    local_max = ndimage.filters.maximum_filter(image, footprint=neighborhood)==image
    #local_max is a mask that contains the peaks we are
    #looking for, but also the background.
    #In order to isolate the peaks we must remove the background from the mask.
    #we create the mask of the background
    background = (image==0)
    #a little technicality: we must erode the background in order to
    #successfully subtract it from local_max, otherwise a line will
    #appear along the background border (artifact of the local maximum filter)
    eroded_background = ndimage.morphology.binary_erosion(background, structure=neighborhood, border_value=1)
    #we obtain the final mask, containing only peaks,
    #by removing the background from the local_max mask
    detected_peaks = local_max & ~eroded_background
    return detected_peaks
im = mahotas.imread('particles.jpg')
imf = ndimage.gaussian_filter(im, 3)
#rmax = pymorph.regmax(imf)
detected_peaks = detect_peaks(imf)
pylab.imshow(pymorph.overlay(im, detected_peaks))
pylab.show()
but this gives no luck either, showing this result:
Using the regional max function, I get images which almost appear to be giving correct particle identification, but there are either too many, or too few particles in the wrong spots depending on my gaussian filtering (images have gaussian filter of 2,3, & 4):
Also, it would need to work on images similar to this as well:
This is the same type of image above, just at a much higher density of particles.
EDIT: Solved solution: I was able to get a decent working solution to this problem using the following code:
import cv2
import pylab
from scipy import ndimage
im = cv2.imread('particles.jpg')
pylab.figure(0)
pylab.imshow(im)
gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5,5), 0)
maxValue = 255
adaptiveMethod = cv2.ADAPTIVE_THRESH_GAUSSIAN_C#cv2.ADAPTIVE_THRESH_MEAN_C #cv2.ADAPTIVE_THRESH_GAUSSIAN_C
thresholdType = cv2.THRESH_BINARY#cv2.THRESH_BINARY #cv2.THRESH_BINARY_INV
blockSize = 5 #odd number like 3,5,7,9,11
C = -3 # constant to be subtracted
im_thresholded = cv2.adaptiveThreshold(gray, maxValue, adaptiveMethod, thresholdType, blockSize, C)
labelarray, particle_count = ndimage.measurements.label(im_thresholded)
print(particle_count)
pylab.figure(1)
pylab.imshow(im_thresholded)
pylab.show()
This will show the images like this:
(which is the given image)
and
(which is the counted particles)
and calculate the particle count as 60.
I had solved the "variable brightness in background" issue by using a tuned difference threshold with a technique called Adaptive Contrast. It works by performing a linear combination (a difference, in this case) of a grayscale image with a blurred version of itself, then applying a threshold to it.
Convolve the image with a suitable statistical operator.
Subtract the original from the convolved image, correcting intensity scale/gamma if necessary.
Threshold the difference image with a constant.
(original paper)
I did this very successfully with scipy.ndimage, in the floating-point domain (way better results than integer image processing), like this:
import numpy
import scipy.ndimage

original_grayscale = numpy.asarray(some_PIL_image.convert('L'), dtype=float)
blurred_grayscale = scipy.ndimage.filters.gaussian_filter(original_grayscale, blur_parameter)
difference_image = original_grayscale - (multiplier * blurred_grayscale)
image_to_be_labeled = ((difference_image > threshold) * 255).astype('uint8')  # not sure if it is necessary
labelarray, particle_count = scipy.ndimage.measurements.label(image_to_be_labeled)
Hope this helps!!
I cannot really give a definite answer, but here are a few pointers:
The function mahotas.morph.regmax might be better than the maximum filter as it removes pseudo-maxima. Perhaps combine this with a global threshold, with a local threshold (such as the mean over a window) or both.
If you have several images and the same uneven background, then maybe you can compute an average background and normalize against that, or use empty images as your estimate of background. This would be the case if you have a microscope, and like every microscope I've seen, the illumination is uneven.
Something like:
average = average_of_many(images)
# smooth it
average = mahotas.gaussian_filter(average,24)
Now you preprocess your images, like:
preproc = image/average
or something like that.
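A slightly more concrete sketch of that idea (my own, assuming images is a list of same-sized grayscale arrays, e.g. empty frames from the microscope):
import numpy as np
from scipy import ndimage
# estimate the uneven illumination from many frames
average = np.mean(np.stack(images), axis=0)
average = ndimage.gaussian_filter(average, sigma=24)
# divide it out so the background becomes roughly flat
preproc = image / (average + 1e-9)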
