Create a probability map of image with Kernel Density Estimation - python

I'm working with mammograms that have calcifications (which are brighter spots than the surrounding tissue).
This is one of the images I have:
original image
I got this image by creating a heat map:
heat map of original image
But the heat map considers the overall brightness of the image. To create a probability map that considers the local luminosity I have thought about making a Kernel Density Estimation with a Gaussian filter, but I am having problems with the implementation.
My aim is to get a result similar to scikit-learn's "Kernel Density Estimate of Species Distributions" example. This is the code I tried to use:
import numpy as np
import cv2 as cv
from sklearn.neighbors import KernelDensity
img = cv.imread("mammo.tif", 0)
kde = KernelDensity(kernel="gaussian")
kde.fit(img)
sco = kde.score_samples(img)
The original image has a size of 4084x3328. I would like to get a probability map of the same size, but what I get in the variable sco is a vector of 4084x1 with all negative values.

In kde.fit(img), every row of img is treated as a single observation with 3328 features. You are therefore fitting a kernel density estimate whose kernel is a multivariate Gaussian distribution over 3328 variables. kde.score_samples(img) then computes the score for each row of img, which yields 4084 values. Moreover, these values are logarithms of probability densities, hence they are negative.
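To mimic the species-distribution example, the observations should be pixel coordinates rather than rows of pixel values: sample the coordinates of bright pixels, fit the KDE on them, and evaluate it on a grid. Below is a minimal sketch of that approach; the 99th-percentile cutoff, the 5000-sample cap, the bandwidth of 30 pixels and the grid step of 8 are all placeholder values to tune:
import numpy as np
import cv2 as cv
from sklearn.neighbors import KernelDensity
img = cv.imread("mammo.tif", 0)
# treat bright pixels as observations in (row, col) space
coords = np.column_stack(np.nonzero(img > np.percentile(img, 99)))
rng = np.random.default_rng(0)
idx = rng.choice(len(coords), size=min(5000, len(coords)), replace=False)
kde = KernelDensity(kernel="gaussian", bandwidth=30.0)
kde.fit(coords[idx])
# evaluate the log-density on a coarse grid, then resize to the full image
step = 8
ys, xs = np.mgrid[0:img.shape[0]:step, 0:img.shape[1]:step]
grid = np.column_stack([ys.ravel(), xs.ravel()])
log_dens = kde.score_samples(grid).reshape(ys.shape)
prob_map = cv.resize(np.exp(log_dens), (img.shape[1], img.shape[0]))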

Related

How to increase the size of an image using python?

I have images of size 48x48. I want to increase their size to 150x150 for training with transfer learning (CNN). What is a possible way to do this? I want to increase the image size in such a way that the resolution stays the same, without any loss of data.
You can use the tensorflow.image.resize method without any problem:
import tensorflow as tf
resized = tf.image.resize(X, [150, 150])
If you read the TensorFlow documentation, it states that you can do both downsampling and upsampling with different methods:
The method argument expects an item from the image.ResizeMethod enum, or the string equivalent. The options are:
bilinear: Bilinear interpolation. If antialias is true, becomes a hat/tent filter function with radius 1 when downsampling.
lanczos3: Lanczos kernel with radius 3. High-quality practical filter but may have some ringing, especially on synthetic images.
lanczos5: Lanczos kernel with radius 5. Very-high-quality filter but may have stronger ringing.
bicubic: Cubic interpolant of Keys. Equivalent to Catmull-Rom kernel. Reasonably good quality and faster than Lanczos3Kernel, particularly when upsampling.
gaussian: Gaussian kernel with radius 3, sigma = 1.5 / 3.0.
nearest: Nearest neighbor interpolation. antialias has no effect when used with nearest neighbor interpolation.
area: Anti-aliased resampling with area interpolation. antialias has no effect when used with area interpolation; it always anti-aliases.
mitchellcubic: Mitchell-Netravali Cubic non-interpolating filter. For synthetic images (especially those lacking proper prefiltering), less ringing than Keys cubic kernel but less sharp.
More details in the tf.image.resize documentation.
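For example, to select one of these methods explicitly (the choice of 'lanczos3' and antialias=True here is arbitrary):
import tensorflow as tf
resized = tf.image.resize(X, [150, 150], method='lanczos3', antialias=True)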
This might work.
from PIL import Image
import numpy as np
# random image data
image_data = np.random.randint(low=0, high=256, size=(48, 48, 3)).astype(np.uint8)
image_small = Image.fromarray(image_data)
# there are many settings you can play with here, depending on how you want the image resized
image_large = image_small.resize((150, 150))
np.array(image_large)
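If you want control over the interpolation, resize also accepts a resampling filter, for example (Image.LANCZOS is one of Pillow's built-in filters):
image_large = image_small.resize((150, 150), resample=Image.LANCZOS)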

Additive Poisson noise to an image

I have written a function to add Poisson noise to an image using numpy with np.random.poisson(..). The image is already in numpy array form, using grayscale (0-255). I am wondering whether it makes more physical sense to provide the numpy function with the pixel values as the rates for the distribution, or to use a fixed value over the whole image.
In the first case, the function will be expressed as:
import numpy as np
def poisson_noise(X):
    noise = np.random.poisson(X, X.shape)
    return noise + X
In the second:
import numpy as np
def poisson_noise(X):
    noise = np.random.poisson(CONSTANT_RATE, X.shape)
    return noise + X
In the first case, the pixels with a higher grayscale value (lighter) will be more influenced by the noise; would that have any physical interpretation?
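(A quick numerical check of that intuition: a Poisson distribution's variance equals its rate, so the noise spread grows with pixel brightness:)
import numpy as np
rng = np.random.default_rng(0)
dark = np.full(100000, 10.0)      # dim pixels, rate 10
bright = np.full(100000, 200.0)   # bright pixels, rate 200
print(rng.poisson(dark).std())    # ~ sqrt(10), about 3.2
print(rng.poisson(bright).std())  # ~ sqrt(200), about 14.1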
Thank you!

How to remove noise from a histogram equalized image?

I have an image which I'm equalizing and then applying CLAHE to, like so:
self.equ = cv2.equalizeHist(self.result_array)
clahe = cv2.createCLAHE(clipLimit=100.0, tileGridSize=(8,8))
self.cl1 = clahe.apply(self.equ)
This is the result I get:
I want to get rid of all the black dots, which are noise. Ultimately, I'm trying to extract the blood vessels, which are black in the image shown above; the noise makes that extraction inaccurate.
A large part of my thesis was on reducing the noise in images, and there was a technique I used which reduced noise in images while preserving the sharp edges of information in the image. I quote myself here:
An effective technique for removing noise from fringe patterns is to filter the image using sine-cosine filtering [reference]. A low-pass filter is convolved with the two images that result from taking the sine and cosine of the fringe pattern image, which are then divided to obtain the tangent, restoring the phase pattern but with reduced noise. The advantage of this technique is that the process can be repeated multiple times to reduce noise while maintaining the sharp details of the phase transitions.
And here is the code I used:
import numpy as np
from scipy import ndimage
def scfilter(image, iterations, kernel):
    """
    Sine-cosine filter.
    kernel can be a tuple or a single value.
    Returns the filtered image.
    """
    for n in range(iterations):
        image = np.arctan2(
            ndimage.uniform_filter(np.sin(image), size=kernel),
            ndimage.uniform_filter(np.cos(image), size=kernel))
    return image
There, image was a numpy array representing the image, linearly rescaled to put black at 0 and white at 2 * pi, and kernel is the size in image pixels of the uniform filter applied to the data. It shouldn't take too many iterations to see a positive result, maybe in the region of 5 to 20.
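For example, a minimal usage sketch (assuming img is an 8-bit grayscale numpy array; the iteration count and kernel size are values to tune):
import numpy as np
phase = img.astype(float) / 255.0 * 2 * np.pi  # rescale so black -> 0, white -> 2*pi
filtered = scfilter(phase, iterations=10, kernel=5)
# arctan2 returns values in (-pi, pi], so wrap before rescaling back to 0-255
result = (filtered % (2 * np.pi)) / (2 * np.pi) * 255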
Hope that helps :)

microscopy image segmentation: bacteria segmentation with python

I am trying to segment some microscopy bright-field images showing some E. coli bacteria.
The picture I am working with resembles this one (even though this one was obtained with phase contrast):
My problem is that after running my segmentation function (OtsuMask, below) I cannot distinguish dividing bacteria (you can try my code below on the sample image). This means that I get one single labeled region for a pair of bacteria joined at their ends, instead of two differently labeled regions.
The boundary between two dividing bacteria is too narrow to be highlighted by the morphological operations I perform on the thresholded image, but I guess there must be a way to achieve my goal.
Any ideas/suggestions?
import numpy as np
from scipy import optimize
from scipy import ndimage
import mahotas as mht
import matplotlib.pyplot as plt
def OtsuMask(img, dilation_size=2, erosion_size=1, remove_size=500):
    img = np.asarray(img, dtype=float)
    s = np.shape(img)
    # initial guess for the background plane parameters
    p0 = np.array([0., 0., 0.])
    p0[0] = (img[0, 0] - img[0, -1]) / 512.
    p0[1] = (img[1, 0] - img[1, -1]) / 512.
    p0[2] = img.mean()
    [x, y] = np.meshgrid(np.arange(s[1]), np.arange(s[0]))
    # fit and subtract the background plane
    p = fitplane(img, p0)
    img = img - myplane(p, x, y)
    img = img - img.min()
    img = np.abs(img).astype(np.uint16)
    # perform thresholding with Otsu
    T = mht.thresholding.otsu(img, 2)
    print(T)
    # binarize and invert: the bacteria are the dark pixels
    img_thres = (img < T * 0.9).astype(int)
    # morphological operations
    diskD = createDisk(dilation_size)
    diskE = createDisk(erosion_size)  # erosion element (unused in this version)
    img_thres = ndimage.binary_dilation(img_thres, diskD)
    labeled_im, N = mht.label(img_thres)
    label_sizes = mht.labeled.labeled_size(labeled_im)
    labeled_im = mht.labeled.remove_regions(labeled_im, np.where(label_sizes < remove_size))
    plt.figure()
    plt.imshow(labeled_im)
    return labeled_im
def myplane(p, x, y):
    return p[0] * x + p[1] * y + p[2]
def res(p, data, x, y):
    a = data - myplane(p, x, y)
    return np.sum(np.abs(a**2))
def fitplane(data, p0):
    s = np.shape(data)
    [x, y] = np.meshgrid(np.arange(s[1]), np.arange(s[0]))
    print(np.shape(x), np.shape(y))
    p = optimize.fmin(res, p0, args=(data, x, y))
    print(p)
    return p
def createDisk(size):
    x, y = np.meshgrid(np.arange(-size, size), np.arange(-size, size))
    diskMask = ((x + .5)**2 + (y + .5)**2 < size**2)
    return diskMask
The first part of the code in OtsuMask consists of plane fitting and subtraction.
A similar approach to the one described in this related Stack Overflow answer can be used here.
It goes basically like this:
threshold your image, as you have done
apply a distance transform on the thresholded image
threshold the distance transform, so that only a small 'seed' part of each bacterium remains
label these seeds, giving each one a different shade of gray
(also add a labeled seed for the background)
execute the watershed algorithm with these seeds and the distance-transformed image, to get the separated contours of your bacteria
Check out the linked answer for some pictures that will make this much clearer; a minimal sketch of these steps follows.
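Under the assumption that scikit-image is available and that binary is the boolean thresholded image (bacteria as True, e.g. img_thres.astype(bool) from the question's code), a minimal sketch of the recipe could look like this; the 0.6 seed cutoff is a placeholder to tune:
from scipy import ndimage
from skimage.segmentation import watershed
distance = ndimage.distance_transform_edt(binary)
# keep only a small core of each bacterium as a seed
seeds = distance > 0.6 * distance.max()
markers, _ = ndimage.label(seeds)
# mask=binary keeps everything outside the thresholded region as background,
# which plays the role of the extra background seed
labels = watershed(-distance, markers, mask=binary)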
A few thoughts:
Otsu may not be a good choice here; you might even use a fixed threshold (your bacteria are black).
Thresholding the image with any method will remove a lot of useful information.
I do not have a complete recipe for you, but even this very simple thing seems to give a lot of interesting information:
import matplotlib.pyplot as plt
import cv2
# cv2 is only used to read the image into an array, use only green channel
bact = cv2.imread("/tmp/bacteria.png")[:,:,1]
# draw a contour image with fixed threshold 50
fig = plt.figure()
ax = fig.add_subplot(111)
ax.contourf(bact, levels=[0, 50], colors='k')
This gives:
This suggests that if you use contour-tracing techniques with fixed levels, you will get quite nice-looking starting points for dilation and erosion. So, two differences from simple thresholding:
Contouring uses much more of the grayscale information than simple black/white thresholding.
The fixed threshold seems to work well with these images, and if illumination correction is needed, Otsu is not the best choice.
At one point I found skimage's watershed segmentation more useful than any of the OpenCV samples. It uses some code borrowed from the CellProfiler project (a Python-based tool for sophisticated cell image analysis). Hint: use the Euclidean distance transform from OpenCV; it's faster than the scipy implementation. The peak_local_max function also has a distance parameter, which is useful for distinguishing single cells precisely. I think this function is more robust at finding cell peaks than a crude threshold (because the intensity of cells may vary).
There is also a watershed implementation in scipy, but it has weird behavior.
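As a sketch of that seeding approach (in current scikit-image the parameter is called min_distance; the value 10 below is a placeholder, and binary is again the boolean thresholded image). This differs from the earlier sketch in how seeds are chosen: local maxima of the distance transform instead of a global cutoff.
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
distance = ndimage.distance_transform_edt(binary)
# seed at local maxima of the distance transform
coords = peak_local_max(distance, min_distance=10, labels=binary)
mask = np.zeros(distance.shape, dtype=bool)
mask[tuple(coords.T)] = True
markers, _ = ndimage.label(mask)
labels = watershed(-distance, markers, mask=binary)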

Fit curve to segmented image

In my current data analysis I have some segmented images like, for example, the one below.
My problem is that I would like to fit a polynomial or spline (something one-dimensional) to a certain area (red) in the segmented image (the result would be the black line).
Usually I would use something like orthogonal distance regression; the problem is that this needs some kind of fit function, which I don't have in this case.
So what would be the best approach to do this with python/numpy?
Is there maybe some standard algorithm for this kind of problem?
UPDATE:
It seems my drawing skills are probably not the best; the red area in the picture could also have some random noise and does not have to be completely connected (there could be small gaps due to noise).
UPDATE2:
The overall target would be to have a parametrized curve p(t) which returns the position, i.e. p(t) => (x, y) for t in [0, 1], where t=0 is the start of the black line and t=1 its end.
I used scipy.ndimage and this gist as a template. This gets you almost there, you'll have to find a reasonable way to parameterize the curve from the mostly skeletonized image.
import imageio.v2 as imageio  # scipy.misc.imread has been removed; imageio.imread is a drop-in replacement
import scipy.ndimage as ndimage
# Load the image
raw = imageio.imread("bG2W9mM.png")
# Convert the image to greyscale, using the red channel
grey = raw[:, :, 0]
# Simple thresholding of the image
threshold = grey > 200
radius = 10
distance_img = ndimage.distance_transform_edt(threshold)
morph_laplace_img = ndimage.morphological_laplace(distance_img, (radius, radius))
skeleton = morph_laplace_img < morph_laplace_img.min() / 2
import matplotlib.cm as cm
from pylab import *
subplot(221); imshow(raw)
subplot(222); imshow(grey, cmap=cm.Greys_r)
subplot(223); imshow(threshold, cmap=cm.Greys_r)
subplot(224); imshow(skeleton, cmap=cm.Greys_r)
show()
You may find other answers that reference skeletonization useful; an example of that is here:
Problems during Skeletonization image for extracting contours
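To get from the skeleton to the p(t) asked for in UPDATE2, one possible sketch (the sort by x is a crude ordering that assumes the curve is roughly monotonic in x; a robust version would trace the skeleton from one endpoint to the other):
import numpy as np
from scipy import interpolate
ys, xs = np.nonzero(skeleton)
order = np.argsort(xs)
xs, ys = xs[order], ys[order]
# keep one skeleton pixel per column to avoid duplicate parameter values
xs, idx = np.unique(xs, return_index=True)
ys = ys[idx]
# fit a smoothing parametric spline through the ordered points
tck, _ = interpolate.splprep([xs, ys], s=len(xs))
def p(t):
    x, y = interpolate.splev(t, tck)
    return x, y  # p(0) = start of the black line, p(1) = its end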
