Using a Gaussian Mixture model to create a binary image of lungs - Python

I am trying to segment the lungs in a CT scan using scikit-learn's GaussianMixture. The code runs without errors, but the output is very jumbled.
Here is the code:
import numpy as np
from sklearn import mixture
from skimage import measure

def GaussianMixtureSegmentaion(image, fill_lung_structures=True):
    mask = np.zeros_like(image)
    for i in range(image.shape[0]):
        twod = image[i, :, :]
        np.random.seed(1)
        gmm = mixture.GaussianMixture(n_components=2, covariance_type='diag',
                                      max_iter=100, n_init=1)
        gmm.fit(twod)
        label_data = gmm.predict(twod)
        #label_data = label_data.reshape(twod.shape)
        mask[i, :, :] = label_data

    labels = measure.label(mask)
    # Pick the pixel in the very corner to determine which label is air.
    # Improvement: Pick multiple background labels from around the patient
    # More resistant to "trays" on which the patient lays cutting the air
    # around the person in half
    background_label = labels[0, 0, 0]
    # Fill the air around the person
    mask[background_label == labels] = 2

    # Method of filling the lung structures (that is superior to something like
    # morphological closing)
    if fill_lung_structures:
        # For every slice we determine the largest solid structure
        for i, axial_slice in enumerate(mask):
            axial_slice = axial_slice - 1
            labeling = measure.label(axial_slice)
            l_max = largest_label_volume(labeling, bg=0)  # helper defined elsewhere (not shown)
            if l_max is not None:  # This slice contains some lung
                mask[i][labeling != l_max] = 1

    mask -= 1        # Make the image actual binary
    mask = 1 - mask  # Invert it, lungs are now 1

    # Remove other air pockets inside the body
    labels = measure.label(mask, background=0)
    l_max = largest_label_volume(labels, bg=0)
    if l_max is not None:  # There are air pockets
        mask[labels != l_max] = 0
    return mask
The input is a 3D CT scan of the lungs. I iterate through the slices and run GaussianMixture on each slice to try to cluster it into two parts (essentially the light and dark areas). Am I using this the wrong way?
Attached is a picture of an example slice of the lung:
I believe this should be easy to do - I just want to make it a binary picture.
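For context, here is a rough sketch of the change I am considering: reshaping each slice into a column of pixel intensities so that GaussianMixture clusters individual pixels instead of whole rows. This is an untested sketch, not my current code; the relabelling at the end is only there to keep the two components consistent across slices.
import numpy as np
from sklearn import mixture

def gmm_binarize_slice(twod):
    # One sample per pixel, one feature (intensity) per sample
    pixels = twod.reshape(-1, 1).astype(np.float64)
    gmm = mixture.GaussianMixture(n_components=2, covariance_type='diag',
                                  max_iter=100, random_state=1)
    labels = gmm.fit_predict(pixels)
    # Make sure label 1 is always the brighter component
    if gmm.means_[0, 0] > gmm.means_[1, 0]:
        labels = 1 - labels
    return labels.reshape(twod.shape)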

Related

How do I collect the center of the blobs in chessboard order?

I'm using OpenCV in Python to calibrate the lens.
I took photos at different angles and locations, like this:
Then I ran the blob detector in OpenCV.
import cv2
import numpy as np
img = cv2.imread('./image.png', cv2.IMREAD_GRAYSCALE)
params = cv2.SimpleBlobDetector_Params()
params.minThreshold = 50
params.maxThreshold = 255
params.filterByArea = True
params.minArea = 0
params.maxArea = 80
params.filterByColor = True
params.blobColor = 255
params.filterByCircularity = False
params.filterByConvexity = False
params.filterByInertia = False
detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(img)
The detected blobs are shown in red circles, like this.
The keypoints object contains the center locations of the detected blobs.
I collected the center locations like this:
ips = []
for keypoint in keypoints:
    ips.append((keypoint.pt[0], keypoint.pt[1]))
ips = np.array(ips, np.float32)  # image points.
What I want is to store the center locations of those red circles in the same order that cv2.findChessboardCorners uses.
So I defined a function that sorts along the horizontal axis first and then the vertical axis.
def index_sorting_to_checkerboard(arr, shrinker):
    ind1 = arr[:, 1].argsort()
    arr = arr[ind1]
    floored = np.floor(arr).astype(int)
    floored_shrinked = np.zeros_like(floored)
    std = floored[0, 0]
    for i, (x, y) in enumerate(floored):
        if abs(y - std) <= shrinker:
            floored_shrinked[i] = (x, std)
        else:
            std = y
            floored_shrinked[i] = (x, y)
    # sort by the second col, then the first column.
    ind2 = np.lexsort((floored_shrinked[:, 0], floored_shrinked[:, 1]))
    ret = floored_shrinked[ind2]
    ind3 = ind1[ind2]  # An accumulation of all argument changes.
    return ret, ind3
shk, ind = index_sorting_to_checkerboard(ips, 30)
ips = ips[ind]
This function works when the image is only slightly tilted.
However, when the image is tilted a lot, like:
it does not work well.
That's because the vertical order of the blobs in the upper line can be lower than those in the next line.
That is, the vertical location of the blob 13 (upper right) can be lower than blob 14 (below the blob 0) when the images are tilted a lot.
So I have to change the value of 'shrinker' manually every time new images are taken.
Can you suggest a better algorithm for sorting in chessboard order that works regardless of the inclination?
I think it is possible because cv2.findChessboardCorners always returns the location in this order,
but I don't know how it does that.
Since your pattern is elongated, you can estimate the approximate directions of the two grid axes (e.g. with PCA).
Based on the estimated directions and the spacing between points, you can then search for each point's neighbours along those axes.
So the order of the points can be recovered regardless of the tilt.
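A rough sketch of this idea (untested; it assumes a complete rows x cols grid, and the signs of the two axes may need flipping to match the exact ordering convention of cv2.findChessboardCorners):
import numpy as np

def sort_grid_points(points, rows, cols):
    """Order an (N, 2) array of grid points in chessboard order using
    PCA-estimated grid axes. Assumes a complete rows x cols grid."""
    pts = np.asarray(points, dtype=np.float64)
    centered = pts - pts.mean(axis=0)
    # Eigenvectors of the covariance matrix: for an elongated pattern the
    # eigenvector with the largest eigenvalue follows the long grid axis.
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    long_axis, short_axis = vecs[:, 1], vecs[:, 0]
    u = centered @ long_axis   # coordinate along the long grid direction
    v = centered @ short_axis  # coordinate across the rows
    # Split into rows by the across-row coordinate, then sort each row
    # along the long axis, so tilt cannot shuffle points between rows.
    order = np.argsort(v).reshape(rows, cols)
    for r in range(rows):
        order[r] = order[r][np.argsort(u[order[r]])]
    return order.reshape(-1)

# Hypothetical usage with the 'ips' array built above:
# ind = sort_grid_points(ips, rows=NUM_ROWS, cols=NUM_COLS)
# ips = ips[ind]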

How to remove a rough line artifact from image after binarization

I am stuck on a problem where I want to differentiate between an object and the background (a semi-transparent white sheet with a backlight), i.e. a fixed rough line introduced by the background that is merged with the object. My algorithm right now is: take the image from the camera, smooth it with a Gaussian blur, extract the Value component from HSV, and apply local binarization using the Wolf method to get the binarized image; after that, using OpenCV's connected component algorithm, I remove some small artifacts that are not connected to the object, as seen here. Now only this line artifact remains, merged with the object, but I want only the object, as seen in this image. Please note that there are 2 lines in the binary image, so using 8-connected logic to detect lines that do not form a loop is not possible - at least that is what I think, and I have tried it. Here is the code for that:
size = np.size(thresh_img)
skel = np.zeros(thresh_img.shape, np.uint8)
element = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
done = False

while not done:
    eroded = cv2.erode(thresh_img, element)
    temp = cv2.dilate(eroded, element)
    temp = cv2.subtract(thresh_img, temp)
    skel = cv2.bitwise_or(skel, temp)
    thresh_img = eroded.copy()
    zeros = size - cv2.countNonZero(thresh_img)
    if zeros == size:
        done = True

# set max pixel value to 1
s = np.uint8(skel > 0)
count = 0
i = 0
while count != np.sum(s):
    # non-zero pixel count
    count = np.sum(s)
    # examine 3x3 neighborhood of each pixel
    filt = cv2.boxFilter(s, -1, (3, 3), normalize=False)
    # if the center pixel of 3x3 neighborhood is zero, we are not interested in it
    s = s * filt
    # now we have pixels where the center pixel of 3x3 neighborhood is non-zero
    # if a pixel's 8-connectivity is less than 2 we can remove it
    # threshold is 3 here because the boxfilter also counted the center pixel
    s[s < 3] = 0
    # set max pixel value to 1
    s[s > 0] = 1
    i = i + 1
Any help in the form of code would be highly appreciated thanks.
Since you are already using connectedComponents, the best approach is to exclude not only the components that are small, but also the ones that touch the borders of the image.
You can tell which ones to discard by using connectedComponentsWithStats(), which also gives you the bounding box of each component.
Alternatively, and very similarly, you can switch from connectedComponents() to findContours(), which gives you the components directly, so you can discard the external ones and the small ones and retrieve the part you are interested in.
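A minimal sketch of the first suggestion, assuming thresh_img is the 8-bit binary image from above (the minimum area value is just a placeholder to tune):
import cv2
import numpy as np

def keep_inner_components(binary_img, min_area=200):
    """Drop components that are tiny or that touch the image border."""
    h, w = binary_img.shape[:2]
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary_img, connectivity=8)
    cleaned = np.zeros_like(binary_img)
    for lbl in range(1, n):  # label 0 is the background
        x, y, bw, bh, area = stats[lbl]
        touches_border = (x == 0 or y == 0 or x + bw >= w or y + bh >= h)
        if area >= min_area and not touches_border:
            cleaned[labels == lbl] = 255
    return cleaned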

Resample DICOM Images to Size and Spacing and align to same Origin

I have a set of 4 DICOM CT Volumes which I am reading with the SimpleITK ImageSeriesReader. Two of the images represent the CT of the patient before and after the surgery. The other two images are binary segmentation masks segmented on the former 2 CT images. The segmentations are a ROI of their source CT.
All 4 CT images have different Size, Spacing, Origin and Direction. I have tried applying this GitHub gist https://gist.github.com/zivy/79d7ee0490faee1156c1277a78e4a4c4 to resize my images to 512x512x512 and Spacing 1x1x1. However, it doesn't place the images at the correct location. The segmented structure is always placed in the center of the CT image, instead of at the correct location, as you can see from the pictures.
This is my "raw" DICOM image with its tumor segmentation (orange blob).
This is after the "resizing" algorithm and writing to disk (same image as before, just the tumor is shown as a green blob because of a coloring inconsistency):
Code used for resampling all 4 DICOM Volumes to the same dimensions:
def resize_resample_images(images):
    """ Resize all the images to the same dimensions, spacing and origin.
        Usage: newImage = resize_image(source_img_plan, source_img_validation, ROI(ablation/tumor)_mask)
        1. translate to same origin
        2. largest number of slices and interpolate the others.
        3. same resolution 1x1x1 mm3 - resample
        4. (physical space)
        Slice Thickness (0018,0050)
        ImagePositionPatient (0020,0032)
        ImageOrientationPatient (0020,0037)
        PixelSpacing (0028,0030)
        Frame Of Reference UID (0020,0052)
    """
    # %% Define tuple to store the images
    tuple_resized_imgs = collections.namedtuple('tuple_resized_imgs',
                                                ['img_plan',
                                                 'img_validation',
                                                 'ablation_mask',
                                                 'tumor_mask'])
    # %% Create reference image with zero origin, identity direction cosine matrix and isotropic dimension
    dimension = images.img_plan.GetDimension()
    reference_direction = np.identity(dimension).flatten()
    reference_size = [512] * dimension
    reference_origin = np.zeros(dimension)
    data = [images.img_plan, images.img_validation, images.ablation_mask, images.tumor_mask]
    reference_spacing = np.ones(dimension)  # resize to isotropic size
    reference_image = sitk.Image(reference_size, images.img_plan.GetPixelIDValue())
    reference_image.SetOrigin(reference_origin)
    reference_image.SetSpacing(reference_spacing)
    reference_image.SetDirection(reference_direction)
    reference_center = np.array(
        reference_image.TransformContinuousIndexToPhysicalPoint(np.array(reference_image.GetSize()) / 2.0))
    # %% Paste the GT segmentation masks before transformation
    tumor_mask_paste = (paste_roi_image(images.img_plan, images.tumor_mask))
    ablation_mask_paste = (paste_roi_image(images.img_validation, images.ablation_mask))
    images.tumor_mask = tumor_mask_paste
    images.ablation_mask = ablation_mask_paste
    # %% Apply transforms
    data_resized = []
    for idx, img in enumerate(data):
        transform = sitk.AffineTransform(dimension)  # use affine transform with 3 dimensions
        transform.SetMatrix(img.GetDirection())  # set the cosine direction matrix
        # TODO: check translation when computing the segmentations
        transform.SetTranslation(np.array(img.GetOrigin()) - reference_origin)  # set the translation.
        # Modify the transformation to align the centers of the original and reference image instead of their origins.
        centering_transform = sitk.TranslationTransform(dimension)
        img_center = np.array(img.TransformContinuousIndexToPhysicalPoint(np.array(img.GetSize()) / 2.0))
        centering_transform.SetOffset(np.array(transform.GetInverse().TransformPoint(img_center) - reference_center))
        centered_transform = sitk.Transform(transform)
        centered_transform.AddTransform(centering_transform)
        # Using the linear interpolator as these are intensity images; if there is a need to resample a ground truth
        # segmentation then the segmentation image should be resampled using the NearestNeighbor interpolator so that
        # no new labels are introduced.
        if idx == 1 or idx == 2:  # temporary solution to resample the GT image with NearestNeighbour
            resampled_img = sitk.Resample(img, reference_image, centered_transform, sitk.sitkNearestNeighbor, 0.0)
        else:
            resampled_img = sitk.Resample(img, reference_image, centered_transform, sitk.sitkLinear, 0.0)
        # append to list
        data_resized.append(resampled_img)
    # assuming the order stays the same, reassign back to the named tuple
    resized_imgs = tuple_resized_imgs(img_plan=data_resized[0],
                                      img_validation=data_resized[1],
                                      ablation_mask=data_resized[2],
                                      tumor_mask=data_resized[3])
    return resized_imgs
Code for "pasting" the ROI segmentations images into a correct size. Might be redundant.:
def paste_roi_image(image_source, image_roi):
    """ Resize ROI binary mask to size, dimension, origin of its source/original img.
        Usage: newImage = paste_roi_image(source_img_plan, roi_mask)
    """
    newSize = image_source.GetSize()
    newOrigin = image_source.GetOrigin()
    newSpacing = image_roi.GetSpacing()
    newDirection = image_roi.GetDirection()
    if image_source.GetSpacing() != image_roi.GetSpacing():
        print('the spacing of the source and derived mask differ')
    # re-cast the pixel type of the roi mask
    pixelID = image_source.GetPixelID()
    caster = sitk.CastImageFilter()
    caster.SetOutputPixelType(pixelID)
    image_roi = caster.Execute(image_roi)
    # black 3D image
    outputImage = sitk.Image(newSize, image_source.GetPixelIDValue())
    outputImage.SetOrigin(newOrigin)
    outputImage.SetSpacing(newSpacing)
    outputImage.SetDirection(newDirection)
    # transform from physical point to index the origin of the ROI image
    # img_center = np.array(img.TransformContinuousIndexToPhysicalPoint(np.array(img.GetSize()) / 2.0))
    destinationIndex = outputImage.TransformPhysicalPointToIndex(image_roi.GetOrigin())
    # paste the roi mask into the re-sized image
    pasted_img = sitk.Paste(outputImage, image_roi, image_roi.GetSize(), destinationIndex=destinationIndex)
    return pasted_img
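For reference, a simplified sketch of how the two functions above might be wired together; the folder names, the SimpleNamespace container and the output filename are placeholders, not my actual setup:
from types import SimpleNamespace
import SimpleITK as sitk

def read_dicom_series(folder):
    # Illustrative SimpleITK series read; real code would pick the series ID explicitly
    reader = sitk.ImageSeriesReader()
    reader.SetFileNames(reader.GetGDCMSeriesFileNames(folder))
    return reader.Execute()

# Mutable container, because resize_resample_images() reassigns the mask attributes
imgs = SimpleNamespace(img_plan=read_dicom_series('ct_plan'),
                       img_validation=read_dicom_series('ct_validation'),
                       ablation_mask=read_dicom_series('ablation_mask'),
                       tumor_mask=read_dicom_series('tumor_mask'))

resized = resize_resample_images(imgs)
sitk.WriteImage(resized.tumor_mask, 'tumor_mask_resampled.nii.gz')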

Python Open CV - Get coordinates of region

I am a beginner in image processing (and OpenCV). After applying the watershed algorithm to an image, the output obtained is something like this -
Is it possible to get the coordinates of the regions that were segmented out?
The code used is this (in case you wish to have a look) -
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('input.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
# noise removal
kernel = np.ones((3,3),np.uint8)
opening = cv2.morphologyEx(thresh,cv2.MORPH_OPEN,kernel, iterations = 2)
# sure background area
sure_bg = cv2.dilate(opening,kernel,iterations=3)
# Finding sure foreground area
dist_transform = cv2.distanceTransform(opening,cv2.cv.CV_DIST_L2,5)
ret, sure_fg = cv2.threshold(dist_transform,0.7*dist_transform.max(),255,0)
# Finding unknown region
sure_fg = np.uint8(sure_fg)
unknown = cv2.subtract(sure_bg,sure_fg)
# Marker labelling
ret, markers = cv2.connectedComponents(sure_fg)
# Add one to all labels so that sure background is not 0, but 1
markers = markers+1
# Now, mark the region of unknown with zero
markers[unknown==255] = 0
markers = cv2.watershed(img,markers)
img[markers == -1] = [255,0,0]
plt.imshow(img)
plt.show()
Is there any function or algorithm to extract the coordinates of the coloured regions that are separated out? Any help would be much appreciated!
After this line:
markers = cv2.watershed(img,markers)
markers will be an image with all regions segmented, and the pixel value in each region will be an integer (label) greater than 0. The background has label 0, and boundaries have label -1.
You already know the number of labels from the ret returned by connectedComponents.
You need a data structure to contain the points of each region. For example, the points of each region can go in an array of points. You need several of these (one per region), so another array.
So, if you want to find the pixels of each region, you can do:
1) Scan the image and append each point to an array of arrays of points, where each array of points will contain the points of the same region
// Pseudocode
// "labels" is an array of arrays of points
// initialize labels size to "ret"; the length of each array of points is 0
for r = 1 : markers.rows
    for c = 1 : markers.cols
        value = markers(r,c)
        if (value > 0)
            labels{value-1}.append(Point(c,r))  // r = y, c = x
        end
    end
end
2) Generate a mask for each label value, and collect the points in the mask
// Pseudocode
// "labels" is an array of arrays of points
// initialize labels size to "ret"; the length of each array of points is 0
for value = 1 : ret-1
    mask = (markers == value)
    labels{value-1} = all points in the mask  // You can use cv::findNonZero(...) for this
end
The first approach is likely to be much faster, the second is easier to implement. Sorry, but I can't give you Python code (C++ would have been much better :D ), but you should find your way out with this.
Hope it helps
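For reference, a rough NumPy version of the second approach (an untested sketch; markers and ret are the variables from the question's code, and after the markers = markers + 1 shift, label 1 is the sure background and -1 marks the watershed boundaries):
import numpy as np

region_points = {}
for value in range(2, ret + 1):  # skip label 1 (background) and -1 (boundaries)
    ys, xs = np.nonzero(markers == value)             # row/column indices of this region
    region_points[value] = np.column_stack((xs, ys))  # (x, y) coordinates

# Example: bounding box of one region
# x_min, y_min = region_points[2].min(axis=0)
# x_max, y_max = region_points[2].max(axis=0)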

Automatically remove hot/dead pixels from an image in python

I am using numpy and scipy to process a number of images taken with a CCD camera. These images have a number of hot (and dead) pixels with very large (or small) values. These interfere with other image processing, so they need to be removed. Unfortunately, though a few of the pixels are stuck at either 0 or 255 and are always at the same value in all of the images, there are some pixels that are temporarily stuck at other values for a period of a few minutes (the data spans many hours).
I am wondering if there is a method for identifying (and removing) the hot pixels already implemented in python. If not, I am wondering what would be an efficient method for doing so. The hot/dead pixels are relatively easy to identify by comparing them with neighboring pixels. I could see writing a loop that looks at each pixel, compares its value to that of its 8 nearest neighbors. Or, it seems nicer to use some kind of convolution to produce a smoother image and then subtract this from the image containing the hot pixels, making them easier to identify.
I have tried this "blurring method" in the code below, and it works okay, but I doubt that it is the fastest. Also, it gets confused at the edge of the image (probably since the gaussian_filter function is taking a convolution and the convolution gets weird near the edge). So, is there a better way to go about this?
Example code:
import numpy as np
import matplotlib.pyplot as plt
import scipy.ndimage

plt.figure(figsize=(8,4))
ax1 = plt.subplot(121)
ax2 = plt.subplot(122)

#make a sample image
x = np.linspace(-5,5,200)
X,Y = np.meshgrid(x,x)
Z = 255*np.cos(np.sqrt(x**2 + Y**2))**2

for i in range(0,11):
    #Add some hot pixels
    Z[np.random.randint(low=0,high=199),np.random.randint(low=0,high=199)] = np.random.randint(low=200,high=255)
    #and dead pixels
    Z[np.random.randint(low=0,high=199),np.random.randint(low=0,high=199)] = np.random.randint(low=0,high=10)

#Then plot it
ax1.set_title('Raw data with hot pixels')
ax1.imshow(Z,interpolation='nearest',origin='lower')

#Now we try to find the hot pixels
blurred_Z = scipy.ndimage.gaussian_filter(Z, sigma=2)
difference = Z - blurred_Z
ax2.set_title('Difference with hot pixels identified')
ax2.imshow(difference,interpolation='nearest',origin='lower')

threshold = 15
hot_pixels = np.nonzero((difference>threshold) | (difference<-threshold))

#Don't include the hot pixels that we found near the edge:
count = 0
for y,x in zip(hot_pixels[0],hot_pixels[1]):
    if (x != 0) and (x != 199) and (y != 0) and (y != 199):
        ax2.plot(x,y,'ro')
        count += 1

print('Detected %i hot/dead pixels out of 20.' % count)

ax2.set_xlim(0,200); ax2.set_ylim(0,200)
plt.show()
And the output:
Basically, I think that the fastest way to deal with hot pixels is just to use a size=2 median filter. Then, poof, your hot pixels are gone and you also kill all sorts of other high-frequency sensor noise from your camera.
If you really want to remove ONLY the hot pixels, then you can subtract the median-filtered image from the original, as I did in the question, and replace only the flagged values with values from the median-filtered image. This doesn't work well at the edges, so if you can ignore the pixels along the edge, then this will make things a lot easier.
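For the simple case where the edges are ignored, a minimal sketch of that idea might look like this (the 3-sigma threshold is an arbitrary choice for illustration):
import numpy as np
from scipy.ndimage import median_filter

def remove_hot_pixels_simple(data, n_sigma=3):
    """Replace outlier pixels with the value of a size-2 median filter."""
    blurred = median_filter(data, size=2)
    difference = data - blurred
    threshold = n_sigma * np.std(difference)
    outliers = np.abs(difference) > threshold
    fixed = np.copy(data)
    fixed[outliers] = blurred[outliers]
    return fixed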
If you want to deal with the edges, you can use the code below. However, it is not the fastest:
import numpy as np
import matplotlib.pyplot as plt
import scipy.ndimage

plt.figure(figsize=(10,5))
ax1 = plt.subplot(121)
ax2 = plt.subplot(122)

#make some sample data
x = np.linspace(-5,5,200)
X,Y = np.meshgrid(x,x)
Z = 100*np.cos(np.sqrt(x**2 + Y**2))**2 + 50

np.random.seed(1)
for i in range(0,11):
    #Add some hot pixels
    Z[np.random.randint(low=0,high=199),np.random.randint(low=0,high=199)] = np.random.randint(low=200,high=255)
    #and dead pixels
    Z[np.random.randint(low=0,high=199),np.random.randint(low=0,high=199)] = np.random.randint(low=0,high=10)

#And some hot pixels in the corners and edges
Z[0,0]    = 255
Z[-1,-1]  = 255
Z[-1,0]   = 255
Z[0,-1]   = 255
Z[0,100]  = 255
Z[-1,100] = 255
Z[100,0]  = 255
Z[100,-1] = 255

#Then plot it
ax1.set_title('Raw data with hot pixels')
ax1.imshow(Z,interpolation='nearest',origin='lower')
def find_outlier_pixels(data, tolerance=3, worry_about_edges=True):
    #This function finds the hot or dead pixels in a 2D dataset.
    #tolerance is the number of standard deviations used to cutoff the hot pixels
    #If you want to ignore the edges and greatly speed up the code, then set
    #worry_about_edges to False.
    #
    #The function returns a list of hot pixels and also an image with the hot pixels removed
    from scipy.ndimage import median_filter
    blurred = median_filter(data, size=2)
    difference = data - blurred
    threshold = 10*np.std(difference)

    #find the hot pixels, but ignore the edges
    hot_pixels = np.nonzero((np.abs(difference[1:-1,1:-1])>threshold))
    hot_pixels = np.array(hot_pixels) + 1  #because we ignored the first row and first column

    fixed_image = np.copy(data)  #This is the image with the hot pixels removed
    for y,x in zip(hot_pixels[0],hot_pixels[1]):
        fixed_image[y,x] = blurred[y,x]

    if worry_about_edges == True:
        height,width = np.shape(data)

        ###Now get the pixels on the edges (but not the corners)###

        #left and right sides
        for index in range(1,height-1):
            #left side:
            med  = np.median(data[index-1:index+2,0:2])
            diff = np.abs(data[index,0] - med)
            if diff>threshold:
                hot_pixels = np.hstack(( hot_pixels, [[index],[0]] ))
                fixed_image[index,0] = med

            #right side:
            med  = np.median(data[index-1:index+2,-2:])
            diff = np.abs(data[index,-1] - med)
            if diff>threshold:
                hot_pixels = np.hstack(( hot_pixels, [[index],[width-1]] ))
                fixed_image[index,-1] = med

        #Then the top and bottom
        for index in range(1,width-1):
            #bottom:
            med  = np.median(data[0:2,index-1:index+2])
            diff = np.abs(data[0,index] - med)
            if diff>threshold:
                hot_pixels = np.hstack(( hot_pixels, [[0],[index]] ))
                fixed_image[0,index] = med

            #top:
            med  = np.median(data[-2:,index-1:index+2])
            diff = np.abs(data[-1,index] - med)
            if diff>threshold:
                hot_pixels = np.hstack(( hot_pixels, [[height-1],[index]] ))
                fixed_image[-1,index] = med

        ###Then the corners###

        #bottom left
        med  = np.median(data[0:2,0:2])
        diff = np.abs(data[0,0] - med)
        if diff>threshold:
            hot_pixels = np.hstack(( hot_pixels, [[0],[0]] ))
            fixed_image[0,0] = med

        #bottom right
        med  = np.median(data[0:2,-2:])
        diff = np.abs(data[0,-1] - med)
        if diff>threshold:
            hot_pixels = np.hstack(( hot_pixels, [[0],[width-1]] ))
            fixed_image[0,-1] = med

        #top left
        med  = np.median(data[-2:,0:2])
        diff = np.abs(data[-1,0] - med)
        if diff>threshold:
            hot_pixels = np.hstack(( hot_pixels, [[height-1],[0]] ))
            fixed_image[-1,0] = med

        #top right
        med  = np.median(data[-2:,-2:])
        diff = np.abs(data[-1,-1] - med)
        if diff>threshold:
            hot_pixels = np.hstack(( hot_pixels, [[height-1],[width-1]] ))
            fixed_image[-1,-1] = med

    return hot_pixels,fixed_image
hot_pixels,fixed_image = find_outlier_pixels(Z)

for y,x in zip(hot_pixels[0],hot_pixels[1]):
    ax1.plot(x,y,'ro',mfc='none',mec='r',ms=10)
ax1.set_xlim(0,200)
ax1.set_ylim(0,200)

ax2.set_title('Image with hot pixels removed')
ax2.imshow(fixed_image,interpolation='nearest',origin='lower',clim=(0,255))
plt.show()
Output:
