Combining Dynamic Labels / Regions (Python, scikit-image) - python

I have a set of 480 original images and 480 labels (one for each original) that have been segmented and labelled via a Watershed process. I use the labels, labels_ws, when looking for the mean intensity of various regions in the original images, original_images. These images form a time-series and I am looking to track the mean intensity in each labelled region of this time-series.
Finding the mean intensity of the regions in a single image is pretty easily done in scikit-image using the following code:
regions = measure.regionprops(labels_ws, intensity_image = original_image)
print(["(%s, %s)" % (r, r.mean_intensity) for r in regions])
which prints a whole lot of output that looks like this:
'(skimage.measure._regionprops._RegionProperties object at
0x000000000E5F3F98, 35.46153846153846)',
'(skimage.measure._regionprops._RegionProperties object at
0x000000000E5F3FD0, 47.0)',
'(skimage.measure._regionprops._RegionProperties object at
0x000000000E7B6048, 49.96666666666667)',
'(skimage.measure._regionprops._RegionProperties object at
0x000000000E7B6080, 23.0)',
'(skimage.measure._regionprops._RegionProperties object at
0x000000000E7B60B8, 32.1)',
Each image probably has around 100-150 regions. The regions are areas in the image where there is a neuron luminescing in a tissue sample during the time the image was taken. As the time-series goes on, the regions (neurons) luminesce in a periodic manner and thus the intensity data for each region should look like a periodic function.
The problem I am having is that in each successive image, the labels / regions are slightly different as the luminescence in each region follows its periodic behaviour. Thus, labels / regions "pop in/out" over the duration of the time-series. I also can't guarantee that, say, Region_1 will be the same size when it first luminesces as when it luminesces for a second or third time (though any difference is slight, just a couple of pixels).
All of that said, is there a way to combine all of my labels in some way to form a single label that I can track? Should I combine all of the original images in some way then create a master label? How do I handle regions that will definitely overlap, but might be different shapes / sizes by a couple of pixels? Thanks!

I had a similar problem where I wanted to track changing segmented regions over time. My solution is to replace the labels in every image based on the label found at the centre point of each segmented region, which has the effect of propagating consistent labels through to all the other images.
Of course, this assumes that the regions stay in roughly the same place throughout the time-series.
You can see the difference in the animation: on the left the labels are constantly changing and on the right they stay consistent. It works despite the missing frames and shifting regions.
Animation link: https://imgur.com/a/e1Q7V6O#o4t9HyE
(I don't have enough rep to post the image directly)
Just send your list of segmented and labelled images to standardise_labels_timeline:
def standardise_labels_timeline(images_list, start_at_end=True, count_offset=1000):
    """
    Replace labels on similar images to allow tracking over time

    :param images_list: a list of segmented and labelled images as numpy arrays
    :param start_at_end: relabel the images beginning at the end of the list
    :param count_offset: an int greater than the total number of expected labels in a single image
    :returns: a list of relabelled images as numpy arrays
    """
    import numpy as np

    images = list(images_list)
    if start_at_end:
        images.reverse()

    # Relabel all images to ensure there are no duplicate labels across the timeline
    for image in images:
        for label in np.unique(image):
            if label > 0:
                count_offset += 1
                image[image == label] = count_offset

    # Ensure labels are propagated through the image timeline
    for i, image in enumerate(images):
        labels = get_labelled_centers(image)

        # Apply labels to all subsequent images
        for j in range(i, len(images)):
            images[j] = replace_image_point_labels(images[j], labels)

    if start_at_end:
        images.reverse()

    return images


def get_labelled_centers(image):
    """
    Build a list of labels and their centres

    :param image: a segmented and labelled image as a numpy array
    :returns: a list of (label, coordinate) tuples
    """
    from skimage.measure import regionprops

    # Find all labelled areas; disable caching so properties are only calculated if required
    rps = regionprops(image, cache=False)
    return [(r.label, r.centroid) for r in rps]


def replace_image_point_labels(image, labels):
    """
    Replace the label found at each of a list of points with a new label

    :param image: a segmented and labelled image as a numpy array
    :param labels: a list of (label, coordinate) tuples
    :returns: a relabelled image as a numpy array
    """
    img = image.copy()
    for label, point in labels:
        row, col = point
        # Find the existing label at the point
        index = img[int(row), int(col)]
        # Replace the existing label with the new one, excluding background
        if index > 0:
            img[img == index] = label
    return img
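For reference, a minimal usage sketch (assuming labels_ws_list and original_images are your lists of watershed label arrays and the matching intensity images):
from skimage.measure import regionprops

# Relabel the whole time-series so that region labels stay consistent
relabelled = standardise_labels_timeline(labels_ws_list, start_at_end=True, count_offset=1000)

# With consistent labels, each region's mean intensity can be tracked over time
intensity_over_time = {}
for lbl_img, orig in zip(relabelled, original_images):
    for r in regionprops(lbl_img, intensity_image=orig):
        intensity_over_time.setdefault(r.label, []).append(r.mean_intensity)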

# -*- coding: utf-8 -*-
"""
Created on %(date)s
@author: Ahmed Islam ElManawy
a.elmanawy_90@yahoo.com
"""
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import cv2
from skimage.measure import label, regionprops
from sklearn.cluster import KMeans
import numpy as np
## import image
img=cv2.imread('E:\\Data\\Arabidopsis Thaliana HSI image\\20170508\\binarry\\AQC_RT.jpg',1)
## lablelled image
label_image = label(img[:,:,0])
## combined image center using k-means
Center=[]
Box=[]
for region in regionprops(label_image):
    # take regions with large enough areas
    if region.area >= 10:
        # keep the bounding box and centre of each region
        Box.append(region.bbox)
        Center.append(region.centroid)
Center = np.asarray(Center)
Box = np.asarray(Box)
kmeans = KMeans(n_clusters=12, random_state=0).fit(Center)
labels = kmeans.labels_  # renamed from `label` to avoid shadowing skimage.measure.label
## plot the image with a rectangle around each k-means cluster of regions
fig, ax = plt.subplots(figsize=(10, 6))
ax.imshow(img)
for l in np.unique(labels):
    h = np.where(labels == l)
    B = Box[h, :]
    B = B[0, :, :]
    minr, minc, maxr, maxc = np.min(B[:, 0]), np.min(B[:, 1]), np.max(B[:, 2]), np.max(B[:, 3])
    rect = mpatches.Rectangle((minc, minr), maxc - minc, maxr - minr,
                              fill=False, edgecolor='red', linewidth=2)
    ax.add_patch(rect)
ax.set_axis_off()
plt.tight_layout()
plt.show()

Related

How to calculate the total area of pixels in each class in a multi-class segmented image

I have a multi-class segmented image consisting of labels of 4 different classes, represented in 4 different colors (dark blue, red, yellow and sky blue). I would like to calculate the total area of pixels in each class label of the segmented prediction.
I tried writing the code below to obtain the total number of pixels in each label, but I am not able to get a result that gives the total number of pixels for each corresponding class label.
import matplotlib.pyplot as plt
import numpy as np
from skimage import data, io, img_as_ubyte
from skimage.filters import threshold_multiotsu
# Read an image
image = io.imread("images/Ulcer_segmented.jpg")
# Apply multi-Otsu threshold
thresholds = threshold_multiotsu(image, classes=5)
# Digitize (segment) original image into multiple classes.
#np.digitize assign values 0, 1, 2, 3, ... to pixels in each class.
regions = np.digitize(image, bins=thresholds)
output = img_as_ubyte(regions) #Convert 64 bit integer values to uint8
plt.imsave("images/Ulcer_segmented..jpg", output)
props = measure.regionprops_table(label_image, output,
                                  properties=['label', 'area', 'equivalent_diameter',
                                              'mean_intensity', 'solidity'])
This is described in the docs:
from skimage import io
from skimage.measure import label, regionprops

# Read an image
image = io.imread("your/image.jpg")

# label image regions
label_image = label(image)

for region in regionprops(label_image):
    print(region.area)
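As a side note, if all that's needed is the total pixel count per class value (rather than per connected region), a rough sketch is to count the values in the class map directly; here class_map is assumed to be a single-channel array of class indices:
import numpy as np

# class_map: hypothetical single-channel array of class indices (0, 1, 2, ...)
class_values, pixel_counts = np.unique(class_map, return_counts=True)
print(dict(zip(class_values, pixel_counts)))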
Looks like you want to get an image histogram. The issue with using np.histogram or skimage.exposure.histogram is that your image is not single-channel, so these functions would give you a histogram of the flattened image, which would not yield the expected results.
The way you chose to overcome this problem is Otsu thresholding, which I'm not sure works, as the documentation states that it expects a single-channel (grayscale) image.
Knowing the colors used to represent your classes would help here; you could do something like:
colors = [
    [cls_0_rgb_color],
    [cls_1_rgb_color],
    [cls_2_rgb_color],
    [cls_3_rgb_color],
]
areas = [np.count_nonzero(np.all(img == c, axis=-1)) for c in colors]
If you don't know exactly what colors the classes have, you probably have to reduce the last dimension of your image so it uniquely represents the 3-dimensional color (I'm not sure exactly how this is done correctly, maybe someone smarter than me can answer this in a new question). What I would do is convert the image to HSV format and use the hue component as a class representation.
from skimage.color import rgb2hsv
hsv = rgb2hsv(image)
hue = hsv[:, :, 0]
areas, bin_edges = np.histogram(hue, bins=4)
What could be tricky here is deciphering which area corresponds to which class, but knowing approximately what colors to expect, and how colors are arranged in hue space, we could say the order would be red, yellow, light_blue, dark_blue or yellow, light_blue, dark_blue, red, since the red hue is symmetrical around 0 or 360 degrees. Checking the bin_edges vector could do the trick here.
# set red_threshold experimentally; class_order is just an illustrative name
if bin_edges[1] < red_threshold:
    class_order = ('red', 'yellow', 'light_blue', 'dark_blue')
else:
    class_order = ('yellow', 'light_blue', 'dark_blue', 'red')

Merge image-segments depending on length of the watershed-line in-between using Python, Numpy and Scikit-Image/OpenCV

I am working on a watershedding-based segmentation algorithm to segment fluorescence images such as this one:
As a result I obtain a Numpy array with labels for each segment. These are separated by watershed lines if the corresponding regions in the fluorescence image have a sufficiently large intensity drop-off between them. For very large intensity drop-offs they are completely separated through simple thresholding. The result for the image above is this:
My algorithm performs well for the vast majority of cases. However, it sometimes has a slight tendency to oversegment. Such as in this case from the image above:
Since these cases will be difficult to improve by working further on the intensity-based segmentation itself (and I run the risk of breaking other things), I want to instead selectively merge adjacent segments based on the length of the watershed-line between them and the averaged maximum width of the two segments above and below.
I know what I have to do on a pixel-for-pixel basis:
1. Find pixels that have two different label-values in their direct neighborhood. Store these pixels separately for each segment-pair (with the corresponding segment-labels).
2. Calculate the number of these pixels for each pair of adjacent segments to obtain the length of the watershed-line.
3. Calculate the maximum width (horizontally, for simplicity) of the adjacent segments.
4. Merge the adjacent segments if the watershed-line is longer than a given threshold-fraction (user-defined) of the averaged width of the two segments. I could do this by converting the labels to a binary mask, filling in the watershed line using the stored pixels where applicable, and relabelling the binary mask.
Since in Python iterating over individual pixels is generally slow, I am unsure how to write performant code for this. Therefore I am looking for suggestions on how to implement this with Numpy and Skimage (OpenCV is also an option).
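For illustration, a rough sketch of steps 1 and 2 (assuming labels is the label array with watershed-line pixels set to 0) could use array slicing to gather each line pixel's four neighbours, so only the line pixels themselves are looped over:
import numpy as np

def watershed_line_lengths(labels):
    """Rough sketch: count watershed-line pixels (value 0) lying between each
    pair of labelled segments. Returns {(label_a, label_b): pixel_count}."""
    padded = np.pad(labels, 1, mode='constant', constant_values=0)
    up = padded[:-2, 1:-1]      # neighbour above each pixel
    down = padded[2:, 1:-1]     # neighbour below
    left = padded[1:-1, :-2]    # neighbour to the left
    right = padded[1:-1, 2:]    # neighbour to the right
    neighbours = np.stack([up, down, left, right], axis=-1)  # shape (H, W, 4)

    counts = {}
    for n in neighbours[labels == 0]:   # iterate over line pixels only
        segs = np.unique(n[n > 0])
        if segs.size == 2:              # this pixel separates exactly two segments
            key = (int(segs[0]), int(segs[1]))
            counts[key] = counts.get(key, 0) + 1
    return counts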
You didn't provide how you got your initial segments. Despite this, I think improving the watershed lines could solve your problem and this can be done in the watershed hierarchy framework, with the Higra package.
I specify an initial ordering of the watershed by the image complement and recompute its watershed lines with another attribute (volume).
The intensity drop and area that you describe are the volume attribute, and you can control the segmentation by its threshold in the hierarchy.
Here is a working example:
import cv2
import numpy as np
import higra as hg
from skimage.morphology import remove_small_objects, label
import matplotlib.pyplot as plt

def main():
    img_path = "fig.png"
    img = cv2.imread(img_path)
    img = img[:, :, 0].copy()
    img = img.max() - img

    size = img.shape[:2]
    graph = hg.get_4_adjacency_graph(size)
    edge_weights = hg.weight_graph(graph, img, hg.WeightFunction.mean)

    tree, altitudes = hg.quasi_flat_zone_hierarchy(graph, edge_weights)
    attr = hg.attribute_volume(tree, altitudes)
    saliency = hg.saliency(tree, attr)

    # Take a look at this :)
    # grid = hg.graph_4_adjacency_2_khalimsky(graph, saliency)
    # plt.imshow(grid)
    # plt.show()

    attr_thold = np.mean(saliency) / 4  # arbitrary
    area_thold = 500  # arbitrary

    segments = hg.labelisation_horizontal_cut_from_threshold(tree, attr, attr_thold)
    segments = label(remove_small_objects(segments, area_thold))

    plt.imshow(segments)
    plt.show()

if __name__ == "__main__":
    main()
Here is the result.

Determining the average colour of a given circular sample of an image?

What I am trying to achieve is similar to photoshop/gimp's eyedropper tool: take a round sample of a given area in an image and return the average colour of that circular sample.
The simplest method I have found is to take a 'regular' square sample, mask it as a circle, then reduce it to 1 pixel, but this is very CPU-demanding (especially when repeated millions of times).
A more mathematically complex method is to take a square area and average only the pixels that fall within a circular area within that sample, but determining what pixel is or isn't within that circle, repeated, is CPU-demanding as well.
Is there a more succinct, less-CPU-demanding means to achieve this?
Here's a little example of skimage.draw.circle(), which doesn't actually draw a circle but gives you the coordinates of points within a circle, which you can then use to index Numpy arrays. (In newer versions of scikit-image, skimage.draw.circle has been replaced by skimage.draw.disk.)
#!/usr/bin/env python3
import numpy as np
from skimage.io import imsave
from skimage.draw import circle
# Make rectangular canvas of mid-grey
w, h = 200, 100
img = np.full((h, w), 128, dtype=np.uint8)
# Get coordinates of points within a central circle
Ycoords, Xcoords = circle(h//2, w//2, 45)
# Make all points in circle=200, i.e. fill circle with 200
img[Ycoords, Xcoords] = 200
# Get mean of points in circle
print(img[Ycoords, Xcoords].mean()) # prints 200.0
# DEBUG: Save image for checking
imsave('result.png',img)
I'm sure that there's a more succinct way to go about it, but:
import math
import numpy as np
import imageio as ioimg # as scipy's i/o function is now deprecated
from skimage.draw import circle
import matplotlib.pyplot as plt
# base sample dimensions (rest below calculated on this).
# Must be an odd number.
wh = 49
# tmp - this placement will be programmed later
dp = 500
#load work image (from same work directory)
img = ioimg.imread('830.jpg')
# convert to numpy array (dropping the alpha while we're at it)
np_img = np.array(img)[:,:,:3]
# take sample of resulting array
sample = np_img[dp:wh+dp, dp:wh+dp]
#==============
# set up numpy circle mask
## this mask will be multiplied against each RGB layer in extracted sample area
# set up basic square array
sample_mask = np.zeros((wh, wh), dtype=np.uint8)
# set up circle centre coords and radius values
xy, r = math.floor(wh/2), math.ceil(wh/2)
# use these values to populate circle area with ones
rr, cc = circle(xy, xy, r)
sample_mask[rr, cc] = 1
# add axis to make array multiplication possible (do I have to do this)
sample_mask = sample_mask[:, :, np.newaxis]
result = sample * sample_mask
# count number of nonzero values (this will be our mean divisor)
nz = np.count_nonzero(sample_mask)
sample_color = []
for c in range(result.shape[2]):
    sample_color.append(int(round(np.sum(result[:,:,c])/nz)))
print(sample_color) # will return array like [225, 205, 170]
plt.imshow(result, interpolation='nearest')
plt.show()
Perhaps asking this question here wasn't necessary (it has been a while since I've python-ed, and was hoping that some new library had been developed for this since), but I hope this can be a reference for others who have the same goal.
This operation will be performed for every pixel in the image (sometimes millions of times) for thousands of images (scanned pages), so that is where my performance worries come from; but thanks to numpy, this code is pretty quick.

How to display pixel variance over time from a set of images? (brightness changing object imaged over time)

I would like to display with a heat map the change in intensity/brightness over time of a set of images. These are images of a brightness-changing object imaged over time. This would be useful to see which parts of the object (which pixels) have the highest variance in brightness.
I'm currently using OpenCV to manipulate these images, but cannot find any straightforward way of getting this heatmap. In addition to this, if anyone could suggest a way of calculating the variance without having to create a separate array for the values for each pixel (maybe calculating it directly from the stack of images?) it would be helpful too.
This is an example of what one of the images looks like:
Generate some synthetic data:
All pixels change with a std of 3
Some pixels change (in an X shape) with a std of 5
Code:
import cv2
import numpy as np
import matplotlib.pyplot as plt

lena = cv2.imread("lena.png", 0)
lena = cv2.resize(lena, (100, 100))

images = np.zeros((30, *lena.shape))
images[0] = lena.astype('float64')

# X-shaped mask of pixels that drift faster than the rest
mask = np.rot90(np.eye(100)) + np.eye(100)

for i in range(1, 30):
    img = images[i-1].copy()   # copy so the previous frame isn't modified in place
    img += np.random.randn(*lena.shape) * 3
    img += mask * 5
    images[i] = img
The set of images created look like below
code to render images:
plt.close('all')
plt.figure(figsize=(25, 25))
for i in range(25):
    plt.subplot(5, 5, i+1)
    plt.imshow(images[i], cmap='gray')
    plt.xticks([])
    plt.yticks([])
plt.tight_layout()
plt.show()
Finally, a heatmap to find the portions of the image which change at a different speed:
import seaborn as sns; sns.set()
ax = sns.heatmap(images.std(axis=0))
plt.show()
We got our mask back.
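If you want the variance rather than the standard deviation, or prefer plain matplotlib over seaborn, the same stacked array works directly (a small sketch using the images array from above):
# Per-pixel variance straight from the image stack; no per-pixel lists needed
variance = images.var(axis=0)       # one value per pixel, same shape as a single frame
plt.imshow(variance, cmap='hot')    # render as a heat map
plt.colorbar()
plt.show()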

label2rgb implementation for OpenCV

Does OpenCV have a function that can visualise a Mat of labels? I.e., similar to Matlab's label2rgb().
The closest I can find is: cv2.applyColorMap(cv2.equalizeHist(segments), cv2.COLORMAP_JET)
However this is not a desirable method when doing segmentation of video where the number of labels changes from one frame to the next. The reason being: one frame will have 2 labels (0 and 1, representing sky and ground), so using jet it might show those 2 segments as dark blue and red respectively. The next frame has 3 labels (0, 1, 2 - sky, ground and car), so the ground segment has now changed colour from red to yellow. When you visualise this, the same segments keep changing colour instead of remaining a consistent colour (red).
Therefore a function like Matlab's label2rgb() would be really useful, if it exists.
I like to use cv2.LUT when there are fewer than 256 labels (since it only works with uint8). If you have more than 256 labels you can always convert to 256 values using (labels % 256).astype(np.uint8).
Then with your labels you simply call: rgb = cv2.LUT(labels, lut).
The only remaining problem is to create a lookup-table (lut) for your labels. You can use matplotlib colormaps as follows:
import numpy as np
import matplotlib.pyplot as plt
import cv2
def label2rgb(labels):
    """
    Convert a label image to an rgb image using a matplotlib colormap
    """
    label_range = np.linspace(0, 1, 256)
    # replace viridis with a matplotlib colormap of your choice
    lut = np.uint8(plt.cm.viridis(label_range)[:, 2::-1] * 256).reshape(256, 1, 3)
    return cv2.LUT(cv2.merge((labels, labels, labels)), lut)
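A quick usage sketch (assuming labels is an integer label image, e.g. from a watershed or skimage.measure.label):
# Fold the label values into the uint8 range, then colorize
labels8 = (labels % 256).astype(np.uint8)
rgb = label2rgb(labels8)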
For many cases it is better to have the colors of adjacent labels be wildly different. Richard Szeliski gives pseudocode to achieve this in his book, appendix C2: Pseudocolor Generation. I've worked with his algorithm and variants of it in the past; it is fairly straightforward to code something up. Here is sample code using his algorithm:
import numpy as np
import cv2

def gen_lut():
    """
    Generate a label colormap compatible with opencv lookup table, based on
    Richard Szeliski's algorithm in `Computer Vision: Algorithms and Applications`,
    appendix C2 `Pseudocolor Generation`.
    :Returns:
        color_lut : opencv compatible color lookup table
    """
    tobits = lambda x, o: np.array(list(np.binary_repr(x, 24)[o::-3]), np.uint8)
    arr = np.arange(256)
    r = np.concatenate([np.packbits(tobits(x, -3)) for x in arr])
    g = np.concatenate([np.packbits(tobits(x, -2)) for x in arr])
    b = np.concatenate([np.packbits(tobits(x, -1)) for x in arr])
    return np.concatenate([[[b]], [[g]], [[r]]]).T

def labels2rgb(labels, lut):
    """
    Convert a label image to an rgb image using a lookup table
    :Parameters:
        labels : an image of type np.uint8 2D array
        lut : a lookup table of shape (256, 3) and type np.uint8
    :Returns:
        colorized_labels : a colorized label image
    """
    return cv2.LUT(cv2.merge((labels, labels, labels)), lut)

if __name__ == '__main__':
    labels = np.arange(256).astype(np.uint8)[np.newaxis, :]
    lut = gen_lut()
    rgb = labels2rgb(labels, lut)
And here is the colormap:
