label2rgb implementation for OpenCV - python

Does OpenCV have a function that can visualise a Mat of labels? I.e., something similar to MATLAB's label2rgb().
The closest I can find is: cv2.applyColorMap(cv2.equalizeHist(segments), cv2.COLORMAP_JET)
However, this is not a desirable method when segmenting video where the number of labels changes from one frame to the next. The reason is that one frame might have 2 labels (0 and 1, representing sky and ground), so with JET those 2 segments might show as dark blue and red respectively. The next frame has 3 labels (0, 1, 2: sky, ground and car), so the ground segment has now changed colour from red to yellow. When you visualise this, the same segment keeps changing colour instead of remaining a consistent colour (red).
Therefore a function like MATLAB's label2rgb() would be really useful, if it exists.

I like to use cv2.LUT when there are fewer than 256 labels (since it only works with uint8). If you have more than 256 labels you can always convert to 256 values using (labels % 256).astype(np.uint8).
Then with your labels you simply call: rgb = cv2.LUT(labels, lut).
The only remaining problem is to create a lookup table (lut) for your labels. You can use matplotlib colormaps as follows:
import numpy as np
import matplotlib.pyplot as plt
import cv2
def label2rgb(labels):
    """
    Convert a labels image to an RGB image using a matplotlib colormap
    """
    label_range = np.linspace(0, 1, 256)
    # replace viridis with a matplotlib colormap of your choice;
    # scale by 255 (not 256) so values of exactly 1.0 don't wrap around in uint8
    lut = np.uint8(plt.cm.viridis(label_range)[:, 2::-1] * 255).reshape(256, 1, 3)
    return cv2.LUT(cv2.merge((labels, labels, labels)), lut)
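As a quick sanity check, here is a minimal usage sketch (the label values, image size and output filename are made up for illustration):

import numpy as np
import cv2

# hypothetical label image with 5 classes; spreading the values across 0-255
# makes adjacent labels easier to tell apart with a smooth colormap like viridis
labels = (np.random.randint(0, 5, size=(120, 160)) * 51).astype(np.uint8)
rgb = label2rgb(labels)
cv2.imwrite('labels_colored.png', rgb)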
For many cases it is better to have the colors of adjacent labels be wildly different. Richard Szeliski gives pseudocode to achieve this in his book, appendix C2: Pseudocolor Generation. I've worked with his algorithm and variants of it in the past; it is fairly straightforward to code something up. Here is sample code using his algorithm:
import numpy as np
import cv2
def gen_lut():
    """
    Generate a label colormap compatible with opencv lookup table, based on
    Richard Szeliski's algorithm in `Computer Vision: Algorithms and Applications`,
    appendix C2 `Pseudocolor Generation`.
    :Returns:
        color_lut : opencv compatible color lookup table
    """
    tobits = lambda x, o: np.array(list(np.binary_repr(x, 24)[o::-3]), np.uint8)
    arr = np.arange(256)
    r = np.concatenate([np.packbits(tobits(x, -3)) for x in arr])
    g = np.concatenate([np.packbits(tobits(x, -2)) for x in arr])
    b = np.concatenate([np.packbits(tobits(x, -1)) for x in arr])
    return np.concatenate([[[b]], [[g]], [[r]]]).T

def labels2rgb(labels, lut):
    """
    Convert a label image to an rgb image using a lookup table
    :Parameters:
        labels : an image of type np.uint8 2D array
        lut : a lookup table of shape (256, 3) and type np.uint8
    :Returns:
        colorized_labels : a colorized label image
    """
    return cv2.LUT(cv2.merge((labels, labels, labels)), lut)

if __name__ == '__main__':
    labels = np.arange(256).astype(np.uint8)[np.newaxis, :]
    lut = gen_lut()
    rgb = labels2rgb(labels, lut)
And here is the colormap:

Related

How to Calculate total area of pixels in each class in multi class segmented image

I have a multi-class segmented image consisting of labels of 4 different classes represented in 4 different colors (dark blue, red, yellow and sky blue), and I would like to calculate the total area of pixels in each class label of the segmented prediction.
I tried writing this code to obtain the total number of pixels in each label, but I am not getting any result that contains the total number of pixels for each corresponding class label.
import matplotlib.pyplot as plt
import numpy as np
from skimage import data, io, img_as_ubyte
from skimage.filters import threshold_multiotsu
# Read an image
image = io.imread("images/Ulcer_segmented.jpg")
# Apply multi-Otsu threshold
thresholds = threshold_multiotsu(image, classes=5)
# Digitize (segment) original image into multiple classes.
#np.digitize assign values 0, 1, 2, 3, ... to pixels in each class.
regions = np.digitize(image, bins=thresholds)
output = img_as_ubyte(regions) #Convert 64 bit integer values to uint8
plt.imsave("images/Ulcer_segmented..jpg", output)
props = measure.regionprops_table(label_image, output,
                                  properties=['label',
                                              'area', 'equivalent_diameter',
                                              'mean_intensity', 'solidity'])
This is described in the docs:
from skimage import io
from skimage.measure import label, regionprops

# Read an image
image = io.imread("your/image.jpg")

# label image regions
label_image = label(image)

for region in regionprops(label_image):
    print(region.area)
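If all you need is the raw pixel count per class, a small sketch (assuming label_image, or the output of np.digitize in the question, holds integer class labels) is to count the unique values directly:

import numpy as np

# count how many pixels carry each label value
values, counts = np.unique(label_image, return_counts=True)
for v, c in zip(values, counts):
    print(f"label {v}: {c} pixels")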
It looks like you want an image histogram. The issue with using np.histogram or skimage.exposure.histogram is that your image is not single-channel, so with these functions you would get a histogram of the flattened image, which would not yield the expected results.
The way you chose to work around this is Otsu thresholding, which I'm not sure works here, since the documentation states that it expects a single-channel (grayscale) image.
Knowing the colors used to represent your classes would help; you could do something like:
colors = [
    cls_0_rgb_color,
    cls_1_rgb_color,
    cls_2_rgb_color,
    cls_3_rgb_color,
]
areas = [np.count_nonzero(np.all(img == c, axis=-1)) for c in colors]
If you don't know exactly which colors the classes have, you probably need to reduce the last dimension of your image to something that uniquely represents the 3-dimensional color (I'm not sure exactly how this is done correctly; maybe someone smarter than me can answer this in a new question). What I would do is convert the image to HSV format and use the hue component as a class representation.
from skimage.color import rgb2hsv
hsv = rgb2hsv(image)
hue = hsv[:, :, 0]
areas, bin_edges = np.histogram(hue, bins=4)
What could be tricky here is deciphering which area corresponds to which class, but knowing approximately which colors to expect, and how colors are ordered in hue space, the order would be either red, yellow, light_blue, dark_blue or yellow, light_blue, dark_blue, red, since red hue is symmetrical around 0 (or 360) degrees. Checking the bin_edges vector could do the trick here:
# set red_threshold experimentally
if bin_edges[1] < red_threshold:
    class_order = ("red", "yellow", "light_blue", "dark_blue")
else:
    class_order = ("yellow", "light_blue", "dark_blue", "red")

How to downscale an image without losing discrete values?

I have an image of a city with discrete colors (Green=meadow, black=buildings, white/yellow=roads). Using Pillow, I import the picture in my (Python) program and convert it to a Numpy array with discrete values for the colors (i.e. green pixels become 1's, black pixels become 2's, etc).
I want to downscale the resolution of the image (for computational purposes) while retaining as much information as possible. However, using Pillow's resize() method, colors deviate from these discrete values. How can I downscale this image while (most importantly) retaining the discrete colors and (also important) losing as little information as possible?
Here an example of the image: https://i.imgur.com/6Tef55H.png
EDIT: per request, some code:
from PIL import Image
import numpy as np

picture = Image.open("some_image.png")
width, height = picture.size
pic_array = np.zeros((width, height))

# Turn the image into discrete values
for i in range(0, width):
    for j in range(0, height):
        red, green, blue = picture.getpixel((i, j))
        if red == a and green == b and blue == c:
            # An example of how discrete colors are converted to values
            pic_array[i][j] = 1
Scaling can be done in two ways:
1) Scaling the original image using Pillow's resize method, or
2) rescaling the final array using something like:
scaled_array = pic_array[0:width:5, 0:height:5]
Option 1 does well in terms of retaining information but loses the discrete values, while option 2 does it the other way around.
I was interested in this question and wrote some code to try out some ideas - specifically the "mode" filter suggested by @jasonharper in the comments. So, I programmed it up.
First of all the input image is not 4 nicely defined classes, but actually has 6,504 different colours, so I made a palette of 4 colours using ImageMagick like this:
magick xc:black xc:white xc:yellow xc:green +append palette.png
Here it is enlarged - in reality it is 4x1 pixels:
Then I mapped the colours in the image to the palette of 4 discrete colours:
magick map.png +dither -remap palette.png start.png
Then I tried this code to calculate the median and the mode of each 3x3 window:
#!/usr/bin/env python3
from PIL import Image
import numpy as np
from scipy import stats
from skimage.util import view_as_blocks
# Open image and make into Numpy array
im = Image.open('start.png')
na = np.array(im)
# Make a view as 3x3 blocks - crop anything not a multiple of 3
block_shape=(3,3)
view = view_as_blocks(na[:747,:], block_shape)
flatView = view.reshape(view.shape[0], view.shape[1], -1) # now (249,303,9)
# Get median of each 3x3 block
resMedian = np.median(flatView, axis=2).astype(np.uint8)
Image.fromarray(resMedian*60).save('resMedian.png') # arbitrary scaling by 60 for contrast
# Get mode of each 3x3 block
resMode = stats.mode(flatView, axis=2)[0].reshape((249,303)).astype(np.uint8)
Image.fromarray(resMode*60).save('resMode.png') # arbitrary scaling by 60 for contrast
Here is the result of the median filter:
And here is the result of the "mode" filter which is indeed better IMHO:
Here is an animated comparison:
If anyone wants to take the code and adapt it to try new ideas, please feel free!
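If approximate results are acceptable, a simpler alternative (not part of the answer above, just a hedged sketch) is plain nearest-neighbour downscaling, which can never introduce new values because every output pixel is copied from an input pixel:

import numpy as np
from scipy import ndimage

# hypothetical label array with discrete values 0, 1, 2
pic_array = np.random.randint(0, 3, size=(750, 909))

# order=0 selects nearest-neighbour interpolation, so only existing
# discrete values can appear in the downscaled output
scaled = ndimage.zoom(pic_array, 1 / 3, order=0)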

How can a choropleth map be combined with a shaded raster in Python?

I want to plot characteristics of areas on a map, but with very uneven population density, the larger tiles misleadingly attract attention. Think of averages (of test scores, say) by ZIP codes.
High-resolution maps are available to separate inhabited locales and even density within them. The Python code below does produce a raster colored according to the average such density for every pixel.
However, what I would really need is coloring from a choropleth map of the same area (ZIP codes of Hungary in this case), with the coloring affecting only points that would show up on the raster anyway. The raster would only determine the gamma of the pixel (or maybe its height in some 3D analog). What is a good way to go about this?
A rasterio.mask.mask somehow?
(By the way, an overlay with the ZIP code boundaries would also be nice, but I have a better understanding of how that could work with GeoViews.)
import rasterio
import os
import datashader as ds
from datashader import transfer_functions as tf
import xarray as xr
from matplotlib.cm import viridis
# download a GeoTIFF from this location: https://data.humdata.org/dataset/hungary-high-resolution-population-density-maps-demographic-estimates
data_path = '~/Downloads/'
file_name = 'HUN_youth_15_24.tif' # young people
file_path = os.path.join(data_path, file_name)
src = rasterio.open(file_path)
da = xr.open_rasterio(file_path)
cvs = ds.Canvas(plot_width=5120, plot_height=2880)
img = tf.shade(cvs.raster(da,layer=1), cmap=viridis)
ds.utils.export_image(img, "map", export_path=data_path, fmt=".png")
I am not sure if I understand, so please just tell me if I am mistaken. If I understood correctly, you can achieve what you want using numpy only (I am sure translating this to xarray will be easy):
# ---- snipped code already in the question -----
import numpy as np
import matplotlib.pyplot as plt
# fake a choropleth in a dirty, fast way
height, width = 2880, 5120
choropleth = np.empty((height, width, 3,), dtype=np.uint8)
CHUNKS = 10
x_size = width // CHUNKS
for x_step, x in enumerate(range(0, width, width // CHUNKS)):
    y_size = height // CHUNKS
    for y_step, y in enumerate(range(0, height, height // CHUNKS)):
        choropleth[y: y+y_size, x: x+x_size] = (255 - x_step*255//CHUNKS,
                                                0, y_step*255//CHUNKS)
plt.figure("Fake Choropleth")
plt.imshow(choropleth)
# Option 1: play with alpha only
outimage = np.empty((height, width, 4,), dtype=np.uint8) # RGBA image
outimage[:, :, 3] = img # Set alpha channel
outimage[:, :, :3] = choropleth # Set color
plt.figure("Alpha filter only")
plt.imshow(outimage)
# Option 2: clear the empty points
outimage[img == 0, :3] = 0 # black background; use 255 instead for white
plt.figure("Points erased")
plt.imshow(outimage[:,:,:3]) # change to 'outimage' to see the image with alpha
Results:
Dummy choropleth
Alpha filtered figure
Black background, no alpha filter
Note that the images might seem different because of matplotlib's antialiasing.
Datashader will let you combine data of many types into a common raster shape, where you can do whatever masking or filtering you like using xarray operations based on NumPy. E.g. you can render the choropleth as polygons, then mask out uninhabited regions. How to normalize by area is up to you, and could get very complex, but it should be doable once you define precisely what you intend to do. See the transform code at https://examples.pyviz.org/nyc_taxi/nyc_taxi.html for examples of how to do this, as in:
def transform(overlay):
    picks = overlay.get(0).redim(pickup_x='x', pickup_y='y')
    drops = overlay.get(1).redim(dropoff_x='x', dropoff_y='y')
    pick_agg = picks.data.Count.data
    drop_agg = drops.data.Count.data
    more_picks = picks.clone(picks.data.where(pick_agg > drop_agg))
    more_drops = drops.clone(drops.data.where(drop_agg > pick_agg))
    return (hd.shade(more_drops, cmap=['lightcyan', "blue"]) *
            hd.shade(more_picks, cmap=['mistyrose', "red"]))

picks = hv.Points(df, ['pickup_x', 'pickup_y'])
drops = hv.Points(df, ['dropoff_x', 'dropoff_y'])

((hd.rasterize(picks) * hd.rasterize(drops))).apply(transform).opts(
    bgcolor='white', xaxis=None, yaxis=None, width=900, height=500)
Here it's not really masking anything, but hopefully you can see how masking would work: just get some rasterized object, then do a mathematical operation on it using some other rasterized object. Here the steps are all done in a function using HoloViews objects so that you can have a live interactive plot. You would probably want to work out the approach first using the more basic code at datashader.org, where you only deal with xarray objects and not a HoloViews pipeline; you can then translate what you did for a single xarray into the HoloViews pipeline that allows full interactive usage with pan, zoom, axes, etc.
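As a rough illustration of that masking step (a sketch only; the array names, sizes and the density threshold are assumptions, not taken from the example above), the idea with plain xarray objects would be something like:

import numpy as np
import xarray as xr

# hypothetical rasters on the same grid: per-pixel population density
# and a per-pixel choropleth value (e.g. average test score by ZIP code)
ny, nx = 288, 512
coords = {'y': np.arange(ny), 'x': np.arange(nx)}
density = xr.DataArray(np.random.rand(ny, nx), coords=coords, dims=('y', 'x'))
choro_value = xr.DataArray(np.random.rand(ny, nx), coords=coords, dims=('y', 'x'))

# keep the choropleth value only where anyone actually lives;
# everywhere else becomes NaN and renders as background
masked = choro_value.where(density > 0)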

Combining Dynamic Labels / Regions (Python, scikit-image)

I have a set of 480 original images and 480 labels (one for each original) that have been segmented and labelled via a Watershed process. I use the labels, labels_ws, when looking for the mean intensity of various regions in the original images, original_images. These images form a time-series and I am looking to track the mean intensity in each labelled region of this time-series.
Finding the mean intensity of the regions in a single image is pretty easily done in scikit-image using the following code:
regions = measure.regionprops(labels_ws, intensity_image = original_image)
print(["(%s, %s)" % (r, r.mean_intensity) for r in regions])
which prints a whole lot of output that looks like this:
'(skimage.measure._regionprops._RegionProperties object at 0x000000000E5F3F98, 35.46153846153846)',
'(skimage.measure._regionprops._RegionProperties object at 0x000000000E5F3FD0, 47.0)',
'(skimage.measure._regionprops._RegionProperties object at 0x000000000E7B6048, 49.96666666666667)',
'(skimage.measure._regionprops._RegionProperties object at 0x000000000E7B6080, 23.0)',
'(skimage.measure._regionprops._RegionProperties object at 0x000000000E7B60B8, 32.1)',
Each image probably has around 100-150 regions. The regions are areas in the image where there is a neuron luminescing in a tissue sample during the time the image was taken. As the time-series goes on, the regions (neurons) luminesce in a periodic manner and thus the intensity data for each region should look like a periodic function.
The problem I am having is that in each successive image, the labels / regions are slightly different as the luminescence in each region follows its periodic behaviour. Thus, labels / regions "pop-in/out" over the duration of the time series. I also can't guarantee that the size of, let's say, Region_1 when it first luminesces will be the same size as it is when it luminesces for a second or third time (however any difference is slight, just a couple of pixels).
All of that said, is there a way to combine all of my labels in some way to form a single label that I can track? Should I combine all of the original images in some way then create a master label? How do I handle regions that will definitely overlap, but might be different shapes / sizes by a couple of pixels? Thanks!
I had a similar problem where I wanted to track changing segmented regions over time. My solution is to change all the labels in every image at the center point of each segmented region. This has the effect of propagating the labels through to all the other images.
Of course, this assumes that the regions stay in roughly the same place throughout.
You can see the difference in the animation: on the left the labels are constantly changing and on the right they stay consistent. It works despite the missing frames and shifting regions.
Animation link: https://imgur.com/a/e1Q7V6O#o4t9HyE
(I don't have enough rep to post the image directly)
Just send your list of segmented and labelled images to standardise_labels_timeline
def standardise_labels_timeline(images_list, start_at_end=True, count_offset=1000):
    """
    Replace labels on similar images to allow tracking over time

    :param images_list: a list of segmented and labelled images as numpy arrays
    :param start_at_end: relabels the images beginning at the end of the list
    :param count_offset: an int greater than the total number of expected labels in a single image
    :returns: a list of relabelled images as numpy arrays
    """
    import numpy as np
    images = list(images_list)
    if start_at_end:
        images.reverse()

    # Relabel all images to ensure there are no duplicates
    for image in images:
        for label in np.unique(image):
            if label > 0:
                count_offset += 1
                image[image == label] = count_offset

    # Ensure labels are propagated through image timeline
    for i, image in enumerate(images):
        labels = get_labelled_centers(image)

        # Apply labels to all subsequent images
        for j in range(i, len(images)):
            images[j] = replace_image_point_labels(images[j], labels)

    if start_at_end:
        images.reverse()

    return images

def get_labelled_centers(image):
    """
    Builds a list of labels and their centers

    :param image: a segmented and labelled image as a numpy array
    :returns: a list of (label, centroid) tuples
    """
    from skimage.measure import regionprops

    # Find all labelled areas; disable caching so properties are only calculated if required
    rps = regionprops(image, cache=False)
    return [(r.label, r.centroid) for r in rps]

def replace_image_point_labels(image, labels):
    """
    Replace the label found at each of a list of points with a new label

    :param image: a segmented and labelled image as a numpy array
    :param labels: a list of (label, centroid) tuples
    :returns: a relabelled image as a numpy array
    """
    img = image.copy()
    for label, point in labels:
        row, col = point
        # Find the existing label at the point
        index = img[int(row), int(col)]
        # Replace the existing label with the new one, excluding background
        if index > 0:
            img[img == index] = label
    return img
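A minimal usage sketch (the two frames below are made-up toy label arrays, not data from the question):

import numpy as np

# two tiny labelled frames of the same scene; the same region carries
# different label values in each frame
frame_a = np.array([[0, 1, 1, 0],
                    [0, 1, 1, 0],
                    [2, 2, 0, 0]])
frame_b = np.array([[0, 3, 3, 0],
                    [0, 3, 3, 0],
                    [5, 5, 0, 0]])

tracked = standardise_labels_timeline([frame_a, frame_b])
# corresponding regions now carry the same label in every frame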
# -*- coding: utf-8 -*-
"""
Created on %(date)s
@author: Ahmed Islam ElManawy
a.elmanawy_90@yahoo.com
"""
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import cv2
from skimage.measure import label, regionprops
from sklearn.cluster import KMeans
import numpy as np

## import image
img = cv2.imread('E:\\Data\\Arabidopsis Thaliana HSI image\\20170508\\binarry\\AQC_RT.jpg', 1)

## labelled image
label_image = label(img[:, :, 0])

## combine region centers using k-means
Center = []
Box = []
for region in regionprops(label_image):
    # take regions with large enough areas
    if region.area >= 10:
        # record the bounding box and centroid of each region
        Box.append(region.bbox)
        Center.append(region.centroid)

Center = np.asarray(Center)
Box = np.asarray(Box)
kmeans = KMeans(n_clusters=12, random_state=0).fit(Center)
label = kmeans.labels_

## plot image with the different areas
fig, ax = plt.subplots(figsize=(10, 6))
ax.imshow(img)
for l in label:
    h = np.where(label == l)
    B = Box[h, :]
    B = B[0, :, :]
    minr, minc, maxr, maxc = np.min(B[:, 0]), np.min(B[:, 1]), np.max(B[:, 2]), np.max(B[:, 3])
    # plt.imshow(img2[11:88, 2:94,:])
    rect = mpatches.Rectangle((minc, minr), maxc - minc, maxr - minr,
                              fill=False, edgecolor='red', linewidth=2)
    ax.add_patch(rect)

ax.set_axis_off()
plt.tight_layout()
plt.show()

Edge detection for image stored in matrix

I represent images in the form of 2-D arrays. I have this picture:
How can I get the pixels that are directly on the boundaries of the gray region and colorize them?
I want to get the coordinates of the matrix elements in green and red separately. I have only white, black and gray regions on the matrix.
The following should hopefully work for your needs (or at least help). The idea is to split the image into its regions using logical checks based on threshold values. The edges between these regions can then be detected by using numpy's roll to shift pixels in x and y and comparing to see whether we are at an edge:
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
from skimage.morphology import closing
thresh1 = 127
thresh2 = 254
#Load image
im = sp.misc.imread('jBD9j.png')
#Get threshold mask for different regions
gryim = np.mean(im[:,:,0:2],2)
region1 = (thresh1<gryim)
region2 = (thresh2<gryim)
nregion1 = ~ region1
nregion2 = ~ region2
#Plot figure and two regions
fig, axs = plt.subplots(2,2)
axs[0,0].imshow(im)
axs[0,1].imshow(region1)
axs[1,0].imshow(region2)
#Clean up any holes, etc (not needed for simple figures here)
#region1 = sp.ndimage.morphology.binary_closing(region1)
#region1 = sp.ndimage.morphology.binary_fill_holes(region1)
#region1.astype('bool')
#region2 = sp.ndimage.morphology.binary_closing(region2)
#region2 = sp.ndimage.morphology.binary_fill_holes(region2)
#region2.astype('bool')
#Get location of edge by comparing array to its
#inverse shifted by a few pixels
shift = -2
edgex1 = (region1 ^ np.roll(nregion1,shift=shift,axis=0))
edgey1 = (region1 ^ np.roll(nregion1,shift=shift,axis=1))
edgex2 = (region2 ^ np.roll(nregion2,shift=shift,axis=0))
edgey2 = (region2 ^ np.roll(nregion2,shift=shift,axis=1))
#Plot location of edge over image
axs[1,1].imshow(im)
axs[1,1].contour(edgex1,2,colors='r',lw=2.)
axs[1,1].contour(edgey1,2,colors='r',lw=2.)
axs[1,1].contour(edgex2,2,colors='g',lw=2.)
axs[1,1].contour(edgey2,2,colors='g',lw=2.)
plt.show()
Which gives the result shown below. For simplicity I've used roll with the inverse of each region. You could roll each successive region onto the next to detect edges.
Thank you to @Kabyle for offering a reward, this is a problem that I spent a while looking for a solution to. I tried scipy skeletonize, feature.canny, the topology module and OpenCV with limited success... This way was the most robust for my case (droplet interface tracking). Hope it helps!
There is a very simple solution to this: by definition, any pixel which has both white and gray neighbors is on your "red" edge, and any pixel with both gray and black neighbors is on the "green" edge. The lightest/darkest neighbors are returned by the maximum/minimum filters in skimage.filters.rank, and a binary combination of masks of pixels whose lightest/darkest neighbors are white/gray or gray/black respectively produces the edges.
Result:
A worked solution:
import numpy
import skimage.filters.rank
import skimage.morphology
import skimage.io
# convert image to a uint8 image which only has 0, 128 and 255 values
# the source png image provided has other levels in it so it needs to be thresholded - adjust the thresholding method for your data
img_raw = skimage.io.imread('jBD9j.png', as_grey=True)
img = numpy.zeros_like(img_raw, dtype=numpy.uint8)
img[:,:] = 128
img[ img_raw < 0.25 ] = 0
img[ img_raw > 0.75 ] = 255
# define "next to" - this may be a square, diamond, etc
selem = skimage.morphology.disk(1)
# create masks for the two kinds of edges
black_gray_edges = (skimage.filters.rank.minimum(img, selem) == 0) & (skimage.filters.rank.maximum(img, selem) == 128)
gray_white_edges = (skimage.filters.rank.minimum(img, selem) == 128) & (skimage.filters.rank.maximum(img, selem) == 255)
# create a color image
img_result = numpy.dstack( [img,img,img] )
# assign colors to edge masks
img_result[ black_gray_edges, : ] = numpy.asarray( [ 0, 255, 0 ] )
img_result[ gray_white_edges, : ] = numpy.asarray( [ 255, 0, 0 ] )
skimage.io.imshow(img_result)
P.S. Pixels which have black and white neighbors, or all three colors neighbors, are in an undefined category. The code above doesn't color those. You need to figure out how you want the output to be colored in those cases; but it is easy to extend the approach above to produce another mask or two for that.
P.S. The edges are two pixels wide. There is no getting around that without more information: the edges are between two areas, and you haven't defined which one of the two areas you want them to overlap in each case, so the only symmetrical solution is to overlap both areas by one pixel.
P.S. This counts the pixel itself as its own neighbor. An isolated white or black pixel on gray, or vice versa, will be considered as an edge (as well as all the pixels around it).
While plonser's answer may be rather straightforward to implement, I see it failing when it comes to sharp and thin edges. Nevertheless, I suggest you use part of his approach as preconditioning.
In a second step you want to use the Marching Squares Algorithm. According to the documentation of scikit-image, it is
a special case of the marching cubes algorithm (Lorensen, William and Harvey E. Cline. Marching Cubes: A High Resolution 3D Surface Construction Algorithm. Computer Graphics (SIGGRAPH 87 Proceedings) 21(4) July 1987, p. 163-170).
There even exists a Python implementation as part of the scikit-image package. I have been using this algorithm (my own Fortran implementation, though) successfully for edge detection of eye diagrams in communications engineering.
Ad 1: Preconditioning
Create a copy of your image and make it two color only, e.g. black/white. The coordinates remain the same, but you make sure that the algorithm can properly make a yes/no-decision independent from the values that you use in your matrix representation of the image.
Ad 2: Edge Detection
Wikipedia as well as various blogs provide a pretty elaborate description of the algorithm in various languages, so I will not go into its details. However, let me give you some practical advice:
Your image has open boundaries at the bottom. Instead of modifying the algorithm, you can artificially add another row of pixels (black or grey) to bound the white/grey areas; a short sketch of this padding idea follows at the end of this answer.
The choice of the starting point is critical. If there are not too many images to be processed, I suggest you select it manually. Otherwise you will need to define rules. Since the Marching Squares Algorithm can start anywhere inside a bounded area, you could choose any pixel of a given color/value to detect the corresponding edge (it will initially start walking in one direction to find an edge).
The algorithm returns the exact 2D positions, e.g. (x/y)-tuples. You can either
iterate through the list and colorize the corresponding pixels by assigning a different value or
create a mask to select parts of your matrix and assign the value that corresponds to a different color, e.g. green or red.
Finally: Some Post-Processing
I suggested adding an artificial boundary to the image. This has two advantages:
1. The Marching Squares Algorithm works out of the box.
2. There is no need to distinguish between image boundary and the interface between two areas within the image. Just remove the artificial boundary once you are done setting the colorful edges -- this will remove the colored lines at the boundary of the image.
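As a rough sketch of that padding idea (this uses scikit-image's find_contours as the marching-squares implementation; the toy array and the 0.5 contour level are assumptions for illustration):

import numpy as np
from skimage import measure

# hypothetical two-colour copy of the image: 0 = background, 1 = region of interest,
# with the region touching the open bottom boundary
binary = np.zeros((6, 8))
binary[3:, 3:6] = 1

# pad with a background border so contours close at the image edge
padded = np.pad(binary, 1, mode='constant', constant_values=0)
contours = measure.find_contours(padded, 0.5)

# shift coordinates back to the original image frame and discard the border
contours = [c - 1 for c in contours]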
Basically, by following pyStarter's suggestion of using the marching squares algorithm from scikit-image, the desired contours can be extracted with the following code:
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
from skimage import measure
import scipy.ndimage as ndimage
from skimage.color import rgb2gray
from pprint import pprint
#Load image
im = rgb2gray(sp.misc.imread('jBD9j.png'))
n, bins_edges = np.histogram(im.flatten(),bins = 100)
# Skip the black area, and assume two distinct regions, white and grey
max_counts = np.sort(n[bins_edges[0:-1] > 0])[-2:]
thresholds = np.select(
    [max_counts[i] == n for i in range(max_counts.shape[0])],
    [bins_edges[0:-1]] * max_counts.shape[0]
)
# keep only the non-zero threshold values
thresholds = thresholds[thresholds > 0]
fig, axs = plt.subplots()
# Display image
axs.imshow(im, interpolation='nearest', cmap=plt.cm.gray)
colors = ['r','g']
for i, threshold in enumerate(thresholds):
    contours = measure.find_contours(im, threshold)
    # Display all contours found for this threshold
    for n, contour in enumerate(contours):
        axs.plot(contour[:, 1], contour[:, 0], colors[i], lw=4)
axs.axis('image')
axs.set_xticks([])
axs.set_yticks([])
plt.show()
However, from your image there is no clearly defined gray region, so I took the two largest intensity counts in the image and thresholded on those. A bit disturbing is the red region in the middle of the white region; however, I think this could be tweaked with the number of bins in the histogram procedure. You could also set these thresholds manually, as Ed Smith did.
Maybe there is a more elegant way to do that ... but in case your array is a numpy array with dimensions (N, N) (grayscale), you can do:
import numpy as np
# assuming black -> 0 and white -> 1 and grey -> 0.5
black_reg = np.where(a < 0.1, a, 10)
white_reg = np.where(a > 0.9, a, 10)
xx_black,yy_black = np.gradient(black_reg)
xx_white,yy_white = np.gradient(white_reg)
# getting the coordinates
coord_green = np.argwhere(xx_black**2 + yy_black**2>0.2)
coord_red = np.argwhere(xx_white**2 + yy_white**2>0.2)
The number 0.2 is just a threshold and needs to be adjusted.
I think you are probably looking for an edge detection method for grayscale images. There are many ways to do that; maybe this can help: http://en.m.wikipedia.org/wiki/Edge_detection. For differentiating edges between white and gray from edges between black and gray, try using the local average intensity.
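One way to read that suggestion, as a hedged sketch (the toy image and the 3x3 neighbourhood size are assumptions): classify each edge pixel by the average intensity of its neighbourhood, so bright neighbourhoods give gray/white edges and dark neighbourhoods give gray/black edges.

import numpy as np
from scipy import ndimage

# hypothetical grayscale image: black = 0.0, gray = 0.5, white = 1.0
im = np.zeros((9, 9))
im[2:7, 2:7] = 0.5
im[4, 4] = 1.0

# edge pixels: any location whose 3x3 neighbourhood is not uniform
local_min = ndimage.minimum_filter(im, size=3)
local_max = ndimage.maximum_filter(im, size=3)
edges = local_min != local_max

# classify the edges by the local average intensity of their neighbourhood
local_mean = ndimage.uniform_filter(im, size=3)
gray_white_edges = edges & (local_mean > 0.5)    # bright neighbourhood: gray/white boundary
gray_black_edges = edges & (local_mean <= 0.5)   # dark neighbourhood: gray/black boundary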
