There is a nice implementation of superpixel segmentation (SLIC, Simple Linear Iterative Clustering) in the skimage.segmentation module of the Python scikit-image package.
The slic() function returns an integer label for every pixel. My question is: how can I get the segments that are spatial neighbors of each other? What I would like to do is build a graph from these segments, with edges connecting the immediate neighbors. However, I cannot figure out how to get the immediate neighbors of a segment.
The Python code to perform the SLIC segmentation is as follows:
from skimage import io
from skimage import img_as_float
from skimage.segmentation import slic
from skimage.segmentation import find_boundaries
# An image of dimensions 300 x 300
image = img_as_float(io.imread("image.png"))
# Call slic. This returns a numpy array which assigns to every
# pixel in the image an integer label.
# So segments is a numpy array of shape (300, 300)
segments = slic(image, 100, sigma = 5)
# Now I want to know the neighbourhood segment for each super-pixel
# There is a method called find_boundaries which returns a boolean
# for every pixel to show if it is a boundary pixel or not.
b = find_boundaries(segments)
Here, I am stuck. I would like to know how to parse these boundary pixels and find out, for a given label (say 0), which labels share a boundary with it. Is there a way to do this efficiently without looping through the boundary array for every label?
The way I do it is to build a graph containing an edge from each pixel to its left and bottom neighbor (so a 4-neighborhood), label the endpoints with their superpixel numbers, and remove duplicate edges.
You can find code and details in my blog post.
You can find some related functions here, though they are not very well documented (yet).
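A minimal sketch of that idea (my paraphrase, not the exact code from the blog post), assuming the segments array produced by slic() in the question: pair each pixel's label with the labels of the pixels to its right and below, keep only the pairs whose labels differ, and deduplicate.
import numpy as np

def superpixel_edges(segments):
    """Set of (label_a, label_b) pairs of superpixels that touch each other."""
    # label pairs for horizontally adjacent pixels
    right = np.c_[segments[:, :-1].ravel(), segments[:, 1:].ravel()]
    # label pairs for vertically adjacent pixels
    below = np.c_[segments[:-1, :].ravel(), segments[1:, :].ravel()]
    pairs = np.vstack([right, below])
    # keep only pairs that cross a segment boundary
    pairs = pairs[pairs[:, 0] != pairs[:, 1]]
    # sort each pair so (a, b) and (b, a) count as the same edge, then deduplicate
    pairs = np.sort(pairs, axis=1)
    return set(map(tuple, np.unique(pairs, axis=0)))

edges = superpixel_edges(segments)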
A simple method using just np.unique, pairing each pixel of the segment image with the one to its right and the one below it:
import numpy as np
import matplotlib.pyplot as plt
from skimage.data import astronaut
from skimage.segmentation import slic
from skimage.segmentation import mark_boundaries
from scipy.spatial import Delaunay
from matplotlib.lines import Line2D
img = astronaut().astype(np.float32) / 255.
# SLIC
segments = slic(img, n_segments=500, compactness=20)
segments_ids = np.unique(segments)
# centers
centers = np.array([np.mean(np.nonzero(segments==i),axis=1) for i in segments_ids])
vs_right = np.vstack([segments[:,:-1].ravel(), segments[:,1:].ravel()])
vs_below = np.vstack([segments[:-1,:].ravel(), segments[1:,:].ravel()])
bneighbors = np.unique(np.hstack([vs_right, vs_below]), axis=1)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
plt.imshow(mark_boundaries(img, segments))
plt.scatter(centers[:,1],centers[:,0], c='y')
for i in range(bneighbors.shape[1]):
    y0, x0 = centers[bneighbors[0, i]]
    y1, x1 = centers[bneighbors[1, i]]
    l = Line2D([x0, x1], [y0, y1], alpha=0.5)
    ax.add_line(l)
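To get from these neighbor pairs to the graph the question asks about, a small follow-up sketch (my addition, reusing the bneighbors array above) builds an adjacency mapping and drops the self-pairs:
from collections import defaultdict

adjacency = defaultdict(set)
for a, b in bneighbors.T:
    if a != b:  # bneighbors also contains (i, i) pairs from pixels inside a segment
        adjacency[a].add(b)
        adjacency[b].add(a)

# immediate neighbors of, e.g., segment 0
print(sorted(adjacency[0]))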
An alternative (and somewhat incomplete) method, using Delaunay tessellation:
# neighbors via Delaunay tessellation
tri = Delaunay(centers)
# draw centers and neighbors
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
plt.imshow(mark_boundaries(img, segments))
plt.scatter(centers[:,1],centers[:,0], c='y')
# this contains the neighbors list: tri.vertex_neighbor_vertices
indptr,indices = tri.vertex_neighbor_vertices
# draw lines from each center to its neighbors
for i in range(len(indptr)-1):
    N = indices[indptr[i]:indptr[i+1]]  # list of neighbor superpixels
    centerA = np.repeat([centers[i]], len(N), axis=0)
    centerB = centers[N]
    for y0, x0, y1, x1 in np.hstack([centerA, centerB]):
        l = Line2D([x0, x1], [y0, y1], alpha=0.5)
        ax.add_line(l)
Incomplete because some boundary neighbors will not arise from the tessellation.
I have two images that consist of colored squares with different grid step (10x10 and 12x12).
What I want is to make the first image to be smoothly transformed into the second one.
When I use a plain image overlay with cv2.addWeighted() function, the result (left) is not good because of the intersected grid spaces. I suppose it would be better to shift remaining grid cells to the borders and clear out the rest (right).
Is there any algorithm to deal with this task?
Thanks.
You can interpolate each pixel individually between different images.
import numpy as np
from scipy import interpolate
import matplotlib.pyplot as plt
np.random.seed(200)
num_images = 2
images = np.random.rand(num_images, 8,8)
for index, im in enumerate(images):
    print(f'Image {index}')
    fig = plt.imshow(im)
    plt.show()
Interpolating these images:
n_frames = 4
x_array = np.linspace(0, 1, int(n_frames))
def interpolate_images(frame):
    intermediate_image = np.zeros((1, *images.shape[1:]))
    for lay in range(images.shape[1]):
        for lat in range(images.shape[2]):
            tck = interpolate.splrep(np.linspace(0, 1, images.shape[0]), images[:, lay, lat], k=1)
            intermediate_image[:, lay, lat] = interpolate.splev(x_array[frame], tck)
    return intermediate_image

for frame in range(n_frames):
    im = interpolate_images(int(frame))
    fig = plt.imshow(im[0])
    plt.show()
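The double loop over pixels can become slow for larger images. A vectorized sketch of the same linear interpolation (my addition, assuming the images and x_array defined above) uses scipy.interpolate.interp1d along the image axis:
from scipy.interpolate import interp1d

# linear interpolation along the "image index" axis, for all pixels at once
f = interp1d(np.linspace(0, 1, images.shape[0]), images, axis=0)
frames = f(x_array)  # shape (n_frames, 8, 8)

for im in frames:
    plt.imshow(im)
    plt.show()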
What I am trying to achieve is similar to photoshop/gimp's eyedropper tool: take a round sample of a given area in an image and return the average colour of that circular sample.
The simplest method I have found is to take a 'regular' square sample, mask it as a circle, then reduce it to 1 pixel, but this is very CPU-demanding (especially when repeated millions of times).
A more mathematically complex method is to take a square area and average only the pixels that fall within a circular area within that sample, but determining what pixel is or isn't within that circle, repeated, is CPU-demanding as well.
Is there a more succinct, less-CPU-demanding means to achieve this?
Here's a little example of skimage.draw.circle(), which doesn't actually draw a circle but gives you the coordinates of the points within one; you can then use those to index Numpy arrays.
#!/usr/bin/env python3
import numpy as np
from skimage.io import imsave
from skimage.draw import circle
# Make rectangular canvas of mid-grey
w, h = 200, 100
img = np.full((h, w), 128, dtype=np.uint8)
# Get coordinates of points within a central circle
Ycoords, Xcoords = circle(h//2, w//2, 45)
# Make all points in circle=200, i.e. fill circle with 200
img[Ycoords, Xcoords] = 200
# Get mean of points in circle
print(img[Ycoords, Xcoords].mean()) # prints 200.0
# DEBUG: Save image for checking
imsave('result.png',img)
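Since the question mentions repeating the sample millions of times, one possible extension (my addition, not part of the original answer) is to compute the disk coordinates once as offsets and reuse them for every sample centre; for an RGB image the same fancy indexing returns an (N, 3) array, so the per-channel mean is a single call. Note that in newer scikit-image versions circle() has been replaced by skimage.draw.disk().
import numpy as np
from skimage.draw import circle

# offsets of a radius-10 disk around (0, 0), computed once and reused
dy, dx = circle(0, 0, 10)

def sample_mean(img, cy, cx):
    """Mean colour of the disk centred at (cy, cx); assumes the disk lies inside the image."""
    return img[cy + dy, cx + dx].mean(axis=0)

rgb = np.random.randint(0, 256, (100, 200, 3), dtype=np.uint8)
print(sample_mean(rgb, 50, 100))  # mean R, G, B over the disk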
I'm sure that there's a more succinct way to go about it, but:
import math
import numpy as np
import imageio as ioimg # as scipy's i/o function is now deprecated
from skimage.draw import circle
import matplotlib.pyplot as plt
# base sample dimensions (rest below calculated on this).
# Must be an odd number.
wh = 49
# tmp - this placement will be programmed later
dp = 500
#load work image (from same work directory)
img = ioimg.imread('830.jpg')
# convert to numpy array (dropping the alpha while we're at it)
np_img = np.array(img)[:,:,:3]
# take sample of resulting array
sample = np_img[dp:wh+dp, dp:wh+dp]
#==============
# set up numpy circle mask
## this mask will be multiplied against each RGB layer in extracted sample area
# set up basic square array
sample_mask = np.zeros((wh, wh), dtype=np.uint8)
# set up circle centre coords and radius values
xy, r = math.floor(wh/2), math.ceil(wh/2)
# use these values to populate circle area with ones
rr, cc = circle(xy, xy, r)
sample_mask[rr, cc] = 1
# add axis to make array multiplication possible (do I have to do this)
sample_mask = sample_mask[:, :, np.newaxis]
result = sample * sample_mask
# count number of nonzero values (this will be our mean divisor)
nz = np.count_nonzero(sample_mask)
sample_color = []
for c in range(result.shape[2]):
    sample_color.append(int(round(np.sum(result[:,:,c])/nz)))
print(sample_color) # will return array like [225, 205, 170]
plt.imshow(result, interpolation='nearest')
plt.show()
Perhaps asking this question here wasn't necessary (it has been a while since I've python-ed, and I was hoping that some new library had been developed for this since), but I hope this can be a reference for others who have the same goal.
This operation will be performed for every pixel in the image (sometimes millions of times) for thousands of images (scanned pages), hence my worries about performance; but thanks to numpy, this code is pretty quick.
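As a side note (my addition): the masking, summing and per-channel loop above can be collapsed by indexing the sample with the circle coordinates directly, which yields an (N, 3) array whose per-channel mean is the sample colour.
# equivalent to the mask/sum/divide above, using the rr, cc coordinates directly
sample_color = [int(round(v)) for v in sample[rr, cc].mean(axis=0)]
print(sample_color)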
I have a set of 480 original images and 480 labels (one for each original) that have been segmented and labelled via a Watershed process. I use the labels, labels_ws, when looking for the mean intensity of various regions in the original images, original_images. These images form a time-series and I am looking to track the mean intensity in each labelled region of this time-series.
Finding the mean intensity of the regions in a single image is pretty easily done in scikit-image using the following code:
regions = measure.regionprops(labels_ws, intensity_image = original_image)
print(["(%s, %s)" % (r, r.mean_intensity) for r in regions])
which prints a whole lot of output that looks like this:
'(skimage.measure._regionprops._RegionProperties object at
0x000000000E5F3F98, 35.46153846153846)',
'(skimage.measure._regionprops._RegionProperties object at
0x000000000E5F3FD0, 47.0)',
'(skimage.measure._regionprops._RegionProperties object at
0x000000000E7B6048, 49.96666666666667)',
'(skimage.measure._regionprops._RegionProperties object at
0x000000000E7B6080, 23.0)',
'(skimage.measure._regionprops._RegionProperties object at
0x000000000E7B60B8, 32.1)',
Each image probably has around 100-150 regions. The regions are areas in the image where there is a neuron luminescing in a tissue sample during the time the image was taken. As the time-series goes on, the regions (neurons) luminesce in a periodic manner and thus the intensity data for each region should look like a periodic function.
The problem I am having is that in each successive image, the labels / regions are slightly different as the luminescence in each region follows its periodic behaviour. Thus, labels / regions "pop-in/out" over the duration of the time series. I also can't guarantee that the size of, let's say, Region_1 when it first luminesces will be the same size as it is when it luminesces for a second or third time (however any difference is slight, just a couple of pixels).
All of that said, is there a way to combine all of my labels in some way to form a single label that I can track? Should I combine all of the original images in some way then create a master label? How do I handle regions that will definitely overlap, but might be different shapes / sizes by a couple of pixels? Thanks!
I had a similar problem where I wanted to track changing segmented regions over time. My solution is to change all the labels in every image at the center point of each segmented region. This has the effect of propagating the labels through to all the other images.
Of course, this assumes that the regions stay in roughly the same place throughout.
You can see the difference in the animation: on the left the labels are constantly changing, and on the right they stay consistent. It works despite the missing frames and shifting regions.
Animation link: https://imgur.com/a/e1Q7V6O#o4t9HyE
(I don't have enough rep to post the image directly)
Just send your list of segmented and labelled images to standardise_labels_timeline:
def standardise_labels_timeline(images_list, start_at_end = True, count_offset = 1000):
    """
    Replace labels on similar images to allow tracking over time

    :param images_list: a list of segmented and labelled images as numpy arrays
    :param start_at_end: relabels the images beginning at the end of the list
    :param count_offset: an int greater than the total number of expected labels in a single image
    :returns: a list of relabelled images as numpy arrays
    """
    import numpy as np

    images = list(images_list)
    if start_at_end:
        images.reverse()

    # Relabel all images to ensure there are no duplicates
    for image in images:
        for label in np.unique(image):
            if label > 0:
                count_offset += 1
                image[image == label] = count_offset

    # Ensure labels are propagated through the image timeline
    for i, image in enumerate(images):
        labels = get_labelled_centers(image)

        # Apply labels to all subsequent images
        for j in range(i, len(images)):
            images[j] = replace_image_point_labels(images[j], labels)

    if start_at_end:
        images.reverse()

    return images

def get_labelled_centers(image):
    """
    Builds a list of labels and their centers

    :param image: a segmented and labelled image as a numpy array
    :returns: a list of (label, coordinate) tuples
    """
    from skimage.measure import regionprops

    # Find all labelled areas; disable caching so properties are only calculated if required
    rps = regionprops(image, cache = False)

    return [(r.label, r.centroid) for r in rps]

def replace_image_point_labels(image, labels):
    """
    Replace the label found at each of a list of points with a new label

    :param image: a segmented and labelled image as a numpy array
    :param labels: a list of (label, coordinate) tuples
    :returns: a relabelled image as a numpy array
    """
    img = image.copy()
    for label, point in labels:
        row, col = point
        # Find the existing label at the point
        index = img[int(row), int(col)]
        # Replace the existing label with the new one, excluding background
        if index > 0:
            img[img == index] = label

    return img
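A minimal usage sketch (my addition; labels_ws_list and original_images are placeholder names for the 480 label images and 480 originals): relabel the whole time-series, then collect the mean intensity of each now-consistent label per frame.
from skimage.measure import regionprops

standardised = standardise_labels_timeline(labels_ws_list)

# label -> list of mean intensities, one entry per frame in which the label appears
intensity_over_time = {}
for lab_img, orig_img in zip(standardised, original_images):
    for r in regionprops(lab_img, intensity_image=orig_img):
        intensity_over_time.setdefault(r.label, []).append(r.mean_intensity)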
# -*- coding: utf-8 -*-
"""
Created on %(date)s
#author: %(Ahmed Islam ElManawy)s
a.elmanawy_90#yahoo.com
"""
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import cv2
from skimage.measure import label, regionprops
from sklearn.cluster import KMeans
import numpy as np
## import image
img=cv2.imread('E:\\Data\\Arabidopsis Thaliana HSI image\\20170508\\binarry\\AQC_RT.jpg',1)
## lablelled image
label_image = label(img[:,:,0])
## combined image center using k-means
Center=[]
Box=[]
for region in regionprops(label_image):
    # take regions with large enough areas
    if region.area >= 10:
        # draw rectangle around segmented coins
        Box.append(region.bbox)
        Center.append(region.centroid)
Center=np.asarray(Center)
Box=np.asarray(Box)
kmeans = KMeans(n_clusters=12, random_state=0).fit(Center)
labels = kmeans.labels_  # renamed to avoid shadowing skimage.measure.label
## plot image with different areas
fig, ax = plt.subplots(figsize=(10, 6))
ax.imshow(img)
for l in np.unique(labels):  # iterate over each cluster once
    h = np.where(labels == l)
    B = Box[h, :]
    B = B[0, :, :]
    minr, minc, maxr, maxc = np.min(B[:, 0]), np.min(B[:, 1]), np.max(B[:, 2]), np.max(B[:, 3])
    # plt.imshow(img2[11:88, 2:94,:])
    rect = mpatches.Rectangle((minc, minr), maxc - minc, maxr - minr,
                              fill=False, edgecolor='red', linewidth=2)
    ax.add_patch(rect)
ax.set_axis_off()
plt.tight_layout()
plt.show()
I have a numpy array for an image that I read in from a FITS file. I rotated it by N degrees using scipy.ndimage.interpolation.rotate. Then I want to figure out where some point (x,y) in the original non-rotated frame ends up in the rotated image -- i.e., what are the rotated frame coordinates (x',y')?
This should be a very simple rotation matrix problem but if I do the usual mathematical or programming based rotation equations, the new (x',y') do not end up where they originally were. I suspect this has something to do with needing a translation matrix as well because the scipy rotate function is based on the origin (0,0) rather than the actual center of the image array.
Can someone please tell me how to get the rotated frame (x',y')? As an example, you could use
from scipy import misc
from scipy.ndimage import rotate
data_orig = misc.face()
data_rot = rotate(data_orig,66) # data array
x0,y0 = 580,300 # left eye; (xrot,yrot) should point there
P.S. The following two related questions' answers do not help me:
Find new coordinates of a point after rotation
New coordinates after image rotation using scipy.ndimage.rotate
As usual with rotations, one needs to translate to the origin, then rotate, then translate back. Here, we can take the center of the image as origin.
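Written out as formulas (my summary of the rot() function below), with (c_x, c_y) the centre of the original image, (c'_x, c'_y) the centre of the rotated canvas, and \theta the angle passed to rotate(), a point (x, y) maps to

x' = (x - c_x)\cos\theta + (y - c_y)\sin\theta + c'_x
y' = -(x - c_x)\sin\theta + (y - c_y)\cos\theta + c'_y

The sign of the angle is flipped relative to the usual counter-clockwise rotation matrix because the image y axis points downwards.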
import numpy as np
import matplotlib.pyplot as plt
from scipy import misc
from scipy.ndimage import rotate
data_orig = misc.face()
x0,y0 = 580,300 # left eye; (xrot,yrot) should point there
def rot(image, xy, angle):
    im_rot = rotate(image, angle)
    org_center = (np.array(image.shape[:2][::-1])-1)/2.
    rot_center = (np.array(im_rot.shape[:2][::-1])-1)/2.
    org = xy - org_center
    a = np.deg2rad(angle)
    new = np.array([org[0]*np.cos(a) + org[1]*np.sin(a),
                    -org[0]*np.sin(a) + org[1]*np.cos(a)])
    return im_rot, new + rot_center
fig,axes = plt.subplots(2,2)
axes[0,0].imshow(data_orig)
axes[0,0].scatter(x0,y0,c="r" )
axes[0,0].set_title("original")
for i, angle in enumerate([66, -32, 90]):
    data_rot, (x1, y1) = rot(data_orig, np.array([x0, y0]), angle)
    axes.flatten()[i+1].imshow(data_rot)
    axes.flatten()[i+1].scatter(x1, y1, c="r")
    axes.flatten()[i+1].set_title("Rotation: {}deg".format(angle))
plt.show()
I want to use OCR to capture the bowling scores from the monitor at the lanes. I had a look at this sudoku solver, as I think it's pretty similar - numbers and grids, right? It has trouble finding the horizontal lines. Has anyone got any tips for pre-processing this image to make it easier to detect the lines (or numbers!)? Also, any tips for how to deal with the split (the orange ellipse around some of the 8's in the image)?
So far I have got the outline of the score area and cropped it.
import matplotlib
matplotlib.use('TkAgg')
from skimage import io
import numpy as np
import matplotlib.pyplot as plt
from skimage import measure
from skimage.color import rgb2gray
# import pytesseract
from matplotlib.path import Path
from qhd import *
def polygonArea(poly):
    """
    Return area of an unclosed polygon.

    :see: https://stackoverflow.com/a/451482
    :param poly: (n,2)-array
    """
    # we need a plain list for the following operations
    if isinstance(poly, np.ndarray):
        poly = poly.tolist()
    segments = zip(poly, poly[1:] + [poly[0]])
    return 0.5 * abs(sum(x0*y1 - x1*y0
                         for ((x0, y0), (x1, y1)) in segments))
filename = 'good.jpg'
image = io.imread(filename)
image = rgb2gray(image)
# Find contours at a constant value of 0.4
contours = measure.find_contours(image, 0.4)
# Display the image and plot all contours found
fig, ax = plt.subplots()
c = 0
biggest = None
biggest_size = 0
for n, contour in enumerate(contours):
    curr_size = polygonArea(contour)
    if curr_size > biggest_size:
        biggest = contour
        biggest_size = curr_size
biggest = qhull2D(biggest)
# Approximate that so we just get a rectangle.
biggest = measure.approximate_polygon(biggest, 500)
# vertices of the cropping polygon
yc = biggest[:,0]
xc = biggest[:,1]
xycrop = np.vstack((xc, yc)).T
# xy coordinates for each pixel in the image
nr, nc = image.shape
ygrid, xgrid = np.mgrid[:nr, :nc]
xypix = np.vstack((xgrid.ravel(), ygrid.ravel())).T
# construct a Path from the vertices
pth = Path(xycrop, closed=False)
# test which pixels fall within the path
mask = pth.contains_points(xypix)
# reshape to the same size as the image
mask = mask.reshape(image.shape)
# create a masked array
masked = np.ma.masked_array(image, ~mask)
# if you want to get rid of the blank space above and below the cropped
# region, use the min and max x, y values of the cropping polygon:
xmin, xmax = int(xc.min()), int(np.ceil(xc.max()))
ymin, ymax = int(yc.min()), int(np.ceil(yc.max()))
trimmed = masked[ymin:ymax, xmin:xmax]
plt.imshow(trimmed, cmap=plt.cm.gray), plt.title('trimmed')
plt.show()
https://imgur.com/LijB85I is an example of how the score is displayed.
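A possible next step (my addition, not part of the original post) would be to binarize the trimmed region before handing it to an OCR engine; a minimal sketch using Otsu thresholding from scikit-image, reusing the trimmed masked array from above:
from skimage.filters import threshold_otsu

# fill the masked (outside-the-scorecard) pixels with white so they don't skew the threshold
gray = trimmed.filled(1.0)
binary = gray > threshold_otsu(gray)

plt.imshow(binary, cmap=plt.cm.gray), plt.title('binarized')
plt.show()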