Drawing grid lines across the image using OpenCV Python

Using OpenCV Python, I want to draw a grid overlay when I switch on my camera. Can you help me with the logic or code?
Please find the image links below for better understanding.
Camera switched on and pointed to a floor
Grid Lines are split across the whole image

Here's a simple solution for creating an m x n grid (split as evenly as possible):
import cv2 as cv # tested with version 4.5.3.56 (pip install opencv-python)
import numpy as np
def draw_grid(img, grid_shape, color=(0, 255, 0), thickness=1):
    h, w, _ = img.shape
    rows, cols = grid_shape
    dy, dx = h / rows, w / cols

    # draw vertical lines
    for x in np.linspace(start=dx, stop=w-dx, num=cols-1):
        x = int(round(x))
        cv.line(img, (x, 0), (x, h), color=color, thickness=thickness)

    # draw horizontal lines
    for y in np.linspace(start=dy, stop=h-dy, num=rows-1):
        y = int(round(y))
        cv.line(img, (0, y), (w, y), color=color, thickness=thickness)

    return img
Here's a script that wraps this function in a CLI:
https://gist.github.com/mathandy/389ddbad48810d188bdc997c3a1dab0c
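For the original use case (a grid overlaid on a live camera feed), here is a minimal sketch using the draw_grid function above; the camera index and the 3 x 3 grid shape are assumptions:
import cv2 as cv

cap = cv.VideoCapture(0)  # assumption: default webcam at index 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    draw_grid(frame, (3, 3))  # overlay a 3 x 3 grid on each frame
    cv.imshow('camera with grid', frame)
    if cv.waitKey(1) & 0xFF == ord('q'):  # press 'q' to quit
        break
cap.release()
cv.destroyAllWindows()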

Here is the solution to my question. Make use of it.
import matplotlib.pyplot as plt
import matplotlib.ticker as plticker
try:
    from PIL import Image
except ImportError:
    import Image
# Open image file
image = Image.open('bird.jpg')
my_dpi=200.
# Set up figure
fig=plt.figure(figsize=(float(image.size[0])/my_dpi,float(image.size[1])/my_dpi),dpi=my_dpi)
ax=fig.add_subplot(111)
# Remove whitespace from around the image
fig.subplots_adjust(left=0,right=1,bottom=0,top=1)
# Set the gridding interval: here we use the major tick interval
myInterval=300.
loc = plticker.MultipleLocator(base=myInterval)
ax.xaxis.set_major_locator(loc)
ax.yaxis.set_major_locator(loc)
# Add the grid
ax.grid(which='major', axis='both', linestyle='-', color='g')
# Add the image
ax.imshow(image)
# Find number of gridsquares in x and y direction
nx=abs(int(float(ax.get_xlim()[1]-ax.get_xlim()[0])/float(myInterval)))
ny=abs(int(float(ax.get_ylim()[1]-ax.get_ylim()[0])/float(myInterval)))
# Save the figure
fig.savefig('birdgrid_without_Label.jpg')
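The nx and ny values are computed so each grid square can be labelled. A hedged continuation of the script above (the label text, colour and placement are assumptions):
# Add an index label at the centre of each grid square
for j in range(ny):
    y = myInterval / 2 + j * myInterval
    for i in range(nx):
        x = myInterval / 2 + i * myInterval
        ax.text(x, y, '{:d}'.format(i + j * nx), color='w', ha='center', va='center')
fig.savefig('birdgrid_with_Label.jpg')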

import cv2 as _cv2  # this function expects cv2 imported under the _cv2 alias

def draw_grid(img, line_color=(0, 255, 0), thickness=1, type_=_cv2.LINE_AA, pxstep=50):
    '''(ndarray, 3-tuple, int, int) -> void
    draw gridlines on img
    line_color:
        BGR representation of colour
    thickness:
        line thickness
    type_:
        8, 4 or cv2.LINE_AA
    pxstep:
        grid line frequency in pixels
    '''
    x = pxstep
    y = pxstep
    while x < img.shape[1]:
        _cv2.line(img, (x, 0), (x, img.shape[0]), color=line_color, lineType=type_, thickness=thickness)
        x += pxstep
    while y < img.shape[0]:
        _cv2.line(img, (0, y), (img.shape[1], y), color=line_color, lineType=type_, thickness=thickness)
        y += pxstep
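A minimal usage sketch for this function; the file name is an assumption:
img = _cv2.imread('floor.jpg')  # assumption: an image file in the working directory
draw_grid(img, pxstep=100)      # one grid line every 100 pixels
_cv2.imshow('grid', img)
_cv2.waitKey(0)
_cv2.destroyAllWindows()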

You can draw lines on the input image using the cv2.line() function. So depending on where you want to draw the lines, your basic code will look like:
img = cv2.imread(r"path\to\img")
cv2.line(img, (start_x, start_y), (end_x, end_y), (255, 0, 0), 1, 1)
To get the dimensions of the image, you can use img.shape, which returns (height, width, channels) for a colour image and (height, width) for a grayscale one.
To draw a vertical line through the center for example, your code would look like:
cv2.line(img, (int(img.shape[1]/2), 0),(int(img.shape[1]/2), img.shape[0]), (255, 0, 0), 1, 1)
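Putting both directions together, a minimal sketch that draws a centred cross on an image; the file name is an assumption:
import cv2

img = cv2.imread('floor.jpg')  # assumption: path to your image
h, w = img.shape[:2]
cv2.line(img, (w // 2, 0), (w // 2, h), (255, 0, 0), 1, 1)  # vertical centre line
cv2.line(img, (0, h // 2), (w, h // 2), (255, 0, 0), 1, 1)  # horizontal centre line
cv2.imshow('centre cross', img)
cv2.waitKey(0)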

Related

How to paint text in an image using python

I have an image with a time displayed in it.
In short, I need to paint the text in that image black using Python. I think maybe OpenCV is useful for this, but I'm not sure. And I need to do that for any time shown in the same type of image (same layout as the image before editing, just different digits).
I think we can use canny to detect edges of the digits and the dots and then paint the area between them black. I don't know the perfect way of doing this, but I think it might work. Hoping for a solution. Thanks in advance.
This is how I did it. I recommend that you play with morphological operations and look for ways to remove blobs of certain sizes to fix the "22" problem. You can also adjust the tolerance value.
import cv2
imgray = cv2.imread('j3iIp.png',0)
point = (0,0)
src = imgray.copy()
tolerance = 25
connectivity = 4
flags = connectivity
flags |= cv2.FLOODFILL_FIXED_RANGE
cv2.floodFill(src, None, point, (0, 255, 255), (tolerance,) * 3, (tolerance,) * 3, flags)
src = cv2.subtract(255, src)
cv2.imshow('filled', src)
cv2.waitKey(0)
cv2.imwrite("result.jpg",src)
cv2.destroyAllWindows()
The result is:
As mentioned, the adjacent "22" creates a problem, so the procedure is as follows.
import numpy as np
import matplotlib.pyplot as plt
from skimage import io
from skimage.segmentation import flood_fill

img = io.imread('number.png', as_gray=True)
plt.imshow(img, cmap='gray')
Apply flood fill:
ff = flood_fill(img, (0,0), 125)
ff[ff != 125] = 0
ff[ff == 125] = 255
plt.imshow(ff, cmap='gray')
Finally, save the image
io.imsave('out.png', ff)
The border pixels are turned on to distinguish the characters from the background.
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('j3iIp.jpg',0)
ret,thresh2 = cv2.threshold(img,120,255,cv2.THRESH_BINARY_INV)
cv2.imshow("result", thresh2)
cv2.waitKey(0)
cv2.destroyAllWindows()
img1 = cv2.imwrite("thresh4.jpg",thresh2)
I first tried using cv2.findContours(), but there is no hierarchical difference between the white space inside each character and the white space locked in between the "22". The "22 problem" displayed in Epsi95's solution can't be solved using contour hierarchy information.
The problem can be solved using cv2.floodFill() and horizontal edge detection to determine a seed point per object.
import cv2
import matplotlib.pyplot as plt
import numpy as np
# Fixed colon locations (assumed to be known constants)
Y1, Y2 = (24, 48)
X1, X2 = (45, 93)
# Plot grayscale image
img = np.uint8(cv2.imread("digits.png", cv2.IMREAD_GRAYSCALE) / 255)
fig, axs = plt.subplots(2)
axs[0].imshow(img, cmap="gray")
# Horizontal edge detection at Y1 to determine the seed points (for flood filling)
kernel = np.array([-1, -1, 1, 1], dtype=np.int8)  # black-black-white-white (bbww) step detector
edge_det = np.correlate(img[Y1, :], kernel, mode="same")  # response == 2 exactly at a bbww transition
edge_xloc = np.flatnonzero(edge_det == 2)
seeds_yx = np.array(
    [
        [Y1, edge_xloc[1]],
        [Y1, edge_xloc[edge_xloc < X1][-2]],
        [Y1, X1],
        [Y1, edge_xloc[edge_xloc > X1][1]],
        [Y1, edge_xloc[edge_xloc < X2][-2]],
        [Y1, X2],
        [Y1, edge_xloc[edge_xloc > X2][1]],
        [Y1, edge_xloc[-2]],
        [Y2, X1],
        [Y2, X2],
    ]
)
axs[0].plot(seeds_yx[:, 1], seeds_yx[:, 0], "ro")
# Flood fill at seed points
for y, x in seeds_yx:
    cv2.floodFill(img, mask=None, seedPoint=(x, y), newVal=0)
axs[1].imshow(img, cmap="gray")
fig.show()
The code results in this plot:

How do I generate a curved tube from 2D slices by shifting center coordinates?

I am trying to generate a 3D matrix with a tube structure running through it. I can make the tube straight by copying a 2D numpy array with a circle centered at (x,y) inside, and I can make the tube slanted by adding an int to either the x or y axis for each slice I generate. My question is, how can I move the (x,y) coordinates so that they can form a curve? I can't add step sizes of curved functions like sine and cosine to the coordinates since to index the numpy array it must be an integer. What is a smart way to generate a curved tube from 2D slices by shifting the center coordinates?
Here is the code I am using to generate a straight tube as a 3D matrix:
import numpy as np
import cv2
import matplotlib.pyplot as plt
slice_2d = np.zeros((128,128))
circle_center = (50,50)
radius=10
slice_2d = cv2.circle(slice_2d, circle_center, radius, color=1, thickness=-1)
plt.imshow(slice_2d)
# then we repeat the slice 128 times to create a straight tube in a 3D matrix of 128,128,128
tube_matrix = []
for i in range(0, 128):
    tube_matrix.append(slice_2d)
tube_matrix = np.array(tube_matrix)
You may use any curve: scale it and add an offset as needed, then round the center coordinates to integers.
I used the curve from this post.
Here is the loop that adds the slices:
tube_matrix = []
for i in range(128):
    circle_center = np.round(curve[i]*12 + 15).astype(int)
    slice_2d = cv2.circle(np.zeros((128, 128)), tuple(circle_center), radius, color=1, thickness=-1)
    tube_matrix.append(slice_2d)
Each iteration the circle center changes according to the value of curve[i].
Note that curve[i] is scaled and rounded (and converted to int).
Here is the complete code (with some testing code):
import numpy as np
import cv2
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt
# https://stackoverflow.com/questions/52014197/how-to-interpolate-a-2d-curve-in-python
# Define some points:
points = np.array([[0, 1, 8, 2, 2],
                   [1, 0, 6, 7, 2]]).T  # a (nbre_points x nbre_dim) array
# Linear length along the line:
distance = np.cumsum( np.sqrt(np.sum( np.diff(points, axis=0)**2, axis=1 )) )
distance = np.insert(distance, 0, 0)/distance[-1]
alpha = np.linspace(0, 1, 128)
method = 'cubic'
interpolator = interp1d(distance, points, kind=method, axis=0)
curve = interpolator(alpha)
#slice_2d = np.zeros((128,128))
#circle_center = (30, 30)
img = np.zeros((128, 128, 3), np.uint8) + 255
radius = 10
tube_matrix = []
for i in range(128):
    circle_center = np.round(curve[i]*12 + 15).astype(int)
    slice_2d = cv2.circle(np.zeros((128, 128)), tuple(circle_center), radius, color=1, thickness=-1)
    tube_matrix.append(slice_2d)

    # Draw circle on image - for testing
    img = cv2.circle(img, tuple(circle_center), radius, color=(i*10 % 255, i*20 % 255, i*30 % 255), thickness=2)
# Graph:
plt.figure(figsize=(7,7))
plt.plot(*curve.T, 'o')
plt.axis('equal'); plt.legend(); plt.xlabel('x'); plt.ylabel('y')
plt.figure(figsize=(7,7))
plt.imshow(img)
plt.show()
Testing image (img):

Fully convert a black and white image to a set of lines (aka vectorize using only lines)

I have a number of black and white images and would like to convert them to a set of lines, such that I can fully, or at least close to fully, reconstruct the original image from the lines. In other words I'm trying to vectorize the image to a set of lines.
I have already looked at the Hough line transform; however, this does not cover every part of the image and is more about finding lines in the image than fully converting the image to a line representation. In addition, the line transform does not encode the actual width of the lines, leaving me guessing at how to reconstruct the images (which I need to do, as this is a preprocessing step towards training a machine learning algorithm).
So far I have tried the following code using the Hough line transform:
import numpy as np
import cv2
MetersPerPixel=0.1
def loadImageGray(path):
    img = cv2.imread(path, 0)
    return img

def LineTransform(img):
    edges = cv2.Canny(img, 50, 150, apertureSize=3)
    minLineLength = 10
    maxLineGap = 20
    # HoughLinesP (probabilistic) returns segment endpoints (x1, y1, x2, y2)
    lines = cv2.HoughLinesP(edges, 1, np.pi/180, 100, minLineLength, maxLineGap)
    return lines

def saveLines(liness):
    img = np.zeros((2000, 2000, 3), np.uint8)
    for lines in liness:
        for x1, y1, x2, y2 in lines:
            print(x1, y1, x2, y2)
            img = cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 3)
    cv2.imwrite('houghlines5.jpg', img)

def main():
    img = loadImageGray("loadtest.png")
    lines = LineTransform(img)
    saveLines(lines)

main()
However, when tested using the following input
I got this image:
As you can see, it is missing lines that are not axis-aligned, and if you look closely, even the detected lines have been split into two lines with some space between them. I also had to draw these lines with a preset width, while the real width isn't known.
Edit: on the suggestion of @MarkSetchell I tried pypotrace using the following code. Currently it largely ignores Bézier curves and just treats them as straight lines; I will focus on that problem later, but right now the results aren't optimal either:
def TraceLines(img):
    bmp = potrace.Bitmap(bitmap(img))
    path = bmp.trace()
    lines = []
    i = 0
    for curve in path:
        for segment in curve:
            print(repr(segment))
            if segment.is_corner:
                c_x, c_y = segment.c
                c2_x, c2_y = segment.end_point
                lines.append([[int(c_x), int(c_y), int(c2_x), int(c2_y)]])
            else:
                c_x, c_y = segment.c1
                c2_x, c2_y = segment.end_point
                i = i + 1
    return lines
This results in this image, which is an improvement; however, while the problem with the circle can be addressed at a later point, the missing parts of the square and the weird artefacts on the other straight lines are more problematic. Does anyone know how to fix them? Any tips on how to get the line widths?
Anybody got any suggestions on how to better approach this problem?
Edit 2: here is another test image; it includes multiple line widths I would like to capture.
OpenCV
Using OpenCV's findContours and drawContours it is possible to first vectorise the lines and then exactly recreate the original image:
import numpy as np
import cv2
img = cv2.imread('loadtest.png', 0)
result_fill = np.ones(img.shape, np.uint8) * 255
result_borders = np.zeros(img.shape, np.uint8)
# the '[:-1]' is used to skip the contour at the outer border of the image
contours = cv2.findContours(img, cv2.RETR_LIST,
                            cv2.CHAIN_APPROX_SIMPLE)[0][:-1]
# fill spaces between contours by setting thickness to -1
cv2.drawContours(result_fill, contours, -1, 0, -1)
cv2.drawContours(result_borders, contours, -1, 255, 1)
# xor the filled result and the borders to recreate the original image
result = result_fill ^ result_borders
# prints True: the result is now exactly the same as the original
print(np.array_equal(result, img))
cv2.imwrite('contours.png', result)
Result
Scikit-Image
Using scikit-image's find_contours and approximate_polygon allows you to reduce the number of lines by approximating polygons (based on this example):
import numpy as np
from skimage.measure import approximate_polygon, find_contours
import cv2
img = cv2.imread('loadtest.png', 0)
contours = find_contours(img, 0)
result_contour = np.zeros(img.shape + (3, ), np.uint8)
result_polygon1 = np.zeros(img.shape + (3, ), np.uint8)
result_polygon2 = np.zeros(img.shape + (3, ), np.uint8)
for contour in contours:
    print('Contour shape:', contour.shape)

    # reduce the number of lines by approximating polygons
    polygon1 = approximate_polygon(contour, tolerance=2.5)
    print('Polygon 1 shape:', polygon1.shape)

    # increase tolerance to further reduce number of lines
    polygon2 = approximate_polygon(contour, tolerance=15)
    print('Polygon 2 shape:', polygon2.shape)

    contour = contour.astype(int).tolist()  # np.int is removed in recent NumPy; plain int works
    polygon1 = polygon1.astype(int).tolist()
    polygon2 = polygon2.astype(int).tolist()

    # draw contour lines
    for idx, coords in enumerate(contour[:-1]):
        y1, x1, y2, x2 = coords + contour[idx + 1]
        result_contour = cv2.line(result_contour, (x1, y1), (x2, y2),
                                  (0, 255, 0), 1)
    # draw polygon 1 lines
    for idx, coords in enumerate(polygon1[:-1]):
        y1, x1, y2, x2 = coords + polygon1[idx + 1]
        result_polygon1 = cv2.line(result_polygon1, (x1, y1), (x2, y2),
                                   (0, 255, 0), 1)
    # draw polygon 2 lines
    for idx, coords in enumerate(polygon2[:-1]):
        y1, x1, y2, x2 = coords + polygon2[idx + 1]
        result_polygon2 = cv2.line(result_polygon2, (x1, y1), (x2, y2),
                                   (0, 255, 0), 1)

cv2.imwrite('contour_lines.png', result_contour)
cv2.imwrite('polygon1_lines.png', result_polygon1)
cv2.imwrite('polygon2_lines.png', result_polygon2)
Results
Python output:
Contour shape: (849, 2)
Polygon 1 shape: (28, 2)
Polygon 2 shape: (9, 2)
Contour shape: (825, 2)
Polygon 1 shape: (31, 2)
Polygon 2 shape: (9, 2)
Contour shape: (1457, 2)
Polygon 1 shape: (9, 2)
Polygon 2 shape: (8, 2)
Contour shape: (879, 2)
Polygon 1 shape: (5, 2)
Polygon 2 shape: (5, 2)
Contour shape: (973, 2)
Polygon 1 shape: (5, 2)
Polygon 2 shape: (5, 2)
Contour shape: (224, 2)
Polygon 1 shape: (4, 2)
Polygon 2 shape: (4, 2)
Contour shape: (825, 2)
Polygon 1 shape: (13, 2)
Polygon 2 shape: (13, 2)
Contour shape: (781, 2)
Polygon 1 shape: (13, 2)
Polygon 2 shape: (13, 2)
contour_lines.png:
polygon1_lines.png:
polygon2_lines.png:
The length of the lines can then be calculated by applying Pythagoras' theorem to the coordinates: line_length = math.sqrt(abs(x2 - x1)**2 + abs(y2 - y1)**2). If you want to get the width of the lines as numerical values, take a look at the answers of "How to determine the width of the lines?" for some suggested approaches.
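For instance, a hedged sketch that collects the segment lengths of polygon2 from the drawing loop above (math.hypot computes the same value as the sqrt formula):
import math

lengths = []
for idx, coords in enumerate(polygon2[:-1]):
    y1, x1, y2, x2 = coords + polygon2[idx + 1]
    lengths.append(math.hypot(x2 - x1, y2 - y1))  # segment length in pixels
print(lengths)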
I made an attempt at this and am not altogether happy with the results but thought I would share my ideas and some code and anyone else is welcome to take, borrow, steal or develop any ideas further.
I think some of the issues stem from the choice of Canny as the edge detection, because it results in two edges per line, so my first plan of attack was to replace that with a skeletonisation from scikit-image. That gives this as the edge image:
Then I decided to use HoughLinesP rather than HoughLines, but it didn't seem to find much. I tried increasing and decreasing the resolution parameters but it didn't help. So, I decided to dilate (fatten) the skeleton a bit and then it seems to start detecting the shapes better, and I get this:
I am not sure why it is so sensitive to line thickness and, as I said, if anyone else want to take it and experiment, here's where I got to with the code:
#!/usr/bin/env python3
import numpy as np
import cv2
from skimage.morphology import medial_axis, dilation, disk
def loadImageGray(path):
    img = cv2.imread(path, 0)
    return img

def LineTransform(img):
    # Try skeletonising image rather than Canny edge - only one line instead of both sides of line
    skeleton = (medial_axis(255-img)*255).astype(np.uint8)
    cv2.imwrite('skeleton.png', skeleton)

    # Try dilating skeleton to make it fatter and more detectable
    selem = disk(2)
    fatskel = dilation(skeleton, selem)
    cv2.imwrite('fatskeleton.png', fatskel)

    minLineLength = 10
    maxLineGap = 20
    lines = cv2.HoughLinesP(fatskel, 1, np.pi/180, 100, minLineLength, maxLineGap)
    return lines

def saveLines(liness):
    img = np.zeros((2000, 2000, 3), np.uint8)
    for lines in liness:
        for x1, y1, x2, y2 in lines:
            print(x1, y1, x2, y2)
            img = cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 3)
    cv2.imwrite('houghlines.png', img)

img = loadImageGray("loadtest.png")
lines = LineTransform(img)
saveLines(lines)
In fact, if you take the code above and ignore the skeletonisation and fattening, and just use the inverse of the original image for HoughLinesP, the results are pretty similar:
def LineTransform(img):
    minLineLength = 10
    maxLineGap = 20
    lines = cv2.HoughLinesP(255-img, 1, np.pi/180, 100, minLineLength, maxLineGap)
    return lines
@Thijser, in OpenCV you can do the following:
import cv2
from matplotlib import pyplot as plt
import numpy as np
filename = "three.jpg"
src = cv2.imread(filename)
max_lowThreshold = 100
window_name = 'Edge Map'
title_trackbar = 'Min Threshold:'
ratio = 3
kernel_size = 3
def CannyThreshold(val):
    low_threshold = val
    img_blur = cv2.blur(src_gray, (3,3))
    detected_edges = cv2.Canny(img_blur, low_threshold, low_threshold*ratio, kernel_size)
    mask = detected_edges != 0
    dst = src * (mask[:,:,None].astype(src.dtype))
    cv2.imshow(window_name, dst)
src_gray = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
cv2.namedWindow(window_name)
cv2.createTrackbar(title_trackbar, window_name , 0, max_lowThreshold, CannyThreshold)
CannyThreshold(0)
cv2.waitKey()
You will get:

How to find a point after warpPerspective?

I have a set of coordinates/points found in the original image before warpPerspective. How do I get the corresponding points in the now cropped and perspective-corrected image?
For example:
import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt
img = cv.imread('sudoku.png')
rows,cols,ch = img.shape
pts1 = np.float32([[56,65],[368,52],[28,387],[389,390]])
pts2 = np.float32([[0,0],[300,0],[0,300],[300,300]])
point = np.array([[10,10]])
M = cv.getPerspectiveTransform(pts1,pts2)
dst = cv.warpPerspective(img,M,(300,300))
plt.subplot(121),plt.imshow(img),plt.title('Input')
plt.subplot(122),plt.imshow(dst),plt.title('Output')
How do I map the coordinate [10,10] in img to the dst image?
You have to apply the same transformation (mathematically) to the points as you applied to the image. In this case that means using cv2.perspectiveTransform (note that the input needs to have 1 row per point, 1 column, and 2 channels -- the first being X, the second the Y coordinate).
This function transforms all the input points; it doesn't perform any cropping. You will need to post-process the transformed coordinates and discard those that fall outside the crop area. In your case you want to retain points where (0 <= x < 300) and (0 <= y < 300).
Sample code:
import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt
img = cv.imread('sudoku.png')
rows,cols,ch = img.shape
pts1 = np.float32([[56,65],[368,52],[28,387],[389,390]])
pts2 = np.float32([[0,0],[300,0],[0,300],[300,300]])
points = np.float32([[[10, 10]], [[116,128]], [[254,261]]])
M = cv.getPerspectiveTransform(pts1,pts2)
dst = cv.warpPerspective(img,M,(300,300))
# Transform the points
transformed = cv.perspectiveTransform(points, M)
# Perform the cropping -- filter out points that are outside the crop area
cropped = []
for pt in transformed:
    x, y = pt[0]
    if x >= 0 and x < dst.shape[1] and y >= 0 and y < dst.shape[0]:
        print("Valid point (%d, %d)" % (x, y))
        cropped.append([[x, y]])
    else:
        print("Out-of-bounds point (%d, %d)" % (x, y))

# Turn it back into a single numpy array of shape (N, 1, 2)
cropped = np.array(cropped, dtype=np.float32)
# Visualize
plt.subplot(121)
plt.imshow(img)
for pt in points:
    x, y = pt[0]
    plt.scatter(x, y, s=100, c='red', marker='x')
plt.title('Input')

plt.subplot(122)
plt.imshow(dst)
for pt in transformed:
    x, y = pt[0]
    plt.scatter(x, y, s=100, c='red', marker='x')
plt.title('Output')
plt.show()
Console Output:
Out-of-bounds point (-53, -63)
Valid point (63, 67)
Valid point (192, 194)
Visualization:

Image processing - fill in hollow circles

I have a binary black and white image that looks like this:
I want to fill in those white circles to make them solid white disks. How can I do this in Python, preferably using skimage?
You can detect circles with skimage's methods hough_circle and hough_circle_peaks and then draw over them to "fill" them.
In the following example, most of the code does a "hierarchy" computation for the best-fitting circles, to avoid drawing circles that lie one inside another:
# skimage version 0.14.0
import math
import numpy as np
import matplotlib.pyplot as plt
from skimage import color
from skimage.io import imread
from skimage.transform import hough_circle, hough_circle_peaks
from skimage.feature import canny
from skimage.draw import circle
from skimage.util import img_as_ubyte
INPUT_IMAGE = 'circles.png' # input image name
BEST_COUNT = 6 # how many circles to draw
MIN_RADIUS = 20 # min radius should be bigger than noise
MAX_RADIUS = 60 # max radius of circles to be detected (in pixels)
LARGER_THRESH = 1.2 # circle is considered significantly larger than another one if its radius is at least so much bigger
OVERLAP_THRESH = 0.1 # circles are considered overlapping if this part of the smaller circle is overlapping
def circle_overlap_percent(centers_distance, radius1, radius2):
    '''
    Calculating the percentage area overlap between circles
    See Gist for comments:
    https://gist.github.com/amakukha/5019bfd4694304d85c617df0ca123854
    '''
    R, r = max(radius1, radius2), min(radius1, radius2)
    if centers_distance >= R + r:
        return 0.0
    elif R >= centers_distance + r:
        return 1.0
    R2, r2 = R**2, r**2
    x1 = (centers_distance**2 - R2 + r2) / (2*centers_distance)
    x2 = abs(centers_distance - x1)
    y = math.sqrt(R2 - x1**2)
    a1 = R2 * math.atan2(y, x1) - x1*y
    if x1 <= centers_distance:
        a2 = r2 * math.atan2(y, x2) - x2*y
    else:
        # the chord lies beyond the smaller circle's centre
        a2 = math.pi * r2 - (r2 * math.atan2(y, x2) - x2*y)
    overlap_area = a1 + a2
    return overlap_area / (math.pi * r2)

def circle_overlap(c1, c2):
    d = math.sqrt((c1[0]-c2[0])**2 + (c1[1]-c2[1])**2)
    return circle_overlap_percent(d, c1[2], c2[2])

def inner_circle(cs, c, thresh):
    '''Is circle `c` "inside" one of the `cs` circles?'''
    for dc in cs:
        # if new circle is larger than existing -> it's not inside
        if c[2] > dc[2]*LARGER_THRESH:
            continue
        # if new circle is smaller than existing one...
        if circle_overlap(dc, c) > thresh:
            # ...and there is a significant overlap -> it's an inner circle
            return True
    return False
# Load picture and detect edges
image = imread(INPUT_IMAGE, 1)
image = img_as_ubyte(image)
edges = canny(image, sigma=3, low_threshold=10, high_threshold=50)
# Detect circles of specific radii
hough_radii = np.arange(MIN_RADIUS, MAX_RADIUS, 2)
hough_res = hough_circle(edges, hough_radii)
# Select the most prominent circles (in order from best to worst)
accums, cx, cy, radii = hough_circle_peaks(hough_res, hough_radii)
# Determine BEST_COUNT circles to be drawn
drawn_circles = []
for crcl in zip(cy, cx, radii):
    # Do not draw circles if they are mostly inside better fitting ones
    if not inner_circle(drawn_circles, crcl, OVERLAP_THRESH):
        # A good circle found: exclude smaller circles it covers
        i = 0
        while i < len(drawn_circles):
            if circle_overlap(crcl, drawn_circles[i]) > OVERLAP_THRESH:
                t = drawn_circles.pop(i)
            else:
                i += 1
        # Remember the new circle
        drawn_circles.append(crcl)
    # Stop after having found more circles than needed
    if len(drawn_circles) > BEST_COUNT:
        break
drawn_circles = drawn_circles[:BEST_COUNT]
# Actually draw circles
colors = [(250, 0, 0), (0, 250, 0), (0, 0, 250)]
colors += [(200, 200, 0), (0, 200, 200), (200, 0, 200)]
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(10, 4))
image = color.gray2rgb(image)
for center_y, center_x, radius in drawn_circles:
    circy, circx = circle(center_y, center_x, radius, image.shape)
    color = colors.pop(0)
    image[circy, circx] = color
    colors.append(color)
ax.imshow(image, cmap=plt.cm.gray)
plt.show()
Result:
Do a morphological closing to fill those tiny gaps and complete the circles. Then fill the resulting binary image.
Code :
from skimage import io
from skimage.morphology import binary_closing, disk
import scipy.ndimage as nd
import matplotlib.pyplot as plt
# Read image, binarize
I = io.imread("FillHoles.png")
bwI =I[:,:,1] > 0
fig=plt.figure(figsize=(24, 8))
# Original image
fig.add_subplot(1,3,1)
plt.imshow(bwI, cmap='gray')
# Dilate -> Erode. You might not want to use a disk in this case,
# more asymmetric structuring elements might work better
strel = disk(4)
I_closed = binary_closing(bwI, strel)
# Closed image
fig.add_subplot(1,3,2)
plt.imshow(I_closed, cmap='gray')
I_closed_filled = nd.morphology.binary_fill_holes(I_closed)
# Filled image
fig.add_subplot(1,3,3)
plt.imshow(I_closed_filled, cmap='gray')
Result :
Note how the segmentation trash has melded to your object on the lower right, and the small cape on the lower part of the middle object has been closed. You might want to continue with a morphological erosion or opening after this.
EDIT: Long response to comments below
The disk(4) was just the example I used to produce the results seen in the image. You will need to find a suitable value yourself. Too big of a value will lead to small objects being melded into bigger objects near them, like on the right side cluster in the image. It will also close gaps between objects, whether you want it or not. Too small of a value will lead to the algorithm failing to complete the circles, so the filling operation will then fail.
Morphological erosion will erase a structuring element sized zone from the borders of the objects. Morphological opening is the inverse operation of closing, so instead of dilate->erode it will do erode->dilate. The net effect of opening is that all objects and capes smaller than the structuring element will vanish. If you do it after filling then the large objects will stay relatively the same. Ideally it should remove a lot of the segmentation artifacts caused by the morphological closing I used in the code example, which might or might not be pertinent to you based on your application.
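As a hedged sketch of that optional clean-up step (the structuring-element size is an assumption you would tune for your images):
from skimage.morphology import binary_opening, disk

# erode -> dilate: removes objects and capes smaller than the structuring element
I_cleaned = binary_opening(I_closed_filled, disk(2))
plt.imshow(I_cleaned, cmap='gray')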
I don't know skimage, but if you use OpenCV I would do a Hough transform for circles and then just draw them over.
The Hough transform is robust: if there are some small holes in the circles, that is no problem.
Something like:
# cv2.cv.CV_HOUGH_GRADIENT is the OpenCV 2.x name; on OpenCV 3+ use cv2.HOUGH_GRADIENT
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 100)

# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circles = np.round(circles[0, :]).astype("int")

    # loop over the (x, y) coordinates and radius of the circles
    # you can check size etc here.
    for (x, y, r) in circles:
        # draw the circle in the output image
        # you can fill here (thickness=-1 fills the circle).
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)

    # show the output image
    cv2.imshow("output", np.hstack([image, output]))
    cv2.waitKey(0)
See more info here: https://www.pyimagesearch.com/2014/07/21/detecting-circles-images-using-opencv-hough-circles/
