I need to draw a half circle tilted at 45 degrees (a moon), with radius 20, on the left side of a picture. I'm new to image processing in Python. I've downloaded the PIL library; can anyone give me some advice?
Thanks
This might do what you want:
from PIL import Image, ImageDraw

im = Image.open("Two_Dalmatians.jpg")
draw = ImageDraw.Draw(im)

# Locate the "moon" in the upper-left region of the image
xy = [x // 4 for x in im.size + im.size]
# Bounding box is 40x40, so the radius of the inscribed circle is 20
xy = [xy[0] - 20, xy[1] - 20, xy[2] + 20, xy[3] + 20]

# Fill a chord that starts at 45 degrees and ends at 225 degrees
draw.chord(xy, 45, 45 + 180, outline="white", fill="white")
del draw

# Save to a different file
with open("Two_Dalmatians_Plus_Moon.png", "wb") as fp:
    im.save(fp, "PNG")
Ref: http://effbot.org/imagingbook/imagedraw.htm
This program might satisfy the newly-described requirements:
from PIL import Image, ImageDraw

def InitializeMoonData():
    '''
    Return a 40x40 half-circle, tilted 45 degrees, as raw data.
    Only call once, at program initialization.
    '''
    im = Image.new("1", (40, 40))
    draw = ImageDraw.Draw(im)
    # Draw a 40-diameter half-circle, tilted 45 degrees
    draw.chord((0, 0, 40, 40),
               45,
               45 + 180,
               outline="white",
               fill="white")
    del draw
    # Fetch the image data:
    moon = list(im.getdata())
    # Pack it into a 2d matrix
    moon = [moon[i:i+40] for i in range(0, 1600, 40)]
    return moon

# Store a copy of the moon data somewhere useful
moon = InitializeMoonData()

def ApplyMoonStamp(matrix, x, y):
    '''
    Put a moon in the matrix image at location x,y.
    Call whenever you need a moon.
    '''
    # UNTESTED
    for i, row in enumerate(moon):
        for j, pixel in enumerate(row):
            if pixel != 0:
                # If the moon pixel is not black,
                # set the image pixel to white
                matrix[x+i][y+j] = 255

# In your code:
# m = Matrix(1024,768)
# m = # some kind of math to create the image #
# ApplyMoonStamp(m, 128,128)  # Adds the moon to your image
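For a quick test of these two functions, here is a minimal sketch that substitutes a plain nested list for the hypothetical Matrix class mentioned in the comments above:

# A plain 2D list stands in for the pseudo-code Matrix(1024,768):
m = [[0 for _ in range(768)] for _ in range(1024)]
ApplyMoonStamp(m, 128, 128)  # stamps the moon with its top-left corner at (128, 128)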
Draw a half circle easily using the pieslice function:
from PIL import Image, ImageDraw
# Create a new empty 100x100 image for the sake of example.
# Use Image.open() to draw on your image instead, like this:
# img = Image.open('my_image.png')
img = Image.new('RGB', (100, 100))
radius = 25
# The circle position and size are specified by
# two points defining the bounding rectangle around the circle
topLeftPoint = (0, 0)
bottomRightPoint = (radius * 2, radius * 2)
draw = ImageDraw.Draw(img)
# Zero angle is at the positive X axis, and angles increase clockwise.
# start = 0, end = 180 would be the bottom half circle.
# Adding 45 degrees, we get the diagonal half circle.
draw.pieslice((topLeftPoint, bottomRightPoint), start=45, end=180 + 45, fill='yellow')
img.save('moon.png')
Result:
I have Lego cubes forming a 4x4 shape, and I'm trying to infer the status of each zone inside the image:
empty/full, and the color, whether yellow or blue.
To simplify my work I have added red markers to define the borders of the shape, since the camera shakes sometimes.
Here is a clear image of the shape I'm trying to detect, taken by my phone camera
(EDIT: note that this image is not my input image; it is used just to demonstrate the required shape clearly).
The shape from the side camera that I'm supposed to use looks like this:
(EDIT: now this is my input image)
To focus my work on the working zone I have created a mask:
What I have tried so far is to locate the red markers by color (a simple threshold, without HSV color space), as follows:
import numpy as np
import matplotlib.pyplot as plt
import cv2
img = cv2.imread('sample.png')
RGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
mask = cv2.imread('mask.png')
masked = np.minimum(RGB, mask)
masked[masked[...,1]>25] = 0
masked[masked[...,2]>25] = 0
masked = masked[..., 0]
masked = cv2.medianBlur(masked,5)
plt.imshow(masked, cmap='gray')
plt.show()
And I have spotted the markers so far:
But I'm still confused: how do I precisely detect the external borders of the desired zone, and the internal borders (the borders of each yellow/blue/green Lego cube) inside the red markers?
Thanks in advance for your kind advice.
I tested this approach using your undistorted image. Suppose you have the rectified camera image, so you see the Lego bricks through a "bird's eye" perspective. Now, the idea is to use the red markers to estimate a center rectangle and crop that portion of the image. Then, as you know each brick's dimensions (and they are constant), you can trace a grid and extract each cell of the grid. You can compute some HSV-based masks to estimate the dominant color in each cell, and that way you know if the space is occupied by a yellow or blue brick, or if it is empty.
These are the steps:
Get an HSV mask of the red markers
Use each marker to estimate the center rectangle through each marker's coordinates
Crop the center rectangle
Divide the rectangle into cells - this is the grid
Run a series of HSV-based masks on each cell and compute the dominant color
Label each cell with the dominant color
Let's see the code:
# Importing cv2 and numpy:
import numpy as np
import cv2
# image path
path = "D://opencvImages//"
fileName = "Bg9iB.jpg"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Store a deep copy for results:
inputCopy = inputImage.copy()
# Convert the image to HSV:
hsvImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2HSV)
# The HSV mask values (Red):
lowerValues = np.array([127, 0, 95])
upperValues = np.array([179, 255, 255])
# Create the HSV mask
mask = cv2.inRange(hsvImage, lowerValues, upperValues)
The first part is very straightforward. You set the HSV range and use cv2.inRange to get a binary mask of the target color. This is the result:
We can further improve the binary mask using some morphology. Let's apply a closing with a somewhat big structuring element and 10 iterations. We want those markers as clearly defined as possible:
# Set kernel (structuring element) size:
kernelSize = 5
# Set operation iterations:
opIterations = 10
# Get the structuring element:
maxKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
# Perform closing:
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, maxKernel, None, None, opIterations, cv2.BORDER_REFLECT101)
Which yields:
Very nice. Now, let's detect contours on this mask. We will approximate each contour to a bounding box and store its starting point and dimensions. The idea being that, while we will detect every contour, we are not sure of their order. We can sort this list later and get each bounding box from left to right, top to bottom to better estimate the central rectangle. Let's detect contours:
# Create a deep copy, convert it to BGR for results:
maskCopy = mask.copy()
maskCopy = cv2.cvtColor(maskCopy, cv2.COLOR_GRAY2BGR)
# Find the big contours/blobs on the filtered image:
contours, hierarchy = cv2.findContours(mask, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
# Bounding Rects are stored here:
boundRectsList = []
# Process each contour 1-1:
for i, c in enumerate(contours):
    # Approximate the contour to a polygon:
    contoursPoly = cv2.approxPolyDP(c, 3, True)
    # Convert the polygon to a bounding rectangle:
    boundRect = cv2.boundingRect(contoursPoly)
    # Get the bounding rect's data:
    rectX = boundRect[0]
    rectY = boundRect[1]
    rectWidth = boundRect[2]
    rectHeight = boundRect[3]
    # Estimate the bounding rect area:
    rectArea = rectWidth * rectHeight
    # Set a min area threshold
    minArea = 100
    # Filter blobs by area:
    if rectArea > minArea:
        # Store the rect:
        boundRectsList.append(boundRect)
I also created a deep copy of the mask image for further use. Mainly to create this image, which is the result of the contour detection and bounding box approximation:
Notice that I have included a minimum area condition. I want to ignore noise below a certain threshold defined by minArea. Alright, now we have the bounding boxes in the boundRectsList variable. Let's sort these boxes using the Y coordinate:
# Sort the list based on ascending y values:
boundRectsSorted = sorted(boundRectsList, key=lambda x: x[1])
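Note that sorting by y alone gives a top-to-bottom order; left-to-right order within each row is not strictly guaranteed, and it happens to work here because of how the markers sit. If you need a guaranteed row-major order, a hedged alternative is to bucket the boxes into rows first and then sort each row by x (rowTolerance is an assumed pixel tolerance you would tune):

# Hypothetical row-major ordering: group boxes into rows of similar y,
# then sort each row by x. rowTolerance is an assumption to tune:
rowTolerance = 50
boundRectsSorted = sorted(boundRectsList,
                          key=lambda r: (r[1] // rowTolerance, r[0]))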
The list is now sorted and we can enumerate the boxes from left to right, top to bottom. Like this: first "row" -> 0, 1; second "row" -> 2, 3. Now, we can define the big, central rectangle using this info. I call these "inner points". Notice the rectangle is defined as a function of all the bounding boxes. For example, its top-left starting point is defined by bounding box 0's bottom-right ending point (both x and y). Its width is defined by bounding box 1's bottom-left x coordinate, and its height is defined by bounding box 2's top y coordinate. I'm going to loop through each bounding box and extract their relevant dimensions to construct the center rectangle in the following way: (top left x, top left y, width, height). There's more than one way to achieve this. I prefer to use a dictionary to get the relevant data. Let's see:
# Rectangle dictionary:
# Each entry is an index of the currentRect list
# 0 - X, 1 - Y, 2 - Width, 3 - Height
# Additionally: -1 is 0 (no dimension):
pointsDictionary = {0: (2, 3),
1: (-1, 3),
2: (2, -1),
3: (-1, -1)}
# Store center rectangle coordinates here:
centerRectangle = [None]*4
# Process the sorted rects:
rectCounter = 0
for i in range(len(boundRectsSorted)):
    # Get sorted rect:
    currentRect = boundRectsSorted[i]
    # Get the bounding rect's data:
    rectX = currentRect[0]
    rectY = currentRect[1]
    rectWidth = currentRect[2]
    rectHeight = currentRect[3]
    # Draw sorted rect:
    cv2.rectangle(maskCopy, (int(rectX), int(rectY)),
                  (int(rectX + rectWidth), int(rectY + rectHeight)),
                  (0, 255, 0), 5)
    # Get the inner points:
    currentInnerPoint = pointsDictionary[i]
    borderPoint = [None] * 2
    # Check coordinates:
    for p in range(2):
        # Check for '0' index:
        idx = currentInnerPoint[p]
        if idx == -1:
            borderPoint[p] = 0
        else:
            borderPoint[p] = currentRect[idx]
    # Draw the border points:
    color = (0, 0, 255)
    thickness = -1
    centerX = rectX + borderPoint[0]
    centerY = rectY + borderPoint[1]
    radius = 50
    cv2.circle(maskCopy, (centerX, centerY), radius, color, thickness)
    # Mark the circle:
    org = (centerX - 20, centerY + 20)
    font = cv2.FONT_HERSHEY_SIMPLEX
    cv2.putText(maskCopy, str(rectCounter), org, font,
                2, (0, 0, 0), 5, cv2.LINE_8)
    # Show the circle:
    cv2.imshow("Sorted Rects", maskCopy)
    cv2.waitKey(0)
    # Store the coordinates into the list:
    if rectCounter == 0:
        centerRectangle[0] = centerX
        centerRectangle[1] = centerY
    elif rectCounter == 1:
        centerRectangle[2] = centerX - centerRectangle[0]
    elif rectCounter == 2:
        centerRectangle[3] = centerY - centerRectangle[1]
    # Increase rectCounter:
    rectCounter += 1
This image shows each inner point with a red circle. Each circle is enumerated from left to right, top to bottom. The inner points are stored in the centerRectangle list:
If you join each inner point you get the center rectangle we have been looking for:
# Check out the big rectangle at the center:
bigRectX = centerRectangle[0]
bigRectY = centerRectangle[1]
bigRectWidth = centerRectangle[2]
bigRectHeight = centerRectangle[3]
# Draw the big rectangle:
cv2.rectangle(maskCopy, (int(bigRectX), int(bigRectY)),
              (int(bigRectX + bigRectWidth), int(bigRectY + bigRectHeight)),
              (0, 0, 255), 5)
cv2.imshow("Big Rectangle", maskCopy)
cv2.waitKey(0)
Check it out:
Now, just crop this portion of the original image:
# Crop the center portion:
centerPortion = inputCopy[bigRectY:bigRectY + bigRectHeight, bigRectX:bigRectX + bigRectWidth]
# Store a deep copy for results:
centerPortionCopy = centerPortion.copy()
This is the central portion of the image:
Cool, now let's create the grid. You know that there must be 4 bricks per width and 4 bricks per height. We can divide the image using this info. I'm storing each sub-image, or cell, in a list. I'm also estimating each cell's center, for additional processing. These are stored in a list too. Let's see the procedure:
# Divide the image into a grid:
verticalCells = 4
horizontalCells = 4

# Cell dimensions:
cellWidth = bigRectWidth / verticalCells
cellHeight = bigRectHeight / horizontalCells

# Store the cells here:
cellList = []

# Store cell centers here:
cellCenters = []

# Loop thru the vertical dimension:
for j in range(verticalCells):
    # Cell starting y position:
    yo = j * cellHeight
    # Loop thru the horizontal dimension:
    for i in range(horizontalCells):
        # Cell starting x position:
        xo = i * cellWidth
        # Cell dimensions:
        cX = int(xo)
        cY = int(yo)
        cWidth = int(cellWidth)
        cHeight = int(cellHeight)
        # Crop the current cell:
        currentCell = centerPortion[cY:cY + cHeight, cX:cX + cWidth]
        # Into the cell list:
        cellList.append(currentCell)
        # Store the cell center:
        cellCenters.append((cX + 0.5 * cWidth, cY + 0.5 * cHeight))
        # Draw the cell:
        cv2.rectangle(centerPortionCopy, (cX, cY), (cX + cWidth, cY + cHeight), (255, 255, 0), 5)

cv2.imshow("Grid", centerPortionCopy)
cv2.waitKey(0)
This is the grid:
Let's now process each cell individually. Of course, you could process each cell in the previous loop, but I'm not currently looking for optimization; clarity is my priority. We need to generate a series of HSV masks with the target colors: yellow, blue and green (empty). I prefer to, again, implement a dictionary with the target colors. I'll generate a mask for each color and count the number of white pixels using cv2.countNonZero. Again, I set a minimum threshold, this time of 10. With this info I can determine which mask generated the maximum number of white pixels, thus giving me the dominant color:
# HSV dictionary - color ranges and color name:
colorDictionary = {0: ([93, 64, 21], [121, 255, 255], "blue"),
                   1: ([20, 64, 21], [30, 255, 255], "yellow"),
                   2: ([55, 64, 21], [92, 255, 255], "green")}

# Cell counter:
cellCounter = 0

for c in range(len(cellList)):
    # Get the current cell:
    currentCell = cellList[c]
    # Convert to HSV:
    hsvCell = cv2.cvtColor(currentCell, cv2.COLOR_BGR2HSV)
    # Some additional info:
    (h, w) = currentCell.shape[:2]
    # Process masks:
    maxCount = 10
    cellColor = "None"
    for m in range(len(colorDictionary)):
        # Get the current lower and upper range values:
        currentLowRange = np.array(colorDictionary[m][0])
        currentUppRange = np.array(colorDictionary[m][1])
        # Create the HSV mask:
        mask = cv2.inRange(hsvCell, currentLowRange, currentUppRange)
        # Get the number of target pixels:
        targetPixelCount = cv2.countNonZero(mask)
        if targetPixelCount > maxCount:
            maxCount = targetPixelCount
            # Get the color name from the dictionary:
            cellColor = colorDictionary[m][2]
    # Get the cell center, add an x offset:
    textX = int(cellCenters[cellCounter][0]) - 100
    textY = int(cellCenters[cellCounter][1])
    # Draw text at the cell's center:
    font = cv2.FONT_HERSHEY_SIMPLEX
    cv2.putText(centerPortion, cellColor, (textX, textY), font,
                2, (0, 0, 255), 5, cv2.LINE_8)
    # Increase cellCounter:
    cellCounter += 1

cv2.imshow("centerPortion", centerPortion)
cv2.waitKey(0)
This is the result:
From here it is easy to identify the empty spaces on the grid. What I didn't cover was the perspective rectification of your distorted image, but there's plenty of info on how to do that. Hope this helps you out!
Edit:
If you want to apply this approach to your distorted image you need to undo the fish-eye and the perspective distortion. Your rectified image should look like this:
You probably will have to tweak some values because some of the distortion still remains, even after rectification.
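As a rough, untested sketch of that rectification step: the corner coordinates and output size below are hypothetical placeholders you would measure yourself, and the fish-eye correction itself additionally needs your camera's calibration data (e.g., via cv2.undistort), which is not covered here:

import numpy as np
import cv2

inputImage = cv2.imread("distorted.jpg")  # hypothetical file name

# Four corners of the play area in the distorted image, picked by hand,
# ordered top-left, top-right, bottom-right, bottom-left (placeholders):
srcPoints = np.float32([[210, 180], [930, 170], [980, 760], [160, 770]])

# Where those corners should land in the rectified image:
outWidth, outHeight = 800, 800
dstPoints = np.float32([[0, 0], [outWidth, 0],
                        [outWidth, outHeight], [0, outHeight]])

# Compute the perspective transform and warp:
H = cv2.getPerspectiveTransform(srcPoints, dstPoints)
rectified = cv2.warpPerspective(inputImage, H, (outWidth, outHeight))

cv2.imshow("Rectified", rectified)
cv2.waitKey(0)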
Here is a test program. I started with two random dots and the line connecting them. Now I want to take a given image (with x,y dimensions of 79 x 1080) and blit it on top of the guide line. I understand that arctan will give me the angle between the points on a Cartesian grid, but because y is inverted in screen coordinates (x,y), I have to negate some values. I'm confused about the negating step.
If you run this repeatedly, you'll see the image is always parallel to the line, and sometimes on top of it, but not consistently.
import math
import pygame
import random
pygame.init()
screen = pygame.display.set_mode((600,600))
#target = (126, 270)
#start = (234, 54)
target = (random.randrange(600), random.randrange(600))
start = (random.randrange(600), random.randrange(600))
BLACK = (0,0,0)
BLUE = (0,0,128)
GREEN = (0,128,0)
pygame.draw.circle(screen, GREEN, start, 15)
pygame.draw.circle(screen, BLUE, target, 15)
pygame.draw.line(screen, BLUE, start, target, 5)
route = pygame.Surface((79,1080))
route.set_colorkey(BLACK)
BMP = pygame.image.load('art/trade_route00.png').convert()
(bx, by, bwidth, bheight) = route.get_rect()
route.blit(BMP, (0,0), area=route.get_rect())
# get distance within screen in pixels
dist = math.sqrt((start[0] - target[0])**2 + (start[1] - target[1])**2)
# scale to fit: use distance between points, and make width extra skinny.
route = pygame.transform.scale(route, (int(bwidth * dist/bwidth * 0.05), int( bheight * dist/bheight)))
# and rotate... (invert, as negative is for clockwise)
angle = math.degrees(math.atan2(-1*(target[1]-start[1]), target[0]-start[0]))
route = pygame.transform.rotate(route, angle + 90 )
position = route.get_rect()
HERE = (abs(target[0] - position[2]), target[1]) # - position[3]/2)
print(HERE)
screen.blit(route, HERE)
pygame.display.update()
print(start, target, dist, angle, position)
The main problem
The error is not due to the inverted y coordinates (0 at top, max at bottom) during rotation, as you seem to think. That part is correct. The error is here:
HERE = (abs(target[0] - position[2]), target[1]) # - position[3]/2)
HERE must be the coordinates of the top-left corner of the rectangle enclosing your green and blue dots connected by the blue line. At those coordinates, you need to place the rescaled Surface route.
You can get this vertex by doing:
HERE = (min(start[0], target[0]), min(start[1], target[1]))
This should solve the problem, and the route image should lie along the blue line between your colored dots.
A side note
Another thing you might wish to fix is the scaling parameter of route:
route = pygame.transform.scale(route, (int(bwidth * dist/bwidth * 0.05), int( bheight * dist/bheight)))
If my guess is correct and you want to preserve the original width/height ratio in the rescaled route (since your original image is not a square), this should be:
route = pygame.transform.scale(route, (int(dist* bwidth/bheight), int(dist)))
assuming that you want the height (the greater dimension in the original) to be scaled to dist. So you may not need the 0.05, or maybe you can use a different shrinking parameter (0.05 will probably shrink it too much).
I am using OpenCV for a robot vision project - navigating a maze. I can detect the lines where the walls of the maze meet the floor, and now need to use these detected lines to calculate which way the robot should turn.
In order to work out which way the robot should move, I believe the solution is to calculate the angle of the walls in relation to the position of the robot. However, where both walls are found, how do I select which points to use as a reference?
I understand that I can use the Python atan2 function to calculate the angle between two points, but after that I am completely lost.
Here is my code:
# https://towardsdatascience.com/finding-driving-lane-line-live-with-opencv-f17c266f15db
# Testing edge detection for maze
import cv2
import numpy as np
import math
image = cv2.imread("/Users/BillHarvey/Documents/Electronics_and_Robotics/Robot_Vision_Project/mazeme/maze1.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
kernel_size = 5
blur_gray = cv2.GaussianBlur(gray,(kernel_size,kernel_size),0)
low_threshold = 50
high_threshold = 150
edges = cv2.Canny(blur_gray, low_threshold, high_threshold)
# create a mask of the edges image using cv2.filpoly()
mask = np.zeros_like(edges)
ignore_mask_color = 255
# define the Region of Interest (ROI) - source code sets as a trapezoid for roads
imshape = image.shape
vertices = np.array([[(0,imshape[0]),(100, 420), (1590, 420),(imshape[1],imshape[0])]], dtype=np.int32)
cv2.fillPoly(mask, vertices, ignore_mask_color)
masked_edges = cv2.bitwise_and(edges, mask)
# mybasic ROI bounded by a blue rectangle
#ROI = cv2.rectangle(image,(0,420),(1689,839),(0,255,0),3)
# define the Hough Transform parameters
rho = 2 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 15 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 40 #minimum number of pixels making up a line
max_line_gap = 30 # maximum gap in pixels between connectable line segments
# make a blank the same size as the original image to draw on
line_image = np.copy(image)*0
# run Hough on edge detected image
lines = cv2.HoughLinesP(masked_edges, rho, theta, threshold, np.array([]),min_line_length, max_line_gap)
for line in lines:
    for x1, y1, x2, y2 in line:
        cv2.line(line_image, (x1, y1), (x2, y2), (255, 0, 0), 10)
        angle = math.atan2(x2 - x1, y2 - y1)
        angle = angle * 180 / 3.14
        print("Angle = ", angle)
# draw the line on the original image
lines_edges = cv2.addWeighted(image, 0.8, line_image, 1, 0)
#return lines_edges
#cv2.imshow("original", image)
#cv2.waitKey(0)
#cv2.imshow("edges", edges)
#cv2.waitKey(0)
cv2.imshow("detected", lines_edges)
cv2.waitKey(0)
cv2.imwrite("lanes_detected.jpg", lines_edges)
cv2.destroyAllWindows()
I have added the atan2 formula to the piece of code that draws blue lines where HoughLinesP has detected lines.
And to convert the results (angle) to degrees I found this formula:
angle = angle * 180 / 3.14
The following piece of code:
print("Angle = ", angle)
Prints 13 angles, which may or may not equate to the lines in the pic - do they? To avoid getting negative degrees I had to use x2-x1, y2-y1 rather than the other way around, which I have seen elsewhere.
I do apologise for my fundamental lack of Python and mathematical knowledge, but any help would be gratefully received.
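For reference, a minimal sketch of the usual convention, as an assumption about the intent rather than a fix for the whole program: math.atan2 takes (dy, dx), and since image y grows downward, negating dy gives the standard counter-clockwise angle; math.degrees avoids the hand-rolled 180 / 3.14 approximation:

import math

# Hypothetical segment endpoints (x1, y1) -> (x2, y2) in image coordinates:
x1, y1, x2, y2 = 100, 400, 300, 250
# atan2 expects (dy, dx); negate dy because image y grows downward:
angle = math.degrees(math.atan2(-(y2 - y1), x2 - x1))
print("Angle =", angle)  # in (-180, 180]; negative values are meaningful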
I have a binary black and white image that looks like this:
I want to fill in those white circles so they become solid white disks. How can I do this in Python, preferably using skimage?
You can detect circles with skimage's methods hough_circle and hough_circle_peaks and then draw over them to "fill" them.
In the following example, most of the code is doing "hierarchy" computation for the best-fitting circles, to avoid drawing circles that are inside one another:
# skimage version 0.14.0
import math
import numpy as np
import matplotlib.pyplot as plt
from skimage import color
from skimage.io import imread
from skimage.transform import hough_circle, hough_circle_peaks
from skimage.feature import canny
from skimage.draw import circle
from skimage.util import img_as_ubyte
INPUT_IMAGE = 'circles.png' # input image name
BEST_COUNT = 6 # how many circles to draw
MIN_RADIUS = 20 # min radius should be bigger than noise
MAX_RADIUS = 60 # max radius of circles to be detected (in pixels)
LARGER_THRESH = 1.2 # circle is considered significantly larger than another one if its radius is at least so much bigger
OVERLAP_THRESH = 0.1 # circles are considered overlapping if this part of the smaller circle is overlapping
def circle_overlap_percent(centers_distance, radius1, radius2):
    '''
    Calculating the percentage area overlap between circles
    See Gist for comments:
    https://gist.github.com/amakukha/5019bfd4694304d85c617df0ca123854
    '''
    R, r = max(radius1, radius2), min(radius1, radius2)
    if centers_distance >= R + r:
        return 0.0
    elif R >= centers_distance + r:
        return 1.0
    R2, r2 = R**2, r**2
    x1 = (centers_distance**2 - R2 + r2)/(2*centers_distance)
    x2 = abs(centers_distance - x1)
    y = math.sqrt(R2 - x1**2)
    a1 = R2 * math.atan2(y, x1) - x1*y
    # Compute the smaller circle's segment area first, then flip it
    # if the chord lies beyond its center:
    a2 = r2 * math.atan2(y, x2) - x2*y
    if x1 > centers_distance:
        a2 = math.pi * r2 - a2
    overlap_area = a1 + a2
    return overlap_area / (math.pi * r2)

def circle_overlap(c1, c2):
    d = math.sqrt((c1[0]-c2[0])**2 + (c1[1]-c2[1])**2)
    return circle_overlap_percent(d, c1[2], c2[2])

def inner_circle(cs, c, thresh):
    '''Is circle `c` "inside" one of the `cs` circles?'''
    for dc in cs:
        # If the new circle is larger than an existing one -> it's not inside
        if c[2] > dc[2]*LARGER_THRESH:
            continue
        # If the new circle is smaller than an existing one...
        if circle_overlap(dc, c) > thresh:
            # ...and there is significant overlap -> it's an inner circle
            return True
    return False
# Load picture and detect edges
image = imread(INPUT_IMAGE, 1)
image = img_as_ubyte(image)
edges = canny(image, sigma=3, low_threshold=10, high_threshold=50)
# Detect circles of specific radii
hough_radii = np.arange(MIN_RADIUS, MAX_RADIUS, 2)
hough_res = hough_circle(edges, hough_radii)
# Select the most prominent circles (in order from best to worst)
accums, cx, cy, radii = hough_circle_peaks(hough_res, hough_radii)
# Determine BEST_COUNT circles to be drawn
drawn_circles = []
for crcl in zip(cy, cx, radii):
    # Do not draw circles if they are mostly inside better fitting ones
    if not inner_circle(drawn_circles, crcl, OVERLAP_THRESH):
        # A good circle found: exclude smaller circles it covers
        i = 0
        while i < len(drawn_circles):
            if circle_overlap(crcl, drawn_circles[i]) > OVERLAP_THRESH:
                t = drawn_circles.pop(i)
            else:
                i += 1
        # Remember the new circle
        drawn_circles.append(crcl)
    # Stop after having found more circles than needed
    if len(drawn_circles) > BEST_COUNT:
        break
drawn_circles = drawn_circles[:BEST_COUNT]
# Actually draw circles
colors = [(250, 0, 0), (0, 250, 0), (0, 0, 250)]
colors += [(200, 200, 0), (0, 200, 200), (200, 0, 200)]
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(10, 4))
image = color.gray2rgb(image)
for center_y, center_x, radius in drawn_circles:
    circy, circx = circle(center_y, center_x, radius, image.shape)
    color = colors.pop(0)
    image[circy, circx] = color
    colors.append(color)
ax.imshow(image, cmap=plt.cm.gray)
plt.show()
Result:
Do a morphological closing (explanation) to fill those tiny gaps, to complete the circles. Then fill the resulting binary image.
Code :
from skimage import io
from skimage.morphology import binary_closing, disk
import scipy.ndimage as nd
import matplotlib.pyplot as plt
# Read image, binarize
I = io.imread("FillHoles.png")
bwI = I[:,:,1] > 0
fig=plt.figure(figsize=(24, 8))
# Original image
fig.add_subplot(1,3,1)
plt.imshow(bwI, cmap='gray')
# Dilate -> Erode. You might not want to use a disk in this case,
# more asymmetric structuring elements might work better
strel = disk(4)
I_closed = binary_closing(bwI, strel)
# Closed image
fig.add_subplot(1,3,2)
plt.imshow(I_closed, cmap='gray')
I_closed_filled = nd.morphology.binary_fill_holes(I_closed)
# Filled image
fig.add_subplot(1,3,3)
plt.imshow(I_closed_filled, cmap='gray')
Result :
Note how the segmentation trash has melded into your object on the lower right, and the small cape on the lower part of the middle object has been closed. You might want to continue with a morphological erosion or opening after this.
EDIT: Long response to comments below
The disk(4) was just the example I used to produce the results seen in the image. You will need to find a suitable value yourself. Too big a value will lead to small objects being melded into bigger objects near them, like the cluster on the right side of the image. It will also close gaps between objects, whether you want it or not. Too small a value will lead to the algorithm failing to complete the circles, so the filling operation will then fail.
Morphological erosion will erase a zone the size of the structuring element from the borders of the objects. Morphological opening is the inverse operation of closing, so instead of dilate->erode it will do erode->dilate. The net effect of opening is that all objects and capes smaller than the structuring element will vanish. If you do it after filling, then the large objects will stay relatively the same. Ideally it should remove a lot of the segmentation artifacts caused by the morphological closing I used in the code example, which may or may not be pertinent to you depending on your application.
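As a rough sketch of that follow-up step, reusing the variables from the code above (the disk(2) size is an assumption you would tune for your images):

from skimage.morphology import binary_opening

# Erode -> dilate: removes objects and capes smaller than the structuring
# element; disk(2) is an assumed size, tune it for your data:
I_cleaned = binary_opening(I_closed_filled, disk(2))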
I don't know skimage, but if you used OpenCV, I would do a Hough transform for circles and then just draw over them.
The Hough transform is robust; if there are some small holes in the circles, that is no problem.
Something like:
# In OpenCV 3+ the constant is cv2.HOUGH_GRADIENT
# (it was cv2.cv.CV_HOUGH_GRADIENT in OpenCV 2):
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 100)

# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circles = np.round(circles[0, :]).astype("int")
    # loop over the (x, y) coordinates and radius of the circles
    # you can check size etc. here
    for (x, y, r) in circles:
        # draw the circle in the output image
        # you can fill it here (e.g. with thickness=-1)
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
    # show the output image
    cv2.imshow("output", np.hstack([image, output]))
    cv2.waitKey(0)
See more info here: https://www.pyimagesearch.com/2014/07/21/detecting-circles-images-using-opencv-hough-circles/
I have created a triangle positioned in the centre of the screen.
from PIL import Image, ImageDraw
WHITE = (255, 255, 255)
GRAY = (190, 190, 190)
im = Image.new('RGBA', (400, 400), WHITE)
points = (250, 250), (100, 250), (250, 100)
draw = ImageDraw.Draw(im)
draw.polygon(points, GRAY)
How do I duplicate this image and reflect it along each side of the triangle at different random points? For example...
Plan: first find a random point on an edge of the big triangle where the smaller triangle should go, and then rotate the smaller one so it fits properly against that edge.
Suppose we can access the corner points of the triangle with something like this:
triangle.edges[0].x,
triangle.edges[0].y,
triangle.edges[1].x,
etc
We can then find an arbitrary point by first selecting an edge, and "walk a random distance to the next edge":
r = random.randrange(3)  # random integer between 0 and 2
first_edge = triangle.edges[r]
second_edge = triangle.edges[0] if r == 2 else triangle.edges[r + 1]
# The next lines are still kind of pseudo-code:
t = random.random()  # random float between 0 and 1
random_point = (second_edge - first_edge) * t + first_edge
Our next problem is how to rotate a triangle. If you have done some algebra you might recognise this:
def rotatePointAroundOrigin(point, angle):
    new_point = Point()
    new_point.x = cos(angle)*point.x - sin(angle)*point.y
    new_point.y = sin(angle)*point.x + cos(angle)*point.y
    return new_point
(see https://en.wikipedia.org/wiki/Rotation_matrix)
In addition to this you need to determine just how much to rotate the triangle, and then apply the function above to all of the points.
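To make the plan concrete, here is a minimal, self-contained sketch under stated assumptions: triangles are plain lists of (x, y) tuples (the Point and triangle.edges objects above are pseudo-code), the rotation angle is taken from the direction of the chosen edge, the copy is scaled by an assumed factor of 0.5, and it is simply translated so its first vertex lands on the random point. It illustrates the approach; it is not the only way to do it:

import math
import random

def rotate_point(point, angle, origin=(0, 0)):
    # Rotate a point around `origin` by `angle` radians (counter-clockwise):
    x, y = point[0] - origin[0], point[1] - origin[1]
    return (origin[0] + math.cos(angle) * x - math.sin(angle) * y,
            origin[1] + math.sin(angle) * x + math.cos(angle) * y)

def random_point_on_edge(triangle):
    # Pick a random edge and a random point along it; also return the
    # edge's direction angle, which tells us how much to rotate the copy:
    r = random.randrange(3)
    (x1, y1), (x2, y2) = triangle[r], triangle[(r + 1) % 3]
    t = random.random()
    point = (x1 + (x2 - x1) * t, y1 + (y2 - y1) * t)
    edge_angle = math.atan2(y2 - y1, x2 - x1)
    return point, edge_angle

# The triangle as a list of (x, y) vertices, matching the drawing above:
big = [(250, 250), (100, 250), (250, 100)]

# Make a smaller copy (0.5 is an assumed scale factor), rotate it to align
# with the chosen edge, then translate it onto the random edge point:
small = [(0.5 * x, 0.5 * y) for x, y in big]
anchor, angle = random_point_on_edge(big)
rotated = [rotate_point(p, angle) for p in small]
dx, dy = anchor[0] - rotated[0][0], anchor[1] - rotated[0][1]
placed = [(x + dx, y + dy) for x, y in rotated]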