How to calculate the distance between two lines? - python

I have been trying to calculate the distance between two lines in an image in Python. For example, in the image given below, I want to find the perpendicular distance between the two ends of the yellow block. So far I have only been able to derive the distance between two pixels.
The code I have finds the distance between a red and a blue pixel. I figured I could extend it to measure the distance between the two points/lines in this image, but no luck yet.
import numpy as np
from PIL import Image
import math
# Load image and ensure RGB - just in case palettised
im = Image.open("2points.png").convert("RGB")
# Make numpy array from image
npimage = np.array(im)
# Describe what a single red pixel looks like
red = np.array([255,0,0],dtype=np.uint8)
# Find [row, col] coordinates of all red pixels
reds = np.where(np.all((npimage==red),axis=-1))
print(reds)
# Describe what a single blue pixel looks like
blue=np.array([0,0,255],dtype=np.uint8)
# Find [row, col] coordinates of all blue pixels
blues=np.where(np.all((npimage==blue),axis=-1))
print(blues)
dx2 = (blues[0][0]-reds[0][0])**2 # squared row difference, e.g. (200-10)^2
dy2 = (blues[1][0]-reds[1][0])**2 # squared column difference, e.g. (300-20)^2
distance = math.sqrt(dx2 + dy2)
print(distance)

While preparing this answer, I realized, that my hint regarding cv2.boxPoints was misleading. Of course, I had cv2.boundingRect on my mind – sorry for that!
Nevertheless, here's the full step-by-step approach:
Use cv2.inRange to mask all yellow pixels. Attention: Your image has JPG artifacts, such that you get a lot of noise in the mask, cf. the output:
Use cv2.findContours to find all contours in the mask. That'll be over 50, due to the many tiny artifacts.
Use Python's max function on the (list of) found contours using cv2.contourArea as key to get the largest contour.
Finally, use cv2.boundingRect to get the bounding rectangle of the contour. That's a tuple (x, y, width, height). Just use the last two elements, and you have your desired information.
That'd be my code:
import cv2
# Read image with OpenCV
img = cv2.imread('path/to/your/image.ext')
# Mask yellow color (0, 255, 255) in image; Attention: OpenCV uses BGR ordering
yellow_mask = cv2.inRange(img, (0, 255, 255), (0, 255, 255))
# Find contours in yellow mask (return signature differs between OpenCV versions)
cnts = cv2.findContours(yellow_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
# Get the largest contour
cnt = max(cnts, key=cv2.contourArea)
# Get width and height from bounding rectangle of largest contour
(x, y, w, h) = cv2.boundingRect(cnt)
print('Width:', w, '| Height:', h)
The output
Width: 518 | Height: 320
seems reasonable.
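If you need a single number rather than both dimensions, w (or h) already is the perpendicular distance between the block's opposite edges. As a hedged follow-up reusing w and h from the snippet above, the corner-to-corner distance would be the diagonal:
import math
# follow-up sketch reusing w and h from above: w (or h) is the perpendicular
# distance between opposite edges; corner-to-corner is the diagonal
print('Diagonal:', math.hypot(w, h))  # sqrt(w**2 + h**2)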
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.8.5
OpenCV: 4.5.1
----------------------------------------

Related

How to find rectangles in a fully transparent object?

I have an input image of a fully transparent object:
I need to detect the 42 rectangles in this image. This is an example of the output image I need (I marked 6 rectangles for better understanding):
The problem is that the rectangles look really different. I have to use this input image.
How can I achieve this?
Edit 1: Here is an input image as PNG:
If you calculate the variance down the rows and across the columns, using:
import cv2
import numpy as np
im = cv2.imread('YOURIMAGE', cv2.IMREAD_GRAYSCALE)
# Calculate horizontal and vertical variance
h = np.var(im, axis=1)
v = np.var(im, axis=0)
You can plot them and hopefully locate the peaks of variance which should be your objects:
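For instance, a hedged sketch continuing from the snippet above (SciPy and the prominence value are assumptions to tune per image):
import matplotlib.pyplot as plt
from scipy.signal import find_peaks
# locate peaks in the row/column variance profiles computed above
row_peaks, _ = find_peaks(h, prominence=np.max(h) * 0.2)
col_peaks, _ = find_peaks(v, prominence=np.max(v) * 0.2)
print('row peaks:', row_peaks, '| column peaks:', col_peaks)
# plot both profiles to inspect the peaks visually
plt.plot(h, label='row variance')
plt.plot(v, label='column variance')
plt.legend()
plt.show()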
Mark Setchell's idea is out-of-the-box. Here is a more traditional approach.
Approach:
The image contains boxes whose intensity fades away in the lower rows. Global equalization would fail here, since the intensity changes of the entire image are taken into account. I opted for a local equalization approach; in OpenCV this is available as CLAHE (Contrast Limited Adaptive Histogram Equalization).
Using CLAHE:
Equalization is applied on individual regions of the image whose size can be predefined.
To avoid over-amplification, contrast limiting is applied (hence the name).
Let's see how to use it in our problem:
Code:
import cv2

# read image and store green channel (file path assumed)
img = cv2.imread('path/to/your/image.ext')
green_channel = img[:,:,1]
# grid-size for CLAHE
ts = 8
# initialize CLAHE function with parameters
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(ts, ts))
# apply the function
cl = clahe.apply(green_channel)
Notice in the image above that the boxes in the lower regions appear slightly darker, as expected. This will help us later on.
# apply Otsu threshold
r,th_cl = cv2.threshold(cl, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# dilation performed using vertical kernels to connect disjoined boxes
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 3))
dilate = cv2.dilate(th_cl, vertical_kernel, iterations=1)
# find contours and draw bounding boxes
contours, hierarchy = cv2.findContours(dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
img2 = img.copy()
for c in contours:
    area = cv2.contourArea(c)
    if area > 100:
        x, y, w, h = cv2.boundingRect(c)
        img2 = cv2.rectangle(img2, (x, y), (x + w, y + h), (0,255,255), 1)
(The top-rightmost box isn't covered properly. You would need to tweak the various parameters to get an accurate result)
Other pre-processing approaches you can try:
Global equalization
Contrast stretching
Normalization
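For reference, a hedged sketch of those three alternatives, assuming the green_channel array from the code above (the percentile bounds are an assumption):
import cv2
import numpy as np
# 1. global histogram equalization (uses the whole image's histogram)
global_eq = cv2.equalizeHist(green_channel)
# 2. contrast stretching: map the 2nd..98th percentiles onto 0..255
lo, hi = np.percentile(green_channel, (2, 98))
stretched = np.clip((green_channel - lo) * 255.0 / (hi - lo), 0, 255).astype(np.uint8)
# 3. min-max normalization to the full 8-bit range
normalized = cv2.normalize(green_channel, None, 0, 255, cv2.NORM_MINMAX)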

How to fill the hollow lines in OpenCV

I have an image like this:
after I applied some processings e.g. cv2.Canny(), it looks like this now:
As you can see that the black lines become hollow.
I have tried erosion and dilation, but if I apply them many times, the two entrances get closed (they become a connected line or a closed contour).
How could I make those lines solid, like in the image below, while keeping the two entrances unaffected?
Update 1
I have tested the following answers with a few photos, but the code seems customized to handle only this one particular picture. Due to the restrictions of SOF, I cannot upload photos larger than 2 MB, so I uploaded them to my Microsoft OneDrive folder for your convenience to test.
https://1drv.ms/u/s!Asflam6BEzhjgbIhgkL4rt1NLSjsZg?e=OXXKBK
Update 2
I accepted fmw42's post as the answer, as it is the most detailed one. It doesn't answer my question directly, but it points out the correct way to process a maze, which is my ultimate goal. I like his way of answering questions: first he tells you what each step should do, so that you have a clear idea of how to do the task, then he provides the full code example from beginning to end. Very helpful.
Due to the limitation of SOF, I can only accept one answer. If multiple answers were allowed, I would also accept Shamshirsaz.Navid's answer. His answer not only points in the correct direction to solve the issue, but the explanation with visualization also works really well for me! I guess it works equally well for everyone trying to understand why each line of code is needed. He also follows up on my questions in comments, which makes SOF a bit interactive :)
The Threshold track bar in Ann Zen's answer is also a very useful tip for quickly finding an optimal value.
Here is one way to process the maze and rectify it in Python/OpenCV.
Read the input
Convert to gray
Threshold
Use morphology close to remove the thinnest (extraneous) black lines
Invert the threshold
Get the external contours
Keep only those contours that are larger than 1/4 of both the width and height of the input
Draw those contours as white lines on black background
Get the convex hull from the white contour lines image
Draw the convex hull as white lines on black background
Use GoodFeaturesToTrack to get the 4 corners from the white hull lines image
Sort the 4 corners by angle relative to the centroid so that they are ordered clockwise: top-left, top-right, bottom-right, bottom-left
Set these points as the array of conjugate control points for the input
Use 1/2 the dimensions of the input to define the array of conjugate control points for the output
Compute the perspective transformation matrix
Warp the input image using the perspective matrix
Save the results
Input:
import cv2
import numpy as np
import math
# load image
img = cv2.imread('maze.jpg')
hh, ww = img.shape[:2]
# convert to gray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# threshold
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1]
# use morphology to remove the thin lines
kernel = cv2.getStructuringElement(cv2.MORPH_RECT , (5,1))
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# invert so that lines are white so that we can get contours for them
thresh_inv = 255 - thresh
# get external contours
contours = cv2.findContours(thresh_inv, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
# keep contours whose bounding boxes are greater than 1/4 in each dimension
# draw them as white on black background
contour = np.zeros((hh,ww), dtype=np.uint8)
for cntr in contours:
    x,y,w,h = cv2.boundingRect(cntr)
    if w > ww/4 and h > hh/4:
        cv2.drawContours(contour, [cntr], 0, 255, 1)
# get convex hull from contour image white pixels
points = np.column_stack(np.where(contour.transpose() > 0))
hull_pts = cv2.convexHull(points)
# draw hull on copy of input and on black background
hull = img.copy()
cv2.drawContours(hull, [hull_pts], 0, (0,255,0), 2)
hull2 = np.zeros((hh,ww), dtype=np.uint8)
cv2.drawContours(hull2, [hull_pts], 0, 255, 2)
# get 4 corners from white hull points on black background
num = 4
quality = 0.001
mindist = max(ww,hh) // 4
corners = cv2.goodFeaturesToTrack(hull2, num, quality, mindist)
corners = np.int0(corners)
for corner in corners:
    px,py = corner.ravel()
    cv2.circle(hull, (px,py), 5, (0,0,255), -1)
# get angles to each corner relative to centroid and store with x,y values in list
# angles are clockwise between -180 and +180 with zero along positive X axis (to right)
corner_info = []
center = np.mean(corners, axis=0)
centx = center.ravel()[0]
centy = center.ravel()[1]
for corner in corners:
    px,py = corner.ravel()
    dx = px - centx
    dy = py - centy
    angle = (180/math.pi) * math.atan2(dy,dx)
    corner_info.append([px,py,angle])
# function to define sort key as element 2 (i.e. angle)
def takeThird(elem):
    return elem[2]
# sort corner_info on angle so result will be TL, TR, BR, BL order
corner_info.sort(key=takeThird)
# make conjugate control points
# get input points from corners
corner_list = []
for x, y, angle in corner_info:
    corner_list.append([x,y])
print(corner_list)
# define input points from (sorted) corner_list
input = np.float32(corner_list)
# define output points from dimensions of image, say half of input image
width = ww // 2
height = hh // 2
output = np.float32([[0,0], [width-1,0], [width-1,height-1], [0,height-1]])
# compute perspective matrix
matrix = cv2.getPerspectiveTransform(input,output)
# do perspective transformation setting area outside input to black
result = cv2.warpPerspective(img, matrix, (width,height), flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT, borderValue=(0,0,0))
# save output
cv2.imwrite('maze_thresh.jpg', thresh)
cv2.imwrite('maze_contour.jpg', contour)
cv2.imwrite('maze_hull.jpg', hull)
cv2.imwrite('maze_rectified.jpg', result)
# Display various images to see the steps
cv2.imshow('thresh', thresh)
cv2.imshow('contour', contour)
cv2.imshow('hull', hull)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Thresholded Image after morphology:
Filtered Contours on black background:
Convex hull and 4 corners on input image:
Result from perspective warp:
You can try a simple threshold to detect the lines of the maze, as they are conveniently black:
import cv2
img = cv2.imread("maze.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)
cv2.imshow("Image", thresh)
cv2.waitKey(0)
Output:
You can adjust the threshold yourself with trackbars:
import cv2
cv2.namedWindow("threshold")
cv2.createTrackbar("", "threshold", 0, 255, id)  # the built-in id serves as a no-op callback
img = cv2.imread("maze.jpg")
while True:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    t = cv2.getTrackbarPos("", "threshold")
    _, thresh = cv2.threshold(gray, t, 255, cv2.THRESH_BINARY)
    cv2.imshow("Image", thresh)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # if you press the q key
        break
Canny is an edge detector: it detects the lines along which color changes. A line in your input image has two such transitions, one on each side. Therefore you see two parallel edge lines for each line in the image. This answer of mine explains the difference between edges and lines.
So, you shouldn’t be using an edge detector to detect lines in an image.
If a simple threshold doesn't properly binarize this image, try using a local threshold ("adaptive threshold" in OpenCV). Another thing that works well for images like these is applying a top hat filter (for this image, it would be a closing(img) - img), where the structuring element is adjusted to the width of the lines you want to find. This will result in an image that is easy to threshold and will preserve all lines thinner than the structuring element.
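A minimal sketch of both suggestions, assuming the maze image from the question (the kernel size and adaptive-threshold parameters are guesses to tune; OpenCV ships closing(img) - img as the black-hat operation):
import cv2
img = cv2.imread('maze.jpg', cv2.IMREAD_GRAYSCALE)
# black hat = closing(img) - img: highlights dark lines thinner than the kernel
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))  # wider than the lines
tophat = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, kernel)
# the background is now near-uniform, so a global Otsu threshold suffices
mask = cv2.threshold(tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# alternatively, the suggested local ("adaptive") threshold on the input itself
adaptive = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY_INV, 31, 10)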
Check this:
import cv2
import numpy as np
im = cv2.imread("test2.jpg", 1)
# convert to gray
mask = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
# convert to black and white
mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)[1]
# remove thin lines and texts, then remake the main lines
mask = cv2.dilate(mask, np.ones((5, 5), 'uint8'))
mask = cv2.erode(mask, np.ones((4, 4), 'uint8'))
# smooth the lines
mask = cv2.medianBlur(mask, 3)
# write output mask
cv2.imwrite("mask2.jpg", mask)
From now on, everything can be done. You can delete extra blobs, you can extract lines from the original image according to the mask, and things like that.
Median:
The median blur does not change much for this project, and it can be safely removed. But I prefer it because it rounds the ends of the lines a bit; you have to zoom in a lot to see the pixels. This technique is usually used to remove salt-and-pepper noise.
Erode Kernel:
For the kernel, the larger the numbers, the thicker the lines. That is not always good, because it causes the path lines to stick to the arrow, and later it becomes difficult to separate the paths from the arrow.
Update:
It does not matter if part of the Maze is cleared. The important thing is that from this mask you can draw a rectangle around this shape and create a new mask for this image.
Make a white rectangle around these paths in a new mask. Completely whiten the inside of the mask with floodFill or any other technique. Now you have a new mask that can take the whole shape out of the original image, and in the next step you can correct the perspective.
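A hedged sketch of this update, reusing mask and im from the code above; the rectangle coordinates are hypothetical placeholders for the one you find around the paths:
import cv2
import numpy as np
x, y, w, h = 50, 50, 400, 400  # hypothetical rectangle around the paths
# draw a white rectangle around the paths in a new mask
new_mask = np.zeros(mask.shape[:2], np.uint8)
cv2.rectangle(new_mask, (x, y), (x + w, y + h), 255, 2)
# flood-fill from a seed inside the rectangle to whiten its interior
ff_mask = np.zeros((new_mask.shape[0] + 2, new_mask.shape[1] + 2), np.uint8)
cv2.floodFill(new_mask, ff_mask, (x + w // 2, y + h // 2), 255)
# use the new mask to take the whole shape out of the original image
extracted = cv2.bitwise_and(im, im, mask=new_mask)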

How to detect intersecting shapes with OpenCV findContours?

I have two intersecting ellipses in a black and white image. I am trying to use OpenCV findContours to identify the separate shapes as separate contours using this code (and attached image below).
import numpy as np
import matplotlib.pyplot as plt
import cv2
import skimage.morphology
img_3d = cv2.imread("C:/temp/test_annotation_overlap.png")
img_grey = cv2.cvtColor(img_3d, cv2.COLOR_BGR2GRAY)
contours = cv2.findContours(img_grey, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
fig, ax = plt.subplots(len(contours)+1,1, figsize=(5, 20))
thicker_img_grey = skimage.morphology.dilation(img_grey, skimage.morphology.disk(radius=3))
ax[0].set_title("ORIGINAL IMAGE")
ax[0].imshow(thicker_img_grey, cmap="Greys")
for i, contour in enumerate(contours):
    new_img = np.zeros_like(img_grey)
    cv2.drawContours(new_img, contour, -1, (255,255,255), 10)
    ax[i+1].set_title(f"Contour {i}")
    ax[i+1].imshow(new_img, cmap="Greys")
plt.show()
However, four contours are found, none of which is the original contour:
How can I configure OpenCV.findContours to identify the two separate shapes? (Note I have already played around with Hough circles and found it unreliable for the images I am analysing)
Maybe I overkilled with this approach, but it could be used as a working solution. You can find all the contours on the image: you will get the two contours that are like a "semicircle", the contour of the intersection, and the contour that is the outer shape of the two adjoined circles. The smallest three contours should be the two semicircles and the intersection. If you draw combinations of two out of these three contours, you will get three masks, of which two will have the combination of one semicircle and the intersection. If you perform closing on such a mask, you will get your circle. Then you simply need an algorithm to detect which two masks represent full circles, and you will get your result. Here is the sample solution:
import numpy as np
import cv2
# Function for returning solidity of contour - ratio of contour area to its
# convex hull area.
def checkSolidity(cnt):
    area = cv2.contourArea(cnt)
    hull = cv2.convexHull(cnt)
    hull_area = cv2.contourArea(hull)
    solidity = float(area)/hull_area
    return solidity
img_orig = cv2.imread("circles.png")
# Had to dilate the image so the contour was completely connected.
img = cv2.dilate(img_orig, np.ones((3, 3), np.uint8))
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Grayscale transformation.
# Otsu threshold.
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)[1]
# Search for contours.
contours = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)[0]
# Sorting contours from smallest to biggest.
contours = sorted(contours, key=cv2.contourArea)
# Three contours - two semi circles and the intersection of the circles.
cnt1 = contours[0]
cnt2 = contours[1]
cnt3 = contours[2]
# Create three empty images
h, w = img.shape[:2]
mask1 = np.zeros((h, w), np.uint8)
mask2 = np.zeros((h, w), np.uint8)
mask3 = np.zeros((h, w), np.uint8)
# Draw all combinations of two out of three contours on the masks.
# The goal here is to draw one semicircle and the intersection together.
cv2.drawContours(mask1, [cnt1], 0, (255, 255, 255), -1)
cv2.drawContours(mask1, [cnt3], 0, (255, 255, 255), -1)
cv2.drawContours(mask2, [cnt2], 0, (255, 255, 255), -1)
cv2.drawContours(mask2, [cnt3], 0, (255, 255, 255), -1)
cv2.drawContours(mask3, [cnt1], 0, (255, 255, 255), -1)
cv2.drawContours(mask3, [cnt2], 0, (255, 255, 255), -1)
# Perform closing operation on the masks so that you get uniform contours.
kernel_size = 25
kernel = np.ones((kernel_size, kernel_size), np.uint8)
mask1 = cv2.morphologyEx(mask1, cv2.MORPH_CLOSE, kernel)
mask2 = cv2.morphologyEx(mask2, cv2.MORPH_CLOSE, kernel)
mask3 = cv2.morphologyEx(mask3, cv2.MORPH_CLOSE, kernel)
masks = [] # List for storing all the masks.
masks.append(mask1)
masks.append(mask2)
masks.append(mask3)
# List where you will append solidity of the found biggest contour of every mask.
solidity = []
for mask in masks:
    cnts = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)[0]
    cnt = max(cnts, key=lambda c: cv2.contourArea(c))
    s = checkSolidity(cnt)
    solidity.append(s)
# Index of the mask with smallest solidity.
min_solidity = solidity.index(min(solidity))
# The mask with the contour that has the smallest solidity should be the one
# that has two semicircles drawn instead of one semicircle and the intersection.
# You could build a better function to check which mask is the one with
# two semicircles... like maybe the contour with the largest
# height and width of the bounding box etc.
# I chose solidity because it is enough for this example.
# Selection of colors.
colors = {
    0: (0, 0, 255),
    1: (0, 255, 0),
    2: (255, 0, 0),
}
# Draw the contours of the other two masks - those that have the
# semicircle and the intersection.
for i, s in enumerate(solidity):
    if s != solidity[min_solidity]:
        cnts = cv2.findContours(
            masks[i], cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)[0]
        cnt = max(cnts, key=lambda c: cv2.contourArea(c))
        cv2.drawContours(img_orig, [cnt], 0, colors[i], 1)
# Display result
cv2.imshow("img", img_orig)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
Philosophically, you want to find two circles: since you search for circles, you expect centers and radii. Graphically the figures are connected, but we can see them as separate because we know what a "circle" is and extrapolate the coordinates of the parts which overlap.
So what about finding the minimum enclosing circle for each contour (or, in some cases, fitEllipse and using its parameters): https://docs.opencv.org/master/dd/d49/tutorial_py_contour_features.html
Then draw that circle on a clear image and take the pixel coordinates which are not zero - via a mask, or by computing them while drawing the circle step by step.
Then compare these coordinates with the coordinates in the other contours with some precision, and append the matching coordinates to the current contour.
Finally: draw the extended contour on a clear canvas and apply HoughCircles for a single non-overlapping circle. (Or compute the center and radius, then the coordinates of the circle, and compare them to the contour with some precision.)
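A hedged sketch of the first two steps (the filename and binarization threshold are assumptions):
import cv2
import numpy as np
img = cv2.imread('two_ellipses.png', cv2.IMREAD_GRAYSCALE)  # hypothetical name
bw = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)[1]
contours = cv2.findContours(bw, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[0]
for cnt in contours:
    # fit the minimum enclosing circle to this contour
    (cx, cy), r = cv2.minEnclosingCircle(cnt)
    # draw that circle on a clear image and collect its non-zero coordinates
    canvas = np.zeros_like(bw)
    cv2.circle(canvas, (int(cx), int(cy)), int(r), 255, 1)
    ys, xs = np.nonzero(canvas)
    # next: compare (xs, ys) against the other contours' points with some
    # tolerance and append the matches to the current contour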
For reference, here I will post the solution I came up with, based on some ideas here and a few more. This solution was 99.9% effective at recovering ellipses from images, often with many overlapping or contained ellipses and other image noise such as lines, text and so on.
The code is too lengthy and distributed to post here, but the pseudocode is as follows.
Segment the image
Run cv2 findContours with RETR_EXTERNAL to get separate regions in the image
For each region, fill in the interior, apply a mask, and extract the region to be processed independently of the other regions.
Remaining steps are executed for each region independently
Run cv2 findContours with RETR_LIST to get all internal and external contours
For each contour found, apply polygon smoothing to reduce effect of pixellation
For each smoothed contour, identify the continuous segments within that contour which have the same curvature sign i.e. segments which are entirely curving right, or curving left (just compute angles and sign changes)
Within each segment, fit an ellipse model with least squares (skimage.measure.EllipseModel)
Perform the Lee algorithm on the original image to compute, for each pixel, its minimum distance to a white pixel
For each model, perform a greedy local neighbourhood search to improve the fit against the original - the fit being the fitted ellipse's maximum distance to a white pixel, taken from the output of the Lee algorithm
Not simple or elegant, but it is highly accurate for the content I am dealing with, confirmed by manual review of a large number of images.
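As one concrete piece of that pipeline, here is a hedged sketch of the ellipse-fitting step with skimage.measure.EllipseModel; the segment points are made up for illustration:
import numpy as np
from skimage.measure import EllipseModel
# hypothetical (x, y) points from one constant-curvature contour segment
segment = np.array([[10, 40], [14, 52], [22, 60], [34, 64], [46, 60]], dtype=float)
model = EllipseModel()
if model.estimate(segment):  # returns False if the fit fails
    xc, yc, a, b, theta = model.params
    print(f"center=({xc:.1f}, {yc:.1f}) axes=({a:.1f}, {b:.1f}) angle={theta:.2f} rad")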

Watershed segmentation on spherical reflecting objects

I am trying to do some image segmentation with watershed to detect all the balls in an image. I have followed a PyImageSearch tutorial, but I am getting very poor results. My guess is that the reflection is the problem. Still, the image is pretty clean and the instances look quite separable.
Am I using the correct approach here? Did I miss something?
I tested cellpose and I get almost perfect results. It's not the same approach of course, and I was hoping to get something with "classical" computer-vision techniques.
Below is the code I have, the original image, and the current result. I have tried to change the parameters, but I am not sure about what I am doing here. I also looked at inRange, but I am afraid the balls are never of the same color.
original image: https://i.stack.imgur.com/7595R.jpg
import numpy as np
from scipy import ndimage
import cv2
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
import imutils
from matplotlib import pyplot as plt
img = cv2.imread('balls.jpg')
gray = 255 - cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # invert; unary minus on uint8 would wrap around
plt.imshow(gray)
# Things I tried...
# gray = cv2.dilate(gray,kernel,iterations = 1)
# hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# h, s, v = cv2.split(hsv)
thresh = cv2.threshold(gray, 250, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
# compute the exact Euclidean distance from every binary
# pixel to the nearest zero pixel, then find peaks in this
# distance map
D = ndimage.distance_transform_edt(thresh)
localMax = peak_local_max(D, indices=False, min_distance=30, labels=thresh)  # indices=False requires an older scikit-image; newer versions return coordinates only
# perform a connected component analysis on the local peaks,
# using 8-connectivity, then apply the Watershed algorithm
markers = ndimage.label(localMax, structure=np.ones((3, 3)))[0]
labels = watershed(-D, markers, mask=thresh)
# draw on mask
for label in np.unique(labels):
    # if the label is zero -> 'background'
    if label == 0:
        continue
    mask = np.zeros(gray.shape, dtype="uint8")
    mask[labels == label] = 255
    # detect contours in the mask and grab the largest one
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = imutils.grab_contours(cnts)
    c = max(cnts, key=cv2.contourArea)
    # draw a circle enclosing the object
    ((x, y), r) = cv2.minEnclosingCircle(c)
    cv2.circle(img, (int(x), int(y)), int(r), (0, 255, 0), 2)
    cv2.putText(img, "#{}".format(label), (int(x) - 10, int(y)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
plt.imshow(img)
The labels : https://i.stack.imgur.com/M6hZb.png
matchTemplate "solution" with opencv/samples/mouse_and_match.py
use whatever you like to find the peaks.
yes, with this approach you gotta pick a template manually.
to fix that, there could be approaches exploiting self-similarity (auto-correlation). exercise to the reader.
you can't pick whole balls because their sizes vary, so that's a huge downside to template matching already, but also a rectangle around a circle contains a significant amount of non-object pixels, which drives down the correlation score wherever that part varies.
picking the reflection works (off of a medium ball) because the reflection shows an environment with nice strong contrast.
notice the one small ball near the top, slightly to the right? that's not doing so well for a bunch of reasons.
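a rough sketch of the idea, in case it helps (filenames, the 0.6 score threshold, and the peak suppression are assumptions; the template is a reflection patch you cropped by hand):
import cv2
img = cv2.imread('balls.jpg')        # image from the question
templ = cv2.imread('template.png')   # hand-cropped reflection patch (assumed)
scores = cv2.matchTemplate(img, templ, cv2.TM_CCOEFF_NORMED)
h, w = templ.shape[:2]
out = img.copy()
while True:
    # take the best remaining peak in the score map
    _, maxv, _, (x, y) = cv2.minMaxLoc(scores)
    if maxv < 0.6:  # score threshold, tune per image
        break
    cv2.rectangle(out, (x, y), (x + w, y + h), (0, 255, 0), 2)
    # suppress the neighbourhood of the accepted peak
    scores[max(0, y - h // 2):y + h // 2 + 1, max(0, x - w // 2):x + w // 2 + 1] = -1
cv2.imwrite('matches.png', out)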

Detect contour inside the circle drawn by cv2.circle

I am developing a project and would like a little help. I am using OpenCV + Python for image processing. I use the Canny method to extract the edges of a can, and I use findContours to draw the contours found by the Canny method...
I draw the outline found in the image, then create a circle using the cv2.circle method, as shown in the image:
Image
The red circle is the circle created by the cv2.circle method.
The green circle is the outline found by the Canny method.
What I need now is to know whether any part of the green outline lies inside the red circle. Is it possible to make this identification?
Script used:
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.blur(gray, (3, 3))
edged = cv2.Canny(gray, canny1, canny1 * 3, kernel)
outline, hierarchy = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
areas = [cv2.contourArea(contour) for contour in outline]
if areas:
    (outline, areas) = zip(*sorted(zip(outline, areas), key=lambda a: a[1]))
    cnt = outline[-1]
    # print(areas)
    (x, y), radius = cv2.minEnclosingCircle(cnt)
    center = (int(x), int(y))
    radius = int(radius)
    circle = cv2.circle(image, center, radius - 10, (0, 0, 255), 2)
    circle = cv2.circle(image, center, radius + 10, (0, 0, 255), 2)
    cv2.drawContours(image, [outline[-1]], -1, (0, 255, 0), 2)
cv2.imshow('Canny Edges After Contouring', edged)
cv2.imshow('Contours', image)
Approach 1
According to OpenCV Documentation,
contours – Detected contours. Each contour is stored as a vector of points.
You can manually calculate the points of the circle (or annulus, in your case)
Now that you have both sets of points, you can compute their intersection
And then later plot the intersection
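A hedged sketch of Approach 1, rasterizing the annulus and intersecting it with the contour pixels (image, center, radius and cnt are assumed to come from the question's script):
import cv2
import numpy as np
# rasterize the red annulus: fill the outer circle, then cut out the inner one
annulus = np.zeros(image.shape[:2], np.uint8)
cv2.circle(annulus, center, radius + 10, 255, -1)
cv2.circle(annulus, center, radius - 10, 0, -1)
# rasterize the green contour
contour_img = np.zeros(image.shape[:2], np.uint8)
cv2.drawContours(contour_img, [cnt], -1, 255, 1)
# the intersection contains exactly the contour pixels inside the annulus
overlap = cv2.bitwise_and(annulus, contour_img)
print('contour pixels inside the red annulus:', cv2.countNonZero(overlap))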
Approach 2
Generate two different images on a fixed background colour other than green and red:
One with the circles and contours on it (your current image, green and red); then remove all red colour from it and replace it with the background colour
One with only the contours (green)
Find the absolute difference between the two images
Et voilà. What remains are indeed your points
Test each (x,y) point on the contour relative to the center of the circle and its radius:
if (x-center_x)**2 + (y-center_y)**2 <= radius**2:
    # the point is inside the circle
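Applied to every contour point, a hedged sketch (cnt, center and radius taken from the question's script; the bounds mirror its radius ± 10 circles):
# test each contour point against the red band between the two drawn circles
inner, outer = radius - 10, radius + 10
center_x, center_y = center
for x, y in cnt.reshape(-1, 2):
    d2 = (x - center_x)**2 + (y - center_y)**2
    if inner**2 <= d2 <= outer**2:
        print(f"contour point ({x}, {y}) lies inside the red circle band")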
