I am developing a project and would like a little help. I am using OpenCV + Python for image processing: I use the Canny method to extract the edges of a can, and I use findContours to draw the contours found by Canny.
I draw the outline found in the image, then create a circle using the cv2.circle method, as shown in the image:
Image
The red circle is the circle created by the cv2.circle method.
The green circle is the outline found by the Canny method.
What I need now is to know whether any part of the green outline lies inside the red circle. Is it possible to make this identification?
Script used:
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.blur(gray, (3, 3))
edged = cv2.Canny(gray, canny1, canny1 * 3, apertureSize=kernel)  # kernel: Sobel aperture size
outline, hierarchy = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
areas = [cv2.contourArea(c) for c in outline]
if areas:
    # sort the contours by area, ascending
    (outline, areas) = zip(*sorted(zip(outline, areas), key=lambda a: a[1]))
    cnt = outline[-1]  # largest contour
    # print(areas)
    (x, y), radius = cv2.minEnclosingCircle(cnt)
    center = (int(x), int(y))
    radius = int(radius)
    cv2.circle(image, center, radius - 10, (0, 0, 255), 2)
    cv2.circle(image, center, radius + 10, (0, 0, 255), 2)
    cv2.drawContours(image, [outline[-1]], -1, (0, 255, 0), 2)
cv2.imshow('Canny Edges After Contouring', edged)
cv2.imshow('Contours', image)
Approach 1
According to the OpenCV documentation,
contours – Detected contours. Each contour is stored as a vector of points.
You can manually calculate the points of the circle (or annulus, in your case).
Now that you have both sets of points, you can compute their intersection
and then plot the intersection, as in the sketch below.
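Here is a minimal sketch of that idea, assuming cnt, center, and radius from the script above; instead of enumerating the annulus points explicitly, it keeps the contour points whose distance from the center falls inside the annulus:
import numpy as np

# contour points as an (N, 2) array of (x, y) pairs
pts = cnt.reshape(-1, 2).astype(np.float64)
# distance of every contour point from the circle center
d = np.linalg.norm(pts - np.array(center, dtype=np.float64), axis=1)
# keep the points lying inside the annulus between the two red circles
inside = pts[(d >= radius - 10) & (d <= radius + 10)]
print(len(inside), "contour points fall inside the annulus")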
Approach 2
Generate two different images on a fixed background colour other than green and red:
One with the circles and contours on it (your current image, green and red).
Now remove all red colour from that image and replace it with the background colour.
Generate an image with only the contours (green).
Find the absolute difference between both images.
Et voilà. What remains are indeed your points.
Test each (x,y) point on the contour relative to the center of the circle and its radius:
if (x-center_x)**2 + (y-center_y)**2 <= radius**2:
# inside circle
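Applied to every point of the contour from the first script (a sketch, assuming cnt, center, and radius are already computed), that test becomes:
center_x, center_y = center
for (x, y) in cnt.reshape(-1, 2):
    if (x - center_x) ** 2 + (y - center_y) ** 2 <= radius ** 2:
        # this contour point lies inside the red circle; mark it blue
        cv2.circle(image, (int(x), int(y)), 1, (255, 0, 0), -1)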
Related
This is the picture I have, and I want to detect the red ball:
However, I simply cannot get the code to work. I've tried experimenting with different param1 and param2 values, larger dp values, and even rescaling the image.
Any help on this (or even an alternate method for detecting the ball) would be much appreciated.
CODE:
import cv2 as cv
import numpy as np

frame = cv.imread("cricket_ball.png")
# Convert frame to grayscale
gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
# cv.HoughCircles returns a 3-element floating-point vector (x, y, radius) for each circle detected
circles = cv.HoughCircles(gray, cv.HOUGH_GRADIENT, 1, minDist=100, minRadius=3, maxRadius=10)  # radii must be integers; cricket balls in videos are approximately 10 pixels in diameter
print(circles)
# Ensure at least one circle was found
if circles is not None:
    # Convert (x, y, radius) to integers; uint16 avoids overflowing coordinates above 255
    circles = np.uint16(np.around(circles))
    for i in circles[0, :]:
        cv.circle(frame, (i[0], i[1]), i[2], (0, 255, 0), 20)  # Draw the circle outline
cv.imshow("Ball", frame)
cv.waitKey(0)
Here's my attempt. The idea is to find the ball assuming it is (one of) the most saturated objects in the scene. This should cover all bright objects, independent of their color.
I don't use Hough circles because they are a bit difficult to parametrize and often don't scale well to other images. Instead, I just detect blobs on a binary image and compute each blob's circularity, 4πA / P² (where A is the blob's area and P its perimeter), assuming the thing I'm looking for is close to a circle, with circularity close to 1.0.
This is the code:
# imports:
import cv2
import numpy as np
# image path
path = "D://opencvImages//"
fileName = "fv8w3.png"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Deep copy for results:
inputImageCopy = inputImage.copy()
# Convert the image to the HSV color space:
hsvImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2HSV)
# Set the HSV values:
lowRange = np.array([0, 120, 0])
uppRange = np.array([179, 255, 255])
# Create the HSV mask
binaryMask = cv2.inRange(hsvImage, lowRange, uppRange)
Let's check out what kind of HSV mask we get looking only for high Saturation values:
It's all right: the object of interest is there, but the mask is noisy. Let's try some morphology to define those blobs a little better:
# Apply a dilation to clean up the mask:
kernel = np.ones((3, 3), np.uint8)
binaryMask = cv2.morphologyEx(binaryMask, cv2.MORPH_DILATE, kernel, iterations=1)
This is the filtered image:
Now, let me detect contours and compute contour properties to filter the noise. I'll store the blobs of interest in a list called detectedCircles:
# Find the circle blobs on the binary mask:
contours, hierarchy = cv2.findContours(binaryMask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Store the circles here:
detectedCircles = []
# Alright, just look for the outer bounding boxes:
for i, c in enumerate(contours):
    # Get blob area:
    blobArea = cv2.contourArea(c)
    print(blobArea)

    # Get blob perimeter:
    blobPerimeter = cv2.arcLength(c, True)
    print(blobPerimeter)

    # Compute circularity
    blobCircularity = (4 * 3.1416 * blobArea) / (blobPerimeter ** 2)
    print(blobCircularity)

    # Set min circularity:
    minCircularity = 0.8

    # Set min area:
    minArea = 35

    # Approximate the contour to a circle:
    (x, y), radius = cv2.minEnclosingCircle(c)

    # Compute the center and radius:
    center = (int(x), int(y))
    radius = int(radius)

    # Set red color (unfiltered blob)
    color = (0, 0, 255)

    # Process only big, circular blobs:
    if blobCircularity > minCircularity and blobArea > minArea:
        # Set blue color (filtered blob)
        color = (255, 0, 0)
        # Store the center and radius:
        detectedCircles.append([center, radius])

    # Draw the circles:
    cv2.circle(inputImageCopy, center, radius, color, 2)

cv2.imshow("Circles", inputImageCopy)
cv2.waitKey(0)
I've set a circularity and minimum area test to filter the noisy blobs. All the relevant blobs are stored in the detectedCircles list as fitted circles. Let's see the result:
Looks good. The blob of interest is enclosed by a blue circle and the noise by red ones. Now, let's try another color for the ball. I created a version of the image with a blue ball instead of a red one; this is the result:
I already have code that can detect the brightest point in an image (just gaussian blurring + finding the brightest pixel). I am working with photographs of sunsets, and right now can very easily get results like this:
My issue is that the radius of the circle is tied to how much Gaussian blur I use; I would like the radius to reflect the size of the sun in the photo (I have a dataset of ~500 sunset photos I am trying to process).
Here is an image with no circle:
I don't even know where to start on this; my traditional computer vision knowledge is lacking. If I don't get an answer, I might try something like calculating the distance from the center of the circle to the nearest edge (using Canny edge detection), as sketched below. If there is a better way, please let me know. Thank you for reading.
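For reference, that nearest-edge idea could be sketched roughly like this (my assumption: bx, by is the brightest pixel already found, blurred is the blurred grayscale image, and the Canny thresholds are guesses to tune):
import numpy as np

edges = cv2.Canny(blurred, 100, 200)   # assumed thresholds
ys, xs = np.nonzero(edges)             # coordinates of all edge pixels
# radius = distance from the brightest point to the nearest edge pixel
radius = np.sqrt((xs - bx) ** 2 + (ys - by) ** 2).min()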
Here is one way to get a representative circle in Python/OpenCV. It finds the minimum enclosing circle.
Read the input
Crop out the white on the right side
Convert to gray
Apply median filtering
Do Canny edge detection
Get the coordinates of all the white pixels (canny edges)
Compute minimum enclosing circle to get center and radius
Draw a circle with that center and radius on a copy of the input
Save the result
Input:
import cv2
import numpy as np
# read input image
img = cv2.imread('sunset.jpg')
hh, ww = img.shape[:2]
# shave off white region on right side
img = img[0:hh, 0:ww-2]
# convert to gray
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# median filter
median = cv2.medianBlur(gray, 3)
# do canny edge detection
canny = cv2.Canny(median, 100, 200)
# get canny points
# numpy points are (y,x)
points = np.argwhere(canny>0)
# get min enclosing circle
center, radius = cv2.minEnclosingCircle(points)
print('center:', center, 'radius:', radius)
# draw circle on copy of input
result = img.copy()
x = int(center[1])
y = int(center[0])
rad = int(radius)
cv2.circle(result, (x,y), rad, (255,255,255), 1)
# write results
cv2.imwrite("sunset_canny.jpg", canny)
cv2.imwrite("sunset_circle.jpg", result)
# show results
cv2.imshow("median", median)
cv2.imshow("canny", canny)
cv2.imshow("result", result)
cv2.waitKey(0)
Canny Edges:
Resulting Circle:
center: (265.5, 504.5) radius: 137.57373046875
Alternate
Fit ellipse to Canny points and then get the average of the two ellipse radii for the radius of the circle. Note a slight change in the Canny arguments to get only the top part of the sunset.
import cv2
import numpy as np
# read input image
img = cv2.imread('sunset.jpg')
hh, ww = img.shape[:2]
# shave off white region on right side
img = img[0:hh, 0:ww-2]
# convert to gray
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# median filter
median = cv2.medianBlur(gray, 3)
# do canny edge detection
canny = cv2.Canny(median, 100, 250)
# transpose canny image to compensate for following numpy points as y,x
canny_t = cv2.transpose(canny)
# get canny points
# numpy points are (y,x)
points = np.argwhere(canny_t>0)
# fit ellipse and get ellipse center, minor and major diameters and angle in degree
ellipse = cv2.fitEllipse(points)
(x,y), (d1,d2), angle = ellipse
print('center: (', x,y, ')', 'diameters: (', d1, d2, ')')
# draw ellipse
result = img.copy()
cv2.ellipse(result, (int(x),int(y)), (int(d1/2),int(d2/2)), angle, 0, 360, (0,0,0), 1)
# draw circle on copy of input of radius = half average of diameters = (d1+d2)/4
rad = int((d1+d2)/4)
xc = int(x)
yc = int(y)
print('center: (', xc,yc, ')', 'radius:', rad)
cv2.circle(result, (xc,yc), rad, (0,255,0), 1)
# write results
cv2.imwrite("sunset_canny_ellipse.jpg", canny)
cv2.imwrite("sunset_ellipse_circle.jpg", result)
# show results
cv2.imshow("median", median)
cv2.imshow("canny", canny)
cv2.imshow("result", result)
cv2.waitKey(0)
Canny Edge Image:
Ellipse and Circle drawn on Input:
Use Canny edge detection first. Then try either a Hough circle or Hough ellipse transform on the edge image. These are brute-force methods, so they will be slow, but they are robust to non-circular or non-elliptical contours. You can easily filter the results so that the detected circle has its center near the brightest point. Also, knowing the estimated size of the sun will help with computation speed.
You can also look into using cv2.findContours and cv2.approxPolyDP to extract continuous contours from your images. You could filter by perimeter length and shape, and then run a least-squares fit or a Hough fit; a rough sketch of that filtering follows.
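A sketch of the contour-filtering idea (assuming canny is the edge image from the answer above; the perimeter and vertex-count thresholds are assumptions to tune):
contours, hierarchy = cv2.findContours(canny, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    perim = cv2.arcLength(c, True)
    if perim < 100:                            # assumed minimum perimeter
        continue
    approx = cv2.approxPolyDP(c, 0.01 * perim, True)
    if len(approx) > 8:                        # many vertices suggest a smooth curve
        (x, y), r = cv2.minEnclosingCircle(c)  # candidate circle for this contour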
EDIT
It may be worth trying an intensity filter before the Canny edge detection; I suspect it will clean up the edge image considerably. For example:
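A minimal version of such a filter (assuming gray is the grayscale sunset image; the cutoff of 200 is an assumption to tune per image):
# zero out everything below the brightness cutoff, then detect edges
_, bright = cv2.threshold(gray, 200, 255, cv2.THRESH_TOZERO)
edges = cv2.Canny(bright, 100, 200)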
I have been trying to calculate the distance between two lines in an image in Python. For example, in the image given below, I want to find the perpendicular distance between the two ends of the yellow block. So far I have only been able to derive the distance between two pixels.
The code I managed to write finds the distance between a red and a blue pixel. I figured I could extend this to measure the distance between the two points/lines in this image, but no luck yet.
import numpy as np
from PIL import Image
import math
# Load image and ensure RGB - just in case palettised
im = Image.open("2points.png").convert("RGB")
# Make numpy array from image
npimage = np.array(im)
# Describe what a single red pixel looks like
red = np.array([255,0,0],dtype=np.uint8)
# Find [y, x] coordinates of all red pixels (np.where returns row indices first)
reds = np.where(np.all((npimage==red),axis=-1))
print(reds)
# Describe what a single blue pixel looks like
blue=np.array([0,0,255],dtype=np.uint8)
# Find [y, x] coordinates of all blue pixels
blues=np.where(np.all((npimage==blue),axis=-1))
print(blues)
dy2 = (blues[0][0]-reds[0][0])**2 # squared row (y) difference, e.g. (200-10)^2
dx2 = (blues[1][0]-reds[1][0])**2 # squared column (x) difference, e.g. (300-20)^2
distance = math.sqrt(dx2 + dy2)
print(distance)
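One possible extension from single pixels to whole point sets (a sketch, assuming SciPy is available) is the minimum pairwise distance between all red and all blue pixels:
from scipy.spatial.distance import cdist

# stack the (row, col) index arrays into (N, 2) coordinate arrays
red_pts = np.column_stack(reds)
blue_pts = np.column_stack(blues)
# smallest distance between any red pixel and any blue pixel
min_dist = cdist(red_pts, blue_pts).min()
print(min_dist)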
While preparing this answer, I realized that my hint regarding cv2.boxPoints was misleading. Of course, I had cv2.boundingRect in mind; sorry for that!
Nevertheless, here's the full step-by-step approach:
Use cv2.inRange to mask all yellow pixels. Attention: your image has JPG artifacts, so you get a lot of noise in the mask, cf. the output:
Use cv2.findContours to find all contours in the mask. That'll be over 50, due to the many tiny artifacts.
Use Python's max function on the (list of) found contours, with cv2.contourArea as the key, to get the largest contour.
Finally, use cv2.boundingRect to get the bounding rectangle of the contour. That's a tuple (x, y, width, height). Just use the last two elements, and you have your desired information.
That'd be my code:
import cv2
# Read image with OpenCV
img = cv2.imread('path/to/your/image.ext')
# Mask yellow color (0, 255, 255) in image; Attention: OpenCV uses BGR ordering
yellow_mask = cv2.inRange(img, (0, 255, 255), (0, 255, 255))
# Find contours in yellow mask w.r.t the OpenCV version
cnts = cv2.findContours(yellow_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
# Get the largest contour
cnt = max(cnts, key=cv2.contourArea)
# Get width and height from bounding rectangle of largest contour
(x, y, w, h) = cv2.boundingRect(cnt)
print('Width:', w, '| Height:', h)
The output
Width: 518 | Height: 320
seems reasonable.
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.8.5
OpenCV: 4.5.1
----------------------------------------
Problem Statement & Background Info:
EDIT: Constraints: The red coloring on the flange changes over time, so I'm not trying to use color recognition to identify my object at this moment, unless it can be made robust. Additionally, external illumination may be a factor, since this will be in an outdoor area in the future.
I have an RGB-depth camera, and with it I'm able to capture this scene, where each pixel (x, y) has a depth value.
Applying a gradient-magnitude filter to the depth map associated with my image, I'm able to get the following edge map.
Pixels were given the value 0 where the gradient magnitude was nonzero; the value 255 (rendered black here) marks magnitudes of 0 (homogeneous depth, i.e. a flat surface).
From this edge map I dilated the edges so picking up the contours would be easier.
Then I found the contours in the image and tried to plot just the 5 biggest contours.
PROBLEM
Is there a way to reliably find the contours associated with my objects (the red box and metal fixture) and then find their geometric centroid? I keep running into the issue that I can find contours in the image, but I have no way of selectively screening for the contours that are my objects and not noise.
I have provided the image I used for the image processing, but for some reason, OpenCV saves the image as a black image, and when you read it in using...
gray = cv2.imread('GRAYTEST.jpeg', cv2.IMREAD_GRAYSCALE)
it appears blue-ish and not a binary white/black image as I show. So sorry about that.
Here is the image:
Sorry, I don't know why it saved as just a black image, but if you read it into OpenCV it should show up with the same lines as the "magnitude of gradients" plot.
MY CODE
gray = cv2.imread('GRAYTEST.jpeg', cv2.IMREAD_GRAYSCALE)
plt.imshow(gray)
plt.title('gray start image')
plt.show()
blurred = cv2.bilateralFilter(gray, 8, 25, 25) # blur image while preserving edges
kernel = np.ones((3, 3), np.uint8) # define a kernel (block) to apply filters to
dilated = cv2.dilate(blurred, kernel, iterations=1)
plt.title('dilated')
plt.imshow(dilated)
plt.show()
# Just performs a Canny edge detection on an image
edges_empty = self.Commons.CannyE_Auto(dilated) # Canny edge image for some sigma
# Makes an empty image using the same dimensions as the given image
empty2 = self.Commons.make_empty(gray)
contours, hierarchy = cv2.findContours(edges_empty, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) # OpenCV 4.x returns two values
cnts = sorted(contours, key=cv2.contourArea, reverse=True)[:5] # get largest five contour area
cv2.drawContours(empty2, cnts, -1, (255, 0, 0), thickness=1)
plt.title('contours')
plt.imshow(empty2)
plt.show()
Instead of performing the blurring, dilation, and Canny edge detection on my already-thresholded image, I just performed the contour detection on my original image.
Then I was able to find a decent contour for the outline of my image by modifying the findContours call.
contours, hierarchy = cv2.findContours(gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
Replacing cv2.RETR_TREE with cv2.RETR_EXTERNAL, I was able to get only the contours associated with an object's outline, rather than contours within the object. Switching to cv2.CHAIN_APPROX_NONE didn't show any noticeable improvement, but it may provide better contours for more complex geometries.
for c in cnts:
    # compute the center of the contour
    M = cv2.moments(c)
    cX = int(M["m10"] / M["m00"])
    cY = int(M["m01"] / M["m00"])

    # draw the contour and center of the shape on the image
    cv2.drawContours(empty2, [c], -1, (255, 0, 0), thickness=1)
    perimeter = np.around(cv2.arcLength(c, True), decimals=3)
    area = np.around(cv2.contourArea(c), decimals=3)
    cv2.circle(empty2, (cX, cY), 7, (255, 255, 255), -1)
    cv2.putText(empty2, "center", (cX - 20, cY - 20),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2)
    cv2.putText(empty2, "P:{}".format(perimeter), (cX - 50, cY - 50),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2)
    cv2.putText(empty2, "A:{}".format(area), (cX - 100, cY - 100),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2)
Using the above code I was able to label the centroid of each contour, as well as information about each contour's perimeter and area.
However, I was unable to devise a test that would select which contour was my desired one. I have an idea: capture my object in a more ideal setting and find its centroid, perimeter, and associated area. This way, when I find a new contour, I can compare how close it is to my known values, as sketched below.
I think this method could work to remove contours that are too large or too small.
If anyone knows of a better solution that would be fantastic!
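A sketch of that screening test (ref_contour and ref_area are assumed to have been measured from the object in an ideal setting, and all tolerances are guesses to tune; cv2.matchShapes compares two contours via their Hu moments, with 0.0 meaning identical shape):
min_area = 0.5 * ref_area   # assumed lower bound
max_area = 2.0 * ref_area   # assumed upper bound

candidates = []
for c in contours:
    area = cv2.contourArea(c)
    if not (min_area < area < max_area):
        continue  # too large or too small to be the object
    # Hu-moment shape distance to the reference contour
    score = cv2.matchShapes(ref_contour, c, cv2.CONTOURS_MATCH_I1, 0.0)
    if score < 0.1:          # assumed similarity threshold
        candidates.append(c)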
What I'm doing: I have a robotic arm and I want to find x,y coordinates for objects on a piece of paper.
I am able to find a contour of a sheet of paper and get its dimensions (h,w). I want the coordinates of my upper left corner so when I place objects onto my piece of paper I can get image coordinates relative to that point. From there I'll convert those pixel coordinates to cm and I'll be able to return x,y coordinates to my robotic arm.
Problem: I find the center of my contour, and I thought the upper-left corner would then be at
(center x coordinate - width/2, center y coordinate - height/2)
Picture of the contour box I'm getting.
Picture of the contour with my box that should be around the upper-left corner of my contour.
However, I get a coordinate outside the bounds of my piece of paper. Is there an easier way to find my upper-left coordinates?
code
class Boundary(object):
    def __init__(self, image):
        self.frame = image
        self.DefineBounds()

    def DefineBounds(self):
        # convert the image to grayscale, blur it, and detect edges
        # other options are four-point detection, white color detection to search for the board?
        gray = cv2.cvtColor(self.frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (5, 5), 0)
        edged = cv2.Canny(gray, 35, 125)

        # find the contours in the edged image and keep the largest one;
        # we'll assume that this is our piece of paper in the image
        # (cnts, _) = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
        th, contours, hierarchy = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
        c = max(contours, key=cv2.contourArea)

        # compute the bounding box of the paper region and return it
        cv2.drawContours(self.frame, c, -1, (0, 255, 0), 3)
        cv2.imshow("B and W", edged)
        cv2.imshow("capture", self.frame)
        cv2.waitKey(0)

        # minAreaRect returns (center (x, y), (width, height), angle of rotation)
        # width = approx 338 (x-direction)
        # height = 288.6 (y-direction)
        self.CenterBoundBox = cv2.minAreaRect(c)[0]
        print("Center location of bounding box is {}".format(self.CenterBoundBox))
        CxBBox = cv2.minAreaRect(c)[0][1]
        CyBBox = cv2.minAreaRect(c)[0][0]

        # prints picture resolution
        self.OGImageHeight, self.OGImageWidth = self.frame.shape[:2]
        # print("OG width {} and height {}".format(self.OGImageWidth, self.OGImageHeight))
        print(cv2.minAreaRect(c))
        BboxWidth = cv2.minAreaRect(c)[1][1]
        BboxHeight = cv2.minAreaRect(c)[1][0]
        self.Px2CmWidth = BboxWidth / 21.5 # 1 cm = x many pixels
        self.Px2CmHeight = BboxHeight / 18 # 1 cm = x many pixels
        print("Bbox dimensions {} x {}".format(BboxHeight, BboxWidth))
        print("Conversion values Px2Cm width {}, Px2Cm height {}".format(self.Px2CmWidth, self.Px2CmHeight))

        self.TopLeftCoords = (abs(CxBBox - BboxWidth / 2), abs(CyBBox - BboxHeight / 2))
        x = int(round(self.TopLeftCoords[0]))
        y = int(round(self.TopLeftCoords[1]))
        print("X AND Y COORDINATES")
        print(x)
        print(y)
        cv2.rectangle(self.frame, (x, y), (x + 10, y + 10), (0, 255, 0), 3)
        print(self.TopLeftCoords)
        cv2.imshow("BOX", self.frame)
        cv2.waitKey(0)
Finds a rotated rectangle of the minimum area enclosing the input 2D point set.
From: OpenCV docs
So the reason for your problem is obvious: your contour has a slight slant, so the minimum rectangle which encloses the whole contour will be out of bounds on the lower side.
Since contours just holds a vector of points (talking about the C++ interface here), it should be easy to find the upper-left corner by searching the largest contour for the point with the lowest x and y values (note that in image coordinates the origin is at the top-left), as sketched below.
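A minimal Python sketch of that search (my addition, assuming c is the largest contour and numpy is imported as np):
pts = c.reshape(-1, 2)                        # contour as an (N, 2) array of (x, y) points
top_left = pts[np.argmin(pts.sum(axis=1))]    # smallest x + y sum -> upper-left corner
print("Upper-left corner:", top_left)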