How can I detect circles using OpenCV in Python?

I want to detect circles in a picture using a Haar cascade. I created a cascade XML file.
import cv2
import numpy as np
img = cv2.imread("C://OpenCVcascade//resimler//coins.jpg")
circles_cascade = cv2.CascadeClassifier("C://Cascade//dairetanima.xml")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
circles = circles_cascade.detectMultiScale(gray, 1.1, 1)
if circles is not None:
    circles = np.uint16(np.around(circles))
    for (x, y, w, h) in circles:
        center = (x + w // 2, y + h // 2)
        radius = (w + h) // 4
        cv2.circle(img, center, radius, (255, 0, 0), 2)
cv2.imshow('image', img)
cv2.waitKey()
cv2.destroyAllWindows()
My result:
I already know there are different methods to detect circles, but I am trying to do it with the cascade method, because after this part I will use it for real-time detection.
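A minimal sketch of how the same cascade could be run frame by frame on a webcam stream for real-time detection (untested here; it assumes the cascade XML from the code above and the same, untuned detection parameters):
import cv2

cascade = cv2.CascadeClassifier("C://Cascade//dairetanima.xml")
cap = cv2.VideoCapture(0)  # default camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 1):
        center = (x + w // 2, y + h // 2)
        radius = (w + h) // 4
        cv2.circle(frame, center, radius, (255, 0, 0), 2)
    cv2.imshow('realtime', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()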

Related

Extract already known shape from image

I'm trying to extract this piece
from this image.
I've tried to detect shapes (no luck) and to train a Haar cascade (I don't have negatives), also no luck. The position can vary (not all of them are inserted) and the angle is not the same, so I cannot crop them one by one.
Any suggestions? Thanks in advance.
PS: The original image is here: https://pasteboard.co/JaTSoJF.png (sorry, > 2 MB)
After working on @ganeshtata's answer we got:
import cv2
import numpy as np
img = cv2.imread('cropsmall.png')
height, width = img.shape[:2]
blue_channel = img[:, :, 0]  # Blue channel extraction
res = cv2.fastNlMeansDenoising(blue_channel, None, 3, 7, 21)  # Non-local means denoising
cv2.imshow('denoised', res)
edges = cv2.Canny(res, 11, 11, 3)  # Edge detection
kernel = np.ones((30, 30), np.uint8)
closing = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)  # Morphological closing
im2, contours, hierarchy = cv2.findContours(closing, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)  # Find all contours in the image
for cnt in contours:  # Iterate through all contours
    x, y, w, h = cv2.boundingRect(cnt)
    if h < height / 2:  # Reject contours whose height is less than half the image height
        continue
    y = 0  # Assuming that all shapes start from the top of the image
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imshow('IMG', img)
cv2.imwrite("test.jpg", img)
cv2.waitKey(0)
That gives us
Not bad...
I used the following approach to extract the pattern specified in the question.
Read the image and extract the blue channel from the image.
import cv2
import numpy as np
img = cv2.imread('image.png')
height, width = img.shape[:2]
blue_channel = img[:,:,0]
Blue Channel -
Apply OpenCV's Non-local Means Denoising algorithm on the blue channel image. This ensures that most of the random noise in the image is smoothed.
res = cv2.fastNlMeansDenoising(blue_channel, None, 3, 7, 21)
Denoised image -
Apply Canny edge detection.
edges = cv2.Canny(res, 1, 10, 3)
Edge output -
Apply morphological closing to try to close small gaps/holes in the image.
kernel = np.ones((30, 30),np.uint8)
closing = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
Image after applying morphological closing -
Find all contours in the image using cv2.findContours. After finding all contours, we can determine the bounding box of each contour using cv2.boundingRect.
im2, contours, hierarchy = cv2.findContours(closing, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)  # Find all contours
for cnt in contours:  # Iterate through all contours
    x, y, w, h = cv2.boundingRect(cnt)  # Get contour bounding box
    if h < height / 2:  # Reject contours whose height is less than half the image height
        continue
    y = 0  # Assuming that all shapes start from the top of the image
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
Final result -
The complete code -
import cv2
import numpy as np
img = cv2.imread('image.png')
height, width = img.shape[:2]
blue_channel = img[:,:,0]  # Blue channel extraction
res = cv2.fastNlMeansDenoising(blue_channel, None, 3, 7, 21)  # Non-local means denoising
edges = cv2.Canny(res, 1, 10, 3)  # Edge detection
kernel = np.ones((30, 30), np.uint8)
closing = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)  # Morphological closing
im2, contours, hierarchy = cv2.findContours(closing, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)  # Find all contours in the image
for cnt in contours:  # Iterate through all contours
    x, y, w, h = cv2.boundingRect(cnt)
    if h < height / 2:  # Reject contours whose height is less than half the image height
        continue
    y = 0  # Assuming that all shapes start from the top of the image
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
Note - This approach works for the sample image you posted. It may or may not generalize to other images.
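A side note on compatibility (not part of the original answer): the three-value unpacking of cv2.findContours above matches OpenCV 3.x; OpenCV 4.x returns only (contours, hierarchy). A version-agnostic call takes the last two returned values:
# Works with both OpenCV 3.x (3 return values) and 4.x (2 return values)
contours, hierarchy = cv2.findContours(closing, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2:]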

Detect overlapping noisy circles in image

I am trying to recognize two areas in the following image with Python and OpenCV: the area inside the inner circle, and the area between the outer and inner circles (the border).
I tried different approaches, like:
Detecting circles images using opencv hough circles
Find and draw contours using opencv python
Neither fits very well.
Is this even possible with classical image processing, or do I need a neural network?
Edit: Detecting circles images using opencv hough circles
# import the necessary packages
import numpy as np
import argparse
import cv2
from PIL import Image
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required = True, help = "Path to the image")
args = vars(ap.parse_args())
# load the image, clone it for output, and then convert it to grayscale
image = cv2.imread(args["image"])
output = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# detect circles in the image
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 500)
# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circles = np.round(circles[0, :]).astype("int")
    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circles:
        # draw the circle in the output image, then draw a rectangle
        # corresponding to the center of the circle
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
    # show the output image
    img = Image.fromarray(image)
    if img.height > 1500:
        imS = cv2.resize(np.hstack([image, output]), (round((img.width * 2) / 3), round(img.height / 3)))
    else:
        imS = np.hstack([image, output])
    # Resize image
    cv2.imshow("gray", gray)
    cv2.imshow("output", imS)
    cv2.waitKey(0)
else:
    print("No circle detected")
Test image:
General mistake: when using HoughCircles(), the parameters should be chosen appropriately. I see that you are only using the first 4 parameters in your code. You can check here to get a good idea about those parameters.
From experience: while using HoughCircles, I noticed that if the centers of two circles are the same or very close to each other, HoughCircles can't detect both of them, even if you assign a small value to the min_dist parameter. In your case, the centers of the circles are also the same.
My suggestion: I will attach the appropriate parameters with the code for both circles. I couldn't find the two circles with one parameter list because of the problem explained above. My suggestion is to apply the two parameter sets to the same image, one after the other, collect the circles from each pass, and combine them to get the result (see the sketch at the end of this answer).
For the outer circle, here are the result and the code with the parameters included:
Result:
# import the necessary packages
import numpy as np
import argparse
import cv2
from PIL import Image
# load the image, clone it for output, and then convert it to grayscale
image = cv2.imread('image.jpg')
output = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 15)
rows = gray.shape[0]
# detect circles in the image
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, rows / 8,
                           param1=100, param2=30,
                           minRadius=200, maxRadius=260)
# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circles = np.round(circles[0, :]).astype("int")
    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circles:
        # draw the circle in the output image, then draw a rectangle
        # corresponding to the center of the circle
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
    # show the output image
    img = Image.fromarray(image)
    if img.height > 1500:
        imS = cv2.resize(np.hstack([image, output]), (round((img.width * 2) / 3), round(img.height / 3)))
    else:
        imS = np.hstack([image, output])
    # Resize image
    cv2.imshow("gray", gray)
    cv2.imshow("output", imS)
    cv2.waitKey(0)
else:
    print("No circle detected")
For the inner circle, the parameters:
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, rows / 8,
                           param1=100, param2=30,
                           minRadius=100, maxRadius=200)
Result:
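Putting the two passes together could look roughly like this (a sketch; the radius ranges are the ones given above and may need tuning for other images):
import cv2
import numpy as np

image = cv2.imread('image.jpg')
output = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 15)
rows = gray.shape[0]

radius_ranges = [(200, 260), (100, 200)]  # outer circle, inner circle
found = []
for min_r, max_r in radius_ranges:
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, rows / 8,
                               param1=100, param2=30,
                               minRadius=min_r, maxRadius=max_r)
    if circles is not None:
        found.extend(np.round(circles[0, :]).astype("int"))

for (x, y, r) in found:
    cv2.circle(output, (x, y), r, (0, 255, 0), 4)

cv2.imshow("output", output)
cv2.waitKey(0)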

How to extract only circular ROI portion of the image and show Radius of the circle with a button click in Tkinter window of Python OpenCV GUI

Extract Circular ROI & Show Radius of the Circle in Tkinter Label
I am requesting help from the Python experts in this community. I have searched for my problem all over Stack Exchange as well as the GitHub community, but I didn't find anything helpful.
I have created a Tkinter GUI. In this GUI, I can upload my image from the destination folder. In the Select option of the evaluation section, I have written a script through which I can automatically view my ROI region as the circular part. The GUI is displayed at the bottom of this question.
Help required: I am having trouble creating a script through which:
1. When I click the Upload ROI button, only the selected ROI portion of the image gets saved to the destination folder, i.e. path = 'Data/images/' + name + '_' + method + ext
2. I can view the radius of the circle somewhere on the Tkinter GUI.
def ROI(self, image, method):
    if method == 'ROI':
        image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
        blimage = cv2.medianBlur(image, 15)
        circles = cv2.HoughCircles(blimage, cv2.HOUGH_GRADIENT, 1, 255, param1=100, param2=60, minRadius=0,
                                   maxRadius=0)
        if circles is not None:
            circles = np.uint16(np.around(circles))
            for i in circles[0, :]:
                cv2.circle(image, (i[0], i[1]), i[2], (0, 255, 0), 6)
                cv2.circle(image, (i[0], i[1]), 2, (0, 0, 255), 3)
            cv2.waitKey()
    else:
        print('method is wrong')
    return image
GUI
UPDATE:
I added the variable border to the calculation of x1, y1, x2, y2, so now it crops including the border line. The images show results for the old code, without the border.
If you have only one circle (x, y, r), then you can use it to crop the image:
image = image[y-r:y+r, x-r:x+r]
I tested it on an image with a circle bigger than the image, and I had to use int16 instead of uint16 to get -1 instead of 65535 for 170-171 (y-r). I also had to use min() and max() to get 0 instead of -1.
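To illustrate the dtype issue:
import numpy as np

y = np.array([170], dtype=np.uint16)
r = np.array([171], dtype=np.uint16)
print(y - r)                                    # [65535]  -- uint16 wraps around
print(y.astype(np.int16) - r.astype(np.int16))  # [-1]
print(max(int(y[0]) - int(r[0]), 0))            # 0 -- clamped for use as a slice index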
def ROI(self, image, method):
    if method == 'ROI':
        image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
        blimage = cv2.medianBlur(image, 15)
        circles = cv2.HoughCircles(blimage, cv2.HOUGH_GRADIENT, 1, 255, param1=100, param2=60, minRadius=0,
                                   maxRadius=0)
        if circles is not None:
            #print(circles)
            # need `int` instead of `uint` to correctly calculate `y-r` (to get `-1` instead of `65535`)
            circles = np.int16(np.around(circles))
            for x, y, r in circles[0, :]:
                print('x, y, r:', x, y, r)
                border = 6
                cv2.circle(image, (x, y), r, (0, 255, 0), border)
                cv2.circle(image, (x, y), 2, (0, 0, 255), 3)
                height, width = image.shape
                print('height, width:', height, width)
                # calculate region to crop
                x1 = max(x-r - border//2, 0)       # eventually -(border//2+1)
                x2 = min(x+r + border//2, width)   # eventually +(border//2+1)
                y1 = max(y-r - border//2, 0)       # eventually -(border//2+1)
                y2 = min(y+r + border//2, height)  # eventually +(border//2+1)
                print('x1, x2:', x1, x2)
                print('y1, y2:', y1, y2)
                # crop image
                image = image[y1:y2, x1:x2]
                print('height, width:', image.shape)
    else:
        print('method is wrong')
    return image
For more circles you would have to first calculate the region covering all circles (take the minimal values of x-r, y-r and the maximal values of x+r, y+r over all circles) and then crop the image, as sketched below.
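A minimal sketch of that idea (assuming circles is the int array from HoughCircles and image is the grayscale image, as above):
# Bounding box covering all detected circles, clamped to the image.
xs1 = [x - r for x, y, r in circles[0, :]]
ys1 = [y - r for x, y, r in circles[0, :]]
xs2 = [x + r for x, y, r in circles[0, :]]
ys2 = [y + r for x, y, r in circles[0, :]]
height, width = image.shape
x1, y1 = max(min(xs1), 0), max(min(ys1), 0)
x2, y2 = min(max(xs2), width), min(max(ys2), height)
image = image[y1:y2, x1:x2]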
Later I will try to use an alpha channel to remove the background outside the circle.
Image used for testing (if someone else would like to test the code):
EDIT: I added code which creates a black mask with a white circle to remove the background.
def ROI(self, image, method):
    if method == 'ROI':
        image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
        blimage = cv2.medianBlur(image, 15)
        circles = cv2.HoughCircles(blimage, cv2.HOUGH_GRADIENT, 1, 255, param1=100, param2=60, minRadius=0,
                                   maxRadius=0)
        if circles is not None:
            print(circles)
            circles = np.int16(np.around(circles))  # need int instead of uint to correctly calculate y-r (to get -1 instead of 65535)
            for x, y, r in circles[0, :]:
                print('x, y, r:', x, y, r)
                height, width = image.shape
                print('height, width:', height, width)
                border = 6
                cv2.circle(image, (x, y), r, (0, 255, 0), border)
                cv2.circle(image, (x, y), 2, (0, 0, 255), 3)
                mask = np.zeros(image.shape, np.uint8)      # black background
                cv2.circle(mask, (x, y), r, (255), border)  # white mask for black border
                cv2.circle(mask, (x, y), r, (255), -1)      # white mask for (filled) circle
                #image = cv2.bitwise_and(image, mask)  # image with black background
                image = cv2.bitwise_or(image, ~mask)   # image with white background
                x1 = max(x-r - border//2, 0)       # eventually -(border//2+1)
                x2 = min(x+r + border//2, width)   # eventually +(border//2+1)
                y1 = max(y-r - border//2, 0)       # eventually -(border//2+1)
                y2 = min(y+r + border//2, height)  # eventually +(border//2+1)
                print('x1, x2:', x1, x2)
                print('y1, y2:', y1, y2)
                image = image[y1:y2, x1:x2]
                print('height, width:', image.shape)
    else:
        print('method is wrong')
    return image
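The alpha-channel idea mentioned earlier could be sketched like this (assuming the grayscale image and the filled circle mask from the code above, both still the same, uncropped shape):
# Convert to BGRA and use the circle mask as the alpha channel,
# so everything outside the circle becomes transparent.
# (Do this before cropping, or crop the mask the same way as the image.)
bgra = cv2.cvtColor(image, cv2.COLOR_GRAY2BGRA)
bgra[:, :, 3] = mask
cv2.imwrite('roi_transparent.png', bgra)  # PNG keeps the alpha channel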

findContours() detects unintended internal edges and calculates their area weirdly - OpenCV Python

I'm not sure what is going on here, but when I use the findContours() function with cv2.RETR_EXTERNAL on this image:
it still seems to detect inside contours and calculates their area weirdly, which prevents me from filtering out the unwanted contours.
Any clue why that is?
Here are the original and dilated threshold images:
Here's the code so far:
import cv2
import PIL
import numpy as np
import imutils
imgAddr = "ADisplay2.jpg"
cropX = 20
cropY = 200
cropAngle = 2
CropIndex = (cropX, cropY, cropAngle)
img = cv2.imread(imgAddr)
cv2.imshow("original image", img)
(h, w) = img.shape[:2]
(cX, cY) = (w / 2, h / 2)
# rotate our image by -1.2 degrees
M = cv2.getRotationMatrix2D((cX, cY), -1.2, 1.0)
rotated = cv2.warpAffine(img, M, (w, h))
#cv2.imshow("Rotated by -1.2 Degrees", rotated)
cropedImg = rotated[300:700, 100:1500]
# grab the dimensions of the image and calculate the center of the image
#cv2.imshow("croped img", cropedImg)
grayImg = cv2.cvtColor(cropedImg, cv2.COLOR_BGR2GRAY)
#cv2.imshow("gray scale image", grayImg)
blurredImg = cv2.GaussianBlur(grayImg, (9, 9), 0)
cv2.imshow("Blurred_Img", blurredImg)
(T, threshInvImg) = cv2.threshold(blurredImg, 0, 255,
                                  cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
cv2.imshow("ThresholdInvF.jpg", threshInvImg)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 19))
#opening = cv2.morphologyEx(threshInvImg, cv2.MORPH_OPEN, kernel)
#cv2.imshow("openingImg", opening)
dialeteImg = cv2.morphologyEx(threshInvImg, cv2.MORPH_DILATE, kernel)
cv2.imshow("erodeImg", dialeteImg)
cannyImg = cv2.Canny(dialeteImg, 100, 200)
cv2.imshow("Canny_img", cannyImg)
_, cntsImg, hierarchy = cv2.findContours(cannyImg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 3.x returns (image, contours, hierarchy)
#print("Img cnts: {}".format(cntsImg))
#print("Img hierarchy: {}".format(hierarchy))
txtOffset = (25, 50)
for cntIdx, cnt in enumerate(cntsImg):
    cntArea = cv2.contourArea(cnt)
    print("Area of contour #{} = {}".format(cntIdx, cntArea))
    (x, y, w, h) = cv2.boundingRect(cnt)
    cv2.rectangle(cropedImg, (x, y), (x + w, y + h), (0, 255, 0), 2)
    txtIdxPos = [x, y]
    txtPos = ((txtIdxPos[0] + txtOffset[0]), (txtIdxPos[1] + txtOffset[1]))
    cv2.putText(cropedImg, "#{}".format(cntIdx), txtPos, cv2.FONT_HERSHEY_SIMPLEX, 1.25, (0, 0, 255), 4)
cv2.imshow("drawCntsImg.jpg", cropedImg)
cv2.waitKey(0)
Thanks for helping :D
What you could do is only use contours if they're within a certain size range. For this you could use contourArea(). It seems you already compute this anyhow.
For example:
for cntIdx, cnt in enumerate(cntsImg):
    cntArea = cv2.contourArea(cnt)
    #########################
    # Skip iteration if area is too big or small to filter out non-digits
    if cntArea < 50 or cntArea > 100: continue  # Need to fiddle with these values
    #########################
    print("Area of contour #{} = {}".format(cntIdx, cntArea))
    (x, y, w, h) = cv2.boundingRect(cnt)
    cv2.rectangle(cropedImg, (x, y), (x + w, y + h), (0, 255, 0), 2)
    txtIdxPos = [x, y]
    txtPos = ((txtIdxPos[0] + txtOffset[0]), (txtIdxPos[1] + txtOffset[1]))
    cv2.putText(cropedImg, "#{}".format(cntIdx), txtPos, cv2.FONT_HERSHEY_SIMPLEX, 1.25, (0, 0, 255), 4)
You are already printing out each contour's area, so you can use that to get an idea of what sizes to let through.
If the size of the digits varies between images, it could still be a problem. In that case you could, for example, calculate the average contour area, which should be very close to the typical digit area, and then require each contour's area to be within some margin of the average (see the sketch below).
Note: just remember to choose the minimum area so that a thin digit like a '1' still gets through.
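A rough sketch of the average-area idea (the 0.5/1.5 margins are placeholders that need tuning):
areas = [cv2.contourArea(c) for c in cntsImg]
avg_area = sum(areas) / len(areas) if areas else 0
# keep only contours whose area is reasonably close to the average
digit_cnts = [c for c, a in zip(cntsImg, areas) if 0.5 * avg_area <= a <= 1.5 * avg_area]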
Update:
If you would rather use the aspect ratio, it's easy to change the condition, as you already calculate the height and width.
# If height is smaller than 1.5*w or larger than 2.5*w, then skip
if not 1.5 < h/w < 2.5: continue # Need to fiddle with these values
You could even use w and h to calculate the area; it may well differ from contourArea(). For example:
cntArea = w*h

Helplessly lost with OpenCV and HoughCircles

I'm trying to detect this black circle here. It shouldn't be too difficult, but for some reason I just get 0 circles or approximately 500 circles everywhere, depending on the arguments, with no middle ground. It feels like I have been playing with the arguments for hours with absolutely no success. Is there a problem using HoughCircles with a black-and-white picture? The task seems simple to the human eye, but is it difficult for the computer for some reason?
Here's my code:
import numpy as np
import cv2
image = cv2.imread('temp.png')
output = image.copy()
blurred = cv2.blur(image, (10, 10))
gray = cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.5, 20, 100, 600, 10, 100)
if circles is not None:
    circles = np.round(circles[0, :]).astype("int")
    print(len(circles))
    for (x, y, r) in circles:
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
# show the output image
cv2.imshow("output", np.hstack([output]))
cv2.waitKey(0)
There are a few minor mistakes in your approach.
Here is the code I used from the documentation:
import cv2
import numpy as np
img = cv2.imread('temp.png', 0)
img = cv2.medianBlur(img, 5)
cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
cimg1 = cimg.copy()
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 20, param1=50, param2=30, minRadius=0, maxRadius=0)
circles = np.uint16(np.around(circles))
for i in circles[0, :]:
    # draw the outer circle
    cv2.circle(cimg, (i[0], i[1]), i[2], (0, 255, 0), 2)
    # draw the center of the circle
    cv2.circle(cimg, (i[0], i[1]), 2, (0, 255, 255), 3)
cv2.imshow('detected circles.jpg', cimg)
joint = np.hstack([cimg1, cimg])  # ---Posting the original image along with the image having the detected circle
cv2.imshow('detected circle and output', joint)
cv2.waitKey(0)
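One likely culprit in the original call, worth noting explicitly: in the Python binding of cv2.HoughCircles the fifth positional argument is the optional circles output array, not param1, so values passed positionally after minDist end up shifted onto the wrong parameters. Passing them as keywords avoids the mix-up; a sketch using the values from the question (they still need tuning, and param2 this high will likely find nothing):
# Keyword arguments make sure each value goes to the intended parameter.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=20,
                           param1=100, param2=600, minRadius=10, maxRadius=100)
# param2 is the accumulator threshold: lower it if no circles are found.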
