Radius Determination for 2D data cloud - python

I have a 2D dataset and converted it to a point cloud using cv2.threshold. For this point cloud I want to determine the radius of the central circle. The approach with Hough Circles does not seem to work.
An example file from my dataset is here and my code for determining the point cloud is:
import numpy as np
import cv2
import copy
import matplotlib.pyplot as plt
import sys
image = cv2.imread("600.png")
orig_image = np.copy(image)
output = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
#gray = cv2.medianBlur(gray, 1)
gray = cv2.GaussianBlur(gray, (1,1), 0)  # note: a (1,1) kernel leaves the image effectively unchanged
# apply binary thresholding
ret, thresh = cv2.threshold(gray, 145, 255, cv2.THRESH_BINARY)
cv2.imshow('image', thresh)
cv2.waitKey(0)
cv2.imwrite('image_thres1.png', thresh)
cv2.destroyAllWindows()
contours, hierarchy = cv2.findContours(image=thresh, mode=cv2.RETR_TREE, method=cv2.CHAIN_APPROX_NONE)
image_copy = image.copy()
cv2.drawContours(image=image_copy, contours=contours, contourIdx=-1, color=(0, 255, 0), thickness=2, lineType=cv2.LINE_AA)
cv2.imshow('None approximation', image_copy)
cv2.waitKey(0)
cv2.imwrite('contours_none_image1.png', image_copy)
cv2.destroyAllWindows()
For example, my point cloud looks like this.
I then tried to estimate the radius by specifying what percentage of the data points should lie inside a circle, but this is not useful, since small changes in that percentage change the radius drastically.
I can see the central circle by eye without any problem, but I can't find a way to determine its radius, or even just an enclosing circle.
Is there an elegant solution or algorithm for my problem?

After a Gaussian low-pass filter and binarization:
Note that your circle is not really a circle.
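A more stable estimate than the percentage criterion mentioned above is to turn the thresholded point cloud into a solid blob and measure that blob. The following is only a minimal sketch, not the asker's method; the closing-kernel size is an assumption, while the file name and threshold value are taken from the code above:
import cv2
import numpy as np
# Close the point cloud into a solid blob, take its largest contour,
# and report two radius estimates (the 15x15 kernel size is an assumption).
gray = cv2.imread("600.png", cv2.IMREAD_GRAYSCALE)
ret, thresh = cv2.threshold(gray, 145, 255, cv2.THRESH_BINARY)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
closed = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
contours, hierarchy = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)
(cx, cy), r_enclosing = cv2.minEnclosingCircle(largest)
r_equivalent = np.sqrt(cv2.contourArea(largest) / np.pi)  # radius of a circle with the same area
print("center:", (cx, cy), "enclosing radius:", r_enclosing, "equivalent radius:", r_equivalent)
Because the blob is not a perfect circle, the enclosing radius and the equivalent-area radius will differ slightly; how far apart they are is a useful sanity check.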

Related

Find circle objects (stamps) in document image Python OpenCV

I wrote some simple code to search for circles in documents (since seals have a rounded shape).
But because of the poor image quality the stamp outline is fuzzy, and OpenCV cannot always detect it. I edited the picture in Photoshop, enhanced the dark colors, saved it, and sent it for processing. That helped: OpenCV then identified the circle of the low-quality stamp (high-quality documents do not have this problem). My code:
import numpy as np
import cv2
img = cv2.imread(r"C:\buh\doc.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# I tried experimenting with blur, but OpenCV doesn't see circles in this case
# blurred = cv2.bilateralFilter(gray.copy(), 15, 15, 15 )
# imS = cv2.resize(blurred, (960, 540))
# cv2.imshow('img', imS)
# cv2.waitKey(0)
minDist = 100
param1 = 30 #500
param2 = 100 #200 #smaller value-> more false circles
minRadius = 90
maxRadius = 200 #10
# docstring of HoughCircles: HoughCircles(image, method, dp, minDist[, circles[, param1[, param2[, minRadius[, maxRadius]]]]]) -> circles
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, minDist, param1=param1, param2=param2, minRadius=minRadius, maxRadius=maxRadius)
if circles is not None:
    circles = np.uint16(np.around(circles))
    for i in circles[0, :]:
        cv2.circle(img, (i[0], i[1]), i[2], (0, 255, 0), 2)
# Show result for testing:
imS = cv2.resize(img, (960, 540))
cv2.imshow('img', imS)
cv2.waitKey(0)
The seals in the documents are circles as in the photo:
Unfortunately, I cannot add a photo of the document where the original seals are located, since this is private information...
So, I need to enhance the shades of black in the photo before trying to look for circles. How can I do this? I would also welcome other suggestions for improving the contours of the seals (stamps), if someone has already dealt with this.
Thank you.
Example:
Here's a simple approach:
Obtain binary image. Load image, convert to grayscale, Gaussian blur, then Otsu's threshold.
Merge small contours into a single large contour. We dilate using cv2.dilate to merge circles into a single contour.
Find external contours. Finally, we find external contours with the cv2.RETR_EXTERNAL flag and draw them with cv2.drawContours().
Visualization of the image pipeline
Input image
Threshold for binary image
Dilate
Detected contours in green
Code
import cv2
import numpy as np
# Load image, grayscale, Gaussian blur, Otsus threshold, dilate
image = cv2.imread('3.PNG')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (3,3), 0)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3,3))
dilate = cv2.dilate(thresh, kernel, iterations=1)
# Find contours
cnts = cv2.findContours(dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(image, [c], -1, (36,255,12), 3)
cv2.imshow('image', image)
cv2.imshow('dilate', dilate)
cv2.imshow('thresh', thresh)
cv2.waitKey()
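The question above also asked how to strengthen the dark tones programmatically instead of editing the scan in Photoshop. One option, not part of the pipeline above, is local contrast enhancement with CLAHE before running HoughCircles; in this sketch the file path comes from the question, while the clip limit and tile size are assumptions to tune:
import cv2
# Enhance local contrast so that faint stamp outlines become stronger.
img = cv2.imread(r"C:\buh\doc.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))  # values to tune per document
enhanced = clahe.apply(gray)
# The enhanced image can then replace 'gray' in the HoughCircles call from the question.
circles = cv2.HoughCircles(enhanced, cv2.HOUGH_GRADIENT, 1, 100,
                           param1=30, param2=100, minRadius=90, maxRadius=200)
print("circles found:", 0 if circles is None else circles.shape[1])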

Find area with content and get its bounding rect

I'm using OpenCV 4 (Python 3) to find a specific area in a black & white image.
This area is not a 100% filled shape. It may have some gaps between the white lines.
This is the base image from where I start processing:
This is the rectangle I expect (made with Photoshop):
Results I got with Hough transform lines (not accurate):
So basically, I start from the first image and I expect to find what you see in the second one.
Any idea of how to get the rectangle of the second image?
I'd like to present an approach which might be computationally less expensive than the solution in fmw42's answer, using only NumPy's nonzero function. Basically, all non-zero indices for both axes are found, and then the minima and maxima are obtained. Since we have binary images here, this approach works pretty well.
Let's have a look at the following code:
import cv2
import numpy as np
# Read image as grayscale; threshold to get rid of artifacts
_, img = cv2.threshold(cv2.imread('images/LXSsV.png', cv2.IMREAD_GRAYSCALE), 0, 255, cv2.THRESH_BINARY)
# Get indices of all non-zero elements
nz = np.nonzero(img)
# Find minimum and maximum x and y indices
y_min = np.min(nz[0])
y_max = np.max(nz[0])
x_min = np.min(nz[1])
x_max = np.max(nz[1])
# Create some output
output = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
cv2.rectangle(output, (x_min, y_min), (x_max, y_max), (0, 0, 255), 2)
# Show results
cv2.imshow('img', img)
cv2.imshow('output', output)
cv2.waitKey(0)
cv2.destroyAllWindows()
I borrowed the cropped image from fmw42's answer as input, and my output should be the same (or very similar):
Hope that (also) helps!
In Python/OpenCV, you can use morphology to connect all the white parts of your image and then get the outer contour. Note that I have modified your image to remove the parts at the top and bottom of your screen snap.
import cv2
import numpy as np
# read image
img = cv2.imread('blackbox.png')
# convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# threshold
_,thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY)
# apply close to connect the white areas
kernel = np.ones((75,75), np.uint8)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# get contours (presumably just one around the outside)
result = img.copy()
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
for cntr in contours:
    x,y,w,h = cv2.boundingRect(cntr)
    cv2.rectangle(result, (x, y), (x+w, y+h), (0, 0, 255), 2)
# show thresh and result
cv2.imshow("thresh", thresh)
cv2.imshow("Bounding Box", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
# save resulting images
cv2.imwrite('blackbox_thresh.png',thresh)
cv2.imwrite('blackbox_result.png',result)
Input:
Image after morphology:
Result:
Here's a slight modification to @fmw42's answer. The idea of connecting the desired regions into a single contour is very similar; however, you can find the bounding rectangle directly since there's only one object. Using the same cropped input image, here's the result.
We can optionally extract the ROI too
import cv2
# Grayscale, threshold, and dilate
image = cv2.imread('3.png')
original = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# Connect into a single contour and find rect
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
dilate = cv2.dilate(thresh, kernel, iterations=1)
x,y,w,h = cv2.boundingRect(dilate)
ROI = original[y:y+h,x:x+w]
cv2.rectangle(image, (x, y), (x+w, y+h), (36, 255, 12), 2)
cv2.imshow('image', image)
cv2.imshow('ROI', ROI)
cv2.waitKey()

How to detect blurry blobs?

I would like to detect all the bright spots in this image (https://i.imgur.com/UnTWWHz.png)
The code I've tried uses thresholding, but it only detects the very bright spots, as you can see in the image below.
But some of the spots are out of focus, and I need to detect those as well.
Could you suggest a method? The picture below shows the blurred spots that I'd like to detect, marked with yellow circles.
I tried the following code:
import os
import cv2
import numpy as np
path="C:/Slides/Fluoroscent/E_03_G_O_subpics"
imgname="sub_2_4.png"
image = cv2.imread(os.path.join(path,imgname))
# constants
BINARY_THRESHOLD = 10
CONNECTIVITY = 4
DRAW_CIRCLE_RADIUS = 18
thr=50
# convert to gray
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# threshold the black/ non-black areas
_, thresh = cv2.threshold(gray_image, BINARY_THRESHOLD, thr, cv2.THRESH_BINARY)
# find connected components
components = cv2.connectedComponentsWithStats(thresh, CONNECTIVITY, cv2.CV_32S)
# draw circles around center of components
#see connectedComponentsWithStats function for attributes of components variable
centers = components[3]
for center in centers:
    cv2.circle(image, (int(center[0]), int(center[1])), DRAW_CIRCLE_RADIUS, (0,0,255), thickness=1)
cv2.imwrite(os.path.join(path,"result_thresh_"+str(thr)+".png"), image)
cv2.imshow("result", image)
cv2.waitKey(0)
As mentioned in the comments, you will get better results by changing the threshold values. I changed them to 20 and 255 respectively and added erosion to get rid of some noise. You can play around with morphological transformations to get the exact desired result. Read more here.
Code:
import cv2
import numpy as np
kernel = np.ones((5,5),np.uint8)
CONNECTIVITY = 4
DRAW_CIRCLE_RADIUS = 18
img = cv2.imread('blobs.png')
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray_img, 20, 255, cv2.THRESH_BINARY)
erosion = cv2.erode(thresh,kernel,iterations = 1)
components = cv2.connectedComponentsWithStats(erosion, CONNECTIVITY, cv2.CV_32S)
centers = components[3]
for center in centers:
    cv2.circle(img, (int(center[0]), int(center[1])), DRAW_CIRCLE_RADIUS, (0,0,255), thickness=1)
cv2.imshow('Original', img)
cv2.imshow('Thresh', thresh)
cv2.imshow('Erosion', erosion)
cv2.waitKey(0)
Results:
Threshold
Erosion
Original with circles

Contours from an image appear very sloppy with cv2 findContours. How to improve?

I'm trying to find the contour of an image using cv2. There are many related questions, but the answers always appear to be very specific and not applicable to my case.
I have a black and white image that I first convert to grayscale:
thresh = cv2.cvtColor(thresh, cv2.COLOR_RGB2GRAY)
plt.imshow(thresh)
Next, I try to find the contours.
image, contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
and then I visualize it by plotting it on a black background.
blank_image = np.zeros((thresh.shape[0],thresh.shape[1],3), np.uint8)
img = cv2.drawContours(blank_image, contours, 0, (255,255,255), 3)
plt.imshow(img)
The drawn contour follows the actual outline, i.e. it surrounds the whole thing. How do I get something like this (very bad Paint impression) instead:
You can use Canny edge detection to do this:
import cv2
frame = cv2.imread("iCyrOT3.png") # read a frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) # turn it gray
edges = cv2.Canny(gray, 100, 200) # get canny edges
cv2.imshow('Test', edges) # display the result
cv2.waitKey(0)
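If actual contour points are needed rather than just an edge image (the question started from cv2.findContours), the Canny edge map can itself be passed to findContours. Below is a minimal sketch along the lines of the snippet above; the retrieval mode and drawing parameters are assumptions:
import cv2
import numpy as np
frame = cv2.imread("iCyrOT3.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)
# Extract contours from the edge map and draw them on a black background.
contours, hierarchy = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
blank_image = np.zeros_like(frame)
cv2.drawContours(blank_image, contours, -1, (255, 255, 255), 1)
cv2.imshow('edge contours', blank_image)
cv2.waitKey(0)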

Detecting Overlapping Circles in OpenCV

I'm using the OpenCV library for Python to detect the circles in an image. As a test case, I'm using the following image:
bottom of can:
I've written the following code, which should display the image before detection, then display the image with the detected circles added:
import cv2
import numpy as np
image = cv2.imread('can.png')
image_rgb = image.copy()
image_copy = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
grayscaled_image = cv2.cvtColor(image_copy, cv2.COLOR_GRAY2BGR)
cv2.imshow("confirm", grayscaled_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
circles = cv2.HoughCircles(image_copy, cv2.HOUGH_GRADIENT, 1.3, 20, param1=60, param2=33, minRadius=10,maxRadius=28)
if circles is not None:
    print("FOUND CIRCLES")
    circles = np.round(circles[0, :]).astype("int")
    print(circles)
    for (x, y, r) in circles:
        cv2.circle(image, (x, y), r, (255, 0, 0), 4)
        cv2.rectangle(image, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
cv2.imshow("Test", image + image_rgb)
cv2.waitKey(0)
cv2.destroyAllWindows()
I get this (resultant image):
I feel that my problem lies in the usage of the HoughCircles() function. Its usage is:
cv2.HoughCircles(image, method, dp, minDist[, circles[, param1[, param2[, minRadius[, maxRadius]]]]])
where minDist is a value greater than 0 that requires detected circles to be a certain distance from one another. With this requirement, it would be impossible for me to properly detect all of the circles on the bottom of the can, as the center of each circle is in the same place. Would contours be a solution? How can I convert contours to circles so that I may use the coordinates of their center points? What should I do to best detect the circle objects for each ring in the bottom of the can?
Not all, but a majority of the circles can be detected by adaptively thresholding the image, finding the contours, and then fitting a minimum enclosing circle to contours having an area greater than a threshold.
import cv2
import numpy as np
block_size,constant_c ,min_cnt_area = 9,1,400
img = cv2.imread('viMmP.png')
img_gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(img_gray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV,block_size,constant_c)
thresh_copy = thresh.copy()
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    if cv2.contourArea(cnt)>min_cnt_area:
        (x,y),radius = cv2.minEnclosingCircle(cnt)
        center = (int(x),int(y))
        radius = int(radius)
        cv2.circle(img,center,radius,(255,0,0),1)
cv2.imshow("Thresholded Image",thresh_copy)
cv2.imshow("Image with circles",img)
cv2.waitKey(0)
Now this script yields the result:
But there are certain trade-offs; for example, if block_size and constant_c are changed to 11 and 2 respectively, the script yields:
You should try applying erosion with a kernel of a suitable shape to separate the overlapping circles in the thresholded image (a sketch follows below).
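A minimal sketch of that erosion step, reusing the adaptive-threshold setup from the code above; the elliptical 3x3 kernel and single iteration are assumptions to tune:
import cv2
import numpy as np
block_size, constant_c, min_cnt_area = 9, 1, 400
img = cv2.imread('viMmP.png')
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(img_gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, block_size, constant_c)
# Erode with a small elliptical kernel to break thin bridges between overlapping rings.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
eroded = cv2.erode(thresh, kernel, iterations=1)
contours, hierarchy = cv2.findContours(eroded, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    if cv2.contourArea(cnt) > min_cnt_area:
        (x, y), radius = cv2.minEnclosingCircle(cnt)
        cv2.circle(img, (int(x), int(y)), int(radius), (255, 0, 0), 1)
cv2.imshow("Circles after erosion", img)
cv2.waitKey(0)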
You may look at the following links to understand more about adaptive thresholding and contours:
Thresholding examples: http://docs.opencv.org/3.1.0/d7/d4d/tutorial_py_thresholding.html
Thresholding reference: http://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html
Contour examples: http://docs.opencv.org/3.1.0/dd/d49/tutorial_py_contour_features.html
