Computer Vision: OpenCV counting small circles inside a big circle - Python

Here is the image I have been working on.
The goal is to detect the small circles inside the big one.
Currently, I have converted the image to grayscale and applied Otsu's threshold (cv2.THRESH_OTSU), which resulted in this image.
After this, I filtered out large objects using findContours and applied a morphological opening with an elliptical kernel, which I found on Stack Overflow.
The resulting image looks like this.
Can someone guide me toward the correct approach and point out where I'm going wrong?
Below is the code I have been working on:
import cv2
import numpy as np
# Load image, grayscale, Otsu's threshold
image = cv2.imread('01.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
#cv2.imwrite('thresh.jpg', thresh)
# Remove small non-connecting blobs
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    area = cv2.contourArea(c)
    #print(area)
    if area < 200 and area > 0:
        cv2.drawContours(thresh, [c], 0, 0, -1)
# Morph open using elliptical shaped kernel
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3,3))
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=3)
# Find circles
cnts = cv2.findContours(opening, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    area = cv2.contourArea(c)
    if area > 20 and area < 50:
        ((x, y), r) = cv2.minEnclosingCircle(c)
        cv2.circle(image, (int(x), int(y)), int(r), (36, 255, 12), 2)
cv2.namedWindow('orig', cv2.WINDOW_NORMAL)
cv2.imshow('orig', thresh)
cv2.namedWindow('image', cv2.WINDOW_NORMAL)
cv2.imshow('image', image)
cv2.waitKey()
Thank you!

You throw away a lot of useful information by converting your image to grayscale.
Why not use the fact that the spots you are looking for are the only thing that is red/orange?
I multiplied the saturation channel with the red channel, which gave me this image:
Now finding the white blobs becomes trivial.
Experiment with different weights for those channels, or apply thresholds first; there are many ways. Also experiment with different illumination and different backgrounds until you get the ideal input for your image processing.
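As a rough sketch of that idea (the file name and the Otsu follow-up step are assumptions, not part of the answer):
import cv2
import numpy as np
# Hedged sketch: multiply saturation by red so only red/orange spots stay bright.
image = cv2.imread('01.jpg')  # assumed file name
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
saturation = hsv[:, :, 1].astype(np.float32) / 255.0
red = image[:, :, 2].astype(np.float32) / 255.0  # OpenCV loads images as BGR
combined = (saturation * red * 255).astype(np.uint8)
# Otsu's threshold (an assumed follow-up) turns the bright blobs into a mask
blobs = cv2.threshold(combined, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
cv2.imshow('blobs', blobs)
cv2.waitKey()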

The main problem in your code is the flag you are using in the cv2.findContours() function.
For a problem in which we have to find contours that can appear inside another contour (the big circle), we should not use the flag cv2.RETR_EXTERNAL; use cv2.RETR_TREE instead, which retrieves the full contour hierarchy.
Also, it is generally better to use cv2.CHAIN_APPROX_NONE instead of cv2.CHAIN_APPROX_SIMPLE if memory is not an issue, since it keeps every boundary point of each contour.
Thus, the following simple code can be used to solve this problem.
import cv2
import numpy as np
Image = cv2.imread("Adg5.jpg")
GrayImage = cv2.cvtColor(Image, cv2.COLOR_BGR2GRAY)
# Applying Otsu's Thresholding
Retval, ThreshImage = cv2.threshold(GrayImage, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# Finding Contours in the image
Contours, Hierarchy = cv2.findContours(ThreshImage, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
# Taking only those contours which have no child contour.
FinalContours = [Contours[i] for i in range(len(Contours)) if Hierarchy[0][i][2] == -1]
# Drawing contours
Image = cv2.drawContours(Image, FinalContours, -1, (0, 255, 0), 1)
cv2.imshow("Contours", Image)
cv2.waitKey(0)
Resulting image
In this method a lot of noise also comes through at the boundary, but the required orange points are detected as well. The remaining task is to remove that boundary noise.
Another method, which removes the boundary noise to a great extent, is similar to @Piglet's approach.
Here, I use the HSV image to segment out the orange points and then detect them using the approach above.
import cv2
import numpy as np
Image = cv2.imread("Adg5.jpg")
HSV_Image = cv2.cvtColor(Image, cv2.COLOR_BGR2HSV)
# Extracting orange colour using HSV Image.
ThreshImage = cv2.inRange(HSV_Image, np.array([0, 81, 0]), np.array([41, 255, 255]))
# Finding Contours
Contours, Hierarchy = cv2.findContours(ThreshImage, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
# Taking only those contours which have no child contour.
FinalContours = [Contours[i] for i in range(len(Contours)) if Hierarchy[0][i][2] == -1]
# Drawing Contours
Image = cv2.drawContours(Image, FinalContours, -1, (0, 255, 0), 1)
cv2.imshow("Contours", Image)
cv2.waitKey(0)
Resultant Image

One idea: detect the small circles with a sliding window. When the small circle occupies more than 90% of the window area (compare an inscribed circle with its square) but less than 100% (to avoid the window landing inside the bigger circle), that position is a small circle. The largest window size corresponds to the largest small-circle size. Hope this helps; a rough sketch follows.
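A possible sketch of that sliding-window test (the file name, window size, and the exact thresholds are all assumptions to tune):
import cv2
# Hedged sketch of the sliding-window idea; 'blobs.png' is an assumed binary mask.
mask = cv2.imread('blobs.png', cv2.IMREAD_GRAYSCALE)
win = 15  # assumed: roughly the diameter of the largest small circle
hits = []
for y in range(mask.shape[0] - win):
    for x in range(mask.shape[1] - win):
        # Fraction of the window covered by white pixels
        ratio = cv2.countNonZero(mask[y:y + win, x:x + win]) / float(win * win)
        if 0.9 <= ratio < 1.0:  # the suggested 90%-100% band
            hits.append((x + win // 2, y + win // 2))
print('candidate circle centers:', len(hits))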
In addition, on Piglet's result you can apply k-means with k = 2 to get a binary image, and then use findContours to count the small circles.
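A minimal sketch of that k-means binarization, assuming Piglet's saturation-times-red image was saved as 'combined.png':
import cv2
import numpy as np
# Hedged sketch: k-means with k=2 splits pixels into bright/dark clusters.
img = cv2.imread('combined.png', cv2.IMREAD_GRAYSCALE)  # assumed file name
samples = img.reshape(-1, 1).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(samples, 2, None, criteria, 10,
                                cv2.KMEANS_RANDOM_CENTERS)
# Make the brighter cluster white and the darker one black
bright = int(np.argmax(centers.ravel()))
binary = np.where(labels.reshape(img.shape) == bright, 255, 0).astype(np.uint8)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 signature
print('small circles found:', len(contours))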

Related

How to get rectangular box contours when there are overlapping distractions using OpenCV

I pieced together a quick algorithm in Python to get the input boxes from a handwritten invoice.
# some preprocessing
img = np.copy(orig_img)
img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
img = cv2.GaussianBlur(img,(5,5),0)
_, img = cv2.threshold(img,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# get contours
contours, hierarchy = cv2.findContours(img, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for i, cnt in enumerate(contours):
    approx = cv2.approxPolyDP(cnt, 0.01*cv2.arcLength(cnt, True), True)
    if len(approx) == 4:
        cv2.drawContours(orig_img, contours, i, (0, 255, 0), 2)
It fails to get the 2nd one in this example because the handwriting crosses the box boundary.
Note that this picture could be taken with a mobile phone, so aspect ratios may be a little funny.
So, what are some neat recipes to get around my problem?
And as a bonus. These boxes are from an A4 page with a lot of other stuff going on. Would you recommend a whole different approach to getting the handwritten numbers out?
EDIT
This might be interesting. If I don't filter for 4 sided polys, I get the contours but they go all around the hand-drawn digit. Maybe there's a way to make contours have water-like cohesion so that they pinch off when they get close to themselves?
FURTHER EDIT
Here is the original image without the bounding boxes drawn on it.
Here's a potential solution:
Obtain binary image. We load the image, convert to grayscale, apply a Gaussian blur, and then Otsu's threshold
Detect horizontal lines. We create a horizontal kernel and draw detected horizontal lines onto a mask
Detect vertical lines. We create a vertical kernel and draw detected vertical lines onto a mask
Perform morphological opening. We create a rectangular kernel and perform morph opening to smooth out noise and separate any connected contours
Find contours, draw rectangle, and extract ROI. We find contours and draw the bounding rectangle onto the image
Here's a visualization of each step:
Binary image
Detected horizontal and vertical lines drawn onto a mask
Morphological opening
Result
Individual extracted and saved ROIs
Note: To extract only the hand written numbers/letters out of each ROI, take a look at a previous answer in Remove borders from image but keep text written on borders (preprocessing before OCR)
Code
import cv2
import numpy as np
# Load image, grayscale, blur, Otsu's threshold
image = cv2.imread('1.png')
original = image.copy()
mask = np.zeros(image.shape, dtype=np.uint8)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5,5), 0)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# Find horizontal lines
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (50,1))
detect_horizontal = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=2)
cnts = cv2.findContours(detect_horizontal, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(mask, [c], -1, (255, 255, 255), 3)
# Find vertical lines
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,50))
detect_vertical = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, vertical_kernel, iterations=2)
cnts = cv2.findContours(detect_vertical, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(mask, [c], -1, (255, 255, 255), 3)
# Morph open
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7,7))
opening = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=1)
# Draw rectangle and save each ROI
number = 0
cnts = cv2.findContours(opening, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(image, (x, y), (x + w, y + h), (36, 255, 12), 2)
    ROI = original[y:y+h, x:x+w]
    cv2.imwrite('ROI_{}.png'.format(number), ROI)
    number += 1
cv2.imshow('thresh', thresh)
cv2.imshow('mask', mask)
cv2.imshow('opening', opening)
cv2.imshow('image', image)
cv2.waitKey()
Since the squares have quite straight lines, it's good to use the Hough transform:
1- Make the image grayscale, then do an Otsu threshold on it, then reverse the binary image
2- Do Hough transform (HoughLinesP) and draw the lines on a new image
3- With findContours and drawContours, make the 3 roi clean
4- Erode the final image a little to make the boxes neater
I wrote the code in C++; it's easily convertible to Python:
Mat img = imread("D:/1.jpg", 0);
threshold(img, img, 0, 255, THRESH_OTSU);
imshow("Binary image", img);
img = 255 - img;
imshow("Reversed binary image", img);
Mat img_1 = Mat::zeros(img.size(), CV_8U);
Mat img_2 = Mat::zeros(img.size(), CV_8U);
vector<Vec4i> lines;
HoughLinesP(img, lines, 1, 0.1, 95, 10, 1);
for (size_t i = 0; i < lines.size(); i++)
    line(img_1, Point(lines[i][0], lines[i][1]), Point(lines[i][2], lines[i][3]),
        Scalar(255, 255, 255), 2, 8);
imshow("Hough Lines", img_1);
vector<vector<Point>> contours;
findContours(img_1,contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
for (int i = 0; i < contours.size(); i++)
    drawContours(img_2, contours, i, Scalar(255, 255, 255), -1);
imshow("final result after drawcontours", img_2);
waitKey(0);
Thank you to those who shared solutions. I ended up taking a slightly different path in the end.
Grayscale, Gaussian Blur, Otsu threshold
Get contours
Filter contours by aspect ratio and extent
Return the minimum upright bounding box of the contour.
Remove any bounding boxes that encapsulate smaller bounding boxes (because you get two boxes, one for the inside contour, and one for the outside).
Here's the code if anyone's interested (step 5 was just basic numpy manipulation; a rough sketch of it follows the code below).
orig_img = cv2.imread('example0.jpg')
img = np.copy(orig_img)
img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
img = cv2.GaussianBlur(img,(5,5),0)
_, img = cv2.threshold(img,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
contours, hierarchy = cv2.findContours(img, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
boxes = list()
for i, cnt in enumerate(contours):
    x, y, w, h = cv2.boundingRect(cnt)
    aspect_ratio = float(w) / h
    area = cv2.contourArea(cnt)
    rect_area = w * h
    extent = float(area) / rect_area
    if abs(aspect_ratio - 1) < 0.1 and extent > 0.7:
        boxes.append((x, y, w, h))
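A hedged sketch of step 5 (not the author's actual code; one plausible version of the numpy manipulation described above):
# One plausible version of step 5: drop any box that fully contains
# another, smaller box (the outer duplicate of a double contour).
def contains(outer, inner):
    ox, oy, ow, oh = outer
    ix, iy, iw, ih = inner
    return (ox <= ix and oy <= iy and
            ox + ow >= ix + iw and oy + oh >= iy + ih)

boxes = [b for b in boxes
         if not any(other != b and contains(b, other) for other in boxes)]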
And here's an example of what came out when cutting out the boundary boxes from the original image.

How to remove noise artifacts from an image for OCR with Python OpenCV?

I have subsets of images that contains digits. Each subset is read by Tesseract for OCR. Unfortunately for some images the cropping from the original image isn't optimal.
Hence some artifacts/remains are left at the top and bottom of the image, and they hamper Tesseract's ability to recognize characters. I would like to get rid of these artifacts and reach a result like this:
First I considered a simple approach: I set the first row of pixels as the reference: if an artifact was found along the x-axis (i.e., a white pixel if the image is binarized), I removed it along the y-axis until the next black pixel. Code for this approach is the one below:
import cv2
inp = cv2.imread("testing_file.tif")
inp = cv2.cvtColor(inp, cv2.COLOR_BGR2GRAY)
_,inp = cv2.threshold(inp, 150, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
ax = inp.shape[1]
ay = inp.shape[0]
out = inp.copy()
for i in range(ax):
    j = 0
    while j < ay:
        if out[j, i] == 255:
            out[j, i] = 0
        else:
            break
        j += 1
out = cv2.bitwise_not(out)
cv2.imwrite('output.png',out)
But the result isn't good at all:
Then I stumbled across the flood_fill function from scipy, but found it too time-consuming and still not efficient. A similar question was asked on SO but didn't help much. Maybe a k-nearest-neighbor approach could be considered? I also found out that methods that merge neighboring pixels under some criterion are called region-growing methods, among which single linkage is the most common.
What would you recommend to remove the upper and lower artifacts?
Here's a simple approach:
Convert image to grayscale
Otsu's threshold to obtain binary image
Create a special horizontal kernel and dilate
Detect horizontal lines, sort for largest contour, and draw onto mask
Bitwise-and
After converting to grayscale, we apply Otsu's threshold to get a binary image
# Read in image, convert to grayscale, and Otsu's threshold
image = cv2.imread('1.png')
gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
Next we create a long horizontal kernel and dilate to connect the numbers together
# Create special horizontal kernel and dilate
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (70,1))
dilate = cv2.dilate(thresh, horizontal_kernel, iterations=1)
From here we detect horizontal lines and sort for the largest contour. The idea is that the largest contour will be the middle section of the numbers, where the numbers are all "complete". Any smaller contours will be partial or cut-off numbers, so we filter them out here. We draw this largest contour onto a mask
# Detect horizontal lines, sort for largest contour, and draw on mask
mask = np.zeros(image.shape, dtype=np.uint8)
detected_lines = cv2.morphologyEx(dilate, cv2.MORPH_OPEN, horizontal_kernel, iterations=1)
cnts = cv2.findContours(detected_lines, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
for c in cnts:
    cv2.drawContours(mask, [c], -1, (255, 255, 255), -1)
    break
Now that we have the outline of the desired numbers, we simply bitwise-and with our original image and color the background white to get our result
# Bitwise-and to get result and color background white
mask = cv2.cvtColor(mask,cv2.COLOR_BGR2GRAY)
result = cv2.bitwise_and(image,image,mask=mask)
result[mask==0] = (255,255,255)
Full code for completeness
import cv2
import numpy as np
# Read in image, convert to grayscale, and Otsu's threshold
image = cv2.imread('1.png')
gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# Create special horizontal kernel and dilate
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (70,1))
dilate = cv2.dilate(thresh, horizontal_kernel, iterations=1)
# Detect horizontal lines, sort for largest contour, and draw on mask
mask = np.zeros(image.shape, dtype=np.uint8)
detected_lines = cv2.morphologyEx(dilate, cv2.MORPH_OPEN, horizontal_kernel, iterations=1)
cnts = cv2.findContours(detected_lines, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
for c in cnts:
    cv2.drawContours(mask, [c], -1, (255, 255, 255), -1)
    break
# Bitwise-and to get result and color background white
mask = cv2.cvtColor(mask,cv2.COLOR_BGR2GRAY)
result = cv2.bitwise_and(image,image,mask=mask)
result[mask==0] = (255,255,255)
cv2.imshow('thresh', thresh)
cv2.imshow('dilate', dilate)
cv2.imshow('result', result)
cv2.waitKey()

How to obtain a dynamic threshold for contour detection in OpenCV

In my image database, I need to 1) detect whether there is a flake (a very black contour) in an image, and 2) find the minimum enclosing circle of the flake to measure its radius.
However, the images come with slightly different illuminations.
Here are some examples:
This one is very easy to either detect and measure:
But these ones are more difficult:
My initial thought is to use a threshold related to the average value of pixels of images.
Is there any other way of computing such a dynamic threshold in OpenCV?
I think what you're looking for is cv2.adaptiveThreshold() or Otsu's thresholding. To satisfy requirement #1, we can use a minimum threshold area to determine whether the flake exists. For #2, once we detect the contour, we can use moments to find its centroid and the bounding box to estimate its radius. Here's a simple approach:
Convert image to grayscale and median blur
Adaptive threshold
Morph close to smooth image
Dilate to enhance contour
Find contours and sort using contour area
The main idea is to use a large median blur to remove the noise, then an adaptive threshold.
Here are the results for each of your four pictures. For some of them, the black spot was not actually a circle; it was more of an oval shape. You can decide what you want to do in that situation.
import cv2
image = cv2.imread('4.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.medianBlur(gray, 25)
thresh = cv2.adaptiveThreshold(blur,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV,27,6)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
close = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel, iterations=1)
dilate = cv2.dilate(close, kernel, iterations=2)
cnts = cv2.findContours(dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:10]
minimum_area = 500
for c in cnts:
    area = cv2.contourArea(c)
    if area > minimum_area:
        # Find centroid
        M = cv2.moments(c)
        cX = int(M["m10"] / M["m00"])
        cY = int(M["m01"] / M["m00"])
        cv2.circle(image, (cX, cY), 20, (36, 255, 12), 2)
        x, y, w, h = cv2.boundingRect(c)
        cv2.putText(image, 'Radius: {}'.format(w/2), (10,20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (36,255,12), 2)
        break
cv2.imshow('thresh', thresh)
cv2.imshow('close', close)
cv2.imshow('image', image)
cv2.waitKey()
You should start with thresholding.
There are several thresholding methods to choose from; with good parameters, most of the noise will go away.
Then you can do edge detection.
Finally, the Hough transform seems to be the best approach for detecting circles (the remaining noise will be removed by the parameters of the Hough circle transform).
You can set a minimal and a maximal radius, so if you have an idea of the average radius, you can adjust it this way.
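A rough sketch of that pipeline (the file name and every parameter value here are assumptions to tune on your own images):
import cv2
import numpy as np
# Hedged sketch: blur, then Hough circle transform with a radius band.
image = cv2.imread('4.jpg')  # assumed file name
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.medianBlur(gray, 5)
# minRadius/maxRadius encode a rough prior on the flake's size
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                           param1=100, param2=30, minRadius=20, maxRadius=300)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(image, (int(x), int(y)), int(r), (36, 255, 12), 2)
        print('Radius:', int(r))
cv2.imshow('circles', image)
cv2.waitKey()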

How to remove small particle background noise from an image?

I'm trying to remove the gradient background noise from the images I have. I've tried many approaches with cv2 without success.
I first converted the image to grayscale to flatten some of the gradients, hoping that would make the contours easier to find.
Does anybody know of a way to deal with this kind of background? I even tried taking a sample from the corners and applying some kind of kernel filter.
One way to remove gradients is to use cv2.medianBlur() to smooth out the image by taking the median of all pixels under a kernel. Then to extract the letters, you can perform cv2.adaptiveThreshold().
The blur removes most of the gradient noise. You can increase the kernel size to remove more, but that will also remove the details of the letters.
Adaptive-threshold the image to extract the characters. From your original image, it seems gradient noise was added onto the letters c, x, and z to make them blend into the background.
Next we can perform cv2.Canny() to detect edges and obtain this
Then we can do a morphological close using cv2.morphologyEx() to clean up the small noise and enhance details (the variable in the code below is named opening, but the operation used is cv2.MORPH_CLOSE)
Now we dilate using cv2.dilate() to obtain a single contour
From here, we find contours using cv2.findContours(). We iterate through each contour and filter using cv2.contourArea() with a minimum and maximum area to obtain bounding boxes. Depending on your image, you may have to adjust the min/max area filter. Here's the result
import cv2
import numpy as np
image = cv2.imread('1.png')
blur = cv2.medianBlur(image, 7)
gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(gray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY,11,3)
canny = cv2.Canny(thresh, 120, 255, 1)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
opening = cv2.morphologyEx(canny, cv2.MORPH_CLOSE, kernel)
dilate = cv2.dilate(opening, kernel, iterations=2)
cnts = cv2.findContours(dilate, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
min_area = 500
max_area = 7000
for c in cnts:
    area = cv2.contourArea(c)
    if min_area < area < max_area:
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(image, (x, y), (x + w, y + h), (36, 255, 12), 2)
cv2.imshow('blur', blur)
cv2.imshow('thresh', thresh)
cv2.imshow('canny', canny)
cv2.imshow('opening', opening)
cv2.imshow('dilate', dilate)
cv2.imshow('image', image)
cv2.waitKey(0)
You could assign each pixel a value that defines how dark it is. Then, where the values are similar, find their median and set those pixels to it.
Normalize the result to white, grey, and black; then you can differentiate between the background and the characters.
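A minimal sketch of that normalization, assuming two fixed cut points (which you would tune or derive from the image's histogram):
import cv2
import numpy as np
# Hedged sketch: bucket every pixel into black, grey, or white, then keep
# only the darkest bucket as the characters. The file name and the 85/170
# cut points are assumptions.
gray = cv2.imread('1.png', cv2.IMREAD_GRAYSCALE)
levels = np.full_like(gray, 128)   # grey background by default
levels[gray < 85] = 0              # darkest pixels become black
levels[gray > 170] = 255           # lightest pixels become white
characters = np.where(levels == 0, 0, 255).astype(np.uint8)
cv2.imshow('characters', characters)
cv2.waitKey()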

Get area within contours Opencv Python?

I have used an adaptive thresholding technique to create a picture like the one below:
The code I used was:
image = cv2.adaptiveThreshold(image, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 45, 0)
Then, I use this code to get contours:
cnt = cv2.findContours(image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]
My goal is to generate a mask using all the pixels within the outer contour, so I want to fill in all pixels within the object to be white. How can I do this?
I have tried the code below to create a mask, but the resulting mask seems no different than the image after applying the adaptive threshold:
mask = np.zeros(image.shape[:2], np.uint8)
cv2.drawContours(mask, cnt, -1, 255, -1)
What you have is almost correct. If you look at your thresholded image, the reason it isn't working is that your shoe object has gaps in it. Specifically, you expect the shoe's perimeter to be fully connected. If this were the case, then extracting the outermost contour (which is what your code does) would give you a single contour representing the outer perimeter of the object. Once you fill in that contour, the shoe should be completely solid.
Because the perimeter of your shoe is incomplete and broken, you end up with disconnected white regions. If you use findContours to find all of the contours, it will only find the contours of each white shape, not the outermost perimeter. As such, findContours gives you essentially the same result as the original image, because you're simply finding the perimeter of each white region and then filling those regions back in.
What you need to do is ensure that the image is completely closed. I would recommend using morphology to close all of the disconnected regions together, then running findContours on this new image. Specifically, perform a binary morphological closing: it takes disconnected white regions that are close together and connects them. Use a morphological closing with, say, a 7 x 7 square structuring element to close the shoe. You can think of this structuring element as the minimum separation between white regions for them to be considered connected.
As such, do something like this:
import numpy as np
import cv2
image = cv2.imread('...') # Load your image in here
# Your code to threshold
image = cv2.adaptiveThreshold(image, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 45, 0)
# Perform morphology
se = np.ones((7,7), dtype='uint8')
image_close = cv2.morphologyEx(image, cv2.MORPH_CLOSE, se)
# Your code now applied to the closed image
cnt = cv2.findContours(image_close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]
mask = np.zeros(image.shape[:2], np.uint8)
cv2.drawContours(mask, cnt, -1, 255, -1)
This code essentially takes your thresholded image, and applies morphological closing to this image. After, we find the external contours of this image, and fill them in with white. FWIW, I downloaded your thresholded image, and tried this on my own. This is what I get with your image:
A simple approach would be to close the holes in the foreground to form a single contour with cv2.morphologyEx() and cv2.MORPH_CLOSE.
Now that the external contour is closed, we can find the outer contour with cv2.findContours() and use cv2.fillPoly() to fill in all of its pixels with white.
import cv2
# Load in image, convert to grayscale, and threshold
image = cv2.imread('1.png')
gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# Close contour
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7,7))
close = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel, iterations=1)
# Find outer contour and fill with white
cnts = cv2.findContours(close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cv2.fillPoly(close, cnts, [255,255,255])
cv2.imshow('close', close)
cv2.waitKey()
