Get area within contours OpenCV Python?

I have used an adaptive thresholding technique to create a picture like the one below:
The code I used was:
image = cv2.adaptiveThreshold(image, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 45, 0)
Then, I use this code to get contours:
cnt = cv2.findContours(image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]
My goal is to generate a mask using all the pixels within the outer contour, so I want to fill in all pixels within the object to be white. How can I do this?
I have tried the code below to create a mask, but the resulting mask seems no different than the image after applying the adaptive threshold:
mask = np.zeros(image.shape[:2], np.uint8)
cv2.drawContours(mask, cnt, -1, 255, -1)

What you have is almost correct. If you take a look at your thresholded image, the reason it isn't working is that your shoe object has gaps in it. Specifically, you expect the shoe's perimeter to be fully connected. If that were the case, extracting the most external contour (which is what your code does) would give you a single contour representing the outer perimeter of the object. Once you fill in that contour, the shoe should be completely solid.
Because the perimeter of your shoe is incomplete and broken, you end up with disconnected white regions. If you use findContours on this image, it will only find the contours of each white shape, not the outer perimeter of the shoe. So you get essentially the same result as the original image, because you're simply tracing the perimeter of each white region and then filling those regions back in.
What you need to do is ensure that the outline is completely closed. I would recommend using morphology to connect the disconnected regions, then running findContours on this new image. Specifically, perform a binary morphological closing, which takes white regions that are close together and connects them. Try something like a 7 x 7 square structuring element to close the shoe; you can think of the structuring element size as roughly the largest gap between white regions that will still be bridged.
As such, do something like this:
import numpy as np
import cv2
image = cv2.imread('...', cv2.IMREAD_GRAYSCALE) # Load your image in here; read as grayscale since adaptiveThreshold expects a single-channel image
# Your code to threshold
image = cv2.adaptiveThreshold(image, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 45, 0)
# Perform morphology
se = np.ones((7,7), dtype='uint8')
image_close = cv2.morphologyEx(image, cv2.MORPH_CLOSE, se)
# Your code now applied to the closed image
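# Note: taking index [0] assumes findContours returns (contours, hierarchy), as in OpenCV 2.x / 4.x;
# OpenCV 3.x returns (image, contours, hierarchy), where index [1] would be needed instead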
cnt = cv2.findContours(image_close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]
mask = np.zeros(image.shape[:2], np.uint8)
cv2.drawContours(mask, cnt, -1, 255, -1)
This code takes your thresholded image and applies a morphological closing to it. We then find the external contours of the closed image and fill them in with white. FWIW, I downloaded your thresholded image and tried this myself. This is what I get with your image:

A simple approach would be to close the holes in the foreground to form a single contour with cv2.morphologyEx() and cv2.MORPH_CLOSE.
Once the holes are closed, we can find the outer contour with cv2.findContours() and use cv2.fillPoly() to fill in all pixels within it with white.
import cv2
# Load in image, convert to grayscale, and threshold
image = cv2.imread('1.png')
gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# Close contour
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7,7))
close = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel, iterations=1)
# Find outer contour and fill with white
cnts = cv2.findContours(close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cv2.fillPoly(close, cnts, [255,255,255])
cv2.imshow('close', close)
cv2.waitKey()

Related

Masked out large irregular shape from image with Python

The objective is to remove the large irregular area and keep only the characters in the image.
For example, given the following
and the expected masked output
I have the impression this can be achieved as below
import cv2
import numpy as np
from matplotlib import pyplot as plt
dpath='remove_bg1.jpg'
img = cv2.imread(dpath)
img_fh=img.copy()
cv2.bitwise_not(img_fh,img_fh)
ksize=10
kernel = np.ones((ksize,ksize),np.uint8)
erosion = cv2.erode(img_fh,kernel,iterations = 3)
invertx = cv2.bitwise_not(erosion)
masked = cv2.bitwise_not(cv2.bitwise_and(img_fh,invertx))
all_image=[img,invertx,masked]
ncol=len(all_image)
for idx, i in enumerate(all_image):
    plt.subplot(int(f'1{ncol}{idx+1}')), plt.imshow(i)
plt.show()
which produces:
Clearly, the code above did not produce the expected result.
May I know how to address this issue properly?
To remove the unwanted blob, we must create a mask such that it encloses it completely.
Flow:
Inversely binarize the image (such that you have a white foreground against dark background)
Erode the image (since the blob makes contact with the letter 'A', it has to be isolated)
Find contour with the largest area
Draw the contour on another 1-channel image and thicken it (dilation)
Pixel assignment: pixels covered by the dilated blob are made white in the original image
Code:
import cv2
import numpy as np

im = cv2.imread('stained_text.jpg')
im2 = im.copy()
gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
# inverse binarization
th = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
Notice the blob region touching the letter 'A'. Hence to isolate it we perform erosion using an elliptical kernel
# erosion
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5,5))
erode = cv2.erode(th, kernel, iterations=2)
# find contours
contours, hierarchy = cv2.findContours(erode, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# Contour of maximum area
c = max(contours, key = cv2.contourArea)
# create 1-channel image in black
black = np.zeros((im.shape[0], im.shape[1]), np.uint8)
# draw the contour on it
black = cv2.drawContours(black, [c], 0, 255, -1)
# perform dilation to have clean border
# we are using the same kernel
dilate = cv2.dilate(black, kernel, iterations = 3)
# assign the dilated area in white over the original image
im2[dilate == 255] = (255,255,255)
This is just one of many possible ways to proceed. The key thing to note is how the blob is isolated.

Contour around watermark opencv

I want to draw a box around the watermark in my image. I have extracted the watermark and have found the contours. However, the contour is not drawn around the watermark. The contour is drawn across my full image. Kindly help me with the correct code.
The output contour coordinates are:
[array([[[ 0, 0]],
[[ 0, 634]],
[[450, 634]],
[[450, 0]]], dtype=int32)]
The output image is:
My code snippet is as follows:
img = cv2.imread('Watermark/w3.png')
gr = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
bg = gr.copy()
closing = cv2.morphologyEx(bg, cv2.MORPH_CLOSE, kernel) #dilation followed by erosion
#plt.imshow(cv2.subtract(img,opening))
plt.imshow(closing)
_,contours, hierarchy = cv2.findContours(closing, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
print(contours)
print(len(contours))
if len(contours) > 0:
    cnt = contours[len(contours)-1]
    cv2.drawContours(closing, [cnt], 0, (0,255,0), 3)
plt.imshow(closing)
The findContours function has difficulty finding your box contour because it expects to run on a binary image. From the documentation:
For better accuracy, use binary images. So before finding contours, apply threshold or canny edge detection.
In OpenCV, finding contours is like finding white object from black background. So remember, object to be found should be white and background should be black.
Thus, after the cvtColor call, apply a threshold, making sure you end up with a black background.
...
img = cv2.imread('sample.png')
gr = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, bg = cv2.threshold(gr, 127, 255, cv2.THRESH_BINARY_INV)
...
If you run findContours over this binary image, you will find multiple boxes.
To get a single box around the whole text, you can tune the iterations parameter of the morphologyEx call until it produces one single blob.
...
kernel = np.ones((3,3))
closing = cv2.morphologyEx(bg, cv2.MORPH_CLOSE, kernel, iterations=5)
...
So, after creating the blob, apply the findContours call you already have and use minAreaRect to find the rotated rectangle of minimum area enclosing each set of points.
...
contours, hierarchy = cv2.findContours(closing, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
print(len(contours))
for i in range(len(contours)):
    rect = cv2.minAreaRect(contours[i])
    box = cv2.boxPoints(rect)
    box = np.int0(box)
    cv2.drawContours(img, [box], 0, (127,60,255), 2)
cv2.imwrite("output_box.png", img)

Computer Vision: Opencv Counting small circles inside big circle

Here is the image I have been working on:
The goal is to detect small circles inside the big one.
Currently what I have done is convert the image to grayscale and apply Otsu's threshold (cv2.THRESH_OTSU), which resulted in this image:
After this I filtered out large objects using findContours and applied a morphological open with an elliptical kernel, which I found on Stack Overflow.
The resulting image looks like this:
Can someone guide me along the correct path and point out where I'm going wrong?
Below is the code I have been working on:
import cv2
import numpy as np
# Load image, grayscale, Otsu's threshold
image = cv2.imread('01.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
#cv2.imwrite('thresh.jpg', thresh)
# Filter out large non-connecting objects
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    area = cv2.contourArea(c)
    #print(area)
    if area < 200 and area > 0:
        cv2.drawContours(thresh, [c], 0, 0, -1)
# Morph open using elliptical shaped kernel
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3,3))
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=3)
# Find circles
cnts = cv2.findContours(opening, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    area = cv2.contourArea(c)
    if area > 20 and area < 50:
        ((x, y), r) = cv2.minEnclosingCircle(c)
        cv2.circle(image, (int(x), int(y)), int(r), (36, 255, 12), 2)
cv2.namedWindow('orig', cv2.WINDOW_NORMAL)
cv2.imshow('orig', thresh)
cv2.namedWindow('image', cv2.WINDOW_NORMAL)
cv2.imshow('image', image)
cv2.waitKey()
Thank you!
You throw away a lot of useful information by converting your image to grayscale.
Why not use the fact that the spots you are looking for are the only thing that is red/orange?
I multiplied the saturation channel with the red channel, which gave me this image:
Now finding the white blobs becomes trivial.
Experiment with different weights for those channels, or apply thresholds first. There are many ways. Experiment with different illumination, different backgrounds until you get the ideal input for your image processing.
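For what it's worth, here is a minimal sketch of this channel-multiplication idea; the filename, the normalization, and the final Otsu threshold are assumptions rather than part of this answer:
import cv2
import numpy as np
# Emphasize the red/orange dots by multiplying the saturation channel by the red channel
image = cv2.imread('01.jpg')  # hypothetical filename
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
saturation = hsv[:, :, 1].astype(np.float32) / 255.0
red = image[:, :, 2].astype(np.float32) / 255.0  # OpenCV stores channels as BGR
product = (saturation * red * 255).astype(np.uint8)
# The bright blobs can then be separated with a simple threshold (Otsu here is an assumption)
blobs = cv2.threshold(product, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
cv2.imshow('product', product)
cv2.imshow('blobs', blobs)
cv2.waitKey()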
The main problem in your code is the flag that you are using in the cv2.findContours() function.
For a problem where we have to find contours that can appear inside another contour (the big circle), we should not use the flag cv2.RETR_EXTERNAL; use cv2.RETR_TREE instead (see the OpenCV documentation on contour retrieval modes for details).
Also, it is always better to use cv2.CHAIN_APPROX_NONE instead of cv2.CHAIN_APPROX_SIMPLE if memory is not an issue (again, see the documentation on contour approximation methods).
Thus, the following simple code can be used to solve this problem.
import cv2
import numpy as np
Image = cv2.imread("Adg5.jpg")
GrayImage = cv2.cvtColor(Image, cv2.COLOR_BGR2GRAY)
# Applying Otsu's Thresholding
Retval, ThreshImage = cv2.threshold(GrayImage, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# Finding Contours in the image
Contours, Hierarchy = cv2.findContours(ThreshImage, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
# Taking only those contours which have no child contour.
FinalContours = [Contours[i] for i in range(len(Contours)) if Hierarchy[0][i][2] == -1]
# Drawing contours
Image = cv2.drawContours(Image, FinalContours, -1, (0, 255, 0), 1)
cv2.imshow("Contours", Image)
cv2.waitKey(0)
Resulting image
With this method a lot of noise also appears at the boundary, but the required orange points are detected. The remaining task is to remove that boundary noise.
Another method that removes boundary noise to a great extent is similar to @Piglet's approach.
Here, I use the HSV image to segment out the orange points and then detect them using the approach above.
import cv2
import numpy as np
Image = cv2.imread("Adg5.jpg")
HSV_Image = cv2.cvtColor(Image, cv2.COLOR_BGR2HSV)
# Extracting orange colour using HSV Image.
ThreshImage = cv2.inRange(HSV_Image, np.array([0, 81, 0]), np.array([41, 255, 255]))
# Finding Contours
Contours, Hierarchy = cv2.findContours(ThreshImage, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
# Taking only those contours which have no child contour.
FinalContours = [Contours[i] for i in range(len(Contours)) if Hierarchy[0][i][2] == -1]
# Drawing Contours
Image = cv2.drawContours(Image, FinalContours, -1, (0, 255, 0), 1)
cv2.imshow("Contours", Image)
cv2.waitKey(0)
Resultant Image
I have an idea: detect the small circles with a sliding window. When the small-circle area occupies more than 90% of the sliding-window area (a circle inscribed in a square) and less than 100% (to avoid the window sitting entirely inside the bigger circle), that position is a small circle. The largest sliding-window size is the size of the largest small circle. Hope this helps.
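A rough sketch of how that sliding-window check might look; the binary input image, the window size, the step, and the exact ratio bounds are all assumptions:
import cv2
import numpy as np
# Hypothetical binary image in which the small circles appear as white blobs
binary = cv2.imread('small_circles_binary.png', cv2.IMREAD_GRAYSCALE)
win = 15    # assumed size of the largest small circle
step = 5
candidates = []
for y in range(0, binary.shape[0] - win, step):
    for x in range(0, binary.shape[1] - win, step):
        patch = binary[y:y + win, x:x + win]
        ratio = cv2.countNonZero(patch) / float(win * win)
        # Between 90% and 100% full: roughly a circle inscribed in the window,
        # but not a window lying entirely inside a larger white region
        if 0.9 <= ratio < 1.0:
            candidates.append((x + win // 2, y + win // 2))
print('candidate centres:', len(candidates))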
In addition, on Piglet's result you can apply k-means with k = 2 to get a binary image, and then use findContours to count the small circles.
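A minimal sketch of that suggestion, assuming Piglet's saturation-times-red image has been saved as 'product.jpg' (a hypothetical filename):
import cv2
import numpy as np
product = cv2.imread('product.jpg', cv2.IMREAD_GRAYSCALE)
# k-means with k = 2 on the pixel intensities to split foreground from background
samples = product.reshape(-1, 1).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(samples, 2, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# Make the brighter cluster white to obtain a binary image
bright = np.argmax(centers.ravel())
binary = (labels.reshape(product.shape) == bright).astype(np.uint8) * 255
# Count the small circles as the external contours of the binary image
cnts = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
print('small circles found:', len(cnts))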

How to remove small particle background noise from an image?

I'm trying to remove gradient background noise from the images I have. I've tried many ways with cv2 without success.
I converted the image to grayscale first so it loses some of the gradients, which may help in finding the contours.
Does anybody know of a way to deal with this kind of background? I even tried taking a sample from the corners and applying some kind of kernel filter.
One way to remove gradients is to use cv2.medianBlur() to smooth out the image by taking the median of all pixels under a kernel. Then to extract the letters, you can perform cv2.adaptiveThreshold().
The blur removes most of the gradient noise. You can increase the kernel size to remove more, but it will also remove detail from the letters.
Adaptive threshold the image to extract the characters. From your original image, it seems like gradient noise was added onto the letters c, x, and z to make them blend into the background.
Next we can perform cv2.Canny() to detect edges and obtain this
Then we can do morphological opening using cv2.morphologyEx() to clean up the small noise and enhance details
Now we dilate using cv2.dilate() to obtain a single contour
From here, we find contours using cv2.findContours(). We iterate through each contour and filter using cv2.contourArea() with a minimum and maximum area to obtain bounding boxes. Depending on your image, you may have to adjust the min/max area filter. Here's the result
import cv2
import numpy as np
image = cv2.imread('1.png')
blur = cv2.medianBlur(image, 7)
gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(gray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY,11,3)
canny = cv2.Canny(thresh, 120, 255, 1)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
opening = cv2.morphologyEx(canny, cv2.MORPH_CLOSE, kernel)
dilate = cv2.dilate(opening, kernel, iterations=2)
cnts = cv2.findContours(dilate, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
min_area = 500
max_area = 7000
for c in cnts:
    area = cv2.contourArea(c)
    if area > min_area and area < max_area:
        x,y,w,h = cv2.boundingRect(c)
        cv2.rectangle(image, (x, y), (x + w, y + h), (36,255,12), 2)
cv2.imshow('blur', blur)
cv2.imshow('thresh', thresh)
cv2.imshow('canny', canny)
cv2.imshow('opening', opening)
cv2.imshow('dilate', dilate)
cv2.imshow('image', image)
cv2.waitKey(0)
You could assign each pixel a value that describes how dark it is. Then, for pixels with similar values, find the median and set those pixels to it.
Normalize the result to white, grey, and black; then you can differentiate between the background and the characters.
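As a rough sketch of this idea (the filename, the three fixed grey levels, and treating the darkest level as the characters are all assumptions):
import cv2
import numpy as np
gray = cv2.imread('1.png', cv2.IMREAD_GRAYSCALE)  # hypothetical filename
# Map every pixel to the nearest of three levels: black, grey, white
levels = np.array([0, 127, 255], dtype=np.float32)
distances = np.abs(gray.astype(np.float32)[..., None] - levels)  # H x W x 3 distances
quantized = levels[np.argmin(distances, axis=-1)].astype(np.uint8)
# Keep only the darkest level as the characters; grey and white become background
characters = np.where(quantized == 0, 0, 255).astype(np.uint8)
cv2.imshow('quantized', quantized)
cv2.imshow('characters', characters)
cv2.waitKey()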

How to remove the white border/edge around a figure in an image using python?

I want to remove the white border between the black mask and the body image
Image input examples:
Image output with thickness 1:
Image output with thickness 2:
I tried a few experiments with blurs and thresholds that I found on here.
I also used this code to find and draw the contour
thickness = 3
image = cv2.imread('../finetune/22.png')
blank_mask = np.zeros(image.shape, dtype=np.uint8)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cnts = cv2.findContours(gray, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cv2.drawContours(image, cnts, -1, (255,0,0), thickness)
cv2.imshow('image', image)
cv2.imwrite('../finetune/22-'+str(thickness)+'r.png',image)
cv2.waitKey()
However, the contour I've found is the edge of the black mask, not the white line.
I played with the thickness and it works nicely, but this contour is different for each image, and the thickness is not equal throughout the figure.
What is the most precise way to remove it?
Here are two methods:
Method #1: cv2.erode()
You can use erosion to erode away the boundaries of the white foreground object. Essentially the idea is to perform 2D convolution with a kernel. A kernel can be created using cv2.getStructuringElement(), where you pass the shape and size of the desired kernel. Typical kernel shapes are cv2.MORPH_RECT, cv2.MORPH_ELLIPSE, and cv2.MORPH_CROSS. The kernel slides through the image; a pixel is kept as 1 only if all the pixels under the kernel are 1, otherwise it is eroded to 0. The net effect is that pixels on the boundaries are discarded, depending on the shape and size of the kernel. The thickness of the foreground decreases, which is useful for removing small white noise or for detaching objects. You can adjust the strength of the erosion with the number of iterations.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
erode = cv2.erode(image, kernel, iterations=1)
Method #2: Opening with cv2.morphologyEx()
The opposite of erosion is dilation, which expands the white foreground. Typically, dilation is performed after erosion to "normalize" the effect of the morphological operation. OpenCV combines these steps into a single operation called morphological opening. Opening is just another name for erosion followed by dilation, and it will typically give you smoother results than eroding alone.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
opening = cv2.morphologyEx(image, cv2.MORPH_OPEN, kernel)
Result
You can experiment with the kernel shape and the number of iterations. To remove more noise, increase the kernel size and the number of iterations while to remove less, decrease the kernel size and the number of iterations.
import cv2
image = cv2.imread('1.png')
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
opening = cv2.morphologyEx(image, cv2.MORPH_OPEN, kernel, iterations=3)
cv2.imshow('opening', opening)
cv2.waitKey()
The answer above by nathancy shows exactly the result I want to achieve, but it does not help with my root problem.
Using drawContours I can draw the contours on my mask and improve it.
So here is more information about my problem:
After getting a mask from an image segmentation method, I want to change the background (in the example I use a black background), but I still have a white contour between the new background and the figure.
Original image:
https://drive.google.com/open?id=1P39VCEe2FTqkD6JbdM4ueMr_71C6nI42
Mask:
https://drive.google.com/open?id=1LTHaclsDOxRJCI9t5bLg3PeanseRR9bc
Output:
https://drive.google.com/open?id=1-uQx77-fmMf_9qFSgNMBvo77Q6Ag8qEZ
This is the code I use:
import cv2
import numpy as np

image = cv2.imread('../finetune/1.png')
mask = cv2.imread('../finetune/1mask.png')
output = np.zeros(image.shape, dtype=np.uint8)
output[np.where(mask == 255)] = image[np.where(mask == 255)]
cv2.imshow("output",output)
cv2.imwrite('../finetune/1.output.png',output)
With the help of the answer above I can find the contour again and create a new mask accordingly, but I'm sure there is a more elegant way to do so.
To clarify, I want to improve the mask so as to prevent the white border when I put the figure on a new background.
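For reference, one way this might be done, sketched under assumptions (single-channel mask, a 3 x 3 kernel, and two erosion iterations are guesses), is to shrink the mask slightly with erosion before compositing so the white border falls outside the kept region:
import cv2
import numpy as np
image = cv2.imread('../finetune/1.png')
mask = cv2.imread('../finetune/1mask.png', cv2.IMREAD_GRAYSCALE)
# Shrink the mask a little so the white border is excluded
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
mask_eroded = cv2.erode(mask, kernel, iterations=2)
# Composite the figure onto a black background using the shrunken mask
output = np.zeros(image.shape, dtype=np.uint8)
output[mask_eroded == 255] = image[mask_eroded == 255]
cv2.imshow('output', output)
cv2.waitKey()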
