I am using OpenCV and Python to detect shapes and then crop them. I have succeeded in doing that; however, now I am trying to take the cropped images and remove their backgrounds.
The image has a circle inside, surrounded by gray (the surround can be gray or even more than one color).
How can I remove the colors surrounding the circle border (which is black)? We could convert the surrounding gray to black to match the border color, or remove it entirely and make it transparent.
The result image should contain only the circle.
At least for this image, there is no need to detect the circle with HoughCircles. I think thresholding, finding the inner contour, then creating a mask and doing a bitwise-op is enough.
My steps:
(1) Read and convert to gray
(2) findContours
(3) find the smaller (inner) contour and create a mask from it
(4) do bitwise_and to crop
Here is my result:
#!/usr/bin/python3
# 2018.01.20 20:58:12 CST
# 2018.01.20 21:24:29 CST
import cv2
import numpy as np
## (1) Read
img = cv2.imread("img04.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
## (2) Threshold
th, threshed = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV|cv2.THRESH_OTSU)
## (3) Find the min-area contour
cnts = cv2.findContours(threshed, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2]
cnts = sorted(cnts, key=cv2.contourArea)
for cnt in cnts:
    if cv2.contourArea(cnt) > 100:
        break
## (4) Create mask and do bitwise-op
mask = np.zeros(img.shape[:2],np.uint8)
cv2.drawContours(mask, [cnt],-1, 255, -1)
dst = cv2.bitwise_and(img, img, mask=mask)
## Save it
cv2.imwrite("dst.png", dst)
Related
The objective is to remove the large irregular area and keep only the characters in the image.
For example, given the following
and the expected masked output
I have the impression this can be achieved as below
import cv2
import numpy as np
from matplotlib import pyplot as plt
dpath='remove_bg1.jpg'
img = cv2.imread(dpath)
img_fh=img.copy()
cv2.bitwise_not(img_fh,img_fh)
ksize=10
kernel = np.ones((ksize,ksize),np.uint8)
erosion = cv2.erode(img_fh,kernel,iterations = 3)
invertx = cv2.bitwise_not(erosion)
masked = cv2.bitwise_not(cv2.bitwise_and(img_fh,invertx))
all_image=[img,invertx,masked]
ncol=len(all_image)
for idx, i in enumerate(all_image):
    plt.subplot(1, ncol, idx + 1), plt.imshow(i)
plt.show()
which produces
Clearly, the code above did not produce the expected result.
May I know how to address this issue properly?
To remove the unwanted blob, we must create a mask that encloses it completely.
Flow:
Inversely binarize the image (such that you have a white foreground against dark background)
Erode the image (since the blob makes contact with the letter 'A', it has to be isolated)
Find contour with the largest area
Draw the contour on another 1-channel image and thicken it (dilation)
Pixel Assignment: Pixels containing the dilated blob are made white on the original image
Code:
import cv2
import numpy as np

im = cv2.imread('stained_text.jpg')
im2 = im.copy()
gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
# inverse binarization
th = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
Notice the blob region touching the letter 'A'. Hence to isolate it we perform erosion using an elliptical kernel
# erosion
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5,5))
erode = cv2.erode(th, kernel, iterations=2)
# find contours
contours, hierarchy = cv2.findContours(erode, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# Contour of maximum area
c = max(contours, key = cv2.contourArea)
# create 1-channel image in black
black = np.zeros((im.shape[0], im.shape[1]), np.uint8)
# draw the contour on it
black = cv2.drawContours(black, [c], 0, 255, -1)
# perform dilation to have clean border
# we are using the same kernel
dilate = cv2.dilate(black, kernel, iterations = 3)
# assign the dilated area in white over the original image
im2[dilate == 255] = (255,255,255)
This is just one of many possible ways to proceed. The key thing to note is how to isolate the blob.
I've got a shape detection with OpenCV in Python going on; bolts and nuts. I take a picture, make it binary, and detect edges. Now the white area is always grainy because of dust and grime. My detection uses the largest areas as parts, which works great. But how can I delete the thousands of objects caused by dust?
In short, I want to reduce the array of shapes to only the biggest ones for further processing.
Here is one way to do that as per my comment above using Python/OpenCV.
From your binary image get the contours. Then select the largest contour. Then draw a white filled contour on a black background image the same size as your input as a mask. Then use numpy to blacken everything in your image that is black in your mask.
Input:
import cv2
import numpy as np
# load image
img = cv2.imread("coke_bottle2.png")
hh, ww = img.shape[:2]
# convert to gray
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# threshold to binary
thresh = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY)[1]
# apply morphology closing to fill black holes and smooth outline
# could use opening to remove white spots, but we will use contours
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (25,25))
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# get the largest contour
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
big_contour = max(contours, key=cv2.contourArea)
# draw largest contour as white filled on black background as mask
mask = np.zeros((hh,ww), dtype=np.uint8)
cv2.drawContours(mask, [big_contour], 0, 255, -1)
# use mask to black all but largest contour
result = img.copy()
result[mask==0] = (0,0,0)
# write result to disk
cv2.imwrite("coke_bottle2_threshold.png", thresh)
cv2.imwrite("coke_bottle2_mask.png", mask)
cv2.imwrite("coke_bottle2_background_removed.jpg", result)
# display it
cv2.imshow("thresh", thresh)
cv2.imshow("mask", mask)
cv2.imshow("result", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Threshold Image (contains small extraneous white regions):
Mask Image (only the largest filled contour):
Result:
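If several parts can be in the frame at once, the same approach extends to keeping every contour above a minimum area instead of only the single largest one. A minimal sketch, reusing thresh, img, hh and ww from the code above, with a hypothetical area threshold:
# keep every contour larger than a (hypothetical) minimum area, not just the biggest
min_area = 5000   # tune to the size of your smallest real part
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
big_contours = [c for c in contours if cv2.contourArea(c) > min_area]
# draw all kept contours as white filled regions on a black mask
mask = np.zeros((hh, ww), dtype=np.uint8)
cv2.drawContours(mask, big_contours, -1, 255, -1)
# black out everything not covered by a kept contour
result = img.copy()
result[mask == 0] = (0, 0, 0)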
I'm a beginner in image processing with Python so I need help.
I'm trying to remove areas of connected pixels from my pictures with the code posted below. Actually, it works but not well.
What I want is to remove areas of connected pixels, such as those marked in red in the pictures reported below, from my images, so as to obtain a cleaned picture.
It would also be great to be able to set a minimum and a maximum limit for the size of the detected areas of connected pixels.
Example of a picture with marked areas 1
Example of a picture with marked areas 2
This is my currently code:
### LOAD MODULES ###
import numpy as np
import imutils
import cv2
def is_contour_bad(c):  # Decide what I want to find and its features
    peri = cv2.contourArea(c, True)  # Find areas
    approx = cv2.approxPolyDP(c, 0.3*peri, True)  # Set areas approximation
    return not len(approx) > 2  # Threshold to decide if an area is added to the mask for removal (if > 2, remove)
### DATA PROCESSING ###
image=cv2.imread("025.jpg") # Load a picture
gray=cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) # Convert to grayscale
cv2.imshow("Original image", image) # Plot
edged=cv2.Canny(gray, 50, 200, 3) # Edges of areas detection
cnts=cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE) # Find contours: a curve joining all the continuous points (along the boundary), having same color or intensity
cnts=imutils.grab_contours(cnts)
mask=np.ones(image.shape[:2], dtype="uint8")*255 # Setup the mask with white background
# Loop over the detected contours
for c in cnts:
    # If the contour satisfies "is_contour_bad", draw it on the mask
    if is_contour_bad(c):
        cv2.drawContours(mask, [c], -1, 0, -1)  # (source image, list of contours, with -1 all contours in [c] pass, 0 is the intensity, -1 the thickness)
image_cleaned=cv2.bitwise_and(image, image, mask=mask) # Remove the contours from the original image
cv2.imshow("Adopted mask", mask) # Plot
cv2.imshow("Cleaned image", image_cleaned) # Plot
cv2.imwrite("cleaned_025.jpg", image_cleaned) # Write in a file
You may execute the following processing steps:
Threshold the image to binary image using cv2.threshold.
It's not a must, but in your case it looks like shades of gray are not important.
Use closing morphological operation, for closing small gaps in the binary image.
Use cv2.findContours with cv2.RETR_EXTERNAL parameter, for getting the contours (perimeter) surrounding the white clusters.
Modify the logic of "bad contour", to return true, only if area is large (assuming you only want to clean the large three contour).
Here is the updated code:
### LOAD MODULES ###
import numpy as np
import imutils
import cv2
def is_contour_bad(c):  # Decide what I want to find and its features
    peri = cv2.contourArea(c)  # Find areas
    return peri > 50  # Large area is considered "bad"
### DATA PROCESSING ###
image = cv2.imread("025.jpg") # Load a picture
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) # Convert to grayscale
# Convert to binary image (all values above 20 are converted to 1 and below to 0)
ret, thresh_gray = cv2.threshold(gray, 20, 255, cv2.THRESH_BINARY)
# Use "close" morphological operation to close the gaps between contours
# https://stackoverflow.com/questions/18339988/implementing-imcloseim-se-in-opencv
thresh_gray = cv2.morphologyEx(thresh_gray, cv2.MORPH_CLOSE, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5,5)));
#Find contours on thresh_gray, use cv2.RETR_EXTERNAL to get external perimeter
_, cnts, _ = cv2.findContours(thresh_gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) # Find contours: a curve joining all the continuous points (along the boundary), having same color or intensity
image_cleaned = gray
# Loop over the detected contours
for c in cnts:
    # If the contour satisfies "is_contour_bad", draw it on the mask
    if is_contour_bad(c):
        # Draw black contour on gray image, instead of using a mask
        cv2.drawContours(image_cleaned, [c], -1, 0, -1)
#cv2.imshow("Adopted mask", mask) # Plot
cv2.imshow("Cleaned image", image_cleaned) # Plot
cv2.imwrite("cleaned_025.jpg", image_cleaned) # Write in a file
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
Marking contours found for testing:
for c in cnts:
    if is_contour_bad(c):
        # Draw green line for marking the contour
        cv2.drawContours(image, [c], 0, (0, 255, 0), 1)
Result:
There is still work to be done...
Update
Two iterations approach:
First iteration - remove the large contour.
Second iteration - remove small but bright contours.
Here is the code:
import numpy as np
import imutils
import cv2
def is_contour_bad(c, thrs):  # Decide what I want to find and its features
    peri = cv2.contourArea(c)  # Find areas
    return peri > thrs  # Large area is considered "bad"
image = cv2.imread("025.jpg") # Load a picture
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) # Convert to grayscale
# First iteration - remove the large contour
###########################################################################
# Convert to binary image (all values above 20 are converted to 1 and below to 0)
ret, thresh_gray = cv2.threshold(gray, 20, 255, cv2.THRESH_BINARY)
# Use "close" morphological operation to close the gaps between contours
# https://stackoverflow.com/questions/18339988/implementing-imcloseim-se-in-opencv
thresh_gray = cv2.morphologyEx(thresh_gray, cv2.MORPH_CLOSE, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5,5)));
#Find contours on thresh_gray, use cv2.RETR_EXTERNAL to get external perimeter
_, cnts, _ = cv2.findContours(thresh_gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) # Find contours: a curve joining all the continuous points (along the boundary), having same color or intensity
image_cleaned = gray
# Loop over the detected contours
for c in cnts:
    # If the contour satisfies "is_contour_bad", draw it on the mask
    if is_contour_bad(c, 1000):
        # Draw black contour on gray image, instead of using a mask
        cv2.drawContours(image_cleaned, [c], -1, 0, -1)
###########################################################################
# Second iteration - remove small but bright contours
###########################################################################
# In the second iteration, use high threshold
ret, thresh_gray = cv2.threshold(image_cleaned, 150, 255, cv2.THRESH_BINARY)
# Use "dilate" with small radius
thresh_gray = cv2.morphologyEx(thresh_gray, cv2.MORPH_DILATE, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2,2)));
#Find contours on thresh_gray, use cv2.RETR_EXTERNAL to get external perimeter
_, cnts, _ = cv2.findContours(thresh_gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) # Find contours: a curve joining all the continuous points (along the boundary), having same color or intensity
# Loop over the detected contours
for c in cnts:
    # If the contour satisfies "is_contour_bad", draw it on the mask
    # Remove contour if area is above 20 pixels
    if is_contour_bad(c, 20):
        # Draw black contour on gray image, instead of using a mask
        cv2.drawContours(image_cleaned, [c], -1, 0, -1)
###########################################################################
Marked contours:
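As for the minimum and maximum size limits asked about in the question, the "bad contour" test can simply take both bounds. A minimal sketch with hypothetical limits:
def is_contour_bad(c, min_area=50, max_area=10000):  # hypothetical limits, tune per image
    # a contour is removed only if its area falls within the given range
    area = cv2.contourArea(c)
    return min_area < area < max_area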
I have an image like this:
And I want to crop the image anywhere there is red.
So with this image I would be looking to produce 4 crops:
Obviously I first need to detect anywhere there is red in the image. I can do the following:
import cv2
import numpy as np
from google.colab.patches import cv2_imshow
## (1) Read and convert to HSV
img = cv2.imread("my_image_with_red.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
## (2) Find the target red region in HSV
hsv_lower = np.array([0,50,50])
hsv_upper = np.array([10,255,255])
mask = cv2.inRange(hsv, hsv_lower, hsv_upper)
## (3) morph-op to remove horizontal lines
kernel = np.ones((5,1), np.uint8)
mask2 = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
## (4) crop the region
ys, xs = np.nonzero(mask2)
ymin, ymax = ys.min(), ys.max()
xmin, xmax = xs.min(), xs.max()
croped = img[ymin:ymax, xmin:xmax]
pts = np.int32([[xmin, ymin],[xmin,ymax],[xmax,ymax],[xmax,ymin]])
cv2.drawContours(img, [pts], -1, (0,255,0), 1, cv2.LINE_AA)
cv2_imshow(croped)
cv2_imshow(img)
cv2.waitKey()
Which gives the following result:
The bounding box covers the entire area containing red.
How can I get bounding boxes around each red piece of the image? I have looked into multiple masks but this doesn't seem to work.
What I am looking for is:
detect each red spot in the image;
return boundaries on each red dot;
use those boundaries to produce 4 individual crops as new images.
There are currently several problems:
If you look at your mask image, you will see that all traces of red are captured on the mask including the small noise. You're currently using np.nonzero() which captures all white pixels. This is what causes the bounding box to cover the entire area. To fix this, we can tighten up the lower hsv threshold to get this resulting mask:
Note there are still a lot of small blobs. Your question should be rephrased to
How can I crop the large red regions?
If you want to capture all red regions, you will obtain many more than 4 crops. So to remedy this, we will perform morphological operations to remove the small noise and keep only the large, pronounced red regions. This results in a mask image that contains only the large regions.
You do not require multiple masks
How can I get bounding boxes around each red piece of the image?
You can do this using cv2.findContours() on the mask image to return the bounding rectangles of each red dot.
But this is not quite your desired result. Since your desired result has some space surrounding each red dot, we also need to include an offset in the bounding rectangle. After adding an offset, here's our result
Since we have the bounding rectangles, we can simply use Numpy slicing to extract and save each ROI. Here are the saved ROIs
So to recap, to detect each red spot in the image, we can use HSV color thresholding. Note this will return all pixels that match the threshold, which may be more than you expect, so it is necessary to perform morphological operations to filter the resulting mask. To obtain the bounding rectangle of each red blob, we can use cv2.findContours() and then cv2.boundingRect() to give us the ROIs. Once we have each ROI, we add an offset and extract it using Numpy slicing.
import cv2
import numpy as np
image = cv2.imread("1.png")
original = image.copy()
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
hsv_lower = np.array([0,150,50])
hsv_upper = np.array([10,255,255])
mask = cv2.inRange(hsv, hsv_lower, hsv_upper)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
opening = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=1)
close = cv2.morphologyEx(opening, cv2.MORPH_CLOSE, kernel, iterations=1)
cnts = cv2.findContours(close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
offset = 20
ROI_number = 0
for c in cnts:
    x,y,w,h = cv2.boundingRect(c)
    cv2.rectangle(image, (x - offset, y - offset), (x + w + offset, y + h + offset), (36,255,12), 2)
    ROI = original[y-offset:y+h+offset, x-offset:x+w+offset]
    cv2.imwrite('ROI_{}.png'.format(ROI_number), ROI)
    ROI_number += 1
cv2.imshow('mask', mask)
cv2.imshow('close', close)
cv2.imshow('image', image)
cv2.waitKey()
I am new to OpenCV so I really need your help. I have a bunch of images like this one:
I need to detect the rectangle on the image, extract the text part from it and save it as a new image.
Can you please help me with this?
Thank you!
Just to add to Danyal's answer, I have added example code with the steps written in comments. For this image you don't even need to perform morphological opening, but for this kind of noise it is usually recommended. Cheers!
import cv2
import numpy as np
# Read the image and create a blank mask
img = cv2.imread('napis.jpg')
h,w = img.shape[:2]
mask = np.zeros((h,w), np.uint8)
# Transform to gray colorspace and invert Otsu threshold the image
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
# ***OPTIONAL FOR THIS IMAGE
### Perform opening (erosion followed by dilation)
#kernel = np.ones((2,2),np.uint8)
#opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
# ***
# Search for contours, select the biggest and draw it on the mask
_, contours, hierarchy = cv2.findContours(thresh, # if you use opening then change "thresh" to "opening"
cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)
cv2.drawContours(mask, [cnt], 0, 255, -1)
# Perform a bitwise operation
res = cv2.bitwise_and(img, img, mask=mask)
########### The result is a ROI with some noise
########### Clearing the noise
# Create a new mask
mask = np.zeros((h,w), np.uint8)
# Transform the resulting image to gray colorspace and Otsu threshold the image
gray = cv2.cvtColor(res,cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# Search for contours and select the biggest one again
_, contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)
# Draw it on the new mask and perform a bitwise operation again
cv2.drawContours(mask, [cnt], 0, 255, -1)
res = cv2.bitwise_and(img, img, mask=mask)
# If you will use pytesseract it is wise to add an additional white border
# so that the letters aren't on the borders
x,y,w,h = cv2.boundingRect(cnt)
cv2.rectangle(res,(x,y),(x+w,y+h),(255,255,255),1)
# Crop the result
final_image = res[y:y+h+1, x:x+w+1]
# Display the result
cv2.imshow('img', final_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
One way to do this (if the rectangle sizes are somewhat predictable) is:
Convert the image to black and white
Invert the image
Perform morphological opening on the image from (2) with a horizontal line / rectangle (I tried with 2x30).
Perform morphological opening on the image from (2) with a vertical line (I tried it with 15x2).
Add the images from (3) and (4). You should only have a white rectangle now. You can then remove all corresponding rows and columns in the original image that are entirely zero in this image.
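A minimal sketch of these steps, assuming a hypothetical input file name and the kernel sizes mentioned above (2x30 horizontal, 15x2 vertical), which will need tuning to your rectangle sizes:
import cv2
import numpy as np
# (1) Read and convert to black and white, (2) invert so dark ink and the border become white
img = cv2.imread('boxed_text.jpg')   # hypothetical file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# (3) opening with a horizontal line keeps only long horizontal strokes (the rectangle's top/bottom)
horiz = cv2.morphologyEx(bw, cv2.MORPH_OPEN, np.ones((2, 30), np.uint8))
# (4) opening with a vertical line keeps only long vertical strokes (the rectangle's sides)
vert = cv2.morphologyEx(bw, cv2.MORPH_OPEN, np.ones((15, 2), np.uint8))
# (5) combine; only the rectangle outline should remain
rect = cv2.add(horiz, vert)
# drop every row and column that is entirely zero in the rectangle image, i.e. crop to the rectangle
rows = rect.any(axis=1)
cols = rect.any(axis=0)
cropped = img[rows][:, cols]
cv2.imwrite('cropped_text.png', cropped)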