Find specific contours within an image (Python OpenCV)

I have a script that loads an image and, using selectROI(), lets me select and crop a specific part of that image and get the contours of just that part alone. But how can I search whether there are any other contours in the original image like the one I selected and cropped? My goal is to teach a shape and verify whether that shape occurs in any other part of that image, or in any other image that I load afterwards, ideally with a certain tolerance of correspondence.
I could try something like object detection using a Haar cascade or YOLO, but I am positive there is a way to do it without relying on heavyweight AI models, especially because I want to use it on static images, not on video. I say that because that is how it is done in industrial vision systems: you only need to load a single image and select the object that you want to detect so its contours can be drawn. When you load another image, the software looks for those contours up to a certain level of correspondence.
import cv2 as cv
import numpy as np

# Load image
img = cv.imread('C:/Users/ALEMAC/Downloads/geometricShapes.jpg')

# Select ROI and crop it out of the original image
imgdraw = cv.selectROI(img)
cropimg = img[int(imgdraw[1]):int(imgdraw[1] + imgdraw[3]),
              int(imgdraw[0]):int(imgdraw[0] + imgdraw[2])]
cv.imshow('Cropped_image', cropimg)

blank = np.zeros(cropimg.shape[:2], dtype='uint8')  # blank image, same size as the cropped region
gray = cv.cvtColor(cropimg, cv.COLOR_BGR2GRAY)
blur = cv.GaussianBlur(gray, (3, 3), cv.BORDER_DEFAULT)

# Find edges using the contours method
ret, thresh = cv.threshold(blur, 125, 255, cv.THRESH_BINARY)
#cv.imshow('Thresh', thresh)
contours, hierarchies = cv.findContours(thresh, cv.RETR_TREE, cv.CHAIN_APPROX_SIMPLE)
cv.drawContours(blank, contours, -1, (255, 255, 255), thickness=1)
cv.imshow('Contours', blank)
cv.waitKey(0)
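For the matching itself, a lightweight option in the spirit of industrial vision tools is Hu-moment shape matching via cv.matchShapes. Below is a minimal sketch continuing the script above: it treats the largest contour of the cropped ROI as the taught shape and scores every contour found in a second image against it. The second image path and the 0.2 tolerance are illustrative assumptions, not values from the original post.
# Taught shape: assume the largest contour in the cropped ROI is the one of interest
taught = max(contours, key=cv.contourArea)

# Load a new image and extract candidate contours the same way as above
new_img = cv.imread('C:/Users/ALEMAC/Downloads/otherShapes.jpg')  # assumed path
new_gray = cv.cvtColor(new_img, cv.COLOR_BGR2GRAY)
new_blur = cv.GaussianBlur(new_gray, (3, 3), cv.BORDER_DEFAULT)
_, new_thresh = cv.threshold(new_blur, 125, 255, cv.THRESH_BINARY)
candidates, _ = cv.findContours(new_thresh, cv.RETR_TREE, cv.CHAIN_APPROX_SIMPLE)

for c in candidates:
    # matchShapes compares Hu moments: 0 means identical, larger means less similar
    score = cv.matchShapes(taught, c, cv.CONTOURS_MATCH_I1, 0.0)
    if score < 0.2:  # correspondence tolerance, tune per application
        cv.drawContours(new_img, [c], -1, (0, 255, 0), 2)

cv.imshow('Matches', new_img)
cv.waitKey(0)
Because Hu moments are invariant to translation, scale, and rotation, this stays cheap while tolerating moderate variation; template matching (cv.matchTemplate) is an alternative when orientation and scale are fixed.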

Related

Change background color for Thresholded image

I have been trying to write code to extract cracks from an image using thresholding. However, I want to keep the background black. What would be a good solution to keep the outer boundary visible and the background black? Attached below are the original image, the thresholded image, and the code used to produce it.
import cv2
import numpy as np
import matplotlib.pyplot as plt

# Read image
img = cv2.imread('Original.png')
# Convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Smooth with an averaging blur
blur = cv2.blur(gray, (3, 3))
ret, th1 = cv2.threshold(blur, 145, 255, cv2.THRESH_BINARY)
inverted = np.invert(th1)

plt.figure(figsize=(20, 20))
plt.subplot(121), plt.imshow(img)
plt.title('Original'), plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(inverted, cmap='gray')
plt.title('Threshold'), plt.xticks([]), plt.yticks([])
Method 1
Assuming the circle in your images stays in one spot throughout your image set you can manually create a black 'mask' image with a white hole in the middle, then overlay it on the final inverted image.
You can easily make the mask image using your favorite image editor's magic wand tool.
I made this1 by also expanding the circle inwards by one pixel to take into account some of the pixels the magic wand tool couldn't catch.
You would then use the mask image like this:
mask = cv2.imread('/path/to/mask.png', cv2.IMREAD_GRAYSCALE)  # the mask must be single-channel
masked = cv2.bitwise_and(inverted, inverted, mask=mask)
Method 2
If the circle does NOT stay in the same spot throughout your entire image set, you can try to make the mask from all the fully black pixels in your original image. This assumes that the 'sample' itself (the thing with the cracks) does not contain fully black pixels, although it will leave the text on the bottom left white.
# make all the non black pixels white
_,mask = cv2.threshold(gray,1,255,cv2.THRESH_BINARY)
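To complete Method 2, the generated mask is then applied the same way as in Method 1; a one-line sketch reusing the variable names from the snippets above:
masked = cv2.bitwise_and(inverted, inverted, mask=mask)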
1 The original is not the same size as your inverted image, so the mask I made won't actually fit; you're gonna have to make it yourself.

How to automatically detect a specific feature from one image and map it to another mask image? Then how to smooth only the corners of the image?

Using the dlib library I was able to mask the mouth feature from one image (masked).
Similarly, I have another cropped image of the mouth that does not have the mask (colorlip).
I scaled and replaced the images (replaced) using np.where, as shown in the code below.
import cv2
import numpy as np

# Get the values of the lip and the target mask
lip = pred_toblackscreen[bbox_lip[0]:bbox_lip[1], bbox_lip[2]:bbox_lip[3], :]
target = roi[bbox_mask[0]:bbox_mask[1], bbox_mask[2]:bbox_mask[3], :]
cv2.namedWindow('masked', cv2.WINDOW_NORMAL)
cv2.imshow('masked', target)
#Resize the lip to be the same scale/shape as the mask
lip_h, lip_w, _ = lip.shape
target_h, target_w, _ = target.shape
fy = target_h / lip_h
fx = target_w / lip_w
scaled_lip = cv2.resize(lip,(0,0),fx=fx,fy=fy)
cv2.namedWindow('colorlip', cv2.WINDOW_NORMAL)
cv2.imshow('colorlip', scaled_lip)
update = np.where(target==[0,0,0],scaled_lip,target)
cv2.namedWindow('replaced', cv2.WINDOW_NORMAL)
cv2.imshow('replaced', update)
But the feature shape (lip) in 'colorlip' does not match the 'masked' image, so there is a misalignment, and the edges of the mask look sharp, as if the image has been overlaid. How can I solve this problem and make the final replaced image look more subtle and natural?
**Update #2: OpenCV image inpainting to smooth jagged borders.**
OpenCV's Python inpainting should help with rough borders. Using the mouth landmark model, a mouth segmentation mask from a DL model, or whatever was used originally, the border location can be found. From that, draw a border with a small chosen width around the mouth contour in a new image and use it as the mask for inpainting. The mask I provided needs to be inverted to work.
Of the input masks, one is wider, one has a shadow, and the last one is narrow. The six output images were generated with radius values of 5 and 20 for all three masks.
Code
import numpy as np
import cv2

img = cv2.imread('images/lip_img.png')
#mask = cv2.imread('images/lip_img_border_mask.png', 0)
mask = cv2.imread('images/lip_img_border_mask2.png', 0)
#mask = cv2.imread('images/lip_img_border_mask3.png', 0)
mask = np.invert(mask)

# Choose an appropriate method and radius
radius = 20
dst = cv2.inpaint(img, mask, radius, cv2.INPAINT_TELEA)
# dst = cv2.inpaint(img, mask, radius, cv2.INPAINT_NS)

cv2.imwrite('images/inpainted_lip.jpg', dst)
cv2.imshow('dst', dst)
cv2.waitKey(0)
cv2.destroyAllWindows()
Input Image and Masks
Output Images
**Update #1: Added deep image harmonization based blending methods.**
Try OpenCV seamless cloning for subtle replacement and to get rid of sharp edges. Deep-learning-based image inpainting on the sharp corners, or combining it with seamless cloning, may also give better results.
Deep-learning-based image harmonization is another approach to blend two images so that the cropped part matches the style of the background image. Here too the pixel intensities will change to match the background, but the blending will be smoother. Links are added at the bottom of the post.
Example
This code example is based on the learnopencv seamless cloning example:
import cv2
import numpy as np
src = cv2.imread("images/src_img.jpg")
dst = cv2.imread("images/dest_img.jpg")
src_mask = cv2.imread("images/src_img_rough_mask.jpg")
src_mask = np.invert(src_mask)
cv2.namedWindow('src_mask', cv2.WINDOW_NORMAL)
cv2.imshow('src_mask', src_mask)
cv2.waitKey(0)
# Where to place image.
center = (500,500)
# Clone seamlessly.
output = cv2.seamlessClone(src, dst, src_mask, center, cv2.NORMAL_CLONE)
# Write result
cv2.imwrite("images/opencv-seamless-cloning-example.jpg", output)
cv2.namedWindow('output', cv2.WINDOW_NORMAL)
cv2.imshow('output', output)
cv2.waitKey(0)
Source Image
Rough Mask Image
Destination Image
Final Image
Reference
https://docs.opencv.org/4.5.4/df/da0/group__photo__clone.html
https://learnopencv.com/seamless-cloning-using-opencv-python-cpp/
https://learnopencv.com/face-swap-using-opencv-c-python/
https://github.com/JiahuiYu/generative_inpainting
https://docs.opencv.org/4.x/df/d3d/tutorial_py_inpainting.html
Deep Image Harmonization
https://github.com/bcmi/Image-Harmonization-Dataset-iHarmony4
https://github.com/wasidennis/DeepHarmonization
https://github.com/saic-vul/image_harmonization
https://github.com/wuhuikai/GP-GAN
https://github.com/junleen/RainNet
https://github.com/bcmi/BargainNet-Image-Harmonization
https://github.com/vinthony/s2am

How can I use blob detection to isolate a region in an image

I want to measure the area of land in an aerial-view image, and I was advised to first use blob detection to isolate the region and threshold the image. Here is what I have done, but I am not sure if it is correct.
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread('landarea.jpg', cv2.IMREAD_COLOR)

# Set up the detector with default parameters
detector = cv2.SimpleBlobDetector_create()

# Detect blobs
keypoints = detector.detect(img)

# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS makes the size of each circle
# correspond to the size of the blob.
im_with_keypoints = cv2.drawKeypoints(img, keypoints, np.array([]), (0, 0, 255),
                                      cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

# Show keypoints
print(im_with_keypoints.size)
cv2.imshow("Blob", im_with_keypoints)
cv2.waitKey(0)
cv2.destroyAllWindows()

# Convert to gray
gray = cv2.cvtColor(im_with_keypoints, cv2.COLOR_BGR2GRAY)

# Threshold the image with Otsu's method
ret3, th3 = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

titles = ["Otsu's Thresholding"]
images = [th3]
plt.figure(figsize=(15, 10))
for i in range(1):
    plt.subplot(1, 1, i + 1), plt.imshow(images[i], 'gray')
    plt.title(titles[i])
    plt.xticks([]), plt.yticks([])
plt.show()
Image Link
In summary, this is what I am trying to achieve.
Task: Land Area Measurement
I am currently working on getting the area, width, and height measurements of a piece of land from aerial mapping images. The steps taken to achieve this are listed below:
I was advised that it is better to write Python code from scratch to do my image processing.
Also from SO, I was advised to use a blob detector to isolate regions, threshold my image, and count the number of white pixels; then I can calibrate the dimensions of the image against ground-truth dimensions.
I have been able to detect blobs, threshold the image, and get the count of white pixels. My major challenge is with the last two steps and how to get the measurement from them.
Also, a friend said that the shape of any photo is normally a square, rectangle, etc., so the area might not vary if I measure the area from photos.
I do not think blob detection is going to work well. You would need to threshold the image in some way to separate the land area from everything else. From what part of the image do you want to get the area?
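For the pixel-counting and calibration steps the question describes, here is a minimal sketch. The threshold-image file name and the metres-per-pixel value are assumptions for illustration; the real scale would come from a known reference distance in the imagery.
import cv2

# Count white (land) pixels in the thresholded image
th3 = cv2.imread('landarea_thresholded.png', cv2.IMREAD_GRAYSCALE)  # assumed file name
white_pixels = cv2.countNonZero(th3)

# Calibrate: metres per pixel measured from a known ground-truth distance
metres_per_pixel = 0.5  # assumed value, measure this from your imagery
area_m2 = white_pixels * metres_per_pixel ** 2
print(f'Estimated land area: {area_m2:.1f} square metres')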

Image variable in imread()

I'm trying to subtract the background of an image and then draw contours.
I'm using the removebg API to subtract the background, and I want to take the resulting image and draw contours on it.
Removing the background:
rmbg = RemoveBg("MY_KEY_API", "error.log")
img_no_bg = rmbg.remove_background_from_img_file("images/imagetest.png")
I want to convert the result imagetest_no-bg.png to grayscale, but I can't use img_no_bg as the argument to imread().
image = cv2.imread(img_no_bg)
gray = cv2.cvtColor(img_no_bg, cv2.COLOR_BGR2GRAY)
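One likely fix, sketched under the assumption that remove_background_from_img_file saves its output to disk rather than returning an image array (the question itself names the saved file imagetest_no-bg.png):
import cv2

# Read the file that removebg wrote to disk, then convert it to grayscale
image = cv2.imread('images/imagetest_no-bg.png')  # path assumed to match the saved file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)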

Cut a non-rectangular piece (e.g. a trapezoid) out of an image and turn it into a rectangle that fits the window

I will give an example: I have a picture of a swimming pool with some lanes, and I want to take only the three middle lanes. What is the best way to cut the image in a trapezoid shape, and then how do I take this trapezoid and fit it to a window so that the two sides (upper and lower) keep a relatively similar ratio?
image for the example
I modified this example
Result:
import numpy as np
import cv2
# load image
img = cv2.imread('pool.jpg')
# resize to easily fit on screen
img = cv2.resize(img,None,fx=0.5, fy=0.5, interpolation = cv2.INTER_CUBIC)
# determine cornerpoints of the region of interest
pts1 = np.float32([[400,30],[620,30],[50,700],[1000,700]])
# provide new coordinates of cornerpoints
pts2 = np.float32([[0,0],[300,0],[0,600],[300,600]])
# determine transformationmatrix
M = cv2.getPerspectiveTransform(pts1,pts2)
# apply transformationmatrix
dst = cv2.warpPerspective(img,M,(300,600))
# display image
cv2.imshow("img", dst)
cv2.waitKey(0)
cv2.destroyAllWindows()
Note the resize function; you may wish to delete that line, but then you will have to change the coordinates of the corner points accordingly.
I used roughly the height and width of the base of the trapezoid for the new image size (300, 600).
You can tweak the corner points and the final image size as you see fit.
You can use the imutils.four_point_transform function. You can read more about it here.
The basic usage is: find the document contours on a Canny edge-detected image (again, you can use the imutils package that I linked), then apply four_point_transform to the chosen contour.
EDIT: How to use canny edge detection and four_point_transform
For finding contours you can use openCV and imutils like this:
cnts = cv2.findContours(edged_image.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
Now, when you have the contours, just iterate through them and see which one is the biggest and has four points (4 vertices). Then pass the image and that contour to the four_point_transform function like this:
image_2 = four_point_transform(image, biggest_contour)
That's it.
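Putting the pieces together, a minimal end-to-end sketch of what this answer describes; the blur kernel, Canny thresholds, and the 0.02 approximation factor are illustrative assumptions:
import cv2
import imutils
from imutils.perspective import four_point_transform

image = cv2.imread('pool.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edged_image = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 75, 200)

cnts = cv2.findContours(edged_image.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)

# Take the largest contour that approximates to four vertices
biggest_contour = None
for c in sorted(cnts, key=cv2.contourArea, reverse=True):
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)
    if len(approx) == 4:
        biggest_contour = approx.reshape(4, 2)
        break

if biggest_contour is not None:
    image_2 = four_point_transform(image, biggest_contour)
    cv2.imshow('warped', image_2)
    cv2.waitKey(0)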
