Find the contour with the largest enclosing area using OpenCV (Python)

I have an image (Original Image), and I would like to find the contour that encloses the box in it. The reason is that I would like to crop the image to that bounding box and then perform further image processing on the cropped image.
I have tried detecting Canny edges, but they do not connect the way I want them to. Attached is an image of how the Canny edges look (Canny edges):
import cv2

gray = img[:, :, 1]                          # use the green channel as "gray"; img is the loaded BGR image
blurred = cv2.GaussianBlur(gray, (5, 5), 0)  # smooth before edge detection
edged = cv2.Canny(blurred, 20, 60)
What is the best way to find the bounding box from the original image?
Many thanks.
Let me know how I can make this question clearer if possible too!

I assume the following (if this is not the case, you should specify such things in your question):
You know the size of the box
The size is always the same
The perspective is always the same
The box is always completely within the field of view
The box is not rotated
Use a few scan lines across the image to find the transition from the black background to the box (in x and y).
Use whatever criterion suits you best: a threshold being exceeded, the maximum gradient, or similar.
Discard outliers, then use the min and max coordinates to position the fixed-size ROI over your box, as in the sketch below.
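A minimal sketch of that scan-line idea, assuming a dark background and a brighter box; the filename, threshold value, and number of scan lines are illustrative:
import cv2
import numpy as np

gray = cv2.cvtColor(cv2.imread("box.png"), cv2.COLOR_BGR2GRAY)
thresh = 60                                                    # tune to your contrast
rows = np.linspace(0, gray.shape[0] - 1, 7, dtype=int)[1:-1]   # a few horizontal scan lines

xs = []
for r in rows:
    hits = np.flatnonzero(gray[r, :] > thresh)   # background-to-box transitions
    if hits.size:
        xs.extend([hits[0], hits[-1]])

xs = np.sort(np.array(xs))
k = len(xs) // 10
xs = xs[k:len(xs) - k]                           # crude outlier rejection
x_min, x_max = xs[0], xs[-1]
# Repeat with vertical scan lines for y_min/y_max, then position
# the fixed-size ROI using these coordinates.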
There are many other ways to find the center position of that fixed ROI, like
threshold, distance transform, maximum
or
threshold, blob search, centroid/contour
You could also do some contour matching.
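For the threshold / blob / contour route, a minimal sketch that also yields the crop the question asks for (the filename and threshold value are illustrative):
import cv2

img = cv2.imread("box.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)

# OpenCV 4.x returns (contours, hierarchy); 3.x returns three values
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)   # contour with the largest enclosed area

x, y, w, h = cv2.boundingRect(largest)
cropped = img[y:y + h, x:x + w]                # crop to the bounding box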
I recommend you improve your setup so the background illumination does not exceed the box border (left/right is better than top/bottom). Then everything becomes easy.
Your edge image looks terrible btw. Check other methods or improve your Canny parameters.

Related

Canny transform VS threshed image for HoughCircle

I'm trying to build robust coin detection in images using HoughCircles with these params:
# note: GaussianBlur's third positional argument is sigmaX, so the
# border type has to be passed by name
blurred_roi = cv2.GaussianBlur(region_of_interest, (23, 23), 0,
                               borderType=cv2.BORDER_DEFAULT)
rows = blurred_roi.shape[0]
circles = cv2.HoughCircles(blurred_roi, cv2.HOUGH_GRADIENT, 0.9, rows / 8,
                           param1=30, param2=50, minRadius=30, maxRadius=90)
The region of interest can be (300x300) or (1080x1920), so fixed minRadius and maxRadius values aren't really helping here, even though they approximately match the size of my coin in each type of image.
To achieve this I tried many things. First of all, I used a simple grayscale image with a GaussianBlur filter.
This works in most cases, but if the coin's border has a tint similar to the background, the grayscale image does not really help to detect the right radius of the circle; take a look at this example:
Next, I tried to use the edges of the coin to detect circles with a Canny transform, but as you can see above the Canny filter does not work as I hoped, so I applied a GaussianBlur of (13, 13).
I also know that there is another Canny transform called inside the HoughCircles method, but I wanted to be sure I would get the edges of the coin, because I lose them while blurring with GaussianBlur.
Finally, I tried using a thresholded image, but I can't understand why it doesn't work as well as expected: on such an image one would expect no noise at all, since it is only black & white, and the circle is almost perfect. Here I applied a GaussianBlur of (9, 9).
Here you can see it failed to detect the coin on the thresholded image, but it works with the Canny edges image. In many other cases, though, the Hough transform on the edge image gives imperfect results, and I feel confident about the thresholded image, which as you can see reveals a nice circle.
I would like to understand why it doesn't work on the thresholded image (like the example above), and what I could do to make it work.
EDIT 1: I discovered the different BORDER types that can be specified in the GaussianBlur method. I thought this would be really useful for improving the Hough circle detection on the thresholded image, but it didn't go as well as expected.
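For reference, a condensed sketch of the blur-Canny-HoughCircles pipeline described above; the filename and Canny thresholds are illustrative, the remaining parameters are the ones quoted in the question:
import cv2
import numpy as np

roi = cv2.imread("coin_roi.png", cv2.IMREAD_GRAYSCALE)

blurred = cv2.GaussianBlur(roi, (13, 13), 0)   # smooth before edge extraction
edges = cv2.Canny(blurred, 30, 60)             # keep the coin outline

rows = edges.shape[0]
circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, 0.9, rows / 8,
                           param1=30, param2=50, minRadius=30, maxRadius=90)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(roi, (x, y), r, 255, 2)     # draw the detected circles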

How to detect edge of object using OpenCV

I am trying to use OpenCV to measure the size of filament (the plastic material used for 3D printing).
The idea is that I use an LED panel to illuminate the filament, then take an image with a camera, preprocess the image, apply edge detection, and calculate its size. Most filaments are made of a single colour, which is easy to preprocess and gives fine results.
The problem comes with transparent filament, where I am not able to get useful results, so I would like to ask for a little help, or for someone to push me in the right direction. I have already tried cropping the image to a height a bit greater than the filament and a width of just a few pixels, and calculating the size from the number of pixels in those crops, but that did not work very well. So now I am here, trying to do it with edge detection:
works well for filaments of single colour
not working for transparent filament
The code below works just fine for common filaments; the problem is when I try to use it for transparent filament. I have tried adjusting the thresholds for the Canny function and tried different colour spaces, but I am not able to get good results.
Images that may help to understand:
https://imgur.com/gallery/CIv7fxY
import cv2 as cv

image = cv.imread("../images/img_fil_2.PNG")   # load image
gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY)   # convert image to grayscale
edges = cv.Canny(gray, 100, 200)               # detect edges of image
You can use the assumption that the images are taken under the same conditions.
Your main problem is that the reflections in the transparent filament are detected as edges. But, since the image is relatively simple, without any other edges, you can simply take the upper and the lower edge, and measure the distance between them.
A simple way of doing this is to take 2 vertical lines (e.g. image sides), find the edges that intersect the line (basically traverse a column in the image and find edge pixels), and connect the highest and the lowest points to form the edges of the filament. This also removes the curvature in the filament, which I assume is not needed for your application.
You might want to use 3 or 4 vertical lines for robustness, as in the sketch below.
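A minimal sketch of that column-scanning idea, using the edge image produced by the question's code; the column positions are illustrative:
import cv2
import numpy as np

gray = cv2.imread("../images/img_fil_2.PNG", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(gray, 100, 200)

h, w = edges.shape
widths = []
for x in (w // 4, w // 2, 3 * w // 4):   # a few vertical scan lines
    ys = np.flatnonzero(edges[:, x])     # edge pixels along this column
    if ys.size >= 2:
        widths.append(ys[-1] - ys[0])    # lowest minus highest edge point

if widths:
    print("filament width (px):", np.median(widths))   # median is robust to outliers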

How to detect change in colours in the image below?

I need to identify the pixels where there is a change in colour. I googled for edge detection and line detection techniques but am not sure how or in what way can these be applied.
Here are my very naive attempts:
Applying Canny Edge Detection
edges = cv2.Canny(img, 0, 10)
with various parameters but it didn't work
Applying Hough Line Transform to detect lines in the document
The intent behind this exercise is that I have an ill-formed table of values in a pdf document with the background I have attached. If I am able to identify the row boundaries using colour matching as in this question, my problem will be reduced to identifying columns in the data.
Welcome to image processing. What you're trying to do here is basically find the places where the change in color between neighboring pixels is big, i.e. where the derivative of pixel intensities in the y direction is substantial. In signal processing, those are called high frequencies. The most common detector for high frequencies in images is the Canny edge detector, and you can find a very nice tutorial here, on the OpenCV website.
The algorithm is very easy to implement and requires just a few simple steps:
import cv2
# load the image
img = cv2.imread("sample.png")
# convert to grayscale
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# resize for the visualization purposes
img = cv2.resize(img, None, fx=0.4, fy=0.4)
# find edges with Canny
edges = cv2.Canny(img, 10, 20, apertureSize=3)
# show and save the result
cv2.imshow("edges", edges)
cv2.waitKey(0)
cv2.imwrite("result.png", edges)
Since your case is very straightforward, you don't have to worry about the parameters in the Canny() function call. But if you choose to find out what they do, I recommend implementing a trackbar and using it for experimenting, as in the sketch below. The result:
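A minimal trackbar sketch, assuming the same sample.png; the window and trackbar names are illustrative:
import cv2

img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)

def update(_):
    # re-run Canny with the current trackbar positions
    lo = cv2.getTrackbarPos("low", "edges")
    hi = cv2.getTrackbarPos("high", "edges")
    cv2.imshow("edges", cv2.Canny(img, lo, hi))

cv2.namedWindow("edges")
cv2.createTrackbar("low", "edges", 10, 255, update)
cv2.createTrackbar("high", "edges", 20, 255, update)
update(0)
cv2.waitKey(0)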
Good luck.

How to calculate a marked area within boundary in SimpleCV or OpenCV

I have this image:
Here I have an image on a green background with an area marked by a red line within it. I want to calculate the area of the marked portion with respect to the image.
I am cropping the image to remove the green background and calculating the area of the cropped Image. From here I don't know how to proceed.
I have noticed that Contour can be used for this but the problem is how do I draw the contour in this case.
I guess if I can create the contour and fill the marked area with some color, I can subtract it from the whole (cropped) image and get both areas.
In your link, they use the threshold method with a colour as parameter. Basically, it takes your source image and sets as white all pixels greater than this value, and as black otherwise (this means that your source image needs to be a greyscale image). This threshold is what enables you to "fill the marked area" in order to make a contour detection possible.
However, I think you should try to use the method inRange on your cropped picture. It is pretty much the same as threshold, but instead of having one threshold, you have a minimum and a maximum boundary. If your pixel is in the range of colours given by your boundaries, then it will be set as white. If it isn't, then it will be set as black. I don't know if this will work, but if you try to isolate the "most green" colours in your range, then you might get your big white area on the top right.
Then you apply the method findContours on your binarized image. It will give you all the contours it found, so if you have small white dots on other places in your image it doesn't matter, you'll only have to select the biggest contour found by the method.
Be careful, if the range of inRange isn't appropriate, the big white zone you should find on top right might contain some noise, and it could mess with the detection of contours. To avoid that, you could blur your image and do some stuff like erosion/dilation. This way you might get a better detection.
EDIT
I'll add some code here, but it can't be used as is. As I said, I have no knowledge of Python, so all I can do here is give you the OpenCV methods with the parameters to provide; a Python sketch of the whole pipeline follows the steps below.
Let's make also a review of the steps:
Binarize your image with inRange. You need to find appropriate values for your minimum and maximum boundaries. What you want to do here is isolate the green colours, since they are mostly what composes the area inside your contour. I can't really suggest anything better than trial and error to find the best thresholds. Let's start with these min and max values: (0, 125, 0) and (255, 250, 255)
inRange(source_image, Scalar(0, 125, 0), Scalar(255, 250, 255), binarized_image)
Check your result with imshow
imshow("bin", binarized_image)
If your binarization is OK (you can detect the area you want quite well), apply findContours. I'm sorry, I don't understand the syntax used in your tutorial or in the documentation, but here are the parameters:
binarized_mat: your binarized image
contours: an array of arrays of Point which will contain all the contours detected. Each contour is stored as an array of points.
mode: you can choose whatever you want, but I'd suggest RETR_EXTERNAL in your case.
Get the array with the biggest size, since it is probably the contour with the highest number of points (and thus the largest one).
Calculate the area inside
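Putting the steps above together, a minimal Python sketch; the filename is illustrative and the inRange boundaries are the trial values suggested above:
import cv2
import numpy as np

image = cv2.imread("cropped.png")

# step 1: binarize with inRange (boundaries will need tuning)
binarized = cv2.inRange(image, (0, 125, 0), (255, 250, 255))

# optional clean-up: erosion/dilation to reduce noise
binarized = cv2.morphologyEx(binarized, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

# step 2: find external contours and keep the biggest one
# (OpenCV 4.x returns two values; 3.x returns three)
contours, _ = cv2.findContours(binarized, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)   # largest by enclosed area; for one big blob
                                               # this matches the "most points" heuristic

# step 3: area inside the contour, relative to the whole image
area = cv2.contourArea(largest)
print("area:", area, "px =", 100 * area / (image.shape[0] * image.shape[1]), "% of image")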
Hope this helps!

How to properly detect corners using Harris detector with OpenCV?

I'm testing some image processing to obtain minutiae from digital fingerprints. So far I'm doing the following:
Equalize histogram
Binarize
Apply the Zhang-Suen algorithm for line thinning (this is not working properly).
Try to determine corners in thinned image and show them.
So, the modifications I'm obtaining are:
However, I can't manage to obtain the possible corners in the last image, which belongs to the thinned instance of the Mat object.
This is code for trying to get corners:
corners_image = cv2.cornerHarris(thinned, 1, 1, 0.04)   # blockSize=1, ksize=1, k=0.04
corners_image = cv2.dilate(corners_image, None)         # dilate to make responses visible
But trying imshow on the resulting matrix will show something like:
a black image.
How should I determine corners then?
Actually, cv::cornerHarris returns corner responses, not the corners themselves. It looks like the responses in your image are too small.
If you want to visualize the corners, you can take the responses that are larger than some threshold parameter and mark those points on the original image as follows:
corners = cv2.cvtColor(thinned, cv2.COLOR_GRAY2BGR)   # BGR copy to draw on
threshold = 0.1 * corners_image.max()                 # keep only strong responses
corners[corners_image > threshold] = [0, 0, 255]      # mark them in red
cv2.imshow('corners', corners)
Then you can call imshow, and the red points will correspond to corner points. Most likely you will need to tune the threshold parameter to get the results you need.
See the tutorial for more details.
