How to measure the length of a curved contour [duplicate] - python

This is regarding a project that concerns detecting text in an image using OpenCV in C. The approach is to detect the colors inside and outside the corresponding contours, and the way to do that is to draw normals on the contours at equally spaced positions and extract the pixel colors at the normals' end points.
I am trying to implement this using the following code, but it's not working as intended: it draws the normals, but not in an equally spaced fashion.
for( ; contours != 0; contours = contours->h_next )
{
    CvScalar color = CV_RGB( rand()&255, rand()&255, rand()&255 );
    cvDrawContours( cc_color, contours, color, CV_RGB(0,0,0), -1, 1, 8, cvPoint(0,0) );
    ptr = contours;
    /* Walk consecutive contour points; note that neighbouring points of a
       CvSeq contour are not equally spaced along the curve. */
    for( i = 0; i < ptr->total - 1; i++ )
    {
        p1 = CV_GET_SEQ_ELEM( CvPoint, ptr, i );
        p2 = CV_GET_SEQ_ELEM( CvPoint, ptr, i + 1 );
        x1 = p1->x;
        y1 = p1->y;
        x2 = p2->x;
        y2 = p2->y;
        printf("%d %d %d %d\n", x1, y1, x2, y2);
        draw_normals(x1, y1, x2, y2);
    }
}
So is there a way to find the length of a contour so that I can divide it by the number of normals I want to draw? Thanks in advance.
EDIT: The draw_normal function draws the normals between two points passed to it as parameters.

So is there a way to find the length of a contour?
Yes, you can find the length of a contour using the standard OpenCV function cvArcLength().
Check the documentation here.
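In the modern Python API the equivalent is cv2.arcLength. A minimal sketch of how the perimeter could be used to pick roughly equally spaced positions for the normals (the file name and the number of normals are hypothetical):
import cv2
import numpy as np
# Hypothetical input: a thresholded (binary) image of the text.
binary = cv2.imread("text_mask.png", cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
n_normals = 20                                    # how many normals to draw per contour
for cnt in contours:
    perimeter = cv2.arcLength(cnt, True)          # length of the (closed) contour
    step = perimeter / n_normals                  # desired arc-length spacing
    pts = cnt.reshape(-1, 2).astype(np.float32)
    samples, travelled = [pts[0]], 0.0
    # Walk the polyline and keep a point roughly every `step` pixels of arc length.
    for a, b in zip(pts, np.roll(pts, -1, axis=0)):
        travelled += float(np.linalg.norm(b - a))
        if travelled >= step:
            samples.append(b)
            travelled = 0.0
    # `samples` now holds approximately equally spaced positions for the normals.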

Related

Measure how Straight/Smooth the Text Borders are Rendered in an Image

I have two images:
I want to measure how straight/smooth the text borders are rendered.
The first image is rendered perfectly straight, so it deserves a quality measure of 1. The second image, on the other hand, is rendered with a lot of varying curves (rough, in a way), which is why it deserves a quality measure of less than 1. How would I measure this using image processing, with a Python function or a function written in another language?
Clarification:
Some font styles are originally rendered with straight strokes, while others, such as cursive styles, are rendered with smooth curves. What I'm really after is to quantify the surface roughness of the characters' text borders with a quality measure.
I want to measure how straight/smooth the text borders are rendered in an image.
Inversely, it can also be said that I want to measure how rough the text borders are rendered in an image.
I don't know any python function, but I would:
1) Use potrace to trace the edges and convert them to Bezier curves. Here's a visualization:
2) Then let's zoom to the top part of the P for example:
You draw lines perpendicular to the curve for a finite length (let's say 100 pixels). You plot the color intensity (you can convert to HSI or HSV and use one of those channels, or just convert to grayscale and take the pixel value directly) over that line:
3) Then you calculate the standard deviation of the derivative. Small standard deviation means sharp edges, large standard deviation means blurry edges. For a perfect edge, the standard deviation would be zero.
4) For every edge where you drew a perpendicular line, you now have a "smoothness" value. You can then average all the smoothness values per edge, per letter, per word or per image, as you see fit. Also, the more perpendicular lines you draw, the more accurate your smoothness value, but the more computationally intensive it becomes.
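A rough Python sketch of the profile measurement from steps 2 and 3, assuming you already have a point on the curve and the unit normal there (both hypothetical inputs) plus a grayscale image:
import numpy as np
def edge_sharpness(gray, point, normal, length=100):
    # Sample the grayscale image along a line perpendicular to the curve and
    # return the standard deviation of the intensity derivative, as in step 3.
    point = np.asarray(point, dtype=float)
    normal = np.asarray(normal, dtype=float)
    normal /= np.linalg.norm(normal)
    ts = np.linspace(-length / 2, length / 2, length)
    xs = np.clip((point[0] + ts * normal[0]).astype(int), 0, gray.shape[1] - 1)
    ys = np.clip((point[1] + ts * normal[1]).astype(int), 0, gray.shape[0] - 1)
    profile = gray[ys, xs].astype(float)   # intensity along the perpendicular
    return np.std(np.diff(profile))        # derivative approximated by finite differences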
I would try something simple like creating a 'roughness' metric using a few functions from the opencv library, since it's easy to work with in Python (and C++, as well as other wrappers).
For example (without actual source, since I'm typing on my phone):
1) Preprocess to create binary images (many standard ways).
2) Use cv2.findContours to get outlines of the letters.
3) Use cv2.arcLength on each contour as the denominator.
4) Use cv2.approxPolyDP to simplify each contour.
5) Use cv2.arcLength on each simplified contour as the numerator.
6) Calculate the ratio of the simplified over the full arc length.
In the last step, ratios closer to 1.0 require less simplification, so they're presumably less rough. Ratios closer to 0.0 require a lot of simplification and are therefore probably very rough. Of course, you'll have to tweak the contour-finding code to get appropriate outlines to work with, and you'll need to manage numerical precision to keep the calculations meaningful, but hopefully the idea is clear enough.
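A minimal sketch of that ratio metric, assuming a binary image of white letters on a black background (the file name and the epsilon factor are assumptions to tune):
import cv2
binary = cv2.imread("letters.png", cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
ratios = []
for cnt in contours:
    full = cv2.arcLength(cnt, True)                     # denominator
    approx = cv2.approxPolyDP(cnt, 0.01 * full, True)   # simplify the outline (epsilon is tunable)
    simplified = cv2.arcLength(approx, True)            # numerator
    if full > 0:
        ratios.append(simplified / full)
# Values near 1.0 suggest smooth outlines, values near 0.0 rough ones.
roughness_score = sum(ratios) / len(ratios) if ratios else 0.0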
OpenCV also has the useful functions cv2.convexHull and cv2.convexityDefects that you might find interesting in related work. However, they didn't seem appropriate for the letters here, since internal features on letters like M for example would be more challenging to address.
Speaking of rough things, I admit this algorithmic outline is incredibly rough! However, I hope it gives you a useful idea to try that seems straightforward to implement quickly to start getting quantitative feedback.
One idea might be simply to get the average of the number of vertices per character in Python/OpenCV using cv2.CHAIN_APPROX_SIMPLE.
Since you have the same characters and you want to know how straight they are, CHAIN_APPROX_SIMPLE keeps only the end points of horizontal, vertical, and diagonal segments. For your first image, there should be far fewer vertices than for your second image.
CHAIN_APPROX_SIMPLE compresses horizontal, vertical, and diagonal
segments and leaves only their end points. For example, an up-right
rectangular contour is encoded with 4 points.
import cv2
import numpy as np
# read image
img = cv2.imread('lemper1.png')
#img = cv2.imread('lemper2.png')
# convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# threshold
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1]
# invert
thresh = 255 - thresh
# get contours and compute average number of vertices per character (contour)
result = img.copy()
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
num_contours = 0
sum = 0
for cntr in contours:
    cv2.drawContours(result, [cntr], 0, (0,0,255), 1)
    num_vertices = len(cntr)
    sum = sum + num_vertices
    num_contours = num_contours + 1
smoothness = (sum / num_contours)
print(smoothness)
# save resulting images
cv2.imwrite('lemper1_contours.png',result)
#cv2.imwrite('lemper2_contours.png',result)
# show thresh and result
cv2.imshow("thresh", thresh)
cv2.imshow("contours", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
First image average number of vertices: 49.666666666666664
Second image average number of vertices: 179.14285714285714
So smaller number of vertices means straighter characters.
Preface
There are a few nice ideas presented here based on the properties of the character contours as polylines. While this approach has some inherent flaws, since the results are a function of resolution and scale, I would like to offer one further interpretation of the same idea. My algorithm is still susceptible to those flaws, but it may offer a different perspective.
Theory
The method I propose is to compare common characters by the number of inflections in their contours. In this context, what I mean by an inflection is a sign change between the cross products of successive polyline segments taken as vectors. For example, consider a polyline contour of a circle, starting at the mid y coordinate and the x+ most coordinate. If we were to trace the polyline contour CW (clockwise) around the perimeter, each line segment would be an incremental CW transform of the prior one. If at any point a segment turned "away" or "outwards", this transform would be CCW (counter-clockwise) and the cross product would invert. A "rough" circle will therefore have inflections; a "perfect" or "smooth" circle will have none.
Algorithm
The algorithm follows the steps below, implemented with Emgu.CV (the C# code follows the list):
The images are loaded and converted to binary by means of thresholding
The binary images then undergo contour detection, and these contours are sorted by their bounding boxes, left to right, so that their indices match the occurrence order of the characters they outline.
Each contour is then re-pointed to an equal number of segments in order to normalize for scale and resolution differences between images/characters.
Each contour is "walked" and its number of inflections counted.
// [Some basic extensions are omitted for clarity]
// Load the images
Image<Rgb, byte> baseLineImage = new Image<Rgb, byte>("BaseLine.png");
Image<Rgb, byte> testCaseImage = new Image<Rgb, byte>("TestCase.png");
// Convert them to Gray Scale
Image<Gray, byte> baseLineGray = baseLineImage.Convert<Gray, byte>();
Image<Gray, byte> testCaseGray = testCaseImage.Convert<Gray, byte>();
// Threshold the images to binary
Image<Gray, byte> baseLineBinary = baseLineGray.ThresholdBinaryInv(new Gray(100), new Gray(255));
Image<Gray, byte> testCaseBinary = testCaseGray.ThresholdBinaryInv(new Gray(100), new Gray(255));
// Some dilation required on the test image so that the characters are continuous
testCaseBinary = testCaseBinary.Dilate(3);
// Extract the the contours from the images to isolate the character profiles
// and sort them left to right so as the indicies match the character order
VectorOfVectorOfPoint baseLineContours = new VectorOfVectorOfPoint();
Mat baseHierarchy = new Mat();
CvInvoke.FindContours(
baseLineBinary,
baseLineContours,
baseHierarchy,
RetrType.External,
ChainApproxMethod.ChainApproxSimple);
var baseLineContoursList = baseLineContours.ToList();
baseLineContoursList.Sort(new ContourComparer());
VectorOfVectorOfPoint testCaseContours = new VectorOfVectorOfPoint();
Mat testHierarchy = new Mat();
CvInvoke.FindContours(
testCaseBinary,
testCaseContours,
testHierarchy,
RetrType.External,
ChainApproxMethod.ChainApproxSimple);
var testCaseContoursList = testCaseContours.ToList();
testCaseContoursList.Sort(new ContourComparer());
var baseLineRepointedContours = RepointContours(baseLineContoursList, 50);
var testCaseRepointedContours = RepointContours(testCaseContoursList, 50);
var baseLineInflectionCounts = GetContourInflections(baseLineRepointedContours);
var testCaseInflectionCounts = GetContourInflections(testCaseRepointedContours);
Inflection Detection/Counting
static List<List<Point>> GetContourInflections(List<VectorOfPoint> contours)
{
// A resultant list to return the inflection points
List<List<Point>> result = new List<List<Point>>();
// Calculate the forward to reverse cross product at each vertex
List<double> crossProducts;
// Points used to store 2D Vectors as X,Y (I,J)
Point priorVector, forwardVector;
foreach (VectorOfPoint contour in contours)
{
crossProducts = new List<double>();
for (int p = 0; p < contour.Size; p++)
{
// Determine the vector from the prior vertex to this vertex
priorVector = p == 0 ?
    new Point()
    {
        X = contour[p].X - contour[contour.Size - 1].X,
        Y = contour[p].Y - contour[contour.Size - 1].Y
    } :
    new Point()
    {
        X = contour[p].X - contour[p - 1].X,
        Y = contour[p].Y - contour[p - 1].Y
    };
// Determine the vector from this vertex to the next vertex
// If this is the last vertex, loop back to vertex 0
forwardVector = p == contour.Size - 1 ?
new Point()
{
X = contour[0].X - contour[p].X,
Y = contour[0].Y - contour[p].Y,
} :
new Point()
{
X = contour[p + 1].X - contour[p].X,
Y = contour[p + 1].Y - contour[p].Y,
};
// Calculate the cross product of the prior and forward vectors
crossProducts.Add(forwardVector.X * priorVector.Y - forwardVector.Y * priorVector.X);
}
// Given the calculated cross products, detect the inflection points
List<Point> inflectionPoints = new List<Point>();
for (int p = 1; p < contour.Size; p++)
{
// If there is a sign change between this and the prior cross product, an inflection,
// or change from CW to CCW bearing increments has occurred. To and from zero products
// are ignored
if ((crossProducts[p] > 0 && crossProducts[p-1] < 0) ||
(crossProducts[p] < 0 && crossProducts[p-1] > 0))
{
inflectionPoints.Add(contour[p]);
}
}
result.Add(inflectionPoints);
}
return result;
}
Output
L: Baseline Inflections:0 Testcase Inflections:22
E: Baseline Inflections:1 Testcase Inflections:16
M: Baseline Inflections:4 Testcase Inflections:15
P: Baseline Inflections:11 Testcase Inflections:17
E: Baseline Inflections:1 Testcase Inflections:10
R: Baseline Inflections:9 Testcase Inflections:16
Contours (Blue) and Inflections (Red)
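For readers working in Python, a rough analogue of the inflection count (the re-pointing of each contour to a fixed number of segments is omitted here, but would still be needed for fair comparisons between images):
import numpy as np
def count_inflections(contour_pts):
    # Count sign changes of the cross product between successive segment
    # vectors of a closed polyline, mirroring the C# routine above.
    pts = np.asarray(contour_pts, dtype=float).reshape(-1, 2)
    prior = pts - np.roll(pts, 1, axis=0)      # vector from the previous vertex
    forward = np.roll(pts, -1, axis=0) - pts   # vector to the next vertex
    cross = forward[:, 0] * prior[:, 1] - forward[:, 1] * prior[:, 0]
    inflections = 0
    for a, b in zip(cross[:-1], cross[1:]):
        if (a > 0 and b < 0) or (a < 0 and b > 0):  # to/from zero products ignored
            inflections += 1
    return inflections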

Detect contour intersection without drawing

I'm working to detect cells within microscope images like the one below. There are often spurious contours that get drawn due to imperfections on the microscope slides, like the one below the legend in the figure below.
I'm currently using this solution to clean these up. Here's the basic idea.
# Create image of background
blank = np.zeros(image.shape[0:2])
background_image = cv2.drawContours(blank.copy(), background_contour, 0, 1, -1)
for i, c in enumerate(contours):
    # Create image of contour
    contour_image = cv2.drawContours(blank.copy(), contours, i, 1, -1)
    # Create image of focal contour + background
    total_image = np.where(background_image + contour_image > 0, 1, 0)
    # Check if contour is outside positive space
    if total_image.sum() > background_image.sum():
        continue
This works as expected; if the total_image area is greater than the area of the background_image then c must be outside the region of interest. But drawing all of these contours is incredibly slow and checking thousands of contours takes hours. Is there a more efficient way to check if contours overlap that doesn't require drawing the contours?
I assume the goal is to exclude the external contour from further analysis? If so, the easiest is to use the red background contour as a mask. Then use the masked image to detect the blue cells.
# Create image of background
blank = np.zeros(image.shape[0:2], dtype=np.uint8)
background_image = cv2.drawContours(blank.copy(), background_contour, 0, (255), -1)
# mask input image (leaves only the area inside the red background contour)
res = cv2.bitwise_and(image,image,mask=background_image )
#[detect blue cells]
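If you would rather keep a per-contour decision instead of masking the whole image, one drawing-free alternative (not part of the answer above) is cv2.pointPolygonTest against the background outline. A sketch, assuming contours comes from your existing cv2.findContours call and background_contour[0] is the single red outline:
import cv2
kept = []
for c in contours:
    # Treat a contour as inside if every one of its vertices lies inside
    # (or on) the background outline: +1 = inside, 0 = on edge, -1 = outside.
    inside = all(
        cv2.pointPolygonTest(background_contour[0], (float(p[0][0]), float(p[0][1])), False) >= 0
        for p in c
    )
    if inside:
        kept.append(c)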
Assuming you are trying to find points on the different contours that are overlapping, consider the contours as:
vector<vector<Point> > contours;
..... // obtain your contours
vector<Point> non_repeating_points;
for (int i = 0; i < contours.size(); i++)
{
    for (int j = 0; j < contours[i].size(); j++)
    {
        Point this_point = contours[i][j];
        bool seen_before = false;
        for (int k = 0; k < non_repeating_points.size(); k++)
        {   // check this list for a previous record
            if (non_repeating_points[k] == this_point)
            {
                std::cout << "found repeated point at" << std::endl;
                std::cout << this_point << std::endl;
                seen_before = true;
                break;
            }
        }
        // if not seen before, just add it to the list
        if (!seen_before)
            non_repeating_points.push_back(this_point);
    }
}
I just wrote this without compiling it, but I think you can understand the idea.
The information you provided is not enough.
In case you mean to find the nearest connected boundary, and there is no overlapping:
You can declare a local cluster near the point non_repeating_points[k]; call it surround_non_repeating_points[k].
You can control the distance that counts as an intersection and push all such points into surround_non_repeating_points[k].
Then just check in a loop for:
if(surround_non_repeating_points[k] == this_point)

To find the number of circles in an image using OpenCV

I have an image as below :
Can anyone tell me how to detect the number of circles in it? I'm using the Hough circle transform to achieve this, and this is my code:
# import the necessary packages
import numpy as np
import sys
import cv2
# load the image, clone it for output, and then convert it to grayscale
image = cv2.imread(str(sys.argv[1]))
output = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# detect circles in the image
circles = cv2.HoughCircles(gray, cv2.cv.CV_HOUGH_GRADIENT, 1.2, 5)
no_of_circles = 0
# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circles = np.round(circles[0, :]).astype("int")
    no_of_circles = len(circles)
    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circles:
        # draw the circle in the output image, then draw a rectangle
        # corresponding to the center of the circle
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
# show the output image
cv2.imshow("output", np.hstack([image, output]))
print 'no of circles',no_of_circles
I'm getting wrong answers with this code. Can anyone tell me where I went wrong?
I tried a tricky way to detect all the circles.
I found the HoughCircles parameters manually:
HoughCircles( src_gray, circles, HOUGH_GRADIENT, 1, 50, 40, 46, 0, 0 );
The tricky part is:
flip( src, flipped, 1 );
hconcat( src,flipped, flipped );
hconcat( flipped, src, src );
flip( src, flipped, 0 );
vconcat( src,flipped, flipped );
vconcat( flipped, src, src );
flip( src, src, -1 );
This will create a mirrored, tiled model like the one below before detection.
The result looks like this:
The C++ code can easily be converted to Python.
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
Mat src, src_gray, flipped, display;
if (argc < 2)
{
std::cerr<<"No input image specified\n";
return -1;
}
// Read the image
src = imread( argv[1], 1 );
if( src.empty() )
{
std::cerr<<"Invalid input image\n";
return -1;
}
flip( src, flipped, 1 );
hconcat( src,flipped, flipped );
hconcat( flipped, src, src );
flip( src, flipped, 0 );
vconcat( src,flipped, flipped );
vconcat( flipped, src, src );
flip( src, src, -1 );
// Convert it to gray
cvtColor( src, src_gray, COLOR_BGR2GRAY );
// Reduce the noise so we avoid false circle detection
GaussianBlur( src_gray, src_gray, Size(9, 9), 2, 2 );
// will hold the results of the detection
std::vector<Vec3f> circles;
// runs the actual detection
HoughCircles( src_gray, circles, HOUGH_GRADIENT, 1, 50, 40, 46, 0, 0 );
// clone the colour, input image for displaying purposes
display = src.clone();
Rect rect_src(display.cols / 3, display.rows / 3, display.cols / 3, display.rows / 3 );
rectangle( display, rect_src, Scalar(255,0,0) );
for( size_t i = 0; i < circles.size(); i++ )
{
Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
int radius = cvRound(circles[i][2]);
Rect r = Rect( center.x-radius, center.y-radius, radius * 2, radius * 2 );
Rect intersection_rect = r & rect_src;
if( intersection_rect.width * intersection_rect.height > r.width * r.height / 3 )
{
// circle center
circle( display, center, 3, Scalar(0,255,0), -1, 8, 0 );
// circle outline
circle( display, center, radius, Scalar(0,0,255), 3, 8, 0 );
}
}
// shows the results
imshow( "results", display(rect_src));
// get user key
waitKey();
return 0;
}
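For Python users, a rough translation of the mirroring trick (same manually tuned parameters: dp=1, minDist=50, param1=40, param2=46; the centre-tile check below is a simplified version of the C++ intersection test):
import cv2
import numpy as np
import sys
src = cv2.imread(sys.argv[1])
# Tile the image 3x3 with mirrored copies so circles touching the border
# become complete and easier for the Hough transform to find.
flipped = cv2.flip(src, 1)
row = cv2.hconcat([flipped, src, flipped])
mirrored = cv2.vconcat([cv2.flip(row, 0), row, cv2.flip(row, 0)])
gray = cv2.cvtColor(mirrored, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (9, 9), 2)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, 50,
                           param1=40, param2=46, minRadius=0, maxRadius=0)
h, w = src.shape[:2]
count = 0
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        # Keep circles whose centre falls in the middle tile (the original image).
        if w <= x < 2 * w and h <= y < 2 * h:
            count += 1
print("circles found:", count)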
This SO post describes detection of semi-circles, and may be a good start for you:
Detect semi-circle in opencv
If you get stuck in OpenCV, try coding up the solution yourself. Writing a Hough circle finder parameterized for your particular application is relatively straightforward. If you write application-specific Hough algorithms a few times, you should be able to write a reasonable solution in less time than it takes to sort through a bunch of google results, decipher someone else's code, and so on.
You definitely don't need Canny edge detection for an image like this, but it won't hurt.
Other libraries (esp. commercial ones) will allow you to set more parameters for Hough circle finding. I would've expected some overload of the HoughCircle function to allow a struct of search parameters to be passed in, including the minimum percentage of circle completeness (arc length) allowed.
Although it's good to learn both RANSAC and Hough techniques--and, over time, more exotic techniques--I wouldn't necessarily recommend using RANSAC when you have circles defined so nicely and crisply. Without offering specific evidence, I'll just claim that fiddling with RANSAC parameters may be less intuitive than fiddling with Hough parameters.
HoughCircles needs some parameter tuning to work properly.
It could be that in your case the default values of Param1 and Param2 (set to 100) are not good.
You can fine-tune your HoughCircles detection by computing the ultimate eroded points (the ultimate erosion); this will give you the number of circles in your image.
If there are only circles and background in the input, you can count the number of connected components and ignore the component associated with the background. This will be the simplest and most robust solution.
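A minimal sketch of that connected-component count, assuming the circles come out white on a black background after thresholding (the file name is hypothetical):
import cv2
gray = cv2.imread("circles.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# connectedComponents labels the background as 0, so subtract it from the count.
num_labels, labels = cv2.connectedComponents(binary)
print("number of circles:", num_labels - 1)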

How to adapt or resize a rectangle inside an object without including (or with a few numbers) of background pixels?

After applying thresholding and finding the contours of the object, I used the following code to get the straight bounding rectangle around the object (or the rotated rectangle, using the corresponding function):
img = cv2.imread('image.png')
imgray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(imgray,127,255,cv2.THRESH_BINARY)
# find contours
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
cnt = contours[0]
# straight rectangle
x,y,w,h = cv2.boundingRect(cnt)
img= cv2.rectangle(img,(x,y),(x+w,y+h),(0,255,0),2)
see the image
Then I have calculated the number of object and background pixels inside the straight rectangle using the following code:
# rectangle area (total number of object and background pixels inside the rectangle)
area_rect = w*h
# white or object pixels (inside the rectangle)
obj = cv2.countNonZero(imgray)
# background pixels (inside the rectangle)
bac = area_rect - obj
Now I want to adapt the rectangle to the object as a function of the relationship between the background pixels and the object pixels, i.e. to have a rectangle occupying the largest part of the object with no (or few) background pixels, for example:
How do I create this?
This problem can be stated as finding the largest rectangle inscribed in a non-convex polygon.
An approximate solution can be found at this link.
This problem can be formulated also as: for each angle, find the largest rectangle containing only zeros in a matrix, explored in this SO question.
My solution is based on this answer. This will find only axis aligned rectangles, so you can easily rotate the image by a given angle and apply this solution for every angle.
My solution is C++, but you can easily port it to Python, since I'm using mostly OpenCV function, or adjust the solution in the above mentioned answer accounting for rotation.
Here we are:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;
// https://stackoverflow.com/a/30418912/5008845
Rect findMinRect(const Mat1b& src)
{
Mat1f W(src.rows, src.cols, float(0));
Mat1f H(src.rows, src.cols, float(0));
Rect maxRect(0,0,0,0);
float maxArea = 0.f;
for (int r = 0; r < src.rows; ++r)
{
for (int c = 0; c < src.cols; ++c)
{
if (src(r, c) == 0)
{
H(r, c) = 1.f + ((r>0) ? H(r-1, c) : 0);
W(r, c) = 1.f + ((c>0) ? W(r, c-1) : 0);
}
float minw = W(r,c);
for (int h = 0; h < H(r, c); ++h)
{
minw = min(minw, W(r-h, c));
float area = (h+1) * minw;
if (area > maxArea)
{
maxArea = area;
maxRect = Rect(Point(c - minw + 1, r - h), Point(c+1, r+1));
}
}
}
}
return maxRect;
}
RotatedRect largestRectInNonConvexPoly(const Mat1b& src)
{
// Create a matrix big enough to not lose points during rotation
vector<Point> ptz;
findNonZero(src, ptz);
Rect bbox = boundingRect(ptz);
int maxdim = max(bbox.width, bbox.height);
Mat1b work(2*maxdim, 2*maxdim, uchar(0));
src(bbox).copyTo(work(Rect(maxdim - bbox.width/2, maxdim - bbox.height / 2, bbox.width, bbox.height)));
// Store best data
Rect bestRect;
int bestAngle = 0;
// For each angle
for (int angle = 0; angle < 90; angle += 1)
{
cout << angle << endl;
// Rotate the image
Mat R = getRotationMatrix2D(Point(maxdim,maxdim), angle, 1);
Mat1b rotated;
warpAffine(work, rotated, R, work.size());
// Keep the crop with the polygon
vector<Point> pts;
findNonZero(rotated, pts);
Rect box = boundingRect(pts);
Mat1b crop = rotated(box).clone();
// Invert colors
crop = ~crop;
// Solve the problem: "Find largest rectangle containing only zeros in an binary matrix"
// https://stackoverflow.com/questions/2478447/find-largest-rectangle-containing-only-zeros-in-an-n%C3%97n-binary-matrix
Rect r = findMinRect(crop);
// If best, save result
if (r.area() > bestRect.area())
{
bestRect = r + box.tl(); // Correct the crop displacement
bestAngle = angle;
}
}
// Apply the inverse rotation
Mat Rinv = getRotationMatrix2D(Point(maxdim, maxdim), -bestAngle, 1);
vector<Point> rectPoints{bestRect.tl(), Point(bestRect.x + bestRect.width, bestRect.y), bestRect.br(), Point(bestRect.x, bestRect.y + bestRect.height)};
vector<Point> rotatedRectPoints;
transform(rectPoints, rotatedRectPoints, Rinv);
// Apply the reverse translations
for (int i = 0; i < rotatedRectPoints.size(); ++i)
{
rotatedRectPoints[i] += bbox.tl() - Point(maxdim - bbox.width / 2, maxdim - bbox.height / 2);
}
// Get the rotated rect
RotatedRect rrect = minAreaRect(rotatedRectPoints);
return rrect;
}
int main()
{
Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);
// Compute largest rect inside polygon
RotatedRect r = largestRectInNonConvexPoly(img);
// Show
Mat3b res;
cvtColor(img, res, COLOR_GRAY2BGR);
Point2f points[4];
r.points(points);
for (int i = 0; i < 4; ++i)
{
line(res, points[i], points[(i + 1) % 4], Scalar(0, 0, 255), 2);
}
imshow("Result", res);
waitKey();
return 0;
}
The result image is:
NOTE
I'd like to point out that this code is not optimized, so it can probably perform better. For an approximate solution, see here, and the papers referenced there.
This answer to a related question put me in the right direction.
There's now a python library calculating the maximum drawable rectangle inside a polygon.
Library: maxrect
Install through pip:
pip install git+https://${GITHUB_TOKEN}@github.com/planetlabs/maxrect.git
Usage:
from maxrect import get_intersection, get_maximal_rectangle, rect2poly
# For a given convex polygon
coordinates1 = [ [x0, y0], [x1, y1], ... [xn, yn] ]
coordinates2 = [ [x0, y0], [x1, y1], ... [xn, yn] ]
# find the intersection of the polygons
_, coordinates = get_intersection([coordinates1, coordinates2])
# get the maximally inscribed rectangle
ll, ur = get_maximal_rectangle(coordinates)
# casting the rectangle to a GeoJSON-friendly closed polygon
rect2poly(ll, ur)
Source: https://pypi.org/project/maxrect/
Here is Python code I wrote with rotation included. I tried to speed it up, but I guess it can still be improved.
For future googlers,
Since your provided sample solution allows background pixels to be within the rectangle, I suppose you wish to find the smallest rectangle that covers perhaps 80% of the white pixels.
This can be done using a method similar to finding the error ellipse for a set of data (in this case, the data is all the white pixels, and the error ellipse needs to be modified into a rectangle).
The following links would hence be helpful
How to get the best fit bounding box from covariance matrix and mean position?
http://www.visiondummy.com/2014/04/draw-error-ellipse-representing-covariance-matrix/
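Building on that idea, here is a minimal sketch in Python: compute the covariance of the white-pixel coordinates and take a box of a few standard deviations along the principal axes (the file name and the 2-sigma factor are assumptions you would tune towards ~80% coverage):
import cv2
import numpy as np
# Hypothetical binary mask: white object pixels on a black background.
mask = cv2.imread("object_mask.png", cv2.IMREAD_GRAYSCALE)
ys, xs = np.nonzero(mask)
pts = np.column_stack([xs, ys]).astype(np.float64)
mean = pts.mean(axis=0)
cov = np.cov((pts - mean).T)
eigvals, eigvecs = np.linalg.eigh(cov)          # ascending order: minor axis first
# Half-extents of ~2 standard deviations along each principal axis (tunable assumption).
half = 2.0 * np.sqrt(eigvals)
angle = np.degrees(np.arctan2(eigvecs[1, 1], eigvecs[0, 1]))   # major-axis angle
rect = ((float(mean[0]), float(mean[1])),
        (float(2 * half[1]), float(2 * half[0])),               # (major, minor) extent
        float(angle))
box = np.intp(cv2.boxPoints(rect))
# e.g. cv2.drawContours(img, [box], 0, (0, 255, 0), 2)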

Find Area of a OpenCV Contour

On a recent set of images, my OpenCV code stopped finding the correct area of a contour. This appears to happen when the contour is not closed. I have tried to ensure the contour is closed to no avail.
Edit: The problem is that there are gaps in the contour.
Background:
I have a series of images of a capsule in a channel and I want to measure the area of the shape as well as the centroid from the moments.
Problem:
When the contour is not closed, the moments are wrong.
Edit: When I have gaps, the contour is not of the whole shape and hence the incorrect area.
What I do:
Read image -> img =cv2.imread(fileName,0)
apply Canny filter -> edges = cv2.Canny(img,lowerThreshold,lowerThreshold*2)
find contours -> contours, hierarchy = cv2.findContours(edges,cv2.cv.CV_RETR_LIST,cv2.cv.CV_CHAIN_APPROX_NONE)
find longest contour
ensure contour is closed
find moments -> cv2.moments(cnt)
A working example with test images can be found here.
There is a question regarding closing a contour but neither of the suggestions worked. Using cv2.approxPolyDP does not change the results, although it should return a closed contour. Adding the first point of the contour as the last, in order to make it closed, also does not resolve the issue.
An example of an image with the contour draw on it is below. Here, the area is determined as 85 while in an almost identical image it is 8660, which is what it should be.
Any advice would be appreciated.
Code:
img =cv2.imread(fileName,0)
edges = cv2.Canny(img,lowerThreshold,lowerThreshold*2)
contours, hierarchy = cv2.findContours(edges,cv2.cv.CV_RETR_LIST,cv2.cv.CV_CHAIN_APPROX_NONE) #cv2.cv.CV_CHAIN_APPROX_NONE or cv2.cv.CV_CHAIN_APPROX_SIMPLE
#Select longest contour as this should be the capsule
lengthC=0
ID=-1
idCounter=-1
for x in contours:
    idCounter = idCounter + 1
    if len(x) > lengthC:
        lengthC = len(x)
        ID = idCounter
if ID != -1:
    cnt = contours[ID]
    cntFull = cnt.copy()
    #approximate the contour, where epsilon is the distance to
    #the original contour
    cnt = cv2.approxPolyDP(cnt, epsilon=1, closed=True)
    #add the first point as the last point, to ensure it is closed
    lenCnt = len(cnt)
    cnt = np.append(cnt, [[cnt[0][0][0], cnt[0][0][1]]])
    cnt = np.reshape(cnt, (lenCnt+1, 1, 2))
    lenCntFull = len(cntFull)
    cntFull = np.append(cntFull, [[cntFull[0][0][0], cntFull[0][0][1]]])
    cntFull = np.reshape(cntFull, (lenCntFull+1, 1, 2))
    #find the moments
    M = cv2.moments(cnt)
    MFull = cv2.moments(cntFull)
    print('Area = %.2f \t Area of full contour= %.2f' % (M['m00'], MFull['m00']))
My problem was, as @HugoRune pointed out, that there are gaps in the contour. The solution is to close the gaps.
I found it difficult to find a general method to close the gaps, so I iteratively change the threshold of the Canny filter and perform morphological closing until a closed contour is found.
For those struggling with the same problem, there are several good answers on how to close contours, such as this or this.
Having dealt with a similar problem, an alternative solution (and arguably a simpler one, with less overhead) is to use morphological opening, which performs an erosion followed by a dilation. If you convert to a binary image first, perform the opening operation, and then do the Canny detection, that should achieve the same thing without having to iterate on the filter. The only thing you will have to do is play with the kernel size a few times to identify an appropriate size without losing too much detail. I have found this to be a fairly robust way of making sure the contours are closed.
Morphological operations documentation
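A small sketch of such a binarise-then-morphology pipeline (closing is used here, as in the accepted fix above; the file name, kernel size, and the omission of the Canny step are assumptions to adapt):
import cv2
img = cv2.imread("capsule.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# Morphological closing bridges small gaps in the outline before contour extraction.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
contours, _ = cv2.findContours(closed, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=len)                 # longest contour, as in the question
M = cv2.moments(cnt)
if M["m00"] > 0:
    print("Area:", M["m00"],
          "Centroid:", (M["m10"] / M["m00"], M["m01"] / M["m00"]))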
An alternative approach is to use the contour points to find the area directly (the shoelace formula). Here nContours has previously been found through cvFindContours(). I have used the MFC CArray here; you can use std::vector instead.
////////////////////////////////////////////
CvSeq* MasterContour = NULL;
CvRect rectMax = cvRect(0, 0, 0, 0);
if (nContours > 1)
{
    // Find the biggest contour (by bounding-box width)
    for (int i = 0; i < nContours; i++)
    {
        CvRect rect = cvBoundingRect(m_contour, 1);
        if (rect.width > rectMax.width)
        {
            rectMax = rect;
            MasterContour = m_contour;
        }
        if (m_contour->h_next != 0)
            m_contour = m_contour->h_next;
        else
            break;
    }
}
else
    MasterContour = m_contour;
CArray<CPoint, CPoint> arOuterContourPoints;
arOuterContourPoints.RemoveAll();
for (int i = 0; i < MasterContour->total; i++)
{
    CvPoint *pPt;
    pPt = (CvPoint *)cvGetSeqElem(MasterContour, i);
    arOuterContourPoints.Add(CPoint(pPt->x, pPt->y));
}
int nOuterArea = 0;
for (int i = 0; i < arOuterContourPoints.GetSize(); i++)
{
    if (i == (arOuterContourPoints.GetSize() - 1))
        nOuterArea += (arOuterContourPoints[i].x * arOuterContourPoints[0].y - arOuterContourPoints[0].x * arOuterContourPoints[i].y);
    else
        nOuterArea += (arOuterContourPoints[i].x * arOuterContourPoints[i+1].y - arOuterContourPoints[i+1].x * arOuterContourPoints[i].y);
}
nOuterAreaPix = abs(nOuterArea / 2.0);
/////////////////////////////////////////////////////////////
