How to calculate area outside bounding boxes - python

So after I've done object detection, I have a few overlapping 2D rectangles, each given as (x1, y1, x2, y2) coordinates, and I want to find the size of the area outside all of the rectangles I've created. How do I find the size of the area outside those boxes in Python?
I have tried to calculate the area by adding up the sizes of all the bounding boxes and then subtracting that sum from the size of the input image. However, the overlapping regions get counted more than once, so the measurement is not accurate.

from bbox import BBox2D

# BBox2D uses its default [x, y, width, height] format here
box1 = BBox2D([0.20212765957446807, 0.145625, 0.24822695035460993, 0.10875])
box2 = BBox2D([0.6693262411347518, 0.146875, 0.31382978723404253, 0.06875])
print(box2.height * box2.width)  # area of box2
print(box1.height * box1.width)  # area of box1
Try this one.
Or you can simply use:
def _getArea(box):
    return (box[2] - box[0]) * (box[3] - box[1])
This should do the trick. Structure of box: [xmin, ymin, xmax, ymax]

To find the area outside all of the boxes, you will need to do the following things.
Find the area of the entire image
Find the area of all of the rectangles (including overlapping area)
Find the total overlapping area
Subtract the total overlapping area from the total area of the rectangles
Subtract the rectangle area without overlap from the area of the entire image
To find the total overlapping area, you will need to loop through every rectangle on the screen and check whether it overlaps another one. If it does, you will need to find the corners of the overlapping region. For example, in your image the top-left and middle rectangles overlap: the two corners you need would be the bottom-right corner of the top-left rectangle and the top-left corner of the middle rectangle. You can then find the area of the rectangle made by the overlap.
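As a rough sketch of the steps above (assuming pixel-space [xmin, ymin, xmax, ymax] boxes and a known image width and height; note that subtracting pairwise overlaps is only exact when no point is covered by three or more boxes, so a mask-based fallback is also shown):

import numpy as np

def intersection_area(a, b):
    # Overlap of two [xmin, ymin, xmax, ymax] boxes (0 if they do not touch).
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def outside_area_pairwise(boxes, img_w, img_h):
    # Total box area minus pairwise overlaps, subtracted from the image area.
    total = sum((x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in boxes)
    overlap = sum(intersection_area(boxes[i], boxes[j])
                  for i in range(len(boxes)) for j in range(i + 1, len(boxes)))
    return img_w * img_h - (total - overlap)

def outside_area_mask(boxes, img_w, img_h):
    # General fallback: rasterise every box into a boolean mask and count uncovered pixels.
    mask = np.zeros((img_h, img_w), dtype=bool)
    for x1, y1, x2, y2 in boxes:
        mask[int(y1):int(y2), int(x1):int(x2)] = True
    return int((~mask).sum())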
If you add a little more information in your question, I could type out a simple function that could do this but right now I don't know all the specifications.

Related

How do I fit rectangles to an image in python and obtain their coordinates

I'm looking for a way to split a number of images into proper rectangles. These rectangles are ideally shaped such that each of them takes on the largest possible size without containing a lot of white.
So let's say that we have the following image
I would like to get an output such as this:
Note the overlapping rectangles, the hole, and the non-axis-aligned rectangle; all of these are likely scenarios I have to deal with.
I'm aiming to get the coordinates describing the corner pieces of the rectangles so something like
[[(73,13),(269,13),(269,47),(73,47)],
[(73,13),(73,210),(109,210),(109,13)],
...]
In order to do this I have already looked at cv2.findContours, but I couldn't get it to work with overlapping rectangles (though I could use the hierarchy model to deal with holes, as they cause the contours to be merged into one).
Note that, although not shown, holes can be nested.
An algorithm that works roughly as follows should be able to give you the result you seek.
1. Get all the corner points in the image.
2. Randomly select 3 points to create a rectangle.
3. Count the ratio of yellow pixels within the rectangle, and accept the rectangle if the ratio satisfies a threshold.
4. Repeat steps 2 to 4 until:
   a) every combination of points has been tried, or
   b) all yellow pixels are accounted for, or
   c) n iterations have been reached.
The difficult part of this algorithm lies in step 2, creating a rectangle from 3 points.
If all the rectangles were axis-aligned, you could simply take the minimum x and y as the top-left corner and the maximum x and y as the bottom-right corner of your new rectangle.
But since you have off-axis rectangles, you will need to check that the two vectors created from the 3 points form a 90-degree angle between them before generating the rectangle.
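A minimal sketch of that check, assuming the three points are (x, y) tuples and the middle point p2 is the shared corner (in practice you may need to try each of the three points as the corner):

import numpy as np

def rectangle_from_three_points(p1, p2, p3, tol=1e-6):
    # The rectangle exists only if the edge vectors p2->p1 and p2->p3 are perpendicular.
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    v1, v2 = p1 - p2, p3 - p2
    if abs(np.dot(v1, v2)) > tol * np.linalg.norm(v1) * np.linalg.norm(v2):
        return None  # no (near) right angle at p2
    p4 = p2 + v1 + v2  # the fourth corner completes the rectangle
    return [tuple(p1), tuple(p2), tuple(p3), tuple(p4)]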

Getting the data in the area between two concentric circles in a picture

I am trying to count all the red dots in the areas between two concentric circles. Finding the red dots is easy - I simply loop over every pixel and check for the red colour - but the problem is working out which contour (which ring between two circles) a dot falls inside, especially when I try to run over all the areas between the circles.
Code as below:
from PIL import Image

img2 = Image.open(r"C:\Python27\Image.png")
pixels = list(img2.getdata())
for pixel in pixels:
    if pixel == (255, 0, 0):
        print(pixel)
Below you can see the sample picture I'm working on to test my algorithm.
If you know where the circles' center is, you simply calculate the distance between each red dot and the center. This tells you which circular band each dot is in.
If you don't know where the circles are, apply a technique for finding circles - the Hough transform, for example.
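A minimal sketch of the distance-based idea, assuming the centre and the circle radii are already known (e.g. from a Hough transform):

import math

def ring_index(point, center, radii):
    # radii sorted ascending; returns 0 for inside the smallest circle,
    # 1 for the band between radii[0] and radii[1], and so on.
    d = math.dist(point, center)
    for i, r in enumerate(sorted(radii)):
        if d <= r:
            return i
    return len(radii)  # outside the largest circle

You can then count the dots per band with something like collections.Counter(ring_index(p, center, radii) for p in red_dots).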
If you start scanning a single row of pixels in the middle of the edge of the image from left to right you can determine when a pixel is black.
When you record a series of white then black then white pixels you know you've found the edge of a circle. Scanning the same row from right to left will let you figure out the opposite side of the circle. Then you can calculate the equation of that circle from the diameter.
If you keep recording each circle as you move towards the center, you'll find the equation of each circle. Then when you find red pixels, you can determine which area they belong to by using the (x, y) coordinates of the red pixel and the equations of the circles.

Detecting shape of a contour and color inside

I am new to opencv using python and trying to get the shape of a contour in an image.
Considering only regular shapes like squares, rectangles, circles and triangles, is there any way to get the contour shape using only the numpy and cv2 libraries?
Also, I want to find the colour inside a contour. How can I do it?
For finding the area of a contour there is an inbuilt function: cv2.contourArea(cnt).
Are there inbuilt functions for "contour shape" and "color inside contour" also?
Please help!
Note: The images I am considering contain multiple regular shapes.
This method might be longer, but it is what comes to mind right now. For finding the contour shape, use the findContours function; it gives a vector of points as output (the boundary points of each contour). Then find the center of the contour using moments.
To find contours, use this function:
cv2.findContours(image, mode, method[, contours[, hierarchy[, offset]]])
image is the canny output image.
calculate center from moments, refer to this link
http://docs.opencv.org/trunk/dd/d49/tutorial_py_contour_features.html
Calculate the distance of each point stored in the contour from the center.
Now classify the shapes by comparing the distances of the points from the center (a rough sketch follows this list):
1) Circle - all contour points will be at roughly equal distance from the center.
2) Square, rectangle - find the 4 points farthest from the center. These will be the vertices and will be at approximately the same distance. Then differentiate a square from a rectangle using the edge lengths.
3) Triangle - this can be tricky because of the different types of triangle, so you can just use an else condition here, since you only have 4 shapes.
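A rough sketch of this classification, assuming OpenCV and using cv2.approxPolyDP to find the vertices rather than the farthest-points search described above; the 0.05 and 0.02 tolerances are illustrative values you would need to tune:

import cv2
import numpy as np

def classify_contour(cnt):
    # Centroid from moments (assumes a non-degenerate contour, i.e. m00 != 0).
    m = cv2.moments(cnt)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    pts = cnt.reshape(-1, 2).astype(float)
    d = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
    if d.std() / d.mean() < 0.05:  # all points roughly equidistant from the centre
        return "circle"
    corners = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
    if len(corners) == 4:
        x, y, w, h = cv2.boundingRect(corners)
        return "square" if abs(w - h) < 0.1 * max(w, h) else "rectangle"
    return "triangle"  # only four shapes are expected, so fall through

# contours = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
# print([classify_contour(c) for c in contours])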
For finding the colour, use the vertices of the square, rectangle or triangle to create a mask.
Since you have a single colour only, you can take a small patch around the center and get the average value of the RGB pixels there.
Assume the center is at (100, 100) and it's a circle with a radius of 20 pixels: create a patch of, say, 10 x 10 pixels centered at (100, 100) and find the average R, G and B values in this patch.
For red: R ~ 255, G ~ 0, B ~ 0
For green: R ~ 0, G ~ 255, B ~ 0
For blue: R ~ 0, G ~ 0, B ~ 255
Note: OpenCV stores values as BGR, not RGB.
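A small sketch of the patch-averaging idea (the 10-pixel patch and the 200/100 thresholds are illustrative values, and the centre is assumed to lie well inside the image):

import numpy as np

def dominant_color(img_bgr, center, patch=10):
    # Average the pixel values in a small patch around the contour centre.
    cx, cy = int(center[0]), int(center[1])
    half = patch // 2
    b, g, r = img_bgr[cy - half:cy + half, cx - half:cx + half].reshape(-1, 3).mean(axis=0)
    # Remember: OpenCV images are BGR, not RGB.
    if r > 200 and g < 100 and b < 100:
        return "red"
    if g > 200 and r < 100 and b < 100:
        return "green"
    if b > 200 and r < 100 and g < 100:
        return "blue"
    return "unknown"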
For finding the shape of a particular contour, we can draw a bounding rectangle around the contour.
Now we can compare the area of the contour with the area of the bounding rectangle.
If the area of the contour is roughly half the area of the bounding rectangle, the shape is a triangle.
If the area of the contour is less than the area of the bounding rectangle but greater than half of it, it's a circle (a circle fills about pi/4, roughly 78.5%, of its bounding square).
Note: this method is limited to regular triangles and circles; it doesn't apply to polygons like hexagons, heptagons, etc.
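A sketch of this ratio test, using the inbuilt cv2.contourArea and cv2.boundingRect functions (the tolerances are assumptions you would tune):

import cv2

def classify_by_fill_ratio(cnt):
    # Compare the contour area with the area of its bounding rectangle.
    x, y, w, h = cv2.boundingRect(cnt)
    ratio = cv2.contourArea(cnt) / float(w * h)
    if abs(ratio - 0.5) < 0.1:    # a triangle fills about half of its bounding box
        return "triangle"
    if abs(ratio - 0.785) < 0.1:  # a circle fills about pi/4 of its bounding box
        return "circle"
    if ratio > 0.9:               # a square or rectangle fills nearly all of it
        return "square" if abs(w - h) < 0.1 * max(w, h) else "rectangle"
    return "unknown"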

Finding the canvas boundary of a photo frame

Say we have a photo frame like the one above.
Starting from the center, how would you find a rectangle with maximum area that can be used to draw on (all pixels in the rectangle must be rgb(255,255,255))?
I need to find the x and y coordinate of point A and B shown in the picture.
One of my approaches is to do this:
starting from the center, expand the boundary like in the graph above.
But I am not sure how you could write loop(s) like that.
You should use the flood fill algorithm: link.
I suggest you use a set to store the pixels to be altered; that way the number of recursions to be done can be reduced.
Edit: I obviously didn't read the question well. Still, the flood fill could be used if you apply it to a circle that is expanded.
1. Start with a single pixel; that is the center of your circle.
2. Make the radius larger by 1 unit.
3. Find the pixels within your circle and get their colours using flood fill.
4. If they are all the same colour, go to step 2. If not, you have the radius; finding the rectangle is next.
This algorithm may give you a possible solution, but there may be more than one, depending on your frame - you should start developing using some simple frame where the correctness of the solution can be judged easily.
Edit: based on the comment, the problem is to find the largest-area axis-parallel rectangle in a polygon - and luckily there is a paper on this: here. Doesn't look like an easy task though.
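An iterative flood-fill sketch in the spirit of the suggestion above, using a queue and a set instead of recursion (match is a hypothetical predicate, e.g. "pixel is white"):

from collections import deque

def flood_fill_region(img, start, match):
    # Breadth-first flood fill: returns the set of connected pixels whose colour satisfies match().
    h, w = len(img), len(img[0])
    seen, queue = {start}, deque([start])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and (nx, ny) not in seen and match(img[ny][nx]):
                seen.add((nx, ny))
                queue.append((nx, ny))
    return seen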
I would use brute force here. Choose a y_bottom and y_top, determine the corresponding x_left and x_right, and loop both y_bottom and y_top over the height of the picture. In pseudocode:
for y_bottom in range(0, H):
    for y_top in range(y_bottom, H):
        # Assume that x = W/2 is part of the rectangle
        x_left  = minimum X such that all pixels in box (x_left, y_top, W/2, y_bottom) are white
        x_right = maximum X such that all pixels in box (W/2, y_top, x_right, y_bottom) are white
        determine Area of box (x_left, y_top, x_right, y_bottom)
        store this box if Area is larger than max found so far
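A runnable version of that brute force, assuming white is a 2-D boolean array that is True where the pixel is rgb(255, 255, 255) (naive and slow, purely for illustration):

import numpy as np

def largest_white_box_through_center(white):
    H, W = white.shape
    mid = W // 2
    best_area, best_box = 0, None
    for y_top in range(H):
        for y_bottom in range(y_top + 1, H):
            cols = white[y_top:y_bottom].all(axis=0)  # columns entirely white in this band
            if not cols[mid]:
                continue  # the centre column must be part of the rectangle
            x_left = mid
            while x_left > 0 and cols[x_left - 1]:
                x_left -= 1
            x_right = mid
            while x_right < W - 1 and cols[x_right + 1]:
                x_right += 1
            area = (x_right - x_left + 1) * (y_bottom - y_top)
            if area > best_area:
                best_area, best_box = area, (x_left, y_top, x_right, y_bottom)
    return best_box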

Approximating surface area from an image with Python

I plan to calculate the surface area of this cone by
Splitting the image into top/bottom halves of the cone
Finding the brightest spot on top/bottom halves
Finding the distance between the brightest spots on the top/bottom halves as a diameter for every pixel along the x axis, and using it to calculate a dS for the total surface area S
However, this appears unreliable at the extremities (tip and base). How can I make it more reliable at the base/tip? Or is my approach entirely wrong?
Edit: I want it to truncate in the black space, on both ends
I would try applying a filter that makes cone pixels white and all other pixels black (i.e. produce a binary image). After that, the area of the cone is just the sum of the white pixels.
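A minimal sketch of that idea (the file name is hypothetical, Otsu's method is just one possible way to choose the threshold, and the count gives the projected area of the cone in pixels):

import cv2

img = cv2.imread("cone.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print(cv2.countNonZero(binary))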
