Calculating Area of Irregular Curved Polygon in Python

I know the x, y coordinates of the blue dots shown in the image. How can I calculate the area of an irregular shape bounded by these points? The number of points on the upper and lower surfaces is different.

As suggested by @Fractalism, you can use the Shoelace formula.
Here is a Python implementation using NumPy. I took it from here:
import numpy as np

# x, y are arrays containing the coordinates of the polygon vertices
x = np.array([3, 5, 12, 9, 5])
y = np.array([4, 11, 8, 5, 6])
i = np.arange(len(x))

# Signed area, positive if the vertex sequence is counterclockwise:
# Area = np.sum(x[i-1]*y[i] - x[i]*y[i-1]) * 0.5

# One line of code for the shoelace formula; x[i-1] wraps around via NumPy's
# negative indexing, which closes the polygon:
Area = np.abs(np.sum(x[i-1]*y[i] - x[i]*y[i-1]) * 0.5)
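As an optional sanity check (a minimal sketch, assuming Shapely is installed; this is not part of the original answer), the same vertices can be handed to Shapely, which should print the same value, 30.0, for the sample data above:

from shapely.geometry import Polygon

# Cross-check the shoelace result with Shapely:
print(Polygon(list(zip(x, y))).area)  # should match Area above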

Related

Use GeoPandas / Shapely to find intersection area of polygons defined by latitude and longitude coordinates

I have two GeoDataFrames, left and right, with many polygons in them. Now I am trying to find the total intersection area of each polygon in left, with all polygons in right.
I've managed to get the indices of the intersecting polygons in right for each polygon in left using gpd.sjoin, so I compute the intersection area using:
area = left.iloc[i].geometry.intersection(right.iloc[idx].geometry).area
Where i and idx are the indices of the intersecting polygons in the two GDFs (let's assume the left poly only intersects with one poly in right). The problem is that the area value I get does not seem correct in any way, and I don't know what units it has. The CRS of both GeoDataFrames is EPSG:4326, i.e. the standard WGS84 projection, and the polygon coordinates are defined in degrees latitude and longitude.
Does anyone know what units the computed area then has? Or does this not work and do I need to convert them to a different projection before computing the area?
Thanks for the help!
I fixed it by using the EPSG:6933 projection instead, which is an area preserving map projection and returns the area in square metres (EPSG:4326 does not preserve areas, so is not suitable for area calculations). I could just change my GDF to this projection using
gdf.to_crs(epsg=6933)
And then compute the area in the same way as above.
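For completeness, a minimal sketch of that fix; the two toy polygons below are hypothetical placeholders for the question's left and right GDFs:

import geopandas as gpd
from shapely.geometry import Polygon

# Hypothetical stand-ins for the question's GeoDataFrames, in EPSG:4326:
left = gpd.GeoDataFrame(
    geometry=[Polygon([(4.0, 52.0), (4.1, 52.0), (4.1, 52.1), (4.0, 52.1)])],
    crs="EPSG:4326")
right = gpd.GeoDataFrame(
    geometry=[Polygon([(4.05, 52.0), (4.2, 52.0), (4.2, 52.1), (4.05, 52.1)])],
    crs="EPSG:4326")

# Reproject to the equal-area EPSG:6933 before measuring, then intersect:
left_m = left.to_crs(epsg=6933)
right_m = right.to_crs(epsg=6933)
area_m2 = left_m.iloc[0].geometry.intersection(right_m.iloc[0].geometry).area
print(area_m2)  # intersection area in square metres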

How to find the rotation matrix with known points

The corner points of a rectangle are given in 2D coordinates. I also know the real distances between the points, and I have the camera matrix.
Now I want to find the rotation vector with respect to the camera, but without using the cv2.calibrateCamera() method with the chessboard corners.
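One standard way to do this (a minimal sketch, not from the original thread; every numeric value below is a hypothetical placeholder) is cv2.solvePnP: build the rectangle's 3D corners from the known real distances, pair them with the detected 2D corners, and solve for the pose:

import numpy as np
import cv2

# 3D corners of the rectangle in its own frame (e.g. metres), from the known size:
obj_pts = np.array([[0, 0, 0], [0.3, 0, 0], [0.3, 0.2, 0], [0, 0.2, 0]], dtype=np.float64)
# The four detected corner pixels, in the same order (placeholder values):
img_pts = np.array([[320, 240], [480, 250], [470, 360], [315, 350]], dtype=np.float64)
# The known camera matrix (placeholder values), zero distortion for simplicity:
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None)
R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix from the rotation vector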

Projecting a Texture Mask onto an existing 3D Mesh given the camera extrinsics

Given an image mask, I want to project the pixels onto a mesh with respect to the position and orientation of the camera, and convert these pixels into a pointcloud. I have the intrinsic and extrinsic parameters of the camera with respect to the world, and the location of the mesh in world coordinates. I know the mapping from world coordinates to camera image is as follows:
imgpoint = Intrinsic * Extrinsic * worldpoint
So when I want to do the opposite, I apply the inverses of the intrinsic and extrinsic matrices (in reversed order):
worldpoint = Extrinsic^(-1) * Intrinsic^(-1) * imgpoint
However, the idea that I had was to obtain two points from one pixel, with different depth values, to obtain a line, and then look for the closest intersection of that line with the mesh. But I do not know how to properly generate a point away from the original camera plane. How can I find this extra point, and/or am I overcomplicating this problem?
The first equation below shows how to project a point (x, y, z) onto a pixel (u, v). The extrinsic parameters are the 3x3 rotation matrix R and the translation t. The intrinsic parameters are the focal distances f_x, f_y and the principal point (c_x, c_y), which form the matrix K; the value alpha is the perspective foreshortening term that is divided out:

alpha * [u, v, 1]^T = K * (R * [x, y, z]^T + t),    with K = [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]]

The second equation reverses the process: it describes the ray from the camera position through the pixel (u, v) out into the scene, as the parameter alpha varies from 0 to infinity:

[x, y, z]^T = R^T * (alpha * K^(-1) * [u, v, 1]^T - t)
Now we have converted the problem into a ray casting problem: find the intersection of the ray with your mesh, which is a standard computer graphics problem.
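A minimal sketch of that ray cast, assuming the trimesh library and a hypothetical mesh file mesh.obj; K, R, t follow the notation above, and all numbers are placeholders:

import numpy as np
import trimesh

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R = np.eye(3)          # extrinsic rotation (world -> camera), placeholder
t = np.zeros(3)        # extrinsic translation, placeholder
u, v = 100, 150        # the pixel to back-project

# Camera centre (alpha = 0) and ray direction in world coordinates,
# per the second equation above:
origin = -R.T @ t
direction = R.T @ (np.linalg.inv(K) @ np.array([u, v, 1.0]))
direction /= np.linalg.norm(direction)

mesh = trimesh.load('mesh.obj')  # hypothetical file; must load as a single Trimesh
locations, index_ray, index_tri = mesh.ray.intersects_location(
    ray_origins=[origin], ray_directions=[direction])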

What do negative homogeneous coordinates signify?

What do negative homogeneous coordinates signify? Why are they not useful in bi-linear interpolation of image A onto the plane of another image B?
Homogeneous coordinates can be calculated as follows, for a 3x3 homography H mapping image A onto the plane of image B:

[x', y', w']^T = H * [x, y, 1]^T,    with final image coordinates (x'/w', y'/w')

But I am unable to understand what negative homogeneous coordinates mean!
During the interpolation process, I tried using both negative and positive homogeneous coordinates, and then only positive ones. With only positive homogeneous coordinates the projection worked as expected, but with both negative and positive it didn't work well: the image being projected onto the other image occupied the whole output image.
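A minimal sketch of the effect, with a hypothetical homography H: after applying H each point gets a w component, and points with w <= 0 lie behind the camera/plane, so dividing by a negative w flips them across the image, which produces exactly the kind of artefact described above. Filtering them out keeps only geometrically valid projections:

import numpy as np

# Hypothetical homography; the third row makes far-away points get w < 0:
H = np.array([[1.0,   0.02,  5.0],
              [0.01,  1.0,  -3.0],
              [-1e-3, 0.0,   1.0]])

pts = np.array([[10, 20, 1], [3000, 40, 1]], dtype=np.float64).T  # homogeneous columns
mapped = H @ pts
w = mapped[2]          # w of the first point is positive, of the second negative

valid = w > 0          # discard points that project behind the plane
uv = mapped[:2, valid] / w[valid]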

How can I select the pixels that fall within a contour in an image represented by a numpy array?

I have a set of contour points drawn on an image which is stored as a 2D numpy array. The contour is represented by two numpy arrays of float values, one for the x and one for the y coordinates. These coordinates are not integers and do not align perfectly with pixels, but they do tell you the location of the contour points with respect to pixels.
I would like to be able to select the pixels that fall within the contours. I wrote some code that is pretty much the same as answer given here: Access pixel values within a contour boundary using OpenCV in Python
import numpy as np
import cv2

temp_list = []
for a, b in zip(x_pixel_nos, y_pixel_nos):
    temp_list.append([[a, b]])  # 2D array of shape 1x2
temp_array = np.array(temp_list)
contour_array_list = []
contour_array_list.append(temp_array)
lst_intensities = []
# For each list of contour points...
for i in range(len(contour_array_list)):
    # Create a mask image that contains the contour filled in
    cimg = np.zeros_like(pixel_array)
    cv2.drawContours(cimg, contour_array_list, i, color=255, thickness=-1)
    # Access the image pixels and create a 1D numpy array then add to list
    pts = np.where(cimg == 255)
    lst_intensities.append(pixel_array[pts[0], pts[1]])
When I run this, I get the error: OpenCV(3.4.1) /opt/conda/conda-bld/opencv-suite_1527005509093/work/modules/imgproc/src/drawing.cpp:2515: error: (-215) npoints > 0 in function drawContours
I am guessing that at this point OpenCV will not work for me because my contours are floats, not integers, which drawContours does not handle. If I convert the coordinates of the contours to integers, I lose a lot of precision.
So how can I get at the pixels that fall within the contours?
This should be a trivial task but so far I was not able to find an easy way to do it.
I think that the simplest way of finding all pixels that fall within the contour is as follows.
The contour is described by a set of non-integer points. We can think of these points as vertices of a polygon; the contour is a polygon.
We first find the bounding box of the polygon. Any pixel outside of this bounding box is not inside the polygon, and doesn't need to be considered.
For the pixels inside the bounding box, we test if they are inside the polygon using the classical test: Trace a line from some point at infinity to the point, and count the number of polygon edges (line segments) crossed. If this number is odd, the point is inside the polygon. It turns out that Matplotlib contains a very efficient implementation of this algorithm.
I'm still getting used to Python and NumPy, so this might be a bit awkward code if you're a Python expert. But it is straightforward what it does, I think. First it computes the bounding box of the polygon, then it creates an array points with the coordinates of all pixels that fall within this bounding box (I'm assuming the pixel centroid is what counts). It applies the matplotlib.path.Path.contains_points method to this array, yielding a boolean array mask. Finally, it reshapes this array to match the bounding box.
import math
import matplotlib.path
import numpy as np

x_pixel_nos = [...]
y_pixel_nos = [...]  # Data from https://gist.github.com/sdoken/173fae1f9d8673ffff5b481b3872a69d

temp_list = []
for a, b in zip(x_pixel_nos, y_pixel_nos):
    temp_list.append([a, b])

polygon = np.array(temp_list)
left = np.min(polygon, axis=0)
right = np.max(polygon, axis=0)
x = np.arange(math.ceil(left[0]), math.floor(right[0])+1)
y = np.arange(math.ceil(left[1]), math.floor(right[1])+1)
xv, yv = np.meshgrid(x, y, indexing='xy')
points = np.hstack((xv.reshape((-1,1)), yv.reshape((-1,1))))
path = matplotlib.path.Path(polygon)
mask = path.contains_points(points)
mask.shape = xv.shape
After this code, what is necessary is to locate the bounding box within the image, and color the pixels. left contains the pixel in the image corresponding to the top-left pixel of mask.
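A minimal sketch of that placement step, assuming image is the name of the 2D pixel array the contour lives in:

x0, y0 = math.ceil(left[0]), math.ceil(left[1])  # image position of mask[0, 0]
inside = image[y0:y0 + mask.shape[0], x0:x0 + mask.shape[1]][mask]  # pixel values inside the contour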
It is possible to improve the performance of this algorithm. If the ray traced to test a pixel is horizontal, you can imagine that all the pixels along a horizontal line can benefit from the work done for the pixels to the left. That is, it is possible to compute the in/out status for all pixels on an image line with a little bit more effort than the cost for a single pixel.
The matplotlib.path.contains_points algorithm is much more efficient than performing a single-point test for all points, since sorting the polygon edges and vertices appropriately makes each test much cheaper, and that sorting only needs to be done once when testing many points at once. But this algorithm doesn't take into account that we want to test many points on the same line.
This is what I see when I do

pp.plot(x_pixel_nos, y_pixel_nos)
pp.imshow(mask)

after running the code above with your data (with matplotlib.pyplot imported as pp). Note that the y axis is inverted with imshow, hence the vertically mirrored shapes.
With the help of the Shapely library in Python, this can easily be done as follows:
from shapely.geometry import Point, Polygon
Convert the x, y coords to a Shapely Polygon:
coords = [(0, 0), (0, 2), (1, 1), (2, 2), (2, 0), (1, 1), (0, 0)]
pl = Polygon(coords)
Now find the pixels inside the polygon:
minx, miny, maxx, maxy = pl.bounds
minx, miny, maxx, maxy = int(minx), int(miny), int(maxx), int(maxy)
box_patch = [[x, y] for x in range(minx, maxx+1) for y in range(miny, maxy+1)]
pixels = []
for pb in box_patch:
    pt = Point(pb[0], pb[1])
    if pl.contains(pt):
        pixels.append([int(pb[0]), int(pb[1])])
# pixels now holds all integer pixel coordinates inside the polygon
Repeat this loop for each set of coords, i.e. once per polygon.
Good to go :)
skimage.draw.polygon can handle this; see the example code of this function on its documentation page.
If you want just the contour, you can use skimage.segmentation.find_boundaries.
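For reference, a minimal sketch of the skimage route, reusing the question's x_pixel_nos, y_pixel_nos and pixel_array:

from skimage.draw import polygon

# Rasterise the (float) contour directly; rows come from y, columns from x.
rr, cc = polygon(y_pixel_nos, x_pixel_nos, shape=pixel_array.shape)
inside_values = pixel_array[rr, cc]  # values of all pixels inside the contour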
