Computing the Bounding Sphere for a 3D Mesh in Python

As part of writing a 3D game library, I am trying to implement frustum culling in order to avoid rendering objects that are outside of the camera's perspective frustum. To do this, I first need to calculate a bounding sphere for each mesh and see if it collides with any of the six sides of the viewing frustum. Here is my current (very) naive implementation of computing the bounding sphere for each model, as written in model.py in my code:
from pyorama.entity import Entity
from pyorama.math3d.vec3 import Vec3
from pyorama.math3d.mat4 import Mat4
from pyorama.physics.sphere import Sphere
import math
import numpy as np
import itertools as its

class Model(Entity):

    def __init__(self, mesh, texture, transform=Mat4.identity()):
        super(Model, self).__init__()
        self.mesh = mesh
        self.texture = texture
        self.transform = transform

    def compute_bounding_sphere(self):
        vertex_data = self.mesh.vertex_buffer.get_data()
        vertices = []
        for i in range(0, len(vertex_data), 3):
            vertex = Vec3(vertex_data[i: i+3])
            vertices.append(vertex)
        max_pair = None
        max_dist = 0
        for a, b in its.combinations(vertices, 2):
            dist = Vec3.square_distance(a, b)
            if dist > max_dist:
                max_pair = (a, b)
                max_dist = dist
        radius = math.sqrt(max_dist)/2.0
        center = Vec3.lerp(max_pair[0], max_pair[1], 0.5)
        return Sphere(center, radius)
I am just taking pairwise points from my mesh and using the largest distance I find as my diameter. Calling this on 100 simple cube test models every frame is extremely slow, driving my frame rate from 120 fps to 1 fps! This is not surprising since I assume the time complexity for this pairwise code is O(n^2).
My question is: what algorithm is fast and reasonably simple to implement for computing (at least) an approximate bounding sphere from a set of 3D points in a mesh? I looked at this Wikipedia page and saw there was an algorithm called "Ritter's bounding sphere." However, this requires me to choose some random point x in the mesh and hope that it is the approximate center so that I get a reasonably tight bounding sphere. Is there a fast method for choosing a good starting point x? Any help or advice would be greatly appreciated!
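For context, my current reading of Ritter's algorithm in plain NumPy looks roughly like this (a sketch only, not code from my library; the seed point is arbitrary, since the two farthest-point passes produce the initial guess):

import numpy as np

def ritter_bounding_sphere(points):
    # points: (N, 3) array of mesh vertices.
    points = np.asarray(points, dtype=float)
    # Start from an arbitrary vertex x, take the vertex y farthest from x,
    # then the vertex z farthest from y; (y, z) approximates the diameter.
    x = points[0]
    y = points[np.argmax(((points - x)**2).sum(axis=1))]
    z = points[np.argmax(((points - y)**2).sum(axis=1))]
    center = (y + z)/2.0
    radius = np.linalg.norm(z - y)/2.0
    # Grow the sphere just enough to swallow any vertex still left outside.
    for p in points:
        d = np.linalg.norm(p - center)
        if d > radius:
            radius = (radius + d)/2.0
            center += (d - radius)/d * (p - center)
    return center, radius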
UPDATE:
Following #Aaron3468's answer, here is the code in my library that would calculate the bounding box and the corresponding bounding sphere:
from pyorama.entity import Entity
from pyorama.math3d.vec3 import Vec3
from pyorama.math3d.mat4 import Mat4
from pyorama.physics.sphere import Sphere
from pyorama.physics.box import Box
import math
import numpy as np
import itertools as its

class Model(Entity):

    def __init__(self, mesh, texture, transform=Mat4.identity()):
        super(Model, self).__init__()
        self.mesh = mesh
        self.texture = texture
        self.transform = transform

    def compute_bounding_sphere(self):
        box = self.compute_bounding_box()
        a, b = box.min_corner, box.max_corner
        radius = Vec3.distance(a, b)/2.0
        center = Vec3.lerp(a, b, 0.5)
        return Sphere(center, radius)

    def compute_bounding_box(self):
        vertex_data = self.mesh.vertex_buffer.get_data()
        max_corner = Vec3(vertex_data[0:3])
        min_corner = Vec3(vertex_data[0:3])
        for i in range(0, len(vertex_data), 3):
            vertex = Vec3(vertex_data[i: i+3])
            min_corner = Vec3.min_components(vertex, min_corner)
            max_corner = Vec3.max_components(vertex, max_corner)
        return Box(min_corner, max_corner)

Iterate over the vertices once and collect the highest and lowest value for each dimension. This creates a bounding box made of Vec3(lowest.x, lowest.y, lowest.z) and Vec3(highest.x, highest.y, highest.z).
Use the midpoint of the highest and lowest value for each dimension. This gives the center of the box as Vec3((lowest.x + highest.x)/2, ...).
Then get the Euclidean distance between the center and each of the 8 corners of the box. Use the largest distance and the center you found to make a bounding sphere.
You've only iterated once through the data set and have a good approximation of the bounding sphere!
A bounding sphere computed this way is almost certainly going to be bigger than the mesh. To shrink it, you can set the radius to the distance along the widest dimension from the center, but this risks chopping off faces that sit in the corners.
You can iteratively shrink the radius and check that all points are in the bounding sphere, but then you approach worse performance than your original algorithm.
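For completeness, here is a vectorized NumPy sketch of this approach (an adaptation, not code from the library above): instead of shrinking to the widest dimension, the radius is set to the distance from the box center to the farthest actual vertex, which is never larger than half the box diagonal, still encloses every vertex, and costs only one extra vectorized pass:

import numpy as np

def bounding_sphere_from_box(vertex_data):
    # vertex_data is assumed to be a flat array of xyz triples,
    # as returned by the vertex buffer in the question.
    points = np.asarray(vertex_data, dtype=float).reshape(-1, 3)
    lo, hi = points.min(axis=0), points.max(axis=0)   # bounding-box corners
    center = (lo + hi)/2.0                            # box center
    # Farthest vertex from the center: tighter than the half-diagonal,
    # but still guaranteed to enclose the whole mesh.
    radius = np.sqrt(((points - center)**2).sum(axis=1).max())
    return center, radius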

Related

Place ellipses in a rectangle without overlapping each other

How do I randomly fill a given rectangle with bottom-left corner (0, 0) and top-right corner (350, 250) with ellipses of random sizes, without the ellipses overlapping each other, using NumPy,
and then get the data of the placed ellipses as center point, radius1 and radius2?
I saw this method but didn't get much out of it:
How to randomly fill a region with non-overlapping rectangles using NumPy?
What I have tried is shown below. I have generated the radii of the required ellipses, but I have no idea how to place them all in the rectangle without overlapping.
import numpy as np
import math
import random

Area_of_quad = 350 * 250
Af_ell = 0.4  # area fraction of ellipses
Area_of_Ell = Af_ell*Area_of_quad
Ell_range_r1 = np.array([20, 12.5, 10])
Ell_range_r2 = np.array([0.6, 0.7, 0.8, 0.9])
Area_of_ell_each = Area_of_Ell/len(Ell_range_r1)
rad = []
for i in range(len(Ell_range_r1)):
    area = Area_of_ell_each
    while area >= 0.01*Area_of_ell_each:
        r1 = Ell_range_r1[i]
        r2 = random.choice(Ell_range_r2)*r1
        Ar_ac = math.pi*r1*r2
        area -= Ar_ac
        rad.append([r1, r2])
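One common way to handle the placement itself (not part of the attempt above) is rejection sampling: draw a random center and orientation for each (r1, r2) pair and keep it only if it does not overlap anything already placed. A minimal sketch using Shapely for the overlap test (a scaled, rotated buffer of a point approximates the ellipse as a polygon):

import random
from shapely.geometry import Point
from shapely import affinity

placed = []  # Shapely polygons approximating the accepted ellipses

def try_place(r1, r2, width=350, height=250, attempts=1000):
    # Try to place one (r1, r2) ellipse in the rectangle without overlap.
    for _ in range(attempts):
        # Bounding the center by r1 keeps the ellipse inside the rectangle
        # for any rotation, since r2 <= r1.
        cx = random.uniform(r1, width - r1)
        cy = random.uniform(r1, height - r1)
        angle = random.uniform(0, 180)
        # Unit circle -> scaled to (r1, r2) -> rotated: an approximate ellipse.
        ell = affinity.rotate(affinity.scale(Point(cx, cy).buffer(1.0), r1, r2), angle)
        if all(not ell.intersects(other) for other in placed):
            placed.append(ell)
            return ell
    return None  # no non-overlapping position found

for r1, r2 in rad:  # rad comes from the code above
    try_place(r1, r2)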

Fitting an ellipse in images with poor contrast

I am working on image processing in Python, on the topic of underwater photogrammetry. My goal is to fit an ellipse to fiducial markers, and retrieve its a) center, b) axes and c) orientation.
My markers are radial, white on a black background, and some have a binary code.
An ML model delivers a small image snippet for each marker in each image, containing only the center of the marker.
So far, I've implemented these approaches:
Using OpenCV (see the sketch below):
a) Thresholding, which results in a binary image (cv2.threshold)
b) Find contours (cv2.findContours)
c) Fit ellipse (cv2.fitEllipse)
Using scikit-image:
a) Detect edges (using Canny)
b) Apply the Hough transform
Star operator (work in progress):
a) Estimate the ellipse center
b) Send 360 rays in all directions
c) Build an array comprising the coordinates of the largest gradient on each ray
d) Calculate the best-fit ellipse using a least-squares method
e) Use the new center to repeat the process (possibly several iterations required)
I perform these methods for each color channel separately. So far, the results between channels differ by several pixels for the ellipse center.
Do you have any suggestions on what pre-processing methods I should use prior to detecting/fitting the ellipse?
Any thoughts on which of the above methods will lead to the most accurate results?
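For reference, a minimal sketch of the OpenCV route listed above (assuming OpenCV 4.x, a greyscale snippet image, and one dominant marker per snippet):

import cv2

def fit_marker_ellipse(image):
    # Global threshold at the mean intensity, as in the binarization attempt below.
    T = int(cv2.mean(image)[0])
    _, binary = cv2.threshold(image, T, 255, cv2.THRESH_BINARY)
    # OpenCV 4.x returns (contours, hierarchy); 3.x prepends the image.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    # Keep the largest contour; fitEllipse needs at least 5 points.
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), (major, minor), angle = cv2.fitEllipse(largest)
    return (cx, cy), (major, minor), angle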
This is amazing! Thank you. I just started to read about moments (e.g. https://www.pythonpool.com/opencv-moments/) and inertia.
However, there is a challenge in applying your code to this example:
As you can see, the image was poorly cropped, and the inertia of the image is more in the image center than in the center of the expected ellipse.
My first attempt to fix this is to binarize the image first:
import cv2
T = int(cv2.mean(image)[0])
ret, image = cv2.threshold(image, T, 255, 0)
Is that a reasonable approach? I fear that the binarization will have an unwanted impact on the moments of inertia. Thank you for clarifying.
This code finds the center of mass of the image, and the main axis of symmetry by calculating the moments of inertia.
I tried many libraries that calculate moments of inertia of images, but they give strange results (like a 4x4 matrix for what should be a 2x2 inertia matrix).
Also, ndimage.measurements.center_of_mass() appears to return (Cy, Cx) (row, column).
So I resorted to manually calculating the moments of inertia:
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image as Pim
from io import BytesIO
import requests

photoURL = "https://i.stack.imgur.com/EcLYk.png"
response = requests.get(photoURL)
image = np.array(Pim.open(BytesIO(response.content)).convert('L'))  # Convert to greyscale
plt.imshow(image)

if True:  # calculate eigenvectors = main axes of inertia
    # xCoord, yCoord are the column and row numbers in image
    xCoord, yCoord = np.meshgrid(np.arange(image.shape[1]), np.arange(image.shape[0]))
    # mass M is the total sum of image
    M = np.sum(image)
    # Cx, Cy are the coordinates of the center of mass
    # Cx = sum(xCoord * image) / sum(image)
    Cx = np.einsum('ij,ij', xCoord, image)/M
    Cy = np.einsum('ij,ij', yCoord, image)/M
    # Ixx is the second-order moment of the image with respect to the horizontal axis through the center of mass
    # Ixx = sum(image*y^2)
    Ixx = np.einsum('ij,ij,ij', yCoord-Cy, yCoord-Cy, image)
    # Iyy is the second-order moment of the image with respect to the vertical axis through the center of mass
    # Iyy = sum(image*x^2)
    Iyy = np.einsum('ij,ij,ij', xCoord-Cx, xCoord-Cx, image)
    # Ixy is the second-order moment of the image with respect to both axes through the center of mass
    # Ixy = sum(image*x*y)
    Ixy = np.einsum('ij,ij,ij', xCoord-Cx, yCoord-Cy, image)
    inertiaMatrix = np.array([[Ixx, Ixy],
                              [Ixy, Iyy]])
    eigValues, eigVectors = np.linalg.eig(inertiaMatrix)
    # Plot center of mass
    plt.scatter(Cx, Cy, c='r')
    # Plot eigenvectors from the center in the direction of the eigenvectors
    plt.quiver(Cx, Cy, eigVectors[0, 0], eigVectors[1, 0], color='r', scale=2)
    plt.quiver(Cx, Cy, eigVectors[0, 1], eigVectors[1, 1], color='r', scale=2)
    plt.show()

nothing = 0
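For comparison, a similar center and orientation can be read off image moments directly (a sketch, using cv2.moments mentioned in the comment above; the same greyscale image array is assumed):

import cv2
import numpy as np

m = cv2.moments(image)  # raw and central moments of the greyscale image
cx, cy = m['m10']/m['m00'], m['m01']/m['m00']                # centroid
theta = 0.5*np.arctan2(2*m['mu11'], m['mu20'] - m['mu02'])   # orientation of the principal axis
print("centroid: ({:.1f}, {:.1f}), orientation: {:.1f} deg".format(cx, cy, np.degrees(theta)))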

Determining the average colour of a given circular sample of an image

What I am trying to achieve is similar to photoshop/gimp's eyedropper tool: take a round sample of a given area in an image and return the average colour of that circular sample.
The simplest method I have found is to take a 'regular' square sample, mask it as a circle, then reduce it to 1 pixel, but this is very CPU-demanding (especially when repeated millions of times).
A more mathematically complex method is to take a square area and average only the pixels that fall within a circular area within that sample, but determining what pixel is or isn't within that circle, repeated, is CPU-demanding as well.
Is there a more succinct, less-CPU-demanding means to achieve this?
Here's a little example of skimage.draw.circle(), which doesn't actually draw a circle but gives you the coordinates of points within a circle, which you can use to index NumPy arrays.
#!/usr/bin/env python3
import numpy as np
from skimage.io import imsave
from skimage.draw import circle
# Make rectangular canvas of mid-grey
w, h = 200, 100
img = np.full((h, w), 128, dtype=np.uint8)
# Get coordinates of points within a central circle
Ycoords, Xcoords = circle(h//2, w//2, 45)
# Make all points in circle=200, i.e. fill circle with 200
img[Ycoords, Xcoords] = 200
# Get mean of points in circle
print(img[Ycoords, Xcoords].mean()) # prints 200.0
# DEBUG: Save image for checking
imsave('result.png',img)
I'm sure that there's a more succinct way to go about it, but:
import math
import numpy as np
import imageio as ioimg  # as scipy's i/o function is now deprecated
from skimage.draw import circle
import matplotlib.pyplot as plt

# base sample dimensions (rest below calculated on this).
# Must be an odd number.
wh = 49
# tmp - this placement will be programmed later
dp = 500

# load work image (from same work directory)
img = ioimg.imread('830.jpg')
# convert to numpy array (dropping the alpha while we're at it)
np_img = np.array(img)[:, :, :3]
# take sample of resulting array
sample = np_img[dp:wh+dp, dp:wh+dp]

# ==============
# set up numpy circle mask
# this mask will be multiplied against each RGB layer in the extracted sample area

# set up basic square array
sample_mask = np.zeros((wh, wh), dtype=np.uint8)
# set up circle centre coords and radius values
xy, r = math.floor(wh/2), math.ceil(wh/2)
# use these values to populate circle area with ones
rr, cc = circle(xy, xy, r)
sample_mask[rr, cc] = 1
# add axis to make array multiplication possible (do I have to do this?)
sample_mask = sample_mask[:, :, np.newaxis]
result = sample * sample_mask

# count number of nonzero values (this will be our mean divisor)
nz = np.count_nonzero(sample_mask)
sample_color = []
for c in range(result.shape[2]):
    sample_color.append(int(round(np.sum(result[:, :, c])/nz)))

print(sample_color)  # will return array like [225, 205, 170]
plt.imshow(result, interpolation='nearest')
plt.show()
Perhaps asking this question here wasn't necessary (it has been a while since I've used Python, and I was hoping some new library had been developed for this since), but I hope this can serve as a reference for others with the same goal.
This operation will be performed for every pixel in the image (sometimes millions of times) for thousands of images (scanned pages), hence my performance worries, but thanks to NumPy this code is pretty quick.
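For what it's worth, a more succinct variant of the same idea, using a boolean mask built with np.ogrid instead of a multiplied mask layer (a sketch; img can be greyscale or RGB):

import numpy as np

def circular_mean(img, cy, cx, r):
    # Boolean mask of all pixels whose centres lie within radius r of (cy, cx).
    h, w = img.shape[:2]
    Y, X = np.ogrid[:h, :w]
    mask = (Y - cy)**2 + (X - cx)**2 <= r**2
    # Fancy indexing with the 2D mask yields the pixels inside the circle;
    # the mean is per channel for RGB input, scalar for greyscale.
    return img[mask].mean(axis=0)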

How can I select the pixels that fall within a contour in an image represented by a numpy array?

I have a set of contour points drawn on an image which is stored as a 2D numpy array. The contours are represented by 2 numpy arrays of float values for x and y coordinates each. These coordinates are not integers and do not align perfectly with pixels, but they do tell you the location of the contour points with respect to pixels.
I would like to be able to select the pixels that fall within the contours. I wrote some code that is pretty much the same as answer given here: Access pixel values within a contour boundary using OpenCV in Python
temp_list = []
for a, b in zip(x_pixel_nos, y_pixel_nos):
    temp_list.append([[a, b]])  # 2D array of shape 1x2

temp_array = np.array(temp_list)
contour_array_list = []
contour_array_list.append(temp_array)

lst_intensities = []
# For each list of contour points...
for i in range(len(contour_array_list)):
    # Create a mask image that contains the contour filled in
    cimg = np.zeros_like(pixel_array)
    cv2.drawContours(cimg, contour_array_list, i, color=255, thickness=-1)
    # Access the image pixels and create a 1D numpy array then add to list
    pts = np.where(cimg == 255)
    lst_intensities.append(pixel_array[pts[0], pts[1]])
When I run this, I get the error:
error: OpenCV(3.4.1) /opt/conda/conda-bld/opencv-suite_1527005509093/work/modules/imgproc/src/drawing.cpp:2515: error: (-215) npoints > 0 in function drawContours
I am guessing that at this point OpenCV will not work for me because my contours are floats, not integers, which OpenCV's drawContours does not handle. If I convert the coordinates of the contours to integers, I lose a lot of precision.
So how can I get at the pixels that fall within the contours?
This should be a trivial task but so far I was not able to find an easy way to do it.
I think that the simplest way of finding all pixels that fall within the contour is as follows.
The contour is described by a set of non-integer points. We can think of these points as vertices of a polygon, the contour is a polygon.
We first find the bounding box of the polygon. Any pixel outside of this bounding box is not inside the polygon, and doesn't need to be considered.
For the pixels inside the bounding box, we test if they are inside the polygon using the classical test: Trace a line from some point at infinity to the point, and count the number of polygon edges (line segments) crossed. If this number is odd, the point is inside the polygon. It turns out that Matplotlib contains a very efficient implementation of this algorithm.
I'm still getting used to Python and NumPy; this might be a bit awkward code if you're a Python expert, but I think it is straightforward what it does. First it computes the bounding box of the polygon, then it creates an array points with the coordinates of all pixels that fall within this bounding box (I'm assuming the pixel centroid is what counts). It applies the matplotlib.path.contains_points method to this array, yielding a boolean array mask. Finally, it reshapes this array to match the bounding box.
import math
import matplotlib.path
import numpy as np

x_pixel_nos = [...]
y_pixel_nos = [...]  # Data from https://gist.github.com/sdoken/173fae1f9d8673ffff5b481b3872a69d

temp_list = []
for a, b in zip(x_pixel_nos, y_pixel_nos):
    temp_list.append([a, b])

polygon = np.array(temp_list)
left = np.min(polygon, axis=0)
right = np.max(polygon, axis=0)
x = np.arange(math.ceil(left[0]), math.floor(right[0])+1)
y = np.arange(math.ceil(left[1]), math.floor(right[1])+1)
xv, yv = np.meshgrid(x, y, indexing='xy')
points = np.hstack((xv.reshape((-1, 1)), yv.reshape((-1, 1))))
path = matplotlib.path.Path(polygon)
mask = path.contains_points(points)
mask.shape = xv.shape
After this code, what is necessary is to locate the bounding box within the image, and color the pixels. left contains the pixel in the image corresponding to the top-left pixel of mask.
It is possible to improve the performance of this algorithm. If the ray traced to test a pixel is horizontal, you can imagine that all the pixels along a horizontal line can benefit from the work done for the pixels to the left. That is, it is possible to compute the in/out status for all pixels on an image line with a little bit more effort than the cost for a single pixel.
The matplotlib.path.contains_points algorithm is much more efficient than performing a single-point test for all points, since sorting the polygon edges and vertices appropriately makes each test much cheaper, and that sorting only needs to be done once when testing many points at once. But this algorithm doesn't take into account that we want to test many points on the same line.
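For example, to pull the selected pixel values out of the original image (a sketch; pixel_array is the image from the question, and x is assumed to index columns and y rows, as in the plots below):

inside_rows = yv[mask]   # row (y) indices of pixels inside the contour
inside_cols = xv[mask]   # column (x) indices of pixels inside the contour
values_inside = pixel_array[inside_rows, inside_cols]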
These are what I see when I do
pp.plot(x_pixel_nos, y_pixel_nos)
pp.imshow(mask)
after running the code above with your data. Note that the y axis is inverted with imshow, hence the vertically mirrored shapes.
With the help of the Shapely library in Python, this can easily be done as follows:
from shapely.geometry import Point, Polygon

Convert all the x, y coords to a Shapely Polygon:
coords = [(0, 0), (0, 2), (1, 1), (2, 2), (2, 0), (1, 1), (0, 0)]
pl = Polygon(coords)

Now find the pixels in each polygon:
minx, miny, maxx, maxy = pl.bounds
minx, miny, maxx, maxy = int(minx), int(miny), int(maxx), int(maxy)
box_patch = [[x, y] for x in range(minx, maxx+1) for y in range(miny, maxy+1)]
pixels = []
for pb in box_patch:
    pt = Point(pb[0], pb[1])
    if pl.contains(pt):
        pixels.append([int(pb[0]), int(pb[1])])
# pixels now holds the integer points inside the polygon
Wrap this loop in a function (returning pixels) and run it for each set of coords, and then for each polygon.
Good to go :)
skimage.draw.polygon can handle this; see the example code of this function on that page.
If you want just the contour, you can use skimage.segmentation.find_boundaries.
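A minimal sketch of that route (assuming y_pixel_nos/x_pixel_nos are the row/column coordinates of the contour and pixel_array is the image from the question):

from skimage.draw import polygon

# rr, cc are the row and column indices of all pixels whose centres fall inside
# the polygon; shape= clips the indices to the image bounds.
rr, cc = polygon(y_pixel_nos, x_pixel_nos, shape=pixel_array.shape)
inside_values = pixel_array[rr, cc]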

Rotated image coordinates after scipy.ndimage.interpolation.rotate?

I have a numpy array for an image that I read in from a FITS file. I rotated it by N degrees using scipy.ndimage.interpolation.rotate. Then I want to figure out where some point (x,y) in the original non-rotated frame ends up in the rotated image -- i.e., what are the rotated frame coordinates (x',y')?
This should be a very simple rotation matrix problem but if I do the usual mathematical or programming based rotation equations, the new (x',y') do not end up where they originally were. I suspect this has something to do with needing a translation matrix as well because the scipy rotate function is based on the origin (0,0) rather than the actual center of the image array.
Can someone please tell me how to get the rotated frame (x',y')? As an example, you could use
from scipy import misc
from scipy.ndimage import rotate
data_orig = misc.face()
data_rot = rotate(data_orig,66) # data array
x0,y0 = 580,300 # left eye; (xrot,yrot) should point there
P.S. The following two related questions' answers do not help me:
Find new coordinates of a point after rotation
New coordinates after image rotation using scipy.ndimage.rotate
As usual with rotations, one needs to translate to the origin, then rotate, then translate back. Here, we can take the center of the image as origin.
import numpy as np
import matplotlib.pyplot as plt
from scipy import misc
from scipy.ndimage import rotate

data_orig = misc.face()
x0, y0 = 580, 300  # left eye; (xrot, yrot) should point there

def rot(image, xy, angle):
    im_rot = rotate(image, angle)
    org_center = (np.array(image.shape[:2][::-1])-1)/2.
    rot_center = (np.array(im_rot.shape[:2][::-1])-1)/2.
    org = xy-org_center
    a = np.deg2rad(angle)
    new = np.array([org[0]*np.cos(a) + org[1]*np.sin(a),
                    -org[0]*np.sin(a) + org[1]*np.cos(a)])
    return im_rot, new+rot_center

fig, axes = plt.subplots(2, 2)
axes[0, 0].imshow(data_orig)
axes[0, 0].scatter(x0, y0, c="r")
axes[0, 0].set_title("original")

for i, angle in enumerate([66, -32, 90]):
    data_rot, (x1, y1) = rot(data_orig, np.array([x0, y0]), angle)
    axes.flatten()[i+1].imshow(data_rot)
    axes.flatten()[i+1].scatter(x1, y1, c="r")
    axes.flatten()[i+1].set_title("Rotation: {}deg".format(angle))

plt.show()
