I have two numpy arrays, one for the 3D vertices of a mesh, call it vert, and one for the triangular faces, call it faces:
The vert array has shape N x 3 and dtype float, hence N three-dimensional points. The x coordinate of each point can be either positive or negative.
As a pure example this can be the vert array:
[[ 2.886495 24.886948 15.909558]
[ -13.916695 -58.985245 19.655312]
[ 40.415527 8.968353 8.515955]
...
[ 13.392465 -58.20602 18.752457]
[ -12.504704 -58.307934 18.912386]
[ 13.322185 -58.52817 19.165733]]
Since the mesh is centered, the left part of the mesh is the one with positive x components, and the corresponding vertex indices are found with np.where:
i_vert_left = np.where(vert[:,0]>0)[0]
I would now like to select those faces whose triangles lie entirely in the positive-x half-space, i.e. all three vertices have x > 0.
However, I have a problem doing this indexing operation correctly.
My first attempt was to subset the faces such that their corresponding vertices have x > 0:
faces_left = np.asarray([f for f in faces if np.all(np.isin(i_vert_left,f)) ])
but the operation is incredibly slow on large meshes.
How can I exploit a smart indexing of the faces?
Assuming faces is an Nx3 array of integers indexing the three vertices of each triangle, I think you should just need:
# Check whether each vertex is left or not
vert_left_mask = vert[:, 0] > 0
# Check whether each face has all vertices on left or not
faces_left_mask = np.all(vert_left_mask[faces], axis=1)
# Select resulting left faces
faces_left = faces[faces_left_mask]
The main "trick" here is in vert_left_mask[faces], which replaces each integer vertex number with a boolean indicating whether the vertex is left or not, so it's easy to tell which face is fully left with np.all.
I need to sort a selection of 3D coordinates in a winding order as seen in the image below. The bottom-right vertex should be the first element of the array and the bottom-left vertex should be the last element of the array. This needs to work given any direction that the camera is facing the points and at any orientation of those points. Since "top-left", "bottom-right", etc. are relative, I assume I can use the camera as a reference point? We can also assume all 4 points will be coplanar.
I am using the Blender API (writing a Blender plugin) and have access to the camera's view matrix, if that is even necessary. Mathematically speaking, is this even possible, and if so, how? Maybe I am overcomplicating things?
Since the Blender API is in Python I tagged this as Python, but I am fine with pseudo-code or no code at all. I'm mainly concerned with how to approach this mathematically as I have no idea where to start.
Since you assume the four points are coplanar, all you need to do is find the centroid, calculate the vector from the centroid to each point, and sort the points by the angle of the vector.
import numpy as np
def sort_points(pts):
    centroid = np.sum(pts, axis=0) / pts.shape[0]
    vector_from_centroid = pts - centroid
    vector_angle = np.arctan2(vector_from_centroid[:, 1], vector_from_centroid[:, 0])
    sort_order = np.argsort(vector_angle)  # Find the indices that give a sorted vector_angle array
    # Apply sort_order to original pts array.
    # Also returning centroid and angles so I can plot it for illustration.
    return (pts[sort_order, :], centroid, vector_angle[sort_order])
This function calculates the angle assuming that the points are two-dimensional, but if you have coplanar points then it should be easy enough to find the coordinates in the common plane and eliminate the third coordinate.
Let's write a quick plot function to plot our points:
from matplotlib import pyplot as plt
def plot_points(pts, centroid=None, angles=None, fignum=None):
    fig = plt.figure(fignum)
    plt.plot(pts[:, 0], pts[:, 1], 'or')
    if centroid is not None:
        plt.plot(centroid[0], centroid[1], 'ok')
    for i in range(pts.shape[0]):
        lstr = f"pt{i}"
        if angles is not None:
            lstr += f" ang: {angles[i]:.3f}"
        plt.text(pts[i, 0], pts[i, 1], lstr)
    return fig
And now let's test this:
With random points:
pts = np.random.random((4, 2))
spts, centroid, angles = sort_points(pts)
plot_points(spts, centroid, angles)
With points in a rectangle:
pts = np.array([[0, 0], # pt0
[10, 5], # pt2
[10, 0], # pt1
[0, 5]]) # pt3
spts, centroid, angles = sort_points(pts)
plot_points(spts, centroid, angles)
It's easy enough to find the normal vector of the plane containing our points; it's simply the (normalized) cross product of the vectors joining two pairs of points:
plane_normal = np.cross(pts[1, :] - pts[0, :], pts[2, :] - pts[0, :])
plane_normal = plane_normal / np.linalg.norm(plane_normal)
Now, to find the projections of all points in this plane, we need to know the "origin" and basis of the new coordinate system in this plane. Let's assume that the first point is the origin, the x axis joins the first point to the second, and since we know the z axis (plane normal) and x axis, we can calculate the y axis.
new_origin = pts[0, :]
new_x = pts[1, :] - pts[0, :]
new_x = new_x / np.linalg.norm(new_x)
new_y = np.cross(plane_normal, new_x)
Now, the projections of the points onto the new plane are given by this answer:
proj_x = np.dot(pts - new_origin, new_x)
proj_y = np.dot(pts - new_origin, new_y)
Now you have two-dimensional points. Run the code above to sort them.
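Putting the two parts together, a short sketch (assuming pts is the Nx3 array of coplanar points from above):
# Stack the in-plane coordinates into an Nx2 array and reuse sort_points
pts_2d = np.column_stack((proj_x, proj_y))
sorted_pts_2d, centroid, angles = sort_points(pts_2d)
# If the original 3D points are needed in the same order, sort by the same angles
order = np.argsort(np.arctan2(proj_y - proj_y.mean(), proj_x - proj_x.mean()))
sorted_pts_3d = pts[order, :]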
After many hours, I finally found a solution. @Pranav Hosangadi's solution worked for the 2D side of things. However, I was having trouble projecting the 3D coordinates to 2D coordinates using the second part of his solution. I also tried projecting the coordinates as described in this answer, but it did not work as intended. I then discovered an API function called location_3d_to_region_2d() (see docs) which, as the name implies, gets the 2D screen coordinates in pixels of a given 3D coordinate. I didn't need to "project" anything into 2D in the first place; getting the screen coordinates worked perfectly fine and is much simpler. From there, I could sort the coordinates using Pranav's function with some slight adjustments to get them in the order illustrated in the screenshot of my first post, and to have the result returned as a list instead of a NumPy array.
import bpy
from bpy_extras.view3d_utils import location_3d_to_region_2d
import numpy
def sort_points(pts):
    """Sort 4 points in a winding order"""
    pts = numpy.array(pts)
    centroid = numpy.sum(pts, axis=0) / pts.shape[0]
    vector_from_centroid = pts - centroid
    vector_angle = numpy.arctan2(
        vector_from_centroid[:, 1], vector_from_centroid[:, 0])
    # Find the indices that sort vector_angle in descending order
    sort_order = numpy.argsort(-vector_angle)
    # Return the sort order itself; it is applied to the original vertices later
    return list(sort_order)
# Get 2D screen coords of selected vertices
# (selected_verts is assumed to be gathered elsewhere in the plugin)
region = bpy.context.region
region_3d = bpy.context.space_data.region_3d
corners2d = []
for corner in selected_verts:
    corners2d.append(location_3d_to_region_2d(
        region, region_3d, corner))
# Sort the 2d points in a winding order
sort_order = sort_points(corners2d)
sorted_corners = [selected_verts[i] for i in sort_order]
Thanks, Pranav for your time and patience in helping me solve this problem!
There is a simpler and faster solution for the Blender case:
1.) The following code sorts 4 planar points in 2D (vertices of the plane object in Blender) very efficiently:
def sort_clockwise(pts):
    rect = np.zeros((4, 2), dtype="float32")
    s = pts.sum(axis=1)
    rect[0] = pts[np.argmin(s)]
    rect[2] = pts[np.argmax(s)]
    diff = np.diff(pts, axis=1)
    rect[1] = pts[np.argmin(diff)]
    rect[3] = pts[np.argmax(diff)]
    return rect
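For example, on the rectangle used earlier in this thread, the corners come back ordered around the rectangle:
pts = np.array([[0, 0],
                [10, 5],
                [10, 0],
                [0, 5]], dtype="float32")
print(sort_clockwise(pts))
# [[ 0.  0.]
#  [10.  0.]
#  [10.  5.]
#  [ 0.  5.]]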
2.) Blender keeps vertex-related data such as translation, rotation and scale in the object's world matrix. If you query only vertices.co(ordinates), you get the original local coordinates, without translation, rotation and scaling, but that does not affect the order of the vertices. This simplifies the problem, because what you get is effectively 2D mesh data (with all z = 0). If you sort that 2D data (excluding z), you obtain the sort indices that also apply to the 3D data. You can modify the code above to return those indices from the 2D array. For Blender's plane object, for some reason the order is always [0, 1, 3, 2], not [0, 1, 2, 3]. The following modified code gives the sorted indices for the 2D vertex data.
def sorted_ix_clockwise(pts):
    # Same as sort_clockwise above, but returns the sort indices
    # instead of the sorted points themselves.
    ix = np.array([0, 0, 0, 0])
    s = pts.sum(axis=1)
    ix[0] = np.argmin(s)
    ix[2] = np.argmax(s)
    dif = np.diff(pts, axis=1)
    ix[1] = np.argmin(dif)
    ix[3] = np.argmax(dif)
    return ix
You can use these indices to get the actual sorted 3D data, which you obtain by multiplying the vertex coordinates with the world matrix to include any translation, rotation and scaling.
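A minimal sketch of that last step in Blender (assuming obj is the plane object; matrix_world @ v.co applies the object's translation, rotation and scale):
import bpy
import numpy as np

obj = bpy.context.active_object  # assuming the plane object is active
# Local 2D data: x, y of each vertex (z is 0 for the plane)
pts_2d = np.array([[v.co.x, v.co.y] for v in obj.data.vertices], dtype="float32")
ix = sorted_ix_clockwise(pts_2d)
# Apply the world matrix to get the sorted vertices in world space
sorted_world = [obj.matrix_world @ obj.data.vertices[int(i)].co for i in ix]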
I have a given 3D mesh which is constructed by taking a set of random points and finding the convex hull of those points. I then use open3d and trimesh to convert the convex hull into a mesh. I want to know how I can convert this mesh, or the convex hull itself, into a filled boolean voxel grid.
I can use trimesh to get a voxel grid of some sort but it seems the insides are hollow. I want a boolean voxel grid which gives true for the volume inside the convex hull and false otherwise.
Simply rasterize your convex polygon volume ...
compute any inside point c
for a convex hull it is enough to compute the average point: sum all n points together and divide by n
compute per face normals
each triangle face has 3 points p0,p1,p2 so
nor = cross( p1-p0 , p2-p0 );
and choose the direction so it points out of the convex hull:
if ( dot( p0-c , nor ) < 0) nor = -nor;
loop through all voxels
so 3 nested for loops going through your grid. Let's call the currently iterated point q
test inside convex hull
q is inside your convex hull if all dot products between q - face_point and face_normal are negative or zero. So loop through all triangles/faces and test; after that, either fill the voxel or not.
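For reference, here is a vectorized NumPy sketch of these steps (pts, tris and grid_points are hypothetical names for the (N, 3) hull vertices, the (M, 3) triangle index array and the (K, 3) voxel centers):
import numpy as np

def voxelize_convex_hull(pts, tris, grid_points):
    # pts:         (N, 3) hull vertices
    # tris:        (M, 3) vertex indices of the hull's triangular faces
    # grid_points: (K, 3) voxel centers to classify
    c = pts.mean(axis=0)                      # any inside point (the centroid)
    p0, p1, p2 = pts[tris[:, 0]], pts[tris[:, 1]], pts[tris[:, 2]]
    nor = np.cross(p1 - p0, p2 - p0)          # per-face normals, shape (M, 3)
    flip = np.einsum('ij,ij->i', p0 - c, nor) < 0
    nor[flip] = -nor[flip]                    # make normals point outward
    # q is inside if dot(q - p0_face, nor_face) <= 0 for every face
    d = grid_points @ nor.T - np.einsum('ij,ij->i', p0, nor)
    return np.all(d <= 0.0, axis=1)           # (K,) boolean inside-mask
The returned (K,) mask can then be reshaped to the voxel grid's shape.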
If you want something faster (in case you have too many triangles) there are ways like:
Rasterize the triangles and floodfill
Tetrahedronize volume and rasterize each tetrahedron separately
render depth maps from 6 sides of outscribed cube and infill
So I figured out a simple solution which can be implemented using trimesh. The idea is to generate a large set of coordinates and query the mesh to determine whether each coordinate is within the mesh/convex hull. If the coordinate is within the mesh, it is marked 1 in the grid, and 0 otherwise. res is an array that determines the resolution along the x, y, z axes. A higher resolution provides a much better representation of the mesh.
x, y, z = np.indices((res[0], res[1], res[2]))
total_voxels = np.prod(res)
coords = np.concatenate((np.reshape(x / res[0], [total_voxels, 1]),
                         np.reshape(y / res[1], [total_voxels, 1]),
                         np.reshape(z / res[2], [total_voxels, 1])), axis=1)
out = mesh.contains(coords)
voxel = np.reshape(out, res)
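Note that the coords above all fall in the unit cube [0, 1)^3, so this only samples correctly if the mesh sits inside that range; if it does not, one option (a sketch, assuming mesh is a trimesh.Trimesh) is to map the samples to the mesh's axis-aligned bounding box first:
# Map the unit-cube samples to the mesh's bounding box
lo, hi = mesh.bounds          # (2, 3) array: [min corner, max corner]
coords_world = lo + coords * (hi - lo)
voxel = np.reshape(mesh.contains(coords_world), res)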
I have two trajectories (i.e. two lists of points) and I am trying to find the intersection points for both these trajectories. However, if I represent these trajectories as lines, I might miss real world intersections (just misses).
What I would like to do is to represent the line as a polygon with certain width around the points and then find where the two polygons intersect with each other.
I am using the python spatial library but I was wondering if anyone has done this before. Here is a picture of the line segments which don't intersect because they just miss each other. Below is the sample data code that represents the trajectory of two objects.
object_trajectory=np.array([[-3370.00427248, 3701.46800775],
[-3363.69164715, 3702.21408203],
[-3356.31277271, 3703.06477984],
[-3347.25951787, 3704.10740164],
[-3336.739511 , 3705.3958357 ],
[-3326.29355823, 3706.78035903],
[-3313.4987339 , 3708.2076586 ],
[-3299.53433345, 3709.72507366],
[-3283.15486406, 3711.47077376],
[-3269.23487255, 3713.05635557]])
target_trajectory=np.array([[-3384.99966703, 3696.41922372],
[-3382.43687562, 3696.6739521 ],
[-3378.22995178, 3697.08802862],
[-3371.98983789, 3697.71490469],
[-3363.5900481 , 3698.62666805],
[-3354.28520354, 3699.67613798],
[-3342.18581931, 3701.04853915],
[-3328.51519511, 3702.57528111],
[-3312.09691577, 3704.41961271],
[-3297.85543763, 3706.00878621]])
plt.plot(object_trajectory[:, 0], object_trajectory[:, 1], color='b')
plt.plot(target_trajectory[:, 0], target_trajectory[:, 1], color='r')
Let's say you have two lines defined by numpy arrays x1, y1, x2, and y2.
import numpy as np
You can create an array distances[i, j] containing the distances between the ith point in the first line and the jth point in the second line.
distances = ((x1[:, None] - x2[None, :])**2 + (y1[:, None] - y2[None, :])**2)**0.5
Then you can find indices where distances is less than some threshold you want to define for intersection. If you're thinking of the lines as having some thickness, the threshold would be half of that thickness.
threshold = 0.1
intersections = np.argwhere(distances < threshold)
intersections is now an N by 2 array containing all point pairs that are considered to be "intersecting" (intersections[i, 0] is the index from the first line, and intersections[i, 1] is the index from the second line). If you want to get the set of all indices from each line that are intersecting, you can use something like
first_intersection_indices = np.asarray(sorted(set(intersections[:, 0])))
second_intersection_indices = np.asarray(sorted(set(intersections[:, 1])))
From here, you can also determine how many intersections there are by taking only the center value for any consecutive values in each list.
L1 = []
current_intersection = []
for i in range(first_intersection_indices.shape[0]):
    if len(current_intersection) == 0:
        current_intersection.append(first_intersection_indices[i])
    elif first_intersection_indices[i] == current_intersection[-1] + 1:
        # Still part of the same run of consecutive indices
        current_intersection.append(first_intersection_indices[i])
    else:
        # The run ended: keep only its center index
        L1.append(int(np.median(current_intersection)))
        current_intersection = [first_intersection_indices[i]]
if current_intersection:
    L1.append(int(np.median(current_intersection)))
print(len(L1))
You can use these to print the coordinates of each intersection.
for i in L1:
    print(x1[i], y1[i])
It turns out that the shapely package already has a ton of convenience functions that get me very far with this.
from shapely.geometry import Point, LineString, MultiPoint
# I assume that self.line is of type LineString (i.e. a line trajectory).
# line.buffer essentially generates a nice interpolated bounding polygon
# around the trajectory.
region_polygon = self.line.buffer(self.lane_width)
# Now we can identify all the points in the other trajectory that intersect
# the region_polygon we just generated. You can also use .intersection if you
# want to generate two polygon trajectories and find the intersecting polygon.
is_in_region = [region_polygon.intersects(point) for point in points]
I have points in a plane. I want each point to be a vertex of at least one triangle. I also want to fill the plane confined within the vertices with no overlapping triangles. Something like:
Note that for each pair of points in any triangle, the line connecting them consists of points that are also in some triangle.
I want to get the list of all vertices triplets from which a triangle can be defined. In the picture, there would be 6 such triplets.
My attempt is this one:
indices = range(locs.shape[0])
visited = []
ind_list = []
for ind in indices:
    if ind not in visited:
        visited.append(ind)
        nearest_idx = np.argsort(distances[ind])[1:3]
        for ni in nearest_idx:
            visited.append(ni)
        ind_list.append([ind] + list(nearest_idx))
where locs is a Nx2 array containing the (x,y) coordinates of each vertex, distances is a NxN matrix whose i,j component is the euclidean distance between the i-th vertex and the j-th vertex. Note that ind_list is a list of indices that allow me to get the vertices by locs[ind_list]. What I want is the correct ind_list. In my case I clearly omit some of the triplets. An example of this failure can be seen in this figure:
where there are blank regions. Instead I want all the space to be filled with no overlapping triangles. Any idea of how to achieve this? Thanks a lot!
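For what it's worth, what is described here (every point used as a vertex, the convex region tiled by non-overlapping triangles) is a triangulation of the point set, and scipy.spatial.Delaunay computes one such triangulation directly; a minimal sketch, assuming locs is the Nx2 coordinate array from the attempt above:
import numpy as np
from scipy.spatial import Delaunay

tri = Delaunay(locs)
ind_list = tri.simplices       # (n_triangles, 3) array of vertex-index triplets
triangles = locs[ind_list]     # (n_triangles, 3, 2) array of triangle corners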
I will have a 3-d grid of points (defined by Cartesian vectors). For any given coordinate within the grid, I wish to find the 8 grid points making the cuboid which surrounds the given coordinate. I also need the distances between the vertices of the cuboid and the given coordinate. I have found a way of doing this for a meshgrid with regular spacings, but not for irregular spacings. I do not yet have an example of the irregularly spaced grid data; I just know that the algorithm will have to deal with it eventually. My solution for the regularly spaced points is based on this post, Finding index of nearest point in numpy arrays of x and y coordinates, and is as follows:
import scipy as sp
import scipy.spatial  # the spatial submodule must be imported explicitly
import numpy as np
x, y, z = np.mgrid[0:5, 0:10, 0:20]
# Example 3-d grid of points.
b = np.dstack((x.ravel(), y.ravel(), z.ravel()))[0]
tree = sp.spatial.cKDTree(b)
example_coord = np.array([1.5, 3.5, 5.5])
d, i = tree.query((example_coord), 8)
# i being the indices of the closest grid points, d being their distance from the
# given coordinate, example_coord
b[i[0]], d[0]
# This gives one of the points of the surrounding cuboid and its distance from
# example_coord
I am looking to make this algorithm run as efficiently as possible as it will need to be run a lot. Thanks in advance for your help.
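As a side note on efficiency: tree.query accepts an array of query points, so many lookups can be done in one vectorized call instead of a Python loop (a sketch; example_coords is a hypothetical (M, 3) array of query coordinates):
example_coords = np.array([[1.5, 3.5, 5.5],
                           [2.2, 7.1, 10.9]])
d, i = tree.query(example_coords, k=8)
# d and i have shape (M, 8): b[i[m]] are the 8 closest grid points to
# example_coords[m], and d[m] their distances to it.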