(forgive my terminology - it has been a long time since I took an advanced math class)
Let's say I have n "planes" each "perpendicular" to a single axis in m-dimensional space. No two planes are perpendicular to the same axis. I believe I can safely assume that there will be some intersection between all n planes.
I want to project point a onto the intersection and get the position vector for the result.
For example:
I have a single plane whose normal vector is (0.75, 0, 0) and a point a at position (0.25, 0, 1). I want to get the position vector of point a projected onto the plane.
Another example:
I have two planes represented by normal vectors (0.5, 0, 0) and (0, 1, 0). I have a point a at position (0.1, 0.1, 0.1). I want to get the position vector of the point projected onto the intersection of my two planes (a line).
Your "planes" in m-dimensional space are (m-1)-dimensional objects. They are usually referred to as hyperplanes — a generalization of planes, 2-dimensional objects in 3-dimensional space. To define a hyperplane you need not only a normal vector but also a point (think of lines in two-dimensional space: all parallel lines share the same direction, and in order to isolate one you need to specify a point).
I suspect you mean all of your hyperplanes to pass through the origin (in which case indeed there is a point in the intersection — the origin itself), and I interpret your "being perpendicular to a single axis" as saying that the normal vectors all point along some coordinate axis (in other words, they have a single nonzero component). In that case, all you have to do to find the projection of an arbitrary point (vector, really) onto the intersection is set to zero the components of the point (again, vector, really) along the normal vectors of your hyperplanes.
Let me go through your examples:
The (hyper)plane in 3-dimensional space with normal vector (0.75, 0, 0) is the yz-plane: the projection of an arbitrary point (x, y, z) is (0, y, z) — the hyperplane has a normal vector along the first coordinate, so set to zero the first component of the point (for the last time: vector, really). In particular, (0.25, 0, 1) projects to (0, 0, 1).
The planes perpendicular to (0.5, 0, 0) and (0, 1, 0) are the yz- and xz-planes. Their intersection is the z-axis. The projection of the point (0.1, 0.1, 0.1) is (0, 0, 0.1).
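In code, the component-zeroing rule might look like this (a minimal sketch, assuming every hyperplane passes through the origin and each normal points along a coordinate axis, as above):
import numpy as np
normals = np.array([[0.5, 0, 0], [0, 1, 0]])
point = np.array([0.1, 0.1, 0.1])
axes = [np.argmax(np.abs(n)) for n in normals]  # coordinate axis each normal points along
proj = point.copy()
proj[axes] = 0.0  # zero the components along the normals
print(proj)  # [0.  0.  0.1]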
The projection can be computed by solving an overdetermined system in the least-squares sense with lstsq. The matrix of the system has the normal vectors as its columns (hence the transpose on the second line below).
coeff holds the coefficients of these normal vectors; this linear combination of normals is subtracted from the given point to obtain the projection.
import numpy as np
normals = np.transpose(np.array([[0.5, 0, 0], [0, 1, 0]]))  # normal vectors as columns
point = np.array([0.1, 0.1, 0.1])  # point to project
coeff = np.linalg.lstsq(normals, point, rcond=None)[0]  # least-squares coefficients
proj = point - np.dot(normals, coeff)  # subtract the normal components
print(proj)
Output: [0, 0, 0.1].
Related
I have a series of radar point clouds, and I have used a shapefile to segment out several areas, which are all rectangular when viewed along the z-axis. I would like to know if there is a way to rotate them so that one edge is parallel to the x or y axis. My idea is to create oriented bounding boxes (OBB) along with axis-aligned bounding boxes (AABB), compare the two, and rotate iteratively. Thanks!
aabb = cloud.get_axis_aligned_bounding_box()
aabb.color = (1, 0, 0)
obb = cloud.get_oriented_bounding_box()
obb.color = (0, 1, 0)
You should be able to call cloud.rotate() directly, passing it a 3x3 rotation matrix.
To get that 3x3 rotation matrix you have multiple options:
axis-angle: open3d.geometry.get_rotation_matrix_from_axis_angle
quaternion: open3d.geometry.get_rotation_matrix_from_quaternion
euler: open3d.geometry.get_rotation_matrix_from_xyz (and variants from_xzy, from_yxz, from_yzx, from_zxy, from_zyx)
(angles are in radians, not degrees)
e.g.
cloud.rotate(o3d.geometry.get_rotation_matrix_from_axis_angle([np.radians(90), 0, 0]))
(this should apply a 90 degree rotation about the x axis; adjust as needed)
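For the original goal of making one edge parallel to the x or y axis, a possible shortcut (just a sketch, assuming cloud is the open3d.geometry.PointCloud from the question) is to undo the rotation stored in the oriented bounding box itself:
obb = cloud.get_oriented_bounding_box()
# obb.R is the rotation that tilts the box away from the axes, so rotating the
# cloud by its transpose (the inverse rotation) should bring the box edges
# back onto the coordinate axes.
cloud.rotate(obb.R.T, center=obb.center)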
I have some 3D motion capture data for training a deep learning model, but the loss is too high, so I need to normalize the input data.
The list below is the steps I am planning to do for the normalization:
Set the skeleton's left sole to be the origin point (0, 0, 0)
Subtract the left-sole point (xls, yls, zls) from every joint point (x, y, z)
Set the skeleton's height as 1
Divide all the joint point y-axis values by the y-value of the head point
Calculate the angle between the waist-line (the line connecting the left-waist and right-waist points) and the line parallel to the x-axis extending from the midpoint of the waist-line
Multiply every joint point by the rotation matrix
The rotation uses the y-axis (height axis) as the rotation axis (right-handed coordinate system: x values are greater on the right than on the left)
The rotation matrix is obtained as follows:
[[cos_theta, 0, neg_sin_theta],
[0, 1, 0],
[sin_theta, 0, cos_theta]]
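In code, the plan looks roughly like this (just a sketch; the joint indices are placeholders that depend on my skeleton layout, and joints is an (N, 3) array of x, y, z coordinates for one frame):
import numpy as np

def normalize_skeleton(joints, left_sole, head, left_waist, right_waist):
    # left_sole, head, left_waist, right_waist are placeholder joint indices
    joints = joints - joints[left_sole]              # step 1: left sole becomes the origin
    joints[:, 1] = joints[:, 1] / joints[head, 1]    # step 2: head height (y) becomes 1
    waist = joints[right_waist] - joints[left_waist]
    theta = np.arctan2(waist[2], waist[0])           # step 3: angle of the waist-line to the x-axis
    cos_theta, sin_theta = np.cos(theta), np.sin(theta)
    rot_y = np.array([[cos_theta, 0, -sin_theta],    # rotation about the y (height) axis
                      [0,         1,  0],
                      [sin_theta, 0,  cos_theta]])
    return joints @ rot_y.T                          # rotate every joint point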
From a mathematical perspective, I drew an image as shown below to depict the normalization by multiplying with a rotation matrix, hoping the skeleton rotates about the waist so that the waist-line is always parallel to the x-axis:
Now I want to normalize the 3D skeleton coordinates by multiplying with the rotation matrix:
(I tested 3 different rotation matrices. The above-mentioned rotation matrix produced very high distortion, so I tested the matrix below and got a more reasonable rotation, though I don't know why...)
rotate_matrix = [[cos_theta, neg_sin_theta, 0],
                 [sin_theta, cos_theta, 0],
                 [0, 0, 1]]
The result of the normalization:
The normalization using this rotation matrix still produces a weird result, but at least it seems to rotate about the height axis as desired.
The original skeleton movement:
I hope someone can give some advice on this normalization, or provide other ideas for dealing with 3D skeleton normalization.
Thank you in advance for your input.
I have a 3D vector and a 3D face normal. How do I go about moving this vector along the given face normal using Python (with or without numpy)?
Ideally, I'd build a matrix using the face normal with the x and y and multiply it by the original vector, or something like that, but I can't get my head around how to build it. It's been a while since Linear Algebra.
EDIT:
Thanks for pointing out that my question was too broad.
My goal is to get a new point, that is x and y units away from the original point, along the face defined by its normal.
Example: If the point is (0,0,0) and the normal is (0, 0, 1), the result would be (x, y, 0).
Example 2: If the point is (1, 0, 0) and the normal is (0, 1, 0), the result would be (1+x, 0, y).
I'd need to extrapolate that to work with any point, normal, x and y.
The projection of a vector onto a plane defined by its (unit-length) normal is:
def projection(vector, normal):
    return vector - vector.dot(normal) * normal
Presumably this means you want something like:
x + projection(y, normal)
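If the goal is the new point from the edit (x and y units away from the original point, within the face), a fuller sketch is to build two unit tangent vectors from the normal and step along them (the helper name and the choice of tangent directions below are illustrative, not from the question):
import numpy as np

def move_along_face(point, normal, x, y):
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    # seed with any vector that is not parallel to the normal
    seed = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(n, seed)
    t1 = t1 / np.linalg.norm(t1)
    t2 = np.cross(n, t1)  # second tangent, orthogonal to both n and t1
    return np.asarray(point, dtype=float) + x * t1 + y * t2

# stays in the z = 0 plane, as in the first example (which in-plane direction
# counts as "x" or "y" is arbitrary)
print(move_along_face((0, 0, 0), (0, 0, 1), 2.0, 3.0))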
def give_me_a_new_vertex_position_along_normal(old_vertex_position, normal):
    new_vertex_position = old_vertex_position + normal
    return new_vertex_position
There is a difference between points (your vertex positions) and direction vectors (your normals and offsets). A point has a location; a direction vector only has a direction and a length.
Adding a direction vector to a point translates the point. To stay within the face, the offset first has to be projected onto the plane of the face (its component along the normal removed), which is what the projection above does.
Suppose we have:
A curve described by a two-dimensional dataset that approximately follows a high-order polynomial curve.
A line defined by two points.
This is a sample image:
Supposing the line and the curve intersect each other, how could I find the intersection point between the line and the dataset?
As per my comment above
import numpy as np
A = np.random.random((20, 2))
A[:,0] = np.arange(20)
A[:,1] = A[:,1] * (7.5 + A[:,0]) # some kind of wiggly line
p0 = [-1.0,-6.5] # point 0
p1 = [22.0, 20.0] # point 1
b = (p1[1] - p0[1]) / (p1[0] - p0[0]) # gradient
a = p0[1] - b * p0[0] # intercept
B = (a + A[:,0] * b) - A[:,1] # distance of y value from line
ix = np.where(B[1:] * B[:-1] < 0)[0] # index of points where the next point is on the other side of the line
d_ratio = B[ix] / (B[ix] - B[ix + 1]) # similar triangles work out crossing points
cross_points = np.zeros((len(ix), 2)) # empty array for crossing points
cross_points[:,0] = A[ix,0] + d_ratio * (A[ix+1,0] - A[ix,0]) # x crossings
cross_points[:,1] = A[ix,1] + d_ratio * (A[ix+1,1] - A[ix,1]) # y crossings
print(ix, B, A, cross_points)
Since the curve data appears to be dense and not scattered (as for example the result of a numerically solved differential equation), interpolating or approximating the whole curve dataset is overkill. In the following, I assume that the points of the dataset are ordered along the curve (as if they were the result of sampling a parametric curve).
1. Do a coordinate transformation A(x, y) consisting of a translation and a rotation such that the red line matches the x-axis.
2. Intersect the transformed curve with the x-axis, i.e. take all points from the curve dataset with a small absolute value of the y-coordinate (and remember their indices in the dataset). Try |y| < 0.05 for the depicted curve.
3. Use the indices of the points selected in step 2 to detect ranges of adjacent curve points, each range resembling a small bit of the curve.
Sloppy version
For each range, take the average value x_mean of the x-coordinates. The inverse coordinate transformation A_inv(x_mean, 0) will give you an approximation of the intersection point of that range. Depending on your use case and the complexity of potential curves, the approximation may already be good enough.
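A rough numpy sketch of this sloppy version (assuming curve is an (N, 2) array of points ordered along the curve, and the line is given by the two points p0 and p1):
import numpy as np

def sloppy_intersections(curve, p0, p1, tol=0.05):
    p0 = np.asarray(p0, dtype=float)
    p1 = np.asarray(p1, dtype=float)
    d = (p1 - p0) / np.linalg.norm(p1 - p0)          # unit direction of the line
    rot = np.array([[d[0], d[1]],                    # rotation that maps d onto the x-axis
                    [-d[1], d[0]]])
    t = (curve - p0) @ rot.T                         # transformation A: translate, then rotate
    near = np.where(np.abs(t[:, 1]) < tol)[0]        # indices of points close to the new x-axis
    ranges = np.split(near, np.where(np.diff(near) > 1)[0] + 1)  # group adjacent indices
    crossings = []
    for r in ranges:
        if len(r) == 0:
            continue
        x_mean = t[r, 0].mean()                      # average x within the range
        crossings.append(rot.T @ np.array([x_mean, 0.0]) + p0)  # inverse transformation A_inv
    return crossings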
Sophisticated version
Approximate each range with a line or a polynomial curve of low degree <= 4.
Map the indices of the range into the unit interval, such that e.g. [103, 104, 105, 106, 107] becomes [0.0, 0.25, 0.5, 0.75, 1.0].
Split the range data into a range of x and y coordinates.
Approximate the x and y data separately as 1D functions to express the curve data as a parametric function (x(t), y(t)) with t in [0, 1] (using the mapped indices from above as interpolation knots).
Use a polynomial solver to solve y(t) == 0.
For each zero t_zero inside [0, 1], evaluate the approximation function x(t) at t_zero. The inverse coordinate transformation A_inv(x(t_zero), 0) gives you an approximation of the intersection point at t_zero in your original coordinates.
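A sketch of the parametric approximation and root finding for one such range (using numpy's Polynomial class; x and y here are assumed to already be in the transformed coordinates, and the results still need the inverse transformation A_inv):
import numpy as np

def intersect_range(x, y, degree=3):
    t = np.linspace(0.0, 1.0, len(x))                # mapped indices in [0, 1]
    px = np.polynomial.Polynomial.fit(t, x, degree)  # x(t)
    py = np.polynomial.Polynomial.fit(t, y, degree)  # y(t)
    crossings = []
    for t_zero in py.roots():                        # solve y(t) == 0
        if np.isreal(t_zero) and 0.0 <= t_zero.real <= 1.0:
            crossings.append(px(t_zero.real))        # x(t_zero), still in transformed coordinates
    return crossings                                 # map back with A_inv(x, 0) afterwards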
If you can confirm that this solution suits your problem, I can provide a corresponding numpy example.
This should be easy, but I've been all over trying to find a simple explanation that I can grasp. I have an object that I'd like to represent in OpenGL as a cone. The object has x, y, z coordinates and a velocity vector vx, vy, and vz. The cone should point in the direction of the velocity vector.
So, I think my PyOpenGL code should look something like this:
glPushMatrix()
glTranslate(x, y, z)
glPushMatrix()
# do some sort of rotation here #
glutSolidCone(base, height, slices, stacks)
glPopMatrix()
glPopMatrix()
So, is that correct (so far)? What do I put in place of the "# do some sort of rotation here #" ?
In my world, the Z-axis points up (0, 0, 1) and, without any rotations, so does my cone.
Okay, Reto Koradi's answer seems to be the approach that I should take, but I'm not sure of some of the implementation details and my code is not working.
If I understand correctly, the rotation matrix should be a 4x4. Reto shows me how to get a 3x3, so I'm assuming that the 3x3 should be the upper-left corner of a 4x4 identity matrix. Here's my code:
import numpy as np
def normalize(v):
    norm = np.linalg.norm(v)
    if norm > 1.0e-8:  # arbitrarily small
        return v/norm
    else:
        return v

def transform(v):
    bz = normalize(v)
    if (abs(v[2]) < abs(v[0])) and (abs(v[2]) < abs(v[1])):
        by = normalize(np.array([v[1], -v[0], 0]))
    else:
        by = normalize(np.array([v[2], 0, -v[0]]))
        #~ by = normalize(np.array([0, v[2], -v[1]]))
    bx = np.cross(by, bz)
    R = np.array([[bx[0], by[0], bz[0], 0],
                  [bx[1], by[1], bz[1], 0],
                  [bx[2], by[2], bz[2], 0],
                  [0, 0, 0, 1]], dtype=np.float32)
    return R
and here is the way it gets inserted into the rendering code:
glPushMatrix()
glTranslate(x, y, z)
glPushMatrix()
v = np.array([vx, vy, vz])
glMultMatrixf(transform(v))
glutSolidCone(base, height, slices, stacks)
glPopMatrix()
glPopMatrix()
Unfortunately, this isn't working. My test case cones just do not point correctly and I can't identify the failure mode. Without the glMultMatrixf(transform(v)) line, the cones align along the z-axis, as expected.
It's working. Reto Koradi correctly identified that the rotation matrix needed to be transposed in order to match the column-major order of OpenGL. The code should look like this (before optimization):
def transform(v):
    bz = normalize(v)
    if (abs(v[2]) < abs(v[0])) and (abs(v[2]) < abs(v[1])):
        by = normalize(np.array([v[1], -v[0], 0]))
    else:
        by = normalize(np.array([v[2], 0, -v[0]]))
        #~ by = normalize(np.array([0, v[2], -v[1]]))
    bx = np.cross(by, bz)
    R = np.array([[bx[0], by[0], bz[0], 0],
                  [bx[1], by[1], bz[1], 0],
                  [bx[2], by[2], bz[2], 0],
                  [0, 0, 0, 1]], dtype=np.float32)
    return R.T
A helpful concept to remember here is that a linear transformation can also be interpreted as a change of coordinate systems. In other words, instead of picturing points being transformed within a coordinate system, you can just as well picture the points staying in place, but their coordinates being expressed in a new coordinate system. When looking at the matrix expressing the transformation, the base vectors of this new coordinate system are the column vectors of the matrix.
In the following, the base vectors of the new coordinate system are named bx, by and bz. Since the columns of a rotation matrix need to be orthonormal, bx, by and bz need to form an orthonormal set of vectors.
In this case, the original cone is oriented along the z-axis. Since you want the cone to be oriented along (vx, vy, vz) instead, we use this vector as the z-axis of our new coordinate system. Since we want an orthonormal coordinate system, the only thing left to do to obtain bz is to normalize this vector:
bz = normalize([vx, vy, vz])
Since the cone is rotationally symmetrical, it does not really matter how the remaining two base vectors are chosen, just as long as they are both orthogonal to bz, and orthogonal to each other. A simple way to find an arbitrary orthogonal vector to a given vector is to keep one coordinate 0, swap the other two coordinates, and change the sign of one of those two coordinates. Again, the vector needs to be normalized. Vectors we could choose with this approach include:
by = normalize([vy, -vx, 0])    or    by = normalize([vz, 0, -vx])    or    by = normalize([0, vz, -vy])
The dot product of each of these vectors with (vx, vy, vz) is zero, which means that the vectors are orthogonal.
While the choice between these (or other variations) is mostly arbitrary, care must be taken not to end up with a degenerate vector. For example, if vx and vy are both zero, using the first of these vectors would be bad. To avoid choosing a (near) degenerate vector, a simple strategy is to use the first of these three vectors if the absolute value of vz is smaller than the absolute values of both vx and vy, and one of the other two otherwise.
With two new base vectors in place, the third is the cross product of the other two:
bx = by x bz
All that's left is to populate the rotation matrix with column vectors bx, by and bz, and the rotation matrix is complete:
    [ bx.x  by.x  bz.x ]
R = [ bx.y  by.y  bz.y ]
    [ bx.z  by.z  bz.z ]
If you need a 4x4 matrix, e.g. because you are using the legacy fixed function OpenGL pipeline, you can extend this to a 4x4 matrix:
    [ bx.x  by.x  bz.x  0 ]
R = [ bx.y  by.y  bz.y  0 ]
    [ bx.z  by.z  bz.z  0 ]
    [ 0     0     0     1 ]