How to calculate Cartesian coordinates from a dihedral angle in Python

I have an ensemble of points in a Cartesian space. I can compute dihedral angles defined by a given sub-ensemble of four points (a,b,c,d) using python with numpy. Below are my functions:
import numpy

def getDihedral(a,b,c,d):
    v1 = getNormedVector(a, b)
    v2 = getNormedVector(b, c)
    v3 = getNormedVector(c, d)
    v1v2 = numpy.cross(v1,v2)
    v2v3 = numpy.cross(v2,v3)
    return getAngle(v1v2,v2v3)

def getNormedVector(a,b):
    return (b-a)/numpy.linalg.norm(b-a)

def getAngle(a,b):
    # works on 1-D numpy arrays; the normalised dot product is already a scalar
    return numpy.rad2deg(numpy.arccos(numpy.dot(a/numpy.linalg.norm(a), b/numpy.linalg.norm(b))))
I want to rotate only one dihedral angle; how can I calculate the new coordinates for a sub-ensemble of points using Python with numpy and scipy?

If you can compute the dihedral, I assume you can obtain the axis about which you want to rotate your subset of points. Given that, you can easily do this by rotating all points around this axis by the angle you want in vpython - see this example (go to 'rotating a vector'). Otherwise, you need to program the appropriate equation (spelled out in this thread).
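If you prefer to stay in plain numpy instead of vpython, here is a minimal sketch of that idea (the helper name and the choice of rotating about the b->c bond are mine, not from the question): rotate a subset of points about an axis with Rodrigues' rotation formula.

import numpy

def rotateAboutAxis(points, origin, axis, angle_deg):
    # Rotate an (n, 3) array of points by angle_deg around the line
    # through `origin` along `axis` (Rodrigues' rotation formula).
    axis = axis / numpy.linalg.norm(axis)
    theta = numpy.deg2rad(angle_deg)
    ux, uy, uz = axis
    K = numpy.array([[0, -uz, uy],
                     [uz, 0, -ux],
                     [-uy, ux, 0]])
    R = numpy.eye(3) + numpy.sin(theta)*K + (1 - numpy.cos(theta))*(K @ K)
    return (points - origin) @ R.T + origin

# To change the a-b-c-d dihedral by delta degrees, rotate d and every point
# rigidly attached to d about the b->c axis, e.g.:
# moving = rotateAboutAxis(moving, origin=c, axis=c - b, angle_deg=delta)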

Related

How can I rotate a 2d image using a target image, landmark coordinates, the least squares approach, and a rotation matrix?

I have two 2d images, one is the source image and the other is a target image; I need to rotate the source image to match the target image using python (scikit & numpy). I have 3 landmark coordinates for each image, as follows:
image1_points = [(12,16),(7,4),(25,20)]
image2_points = [(15,22),(1,22),(25,10)]
I believe the following steps are what's needed:
Create a rotation matrix from the 3 landmark coordinates using a least squares approach
Use the rotation matrix to get theta
Convert theta to degrees (for the angle)
Use the apply_angle method with the angle to rotate the image
I've been trying to use these points and the least squares approach to compute a linear transformation matrix that transforms points from the source to the target image.
I know I need to create a rotation matrix, but having never taken algebra I'm a bit lost. I've done lots of reading, and tried using scipy's built-in procrustes to do an affine transformation below (which may be all wrong).
import numpy as np
import scipy.spatial
from numpy.linalg import norm
from math import atan, cos, radians

m1, m2, d = scipy.spatial.procrustes(target_points, source_points)
a = np.dot(m1.T, m2, out=None) / norm(m1)**2
# separate x and y for the sake of convenience
ref_x = m2[::2]
ref_y = m2[1::2]
x = m1[::2]
y = m1[1::2]
b = np.sum(x*ref_y - ref_x*y) / norm(m1)**2
scale = np.sqrt(a**2 + b**2)
theta = atan(b / max(a.all(), 10**-10))  # avoid dividing by 0
degrees = cos(radians(theta))
apply_angle(source_img, degrees)
However, this is not giving me the result I would expect. It's giving me an angle of about 1 degree, where I would expect one of about 72 degrees. I suspect that this degree value is what needs to be passed as the angle parameter to rotate the image.
Any help would be hugely appreciated. Thank you!
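For what it's worth, one standard way to estimate that rotation from matched landmarks is a Kabsch/Procrustes-style least-squares fit. Below is a minimal sketch (the helper name estimate_rotation_angle is mine, and it assumes a rigid rotation plus translation with no scaling):

import numpy as np

def estimate_rotation_angle(source_pts, target_pts):
    # Best-fit in-plane rotation (degrees) mapping the source landmarks onto
    # the target landmarks, via a Kabsch-style SVD of the cross-covariance.
    src = np.asarray(source_pts, dtype=float)
    tgt = np.asarray(target_pts, dtype=float)
    src_c = src - src.mean(axis=0)          # centre both point sets so that
    tgt_c = tgt - tgt.mean(axis=0)          # only the rotation remains
    U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return np.degrees(np.arctan2(R[1, 0], R[0, 0]))

image1_points = [(12, 16), (7, 4), (25, 20)]
image2_points = [(15, 22), (1, 22), (25, 10)]
print(estimate_rotation_angle(image1_points, image2_points))

If the printed angle differs from the roughly 72 degrees you expect, the landmarks may simply not be related by a pure rotation; scaling, shear, or noise would also change the best fit.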

Programmatical Change of basis for coordinate vectors with different origin of coordinates (python/general maths)

I have a Support Vector Machine that splits my data in two using a decision hyperplane (for visualisation purposes this is a sample dataset with three dimensions), like this:
Now I want to perform a change of basis, such that the hyperplane lies flatly on the x/y plane, such that the distance from each sample point to the decision hyperplane is simply their z-coordinate.
For that, I know that I need to perform a change of basis. The hyperplane of the SVM is given by its coefficient (a 3d vector) and intercept (a scalar), using (as far as I understand it) the general form for mathematical planes: ax+by+cz=d, with a, b, c being the components of the coefficient and d being the intercept. When plotted as a 3d vector, the coefficient is a vector orthogonal to the plane (in the image it's the cyan line).
Now to the change of basis: if there were no intercept, I could just take the coefficient vector as one vector of my new basis; another one can be a random vector that lies on the plane, and the third one is simply the cross product of both, resulting in three orthogonal vectors that can be the column vectors of the transformation matrix.
The z-function used in the code below comes from simple term rearrangement from the general form of planes: ax+by+cz=d <=> z=(d-ax-by)/c:
z_func = lambda interc, coef, x, y: (interc - coef[0]*x - coef[1]*y) / coef[2]

def generate_trafo_matrices(coefficient, z_func):
    # note: z_func is called with only (x, y) here, so it is assumed to be
    # already bound to the intercept and coefficient
    normalize = lambda vec: vec/np.linalg.norm(vec)
    uvec1 = normalize(coefficient)
    uvec2 = normalize(np.array([1, 0, z_func(1, 0)]))
    uvec3 = normalize(np.cross(uvec1, uvec2))
    back_trafo_matrix = np.array([uvec2, uvec3, coefficient]).T
    # in other order such that it's on the xy-plane instead of the yz-plane
    trafo_matrix = np.linalg.inv(back_trafo_matrix)
    return trafo_matrix, back_trafo_matrix
This transformation matrix would then be applied to all points, like this:
def _transform(self, points, inverse=False):
    trafo_mat = self.inverse_trafo_mat if inverse else self.trafo_mat
    points = np.array([trafo_mat.dot(point) for point in points])
    return points
Now if the intercept were zero, this would work perfectly and the plane would lie flat on the xy-plane. However, as soon as I have an intercept != 0, the plane is not flat anymore:
I understand that this is the case because it is not a simple change of basis: the coordinate origin of my other basis is not at (0,0,0) but at a different place (the hyperplane could cross the coefficient vector at any point). But my attempts at adding the intercept to the transformation all failed to produce the correct result:
def _transform(self, points, inverse=False):
    trafo_mat = self.inverse_trafo_mat if inverse else self.trafo_mat
    intercept = self.intercept if inverse else -self.intercept
    ursprung_translate = trafo_mat.dot(np.array([0,0,0]) + trafo_mat[:,0]*intercept)
    points = np.array([point + trafo_mat[:,0]*intercept for point in points])
    points = np.array([trafo_mat.dot(point) for point in points])
    points = np.array([point - ursprung_translate for point in points])
    return points
is, for example, wrong. I am asking this on Stack Overflow rather than on the Math StackExchange because I don't think I would be able to translate the respective math into code; I am glad I even got this far.
I have created a github gist with the code to do the transformation and create the plots at https://gist.github.com/cstenkamp/0fce4d662beb9e07f0878744c7214995, which can be launched using Binder under the link https://mybinder.org/v2/gist/jtpio/0fce4d662beb9e07f0878744c7214995/master?urlpath=lab%2Ftree%2Fchange_of_basis_with_translate.ipynb if somebody wants to play around with the code itself.
Any help is appreciated!
The problem here is that your plane is an affine space, not a vector space, so you can't use the usual transform matrix formula.
A coordinate system in affine space is given by an origin point and a basis (put together, they're called an affine frame). For example, if your origin is called O, the coordinates of a point M in the affine frame will be the coordinates of the OM vector in the affine frame's basis.
As you can see, the "normal" R^3 space is a special case of affine space where the origin is (0,0,0).
Once we've determined those, we can use the frame change formulas in affine spaces: if we have two affine frames R = (O, b) and R' = (O', b'), the base change formula for a point M is: M(R') = base_change_matrix_from_b'_to_b * (M(R) - O'(R)) (with O'(R) the coordinates of O' in the coordinate system defined by R).
In our case, we're trying to go from the frame with origin (0,0,0) and the canonical basis to a frame where the origin is the orthogonal projection of (0,0,0) onto the plane and the basis is, for instance, the one described in your initial post.
Let's implement these steps:
To begin with, we'll define a Plane class to make our lives a bit easier:
from dataclasses import dataclass
import numpy as np

@dataclass
class Plane:
    a: float
    b: float
    c: float
    d: float

    @property
    def normal(self):
        return np.array([self.a, self.b, self.c])

    def __contains__(self, point: np.array):
        return np.isclose(self.a*point[0] + self.b*point[1] + self.c*point[2] + self.d, 0)

    def project(self, point):
        x, y, z = point
        k = (self.a*x + self.b*y + self.c*z + self.d)/(self.a**2 + self.b**2 + self.c**2)
        return np.array([x - k*self.a, y - k*self.b, z - k*self.c])

    def z(self, x, y):
        return (- self.d - self.b*y - self.a*x)/self.c
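A quick sanity check of the class, using a made-up plane x + 2y + 3z - 6 = 0 (i.e. a=1, b=2, c=3, d=-6):

plane = Plane(1, 2, 3, -6)
print(np.array([1, 1, 1]) in plane)           # True, since 1 + 2 + 3 - 6 == 0
print(plane.project(np.array([0., 0., 0.])))  # foot of the perpendicular from the origin
print(plane.z(1, 1))                          # 1.0, the z that puts (1, 1, z) on the plane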
We can then implement make_base_changer, which takes a Plane as input and returns two lambda functions performing the forward and backward transforms (each taking and returning a point); you should be able to test the result on your own points:
def normalize(vec):
    return vec/np.linalg.norm(vec)

def make_base_changer(plane):
    uvec1 = plane.normal
    uvec2 = [0, -plane.d/plane.b, plane.d/plane.c]
    uvec3 = np.cross(uvec1, uvec2)
    transition_matrix = np.linalg.inv(np.array([uvec1, uvec2, uvec3]).T)

    origin = np.array([0, 0, 0])
    new_origin = plane.project(origin)

    forward = lambda point: transition_matrix.dot(point - new_origin)
    backward = lambda point: np.linalg.inv(transition_matrix).dot(point) + new_origin
    return forward, backward
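A short usage sketch (the plane 2x - y + z - 4 = 0 below is made up for illustration): with this basis ordering the component along the normal ends up as the first coordinate of a transformed point, so a point lying on the plane maps to a first coordinate of approximately zero, and backward undoes forward.

plane = Plane(2, -1, 1, -4)
forward, backward = make_base_changer(plane)

p = np.array([1., 0., 2.])              # satisfies 2*1 - 0 + 2 - 4 = 0, so it lies on the plane
q = forward(p)
print(np.isclose(q[0], 0))              # True: the normal component comes first in this basis
print(np.allclose(backward(q), p))      # True: backward is the inverse of forward

Note that backward recomputes np.linalg.inv on every call; caching that inverse once would be cheaper if you transform many points.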

Geometry - Divide 3D points into segments with specific angle

I have a cloud of points (x, y, z) in the shape of a cylinder, like in the picture:
I want to divide it into 3D segments within a specific angle.
It looks like I have a pie and I need to cut it into pieces.
What is the best way to do this?
If I understand you correctly, this is how you can do what you want using https://github.com/daavoo/pyntcloud, taking this cylinder in .ply format as an example.
You can load the cylinder:
from pyntcloud import PyntCloud
cylinder = PyntCloud.from_file("cylinder.ply")
Which is a triangular mesh that looks like this:
You can generate a point cloud from the mesh as follows (this step is not necessary if you already have the cylinder as a point cloud):
n_points = 100000

cylinder = cylinder.get_sample(
    "mesh_random_sampling",
    n=n_points,
    as_PyntCloud=True)
Which now looks like this:
Now comes what I think is the right approach to do the "pie segmentation".
You can convert the (x, y, z) cartesian coordinates to (ro, phi, z) cylindrical coordinates as follows:
cylinder.add_scalar_field("cylindrical_coords")
The "phi" scalar field is a value wich identifies each point with the angle that you were interested in. The visualization is more explanatory:
You can now use these phi values to divide the points into the desired number of segments:
import pandas as pd

n_segments = 3

cylinder.points["segment"] = pd.cut(
    cylinder.points["phi"],
    n_segments,
    labels=range(n_segments))
Now cylinder.points["segment"] has a unique value assigning each point to a "pie segment".
The visualization is useful again to appreciate the "pie segments":
Get:
- base point O - center of cylinder circle base, lying in plane L
- cylinder axis unit vector N
- basis vector V in the plane L, collinear with some cut direction
- the second basis vector U, perpendicular to N and V (U = N x V)
Then, for every point P from the cloud, compute its projection onto the plane and the corresponding in-plane vector:
P' = P - N * ((P - O) · N)
W = P' - O
and get the angle (and hence the sector) relative to V in the plane using
angle = atan2(W · U, W · V)
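A minimal numpy sketch of this approach (the function name sector_angles and the example values for O, N, V and the point cloud are placeholders, not from the question):

import numpy as np

def sector_angles(points, O, N, V):
    # Signed angle of each point around the cylinder axis, measured against V
    # in the base plane, in radians in (-pi, pi].
    N = N / np.linalg.norm(N)
    V = V / np.linalg.norm(V)
    U = np.cross(N, V)                       # second in-plane basis vector
    rel = points - O
    proj = rel - np.outer(rel @ N, N)        # P' - O: projection onto the base plane
    return np.arctan2(proj @ U, proj @ V)

# Example: assign a dummy cloud to n_segments equal pie slices.
points = np.random.rand(1000, 3)
angles = sector_angles(points, O=np.zeros(3), N=np.array([0., 0., 1.]), V=np.array([1., 0., 0.]))
n_segments = 3
segment = ((angles + np.pi) / (2 * np.pi) * n_segments).astype(int) % n_segments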

How do I retrieve the angle between two 3D vectors?

I am new to Python.
I have two vectors in 3D space, and I want to know the angle between the two.
I tried:
vec1=[x1,y1,z1]
vec2=[x2,y2,z2]
angle=np.arccos(np.dot(vec1,vec2)/(np.linalg.norm(vec1)*np.linalg.norm(vec2)))
but when I change the order (vec2, vec1) I obtain the same angle, not a greater one.
I want it to give me a greater angle when the order of the vectors changes.
Use a function to help you choose which angle you want. At the beginning of your code, write:
import numpy as np

def angle(v1, v2, acute):
    # v1 is your first vector
    # v2 is your second vector
    angle = np.arccos(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
    if acute == True:
        return angle
    else:
        return 2 * np.pi - angle
Then, when you want to calculate an angle (in radians) in your program, just write
angle(vec1, vec2, True)
for acute angles, and
angle(vec2, vec1, False)
for obtuse angles.
For example:
vec1 = [1, -1, 0]
vec2 = [1, 1, 0]
#I am explicitly converting from radian to degree
print(180* angle(vec1, vec2, True)/np.pi) #90 degrees
print(180* angle(vec2, vec1, False)/np.pi) #270 degrees
If you're working with 3D vectors, you can do this concisely using the toolbelt vg. It's a light layer on top of numpy.
import numpy as np
import vg
vec1 = np.array([x1, y1, z1])
vec2 = np.array([x2, y2, z2])
vg.angle(vec1, vec2)
You can also specify a viewing angle to compute the angle via projection:
vg.angle(vec1, vec2, look=vg.basis.z)
Or compute the signed angle via projection:
vg.signed_angle(vec1, vec2, look=vg.basis.z)
I created the library at my last startup, where it was motivated by uses like this: simple ideas which are verbose or opaque in NumPy.
What you are asking is impossible as the plane that contains the angle can be oriented two ways and nothing in the input data gives a clue about it.
All you can do is to compute the smallest angle between the vectors (or its complement to 360°), and swapping the vectors can't have an effect.
The dot product isn't guilty here, this is a geometric dead-end.
The dot product is commutative; it doesn't care about the order, so you'll have to use a different approach.
Since the dot product is commutative, simply reversing the order you put the variables into the function will not work.
If your objective is to find the obtuse (larger) angle rather than the acute (smaller) one, subtract the value returned by your function from 360 degrees. Since you seem to have a criterion for when you want to switch the variables around, use that same criterion to decide when to subtract the found value from 360. This will give you the value you are looking for in these cases.
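Putting these observations together: if you do supply a reference normal to orient the plane (as vg.signed_angle does above), the result becomes order-dependent. A hedged numpy sketch, assuming both vectors lie in the plane perpendicular to the chosen normal (the function name is mine):

import numpy as np

def signed_angle_deg(v1, v2, normal):
    # Signed angle from v1 to v2 in degrees, oriented by `normal`;
    # swapping v1 and v2 flips the sign.
    v1, v2, normal = (np.asarray(v, dtype=float) for v in (v1, v2, normal))
    n = normal / np.linalg.norm(normal)
    return np.degrees(np.arctan2(np.dot(np.cross(v1, v2), n), np.dot(v1, v2)))

print(signed_angle_deg([1, -1, 0], [1, 1, 0], [0, 0, 1]))  #  90.0
print(signed_angle_deg([1, 1, 0], [1, -1, 0], [0, 0, 1]))  # -90.0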

Finding coordinate points of intersection with two numpy arrays

This sort of question is a tad different from the usual 'how to find the intersection of two lines' numpy question. Here is the situation: I am creating a program that looks at slope stability, and I need to find where a circle intersects a line.
I have two numpy arrays:
One array gives me the (x, y) values of an elevation profile in 2D.
The other array holds calculated (x, y) coordinates that span the circumference of a circle around a defined centre.
I need to somehow compare the two: at what approximate point do the coordinates of the circle intersect the profile line?
Here is some data to work with:
import numpy as np

circ_coords = np.array([
    [.71, .71],
    [0., 1.]
])

linear_profile = np.array([
    [0., 0.],
    [1., 1.]
])
I need a function that would spit out a single coordinate value (or several), saying that based on these circular coordinates and your linear profile, the two intersect here.
def intersect(array1, array2):
    # stuff
    return computed_array
You can solve it algebraically. The parametric representation of points (x,y) on the line segment between (x1,y1) and (x2,y2) is:
x = t·x1 + (1−t)·x2 and y = t·y1 + (1−t)·y2,
where 0 ≤ t ≤ 1.
If you substitute this into the equation of the circle and solve the resulting quadratic equation for t, you can test whether 0 ≤ t0, t1 ≤ 1, i.e. whether the line segment intersects the circle. The t0, t1 values can then be used to calculate the intersection points.
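A minimal sketch of that algebraic approach, written with the equivalent parametrisation P(t) = p1 + t·(p2 − p1); the helper name and the explicit centre/radius arguments are my own, using the same centre, radius, and line as the Shapely example below:

import numpy as np

def segment_circle_intersections(p1, p2, centre, radius):
    # Intersection points of the segment p1-p2 with a circle, as an (n, 2) array.
    p1, p2, centre = (np.asarray(v, dtype=float) for v in (p1, p2, centre))
    d = p2 - p1                              # P(t) = p1 + t*d, with 0 <= t <= 1
    f = p1 - centre
    a = d @ d
    b = 2 * (f @ d)
    c = f @ f - radius**2
    disc = b**2 - 4*a*c
    if disc < 0:
        return np.empty((0, 2))              # the full line misses the circle
    t = (-b + np.array([-1.0, 1.0]) * np.sqrt(disc)) / (2*a)
    t = t[(t >= 0) & (t <= 1)]               # keep only hits inside the segment
    return p1 + t[:, None] * d

print(segment_circle_intersections((0., 0.), (1., 1.), (0., 0.), 0.71))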
Shapely has some cool functions. According to this post, this code should work:
from shapely.geometry import LineString
from shapely.geometry import Point

p = Point(0, 0)                        # centre
c = p.buffer(0.71).boundary            # circle of radius 0.71
l = LineString([(0., 0.), (1., 1.)])   # line through the two profile points
i = c.intersection(l)
Apparently, i here is the intersection you are looking for; also check this post. Hope this helps.
