I have two copies of the same molecule as .xyz files, i.e. each atom has X, Y and Z coordinates. However, you can rotate a molecule and obtain different coordinates for each atom, even though the relative positions are the same and the molecule itself is unchanged. I want to align the two molecules using three atoms as reference points, but I am struggling to align them completely.
First, I align both molecules by translation so that one chosen atom coincides in both. Then I perform two subsequent rotations using rotation matrices, as explained elsewhere. For some reason, I need to take the negative of the cross product of the two vectors and use a sine instead of a cosine to get both structures perfectly aligned (I discovered this after a lot of trial and error).
For the second rotation, I project the two vectors I want to align onto the plane whose normal is the rotation axis. This is necessary because I don't want to rotate about the cross product of those two vectors, since that would misalign the rest of the molecule; instead, I rotate about the already-aligned vector. The projection lets me find the angle between the two vectors within that plane, and thus the rotation needed.
However, this code does not properly align the two molecules.
"group1[0]" contains the XYZ coordinates of the three atoms to align in a list. Likewise for "group2[0]" and the structure 2.
import numpy as np

# Point 1: align the functional groups to the origin
O1 = np.array(coords1[group1[0][0]])
O2 = np.array(coords2[group2[0][0]])
mat_2 = np.zeros((len(atoms2), 3))
for ind, c in enumerate(coords1):
    coords1[ind] = np.array(c) - O1
for ind, c in enumerate(coords2):
    coords2[ind] = np.array(c) - O2
    mat_2[ind] = coords2[ind]
# Point 2: align according to a first vector
v1 = np.array(coords1[group1[0][1]])  # since atom 1 is the origin, these coordinates are the vector already
v2 = np.array(coords2[group2[0][1]])  # since atom 1 is the origin, these coordinates are the vector already
v1 = v1 / np.linalg.norm(v1)
v2 = v2 / np.linalg.norm(v2)
# let v be the axis of rotation
v = -np.cross(v1, v2)  # why do I need a minus here?
if np.linalg.norm(v) != 0:
    a = np.arccos(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
    # c = np.dot(v1, v2)*np.cos(a)
    c = np.dot(v1, v2) * np.sin(a)  # the internet says cos, but this works perfectly
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    rot_mat = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) + vx + vx.dot(vx) * (1 - c) / (1 - c**2)
    mat_2 = np.array(mat_2)
    R_mat_rot = np.matmul(rot_mat, mat_2.T).T
else:
    exit(0)
coords3 = R_mat_rot.copy()
# I get exactly what I want up until here
# Point 3: rotate about atom2-atom1 (v1) to align the third atom
v = -v1.copy()
v2 = np.array(coords3[group2[0][2]]) - np.array(coords3[group2[0][0]])  # atom 1 sits at the origin, so this is the bond vector
v2 = v2 / np.linalg.norm(v2)
v1 = np.array(coords1[group1[0][2]]) - np.array(coords1[group1[0][0]])  # atom 1 sits at the origin, so this is the bond vector
v1 = v1 / np.linalg.norm(v1)
if np.linalg.norm(v) != 0:
    # consider v to be the vector normal to a plane;
    # we want the projections of v1 and v2 onto that plane
    vp1 = np.cross(v, np.cross(v1, v)) - np.array(coords1[group1[0][0]])
    vp1 = vp1 / np.linalg.norm(vp1)
    vp2 = np.cross(v, np.cross(v2, v)) - np.array(coords3[group2[0][0]])
    vp2 = vp2 / np.linalg.norm(vp2)
    # we find the angle between those vectors in the plane
    a = np.arccos(np.dot(vp1, vp2)) / (np.linalg.norm(vp1) * np.linalg.norm(vp2))
    # rotation by that amount
    c = np.dot(v1, v2) * np.cos(a)
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    rot_mat = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) + vx + np.dot(vx, vx) * (1 - c) / (1 - c**2)
    R_mat_rot = np.matmul(rot_mat, coords3.T).T
coords4 = R_mat_rot.copy()  # final coordinates
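For reference, a minimal sketch of the standard construction I was trying to follow (align_rotation is just a helper name; it assumes unit-length inputs that are not anti-parallel). Note that rotating v2 onto v1 corresponds to the axis v2 x v1 = -(v1 x v2), which may be where my empirical minus sign comes from, and with c = np.dot(v1, v2) the factor (1-c)/s**2 simplifies to 1/(1+c), with no sine needed:

import numpy as np

def align_rotation(u, w):
    # Rodrigues form R = I + [v]x + [v]x^2 / (1 + c), with v = u x w, c = u.w,
    # maps unit vector u onto unit vector w; since ||v||^2 = 1 - c^2,
    # the usual (1 - c)/||v||^2 term equals 1/(1 + c). Requires c != -1.
    v = np.cross(u, w)
    c = np.dot(u, w)
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])
    return np.identity(3) + vx + vx.dot(vx) / (1 + c)

# rotating structure 2 onto structure 1 means mapping v2 onto v1:
# rot_mat = align_rotation(v2, v1)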
As far as I know, after an image resize the intrinsic matrix K changes proportionally, so why are the 3D reconstructions of the same point not the same?
The following Python program is a simple experiment: the original image size is 1920×1080 and after resizing it becomes 640×480. The intrinsic matrix K1 corresponds to the original image and K2 to the resized one; RT1 and RT2 are the 3×4 extrinsic matrices [R, T] of the cameras (these should remain unchanged, right?). Ignoring the effects of camera skew and distortion, why is there a difference in the reconstructed 3D points?
import cv2
import numpy as np

fx = 1040
fy = 1040
cx = 1920 / 2
cy = 1080 / 2
K1 = np.array([[fx, 0, cx],
               [0, fy, cy],
               [0, 0, 1]])
RT1 = np.array([[1, 0, 0, 4],
                [0, 1, 0, 5],
                [0, 0, 1, 6]])  # just randomly set
theta = np.pi / 6
RT2 = np.array([[np.cos(theta), -np.sin(theta), 0, 40],
                [np.sin(theta), np.cos(theta), 0, 50],
                [0, 0, 1, 60]])  # just randomly set
p1 = np.matmul(K1, RT1)  # projection matrix of camera 1
p2 = np.matmul(K1, RT2)  # projection matrix of camera 2
pt1 = np.array([100.0, 200.0])
pt2 = np.array([300.0, 400.0])
point3d1 = cv2.triangulatePoints(p1, p2, pt1, pt2)
# remember to divide out the 4th row to make it homogeneous
point3d1 = point3d1 / point3d1[3]
print(point3d1)
[[-260.07160113]
[ -27.39546108]
[ 273.95189881]
[ 1. ]]
Then I resize the image and reconstruct the 3D point again, to see whether it is numerically equal:
rx = 640.0 / 1920.0
ry = 480.0 / 1080.0
fx = fx * rx
fy = fy * ry
cx = cx * rx
cy = cy * ry
K2 = np.array([[fx, 0, cx],
               [0, fy, cy],
               [0, 0, 1]])
p1 = np.matmul(K2, RT1)
p2 = np.matmul(K2, RT2)
pt1 = np.array([pt1[0] * rx, pt1[1] * ry])
pt2 = np.array([pt2[0] * rx, pt2[1] * ry])
point3d2 = cv2.triangulatePoints(p1, p2, pt1, pt2)
# remember to divide out the 4th row to make it homogeneous
point3d2 = point3d2 / point3d2[3]
print(point3d2)
[[-193.03965985]
[ -26.72133393]
[ 189.12512305]
[ 1. ]]
As you can see, point3d1 and point3d2 are not the same. Why?
After careful consideration, I arrived at a plausible explanation, which I state here to help others.
In short:
Image scaling must use a uniform scaling factor (the same factor on x and y, so that fx = fy is preserved) in order to derive a correct intrinsic matrix K; otherwise the inconsistency of the x and y focal lengths relative to the original image directly leads to deviations in the computed 3D points.
Returning to the problem at the beginning: the given image size is 1920×1080 and the focal length is 1040 pixels, i.e. fx = fy = 1040. By definition fx = f/dx and fy = f/dy, where dx and dy are the physical pixel width and height and f is the physical focal length; so the prior dx = dy holds and is a constant. This "convention" should also be followed for any later image scaling.
Imagine that the scaled image's fx and fy were obtained with different proportions: dx and dy would no longer be equal, distorting the image, and since the projection matrix is P = K*[R, t], the disproportionate change of fx and fy inside K leads directly to a deviation in the computed P.
As an aside, I have put a reference MATLAB version of this experiment at this link.
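To see the conclusion concretely, here is a minimal sketch (same setup as above) in which the image is scaled uniformly; with the same factor on both axes, the triangulated points agree:

import cv2
import numpy as np

fx = fy = 1040.0
cx, cy = 1920 / 2, 1080 / 2
K1 = np.array([[fx, 0, cx],
               [0, fy, cy],
               [0, 0, 1]])
RT1 = np.array([[1, 0, 0, 4],
                [0, 1, 0, 5],
                [0, 0, 1, 6]], dtype=float)
theta = np.pi / 6
RT2 = np.array([[np.cos(theta), -np.sin(theta), 0, 40],
                [np.sin(theta), np.cos(theta), 0, 50],
                [0, 0, 1, 60]])
pt1 = np.array([100.0, 200.0])
pt2 = np.array([300.0, 400.0])

X1 = cv2.triangulatePoints(np.matmul(K1, RT1), np.matmul(K1, RT2), pt1, pt2)
X1 = X1 / X1[3]

# uniform scaling: the same factor s on both axes (e.g. 1920x1080 -> 640x360)
s = 640.0 / 1920.0
K2 = np.matmul(np.diag([s, s, 1.0]), K1)  # scales fx, fy, cx and cy by the same s
X2 = cv2.triangulatePoints(np.matmul(K2, RT1), np.matmul(K2, RT2), s * pt1, s * pt2)
X2 = X2 / X2[3]

print(np.allclose(X1, X2))  # True: the reconstructed 3D point is unchanged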
(Rewritten, compressed and partly corrected version)
I am trying a simple rotation of a triangle in 3D so that its normal points straight up, without any distortion. I'm following this post: calculate-rotation-matrix-to-align-vector, but for some reason my result is not what I expect and I can't see the error in my code or in my reasoning. My Python code is as follows:
import numpy as np

# defining the triangle with three points in 3D space (xyz)
v1 = [1, 1, 1]
v2 = [2, 2, 2]
v3 = [2, 1, 3]
# calculating the normal and converting it to a unit vector
n = np.cross(np.subtract(v2, v1), np.subtract(v3, v1))
a = n / np.sqrt(n[0]**2 + n[1]**2 + n[2]**2)
b = [0, 1, 0]
v = np.cross(a, b)
s = np.sqrt(v.dot(v))
c = np.dot(a, b)
Vx = np.array([
    [0, -v[2], v[1]],
    [v[2], 0, -v[0]],
    [-v[1], v[0], 0]
])
Vx2 = Vx.dot(Vx)
I = np.identity(3)
iv = 1 / (1 + c)
# single rotation matrix
R = np.add(I, np.add(Vx, np.multiply(Vx2, iv)))
Then I calculate the centroid, translate the triangle to the origin, and apply the rotation both with the single matrix R described above and with separate per-axis matrices, as explained here: Rotating a Vector in 3D Space
cx = (v1[0]+v2[0]+v3[0]) / 3.0
cy = (v1[1]+v2[1]+v3[1]) / 3.0
cz = (v1[2]+v2[2]+v3[2]) / 3.0
v1t = [v1[0]-cx,v1[1]-cy,v1[2]-cz]
v2t = [v2[0]-cx,v2[1]-cy,v2[2]-cz]
v3t = [v3[0]-cx,v3[1]-cy,v3[2]-cz]
As well as the single rotation matrices:
rz = np.array([
[c,-s, 0],
[s, c, 0],
[0, 0, 1]
])
ry = np.array([
[ c, 0, s],
[ 0, 1, 0],
[-s, 0, c]
])
rx = np.array([
[1, 0, 0],
[0, c,-s],
[0, s, c]
])
rxyz = rx.dot(ry).dot(rz)
Now I apply the rotation, with R and rxyz:
p1 = R.dot(v1t)
p2 = R.dot(v2t)
p3 = R.dot(v3t)
p4 = rxyz.dot(v1t)
p5 = rxyz.dot(v2t)
p6 = rxyz.dot(v3t)
The result is now better than with the first approach: the Y components of p1 and p3 are correctly equal, but p2 is still wrong, and there still seems to be a distortion.
P1-3 (red)
[-0.20673470096224283, 1.1102230246251565e-16, -1.2299659828522118]
[-0.5865305980755144, -3.885780586188048e-16, 0.45993196570442385]
[0.7932652990377571, 1.1102230246251565e-16, 0.770034017147788]
P4-6 (blue)
[-1.1482080390363765, 0.30059830961195716, -0.38316381732390803]
[0.3040075530555323, -0.6336677067539932, -0.2481938771563051]
[0.8442004859808443, 0.3330693971420361, 0.6313576944802128]
The light gray triangle is the original translated to 0,0,0 and red and blue are the rotated ones.
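For what it's worth, a condensed, self-contained check of the single-matrix approach (same numbers as above) suggests that R alone already does the job: the rotated normal comes out as [0, 1, 0] and all three y components are ~0. The per-axis matrices, by contrast, reuse c and s from the axis-angle form, which are not the Euler angles of this rotation, so rxyz is a different rotation:

import numpy as np

v1, v2, v3 = np.array([1., 1, 1]), np.array([2., 2, 2]), np.array([2., 1, 3])
n = np.cross(v2 - v1, v3 - v1)
a = n / np.linalg.norm(n)
b = np.array([0., 1, 0])
v = np.cross(a, b)
c = np.dot(a, b)
Vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
R = np.identity(3) + Vx + Vx.dot(Vx) / (1 + c)

centroid = (v1 + v2 + v3) / 3.0
p1, p2, p3 = (R.dot(p - centroid) for p in (v1, v2, v3))

new_n = np.cross(p2 - p1, p3 - p1)
print(new_n / np.linalg.norm(new_n))  # ~[0, 1, 0]
print(p1[1], p2[1], p3[1])            # all ~0: the triangle lies in the y = 0 plane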
I want to implement a "row wise" matrix multiplication.
More specifically speaking, I want to plot a set of arrows whose directions range from (-pi, pi). The following code is how I implemented it.
import numpy as np
import matplotlib.pyplot as plt

scan_phi = np.linspace(-np.pi*0.5, np.pi*0.5, 450)
points = np.ones((450, 2), dtype=float)
points[..., 0] = 0.0
n_pts = len(points)
sin = np.sin(scan_phi)
cos = np.cos(scan_phi)
rot = np.append(np.expand_dims(np.vstack([cos, -sin]).T, axis=1),
                np.expand_dims(np.vstack([sin, cos]).T, axis=1),
                axis=1)
points_rot = []
for idx, p in enumerate(points):
    points_rot.append(np.matmul(rot[idx], p.T))
points_rot = np.array(points_rot)
sample = points_rot[::10]
ax = plt.axes()
ax.set_xlim(-2, 2)
ax.set_ylim(-2, 2)
for idx, p in enumerate(sample):
    if idx == 0:
        ax.arrow(0, 0, p[0], p[1], head_width=0.05, head_length=0.1, color='red')
    else:
        ax.arrow(0, 0, p[0], p[1], head_width=0.05, head_length=0.1, fc='k', ec='k')
plt.show()
In my code, "rot" ends up being an array of (450, 2, 2) meaning for each arrow, I have created a corresponding rotation matrix to rotate it. I have 450 points stored in "points" (450, 2) that I want to draw arrows with. (Here the arrows are all initialized with [0, 1]. However, it can be initialized with different values which is why I want to have 450 individual points instead of just rotating a single point by 450 different angles)
The way I did is using a for-loop, i.e. for each arrow, I transform it individually.
points_rot = []
for idx, p in enumerate(points):
points_rot.append(np.matmul(rot[idx], p.T))
points_rot = np.array(points_rot)
However, I wonder if there's any nicer and easy way to do this completely through numpy, such as some operations that can perform matrix multiplication row-wise. Any idea will be grateful, thanks in advance!
This is a nice use-case for np.einsum:
aa = np.random.normal(size=(450, 2, 2))
bb = np.random.normal(size=(450, 2))
cc = np.einsum('ijk,ik->ij', aa, bb)
So that each row of cc is the product of the corresponding rows of aa and bb:
np.allclose(aa[3].dot(bb[3]), cc[3])  # returns True
Explanation: the Einstein notation ijk,ik->ij is saying:
cc[i,j] = sum(aa[i,j,k] * bb[i,k] for k in range(2))
I.e., all indices that do not appear on the right-hand side are summed away.
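As an aside, for this particular shape combination, plain matmul broadcasting is an equivalent alternative that may read more simply (a sketch with random data, matching the einsum above):

import numpy as np

aa = np.random.normal(size=(450, 2, 2))
bb = np.random.normal(size=(450, 2))

# matmul broadcasts over the leading axis: (450, 2, 2) @ (450, 2, 1) -> (450, 2, 1)
cc = np.matmul(aa, bb[..., None])[..., 0]

print(np.allclose(cc, np.einsum('ijk,ik->ij', aa, bb)))  # True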
I have the following problem: I have 3D point coordinates and I want to connect them in an array, or "draw a line", like you can in 2D with skimage (http://scikit-image.org/docs/0.13.x/api/skimage.draw.html#skimage.draw.line). The optimal case would be if I could directly draw a cylinder with a given radius into an array, and give different radii different values, like rings around the line (wrinkles could be a problem here). There have been approaches to this, but I don't think they are quite the right thing, e.g. here:
"Draw" a 3d line into an array
With the two approaches by @Paul Panzer (Fastest way to get all the points between two (X,Y) coordinates in python) you get all the coordinates between two 2D points, but what would this look like in 3D, especially for the second approach, which is faster?
import numpy as np
from timeit import timeit

def connect(ends):
    d0, d1 = np.abs(np.diff(ends, axis=0))[0]
    if d0 > d1:
        return np.c_[np.linspace(ends[0, 0], ends[1, 0], d0+1, dtype=np.int32),
                     np.linspace(ends[0, 1]+0.5, ends[1, 1]+0.5, d0+1, dtype=np.int32)]
    else:
        return np.c_[np.linspace(ends[0, 0]+0.5, ends[1, 0]+0.5, d1+1, dtype=np.int32),
                     np.linspace(ends[0, 1], ends[1, 1], d1+1, dtype=np.int32)]

def connect2(ends):
    d0, d1 = np.diff(ends, axis=0)[0]
    if np.abs(d0) > np.abs(d1):
        return np.c_[np.arange(ends[0, 0], ends[1, 0] + np.sign(d0), np.sign(d0), dtype=np.int32),
                     np.arange(ends[0, 1] * np.abs(d0) + np.abs(d0)//2,
                               ends[0, 1] * np.abs(d0) + np.abs(d0)//2 + (np.abs(d0)+1) * d1,
                               d1, dtype=np.int32) // np.abs(d0)]
    else:
        return np.c_[np.arange(ends[0, 0] * np.abs(d1) + np.abs(d1)//2,
                               ends[0, 0] * np.abs(d1) + np.abs(d1)//2 + (np.abs(d1)+1) * d0,
                               d0, dtype=np.int32) // np.abs(d1),
                     np.arange(ends[0, 1], ends[1, 1] + np.sign(d1), np.sign(d1), dtype=np.int32)]

ends = np.array([[1520, -1140],
                 [1412, -973]])
Since you tagged the question 'vtk', I assume you can use it in your code. In that case, I think vtkTubeFilter does exactly what you are looking for: you create a vtkPolyData with all your line segments (you have the endpoints for that), and the filter generates a cylinder for each segment. You can also specify a different radius for each segment, as you wanted: call SetVaryRadiusToVaryRadiusByScalar() on the filter to turn that on, and provide the radii as scalar data.
You can find a code example that does all that (and more) here: https://www.vtk.org/Wiki/VTK/Examples/Cxx/VisualizationAlgorithms/TubesWithVaryingRadiusAndColors
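If VTK is not an option and you just need the voxel coordinates of the line itself, the linspace-based approach above generalizes to 3D quite directly. Here is a minimal sketch (connect3d is a helper name I'm introducing; it samples one point per step along the dominant axis and rounds, rather than reproducing the exact midpoint arithmetic of the 2D versions):

import numpy as np

def connect3d(ends):
    # integer points along a 3D segment, one per step of the longest axis
    d = np.abs(np.diff(ends, axis=0))[0]
    n = d.max() + 1
    return np.rint(np.linspace(ends[0], ends[1], n)).astype(np.int32)

ends3d = np.array([[1520, -1140, 10],
                   [1412, -973, 55]])
line = connect3d(ends3d)  # shape (n, 3), ready to index into a 3D array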
I've put together this forward kinematics function for the Baxter arm robot, based on its hardware specs and the following joint axes:
The joint positions computed by this forward kinematics do not match the corresponding Cartesian coordinates. What am I doing wrong here?
import numpy as np

def FK_function_2(joints):
    def yaw(theta):  # rotation around z
        y = np.array([[np.cos(theta), -np.sin(theta), 0],
                      [np.sin(theta), np.cos(theta), 0],
                      [0, 0, 1]])
        return y

    R01 = yaw(joints[0]).dot(np.array([[-1, 0, 0],
                                       [0, 0, 1],
                                       [0, 1, 0]]))
    R12 = yaw(joints[1]).dot(np.array([[0, 0, -1],
                                       [-1, 0, 0],
                                       [0, 1, 0]]))
    R23 = yaw(joints[2]).dot(np.array([[-1, 0, 0],
                                       [0, 0, 1],
                                       [0, 1, 0]]))
    R34 = yaw(joints[3]).dot(np.array([[-1, 0, 0],
                                       [0, 0, 1],
                                       [0, 1, 0]]))
    R45 = yaw(joints[4]).dot(np.array([[-1, 0, 0],
                                       [0, 0, 1],
                                       [0, 1, 0]]))
    R56 = yaw(joints[5]).dot(np.array([[-1, 0, 0],
                                       [0, 0, 1],
                                       [0, 1, 0]]))
    R67 = yaw(joints[6]).dot(np.array([[1, 0, 0],
                                       [0, 1, 0],
                                       [0, 0, 1]]))

    d = np.array([0.27035, 0, 0.36435, 0, 0.37429, 0, 0.229525])
    a = np.array([0.069, 0, 0.069, 0, 0.010, 0, 0])

    l1 = np.array([a[0]*np.cos(joints[0]), a[0]*np.sin(joints[0]), d[0]])
    l2 = np.array([a[1]*np.cos(joints[1]), a[1]*np.sin(joints[1]), d[1]])
    l3 = np.array([a[2]*np.cos(joints[2]), a[2]*np.sin(joints[2]), d[2]])
    l4 = np.array([a[3]*np.cos(joints[3]), a[3]*np.sin(joints[3]), d[3]])
    l5 = np.array([a[4]*np.cos(joints[4]), a[4]*np.sin(joints[4]), d[4]])
    l6 = np.array([a[5]*np.cos(joints[5]), a[5]*np.sin(joints[5]), d[5]])
    l7 = np.array([a[6]*np.cos(joints[6]), a[6]*np.sin(joints[6]), d[6]])

    unit = np.array([0, 0, 0, 1])
    H0 = np.concatenate((np.concatenate((R01, l1.reshape(3, 1)), axis=1), unit.reshape(1, 4)), axis=0)
    H1 = np.concatenate((np.concatenate((R12, l2.reshape(3, 1)), axis=1), unit.reshape(1, 4)), axis=0)
    H2 = np.concatenate((np.concatenate((R23, l3.reshape(3, 1)), axis=1), unit.reshape(1, 4)), axis=0)
    H3 = np.concatenate((np.concatenate((R34, l4.reshape(3, 1)), axis=1), unit.reshape(1, 4)), axis=0)
    H4 = np.concatenate((np.concatenate((R45, l5.reshape(3, 1)), axis=1), unit.reshape(1, 4)), axis=0)
    H5 = np.concatenate((np.concatenate((R56, l6.reshape(3, 1)), axis=1), unit.reshape(1, 4)), axis=0)
    H6 = np.concatenate((np.concatenate((R67, l7.reshape(3, 1)), axis=1), unit.reshape(1, 4)), axis=0)

    T = H0.dot(H1).dot(H2).dot(H3).dot(H4).dot(H5).dot(H6)
    return T[0:3, 3]
OK, so I have been looking at this and checked your code. The code itself is good and correctly composes your defined kinematic chain, with transformations from the base to the end of the robotic arm.
(H0 * H1 * H2 * H3 * H4 * H5 * H6) is the correct kinematic chain, where each H represents a transformation from one joint to the next, starting at the base of the arm.
The problem is that the transformations themselves are wrong: your H0 through H6 are not right, and it is the numbers in these matrices that make your transformations differ from the real ones. You need to form the correct transformations from the base all the way to the end of the arm. Other than that, your approach is correct.
It looks like you are using standard DH parameters for your transformation matrices, and your values for a and d (and for alpha, which isn't shown in your code) are off, causing the transformations to be expressed incorrectly. The DH parameters are described at https://en.wikipedia.org/wiki/Denavit%E2%80%93Hartenberg_parameters.
After going through the modified DH table, I found an exact guide for Baxter's forward kinematics that sets up these transformations. I would look at the modified DH parameters at the end of the wiki article above, since the guide uses those.
Baxter Forward Kinematic Guide: https://www.ohio.edu/mechanical-faculty/williams/html/pdf/BaxterKinematics.pdf
In this paper, the author, Robert Williams, sets up the DH parameters for the Baxter robotic arm and gets values different from yours (I know you are using the standard DH parameters, but I would look at using the modified ones). His DH table, the link lengths, and the modified DH matrix are all given in the paper (the figures are not reproduced here).
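For reference, the modified DH transform from frame i-1 to frame i in Craig's convention (which I believe is what the paper uses; double-check against it) is:
[cos(theta_i)                   -sin(theta_i)                  0                  a_(i-1);
 sin(theta_i)*cos(alpha_(i-1))   cos(theta_i)*cos(alpha_(i-1)) -sin(alpha_(i-1)) -d_i*sin(alpha_(i-1));
 sin(theta_i)*sin(alpha_(i-1))   cos(theta_i)*sin(alpha_(i-1))  cos(alpha_(i-1))  d_i*cos(alpha_(i-1));
 0                               0                              0                 1]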
Now you can calculate the matrices H0 through H6, and if you have the end-effector geometry you can add it as an additional H7. Once you multiply them all together you should get the proper forward kinematic transformation (see the paper for additional detail). Both the left and right arms have the same kinematics.
When you multiply it all out, you get expressions for the coordinates x7, y7 and z7 relative to the base of the arm, as functions of the joint rotations and the geometry of the robot arm. See page 17 of the paper for the expressions for x7, y7 and z7, and page 14 for the individual transformations.
Also, don't forget to express the angles in radians, since your code uses NumPy's standard trig functions.
One last update:
I just remembered that I find it easier to think through the intermediate translation and rotation steps one by one (instead of jumping straight to the DH matrix). The two approaches are equivalent, but I like to consider each individual step it takes to get from one frame to the next.
For this you can use these building blocks.
Pure Translation:
[1 0 0 u;
0 1 0 v;
0 0 1 w;
0 0 0 1]
where u, v and w are the distances from the previous frame to the new frame, measured along the previous frame's x, y and z axes, respectively.
Rotation about the z-axis by arbitrary theta:
This represents the robot joint's rotation by an arbitrary theta.
[cos(theta) -sin(theta) 0 0;
sin(theta) cos(theta) 0 0;
0 0 1 0;
0 0 0 1]
Combination of rotations around intermediate frames to get to the final frame orientation (these angles will usually be in increments of pi/2 or pi in order to reach the final orientation):
You can use a rotation about the intermediate x, y, or z axis, as shown below.
(Rotation about x axis by alpha)
R_x(alpha) = [1 0 0 0;
0 cos(alpha) -sin(alpha) 0;
0 sin(alpha) cos(alpha) 0;
0 0 0 1];
(Rotation about y axis by beta)
R_y(beta) = [ cos(beta) 0 sin(beta) 0;
0 1 0 0;
-sin(beta) 0 cos(beta) 0;
0 0 0 1];
(Rotation about z axis by gamma):
[cos(gamma) -sin(gamma) 0 0;
sin(gamma) cos(gamma) 0 0;
0 0 1 0;
0 0 0 1]
So with these building blocks you can build the sequence of steps to go from one frame to another (essentially any H matrix can be decomposed into these steps). The chain would be something like:
[H](transformation from previous frame to the next frame) = [Pure Translation from previous joint to new joint expressed in the previous joint's frame] * [Rotation about previous frame's z-axis by theta (for the joint) (since the joint has many positions, theta is left as symbolic)] * [All other intermediate rotations to arrive at the new joint frame orientation expressed as rotations about intermediate axis frames]
That is essentially what the DH parameters help you do, but I like to think of the individual steps to get from one frame to the next instead of jumping there with the DH parameters.
With the corrected H0 through H6 transformations, your approach is correct. Just change the definitions of H0 through H6 in your code.
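If it helps, here is a minimal Python sketch of those building blocks (the helper names translation, rot_z and rot_x are mine, and the composition at the end is illustrative, not Baxter's actual chain):

import numpy as np

def translation(u, v, w):
    # pure translation from the previous frame, expressed in that frame
    T = np.identity(4)
    T[:3, 3] = [u, v, w]
    return T

def rot_z(theta):
    # rotation about the previous frame's z axis (the joint rotation)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def rot_x(alpha):
    # rotation about the intermediate x axis
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[1, 0,  0, 0],
                     [0, c, -s, 0],
                     [0, s,  c, 0],
                     [0, 0,  0, 1]])

# One link's transform might then be composed as, for example:
# H = translation(a, 0, d).dot(rot_z(theta)).dot(rot_x(alpha))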