How to rotate a rotation by an Euler rotation in Python?

What I want
Input: a non-normalized axis rotation (rotation vector)
Output: the quaternion rotation, additionally rotated by -90 degrees about the y-axis (Euler)
What I have
#!/usr/bin/env python3
#from math import radians, degrees, cos, sin, atan2, asin, pow, floor
#import numpy as np
from scipy.spatial.transform import Rotation
#r = Rotation.from_rotvec(rotation_by_axis).as_quat()
r = Rotation.from_quat([-0.0941422, 0.67905384, -0.2797612, 0.67212856]) # example
print("Input (as Euler): " + str(r.as_euler('xyz', degrees=True)))
print("Output (as Euler): " + str(r.apply([0, -90, 0])))
The result:
Input (as Euler): [-83.23902624 59.33323676 -98.88314731]
Output (as Euler): [-22.33941658 -74.31676511 45.58474405]
How to get the output [-83.23902624 -30.66676324 -98.88314731] instead?
Bad workaround
This works only sometimes (why?).
rotation = r.from_quat([rotation.x, rotation.y, rotation.z, rotation.w])
rotation = rotation.as_euler('xyz', degrees=True)
print(rotation)
rotation = r.from_euler('xyz', [rotation[0], rotation[1]-90, rotation[2]], degrees=True)
print(rotation.as_euler('xyz', degrees=True))
rotation = rotation.as_quat()
How can I do it in a better way?
Sometimes I get wrong values:
[ -8.25897711 -16.54712028 -1.90525288]
[ 171.74102289 -73.45287972 178.09474712]
[ -7.18492129 22.22525264 0.44373851]
[ -7.18492129 -67.77474736 0.44373851]
[ 7.52491766 -37.71896037 -40.86915413]
[-172.47508234 -52.28103963 139.13084587]
[ -1.79610826 37.83068221 31.20184248]
[ -1.79610826 -52.16931779 31.20184248]
[-113.5719734 -54.28744892 141.73007557]
[ 66.4280266 -35.71255108 -38.26992443]
[ -83.23903078 59.33323752 -98.88315157]
[ -83.23903078 -30.66676248 -98.88315157]
[ -9.67960912 -7.23784945 13.56800885]
[ 170.32039088 -82.76215055 -166.43199115]
[ -6.21695895 5.66996884 -11.16152822]
[ -6.21695895 -84.33003116 -11.16152822]
[ 0. 0. 0. ]
[ 0. -90. 0. ]
[ 0. 0. 0. ]
[ 0. -90. 0. ]
Here wrong:
[ -8.25897711 -16.54712028 -1.90525288]
[ 171.74102289 -73.45287972 178.09474712]
Here okay:
[ -7.18492129 22.22525264 0.44373851]
[ -7.18492129 -67.77474736 0.44373851]
I require it for this:
https://github.com/Arthur151/ROMP/issues/193#issuecomment-1156960708

apply is for applying a rotation to vectors; it won't work on, e.g., Euler rotation angles, which aren't "mathematical" vectors: it doesn't make sense to add or scale them as triples of numbers.
To combine rotations, use *. So, e.g., to rotate by an additional 20 degrees about a y-axis defined by the first rotation:
In [1]: import numpy as np
In [2]: np.set_printoptions(suppress=True) # don't show round-off
In [3]: from scipy.spatial.transform import Rotation
In [4]: def e(x,y,z): return Rotation.from_euler('xyz', [x,y,z], degrees=True)
In [5]: def s(r): return r.as_euler('xyz', degrees=True)
In [6]: display(s(e(0,20,0) * e(10,0,0)))
Out[6]: array([10., 20., 0.])
However, in general this rotation won't simply add to the y-component of the total rotation. This is because the additional rotation's axes are defined by the first rotation, but the total rotation includes everything combined:
In [7]: s(e(0,20,0) * e(0,0,10))
Out[7]: array([ 3.61644157, 19.68349808, 10.62758414])
Combining rotations as shown above is quite standard; e.g., in a multi-jointed robot, to find the orientation of the final element, you'd use the "combining" technique shown above, with one rotation per joint, defined by the appropriate axes (e.g., z for a "hip" yawing rotation, x for a "wrist" rolling rotation).
If you do need to manipulate Euler angles, your "bad workaround" is fine. Bear in mind that the middle angle in an Euler representation is normally limited to at most 90 degrees in absolute value:
In [8]: s(e(0,135,0))
Out[8]: array([180., 45., 180.])
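For the question's concrete input, a minimal sketch (my own illustration, using composition rather than apply) that applies the extra -90 degree y-rotation in both operand orders, since the two generally differ:

from scipy.spatial.transform import Rotation

r = Rotation.from_quat([-0.0941422, 0.67905384, -0.2797612, 0.67212856])
extra = Rotation.from_euler('y', -90, degrees=True)

# extra * r: r acts first, then -90 degrees about the fixed (world) y-axis
print((extra * r).as_euler('xyz', degrees=True))
# r * extra: -90 degrees about r's own (already rotated) y-axis
print((r * extra).as_euler('xyz', degrees=True))

Per the caveat above, neither order will in general just subtract 90 from the middle Euler angle; if that exact behaviour is required, manipulating the Euler triple directly (the "bad workaround") is the way, gimbal caveats included.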

Related

Difference between scipy.linalg.expm versus hand-coded one

I was trying to implement the matrix exponential function as in scipy.linalg.expm. I gained inspiration from kaityo256's github repository. I thus wrote down the following.
from scipy.linalg import expm
from scipy.linalg import eigh
from scipy.linalg import inv
from math import exp as math_exp
from numpy import array, zeros
from numpy.random import random_sample
from numpy.testing import assert_allclose
def diag2sqr(x):
    '''Makes a square matrix from a diagonal one.

    Takes a 1d matrix. Determines its data type.
    Finds out the shape of the 1d matrix.
    Makes an empty square matrix with both
    dimensions equal to the largest (nonzero) dimension of
    the 1d matrix. It then fills the elements of the
    1d matrix into the diagonal slots of the empty
    square one.

    Parameters
    ----------
    x : ndarray
        ndarray to be converted to a square ndarray

    Returns
    -------
    xsqr : ndarray
        ndarray with diagonal the same as that of x,
        all other elements zero,
        dtype the same as that of x
    '''
    x_flat = x.ravel()
    # Making the empty matrix
    xsqr = zeros((x_flat.shape[0], x_flat.shape[0]), dtype=x.dtype)
    for i in range(x_flat.shape[0]):
        # filling up the ith diagonal element
        xsqr[i, i] = x_flat[i]
    print('xsqr', xsqr)
    return xsqr

def kaityo_expm(x):
    '''Exponentiates an ndarray (kaityo).

    Exponentiates an ndarray in the most naive way.

    Parameters
    ----------
    x : ndarray
        The ndarray to be exponentiated

    Returns
    -------
    kexpm : ndarray
        x after exponentiating
    '''
    # Find eigenvalues and eigenvectors;
    # the eigenvectors compose a unitary
    rx, ux = eigh(x)
    # Inverse of the unitary
    ux_inv = inv(ux)
    # Constructing the diagonal matrix of exponentiated eigenvalues
    # tx = diag([array([math_exp(i) for i in rx]).ravel()])
    # tx = array([math_exp(i) for i in rx])
    tx = diag2sqr(array([math_exp(i) for i in rx]))
    kexpm1 = tx @ ux_inv   # exp(Lambda) @ U^-1
    kexpm = ux @ kexpm1    # U @ exp(Lambda) @ U^-1
    return kexpm
Afterwards, I tried to test the above code versus scipy.linalg.expm.
x = random_sample((10, 10))
assert_allclose(expm(x), kaityo_expm(x))
This leads to the following output.
AssertionError:
Not equal to tolerance rtol=1e-07, atol=0
Mismatch: 100%
Max absolute difference: 7.04655733
Max relative difference: 0.59875635
x: array([[18.032424, 16.224408, 12.432163, 16.614248, 12.85653 , 13.705387,
15.096966, 10.577946, 18.399573, 17.938062],
[16.352809, 17.525898, 12.79079 , 16.295562, 13.512996, 14.407979,...
y: array([[18.649103, 13.157682, 11.264763, 16.099163, 15.2293 , 17.854499,
11.691586, 13.412066, 15.023189, 15.598455],
[13.157682, 13.612502, 9.628261, 12.659313, 13.559437, 13.382417,..
Obviously, the two implementations differ.
The questions are as follows:
Is it acceptable for them to differ?
Is my implementation wrong?
If my implementation is wrong, how do I fix it?
If my implementation is correct, when is it safe to use scipy.linalg.expm?
I have seen the following questions:
Matrix exponentiation with scipy: expm, expm2 and expm3
From a mathematical approach, the exponential of a matrix is defined via the Taylor series of the exponential:

$e^A = \sum_{k=0}^{\infty} \frac{A^k}{k!}$

If $A$ is a diagonal square matrix, this reduces to exponentiating the diagonal entries elementwise:

$e^A = \mathrm{diag}(e^{a_{11}}, \dots, e^{a_{nn}})$

The problem arises when $A$ is a generic square matrix, so before taking the exponential you will need to diagonalize it using its eigenvalues and eigenvectors:

$A = U \Lambda U^{-1}$

with $U$ the matrix of eigenvectors and $\Lambda$ the matrix with the eigenvalues on the diagonal. At this point we are close to finding the exponential of a matrix:

$e^A = U e^{\Lambda} U^{-1}$

Now let's implement this result in a simple script:
>>> import numpy as np
>>> import scipy.linalg as ln
>>> A = [[2/3, -4/3, 2],
...      [5/6, 4/3, -2],
...      [5/6, -2/3, 0]]
>>> A = np.matrix(A)
>>> print(A)
[[ 0.66666667 -1.33333333 2. ]
[ 0.83333333 1.33333333 -2. ]
[ 0.83333333 -0.66666667 0. ]]
>>> eigvalue, eigvectors = np.linalg.eig(A)
>>> print("eigvalue: ", eigvalue)
>>> print("eigvectors:")
>>> print(eigvectors)
eigvalue: [ 1. -1. 2.]
eigvectors:
[[ 0.81649658 0.27216553 0.87287156]
[ 0.40824829 -0.68041382 -0.21821789]
[ 0.40824829 -0.68041382 0.43643578]]
>>> e_Lambda = np.eye(np.size(A, 0))*(np.exp(eigvalue))
>>> print(e_Lambda)
[[2.71828183 0. 0. ]
[0. 0.36787944 0. ]
[0. 0. 7.3890561 ]]
>>> e_A = eigvectors*e_Lambda*eigvectors.I
>>> print(e_A)
[[ 2.3265481 -6.22769903 7.01116649]
[ 0.97933433 4.27520659 -3.51559341]
[ 0.97933433 -3.11384951 3.87346269]]
>>> e_A2 = ln.expm(A)
>>> print(e_A2)
[[ 2.3265481 -6.22769903 7.01116649]
[ 0.97933433 4.27520659 -3.51559341]
[ 0.97933433 -3.11384951 3.87346269]]
>>> np.testing.assert_allclose(e_A, e_A2)
>>> print(e_A - e_A2)
[[-1.77635684e-15 1.77635684e-15 -8.88178420e-16]
[ 4.44089210e-16 -1.77635684e-15 8.88178420e-16]
[-2.22044605e-16 0.00000000e+00 4.44089210e-16]]
We see that the result is basically the same, so I think it's safe to use scipy.linalg.expm for matrix exponentiation.
I created a repo with a notebook for further testing.
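One caveat worth checking in the original test (an observation of mine, not from the post): scipy.linalg.eigh assumes a symmetric/Hermitian input, while random_sample((10, 10)) is almost never symmetric, so the hand-coded eigendecomposition and expm are effectively applied to different matrices. A sketch of a like-for-like comparison on a symmetrized input:

import numpy as np
from scipy.linalg import expm, eigh

x = np.random.random_sample((10, 10))
x = 0.5 * (x + x.T)                    # symmetrize so eigh is applicable

w, u = eigh(x)                         # x == u @ diag(w) @ u.T
hand_coded = u @ np.diag(np.exp(w)) @ u.T
np.testing.assert_allclose(expm(x), hand_coded, atol=1e-10)  # agrees to tolerance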

adding points per pixel along some axis for 2D polygon

Assume I have an open polygon represented by a list of 2D points. E.g. the representation of some kind of triangle-polygon without a base would be:
import numpy as np
polygon_arr = np.array([[0,0], [15,10], [2,4]])
I'm looking for an elegant way to enrich the representation, i.e. to add points to polygon_arr such that the polygon itself doesn't change, but for every y value (in the range of the polygon) there is a matching point in polygon_arr.
Example:
simple_line_polygon = np.array([[0,0], [10,5]])
enriched_representation = foo(simple_line_polygon)
# foo() should return: np.array([[0,0], [2,1], [4,2], [6,3], [8,4], [10,5]])
I thought of considering each pair of adjacent points in the polygon, constructing a line equation (y=mx+n) and sampling it for each y within the range, then treating special cases such as the two points being vertically aligned (so the line equation is not defined) and the case where the points are already closer to each other than a one-pixel change in y value. However, this is not so elegant; I would appreciate better ideas.
There is no need for a line equation here. You can just scale the x and y distances between points separately. If there should be a minimum distance between the points, you can check that by computing the Euclidean distance between corners. Here is a small function that hopefully does what you are after:
import numpy as np
def enrich_polygon(polygon, maxpoints=5, mindist=1):
    result = []
    ##looping over all lines of the polygon:
    for start, end in zip(polygon, np.vstack([polygon[1:], polygon[:1]])):
        dist = np.sqrt(np.sum((start-end)**2))   ##distance between points
        N = int(min(maxpoints+1, dist/mindist))  ##amount of sub-sections
        if N < 2:  ##mindist already reached
            result += [start]
        else:
            ##generating the new points:
            ##put all points (including original start) on the line into result
            result += [
                start + i*(end-start)/(N-1) for i in range(N-1)
            ]
    return np.array(result)
polygon_arr = np.array([[0,0], [15,10], [2,4]])
res = enrich_polygon(polygon_arr)
print(res)
The function takes the original polygon and iterates over pairs of neighbouring corner points. If the distance between two corners is larger than mindist, new points are added, up to maxpoints (the maximum number of points to be added per line). For the given example the result looks like this:
[[ 0. 0. ]
[ 3. 2. ]
[ 6. 4. ]
[ 9. 6. ]
[12. 8. ]
[15. 10. ]
[12.4 8.8 ]
[ 9.8 7.6 ]
[ 7.2 6.4 ]
[ 4.6 5.2 ]
[ 2. 4. ]
[ 1.33333333 2.66666667]
[ 0.66666667 1.33333333]]
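Note that zip(polygon, np.vstack([polygon[1:], polygon[:1]])) pairs the last corner back with the first, i.e. it treats the polygon as closed. Since the question asks about an open polygon, a variant sketch (same logic as above, just dropping the wrap-around edge and keeping the final corner) could be:

def enrich_open_polygon(polygon, maxpoints=5, mindist=1):
    result = []
    ##looping over consecutive pairs only, no closing edge:
    for start, end in zip(polygon[:-1], polygon[1:]):
        dist = np.sqrt(np.sum((start-end)**2))
        N = int(min(maxpoints+1, dist/mindist))
        if N < 2:
            result += [start]
        else:
            result += [start + i*(end-start)/(N-1) for i in range(N-1)]
    result += [polygon[-1]]   ##keep the last corner of the open polygon
    return np.array(result)

With the defaults, this reproduces the enriched_representation example from the question for simple_line_polygon.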

Python: approximate the integration over a region with finite number of points

I have a 2D array of mesh-grid points that covers the rectangular region (-100,100)x(-100,100):
>>> x # shape is (4000000, 2)
>>> array([[-100. , -100. ],
[ -99.9, -100. ],
[ -99.8, -100. ],
...,
[ 99.7, 99.9],
[ 99.8, 99.9],
[ 99.9, 99.9]])
I have a function R^2 -> R that can be evaluated over all these points; however, I don't have an analytic form of my function (you can think of it as a neural network: it can be evaluated at any point on this rectangular mesh grid).
>>> f_x # the evaluation on pts; and its shape is (4000000,)
>>> array([-1.34405857e+47, -1.34137180e+47, -1.33868771e+47, ...,
-5.54445000e+02, -5.54445000e+02, -5.54445000e+02])
Now I want to integrate my function over any subregion (say, (-20,20)x(-20,20)). Since my f_x is supposed to be a valid normalized distribution with most of its mass in this region, the integral over such a region should be close to one.
How do I do this efficiently in python? Thanks
quad integrates a 1-D callable, so it is not a direct fit here: your function is only available as samples on a 2-D grid. Instead, integrate the sampled values over the subregion numerically, one axis at a time, e.g. with scipy.integrate.trapezoid (or simpson); see the sketch below.
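A minimal sketch, assuming the samples follow the row-major grid layout shown in the question (first coordinate varying fastest, 0.1 spacing), with f_x being the 4,000,000-element array of evaluations:

import numpy as np
from scipy.integrate import trapezoid

xs = np.arange(-100, 100, 0.1)        # 2000 grid coordinates per axis
f = f_x.reshape(2000, 2000)           # f[i, j] = f(x=xs[j], y=xs[i]) under the assumed layout

mask = (xs >= -20) & (xs <= 20)       # restrict to the subregion (-20,20)x(-20,20)
sub = f[np.ix_(mask, mask)]

# integrate along x for each fixed y, then along y
integral = trapezoid(trapezoid(sub, xs[mask], axis=1), xs[mask])
print(integral)                       # should be close to 1 for a normalized density

If the function can be called directly rather than only read off the grid, scipy.integrate.dblquad is the adaptive alternative, but for millions of precomputed samples the grid rule above is far cheaper.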
Hope that helps...

Determining a homogeneous affine transformation matrix from six points in 3D using Python

I am given the locations of three points:
p1 = [1.0, 1.0, 1.0]
p2 = [1.0, 2.0, 1.0]
p3 = [1.0, 1.0, 2.0]
and their transformed counterparts:
p1_prime = [2.414213562373094, 5.732050807568877, 0.7320508075688767]
p2_prime = [2.7677669529663684, 6.665063509461097, 0.6650635094610956]
p3_prime = [2.7677669529663675, 5.665063509461096, 1.6650635094610962]
The affine transformation matrix is of the form
trans_mat = np.array([[…, …, …, …],
                      […, …, …, …],
                      […, …, …, …],
                      […, …, …, …]])
such that with
import numpy as np
def transform_pt(point, trans_mat):
    a = np.array([point[0], point[1], point[2], 1])
    ap = np.dot(a, trans_mat)[:3]
    return [ap[0], ap[1], ap[2]]
you would get:
transform_pt(p1, trans_mat) == p1_prime
transform_pt(p2, trans_mat) == p2_prime
transform_pt(p3, trans_mat) == p3_prime
Assuming the transformation is homogeneous (consists of only rotations and translations), how can I determine this transformation matrix?
From a CAD program, I know the matrix is:
trans_mat = np.array([[0.866025403784, -0.353553390593, -0.353553390593, 0],
                      [0.353553390593,  0.933012701892, -0.066987298108, 0],
                      [0.353553390593, -0.066987298108,  0.933012701892, 0],
                      [0.841081377402,  5.219578794378,  0.219578794378, 1]])
I'd like to know how this can be found.
Six points alone is not enough to uniquely determine the affine transformation. However, based on what you had asked in a question earlier (shortly before it was deleted) as well as your comment, it would seem that you are not merely looking for an affine transformation, but a homogeneous affine transformation.
This answer by robjohn provides the solution to the problem. Although it solves a more general problem with many points, the solution for 6 points can be found at the very bottom of the answer. I shall transcribe it here in a more programmer-friendly format:
import numpy as np

def recover_homogenous_affine_transformation(p, p_prime):
    '''
    Find the unique homogeneous affine transformation that
    maps a set of 3 points to another set of 3 points in 3D
    space:

        p_prime == np.dot(p, R) + t

    where `R` is an unknown rotation matrix, `t` is an unknown
    translation vector, and `p` and `p_prime` are the original
    and transformed set of points stored as row vectors:

        p       = np.array((p1,       p2,       p3))
        p_prime = np.array((p1_prime, p2_prime, p3_prime))

    The result of this function is an augmented 4-by-4
    matrix `A` that represents this affine transformation:

        np.column_stack((p_prime, (1, 1, 1))) == \
            np.dot(np.column_stack((p, (1, 1, 1))), A)

    Source: https://math.stackexchange.com/a/222170 (robjohn)
    '''
    # construct intermediate matrix
    Q = p[1:] - p[0]
    Q_prime = p_prime[1:] - p_prime[0]

    # calculate rotation matrix
    R = np.dot(np.linalg.inv(np.row_stack((Q, np.cross(*Q)))),
               np.row_stack((Q_prime, np.cross(*Q_prime))))

    # calculate translation vector
    t = p_prime[0] - np.dot(p[0], R)

    # calculate affine transformation matrix
    return np.column_stack((np.row_stack((R, t)),
                            (0, 0, 0, 1)))
For your sample inputs, this recovers the exact same matrix as what you had obtained from the CAD program:
>>> recover_homogenous_affine_transformation(
...     np.array(((1.0, 1.0, 1.0),
...               (1.0, 2.0, 1.0),
...               (1.0, 1.0, 2.0))),
...     np.array(((2.4142135623730940, 5.732050807568877, 0.7320508075688767),
...               (2.7677669529663684, 6.665063509461097, 0.6650635094610956),
...               (2.7677669529663675, 5.665063509461096, 1.6650635094610962))))
array([[ 0.8660254 , -0.35355339, -0.35355339,  0.        ],
       [ 0.35355339,  0.9330127 , -0.0669873 ,  0.        ],
       [ 0.35355339, -0.0669873 ,  0.9330127 ,  0.        ],
       [ 0.84108138,  5.21957879,  0.21957879,  1.        ]])
Finding a transformation is like solving any system of equations with unknowns. First you have to write down the equations, which in your case means that you must know what kind of transformation you are looking for. E.g. a rigid translation takes three parameters (x, y, and z). A general rotation takes another three parameters, which gives six unknowns, and scaling adds another three, for a total of nine parameters. Your three points supply exactly nine equations, so this nine-parameter transformation is the one you can solve for; it means you are ignoring any shearing and projection. You should always know the type of transformation you are looking for.
Once you know the type of transformation, you should write down the matrix equation and then solve for the unknowns. This can be done with a linear algebra library, e.g. numpy.
It is possible to determine the transformation matrix if the original data (p1, p2, p3 in your case) and the transformed data (p1_prime, p2_prime, p3_prime) are given, as shown below:
>>> p # original data
array([[ 1., 1., 1.],
[ 1., 2., 1.],
[ 1., 1., 2.]])
>>> p_prime # transformed data
array([[ 2.41421356, 5.73205081, 0.73205081],
[ 2.76776695, 6.66506351, 0.66506351],
[ 2.76776695, 5.66506351, 1.66506351]])
# Get transformation matrix
>>> trans = np.dot(np.linalg.inv(p),p_prime)
>>> trans # transformation matrix
array([[ 1.70710678, 4.86602541, -0.13397459],
[ 0.35355339, 0.9330127 , -0.0669873 ],
[ 0.35355339, -0.0669873 , 0.9330127 ]])
# obtain transformed data from original data and transformation matrix
>>> np.dot(p, trans)
array([[ 2.41421356, 5.73205081, 0.73205081],
[ 2.76776695, 6.66506351, 0.66506351],
[ 2.76776695, 5.66506351, 1.66506351]])
In your case, since the transformed fourth homogeneous coordinates (the ap[3] values) are unknown for all three points, the full 4x4 transformation matrix cannot be obtained this way. It could only be obtained if these three values were known.
This problem is called point-to-point registration or point-set registration.
For a rigid transform, i.e. ignoring shearing and scaling, I like this tutorial. It involves finding the centroids and applying Singular Value Decomposition.
Note that for your particular case, with exactly three points, you could find a closed-form solution.
OpenCV can help with this.
Oh, and check out Finding translation and scale on two sets of points to get least square error in their distance?
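To make the centroid + SVD idea concrete, here is a minimal Kabsch-style sketch (my own illustration of the method such tutorials describe, not code taken from one) under the question's row-vector convention p_prime ≈ np.dot(p, R) + t:

import numpy as np

def rigid_transform_svd(p, p_prime):
    # centroids of the two point sets
    c, c_prime = p.mean(axis=0), p_prime.mean(axis=0)
    # cross-covariance of the centred sets
    H = (p - c).T @ (p_prime - c_prime)
    U, _, Vt = np.linalg.svd(H)
    # flip the last axis if the best-fit linear map is a reflection
    d = np.sign(np.linalg.det(U @ Vt))
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    t = c_prime - c @ R
    return R, t   # p_prime ≈ p @ R + t

For the question's three point pairs this should recover the same rotation block and translation row as the CAD matrix above.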

Eigenvectors in Numpy: Very bad numerics? Did I do something wrong?

For some calculations I need an eigenvalue decomposition. I tried out numpy's functions and noticed some very bad behavior! Look at this:
import numpy as np
N = 3
A = np.matrix(np.random.random([N,N]))
A = 0.5*(A.H + A) # Hermitian part
la, V = np.linalg.eig(A)
VI = np.matrix(np.linalg.inv(V))
V = np.matrix(V)
Edit: I chose a Hermitian matrix now, so it is normal.
The mathematics says that we should have VI * V.H = 1, and V.H * A * V = VI * A * V = D, where D is the diagonal matrix of the eigenvalues. The result I got from a random matrix was:
print(A.H*A - A*A.H)
[[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]
this shows that A is normal.
print(V.H*A*V)
[[ 1.71513832e+00 5.55111512e-17 -1.11022302e-16]
[ -1.11022302e-16 -5.17694280e-01 0.00000000e+00]
[ -7.63278329e-17 -4.51028104e-17 1.28559996e-01]]
print(VI*A*V)
[[ 1.71513832e+00 -2.77555756e-16 -2.22044605e-16]
[ 7.49400542e-16 -5.17694280e-01 -4.16333634e-17]
[ -3.33066907e-16 1.70002901e-16 1.28559996e-01]]
These two work correctly, since the off-diagonal entries are very small and the diagonal holds the eigenvalues.
print(VI*V.H)
[[ 0.50868822 -0.57398479 0.64169912]
[ 0.16362266 0.79620605 0.58248052]
[-0.84525968 -0.19130446 0.49893755]]
This should be the identity, but it is far from it.
So, what has gone wrong in the computation of the eigenvectors, even in this small example? Can anybody tell me what I have to watch out for when using these functions, and what I can do about the big mismatch?
Quote from numpy.linalg.eig documentation:
Likewise, the (complex-valued) matrix of eigenvectors v is unitary if the matrix a is normal, i.e., if dot(a, a.H) = dot(a.H, a), where a.H denotes the conjugate transpose of a.
Obviously, in the example you have, A^H A != A A^H, so the matrix V is not unitary.
Therefore, V.T.conj() is not related to the inverse of V.
The most common case where this assumption is correct is for hermitian matrices.
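A small self-contained check of that statement (my own illustration, not part of the original answer): the eigenvector matrix returned by eig is unitary for a symmetrized (hence normal) matrix, but not for a generic one:

import numpy as np

N = 3
A = np.random.random((N, N))
H = 0.5 * (A + A.T)   # real symmetric, hence normal

for M in (H, A):
    la, V = np.linalg.eig(M)
    # for a normal matrix V is unitary, so V.conj().T @ V is ~identity
    print(np.linalg.norm(V.conj().T @ V - np.eye(N)))

The first printed norm is at round-off level; the second typically is not.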
