Converting a rotation matrix to Euler angles and back - special case? - Python

I'm disassembling a rotation matrix into Euler angles (more specifically, Tait-Bryan angles in the order x-y-z, i.e. rotation around the x axis first) and assembling it back into a rotation matrix. I used the transforms3d Python library (https://github.com/matthew-brett/transforms3d) and also followed this tutorial: www.gregslabaugh.net/publications/euler.pdf. Both give the same result.
The problem is that the reassembled rotation matrix doesn't match the one I started with.
The matrix I'm working with was created by the "decomposeHomographyMat" function from OpenCV, so I expect it to be a valid rotation matrix. Maybe it is a special case?
The matrix is
[[ 0.9982552 , -0.03323557, -0.04880523],
 [-0.03675031,  0.29723396, -0.95409716],
 [-0.04621654, -0.95422606, -0.29549393]]
The three angles are [-1.8710997, 0.04623301, -0.03679793]. Converting them back yields a different rotation matrix, in which R_23, in particular, differs from the original by far more than a rounding error.
Following the paper above, the rotation around the y axis (beta) can be calculated as asin(-R_31). Another valid angle would be pi - asin(-R_31).
The angle around the x axis (alpha) can be calculated as atan2(R_32, R_33). I could also get alpha from asin(R_32/cos(beta)) or from acos(R_33/cos(beta)). The latter two equations only agree on alpha if I use beta = pi - asin(-R_31), which would imply that there is only one valid solution for beta; yet atan2(R_32, R_33) gives a result different from both.
Either way, something seems to be wrong with my matrix, or I cannot figure out why the disassembly doesn't work.
import numpy as np

def rot2eul(R):
    beta = -np.arcsin(R[2,0])
    alpha = np.arctan2(R[2,1]/np.cos(beta), R[2,2]/np.cos(beta))
    gamma = np.arctan2(R[1,0]/np.cos(beta), R[0,0]/np.cos(beta))
    return np.array((alpha, beta, gamma))

def eul2rot(theta):
    R = np.array([[np.cos(theta[1])*np.cos(theta[2]), np.sin(theta[0])*np.sin(theta[1])*np.cos(theta[2]) - np.sin(theta[2])*np.cos(theta[0]), np.sin(theta[1])*np.cos(theta[0])*np.cos(theta[2]) + np.sin(theta[0])*np.sin(theta[2])],
                  [np.sin(theta[2])*np.cos(theta[1]), np.sin(theta[0])*np.sin(theta[1])*np.sin(theta[2]) + np.cos(theta[0])*np.cos(theta[2]), np.sin(theta[1])*np.sin(theta[2])*np.cos(theta[0]) - np.sin(theta[0])*np.cos(theta[2])],
                  [-np.sin(theta[1]), np.sin(theta[0])*np.cos(theta[1]), np.cos(theta[0])*np.cos(theta[1])]])
    return R
R = np.array([[ 0.9982552 , -0.03323557, -0.04880523],
              [-0.03675031,  0.29723396, -0.95409716],
              [-0.04621654, -0.95422606, -0.29549393]])

ang = rot2eul(R)
eul2rot(ang)

import transforms3d.euler as eul
ang = eul.mat2euler(R, axes='sxyz')
eul.euler2mat(ang[0], ang[1], ang[2], axes='sxyz')

It turns out the rotation matrix has a negative determinant, which makes it an improper rotation matrix (a rotation combined with a reflection). The OpenCV function "decomposeHomographyMat" has a bug: https://github.com/opencv/opencv/issues/4978
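A quick way to catch this case is to check the determinant before decomposing; for the matrix in the question it comes out to roughly -1 rather than the +1 a proper rotation matrix must have:

import numpy as np

# A proper rotation matrix has determinant +1; a negative value
# indicates an improper rotation (rotation plus reflection).
print(np.linalg.det(R))  # ≈ -1.0 for the R defined above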

Maybe you can use scipy:

from scipy.spatial.transform import Rotation

# First transform the matrix to Euler angles
r = Rotation.from_matrix(rotation_matrix)
angles = r.as_euler("zyx", degrees=True)

# Modify the angles
print(angles)
angles[0] += 5

# Then transform the new angles back into a rotation matrix
r = Rotation.from_euler("zyx", angles, degrees=True)
new_rotation_matrix = r.as_matrix()

Related

How can I rotate a 2d image using a target image, landmark coordinates, the least squares approach, and a rotation matrix?

I have two 2D images: one is the source image and the other is the target image. I need to rotate the source image to match the target image using Python (scikit & numpy). I have 3 landmark coordinates for each image, as follows:
image1_points = [(12,16),(7,4),(25,20)]
image2_points = [(15,22),(1,22),(25,10)]
I believe the following steps are what's needed (see the sketch at the end of this question for step 1):

1. Create a rotation matrix from the 3 landmark coordinates using a least-squares approach
2. Use the rotation matrix to get theta
3. Convert theta to degrees (for the angle)
4. Use the apply_angle method with the angle to rotate the image
I've been trying to use these points and the least squares approach to compute a linear transformation matrix that transforms points from the source to the target image.
I know I need to create a rotation matrix, but having never taken algebra I'm a bit lost. I've done lots of reading, and tried using scipy's built-in procrustes to do an affine transformation below (which may be all wrong).
import numpy as np
import scipy.spatial
from numpy.linalg import norm
from math import atan, cos, radians

m1, m2, d = scipy.spatial.procrustes(target_points, source_points)
a = np.dot(m1.T, m2, out=None) / norm(m1)**2
# separate x and y for the sake of convenience
ref_x = m2[::2]
ref_y = m2[1::2]
x = m1[::2]
y = m1[1::2]
b = np.sum(x*ref_y - ref_x*y) / norm(m1)**2
scale = np.sqrt(a**2 + b**2)
theta = atan(b / max(a.all(), 10**-10))  # avoid dividing by 0
degrees = cos(radians(theta))
apply_angle(source_img, degrees)
However, this is not giving me the result I would expect: it gives me an angle around 1 degree, where I would expect around 72 degrees. I assume that this degree value is what should be passed as the angle parameter to rotate the image.
Any help would be hugely appreciated. Thank you!
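For reference, here is a minimal sketch of step 1 using the standard least-squares rotation fit (the Kabsch algorithm) rather than the procrustes route above; the landmark values are taken from the question, and everything else (scaling, translation, apply_angle) is left out:

import numpy as np

# Landmarks from the question
src = np.array([(12, 16), (7, 4), (25, 20)], dtype=float)
dst = np.array([(15, 22), (1, 22), (25, 10)], dtype=float)

# Center both point sets so only a rotation remains to be fitted
src_c = src - src.mean(axis=0)
dst_c = dst - dst.mean(axis=0)

# Kabsch: the SVD of the cross-covariance matrix gives the
# least-squares rotation from src to dst
U, S, Vt = np.linalg.svd(src_c.T @ dst_c)
d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
R = Vt.T @ np.diag([1.0, d]) @ U.T

# Extract the rotation angle from the 2x2 rotation matrix
theta = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
print(theta)  # angle to feed into the image rotation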

Angle between two vectors in the interval [0,360]

I'm trying to find the angle between two vectors.
Following is the code that I use to evaluate the angle between vectors ba and bc
import numpy as np
import scipy.linalg as la
a = np.array([6,0])
b = np.array([0,0])
c = np.array([1,1])
ba = a - b
bc = c - b
cosine_angle = np.dot(ba, bc) / (la.norm(ba) * la.norm(bc))
angle = np.arccos(cosine_angle)
print (np.degrees(angle))
My question is the following: in this code, for both c = np.array([1,1]) and c = np.array([1,-1]) you get 45 degrees as the answer. I can understand this from a mathematical viewpoint because, with the dot product, you always get the angle in the interval [0,180].
But geometrically this is misleading, as the point c is in two different locations for [1,1] and [1,-1].
So is there a way to get the angle in the interval [0,360] for a general starting point
b = np.array([x,y])
Appreciate your help.
Conceptually, obtaining the angle between two vectors using the dot product is perfectly alright. However, since the angle between two vectors is invariant upon translation/rotation of the coordinate system, we can find the angle subtended by each vector to the positive direction of the x-axis and subtract one value from the other.
The advantage is that we'll use np.arctan2 to find the angles; it returns angles in the range (-π, π], which tells you the quadrant your vector lies in.
# Syntax: np.arctan2(y, x) - put the y value first!
# Instead of explicitly referring by indices, you can unpack each vector in reverse, like so:
# np.arctan2(*bc[::-1])
angle = np.arctan2(bc[1], bc[0]) - np.arctan2(ba[1], ba[0])
You can then transform this appropriately to get a value within [0, 2π).
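For example, taking the difference modulo 2π maps it into [0, 2π):

# Signed difference of the two polar angles, wrapped into [0, 2*pi)
angle = (np.arctan2(bc[1], bc[0]) - np.arctan2(ba[1], ba[0])) % (2 * np.pi)
print(np.degrees(angle))  # 45.0 for c = [1, 1], 315.0 for c = [1, -1]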

Inverse FFT returns negative values when it should not

I have several points (x, y, z coordinates) in a 3D box with associated masses. I want to draw a histogram of the mass density found in spheres of a given radius R.
I have written a code that, provided I did not make any errors (which I think I may have), works in the following way:
My "real" data is something huge thus I wrote a little code to generate non overlapping points randomly with arbitrary mass in a box.
I compute a 3D histogram (weighted by mass) with a binning about 10 times smaller than the radius of my spheres.
I take the FFT of my histogram, compute the wave-modes (kx, ky and kz) and use them to multiply my histogram in Fourier space by the analytic expression of the 3D top-hat window (sphere filtering) function in Fourier space.
I inverse FFT my newly computed grid.
Thus drawing a 1D-histogram of the values on each bin would give me what I want.
My issue is the following: given what I do, there should not be any negative values in my inverse-FFT grid (step 4), but I get some, with values much higher than the numerical error.
If I run my code on a small box (300x300x300 cm3, with the points separated by at least 1 cm) I do not get the issue; I do get it for a 600x600x600 cm3 box, though.
If I set all the masses to 0, thus working on an empty grid, I do get back 0 everywhere without any noted issues.
I give my code here in one full block so that it is easily copied.
import numpy as np
import matplotlib.pyplot as plt
import random
from numba import njit

# 1. Generate a bunch of points with masses from 1 to 3 separated by a radius of 1 cm
radius = 1
rangeX = (0, 100)
rangeY = (0, 100)
rangeZ = (0, 100)
rangem = (1, 3)
qty = 20000  # or however many points you want

# Generate a set of all points within 1 of the origin, to be used as offsets later
deltas = set()
for x in range(-radius, radius+1):
    for y in range(-radius, radius+1):
        for z in range(-radius, radius+1):
            if x*x + y*y + z*z <= radius*radius:
                deltas.add((x, y, z))

X = []
Y = []
Z = []
M = []
excluded = set()
for i in range(qty):
    x = random.randrange(*rangeX)
    y = random.randrange(*rangeY)
    z = random.randrange(*rangeZ)
    m = random.uniform(*rangem)
    if (x, y, z) in excluded:
        continue
    X.append(x)
    Y.append(y)
    Z.append(z)
    M.append(m)
    excluded.update((x+dx, y+dy, z+dz) for (dx, dy, dz) in deltas)
print("There are", len(X), "points in the box")

# 2. Compute the 3D histogram
a = np.vstack((X, Y, Z)).T
b = 200
H, edges = np.histogramdd(a, weights=M, bins=b)

# 3. Compute the FFT of the grid
Fh = np.fft.fftn(H, axes=(-3, -2, -1))

# Compute the different wave modes
kx = 2*np.pi*np.fft.fftfreq(len(edges[0][:-1]))*len(edges[0][:-1])/(np.amax(X)-np.amin(X))
ky = 2*np.pi*np.fft.fftfreq(len(edges[1][:-1]))*len(edges[1][:-1])/(np.amax(Y)-np.amin(Y))
kz = 2*np.pi*np.fft.fftfreq(len(edges[2][:-1]))*len(edges[2][:-1])/(np.amax(Z)-np.amin(Z))

# Create a matrix containing the values of the filter at each point of the grid in Fourier space
R = 5
Kh = np.empty((len(kx), len(ky), len(kz)))

@njit(parallel=True)
def func_njit(kx, ky, kz, Kh):
    for i in range(len(kx)):
        for j in range(len(ky)):
            for k in range(len(kz)):
                if np.sqrt(kx[i]**2+ky[j]**2+kz[k]**2) != 0:
                    Kh[i][j][k] = (np.sin((np.sqrt(kx[i]**2+ky[j]**2+kz[k]**2))*R)-(np.sqrt(kx[i]**2+ky[j]**2+kz[k]**2))*R*np.cos((np.sqrt(kx[i]**2+ky[j]**2+kz[k]**2))*R))*3/((np.sqrt(kx[i]**2+ky[j]**2+kz[k]**2))*R)**3
                else:
                    Kh[i][j][k] = 1
    return Kh

Kh = func_njit(kx, ky, kz, Kh)

# Multiply each point of the grid by the associated value of the filter
# (multiplication in Fourier space = convolution in real space)
Gh = np.multiply(Fh, Kh)

# 4. Take the inverse FFT of the filtered grid. Take the real part to get back
# floats; the imaginary part should contain only zeros.
Density = np.real(np.fft.ifftn(Gh, axes=(-3, -2, -1)))

# If there are negative values, this shows the magnitude of the error
print(np.min(Density))

D = Density.flatten()
N = np.mean(D)

# Compute the desired histogram
hist, bins = np.histogram(D/N, bins='auto', density=True)
bin_centers = (bins[1:] + bins[:-1]) * 0.5
plt.plot(bin_centers, hist)
plt.xlabel('rho/rhom')
plt.ylabel('P(rho)')
plt.show()
Do you know why I'm getting these negative values? Do you think there is a simpler way to proceed?
Sorry if this is a very long post; I tried to make it very clear and will edit it with your comments. Thanks a lot!
EDIT: A follow-up question on the issue can be found [here].
The filter you create in the frequency domain is only an approximation to the filter you want to create. The problem is that we are dealing with the DFT here, not the continuous-domain FT (with its infinite frequencies). The Fourier transform of a ball is indeed the function you describe; however, this function has infinite extent -- it is not band-limited!
By sampling this function only within a window, you are effectively multiplying it with an ideal low-pass filter (the rectangle of the domain). This low-pass filter, in the spatial domain, has negative values. Therefore, the filter you create also has negative values in the spatial domain.
This is a slice through the origin of the inverse transform of Kh (after I applied fftshift to move the origin to the middle of the image, for better display):
[image: central slice of the spatial-domain filter, showing ringing]
As you can tell, there is some ringing that leads to negative values.
One way to overcome this ringing is to apply a windowing function in the frequency domain. Another option is to generate a ball in the spatial domain, and compute its Fourier transform. This second option would be the simplest to achieve. Do remember that the kernel in the spatial domain must also have the origin at the top-left pixel to obtain a correct FFT.
A windowing function is typically applied in the spatial domain to avoid issues with the image border when computing the FFT. Here, I propose to apply such a window in the frequency domain to avoid similar issues when computing the IFFT. Note, however, that this will always further reduce the bandwidth of the kernel (the windowing function would work as a low-pass filter after all), and therefore yield a smoother transition of foreground to background in the spatial domain (i.e. the spatial domain kernel will not have as sharp a transition as you might like). The best known windowing functions are Hamming and Hann windows, but there are many others worth trying out.
Unsolicited advice:
I simplified your code to compute Kh to the following:
kr = np.sqrt(kx[:, None, None]**2 + ky[None, :, None]**2 + kz[None, None, :]**2)
kr *= R
Kh = (np.sin(kr) - kr*np.cos(kr)) * 3 / kr**3  # warns about division by zero at kr = 0...
Kh[0, 0, 0] = 1                                # ...which this line then fixes up
I find this easier to read than the nested loops. It should also be significantly faster and avoids the need for njit. Note that you were computing the same distance (what I call kr here) five times per grid point; factoring out such a computation is not only faster, it also yields more readable code.
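Building on this, here is a minimal sketch of the frequency-domain windowing suggested above, using a radial Hann taper (one choice among many, and an assumption on my part):

# Taper Kh with a radial Hann window so it falls smoothly to zero at
# the edge of the sampled frequency band, reducing the ringing.
kmax = kr.max()
window = 0.5 * (1.0 + np.cos(np.pi * kr / kmax))  # 1 at kr = 0, 0 at kr = kmax
Kh = Kh * window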
Just a guess:
Where do you get the idea that the imaginary part MUST be zero? Have you ever tried taking the absolute values (sqrt(re^2 + im^2)) and forgetting about the phase, instead of just taking the real part? Just something that came to my mind.
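For instance, as a drop-in replacement for the line that takes the real part:

# Magnitude of the complex result instead of its real part
Density = np.abs(np.fft.ifftn(Gh, axes=(-3, -2, -1)))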

Distance between point and arc in 3D

I want to compute the distance between an arc and a point in 3D space. All I found is the distance between a circle and a point (link), which is either wrong, or I made a mistake, as I get wrong values:
import numpy as np

P = np.array([1, 0, 1])
center = np.array([0, 0, 0])
radius = 1
n2 = np.array([0, 0, 1])
Delta = P - center
dist_tmp = np.sqrt((n2*Delta)**2 + (np.abs(np.cross(n2, Delta)) - radius)**2)
dist = np.linalg.norm(dist_tmp)
I have a circle in the x-y plane with its origin at x = y = z = 0 and radius = 1. The point of interest is at distance 1 above the circle. The distance the code returns is 1.73... and not 1.
What is the right equation for the point-circle distance?
How can I extend it to the point-arc distance?
You have several errors in your code. Here is the answer to your first question.
First, you try to implement the dot product of n2 and Delta as n2*Delta, but that is not what the multiplication of two NumPy arrays does (it is element-wise). Use np.dot() instead. Next, you try to take the "absolute value" (magnitude) of a vector with np.abs, but the latter is for real and complex numbers only. One way to get the magnitude of a vector is np.linalg.norm(). Changing those gives you the proper answer, and you don't need the extra calculation you used for the variable dist. So use
Delta = P - center
dist = np.sqrt(np.dot(n2, Delta)**2 + (np.linalg.norm(np.cross(n2, Delta)) - radius)**2)
That gives the proper answer for dist, 1.0.
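The second question isn't covered above. One possible sketch (my own parameterization, not from the linked answer): find where the closest point on the full circle lands; if that point lies on the arc, the point-circle formula applies, otherwise the closest point is one of the arc's endpoints. Here the arc starts at the unit in-plane direction u and sweeps angle_span radians counterclockwise around the unit normal n:

import numpy as np

def point_arc_distance(P, center, radius, n, u, angle_span):
    v = np.cross(n, u)                # completes the in-plane basis
    Delta = P - center
    h = np.dot(n, Delta)              # height above the circle's plane
    in_plane = Delta - h * n          # projection into the plane
    # Polar angle of the projection, measured from u, in [0, 2*pi)
    phi = np.arctan2(np.dot(in_plane, v), np.dot(in_plane, u)) % (2 * np.pi)
    if phi <= angle_span:
        # The closest point of the full circle lies on the arc:
        # reuse the point-circle formula from above
        return np.sqrt(h**2 + (np.linalg.norm(in_plane) - radius)**2)
    # Otherwise the closest point is one of the arc's endpoints
    e0 = center + radius * u
    e1 = center + radius * (np.cos(angle_span) * u + np.sin(angle_span) * v)
    return min(np.linalg.norm(P - e0), np.linalg.norm(P - e1))

# e.g. the question's point against a half circle starting at +x:
# point_arc_distance(np.array([1,0,1]), np.array([0,0,0]), 1,
#                    np.array([0,0,1]), np.array([1,0,0]), np.pi)  -> 1.0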

How do I retrieve the angle between two vectors 3D?

I am new to Python.
I have two vectors in 3D space, and I want to know the angle between them.
I tried:
vec1 = [x1, y1, z1]
vec2 = [x2, y2, z2]
angle = np.arccos(np.dot(vec1, vec2) / (np.linalg.norm(vec1) * np.linalg.norm(vec2)))
but when I change the order to vec2, vec1 I obtain the same angle, not a larger one.
I want to get a greater angle when the order of the vectors changes.
Use a function to help you choose which angle you want. At the beginning of your code, write:

def angle(v1, v2, acute):
    # v1 is your first vector
    # v2 is your second vector
    angle = np.arccos(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
    if acute:
        return angle
    else:
        return 2 * np.pi - angle

Then, when you want to calculate an angle (in radians) in your program, just write

angle(vec1, vec2, True)

for the acute (smaller) angle, and

angle(vec2, vec1, False)

for the reflex (larger) angle.
For example:

vec1 = [1, -1, 0]
vec2 = [1, 1, 0]

# explicitly converting from radians to degrees
print(180 * angle(vec1, vec2, True) / np.pi)   # 90 degrees
print(180 * angle(vec2, vec1, False) / np.pi)  # 270 degrees
If you're working with 3D vectors, you can do this concisely using the toolbelt vg. It's a light layer on top of numpy.
import numpy as np
import vg
vec1 = np.array([x1, y1, z1])
vec2 = np.array([x2, y2, z2])
vg.angle(vec1, vec2)
You can also specify a viewing angle to compute the angle via projection:
vg.angle(vec1, vec2, look=vg.basis.z)
Or compute the signed angle via projection:
vg.signed_angle(vec1, vec2, look=vg.basis.z)
I created the library at my last startup, where it was motivated by uses like this: simple ideas which are verbose or opaque in NumPy.
What you are asking is impossible, as the plane that contains the angle can be oriented two ways, and nothing in the input data gives a clue about which one is meant.
All you can do is compute the smallest angle between the vectors (or its complement to 360°), and swapping the vectors can't have an effect.
The dot product isn't guilty here; this is a geometric dead-end.
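That said, if you do have an orientation for the plane (like the look direction in the vg answer above), a signed angle becomes well-defined. A minimal sketch, where the reference normal look is an input you must supply:

import numpy as np

def signed_angle(v1, v2, look):
    # Unsigned angle from the dot product, as in the question
    ang = np.arccos(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
    # The cross product's component along `look` orients the plane
    if np.dot(np.cross(v1, v2), look) >= 0:
        return ang
    return 2 * np.pi - ang

print(np.degrees(signed_angle([1, -1, 0], [1, 1, 0], [0, 0, 1])))  # 90.0
print(np.degrees(signed_angle([1, 1, 0], [1, -1, 0], [0, 0, 1])))  # 270.0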
The dot product is commutative, so it doesn't care about the order: simply reversing the order in which you put the variables into the function will not work, and you'll have to use a different criterion.
If your objective is to find the reflex (larger) angle rather than the acute (smaller) one, subtract the value returned by your function from 360 degrees. Since you seem to have a criterion for when you want to switch the variables around, use that same criterion to determine when to subtract your found value from 360. This will give you the value you are looking for in these cases.
