This should be easy, but I've been all over trying to find a simple explanation that I can grasp. I have an object that I'd like to represent in OpenGL as a cone. The object has x, y, z coordinates and a velocity vector vx, vy, and vz. The cone should point in the direction of the velocity vector.
So, I think my PyOpenGL code should look something like this:
glPushMatrix()
glTranslate(x, y, z)
glPushMatrix()
# do some sort of rotation here #
glutSolidCone(base, height, slices, stacks)
glPopMatrix()
glPopMatrix()
So, is that correct (so far)? What do I put in place of the "# do some sort of rotation here #" ?
In my world, the Z-axis points up (0, 0, 1) and, without any rotations, so does my cone.
Okay, Reto Koradi's answer seems to be the approach that I should take, but I'm not sure of some of the implementation details and my code is not working.
If I understand correctly, the rotation matrix should be a 4x4. Reto shows me how to get a 3x3, so I'm assuming that the 3x3 should be the upper-left corner of a 4x4 identity matrix. Here's my code:
import numpy as np

def normalize(v):
    norm = np.linalg.norm(v)
    if norm > 1.0e-8:  # arbitrarily small
        return v / norm
    else:
        return v

def transform(v):
    bz = normalize(v)
    if (abs(v[2]) < abs(v[0])) and (abs(v[2]) < abs(v[1])):
        by = normalize(np.array([v[1], -v[0], 0]))
    else:
        by = normalize(np.array([v[2], 0, -v[0]]))
        #~ by = normalize(np.array([0, v[2], -v[1]]))
    bx = np.cross(by, bz)
    R = np.array([[bx[0], by[0], bz[0], 0],
                  [bx[1], by[1], bz[1], 0],
                  [bx[2], by[2], bz[2], 0],
                  [0, 0, 0, 1]], dtype=np.float32)
    return R
and here is the way it gets inserted into the rendering code:
glPushMatrix()
glTranslate(x, y, z)
glPushMatrix()
v = np.array([vx, vy, vz])
glMultMatrixf(transform(v))
glutSolidCone(base, height, slices, stacks)
glPopMatrix()
glPopMatrix()
Unfortunately, this isn't working. My test-case cones just do not point correctly, and I can't identify the failure mode. Without the glMultMatrixf(transform(v)) line, the cones align along the z-axis, as expected.
It's working now. Reto Koradi correctly identified that the rotation matrix needed to be transposed in order to match the column-major order of OpenGL. The code should look like this (before optimization):
def transform(v):
    bz = normalize(v)
    if (abs(v[2]) < abs(v[0])) and (abs(v[2]) < abs(v[1])):
        by = normalize(np.array([v[1], -v[0], 0]))
    else:
        by = normalize(np.array([v[2], 0, -v[0]]))
        #~ by = normalize(np.array([0, v[2], -v[1]]))
    bx = np.cross(by, bz)
    R = np.array([[bx[0], by[0], bz[0], 0],
                  [bx[1], by[1], bz[1], 0],
                  [bx[2], by[2], bz[2], 0],
                  [0, 0, 0, 1]], dtype=np.float32)
    return R.T
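A quick sanity check of the transposed matrix (my own addition, assuming the normalize and transform functions above are in scope): OpenGL reads the flattened array in column-major order, so the matrix it effectively applies is the transpose of the returned array, and its 3x3 part should map the unit z-axis onto the normalized velocity vector.

v = np.array([1.0, 2.0, 3.0])         # arbitrary test velocity
effective = transform(v).T            # the matrix OpenGL actually applies
print(effective[:3, :3] @ np.array([0.0, 0.0, 1.0]))  # ~ normalize(v)
print(normalize(v))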
A helpful concept to remember here is that a linear transformation can also be interpreted as a change of coordinate systems. In other words, instead of picturing points being transformed within a coordinate system, you can just as well picture the points staying in place, but their coordinates being expressed in a new coordinate system. When looking at the matrix expressing the transformation, the base vectors of this new coordinate system are the column vectors of the matrix.
In the following, the base vectors of the new coordinate system are named bx, by and bz. Since the columns of a rotation matrix need to be orthonormal, bx, by and bz need to form an orthonormal set of vectors.
In this case, the original cone is oriented along the z-axis. Since you want the cone to be oriented along (vx, vy, vz) instead, we use this vector as the z-axis of our new coordinate system. Since we want an orthonormal coordinate system, the only thing left to do to obtain bz is to normalize this vector:
               [vx]
bz = normalize([vy])
               [vz]
Since the cone is rotationally symmetrical, it does not really matter how the remaining two base vectors are chosen, just as long as they are both orthogonal to bz, and orthogonal to each other. A simple way to find an arbitrary orthogonal vector to a given vector is to keep one coordinate 0, swap the other two coordinates, and change the sign of one of those two coordinates. Again, the vector needs to be normalized. Vectors we could choose with this approach include:
               [ vy]                  [ vz]                  [ 0 ]
by = normalize([-vx])  by = normalize([ 0 ])  by = normalize([ vz])
               [ 0 ]                  [-vx]                  [-vy]
The dot product of each of these vectors with (vx, vy, vz) is zero, which means that the vectors are orthogonal.
While the choice between these (or other variations) is mostly arbitrary, care must be taken not to end up with a degenerate vector. For example, if vx and vy are both zero, using the first of these vectors would be bad. To avoid choosing a (near) degenerate vector, a simple strategy is to use the first of these three vectors if |vz| is smaller than both |vx| and |vy|, and one of the other two otherwise.
With two new base vectors in place, the third is the cross product of the other two:
bx = by x bz
All that's left is to populate the rotation matrix with column vectors bx, by and bz, and the rotation matrix is complete:
    [ bx.x  by.x  bz.x ]
R = [ bx.y  by.y  bz.y ]
    [ bx.z  by.z  bz.z ]
If you need a 4x4 matrix, e.g. because you are using the legacy fixed function OpenGL pipeline, you can extend this to a 4x4 matrix:
    [ bx.x  by.x  bz.x  0 ]
R = [ bx.y  by.y  bz.y  0 ]
    [ bx.z  by.z  bz.z  0 ]
    [  0     0     0    1 ]
I need to sort a selection of 3D coordinates in a winding order as seen in the image below. The bottom-right vertex should be the first element of the array and the bottom-left vertex should be the last element of the array. This needs to work for any direction the camera is facing the points from and at any orientation of those points. Since "top-left", "bottom-right", etc. are relative, I assume I can use the camera as a reference point? We can also assume all 4 points will be coplanar.
I am using the Blender API (writing a Blender plugin) and have access to the camera's view matrix if that is even necessary. Mathematically speaking is this even possible if so how? Maybe I am overcomplicating things?
Since the Blender API is in Python I tagged this as Python, but I am fine with pseudo-code or no code at all. I'm mainly concerned with how to approach this mathematically as I have no idea where to start.
Since you assume the four points are coplanar, all you need to do is find the centroid, calculate the vector from the centroid to each point, and sort the points by the angle of the vector.
import numpy as np

def sort_points(pts):
    centroid = np.sum(pts, axis=0) / pts.shape[0]
    vector_from_centroid = pts - centroid
    vector_angle = np.arctan2(vector_from_centroid[:, 1], vector_from_centroid[:, 0])
    # Find the indices that give a sorted vector_angle array
    sort_order = np.argsort(vector_angle)
    # Apply sort_order to original pts array.
    # Also returning centroid and angles so I can plot it for illustration.
    return (pts[sort_order, :], centroid, vector_angle[sort_order])
This function calculates the angle assuming that the points are two-dimensional, but if you have coplanar points then it should be easy enough to find the coordinates in the common plane and eliminate the third coordinate.
Let's write a quick plot function to plot our points:
from matplotlib import pyplot as plt

def plot_points(pts, centroid=None, angles=None, fignum=None):
    fig = plt.figure(fignum)
    plt.plot(pts[:, 0], pts[:, 1], 'or')
    if centroid is not None:
        plt.plot(centroid[0], centroid[1], 'ok')
    for i in range(pts.shape[0]):
        lstr = f"pt{i}"
        if angles is not None:
            lstr += f" ang: {angles[i]:.3f}"
        plt.text(pts[i, 0], pts[i, 1], lstr)
    return fig
And now let's test this:
With random points:
pts = np.random.random((4, 2))
spts, centroid, angles = sort_points(pts)
plot_points(spts, centroid, angles)
With points in a rectangle:
pts = np.array([[0, 0], # pt0
[10, 5], # pt2
[10, 0], # pt1
[0, 5]]) # pt3
spts, centroid, angles = sort_points(pts)
plot_points(spts, centroid, angles)
It's easy enough to find the normal vector of the plane containing our points: it's simply the (normalized) cross product of the vectors joining two pairs of points:
plane_normal = np.cross(pts[1, :] - pts[0, :], pts[2, :] - pts[0, :])
plane_normal = plane_normal / np.linalg.norm(plane_normal)
Now, to find the projections of all points in this plane, we need to know the "origin" and basis of the new coordinate system in this plane. Let's assume that the first point is the origin, the x axis joins the first point to the second, and since we know the z axis (plane normal) and x axis, we can calculate the y axis.
new_origin = pts[0, :]
new_x = pts[1, :] - pts[0, :]
new_x = new_x / np.linalg.norm(new_x)
new_y = np.cross(plane_normal, new_x)
Now, the projections of the points onto the new plane are given by this answer:
proj_x = np.dot(pts - new_origin, new_x)
proj_y = np.dot(pts - new_origin, new_y)
Now you have two-dimensional points. Run the code above to sort them.
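Putting the two pieces together, a hedged end-to-end sketch (my own example points, four coplanar points on the plane z = x; assumes the sort_points function above is in scope):

import numpy as np

pts = np.array([[0., 0., 0.],
                [1., 0., 1.],
                [1., 1., 1.],
                [0., 1., 0.]])  # hypothetical coplanar quad

plane_normal = np.cross(pts[1] - pts[0], pts[2] - pts[0])
plane_normal = plane_normal / np.linalg.norm(plane_normal)

new_origin = pts[0]
new_x = (pts[1] - pts[0]) / np.linalg.norm(pts[1] - pts[0])
new_y = np.cross(plane_normal, new_x)

# Project into the plane, then sort the resulting 2D points
pts2d = np.column_stack([np.dot(pts - new_origin, new_x),
                         np.dot(pts - new_origin, new_y)])
spts, centroid, angles = sort_points(pts2d)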
After many hours, I finally found a solution. Pranav Hosangadi's solution worked for the 2D side of things. However, I was having trouble projecting the 3D coordinates to 2D coordinates using the second part of his solution. I also tried projecting the coordinates as described in this answer, but it did not work as intended. I then discovered an API function called location_3d_to_region_2d() (see docs) which, as the name implies, gets the 2D screen coordinates in pixels of a given 3D coordinate. I didn't need to "project" anything into 2D in the first place; getting the screen coordinates worked perfectly fine and is much simpler. From there, I could sort the coordinates using Pranav's function, with some slight adjustments to produce the order illustrated in the screenshot in my first post and to return a list instead of a NumPy array.
import bpy
from bpy_extras.view3d_utils import location_3d_to_region_2d
import numpy

def sort_points(pts):
    """Sort 4 points in a winding order"""
    pts = numpy.array(pts)
    centroid = numpy.sum(pts, axis=0) / pts.shape[0]
    vector_from_centroid = pts - centroid
    vector_angle = numpy.arctan2(
        vector_from_centroid[:, 1], vector_from_centroid[:, 0])
    # Find the indices that give a sorted vector_angle array
    sort_order = numpy.argsort(-vector_angle)
    # Return the sort order indices rather than the sorted points
    return list(sort_order)

# Get 2D screen coords of selected vertices
region = bpy.context.region
region_3d = bpy.context.space_data.region_3d

corners2d = []
for corner in selected_verts:
    corners2d.append(location_3d_to_region_2d(
        region, region_3d, corner))

# Sort the 2d points in a winding order
sort_order = sort_points(corners2d)
sorted_corners = [selected_verts[i] for i in sort_order]
Thanks, Pranav, for your time and patience in helping me solve this problem!
There is a simpler and faster solution for the Blender case:
1.) The following code sorts 4 planar points in 2D (vertices of the plane object in Blender) very efficiently:
def sort_clockwise(pts):
    rect = np.zeros((4, 2), dtype="float32")
    s = pts.sum(axis=1)
    rect[0] = pts[np.argmin(s)]
    rect[2] = pts[np.argmax(s)]
    diff = np.diff(pts, axis=1)
    rect[1] = pts[np.argmin(diff)]
    rect[3] = pts[np.argmax(diff)]
    return rect
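For example (my own test data, reusing the rectangle from the earlier answer; assumes sort_clockwise above is in scope):

import numpy as np

pts = np.array([[0, 0], [10, 5], [10, 0], [0, 5]], dtype="float32")
print(sort_clockwise(pts))
# [[ 0.  0.]
#  [10.  0.]
#  [10.  5.]
#  [ 0.  5.]]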
2.) Blender keeps vertex-related data, such as the translation, rotation and scale, in the world matrix. If you query vertices.co(ordinates) only, you just get the original coordinates, without translation, rotation and scaling, but that does not affect the order of the vertices. That simplifies the problem, because what you get is effectively 2D mesh data (with all z = 0). If you sort that 2D data (excluding the z's), you get the sort indices for the 3D sorted data. You can modify the code above to get the indices from that 2D array. For the plane object in Blender, for some reason the order is always [0,1,3,2], not [0,1,2,3]. The following modified code gives the sorted indices for the vertex data in 2D.
def sorted_ix_clockwise(pts):
    ix = np.array([0, 0, 0, 0])
    s = pts.sum(axis=1)
    ix[0] = np.argmin(s)
    ix[2] = np.argmax(s)
    dif = np.diff(pts, axis=1)
    ix[1] = np.argmin(dif)
    ix[3] = np.argmax(dif)
    return ix
You can use these indices to get the actual 3D sorted data, which you can obtain by multiplying vertices coordinates with the world matrix to include any translation, rotation and scaling.
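A hedged sketch of that last step (my own illustration; obj stands for the plane object, and matrix_world @ co is the standard way to move a vertex into world space in Blender 2.8+; assumes sorted_ix_clockwise above is in scope):

import numpy as np
import bpy

obj = bpy.context.object            # hypothetical: the plane object
verts = obj.data.vertices
pts2d = np.array([[v.co.x, v.co.y] for v in verts], dtype="float32")
order = sorted_ix_clockwise(pts2d)
# Multiply by the world matrix to include translation, rotation and scale
sorted_world = [obj.matrix_world @ verts[i].co for i in order]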
I have a 3D vector and a 3D face normal. How do I go along to move this vector along the given face normal using Python (with or without numpy)?
Ideally, I'd build a matrix using the face normal with the x and y and multiply it by the original vector or something like that, but I can't get my head around how to build it. It's been a while since Linear Algebra.
EDIT:
Thanks for pointing out that my question was too broad.
My goal is to get a new point, that is x and y units away from the original point, along the face defined by its normal.
Example: If the point is (0,0,0) and the normal is (0, 0, 1), the result would be (x, y, 0).
Example 2: If the point is (1, 0, 0) and the normal is (0, 1, 0), the result would be (1+x, 0, y).
I'd need to extrapolate that to work with any point, normal, x and y.
The projection of a vector onto the plane defined by its (unit) normal is:

def projection(vector, normal):
    return vector - vector.dot(normal) * normal
Presumably this means you want something like:
x + projection(y, normal)
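A numeric check of that idea (my own example values, with numpy arrays; note the formula assumes normal is unit length). Using the question's second example, a point (1, 0, 0) with normal (0, 1, 0):

import numpy as np

def projection(vector, normal):
    return vector - vector.dot(normal) * normal

point = np.array([1.0, 0.0, 0.0])
normal = np.array([0.0, 1.0, 0.0])
offset = np.array([2.0, 3.0, 4.0])           # hypothetical displacement
print(point + projection(offset, normal))    # [3. 0. 4.] -- stays in the plane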
def give_me_a_new_vertex_position_along_normal(old_vertex_position, normal):
    new_vertex_position = old_vertex_position + normal
    return new_vertex_position
There is a difference between affine spaces (your normals) and euclidean/linear spaces (your vertices).
Vectors in linear space have coordinates associated with them, while vectors in affine space do not.
Adding an affine-spaced vector to a linear-spaced vector is called projection and that is what you are looking to do.
(forgive my terminology - it has been a long time since I took an advanced math class)
Let's say I have n "planes" each "perpendicular" to a single axis in m-dimensional space. No two planes are perpendicular to the same axis. I believe I can safely assume that there will be some intersection between all n planes.
I want to project point a onto the intersection and get the position vector for the result.
For example:
I have a single plane whose normal vector is (0.75, 0, 0) and a point a at position (0.25, 0, 1). I want to get the position vector of point a projected onto the plane.
Another example:
I have two planes represented by normal vectors (0.5, 0, 0) and (0, 1, 0). I have a point a at position (0.1, 0.1, 0.1). I want to get the position vector of the point projected onto the result of the intersection between my two planes (a line)
Your "planes" in m-dimensional space are (m-1)-dimensional objects. They are usually referred to as hyperplanes — a generalization of planes, 2-dimensional objects in 3-dimensional space. To define a hyperplane you need not only a normal vector but also a point (think of lines in two-dimensional space: all parallel lines share the same direction, and in order to isolate one you need to specify a point).
I suspect you mean all of your hyperplanes to pass through the origin (in which case indeed there is a point in the intersection — the origin itself), and I interpret your "being perpendicular to a single axis" as saying that the normal vectors all point along some coordinate axis (in other words, they have a single nonzero component). In that case, all you have to do to find the projection of an arbitrary point (vector, really) onto the intersection is set to zero the components of the point (again, vector, really) along the normal vectors of your hyperplanes.
Let me go through your examples:
The (hyper)plane in 3-dimensional space with normal vector (0.75, 0, 0) is the yz-plane: the projection of an arbitrary point (x, y, z) is (0, y, z) — the hyperplane has a normal vector along the first coordinate, so set to zero the first component of the point (for the last time: vector, really). In particular, (0.25, 0, 1) projects to (0, 0, 1).
The planes perpendicular to (0.5, 0, 0) and (0, 1, 0) are the yz- and xz-planes. Their intersection is the z-axis. The projection of the point (0.1, 0.1, 0.1) is (0, 0, 0.1).
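A minimal sketch of that zero-out rule (my own illustration; each normal is axis-aligned, so the projection just clears the corresponding components):

import numpy as np

point = np.array([0.1, 0.1, 0.1])
normals = [np.array([0.5, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
proj = point.copy()
for n in normals:
    proj[np.nonzero(n)] = 0.0   # clear the component along each normal's axis
print(proj)                      # [0.  0.  0.1]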
The projection can be computed by solving an overdetermined system in the least-squares sense, with lstsq. The columns of the system matrix are formed by the normal vectors, used as columns (hence the transpose on the second line below).
coeff holds the coefficients attached to these normal vectors; this linear combination of the normals is subtracted from the given point to obtain the projection.
import numpy as np
normals = np.transpose(np.array([[0.5, 0, 0], [0, 1, 0]])) # normals
point = np.array([0.1, 0.1, 0.1]) # point
coeff = np.linalg.lstsq(normals, point, rcond=None)[0]
proj = point - np.dot(normals, coeff)
print(proj)
Output: [0, 0, 0.1].
Skip to Update 2 below, if you don't want to read too much background.
I'm trying to implement a model for simple orbital simulations (two body).
However, when I try to use the code I've written, the plots generated from the result look quite odd.
The program uses initial state vectors (position and velocity) to calculate the Keplerian orbital elements, which are used to then calculate the next position, and returned as the next two state vectors.
This seems to work fine, and by itself, plots correctly as long as I keep the plot on the orbital plane. But I would like to rotate the plot to the frame of reference (the parent body) so that I can see a cool 3D view of what the orbits look like (obvs).
Right now, I suspect that the bug is in how I convert the two state vectors from the orbital plane by rotating them into the frame of reference. I am using the equations from step 6 of this document to create the following code (but applying individual rotation matrices [copied from here]):
from numpy import sin, cos, matrix, newaxis, asarray, squeeze, dot

def Rx(theta):
    """
    Return a rotation matrix for the X axis and angle *theta*
    """
    return matrix([
        [1, 0,           0          ],
        [0, cos(theta), -sin(theta) ],
        [0, sin(theta),  cos(theta) ],
    ], dtype="float64")

def Rz(theta):
    """
    Return a rotation matrix for the Z axis and angle *theta*
    """
    return matrix([
        [cos(theta), -sin(theta), 0],
        [sin(theta),  cos(theta), 0],
        [0,           0,          1],
    ], dtype="float64")

def rotate1(vector, O, i, w):
    # The starting value of *vector* is just a 1-dimensional numpy
    # array.
    # Transform into a column vector.
    vector = vector[:, newaxis]
    # Perform the rotation
    R = Rz(-O) * Rx(-i) * Rz(-w)
    res2 = dot(R, vector)
    # Transform back into a row vector (because that's what
    # the rest of the program uses)
    return squeeze(asarray(res2))
(For context, this is the full class I am using for the orbit model.)
When I plot X and Y coordinates from the result, I get this:
But when I change the rotation matrix to R = Rz(-O) * Rx(-i), I get this more plausible plot (although obviously missing one rotation, and slightly off-center):
And when I reduce it further to R = Rx(-i), as one would expect, I get this:
So as I said, I am fairly sure that it is not the orbital calculation code that is behaving weirdly, but rather some error in the rotation code. But I'm not sure where to narrow this down, as I'm pretty new to both numpy and matrix math in general.
Update: Based on stochastic's answer I transposed the matrices (R = Rz(-O).T * Rx(-i).T * Rz(-w).T), but then got this plot:
which made me wonder if my conversion to screen coordinates was somehow wrong -- but it looks correct to me (and is the same code as the more-correct plots with less rotation) namely:
def recenter(v_position, viewport_width, viewport_height):
    x, y, z = v_position
    # The size of the viewport in meters
    bounds = 20000000
    # viewport_width is the screen pixels (800)
    scale = viewport_width / bounds
    # Perform the scaling operation
    x *= scale
    y *= scale
    # Recenter to screen X and Y measured from the top-left corner
    # of the viewport
    x += viewport_width / 2
    y = viewport_height / 2 - y
    # Cast to int, because we don't care about pixel fractions
    return int(x), int(y)
Update 2
Although I have triple-checked my implementation of the equations, as well as the rotations with stochastic's help, I still can't get the orbits to come out right. They still appear basically the same as in the plots above.
Using data from the NASA Horizons system, I set up an orbit with specific state vectors from the ISS (2457380.183935185 = A.D. 2015-Dec-23 16:24:52.0000 (TDB)), and checked them against the Kepler orbit elements for the same moment in time, which produces this result:
element                  calculated          NASA
inclination              0.900246137041      0.900246137041
true_anomaly             0.11497063007       0.0982485984565
long_of_asc_node         3.80727461492       3.80727461492
eccentricity             0.000429082122137   0.000501850615905
semi_major_axis          6778560.7037        6779057.01374
mean_anomaly             0.114872215066      0.0981501816537
argument_of_periapsis    0.843226618347      0.85994864996
The first column holds my calculated values and the second the NASA ones. Obviously some floating-point precision error is to be expected, but the variations in mean_anomaly and true_anomaly struck me as larger than I expected. (I'm currently running all of my numpy calculations using float128 numbers on a 64-bit system.)
In addition, the resulting orbit still looks like the (quite) eccentric first plot, above (even though I know that this LEO ISS orbit is quite circular). So I'm a bit stumped as to what the source of the problem could be.
I believe you have at least two problems.
After looking more closely at the orbital simulation you are doing (see this additional document from the comments), I think the main problem is the initially-very-reasonable-but-yet-untrue assumption that the final plot should look like an ellipse. In general it will not, since an orbiting body will not necessarily stay in a single plane.
The other problem, I think, is that your rotation matrices are the transpose of what they should be, per the document you described (see below).
On transposed rotation matrices
The document you cited does not directly specify whether R_x and R_z should be right-handed rotations of the axes or of the vector they will multiply, though you can figure it out from equation 9 (or 10). It turns out that they should be right-handed rotations of the axes, not the vector. That means that they should be defined like this:
return matrix([
    [1, 0,           0          ],
    [0, cos(theta),  sin(theta) ],
    [0, -sin(theta), cos(theta) ],
], dtype="float64")
instead of like this:
return matrix([
    [1, 0,           0          ],
    [0, cos(theta), -sin(theta) ],
    [0, sin(theta),  cos(theta) ],
], dtype="float64")
I found this out by reproducing equation 9 by hand on paper.
In that equation, look at the first component of the vector r(t).
There are two terms: one with o_x in it and one with o_y.
Look at the factor multiplying o_y. It is: -(sin(omega)*cos(Omega)+cos(omega)*cos(i)*sin(Omega)).
That leading minus sign is the key. It comes from the minus sign in the first row of your Rz matrix.
Since the Omega, i, and omega in equation 9 are all negated, that means that the minus sign needs to be on the second row of R_z, which would mean that R_z represents a right-handed rotation of the axes, not the vector.
Similarly, we can look at the o_y component of the last term and see that the minus sign needs to be on the second row of R_x, meaning (thank goodness for sanity) that both R_z and R_x are right-handed rotations of the axes.
Your Rx and Rz functions are currently defining right handed rotations of a vector, not the axes.
You can fix this by any of the following (all three are equivalent; a quick numeric check follows the list):
- removing the minus signs on your Euler angles: Rz(O) * Rx(i) * Rz(w)
- transposing your rotation matrices: Rz(-O).T * Rx(-i).T * Rz(-w).T
- moving the minus sign in the definitions of Rx and Rz to the second-row sine term, as shown above
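As a quick numeric check (my own addition, assuming the Rx and Rz functions from the question are in scope), the first two fixes produce the same matrix:

import numpy as np

O, i, w = 0.3, 0.2, 0.1                 # arbitrary test angles
A = Rz(O) * Rx(i) * Rz(w)               # fix 1: un-negated angles
B = Rz(-O).T * Rx(-i).T * Rz(-w).T      # fix 2: transposed matrices
print(np.allclose(A, B))                # True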
I am going to mark stochastic's answer as right, because a) he deserves the points for being so helpful, and b) his advice was fundamentally correct.
However the source of the weird plot actually ended up being these lines in the linked Orbit class:
self.v_position = self.rotate(v_position, self.long_of_asc_node, self.inclination, self.argument_of_periapsis)
self.v_velocity = self.rotate(v_velocity, self.long_of_asc_node, self.inclination, self.argument_of_periapsis)
Notice that the self.v_position property is updated before the call to rotate the velocity vector happens; one might also notice, when reading the code, that I in my cleverness decided to make all of the orbital element values methods wrapped in @property decorators to make the calculations more clear.
But of course, this also means the methods are called -- and the values recalculated -- every time a property was accessed. So the second call to self.rotate() happens with slightly different values of the orbital elements from the first call and, more importantly, with values that don't match up 100% correctly with the "current" position and velocity state vectors!
So after a few days of banging my head against this bug, I figured it out from a bit of yak-shaving I was doing in the form of a refactoring, and now it all works perfectly.
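For reference, a minimal sketch of the fix pattern (a method-body fragment using the attribute names from the snippet above; my paraphrase, not the actual refactoring): snapshot the @property values once, then pass the same snapshot to both rotations.

O = self.long_of_asc_node
i = self.inclination
w = self.argument_of_periapsis
self.v_position = self.rotate(v_position, O, i, w)
self.v_velocity = self.rotate(v_velocity, O, i, w)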
What I want to do is to rotate a 2D numpy array over a given angle. The approach I'm taking is using a rotation matrix. The rotation matrix I defined as:
angle = 65.
theta = (angle/180.) * numpy.pi
rotMatrix = numpy.array([[numpy.cos(theta), -numpy.sin(theta)],
                         [numpy.sin(theta),  numpy.cos(theta)]])
The matrix I want to rotate is shaped (1002,1004). However, just for testing purposes I created a 2D array with shape (7,6)
c = numpy.array([[0,0,6,0,6,0], [0,0,0,8,7,0], [0,0,0,0,5,0], [0,0,0,3,4,0], [0,0,2,0,1,0], [0,8,0,0,9,0], [0,0,0,0,15,0]])
Now, when I apply the rotation matrix to my 2D array I get the following error:

c = numpy.dot(rotMatrix, c)
print c

    c = numpy.dot(rotMatrix, c)
ValueError: matrices are not aligned
Exception in thread Thread-1 (most likely raised during interpreter shutdown):
What am I doing wrong?
You seem to be looking for scipy.ndimage.interpolation.rotate, or similar. If you specifically want 90, 180, or 270 degree rotations, which do not require interpolation, then numpy.rot90 is better.
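For instance, a hedged usage sketch (current SciPy exposes this as scipy.ndimage.rotate; the interpolation submodule path is deprecated):

import numpy as np
from scipy import ndimage

c = np.zeros((1002, 1004))                      # image-like array from the question
rotated = ndimage.rotate(c, 65, reshape=True)   # interpolated rotation by 65 degrees
quarter = np.rot90(c)                           # exact 90-degree rotation, no interpolation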
Matrix dimensions will need to be compatible in order to obtain a matrix product. You are trying to multiply a 7x6 matrix with a 2x2 matrix. This is not mathematically coherent. It only really makes sense to apply a 2D rotation to a 2D vector to obtain the transformed coordinates.
The result of a matrix product is defined only when the left hand matrix has column count equal to right hand matrix row count.
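If the goal is to rotate point coordinates rather than resample the image, a product that does type-check is an (N, 2) stack of points times the 2x2 matrix; a small sketch (my own illustration, reusing the question's rotation matrix and the first rows of its test array):

import numpy as np

theta = np.radians(65.)
rotMatrix = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
c = np.array([[0, 0, 6, 0, 6, 0],
              [0, 0, 0, 8, 7, 0]])
coords = np.argwhere(c > 0).astype(float)   # (N, 2) array of (row, col) points
rotated_coords = coords @ rotMatrix.T       # rotate each point about the origin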
You may want to look at skimage.transform. This module has several useful functions including rotation. No sense in rewriting something that is already done.
You cannot rotate an n-dimensional vector using a 2D matrix.
I did not find a built-in function in numpy. I was hoping that this very common functionality would be there; let me know if you find it. Meanwhile, I have created a function of my own.
import numpy as np

def rotate(vector, theta, rotation_around=None) -> np.ndarray:
    """
    reference: https://en.wikipedia.org/wiki/Rotation_matrix#In_two_dimensions
    :param vector: list of length 2 OR
                   list of list where inner list has size 2 OR
                   1D numpy array of length 2 OR
                   2D numpy array of size (number of points, 2)
    :param theta: rotation angle in degrees (+ve value for anti-clockwise rotation)
    :param rotation_around: "vector" will be rotated around this point,
                            otherwise [0, 0] will be considered as the rotation axis
    :return: "vector" rotated by "theta" degrees around the rotation
             axis "rotation_around", as a numpy array
    """
    vector = np.array(vector)
    if vector.ndim == 1:
        vector = vector[np.newaxis, :]
    if rotation_around is not None:
        vector = vector - rotation_around
    vector = vector.T
    theta = np.radians(theta)
    rotation_matrix = np.array([
        [np.cos(theta), -np.sin(theta)],
        [np.sin(theta),  np.cos(theta)]
    ])
    output: np.ndarray = (rotation_matrix @ vector).T
    if rotation_around is not None:
        output = output + rotation_around
    return output.squeeze()

if __name__ == '__main__':
    angle = 30
    print(rotate([1, 0], angle))            # passing one point
    print(rotate([[1, 0], [0, 1]], angle))  # passing multiple points