How to interpret the OpenCV warp matrix? [duplicate] - python

I am playing with the affine transform in OpenCV and I am having trouble getting an intuitive understanding of its workings, and more specifically, just how do I specify the parameters of the map matrix so I can get a specific desired result.
To set up the question, the procedure I am using is first to define a warp matrix, then do the transform.
In OpenCV the 2 routines are (I am using an example from the excellent book Learning OpenCV by Bradski & Kaehler):
cvGetAffineTransform(srcTri, dstTri, warp_mat);
cvWarpAffine(src, dst, warp_mat);
To define the warp matrix, srcTri and dstTri are defined as:
CvPoint2D32f srcTri[3], dstTri[3];
srcTri[3] is populated as follows:
srcTri[0].x = 0;
srcTri[0].y = 0;
srcTri[1].x = src->width - 1;
srcTri[1].y = 0;
srcTri[2].x = 0;
srcTri[2].y = src->height -1;
These are essentially the top-left, top-right, and bottom-left points of the image, used as the starting points of the mapping. This part makes sense to me.
But the values for dstTri[3] just are confusing, at least, when I vary a single point, I do not get the result I expect.
For example, if I then use the following for the dstTri[3]:
dstTri[0].x = 0;
dstTri[0].y = 0;
dstTri[1].x = src->width - 1;
dstTri[1].y = 0;
dstTri[2].x = 0;
dstTri[2].y = 100;
It seems that the only difference between the src and the dst points is that the bottom-left point is moved up to y = 100. Intuitively, I feel that the bottom part of the image should then simply be shifted up, but this is not so.
Also, if I use the exact same values for dstTri[3] that I use for srcTri[3], I would think that the transform would produce the exact same image--but it does not.
Clearly, I do not understand what is going on here. So, what does the mapping from the srcTri[] to the dstTri[] represent?

Here is a mathematical explanation of an affine transform:
It is a 3x3 matrix that applies the following transformations to a 2D vector: scale in the X axis, scale in the Y axis, rotation, skew, and translation along the X and Y axes.
These are 6 transformations, and thus you have six free elements in your 3x3 matrix. The bottom row is always [0 0 1].
Why? Because the bottom row represents the perspective transformation in the x and y axes, and an affine transformation does not include a perspective transform.
(If you want to apply perspective warping, use a homography: also a 3x3 matrix.)
What is the relation between the 6 values you insert into the affine matrix and the 6 transformations it performs? Let us write this 3x3 matrix as
e*Zx*cos(a), -q1*sin(a), dx,
e*q2*sin(a),  Zy*cos(a), dy,
0,            0,         1
The dx and dy elements are the translation in the x and y axes (they just move the picture left-right, up-down).
Zx is the relative scale (zoom) you apply to the image along the X axis.
Zy is the same for the Y axis.
a is the angle of rotation of the image. This is tricky, since when you want to rotate by 'a' you have to insert sin() and cos() in 4 different places in the matrix.
'q' is the skew parameter. It is rarely used. It causes the image to skew sideways (q1 makes the y axis affect the x axis, and q2 makes the x axis affect the y axis).
Bonus: the 'e' parameter is actually not a transformation. It can take the values 1 or -1. If it is 1 nothing happens, but if it is -1 the image is flipped horizontally. You can also use it to flip the image vertically, but this type of transformation is rarely used.
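To make the parameter-to-matrix mapping concrete, here is a small numpy sketch of the same matrix; the parameter values are arbitrary examples, and the names simply follow the layout above:
import numpy as np

# Arbitrary example parameters, following the matrix layout above
Zx, Zy = 1.2, 0.8            # scale along x and y
a = np.deg2rad(30)           # rotation angle
q1, q2 = 1.0, 1.0            # skew terms as they appear in the matrix above
dx, dy = 15, -7              # translation
e = 1                        # -1 would flip horizontally

M = np.array([
    [e * Zx * np.cos(a), -q1 * np.sin(a), dx],
    [e * q2 * np.sin(a),  Zy * np.cos(a), dy],
    [0,                   0,               1],
])

p = np.array([100, 50, 1])   # a point in homogeneous coordinates
print(M @ p)                 # transformed point (x', y', 1)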
Very important Note!!!!!
The above explanation is mathematical. It assumes you multiply the matrix by the column vector from the right. As far as I remember, Matlab uses reverse multiplication (row vector from the left) so you will need to transpose this matrix. I am pretty sure that OpenCV uses regular multiplication but you need to check it.
To check, feed in just a translation matrix (x shifted by 10 pixels, y by 1):
1,0,10
0,1,1
0,0,1
If you see a normal shift then everything is OK, but if the result looks wrong then transpose the matrix to:
1,0,0
0,1,0
10,1,1
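If you want to run that sanity check from Python rather than the old C API used in the question, a minimal sketch could look like this (note that cv2.warpAffine takes only the top two rows of the 3x3 matrix; 'input.jpg' is just a placeholder for any test image):
import cv2
import numpy as np

img = cv2.imread('input.jpg')          # placeholder: any test image

# Pure translation: shift x by 10 pixels, y by 1 (top two rows of the matrix above)
M = np.float32([[1, 0, 10],
                [0, 1,  1]])

shifted = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))
cv2.imwrite('shifted.jpg', shifted)
# If the image simply moves 10 px right and 1 px down, OpenCV is using the
# column-vector convention described above; if the result looks wrong,
# transpose your matrix.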

Related

Transforming 2D image point to 3D world point where Z !=0

The code below transforms a detected 2D image point to its 3D location on a defined planar grid in the 3D world.
This means Z = 0, and taking into account that the extrinsics and intrinsics are known, we can compute the corresponding 3D point of the detected 2D image point:
import cv2
import numpy as np

#load extrinsics & intrinsics
with np.load('parameters_cam1.npz') as X:
    mtx, dist = [X[i] for i in ('mtx', 'dist')]
with np.load('extrincic.npz') as X:
    rvecs1, tvecs1 = [X[i] for i in ('rvecs1', 'tvecs1')]
#prepare rotation matrix
R_mtx, jac = cv2.Rodrigues(rvecs1)
#prepare projection matrix
Extrincic = cv2.hconcat([R_mtx, tvecs1])
Projection_mtx = mtx.dot(Extrincic)
#delete the third column since Z=0
Projection_mtx = np.delete(Projection_mtx, 2, 1)
#finding the inverse of the matrix
Inv_Projection = np.linalg.inv(Projection_mtx)
#detected image point (extracted from a queue)
img_point = np.array(pts1_blue[0]).reshape(2, 1)
#adding a 1 in order for the math to be correct (homogeneous coordinates)
img_point = np.vstack((img_point, [[1]]))
#calculating the 3D point which lies on the defined 3D plane
point_3D = Inv_Projection.dot(img_point)
#show results
print('3D_pt_method1\n', point_3D)
#output
3D_pt_method1
[[0.01881387]
[0.0259416 ]
[0.04150276]]
By normalizing the point (dividing by the third value) the result is
X_World = 0.45331611680765327 # 45.3 cm from the defined world origin, which is correct
Y_World = 0.6250572251098481  # 62.5 cm, which is also correct
By evaluating the results, it turns out that they are correct.
I know that we can't retrieve the Z coordinate of the 3D world point, since depth information is lost going from 3D to 2D. The following equation also performs the inverse projection of the 2D point into the 3D world and can be found throughout the literature; its result is an equation representing a line on which the 3D world point must lie.
I put equation 3.15 into code, however without setting Z = 0, that is, without deleting the third column of the projection matrix as I did in the previous method (just as it is written), by doing the following:
#inverting the rotation matrix
INV_R = np.linalg.inv(R_mtx)
#inverting the camera matrix
INV_k = np.linalg.inv(mtx)
#multiplying the two matrices
kinv_Rinv = INV_k.dot(INV_R)
#calculating the 3D point X expressed in eq. 3.15
point_3D = kinv_Rinv.dot(img_point) + tvecs1
#print the results
print('3D_pt_method2\n', point_3D)
and the result was
3D_pt_method2 #how should one understand these coordinates ?
[[-9.12505825]
[-5.57152147]
[40.12264881]]
My question is, how should I understand or interpret this result? It doesn't make any sense compared to the previous method where Z = 0. The resulting 3x1 vector seems to suggest that its values simply represent the 3D X, Y and Z of the detected image point. However, this is not true if we compare X and Y with the previous method!
So what is, literally, the difference between 3D_pt_method1 and 3D_pt_method2?
I hope I could express myself clearly, and I would really appreciate help understanding the difference between the two implementations!
Note: the grid that represents my defined world plane can be seen in the image below, in which the distance between every two yellow points is 40 cm.
Thanks in advance
You are missing the key scale variable "w" in method 2.
You can get help from this article: https://blog.csdn.net/zhou4411781/article/details/103876478
The article is written in Chinese, but you can still get the point from its formulas even if you cannot read Chinese.
Simply speaking:
You said it right: "I know that we can't retrieve the Z coordinate of the 3D world point since depth information is lost going from 3D to 2D."
This also means: if you know the depth (the Z value in world coordinates), you can recover the 3D coordinate from the 2D coordinate and the depth. Likewise, if you know the X or Y value in world coordinates, you can also get the result.
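To make the "w" point concrete, here is a hedged sketch (reusing the variable names INV_k, INV_R, tvecs1 and img_point from the question's code, and assuming the standard pinhole model x ~ K(R·X + t)) of how a 3D point can be recovered from method 2's viewing ray once one world coordinate, here Z, is known:
import numpy as np

Z_world = 0.0                              # the known world Z (the grid plane)

ray_cam = INV_k.dot(img_point)             # K^-1 * x: ray direction in camera coords
cam_center = -INV_R.dot(tvecs1)            # camera centre in world coords: -R^-1 * t
ray_world = INV_R.dot(ray_cam)             # ray direction in world coords

# Scale factor "w": pick it so the point lands on the plane Z = Z_world
w = (Z_world - cam_center[2]) / ray_world[2]

point_world = cam_center + w * ray_world   # 3D point on the known plane
print('3D point with known Z:\n', point_world)
With Z_world = 0 this should agree with method 1 (up to noise); without a known coordinate, w stays undetermined and method 2 only gives you a point somewhere along the viewing ray.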

How to recalculate the coordinates of a point after scaling and rotation?

I have the coordinates of 6 points in an image
(170.01954650878906, 216.98866271972656)
(201.3812255859375, 109.42137145996094)
(115.70114135742188, 210.4272918701172)
(45.42426300048828, 97.89037322998047)
(167.0367889404297, 208.9329833984375)
(70.13690185546875, 140.90538024902344)
I have a point as center: [89.2458, 121.0896]. I am trying to recalculate the position of the points in Python for several rotation angles (0, 90, -90, 180 degrees) and scaling factors (0.5, 0.75, 1, 1.10, 1.25, 1.35, 1.5).
My question is: how can I rotate and scale the above points relative to the center point and get the new coordinates of those 6 points?
Your help is really appreciated.
Mathematics
A mathematical approach would be to represent this data as vectors from the center to the image-points, translate these vectors to the origin, apply the transformation and relocate them around the center point. Let's look at how this works in detail.
Representation as vectors
We can show these vectors in a grid, which produces the following image:
This image provides a nice way to look at these points, so we can see our actions happening in a visual way. The center point is marked with a dot at the beginning of all the arrows, and the end of each arrow is the location of one of the points supplied in the question.
A vector can be seen as a list of the values of the coordinates of the point so
my_vector = [point[0], point[1]]
could be a representation of a vector in Python; it just holds the coordinates of a point, so the format in the question can be used as is. Notice that I will use position 0 for the x-coordinate and 1 for the y-coordinate throughout my answer.
I have only added this representation as a visual aid; we can look at any pair of values as being a vector. No calculation is needed, this is only a different way of looking at those points.
Translation to origin
The first calculations happen here. We need to translate all these vectors to the origin. We can very easily do this by subtracting the location of the center point from all the other points, for example (can be done in a simple loop):
point_origin_x = point[0] - center_point[0] # Xvalue point - Xvalue center
point_origin_y = point[1] - center_point[1] # Yvalue point - Yvalue center
The resulting points can now be rotated around the origin and scaled with respect to the origin. The new points (as vectors) look like this:
In this image, I deliberately left the scale untouched, so that it is clear that these are exactly the same vectors (arrows), in size and orientation, only shifted to be around (0, 0).
Why the origin
So why translate these points to the origin? Well, rotations and scaling actions are easy to do (mathematically) around the origin and not as easy around other points.
Also, from now on, I will only include the 1st, 2nd and 4th point in these images to save some space.
Scaling around the origin
A scaling operation is very easy around the origin. Just multiply the coordinates of the point with the factor of the scaling:
scaled_point_x = point[0] * scaling_factor
scaled_point_y = point[1] * scaling_factor
In a visual way, that looks like this (scaling all by 1.5):
Where the blue arrows are the original vectors and the red ones are the scaled vectors.
Rotating
Now for rotating. This is a little bit harder, because a rotation is most generally described by a matrix multiplication with this vector.
The matrix to multiply with is the standard 2D rotation matrix (from Wikipedia: Rotation matrix):
R(t) = [[cos(t), -sin(t)],
        [sin(t),  cos(t)]]
So if V is the vector, then we need to perform V_r = R(t) * V to get the rotated vector V_r. This rotation will always be counterclockwise! In order to rotate clockwise, we simply need to use R(-t).
Because only multiples of 90° are needed in the question, the matrix becomes almost trivial. For a rotation of 90° counterclockwise, it is:
R(90°) = [[0, -1],
          [1,  0]]
Which is basically in code:
rotated_point_x = -point[1] # new x is negative of old y
rotated_point_y = point[0] # new y is old x
Again, this can be nicely shown in a visual way:
Where I have matched the colors of the vectors.
A rotation of 90° clockwise will then be
rotated_counter_point_x = point[1] # x is old y
rotated_counter_point_y = -point[0] # y is negative of old x
A rotation of 180° is just taking the negatives of both coordinates; or you could scale by a factor of -1, which is essentially the same thing.
As a last point about these operations, might I add that you can scale and/or rotate as much as you want in sequence to get the desired result.
Translating back to the center point
After the scaling actions and/or rotations, the only thing left is to translate the vectors back to the center point.
retranslated_point_x = new_point[0] + center_point_x
retranslated_point_y = new_point[1] + center_point_y
And all is done.
Just a recap
So to recap this long post:
Subtract the coordinates of the center point from the coordinates of the image-point
Scale by a factor with a simple multiplication of the coordinates
Use the idea of the matrix multiplication to think about the rotation (you can easily find these things on Google or Wikipedia).
Add the coordinates of the center point to the new coordinates of the image-point
I realize now that I could have just given this recap, but now there is at least some visual aid and a slight mathematical background in this post, which is also nice. I really believe that such problems should be looked at from a mathematical angle; the mathematical description can help a lot.
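Putting the whole recap into a small, self-contained Python function (the function name and structure are mine, but it follows the steps above exactly):
import math

center = (89.2458, 121.0896)
points = [
    (170.01954650878906, 216.98866271972656),
    (201.3812255859375, 109.42137145996094),
    (115.70114135742188, 210.4272918701172),
    (45.42426300048828, 97.89037322998047),
    (167.0367889404297, 208.9329833984375),
    (70.13690185546875, 140.90538024902344),
]

def transform(point, center, angle_deg, scale):
    """Scale and rotate *point* (counterclockwise, in degrees) around *center*."""
    # 1. translate to the origin
    x = point[0] - center[0]
    y = point[1] - center[1]
    # 2. scale
    x *= scale
    y *= scale
    # 3. rotate counterclockwise by angle_deg
    t = math.radians(angle_deg)
    x_rot = x * math.cos(t) - y * math.sin(t)
    y_rot = x * math.sin(t) + y * math.cos(t)
    # 4. translate back to the center
    return x_rot + center[0], y_rot + center[1]

# Example: rotate every point 90° counterclockwise and scale by 1.25
for p in points:
    print(transform(p, center, 90, 1.25))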

Manually writing code for warpAffine in python

I want to implement affine transformation by not using library functions.
I have an image named "transformed" and I want to apply the inverse transformation to obtain the "img_org" image. Right now, I am using my own basic GetBilinearPixel function to set the intensity values. But the image is not transforming properly. This is what I came up with:
This is image("transformed.png"):
This is image("img_org.png"):
But my goal is to produce this image:
You can see the transformation matrix here:
pts1 = np.float32( [[693,349] , [605,331] , [445,59]] )
pts2 = np.float32 ( [[1379,895] , [1213,970] ,[684,428]] )
Mat = cv2.getAffineTransform(pts2,pts1)
B=Mat
code:
img_org=np.zeros(shape=(780,1050))
img_size=np.zeros(shape=(780,1050))

def GetBilinearPixel(imArr, posX, posY):
    return imArr[posX][posY]

for i in range(1,img.shape[0]-1):
    for j in range(1,img.shape[1]-1):
        pos=np.array([[i],[j],[1]],np.float32)
        #print pos
        pos=np.matmul(B,pos)
        r=int(pos[0][0])
        c=int(pos[1][0])
        #print r,c
        if(c<=1024 and r<=768 and c>=0 and r>=0):
            img_size[r][c]=img_size[r][c]+1
            img_org[r][c] += GetBilinearPixel(img, i, j)

for i in range(0,img_org.shape[0]):
    for j in range(0,img_org.shape[1]):
        if(img_size[i][j]>0):
            img_org[i][j] = img_org[i][j]/img_size[i][j]
Is my logic wrong? I know that I have applied a very inefficient algorithm.
Is there any insight that I am missing?
Or can you give me any other algorithm that will work fine?
(Request): I don't want to use the warpAffine function.
So I vectorized the code and this method works---I can't find the exact issue with your implementation, but maybe this will shed some light (plus the speed is way faster).
The setup to vectorize is to create a linear (homogeneous) array containing every point in the image. We want an array that looks like
x0 x1 ... xN x0 x1 ... xN ..... x0 x1 ... xN
y0 y0 ... y0 y1 y1 ... y1 ..... yM yM ... yM
1 1 ... 1 1 1 ... 1 ..... 1 1 ... 1
So that every point (xi, yi, 1) is included. Then transforming is just a single matrix multiplication with your transformation matrix and this array.
To simplify matters (partially because your image naming conventions confused me), I'll say the original starting image is the "destination" or dst because we want to transform back to the "source" or src image. Bearing that in mind, creating this linear homogeneous array could look something like this:
dst = cv2.imread('img.jpg', 0)
h, w = dst.shape[:2]
dst_y, dst_x = np.indices((h, w)) # similar to meshgrid/mgrid
dst_lin_homg_pts = np.stack((dst_x.ravel(), dst_y.ravel(), np.ones(dst_y.size)))
Then, to transform the points, just create the transformation matrix and multiply. I'll round the transformed pixel locations because I'm using them as an index and not bothering with interpolation:
src_pts = np.float32([[693, 349], [605, 331], [445, 59]])
dst_pts = np.float32([[1379, 895], [1213, 970], [684, 428]])
transf = cv2.getAffineTransform(dst_pts, src_pts)
src_lin_pts = np.round(transf.dot(dst_lin_homg_pts)).astype(int)
Now this transformation will send some pixels to negative indices, and if we index with those, it'll wrap around the image---probably not what we want to do. Of course in the OpenCV implementation, it just cuts those pixels off completely. But we can just shift all the transformed pixels so that all of the locations are positive and we don't cut off any (you can of course do whatever you want in this regard):
min_x, min_y = np.amin(src_lin_pts, axis=1)
src_lin_pts -= np.array([[min_x], [min_y]])
Then we'll need to create the source image src which the transform maps into. I'll create it with a gray background so we can see the extent of the black from the dst image.
trans_max_x, trans_max_y = np.amax(src_lin_pts, axis=1)
src = np.ones((trans_max_y+1, trans_max_x+1), dtype=np.uint8)*127
Now all we have to do is place some corresponding pixels from the destination image into the source image. Since I didn't cut off any of the pixels and there's the same number of pixels in both linear points array, I can just assign the transformed pixels the color they had in the original image.
src[src_lin_pts[1], src_lin_pts[0]] = dst.ravel()
Now, of course, this isn't interpolating on the image. But there are no built-ins in OpenCV for interpolation by itself (there are backend C functions for other methods to use, but not ones you can access from Python AFAIK). Still, you have the important parts: where the destination image gets mapped to, and the original image, so you can use any number of libraries to interpolate onto that grid, or just implement a linear interpolation yourself, as it's not too difficult. You'll probably want to un-round the warped pixel locations before then, of course.
cv2.imshow('src', src)
cv2.waitKey()
Edit: This same method will also work for warpPerspective, although the resulting matrix multiplication will give a three-rowed (homogeneous) vector, and you'll need to divide the first two rows by the third row to get them back into Cartesian coordinates. Other than that, everything else stays the same.
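As a side note on the interpolation point above, a minimal bilinear sampler is short enough to write by hand. The sketch below is generic (it assumes a grayscale image and an (x, y) location inside its bounds) and would be used by iterating over output pixels and mapping each one back with the inverse transform, rather than the forward scatter shown above:
import numpy as np

def bilinear_sample(img, x, y):
    """Sample a grayscale image at the floating-point location (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    # blend the four surrounding pixels by their distance-based weights
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bottom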

Create Numpy Array Representing a Geometric Shape

As the title suggests, how would one create a numpy array of 3D coordinates of a geometric shape?
Currently, I have the easiest shape already figured out:
latva = 6
latvb = 6
latvc = 6
latdiv = 20
latvadiv = latva / latdiv
latvbdiv = latvb / latdiv
latvcdiv = latvc / latdiv
lol = np.zeros((latdiv**3,4),dtype=np.float64)
lol[:,:3] = (np.arange(latdiv**3)[:,None]//(latdiv**2,latdiv,1)*(latvadiv,latvbdiv,latvcdiv)%(latva,latvb,latvc))
creates an array of shape (8000, 4). If you then split the array along the 1st, 2nd and 3rd columns (ignoring the 4th, as it's meaningless for this question) and plot it (personally, I use pyplot), you get a cube!
Easy enough. Also works for a rectangle.
But I've not the foggiest idea of how to get any further - say plotting a rhombus.
I'm not interested in black magic like spheres, ovals or shapes whose sides do not change following a line. Just things like your standard rhombus/Rhomboid/Parallelepiped/Whatever_you_want_to_call_it.
Any ideas on how to accomplish this?
Because you already have a convenient method to generate points in a square or cube, the simplest way to make a rhombus or parallelogram in the 2D case, or a parallelepiped in the 3D case, is to apply an affine transform to calculate the new point coordinates.
For example, to make a rhombus, you can build the matrix as a combination of a translation by (-centerX, -centerY), a rotation by Pi/4, a scaling along the axes (if needed), and a translation to the needed position.
AffMatrix = ShiftMatrix * RotateMatrix * ScaleMatrix * BackShiftMatrix
for each point coordinates:
(NewX, NewY) = (AffMatrix) * (X, Y)
A rhomboid will also involve a shear transform.
I think that numpy has ready-to-use routines to create and combine (multiply) affine matrices.
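A hedged numpy sketch of that idea for the 2D case (the helper functions are mine; the matrices are built by hand and combined with matrix multiplication, exactly as the formula above describes):
import numpy as np

def translation(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

def scaling(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)

# A filled unit square of lattice points, as homogeneous column vectors
xs, ys = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
pts = np.vstack([xs.ravel(), ys.ravel(), np.ones(xs.size)])

cx, cy = 0.5, 0.5
# Shift the centre to the origin, rotate by Pi/4, scale along x, shift back.
# Applied right-to-left to each column vector, this turns the square into a rhombus.
AffMatrix = translation(cx, cy) @ scaling(0.5, 1.0) @ rotation(np.pi / 4) @ translation(-cx, -cy)

new_pts = AffMatrix @ pts        # rows 0 and 1 hold the new X and Y coordinates
The 3D (parallelepiped) case is the same idea with 4x4 matrices and a third coordinate.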

Two-body orbit modelling problems

Skip to Update 2 below, if you don't want to read too much background.
I'm trying to implement a model for simple orbital simulations (two body).
However, when I try to use the code I've written, the plots generated from the result look quite odd.
The program uses initial state vectors (position and velocity) to calculate the Keplerian orbital elements, which are used to then calculate the next position, and returned as the next two state vectors.
This seems to work fine, and by itself, plots correctly as long as I keep the plot on the orbital plane. But I would like to rotate the plot to the frame of reference (the parent body) so that I can see a cool 3D view of what the orbits look like (obvs).
Right now, I suspect that the bug is in how I convert the two state vectors from the orbital plane, i.e. how I rotate them into the frame of reference. I am using the equations from step 6 of this document to create the following code (but applying individual rotation matrices [copied from here]):
from numpy import sin, cos, matrix, newaxis, asarray, squeeze, dot

def Rx(theta):
    """
    Return a rotation matrix for the X axis and angle *theta*
    """
    return matrix([
        [1, 0,           0          ],
        [0, cos(theta), -sin(theta) ],
        [0, sin(theta),  cos(theta) ],
    ], dtype="float64")

def Rz(theta):
    """
    Return a rotation matrix for the Z axis and angle *theta*
    """
    return matrix([
        [cos(theta), -sin(theta), 0],
        [sin(theta),  cos(theta), 0],
        [0,           0,          1],
    ], dtype="float64")

def rotate1(vector, O, i, w):
    # The starting value of *vector* is just a 1-dimensional numpy
    # array.
    # Transform into a column vector.
    vector = vector[:, newaxis]
    # Perform the rotation
    R = Rz(-O) * Rx(-i) * Rz(-w)
    res2 = dot(R, vector)
    # Transform back into a row vector (because that's what
    # the rest of the program uses)
    return squeeze(asarray(res2))
(For context, this is the full class I am using for the orbit model.)
When I plot X and Y coordinates from the result, I get this:
But when I change the rotation matrix to R = Rz(-O) * Rx(-i), I get this more plausible plot (although obviously missing one rotation, and slightly off-center):
And when I reduce it further to R = Rx(-i), as one would expect, I get this:
So as I said, I am fairly sure that it is not the orbital calculation code that is behaving weirdly, but rather some error in the rotation code. But I'm not sure where to narrow this down, as I'm pretty new to both numpy and matrix math in general.
Update: Based on stochastic's answer I transposed the matrices (R = Rz(-O).T * Rx(-i).T * Rz(-w).T), but then got this plot:
which made me wonder if my conversion to screen coordinates was somehow wrong -- but it looks correct to me (and it is the same code used for the more-correct plots with fewer rotations), namely:
def recenter(v_position, viewport_width, viewport_height):
    x, y, z = v_position
    # the size of the viewport in meters
    bounds = 20000000
    # viewport_width is the screen pixels (800)
    scale = viewport_width/bounds
    # Perform the scaling operation
    x *= scale
    y *= scale
    # recenter to screen X and Y measured from the top-left corner
    # of the viewport
    x += viewport_width/2
    y = viewport_height/2 - y
    # Cast to int, because we don't care about pixel fractions
    return int(x), int(y)
Update 2
Although I have triple-checked my implementation of the equations, as well as the rotations with stochastic's help, I still can't get the orbits to come out right. They still appear basically the same as in the plots above.
Using data from the NASA Horizons system, I set up an orbit with specific state vectors from the ISS (2457380.183935185 = A.D. 2015-Dec-23 16:24:52.0000 (TDB)), and checked them against the Keplerian orbital elements for the same moment in time, which produces this result:
element                  calculated           NASA Horizons
inclination              0.900246137041       0.900246137041
true_anomaly             0.11497063007        0.0982485984565
long_of_asc_node         3.80727461492        3.80727461492
eccentricity             0.000429082122137    0.000501850615905
semi_major_axis          6778560.7037         6779057.01374
mean_anomaly             0.114872215066       0.0981501816537
argument_of_periapsis    0.843226618347       0.85994864996
The first column of values are my (calculated) values, and the second column are the NASA ones. Obviously some floating point precision error is to be expected, but the variations in mean_anomaly and true_anomaly struck me as larger than I expected. (I'm currently running all of my numpy calculations using float128 numbers on a 64-bit system.)
In addition, the resulting orbit still looks like the (quite) eccentric first plot, above (even though I know that this LEO ISS orbit is quite circular). So I'm a bit stumped as to what the source of the problem could be.
I believe you have at least two problems.
After looking more closely at the orbital simulation you are doing (see this additional document from the comments), I think the main problem is the initially-very-reasonable-but-yet-untrue assumption that the final plot should look like an ellipse. In general it will not, since an orbiting body will not necessarily stay in a single plane.
The other problem, I think, is that your rotation matrices are the transpose of what they should be, per the document you described (see below).
On transposed rotation matrices
The document you cited does not directly specify whether R_x and R_z should be right-handed rotations of the axes or of the vector they will multiply, though you can figure it out from equation 9 (or 10). It turns out that they should be right-handed rotations of the axes, not the vector. That means that they should be defined like this:
return matrix([
[1, 0, 0 ],
[0, cos(theta), sin(theta) ],
[0,-sin(theta), cos(theta) ],
], dtype="float64")
instead of like this:
return matrix([
[1, 0, 0 ],
[0, cos(theta),-sin(theta) ],
[0, sin(theta), cos(theta) ],
], dtype="float64")
I found this out by reproducing equation 9 by hand on paper.
In that equation, look at the first component of the vector r(t).
There are two terms: one with o_x in it and one with o_y.
Look at the term multiplying o_y. It is: -(sin(omega)*cos(Omega)+cos(omega)*cos(i)*sin(Omega)).
That leading minus sign is the key. It comes from the minus sign in the first row of your Rz matrix.
Since the Omega, i, and omega in equation 9 are all negated, that means that the minus sign needs to be on the second row of R_z, which would mean that R_z represents a right-handed rotation of the axes, not the vector.
Similarly, we can look at the o_y component of the last term and see that the minus sign needs to be on the second row of R_x, meaning (thank goodness for sanity) that both R_z and R_x are right-handed rotations of the axes.
Your Rx and Rz functions are currently defining right-handed rotations of a vector, not of the axes.
You can fix this in any of the following ways (all three are equivalent, as the quick check after this list confirms):
Remove the minus signs from your Euler angles: Rz(O) * Rx(i) * Rz(w)
Transpose your rotation matrices: Rz(-O).T * Rx(-i).T * Rz(-w).T
Move the - sign in the definition of Rx and Rz to the second-row sine term, as shown above
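A quick numerical check that the first two options really do give the same matrix (plain numpy versions of the Rx and Rz from the question, with arbitrary test angles):
import numpy as np

def Rx(t):
    return np.array([[1, 0, 0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

def Rz(t):
    return np.array([[np.cos(t), -np.sin(t), 0],
                     [np.sin(t),  np.cos(t), 0],
                     [0,          0,         1]])

O, i, w = 0.3, 0.7, 1.1   # arbitrary test angles

no_minus   = Rz(O) @ Rx(i) @ Rz(w)              # option 1: drop the minus signs
transposed = Rz(-O).T @ Rx(-i).T @ Rz(-w).T     # option 2: transpose each matrix
print(np.allclose(no_minus, transposed))        # True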
I am going to mark stochastic's answer as right, because a) he deserves the points for being so helpful, and b) his advice was fundamentally correct.
However the source of the weird plot actually ended up being these lines in the linked Orbit class:
self.v_position = self.rotate(v_position, self.long_of_asc_node, self.inclination, self.argument_of_periapsis)
self.v_velocity = self.rotate(v_velocity, self.long_of_asc_node, self.inclination, self.argument_of_periapsis)
Notice that the self.v_position property is updated before the call to rotate the velocity vector happens; one might also notice, when reading the code, that I in my cleverness decided to make all of the orbital element values methods wrapped in @property decorators to make the calculations more clear.
But of course, this also means the methods are called -- and the values recalculated -- every time a property is accessed. So the second call to self.rotate() happens with slightly different values of the orbital elements from the first call and, more importantly, with values that don't match up 100% correctly with the "current" position and velocity state vectors!
So after a few days of banging my head against this bug, I figured it out from a bit of yak-shaving I was doing in the form of a refactoring, and now it all works perfectly.
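For anyone who runs into the same thing, here is a stripped-down toy illustration of the pitfall and the fix (the class below is hypothetical, not the real Orbit class): a derived @property is recomputed on every access, so reading it again after mutating the state gives a different value; snapshotting it once avoids the mismatch.
import math

def rotate(v, a):
    """Rotate a 2D vector v by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

class State:
    def __init__(self, position, velocity):
        self.position = position
        self.velocity = velocity

    @property
    def angle(self):
        # Recomputed from the *current* position on every access.
        return math.atan2(self.position[1], self.position[0])

    def rotate_buggy(self):
        # BUG: self.position is updated first, so the second access to
        # self.angle no longer matches the value used for the position.
        self.position = rotate(self.position, self.angle)
        self.velocity = rotate(self.velocity, self.angle)

    def rotate_fixed(self):
        # Fix: snapshot the derived value once and use it for both updates.
        a = self.angle
        self.position = rotate(self.position, a)
        self.velocity = rotate(self.velocity, a)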
