I am attempting to use the 4th-order Yoshida integration technique to model satellites in circular orbits around the Earth.
However, the orbits I get spiral away quite quickly. The code for a Moon-like satellite is below. Interestingly, the particles behaved correctly when I used the Euler method, but I wanted to try a more accurate method, so the issue is probably in how I have implemented the algorithm itself.
I have tried using the gravitational parameter rather than computing G*M, but this did not help. I also reduced the time step, experimented with units, and printed and checked values at various integration steps, but could not find anything.
Is this the correct use of this algorithm?
import numpy as np

G = 6.674e-20 # km^3 kg^-1 s^-2
day = 60.0 * 60.0 * 24.0 # seconds in one day
dt = day / 10.0
M = 5.972e24 # kg
N = 1
delta = np.random.random(1) * 2.0 * np.pi / N
angles = np.linspace(0.0, 2.0 * np.pi, N) + delta
rad = np.random.uniform(low = 384e3, high = 384e3, size = (N)) # orbital radius in km (roughly the Moon's distance)
x, y = rad * np.cos(angles), rad * np.sin(angles)
vx, vy = np.sqrt(G*M / rad) * -np.sin(angles), np.sqrt(G*M / rad) * np.cos(angles)
def update(frame):
global x, y, vx, vy, dt, day
positions.set_data(x, y)
# coefficients
q = 2**(1/3)
w1 = 1 / (2 - q)
w0 = -q * w1
d1 = w1
d3 = w1
d2 = w0
c1 = w1 / 2
c2 = (w0 + w1) / 2
c3 = c2
c4 = c1
# Step 1
x1 = x + c1*vx*dt
y1 = y + c1*vy*dt
dist1 = np.hypot(x1, y1)
acc1 = -(G*M) / (dist1**2.0)
dx1 = x1 - x
dy1 = y1 - y
accx1 = (acc1*dx1)/(x1)
accy1 = (acc1*dy1)/(y1)
vx1 = vx + d1*accx1*dt
vy1 = vy + d1*accy1*dt
# Step 2
x2 = x1 + c2*vx1*dt
y2 = y1 + c2*vy1*dt
dist2 = np.hypot(x2, y2)
acc2 = -(G*M) / (dist2**2.0)
dx2 = x2 - x1
dy2 = y2 - y1
accx2 = (acc2*dx2)/(x2)
accy2 = (acc2*dy2)/(y2)
vx2 = vx1 + d2*accx2*dt
vy2 = vy1 + d2*accy2*dt
# Step 3
x3 = x2 + c3*vx2*dt
y3 = y2 + c3*vy2*dt
dist3 = np.hypot(x3, y3)
acc3 = -(G*M) / (dist3**2.0)
dx3 = x3 - x2
dy3 = y3 - y2
accx3 = (acc3*dx3)/(x3)
accy3 = (acc3*dy3)/(y3)
vx3 = vx2 + d3*accx3*dt
vy3 = vy2 + d3*accy3*dt
# Full step
x = x3 + c4*vx3*dt
y = y3 + c4*vy3*dt
vx = vx3
vy = vy3
return positions
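For reference, this is the bare drift/kick sequence I believe the algorithm should follow, written as a standalone sketch so the coefficients and ordering are easier to check (the helper names accel and yoshida_step are just mine, and I assume here that the acceleration is taken along the position vector, a = -G*M*r/|r|^3):

import numpy as np

G = 6.674e-20                      # km^3 kg^-1 s^-2
M = 5.972e24                       # kg
dt = 60.0 * 60.0 * 24.0 / 10.0     # same time step as above

def accel(x, y):
    # point-mass gravity, directed along -r
    r = np.hypot(x, y)
    return -G * M * x / r**3, -G * M * y / r**3

def yoshida_step(x, y, vx, vy, dt):
    q = 2.0 ** (1.0 / 3.0)
    w1 = 1.0 / (2.0 - q)
    w0 = -q * w1
    c = (w1 / 2.0, (w0 + w1) / 2.0, (w0 + w1) / 2.0, w1 / 2.0)
    d = (w1, w0, w1)
    for ci, di in zip(c[:3], d):
        x, y = x + ci * vx * dt, y + ci * vy * dt        # drift
        ax, ay = accel(x, y)
        vx, vy = vx + di * ax * dt, vy + di * ay * dt    # kick
    x, y = x + c[3] * vx * dt, y + c[3] * vy * dt        # final drift
    return x, y, vx, vy

# one test satellite on a circular orbit at the Moon's distance
x, y, vx, vy = 384e3, 0.0, 0.0, np.sqrt(G * M / 384e3)
for _ in range(1000):
    x, y, vx, vy = yoshida_step(x, y, vx, vy, dt)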
I am trying to subtract gravity from acceleration data. I have the quaternion values in w, x, y, z form. This is my code:
import numpy as np
from math import sqrt

# acc - accelerometer data
# g - gravity vector g = (0, 0, 9.81)
# q - orientation quaternion
# The approach is to rotate the gravity vector to the current orientation and subtract it from acc.
def subtract_gravity(acc, g, q):
# rotate gravity to the current orientation q, by performing q * g * q_conjugate
g_rotated = rotate(g, q)
# subtract rotated g from acc
return np.subtract(acc, g_rotated)
# For the rotation, we need to perform q * g * q_conjugate
def rotate(g, q):
# Convert gravity vector into quaternion
q_g = (0.0,) + g
# rotate gravity to the current orientation q, by performing q * g * q_conjugate
return q_mult(q_mult(q, q_g), q_conjugate(q))[1:]
# Normalize quaternion - the norm should be 1
def normalize(q, tolerance=0.00001):
mag2 = sum(n * n for n in q)
if abs(mag2 - 1.0) > tolerance:
mag = sqrt(mag2)
q = tuple(n / mag for n in q)
return q
def q_mult(q1, q2):
w1, x1, y1, z1 = q1
w2, x2, y2, z2 = q2
w = w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2
x = w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2
y = w1 * y2 + y1 * w2 + z1 * x2 - x1 * z2
z = w1 * z2 + z1 * w2 + x1 * y2 - y1 * x2
return w, x, y, z
def q_conjugate(q):
w, x, y, z = q
return (w, -x, -y, -z)
a = [9.1, -3.7, -0.03]
b = [0.12, 0.79, 0.56, 0.21]
acc = np.array(a)
q = np.array(b)
g = (0, 0, 9.81)
output = subtract_gravity(acc, g, q)
However, the output I am getting seems incorrect. I am holding the IMU such that gravity predominantly affects the X-axis, so the gravity subtraction should leave the x-axis near zero, but the above code outputs [4.48 -4.25 8.61]. Any ideas on why this is happening - are there bugs in my logic?
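One way to cross-check the rotate()/q_mult() chain, assuming scipy is available (this is only a sanity check, not part of the code above), is scipy's Rotation class:

# Cross-check of the rotation with scipy. Rotation.from_quat expects the
# scalar-last order [x, y, z, w], whereas b above is scalar-first [w, x, y, z];
# scipy also normalizes the quaternion itself.
from scipy.spatial.transform import Rotation as R

w, x, y, z = b
g_rot_scipy = R.from_quat([x, y, z, w]).apply(g)  # equivalent to q * g * q_conjugate
print(acc - g_rot_scipy)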
Can somebody please point me in the right direction...
I need to find the parameters a,b,c,d of two functions:
Y1 = ( (a * X1 + b) * p0 + (c * X2 + d) * p1 ) / (a * X1 + b + c * X2 + d)
Y2 = ( (a * X2 + b) * p2 + (c * X2 + d) * p3 ) / (a * X1 + b + c * X2 + d)
X1, X2 (independent variables) and Y1, Y2 (dependent variables) are observations, i.e. one-dimensional arrays with thousands of entries each.
p0, p1, p2, p3 are known constants (scalars).
I successfully solved the problem for the first function alone with a curve fit (see below), but how do I solve the problem for Y1 and Y2 together?
Thank you.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
X = [X1,X2]
def fitFunc(X, a,b,c,d):
X1, X2 = X
return ((a * X1 + b) * p0 + (c * X2 + d) * p1) / (a * X1 + b + c * X2 + d)
fitPar, fitCov = curve_fit(fitFunc, X, Y1)
print(fitPar)
One way would be to minimize both of your functions together using scipy.optimize.minimize. In the example below, the function residual receives a, b, c, and d (starting from the initial guesses in x0). Using these values, Y1 and Y2 are evaluated, and the mean squared error of each function is computed against its data. The returned error is the mean of the two errors. The optimized set of parameters is stored in res as res.x.
import numpy as np
from scipy.optimize import minimize
#p0 = ... known
#p1 = ... known
#p2 = ... known
#p3 = ... known
def Y1(X, a,b,c,d):
X1, X2 = X
return ((a * X1 + b) * p0 + (c * X2 + d) * p1) / (a * X1 + b + c * X2 + d)
def Y2(X, a,b,c,d):
X1, X2 = X
return ((a * X1 + b) * p2 + (c * X2 + d) * p3) / (a * X1 + b + c * X2 + d)
X1 = np.array([X1]) # your X1 array
X2 = np.array([X2]) # your X2 array
X = np.array([X1, X2])
y1_data = np.array([y1_data]) # your y1 data
y2_data = np.array([y2_data]) # your y2 data
def residual(x):
a = x[0]
b = x[1]
c = x[2]
d = x[3]
y1_pred = Y1(X,a,b,c,d)
y2_pred = Y2(X,a,b,c,d)
err1 = np.mean((y1_data - y1_pred)**2)
err2 = np.mean((y2_data - y2_pred)**2)
error = (err1 + err2) / 2
return error
x0 = [1, 1, 1, 1] # Initial guess for a, b, c, and d respectively
res = minimize(residual, x0, method="Nelder-Mead")
print(res.x)
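As an alternative sketch (my own variation, not part of the example above), you can stack the residuals of both models and hand them to scipy.optimize.least_squares, which performs the joint least-squares fit directly. This keeps every residual visible to the solver instead of collapsing them into a single scalar error.

import numpy as np
from scipy.optimize import least_squares

def stacked_residuals(params, X1, X2, y1_data, y2_data):
    # p0..p3 are the known constants defined above
    a, b, c, d = params
    denom = a * X1 + b + c * X2 + d
    y1_pred = ((a * X1 + b) * p0 + (c * X2 + d) * p1) / denom
    y2_pred = ((a * X1 + b) * p2 + (c * X2 + d) * p3) / denom
    # concatenate both residual vectors into one flat array
    return np.concatenate([(y1_data - y1_pred).ravel(),
                           (y2_data - y2_pred).ravel()])

res = least_squares(stacked_residuals, x0=[1.0, 1.0, 1.0, 1.0],
                    args=(X1, X2, y1_data, y2_data))
print(res.x)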
I'd like to fade the values of a line on the Y axis using a gradient that I can control.
This is a simplified version of what I'm trying right now.
y_values = [1,1,1,1,1,1]
for i, y in enumerate(y_values): #i is the index, y is the individual y value
perc = i/(len(y_values)-1) #progress the fade by the percentage of the index divided by the total values
amt = y * perc #y is the target amount to decrease multiplied by index percentage
y -= amt
    print(y)
This code produces this:
1.0
0.8
0.6
0.4
0.2
0.0
It's creating a linear fade, but how do I make the fade exponential instead?
Thank you!
To get an exponential fade, you need two coefficients chosen so that the factor is 1.0 at the start value x1 and reaches the desired final factor k at the end of the interval, x2:
y = y * f(x)
f(x) = A * exp(B * x)
So
f(x1) = 1 = A * exp(B * x1)
f(x2) = k = A * exp(B * x2)
Divide the second by the first:
k = exp(B * (x2 - x1))
ln(k) = B * (x2 - x1)
so
B = ln(k) / (x2 - x1)
A = exp(-B * x1)
Example for x1 = 0, x2 = 60, k = 0.01:
B = ln(0.01) / 60 = -4.6 / 60 ≈ -0.077
A = 1
f(x) = exp(-0.077 * x)
f(30) = exp(-0.077 * 30) ≈ 0.1
Python example:
import math
def calcfading(x1, x2, ratio):
    # returns (A, B) such that A*exp(B*x1) == 1 and A*exp(B*x2) == ratio
    B = math.log(ratio) / (x2 - x1)
    A = math.exp(-B * x1)
    return A, B
def coef(x, fade):
return fade[0]*math.exp(x*fade[1])
cosine = [[x, math.cos(x)] for x in range(0, 11)]
print(cosine)
print()
fade = calcfading(0, 10, 0.01)
expcosine = [[s[0], coef(s[0], fade)*s[1]] for s in cosine]
print(expcosine)
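Applied to the y_values list from the original question (my illustration; I picked a final factor of 0.01 at the last index):

y_values = [1, 1, 1, 1, 1, 1]
fade = calcfading(0, len(y_values) - 1, 0.01)          # fade from 1.0 down to 1%
faded = [y * coef(i, fade) for i, y in enumerate(y_values)]
print(faded)   # roughly [1.0, 0.40, 0.16, 0.063, 0.025, 0.01]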
I have tried adapting my Euler method code, and I am using Euler to calculate the first value so that there are two values to use in Verlet, but when I plot the graph I just get two perpendicular straight lines.
This is the code:
for t in t_array:
if t == 0:
x0 = x
x1 = x0
a = -k * x1 / m
x2 = x1 + dt * v
v = v + dt * a
x_list.append(x2)
v_list.append(v)
else:
x2 = x1
x1 = x0
x2 = 2*x1 - x0 + dt**2*a
v = (1/dt)*(x2 - x1)
x_list.append(x2)
v_list.append(v)
I then plot a graph using matplotlib.
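For reference, this is a minimal sketch of the scheme I am trying to reproduce, as I understand it (my assumptions: a mass m on a spring with constant k, x(0) = 1, v(0) = 0, and one Euler step to get the second position):

import numpy as np
import matplotlib.pyplot as plt

m, k = 1.0, 1.0                       # assumed mass and spring constant
dt = 0.01
t_array = np.arange(0.0, 10.0, dt)

x_prev = 1.0                          # x at t = 0
x_curr = x_prev + dt * 0.0            # one Euler step with v(0) = 0
x_list = [x_prev, x_curr]

for t in t_array[2:]:
    a = -k * x_curr / m                           # acceleration at the current position
    x_next = 2 * x_curr - x_prev + dt**2 * a      # Stormer-Verlet update
    x_list.append(x_next)
    x_prev, x_curr = x_curr, x_next

plt.plot(t_array, x_list)
plt.show()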
I'm trying to adapt this noise module to a project I'm working on, but I'm not getting the results I was expecting:
https://pypi.python.org/pypi/noise/
Instead of a varied height-map, each row tends to have the exact same value. If I create something 250x250 it will generate the same value 250 times and then generate a new value 250 times until it ends.
Here is the function I'm currently using. I understand this function fairly well but I'm just not sure how to get more "interesting" results. Thank you for your help.
class SimplexNoise(BaseNoise):
def noise2(self, x, y):
"""2D Perlin simplex noise.
Return a floating point value from -1 to 1 for the given x, y coordinate.
The same value is always returned for a given x, y pair unless the
permutation table changes (see randomize above).
"""
# Skew input space to determine which simplex (triangle) we are in
s = (x + y) * _F2
i = floor(x + s)
j = floor(y + s)
t = (i + j) * _G2
x0 = x - (i - t) # "Unskewed" distances from cell origin
y0 = y - (j - t)
if x0 > y0:
i1 = 1; j1 = 0 # Lower triangle, XY order: (0,0)->(1,0)->(1,1)
else:
i1 = 0; j1 = 1 # Upper triangle, YX order: (0,0)->(0,1)->(1,1)
x1 = x0 - i1 + _G2 # Offsets for middle corner in (x,y) unskewed coords
y1 = y0 - j1 + _G2
x2 = x0 + _G2 * 2.0 - 1.0 # Offsets for last corner in (x,y) unskewed coords
y2 = y0 + _G2 * 2.0 - 1.0
# Determine hashed gradient indices of the three simplex corners
perm = BaseNoise.permutation
ii = int(i) % BaseNoise.period
jj = int(j) % BaseNoise.period
gi0 = perm[ii + perm[jj]] % 12
gi1 = perm[ii + i1 + perm[jj + j1]] % 12
gi2 = perm[ii + 1 + perm[jj + 1]] % 12
# Calculate the contribution from the three corners
tt = 0.5 - x0**2 - y0**2
if tt > 0:
g = _GRAD3[gi0]
noise = tt**4 * (g[0] * x0 + g[1] * y0)
else:
noise = 0.0
tt = 0.5 - x1**2 - y1**2
if tt > 0:
g = _GRAD3[gi1]
noise += tt**4 * (g[0] * x1 + g[1] * y1)
tt = 0.5 - x2**2 - y2**2
if tt > 0:
g = _GRAD3[gi2]
noise += tt**4 * (g[0] * x2 + g[1] * y2)
return noise * 70.0 # scale noise to [-1, 1]
import pygcurse

win = pygcurse.PygcurseWindow(85, 70, 'Generate')
octaves = 2
ysize = 150
xsize = 150
freq = 32.0 * octaves
for y in range(ysize):
for x in range(xsize):
tile = SimplexNoise.noise2(x / freq, y / freq, octaves)
win.write(str(tile) + "\n")
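For comparison, this is a stripped-down version of the loop I expected to produce varied values (my own simplification; it assumes BaseNoise is importable from the module so the class can be instantiated, and it ignores octaves and the pygcurse output):

# Sample the grid with per-pixel coordinates on an instance of the class.
noise_gen = SimplexNoise()
heightmap = [[noise_gen.noise2(x / freq, y / freq) for x in range(xsize)]
             for y in range(ysize)]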