Fitting an ellipse through orbital data - python

I've generated a bunch of data for the (x,y,z) coordinates of a planet as it orbits around the Sun. Now I want to fit an ellipse through this data.
What I tried to do:
I created a dummy ellipse based on five parameters: the semi-major axis & eccentricity, which define its size & shape, and the three Euler angles that rotate the ellipse in space. Since my data is not always centred at the origin, I also need to translate the ellipse, which requires three additional variables (dx, dy, dz).
Once I initialise this function with these eight variables, I get back N points that lie on this ellipse (N = number of data points I am fitting the ellipse through).
I calculate the deviation of these dummy points from the actual data and then I minimise this deviation using some minimisation method to find the best fitting values for these eight variables.
My problem is with the very last part: minimising the deviation and finding the variables' values.
To minimise the deviation, I use scipy.optimize.minimize to try and approximate the best-fitting variables, but it just doesn't do a good enough job:
Here is an image of what one of my best fits looks like and that's with a very generously accurate initial guess. (blue = data, red = fit)
Here is the entire code. (No data required, it generates its own phony data)
In short, I use this scipy function:
initial_guess = [0.3,0.2,0.1,0.7,3,0.0,-0.1,0.0]
bnds = ((0.2, 0.5), (0.1, 0.3), (0, 2*np.pi), (0, 2*np.pi), (0, 2*np.pi), (-0.5,0.5), (-0.5,0.5), (-0.3,0.3)) #reasonable bounds for the variables
result = optimize.minimize(deviation, initial_guess, args=(data,), method='L-BFGS-B', bounds=bnds, tol=1e-8) #perform minimisation
semi_major,eccentricity,inclination,periapsis,longitude,dx,dy,dz = result["x"]
To minimize this error (or deviation) function:
def deviation(variables, data):
    """
    This function calculates the cumulative separation between the ellipse fit points and the data points and returns it
    """
    num_pts = len(data[:,0])
    semi_major,eccentricity,inclination,periapsis,longitude,dx,dy,dz = variables
    dummy_ellipse = generate_ellipse(num_pts,semi_major,eccentricity,inclination,periapsis,longitude,dz,dy,dz)
    deviations = np.zeros(len(data[:,0]))
    pair_deviations = np.zeros(len(data[:,0]))
    # Calculate separation between each pair of points
    for j in range(len(data[:,0])):
        for i in range(len(data[:,0])):
            pair_deviations[i] = np.sqrt((data[j,0]-dummy_ellipse[i,0])**2 + (data[j,1]-dummy_ellipse[i,1])**2 + (data[j,2]-dummy_ellipse[i,2])**2)
        deviations[j] = min(pair_deviations) # only pick the closest point to the data point j.
    total_deviation = sum(deviations)
    return total_deviation
(My code may be a bit messy & inefficient, I'm new to this)
I may be making some logical error in my code, but I think it comes down to the scipy.optimize.minimize function. I don't know enough about how it works and what to expect of it. I was also advised to try Markov chain Monte Carlo when dealing with this many variables. I did take a look at emcee, but it's a little above my head right now.

First, you have a typo in your objective function that prevents optimization of one of the variables:
dummy_ellipse = generate_ellipse(...,dz,dy,dz)
should be
dummy_ellipse = generate_ellipse(...,dx,dy,dz)
Also, taking the sqrt out and minimizing the sum of squared Euclidean distances makes the problem numerically somewhat easier for the optimizer.
Your objective function is also not everywhere differentiable because of the min(), whereas the L-BFGS-B solver assumes a smooth objective, so its performance will be suboptimal.
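For illustration, here is a vectorized objective along those lines (a sketch, not your exact code: it keeps your generate_ellipse signature, fixes the dx typo from above, drops the sqrt, and replaces the double loop with broadcasting):
def deviation_sq(variables, data):
    semi_major, eccentricity, inclination, periapsis, longitude, dx, dy, dz = variables
    dummy_ellipse = generate_ellipse(len(data), semi_major, eccentricity,
                                     inclination, periapsis, longitude, dx, dy, dz)
    # Squared distances between every data point (rows) and every ellipse point (columns)
    sq_dists = ((data[:, None, :] - dummy_ellipse[None, :, :])**2).sum(axis=2)
    # For each data point keep only its closest ellipse point, then sum
    return sq_dists.min(axis=1).sum()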
Also, approaching the problem from an analytical geometry perspective may help: an ellipse in 3D is defined as the solution of two equations
f1(x,y,z,p) = 0
f2(x,y,z,p) = 0
where p are the parameters of the ellipse. Now, to fit the parameters to a data set, you could try to minimize
F(p) = sum_{j=1}^N [f1(x_j,y_j,z_j,p)**2 + f2(x_j,y_j,z_j,p)**2]
where the sum goes over data points.
Even better, with this problem formulation you could use optimize.leastsq (or the newer optimize.least_squares), which is usually more efficient for least-squares problems.
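A minimal sketch of that implicit-equation approach (my own illustration, not code from the question): f1 is the orbital-plane condition, f2 is the in-plane ellipse equation, the orientation is built from the three Euler angles with scipy.spatial.transform.Rotation, and the stacked residuals go to optimize.least_squares. The 'ZXZ' Euler convention and the use of the geometric centre are assumptions and should match whatever convention generate_ellipse uses:
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def implicit_residuals(params, data):
    a, e, inclination, periapsis, longitude, dx, dy, dz = params
    b = a * np.sqrt(1.0 - e**2)                        # semi-minor axis
    R = Rotation.from_euler('ZXZ', [longitude, inclination, periapsis]).as_matrix()
    u, v, n = R[:, 0], R[:, 1], R[:, 2]                # in-plane axes and plane normal
    r = data - np.array([dx, dy, dz])                  # shift data to the ellipse centre
    f1 = r @ n                                         # out-of-plane residual
    f2 = (r @ u / a)**2 + (r @ v / b)**2 - 1.0         # in-plane ellipse residual
    return np.concatenate([f1, f2])

# result = least_squares(implicit_residuals, initial_guess, args=(data,))
# a, e, inclination, periapsis, longitude, dx, dy, dz = result.x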

Is there a term for finding a minimal set of N points that approximate a curve?

I have spent some time answering How do I discretize a continuous function avoiding noise generation (see picture), and throughout I felt like I was reinventing the wheel.
Essentially, the problem is:
You are given a curve function - for any x, you can obtain y.
You want to approximate the curve using a piecewise-linear function with exactly N points, based on some error metric, e.g. the distance to the curve, or the absolute difference of the area under the curves (thanks to @QuangHoang for pointing out these are different).
Here's an example of a curve I approximated using 20 points:
Question: I've coded this up using repeated bisections. Is there a library I could have used? Is there a nice term for this type of problem that I failed to google? Does this generalize to a broader problem set?
Edit: upon request, here's how I've done it:
Google Colab
Data:
import numpy as np
from scipy.signal import gaussian
N_MOCK = 2000
# A nice-ish mock distribution
xs = np.linspace(-10.0, 10.0, num=N_MOCK)
sigmoid = 1 / (1 + np.exp(-xs))
gauss = gaussian(N_MOCK, std=N_MOCK / 10)
ys = gauss - sigmoid + 1
xs += 10
xs /= 20
Plotting:
import matplotlib.pyplot as plt
def plot_graph(cont_time, cont_array, disc_time, disc_array, plot_name):
    """A simplified version of the provided plotting function"""
    # Setting Axis properties and titles
    fig, ax = plt.subplots(figsize=(20, 4))
    ax.set_title(plot_name)
    # Plotting stuff
    ax.plot(cont_time, cont_array, label="Continuous", color='#0000ff')
    ax.plot(disc_time, disc_array, label="Discrete", color='#00ff00')
    fig.legend(loc="upper left", bbox_to_anchor=(0,1), bbox_transform=ax.transAxes)
Here's how I solved it, but I hope there's a more standard way:
import warnings
warnings.simplefilter('ignore', np.RankWarning)

def line_error(x0, y0, x1, y1, ideal_line, integral_points=100):
    """Assume a straight line between (x0,y0)->(x1,y1). Then sample the perfect line multiple times and compute the distance."""
    straight_line = np.poly1d(np.polyfit([x0, x1], [y0, y1], 1))
    xs = np.linspace(x0, x1, num=integral_points)
    ys = straight_line(xs)
    perfect_ys = ideal_line(xs)
    err = np.abs(ys - perfect_ys).sum() / integral_points * (x1 - x0)  # Remove (x1 - x0) to only look at avg errors
    return err

def discretize_bisect(xs, ys, bin_count):
    """Returns xs and ys of discrete points"""
    # For a large number of datapoints, without loss of generality you can treat xs and ys as bin edges
    # If it gives bad results, you can compute edges in other ways, e.g. with np.histogram_bin_edges
    ideal_line = np.poly1d(np.polyfit(xs, ys, 50))
    new_xs = [xs[0], xs[-1]]
    new_ys = [ys[0], ys[-1]]
    while len(new_xs) < bin_count:
        errors = []
        for i in range(len(new_xs)-1):
            err = line_error(new_xs[i], new_ys[i], new_xs[i+1], new_ys[i+1], ideal_line)
            errors.append(err)
        max_segment_id = np.argmax(errors)
        new_x = (new_xs[max_segment_id] + new_xs[max_segment_id+1]) / 2
        new_y = ideal_line(new_x)
        new_xs.insert(max_segment_id+1, new_x)
        new_ys.insert(max_segment_id+1, new_y)
    return new_xs, new_ys
Run:
BIN_COUNT = 25
new_xs, new_ys = discretize_bisect(xs, ys, BIN_COUNT)
plot_graph(xs, ys, new_xs, new_ys, f"Discretized and Continuous comparison, N(cont) = {N_MOCK}, N(disc) = {BIN_COUNT}")
print("Bin count:", len(new_xs))
Note: while I prefer numpy, the answer can be a library in any language, or the name of the mathematical term. Please do not write lots of code, as I have done that myself already :)
Is there a nice term for this type of problem that I failed to google? Does this generalize to a broader problem set?
I know this problem as Expected Improvement (EI), or Bayesian optimization (permalink on archive.org). Given an expensive black box function for which you would like to find the global maximum, this algorithm yields the next position where to check for that maximum.
At first glance, this is different from your problem. You are looking for a way to approximate a curve with a small number of samples, while EI provides the places where the function has its most likely maximum. But both problems are equivalent insofar that you minimize an error function (which will change when you add another sample to your approximation) with the fewest possible points.
I believe this is the original research paper.
Jones, Donald & Schonlau, Matthias & Welch, William. (1998). Efficient Global Optimization of Expensive Black-Box Functions. Journal of Global Optimization. 13. 455-492. 10.1023/A:1008306431147.
From section 1:
[...] the technique often requires the fewest function evaluations of all competing methods. This is possible because, with typical engineering functions, one can often interpolate and extrapolate quite accurately over large distances in the design space. Intuitively, the method is able to ‘see’ obvious trends or patterns in the data and ‘jump to conclusions’ instead of having to move step-by-step along some trajectory.
As to why it is efficient:
[...] the response surface approach provides a credible stopping rule based on the expected improvement from further searching. Such a stopping rule is possible because the statistical model provides confidence intervals on the function’s value at unsampled points – and the ‘reasonableness’ of these confidence intervals can be checked by model validation techniques.
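For completeness, a minimal sketch of what EI-driven optimization looks like in code, using the third-party scikit-optimize package (my choice of library here, not something from the paper; the black-box function is just a stand-in):
from skopt import gp_minimize

def expensive_black_box(x):
    # Stand-in for an expensive function of one variable
    return (x[0] - 2.0)**2 + 1.0

# Gaussian-process surrogate; the next sample location is chosen by Expected Improvement
res = gp_minimize(expensive_black_box, dimensions=[(-5.0, 5.0)],
                  acq_func="EI", n_calls=30, random_state=0)
print(res.x, res.fun)   # approximate minimiser and minimum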

Least-square spline interpolation forcing interpolant to pass through specific points

I am having issues implementing a somewhat unusual interpolation problem. I have some (x,y) data points scattered along a curve which I don't know a priori, and I want to reconstruct this curve as well as possible, interpolating my points with minimum squared error. I thought of using scipy.interpolate.splrep for this purpose (but maybe there are better options you would advise). The additional difficulty in my case is that I want to constrain the spline curve to pass through some specific points of my original data. I assume that playing with knots and weights could do the trick, but I don't know how (I have been putting off learning spline interpolation theory beyond basic fitting procedures). Also, for some undisclosed reason, when I try to set up knots in my splrep I get the same error as in this post, which keeps complicating things. The following is my sample code:
from __future__ import division
import numpy as np
import scipy.interpolate as spi
import matplotlib.pylab as plt
# Some surrogate sample data
f = lambda x : x**2 - x/2.
x = np.arange(0.,20.,0.1)
y = f(4*(x + np.random.normal(size=np.size(x))))
# I want to use spline interpolation with least-square fitting criterion, making sure though that the spline starts
# from the origin (or in general passes through a precise point of my dataset).
# In my case for example I would like the spline to originate from the point in x=0. So I attempted to include as first knot x=0...
# but it won't work, nor I am sure this is the right procedure...
fy = spi.splrep(x,y)
fy = spi.splrep(x,y,t=fy[0])
yy = spi.splev(x,fy)
plt.plot(x,y,'-',x,yy,'--')
plt.show()
which, despite the fact that I am even passing knots computed from a first call to splrep, gives me:
File "/usr/lib64/python2.7/site-packages/scipy/interpolate/fitpack.py", line 289, in splrep
res = _impl.splrep(x, y, w, xb, xe, k, task, s, t, full_output, per, quiet)
File "/usr/lib64/python2.7/site-packages/scipy/interpolate/_fitpack_impl.py", line 515, in splrep
raise _iermess[ier][1](_iermess[ier][0])
ValueError: Error on input data
Use the weights argument of splrep: give the points you need fixed very large weights. This is a workaround for sure, so keep an eye on the fit quality and stability.
Setting high weights for specific points is indeed a working solution, as suggested by @ev-br. In addition, because there is no direct way to match derivatives at the endpoints of the curve, the same rationale can be applied there as well. Say you want the derivatives at y[0] and y[-1] to match the derivatives of your data points; then you also add large weights for y[1] and y[-2], i.e.
weights = np.ones(len(x))
weights[[0,-1]] = 100 # Promote spline interpolant through first and last point
weights[[1,-2]] = 50 # Make spline interpolant derivative tend to derivatives at first/last point
fy = spi.splrep(x,y,w=weights,s=0.1)
yy = spi.splev(x,fy)
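A quick sanity check on the result (a small addition to the snippet above): the heavily weighted endpoints should now be reproduced almost exactly by the spline.
print(spi.splev(x[0], fy), y[0])    # spline value vs data at the first (pinned) point
print(spi.splev(x[-1], fy), y[-1])  # spline value vs data at the last (pinned) point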

Reconstructing curve from gradient

Suppose I have a curve, and then I estimate its gradient via finite differences by using np.gradient. Given an initial point x[0] and the gradient vector, how can I reconstruct the original curve? Mathematically I see it's possible given this system of equations, but I'm not certain how to do it programmatically.
Here is a simple example of my problem, where I have sin(x) and I compute the numerical difference, which matches cos(x).
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(1, 30, 100)
test = np.vectorize(np.sin)(x)
numerical_grad = np.gradient(test, 30./100)
analytical_grad = np.vectorize(np.cos)(x)

## Plot data.
fig, ax = plt.subplots()
ax.plot(test, label='data', marker='o')
ax.plot(numerical_grad, label='gradient')
ax.plot(analytical_grad, label='proof', alpha=0.5)
ax.legend();
I found how to do it, by using numpy's trapz function (trapezoidal rule integration).
Following up on the code I presented on the question, to reproduce the input array test, we do:
x = np.linspace(1, 30, 100)
integral = list()
for t in range(len(x)):
    integral.append(test[0] + np.trapz(numerical_grad[:t+1], x[:t+1]))
The integral array then contains the results of the numerical integration.
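As a side note, the same cumulative integral can be computed without the explicit loop using SciPy's cumulative_trapezoid (called cumtrapz in older SciPy versions); a minimal, self-contained sketch with the same array names:
import numpy as np
from scipy.integrate import cumulative_trapezoid

x = np.linspace(1, 30, 100)
test = np.sin(x)
numerical_grad = np.gradient(test, x)

# initial=0 prepends a zero so the result has the same length as x
integral = test[0] + cumulative_trapezoid(numerical_grad, x, initial=0)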
You can restore the initial curve using integration.
As a real-life example: if you have the function for position in 1D motion, you can get the function for velocity as its derivative (gradient)
v(t) = s'(t) = ds / dt
And having the velocity, you can get the position back (not all functions are integrable analytically - in that case numerical integration is used), up to an unknown constant (shift); with the initial position known, you can restore the exact value
s(T) = Integral[from 0 to T] v(t) dt + s(0)

Fitting Fresnel Equations Using Scipy

I am attempting a non-linear fit of the Fresnel equations to data of reflectance against angle of incidence. On this page http://en.wikipedia.org/wiki/Fresnel_equations there are two graphs that have a red and a blue line. I basically need to fit the blue line, with n1 = 1, to my data.
Here I use the following code where th is theta, the angle of incidence.
def Rperp(th, n, norm, constant):
    numerator = np.cos(th) - np.sqrt(n**2.0 - np.sin(th)**2.0)
    denominator = 1.0 * np.cos(th) + np.sqrt(n**2.0 - np.sin(th)**2.0)
    return ((numerator / denominator)**2.0) * norm + constant
The parameters I'm looking for are:
the index of refraction n
some normalization to multiply by and
a constant to shift the baseline of the graph.
My attempt is the following:
xdata = angle[1:] * 1.0 # angle of incidence
ydata = greenDD[1:] # reflectance
params = curve_fit(Rperp, xdata, ydata)
What I apparently get is a division by zero, and it gives me [1, 1, 1] for the parameters. The Fresnel equation itself is the part of Rperp without the normalizer and the constant. Theta in the equation is also the angle of incidence. Overall, I am just not sure if I am going about this the right way to get the parameters.
The idea seems to be that the first argument of the function is the independent variable and the rest are the parameters to be found. Then you just plug it into scipy's curve_fit and it will give you a fit to your data for those parameters. If it is just a matter of getting around the division by zero, which I had thought might be integer division, then it seems like I should be set. Any help is appreciated, and let me know if things need to be clarified (such as: np is numpy).
Make sure to pass the arguments to the trigonometric functions, like sine, in radians, not degrees.
As for why you're getting a negative refractive index returned: it is because in your function, you're always squaring the refractive index. The curve_fit algorithm might end up in a local minimum state where (by accident) n is negative, because it has the same value as n positive.
Ideally, you'd add constraints to the minimization problem, but for this (simple) problem, just observe your formula and remember that a result of negative n is simply solved by changing the sign, as you did.
You could also try passing an initial guess to the algorithm and you might observe that it will not end up in the local minimum with negative value.
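Putting those suggestions together, here is a hedged sketch of how the fit could be set up; the synthetic data below stands in for the question's angle/greenDD arrays, and the initial guess and bounds are purely illustrative:
import numpy as np
from scipy.optimize import curve_fit

def Rperp(th, n, norm, constant):
    # th must be in radians
    root = np.sqrt(n**2 - np.sin(th)**2)
    return ((np.cos(th) - root) / (np.cos(th) + root))**2 * norm + constant

# Synthetic reflectance data with angles in degrees
angles_deg = np.linspace(0.0, 80.0, 50)
rng = np.random.default_rng(0)
ydata = Rperp(np.radians(angles_deg), 1.5, 1.0, 0.02) + rng.normal(0, 0.005, 50)

p0 = [1.3, 1.0, 0.0]                              # initial guess: n, normalisation, offset
bounds = ([1.0, 0.0, -1.0], [3.0, 10.0, 1.0])     # keeps n physical (n >= 1)
popt, pcov = curve_fit(Rperp, np.radians(angles_deg), ydata, p0=p0, bounds=bounds)
print(popt)   # should recover roughly [1.5, 1.0, 0.02]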

Truncated multivariate normal in SciPy?

I'm trying to automate a process that at some point needs to draw samples from a truncated multivariate normal. That is, it's a regular multivariate normal distribution (i.e. Gaussian), but the variables are constrained to a cuboid. My given inputs are the mean and covariance of the full multivariate normal, but I need samples inside my box.
Up to now, I'd just been rejecting samples outside the box and resampling as necessary, but I'm starting to find that my process sometimes gives me (a) large covariances and (b) means that are close to the edges. These two events conspire against the speed of my system.
So what I'd like to do is sample the distribution correctly in the first place. Googling led only to this discussion or the truncnorm distribution in scipy.stats. The former is inconclusive and the latter seems to be for one variable. Is there any native multivariate truncated normal? And is it going to be any better than rejecting samples, or should I do something smarter?
I'm going to start working on my own solution, which would be to rotate the untruncated Gaussian to its principal axes (with an SVD or something), use a product of truncated Gaussians to sample the distribution, then rotate that sample back, and reject/resample as necessary. If the truncated sampling is more efficient, I think this should sample the desired distribution faster.
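For reference, a minimal sketch of the naive rejection approach described above (the function name and batching scheme are just illustrative):
import numpy as np

def sample_truncated_mvn_rejection(mean, cov, lower, upper, n_samples, batch=10000):
    """Draw from the full multivariate normal and keep only points inside the box."""
    # Can be arbitrarily slow when the acceptance rate is low, which is exactly the problem described above.
    rng = np.random.default_rng()
    kept = []
    n_kept = 0
    while n_kept < n_samples:
        draws = rng.multivariate_normal(mean, cov, size=batch)
        inside = np.all((draws >= lower) & (draws <= upper), axis=1)
        kept.append(draws[inside])
        n_kept += inside.sum()
    return np.concatenate(kept)[:n_samples]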
So, according to the Wikipedia article, sampling a multivariate truncated normal distribution (MTND) is more difficult. I ended up taking a relatively easy way out and using an MCMC sampler to relax an initial guess towards the MTND as follows.
I used emcee to do the MCMC work. I find this package phenomenally easy to use. It only requires a function that returns the log-probability of the desired distribution. So I defined this function:
import numpy as np
from numpy.linalg import inv

def lnprob_trunc_norm(x, mean, bounds, C):
    if np.any(x < bounds[:,0]) or np.any(x > bounds[:,1]):
        return -np.inf
    else:
        return -0.5*(x-mean).dot(inv(C)).dot(x-mean)
Here, C is the covariance matrix of the multivariate normal. Then, you can run something like
S = emcee.EnsembleSampler(Nwalkers, Ndim, lnprob_trunc_norm, args = (mean, bounds, C))
pos, prob, state = S.run_mcmc(pos, Nsteps)
for given mean, bounds and C. You need an initial guess for the walkers' positions pos, which could be a ball around the mean,
pos = emcee.utils.sample_ball(mean, np.sqrt(np.diag(C)), size=Nwalkers)
or sampled from an untruncated multivariate normal,
pos = numpy.random.multivariate_normal(mean, C, size=Nwalkers)
and so on. I personally first do several thousand burn-in steps and discard those samples, which is fast, then force the remaining outliers back within the bounds, and then run the actual MCMC sampling.
The number of steps for convergence is up to you.
Note also that emcee easily supports basic parallelization by adding the argument threads=Nthreads to the EnsembleSampler initialization. So you can make this blazing fast.
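As a small follow-up (assuming the emcee 3.x API and that enough steps were run), the post-burn-in samples can be collected like this:
# Discard the burn-in steps and flatten the walkers into one set of samples
burn = 1000
samples = S.get_chain(discard=burn, flat=True)   # shape: ((Nsteps - burn) * Nwalkers, Ndim)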
I have reimplemented an algorithm which does not depend on MCMC but creates independent and identically distributed (iid) samples from the truncated multivariate normal distribution. Having iid samples can be very useful! I used to also use emcee as described in the answer by Warrick, but for convergence the number of samples needed exploded in higher dimensions, making it impractical for my use case.
The algorithm was introduced by Botev (2016) and uses an accept-reject algorithm based on minimax exponential tilting. It was originally implemented in MATLAB but reimplementing it for Python increased the performance significantly compared to running it using the MATLAB engine in Python. It also works well and is fast at higher dimensions.
The code is available at: https://github.com/brunzema/truncated-mvn-sampler.
An Example:
d = 10 # dimensions
# random mu and cov
mu = np.random.rand(d)
cov = 0.5 - np.random.rand(d ** 2).reshape((d, d))
cov = np.triu(cov)
cov += cov.T - np.diag(cov.diagonal())
cov = np.dot(cov, cov)
# constraints
lb = np.zeros_like(mu) - 1
ub = np.ones_like(mu) * np.inf
# create truncated normal and sample from it
n_samples = 100000
tmvn = TruncatedMVN(mu, cov, lb, ub)
samples = tmvn.sample(n_samples)
Plotting the first dimension results in:
Reference:
Botev, Z. I. (2016). The normal law under linear restrictions: simulation and estimation via minimax tilting. Journal of the Royal Statistical Society: Series B, 79(1), 125-148. doi:10.1111/rssb.12162
Simulating truncated multivariate normal can be tricky and usually involves some conditional sampling by MCMC.
My short answer is, you can use my code (https://github.com/ralphma1203/trun_mvnt)! It implements a Gibbs sampler algorithm that can handle general linear constraints of the form lower <= D x <= upper, even when you have a non-full-rank D and more constraints than the dimensionality.
import numpy as np
from trun_mvnt import rtmvn, rtmvt
########## Traditional problem, probably what you need... ##########
##### lower < X < upper #####
# So D = identity matrix
D = np.diag(np.ones(4))
lower = np.array([-1,-2,-3,-4])
upper = -lower
Mean = np.zeros(4)
Sigma = np.diag([1,2,3,4])
n = 10 # want 10 final samples
burn = 100 # burn-in first 100 iterates
thin = 1 # thinning for Gibbs
random_sample = rtmvn(n, Mean, Sigma, D, lower, upper, burn, thin)
# Numpy array n-by-p as result!
random_sample
########## Non-full rank problem (more constraints than dimension) ##########
Mean = np.array([0,0])
Sigma = np.array([1, 0.5, 0.5, 1]).reshape((2,2)) # bivariate normal
D = np.array([1,0,0,1,1,-1]).reshape((3,2)) # non-full rank problem
lower = np.array([-2,-1,-2])
upper = np.array([2,3,5])
n = 500 # want 500 final samples
burn = 100 # burn-in first 100 iterates
thin = 1 # thinning for Gibbs
random_sample = rtmvn(n, Mean, Sigma, D, lower, upper, burn, thin) # Numpy array n-by-p as result!
A little late I guess, but for the record: you could use Hamiltonian Monte Carlo. There is a MATLAB module for this named HMC exact. It shouldn't be too difficult to translate it to Python.
