Scipy Optimization Solution for Multivariable equation - python

I have a formula here to calculate the volume.
import math

def VolCalc(H, L, R, V):
    # volume of liquid at fill height H in a horizontal cylinder of radius R and length L
    return L * ((math.acos((R - H) / R) * R**2) - ((R - H) * math.sqrt((2 * R * H) - H**2)))
However, I have been given the value of the volume (V), and from it the height (H) must be calculated. The values of the radius (R) and length (L) are known.
This requires a numerical approximation, and I need to know which SciPy optimisation tool to use. I've looked myself but am struggling to find the correct one.
I appreciate any help with this matter.

I believe you are looking for scipy.optimize.minimize.
Set this up as a minimisation problem, where you are looking for the minimum of a scalar function:

def target(H):
    return abs(V - (L * ((math.acos((R - H) / R) * R**2) - ((R - H) * math.sqrt((2 * R * H) - H**2)))))

where you need to define the values of V, R and L.
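For completeness, a minimal, untested sketch of how the call might look (the values of V, R and L are placeholders, and the bounds simply keep H inside the physically valid range 0 < H < 2R):

import math
from scipy.optimize import minimize

R, L, V = 2.0, 10.0, 40.0  # placeholder values; replace with your known quantities

def target(params):
    H = params[0]
    return abs(V - (L * ((math.acos((R - H) / R) * R**2)
                         - ((R - H) * math.sqrt((2 * R * H) - H**2)))))

res = minimize(target, x0=[R], bounds=[(1e-9, 2 * R - 1e-9)])
print(res.x[0])  # estimated height H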

Intersection of Parabolas for Voronoi Diagram

This is my first question on Stack Overflow, so if I have formatted something incorrectly or not added the correct information, please let me know and I will fix it when I can.
I'm trying to implement Fortune's Algorithm for creating a Voronoi Diagram in Roblox. Roblox uses a more limited version of Lua, so I can't just import modules or copy-paste most code. Fortune's Algorithm uses parabolas to find the points required to make the edges around each site, and I found an equation in Fortune's Algorithm: An intuitive explanation that generates a parabola where every point on the parabola is equidistant from the focus and the directrix.
I know how to find the intersection of two parabolas when they are in standard form, but I have been unable to get the above equation into standard form. I've tried (on paper) putting it on both sides of the equals sign, setting it equal to zero, and then expanding the polynomials using Wolfram Alpha's polynomial expander, producing
[(-X^2 * fx) + (X^2 * f'y) + (2 * dy * fx * X) - (2 * dy * f'x * X) + (2 * f'x * fy * X) - (2 * fx * f'y * X) + (dy^2 * fy) - (dy^2 * f'y) - (dy * fy^2) + (dy * f'y^2) - (fy * f'y^2) + (f'y * fy^2) + (dy * f'x^2) - (f'x^2 * fy) - (dy * fx^2) + (f'y * fx^2)] / [(2 * dy^2) - (dy * fy) - (dy * f'x) + (fy * f'y)]
where X is a variable, fx and fy are the x and y coordinates of the first focus, f'x and f'y are the x and y coordinates of the second focus, and dy is the y coordinate of the directrix. I am unsure of where to go from here.
I asked this on Stack Overflow in the hopes of finding someone who has implemented it before, but I may also ask it on a more math-related website as I just need the formula that gives the intercept(s). Having said this, if anyone has a code answer to this, Python would be preferred as I am most familiar with it.
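For reference, one common route (not tied to any particular implementation) is to write each arc as y = ((x - fx)^2 + fy^2 - dy^2) / (2 * (fy - dy)), which follows directly from the equidistance condition, and then intersect two such arcs with the quadratic formula. A small, untested Python sketch of that idea, where (gx, gy) is the second focus:

import math

def parabola_y(x, fx, fy, dy):
    # y-coordinate on the parabola with focus (fx, fy) and directrix y = dy
    return ((x - fx) ** 2 + fy ** 2 - dy ** 2) / (2 * (fy - dy))

def intersections(fx, fy, gx, gy, dy):
    # set the two parabola equations equal and solve the resulting quadratic in x
    d1, d2 = fy - dy, gy - dy
    a = d2 - d1
    b = 2 * (gx * d1 - fx * d2)
    c = d2 * (fx ** 2 + fy ** 2 - dy ** 2) - d1 * (gx ** 2 + gy ** 2 - dy ** 2)
    if a == 0:                      # foci at the same height: a single intersection
        xs = [-c / b]
    else:
        disc = b ** 2 - 4 * a * c
        if disc < 0:
            return []
        xs = [(-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)]
    return [(x, parabola_y(x, fx, fy, dy)) for x in xs]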
Unrelated to the main question: does anyone know whether there is a way to format equations on Stack Overflow that is easier to read than a single line, and if so, how?

Is there any way to choose how many features are selected in Binary Particle Swarm Optimization?

I implemented BPSO as a feature selection approach using the pyswarms library. I followed this tutorial.
Is there a way to limit the maximum number of features? If not, are there other particle swarm (or genetic/simulated annealing) Python implementations that have this functionality?
An easy way is to introduce a penalty for the number of features used. In the following code (from that tutorial), the objective is defined:
# Perform classification and store performance in P
classifier.fit(X_subset, y)
P = (classifier.predict(X_subset) == y).mean()
# Compute for the objective function
j = (alpha * (1.0 - P)
     + (1.0 - alpha) * (1 - (X_subset.shape[1] / total_features)))
return j
What you could do is add a penalty if the number of features is above max_num_features, e.g.

features_count = np.count_nonzero(m)
# positive only when the particle selects more than max_num_features features
features_overflow = np.clip(features_count - max_num_features, 0, 10)
feature_overflow_penalty = features_overflow / 10
and define a new objective with:

j = (alpha * (1.0 - P)
     + (1.0 - alpha) * (1 - (X_subset.shape[1] / total_features))
     + feature_overflow_penalty)

This is not tested, and there is some work to do to find the right penalty weight. An alternative is to never suggest/try feature subsets above a certain size.
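Putting the pieces together, an untested sketch of how the penalised objective might be plugged into pyswarms' BinaryPSO (the data, the classifier, and the values of alpha and max_num_features are placeholders):

import numpy as np
import pyswarms as ps
from sklearn.linear_model import LogisticRegression

X = np.random.rand(100, 15)                      # placeholder data
y = (np.random.rand(100) > 0.5).astype(int)
total_features = X.shape[1]
classifier = LogisticRegression()
alpha, max_num_features = 0.88, 5                # placeholder weight / feature limit

def f_per_particle(m, alpha):
    # m is a binary mask selecting a feature subset
    X_subset = X if np.count_nonzero(m) == 0 else X[:, m == 1]
    classifier.fit(X_subset, y)
    P = (classifier.predict(X_subset) == y).mean()
    overflow_penalty = np.clip(np.count_nonzero(m) - max_num_features, 0, 10) / 10
    return (alpha * (1.0 - P)
            + (1.0 - alpha) * (1 - (X_subset.shape[1] / total_features))
            + overflow_penalty)

def f(swarm, alpha=0.88):
    # pyswarms passes the whole swarm at once; evaluate each particle
    return np.array([f_per_particle(p, alpha) for p in swarm])

options = {'c1': 0.5, 'c2': 0.5, 'w': 0.9, 'k': 20, 'p': 2}
optimizer = ps.discrete.BinaryPSO(n_particles=30, dimensions=total_features, options=options)
cost, pos = optimizer.optimize(f, iters=100)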

FiPy - Domain stretches / Frame growth

I'm trying to solve a simple diffusion equation (dT/dt = K d2T/dx2) on a domain whose depth h(t) changes in time. The resulting equation is therefore:
dT/dt = K/h^2 d2T/dz2 + z/h dh/dt dT/dz
where z is now a fixed 0->1 domain.
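(For reference, this follows from the chain rule with z = x/h(t): since dz/dt at fixed x equals -(z/h) dh/dt, one gets dT/dt|_x = dT/dt|_z - (z/h) dh/dt dT/dz and d2T/dx2 = (1/h^2) d2T/dz2; substituting into dT/dt = K d2T/dx2 and rearranging gives the equation above.)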
The new term is frame advection and I'm trying to include it as such but I'm struggling with the spatially dependent coefficient.
When I include it outside the convection term:
mesh.cellCenters[0]*PowerLawConvectionTerm(...)
I get this error:
TermMultiplyError: Must multiply terms by int or float
But if I reorganise the equation so the spatial dependence is inside the convection term:
PowerLawConvectionTerm(coeff=(mesh.cellCenters[0]**2,),...)
I get a different error when solving the equation:
AssertionError: assert( len(id1) == len(id2) == len(vector) )
What is the correct way to include these terms? Is there a silly mistake I'm making somewhere?
The best way to solve this might be to split the last term into two parts so that the equation in FiPy is written
fipy.TransientTerm() == fipy.DiffusionTerm(K / h**2) \
    + fipy.ConvectionTerm(z * z_hat * h_t / h) \
    - h_t / h * T
In FiPy there can't be multipliers outside of the term's derivative so an extra source term is required. Here it is assumed that
K = 1. ## some constant
h = fipy.Variable(...) ## variable that is continuously updated
h_old = fipy.Variable(...) ## variable that is continuously updated
h_t = (h - h_old) / dt ## variable dependent on h and h_old
T = fipy.CellVariable(...)
z_hat = [0, 1] ## vector required for convection term coefficient
T is the variable being solved for; h and h_old are explicitly updated at every sweep or time step using setValue based on some formula. Additionally, the last term can be split into an explicit and an implicit source
- h_t / h * T -> - fipy.ImplicitSourceTerm(1 / dt) + h_old / h / dt * T
depending on how the h_t is evaluated. The implicit source should make the solution very stable.
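(The split follows from h_t = (h - h_old) / dt: -h_t / h * T = -(h - h_old) / (dt * h) * T = -T / dt + h_old / (h * dt) * T, which is why the implicit part has coefficient 1 / dt and the explicit part is h_old / h / dt * T.)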

Python - Vincenty's inverse formula not converging (Finding distance between points on Earth)

I'm attempting to implement Vincenty's inverse problem as described on wiki HERE
The problem is that lambda is simply not converging. The value stays the same if I try to iterate over the sequence of formulas, and I'm really not sure why. Perhaps I've just stared myself blind on an obvious problem.
It should be noted that I'm new to Python and still learning the language, so I'm not sure if it's misuse of the language that might cause the problem, or if I do have some mistakes in some of the calculations that I perform. I just can't seem to find any mistakes in the formulas.
Basically, I've written the code in as close a format to the wiki article as I could, and the result is this:
import math
# Length of radius at equator of the ellipsoid
a = 6378137.0
# Flattening of the ellipsoid
f = 1/298.257223563
# Length of radius at the poles of the ellipsoid
b = (1 - f) * a
# Latitude points
la1, la2 = 10, 60
# Longitude points
lo1, lo2 = 5, 150
# For the inverse problem, we calculate U1, U2 and L.
# We set the initial value of lamb = L
u1 = math.atan( (1 - f) * math.tan(la1) )
u2 = math.atan( (1 - f) * math.tan(la2) )
L = (lo2 - lo1) * 0.0174532925
lamb = L
while True:
    sinArc = math.sqrt( math.pow(math.cos(u2) * math.sin(lamb),2) + math.pow(math.cos(u1) * math.sin(u2) - math.sin(u1) * math.cos(u2) * math.cos(lamb),2) )
    cosArc = math.sin(u1) * math.sin(u2) + math.cos(u1) * math.cos(u2) * math.cos(lamb)
    arc = math.atan2(sinArc, cosArc)
    sinAzimuth = ( math.cos(u1) * math.cos(u2) * math.sin(lamb) ) // ( sinArc )
    cosAzimuthSqr = 1 - math.pow(sinAzimuth, 2)
    cosProduct = cosArc - ((2 * math.sin(u1) * math.sin(u2) ) // (cosAzimuthSqr))
    C = (f//16) * cosAzimuthSqr * (4 + f * (4 - 3 * cosAzimuthSqr))
    lamb = L + (1 - C) * f * sinAzimuth * ( arc + C * sinArc * ( cosProduct + C * cosArc * (-1 + 2 * math.pow(cosProduct, 2))))
    print(lamb)
As mentioned, the problem is that the value lamb (lambda) does not change between iterations. I've even compared my code to other implementations, but they looked just about the same.
What am I doing wrong here? :-)
Thank you all!
First, you should convert your latitudes to radians too (you already do this for your longitudes):
u1 = math.atan( (1 - f) * math.tan(math.radians(la1)) )
u2 = math.atan( (1 - f) * math.tan(math.radians(la2)) )
L = math.radians((lo2 - lo1)) # better than * 0.0174532925
Once you do this and get rid of // (int divisions) and replace them by / (float divisions), lambda stops repeating the same value through your iterations and starts following this path (based on your example coordinates):
2.5325205864224847
2.5325167509030906
2.532516759118641
2.532516759101044
2.5325167591010813
2.5325167591010813
2.5325167591010813
Since you seem to expect a convergence precision of 10^(-12), this looks like it does the job.
You can now exit the loop (lambda having converged) and keep going until you compute the desired geodesic distance s.
Note: you can test your final value s here.
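For completeness, an untested sketch of the corrected loop with an explicit convergence test, followed by the remaining Wikipedia formulas for the distance s (variable names follow the question):

import math

a = 6378137.0
f = 1 / 298.257223563
b = (1 - f) * a
la1, la2 = 10, 60
lo1, lo2 = 5, 150

u1 = math.atan((1 - f) * math.tan(math.radians(la1)))
u2 = math.atan((1 - f) * math.tan(math.radians(la2)))
L = math.radians(lo2 - lo1)
lamb = L

for _ in range(200):
    sinArc = math.sqrt((math.cos(u2) * math.sin(lamb)) ** 2
                       + (math.cos(u1) * math.sin(u2)
                          - math.sin(u1) * math.cos(u2) * math.cos(lamb)) ** 2)
    cosArc = math.sin(u1) * math.sin(u2) + math.cos(u1) * math.cos(u2) * math.cos(lamb)
    arc = math.atan2(sinArc, cosArc)
    sinAzimuth = math.cos(u1) * math.cos(u2) * math.sin(lamb) / sinArc
    cosAzimuthSqr = 1 - sinAzimuth ** 2
    cosProduct = cosArc - 2 * math.sin(u1) * math.sin(u2) / cosAzimuthSqr
    C = (f / 16) * cosAzimuthSqr * (4 + f * (4 - 3 * cosAzimuthSqr))
    lamb_prev = lamb
    lamb = L + (1 - C) * f * sinAzimuth * (arc + C * sinArc * (
        cosProduct + C * cosArc * (-1 + 2 * cosProduct ** 2)))
    if abs(lamb - lamb_prev) < 1e-12:   # converged
        break

# with lambda converged, evaluate the closing formulas for the geodesic distance s
uSq = cosAzimuthSqr * (a ** 2 - b ** 2) / b ** 2
A = 1 + uSq / 16384 * (4096 + uSq * (-768 + uSq * (320 - 175 * uSq)))
B = uSq / 1024 * (256 + uSq * (-128 + uSq * (74 - 47 * uSq)))
deltaArc = B * sinArc * (cosProduct + B / 4 * (cosArc * (-1 + 2 * cosProduct ** 2)
           - B / 6 * cosProduct * (-3 + 4 * sinArc ** 2) * (-3 + 4 * cosProduct ** 2)))
s = b * A * (arc - deltaArc)
print(s)   # geodesic distance in metres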
Even if it is correctly implemented, Vincenty's algorithm will fail to converge for some points. (This problem was noted by Vincenty.) I give an algorithm which is guaranteed to converge in Algorithms for geodesics; there's a Python implementation available here. Finally, you can find more information on the problem at the Wikipedia page, Geodesics on an ellipsoid. (The talk page has examples of pairs of points for which Vincenty, as implemented by the NGS, fails to converge.)
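For reference, the geographiclib package mentioned above can be used along these lines (a small sketch using the question's example coordinates):

from geographiclib.geodesic import Geodesic

# Inverse(lat1, lon1, lat2, lon2) solves the inverse geodesic problem on the WGS84 ellipsoid
g = Geodesic.WGS84.Inverse(10, 5, 60, 150)
print(g['s12'])   # geodesic distance in metres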

Euler method (explicit and implicit)

I'd like to implement Euler's method (the explicit and the implicit one)
(https://en.wikipedia.org/wiki/Euler_method) for the following model:
x(t)' = q(x_M -x(t))x(t)
x(0) = x_0
where q, x_M and x_0 are real numbers.
I already know the method in theory, but I couldn't figure out where to insert / change the model.
Could anybody help?
EDIT: You were right. I hadn't understood the method correctly. Now, after a few hours, I think I really get it! With the explicit method I'm fairly sure (nevertheless: could anybody please have a look at my code?).
With the implicit implementation I'm not so sure it's correct. Could anyone please have a look at the implicit method and give me feedback on what is correct and what is not?
def explizit_euler():
    ''' x(t)' = q(xM - x(t))x(t)
        x(0) = x0 '''
    q = 2.
    xM = 2
    x0 = 0.5
    T = 5
    dt = 0.01
    N = T / dt
    x = x0
    t = 0.
    for i in range(0, int(N)):
        t = t + dt
        x = x + dt * (q * (xM - x) * x)
        print '%6.3f %6.3f' % (t, x)

def implizit_euler():
    ''' x(t)' = q(xM - x(t))x(t)
        x(0) = x0 '''
    q = 2.
    xM = 2
    x0 = 0.5
    T = 5
    dt = 0.01
    N = T / dt
    x = x0
    t = 0.
    for i in range(0, int(N)):
        t = t + dt
        x = (1.0 / (1.0 - q * (xM + x) * x))
        print '%6.3f %6.3f' % (t, x)
Pre-emptive note: Although the general idea should be correct, I did all the algebra in place in the editor box so there might be mistakes there. Please, check it yourself before using for anything really important.
I'm not sure how you arrived at the "implicit" formula

x = (1.0 / (1.0 - q * (xM + x) * x))

but it is wrong, and you can check this by comparing your "explicit" and "implicit" results: they should differ only slightly, but with this formula they diverge drastically.
To understand the implicit Euler method, you should first get the idea behind the explicit one. And the idea is really simple and is explained at the Derivation section in the wiki: since derivative y'(x) is a limit of (y(x+h) - y(x))/h, you can approximate y(x+h) as y(x) + h*y'(x) for small h, assuming our original differential equation is
y'(x) = F(x, y(x))
Note that the reason this is only an approximation rather than an exact value is that even over the small range [x, x+h] the derivative y'(x) changes slightly. It means that if you want a better approximation of y(x+h), you need a better approximation of the "average" derivative y'(x) over the range [x, x+h]. Let's call that approximation just y'. One such improvement is to find both y' and y(x+h) at the same time, by requiring that y' be exactly y'(x+h) (i.e. the derivative at the end of the step). This results in the following system of equations:
y'(x+h) = F(x+h, y(x+h))
y(x+h) = y(x) + h*y'(x+h)
which is equivalent to a single "implicit" equation:
y(x+h) - y(x) = h * F(x+h, y(x+h))
It is called "implicit" because here the target y(x+h) is also a part of F. And note that quite similar equation is mentioned in the Modifications and extensions section of the wiki article.
So now, turning to your case, that equation becomes
x(t+dt) - x(t) = dt*q*(xM -x(t+dt))*x(t+dt)
or equivalently
dt*q*x(t+dt)^2 + (1 - dt*q*xM)*x(t+dt) - x(t) = 0
This is a quadratic equation with two solutions:
x(t+dt) = [(dt*q*xM - 1) ± sqrt((dt*q*xM - 1)^2 + 4*dt*q*x(t))]/(2*dt*q)
Obviously we want the solution that is "close" to x(t), which is the + solution. So the code should be something like:

b = q * xM * dt - 1
x_new = (b + (b ** 2 + 4 * q * x * dt) ** 0.5) / (2 * q * dt)
(editor note:) Multiplying the numerator and denominator by the conjugate gives a numerically more stable form for small dt (where then b < 0):

x_new = (2 * x) / ((b ** 2 + 4 * q * x * dt) ** 0.5 - b)
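Putting this back into the question's routine, the implicit loop might look like this (a sketch using the stable root formula above, untested):

def implizit_euler():
    ''' x(t)' = q(xM - x(t))x(t)
        x(0) = x0, solved with backward Euler '''
    q = 2.
    xM = 2
    x0 = 0.5
    T = 5
    dt = 0.01
    N = T / dt
    x = x0
    t = 0.
    for i in range(0, int(N)):
        t = t + dt
        b = q * xM * dt - 1
        # positive root of dt*q*x_new**2 + (1 - dt*q*xM)*x_new - x = 0
        x = (2 * x) / ((b ** 2 + 4 * q * x * dt) ** 0.5 - b)
        print('%6.3f %6.3f' % (t, x))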
