How to calculate the perimeter of an ellipse - python

I want to calculate the perimeter of an ellipse with given values for minor and major axis. I'm currently using Python.
I have calculated the minor axis and major axis lengths for the ellipse i.e. a and b.
It's easy to calculate the area, but I want to calculate the perimeter of the ellipse so that I can compute a rounded length. Do you have any idea?

According to Ramanujan's first approximation formula for the perimeter of an ellipse:
>>> import math
>>>
>>> def calculate_perimeter(a,b):
...     perimeter = math.pi * ( 3*(a+b) - math.sqrt( (3*a + b) * (a + 3*b) ) )
...     return perimeter
...
>>> calculate_perimeter(2,3)
15.865437575563961
You can also compare the result with Google's calculator.

A definition problem: the major/minor axes differ from the semi-major/semi-minor axes. The OP should be clear about which is meant, and so should anyone grabbing this code and comparing it against online solutions. You can get SymPy to solve the problem numerically; I'm using the full-axes definition here:
from sympy import *
a, b, w = symbols('a b w')
x = a/2 * cos(w)
y = b/2 * sin(w)
dx = diff(x, w)
dy = diff(y, w)
ds = sqrt(dx**2 + dy**2)
def perimeter(majr, minr):
    return Integral(ds.subs([(a,majr),(b,minr)]), (w, 0, 2*pi)).evalf().doit()

print('test1: a, b = 1 gives dia = 1 circle, perimeter/pi = ',
      perimeter(1, 1)/pi.evalf())
print('test2: a, b = 4,6 ellipse perimeter = ', perimeter(4, 6))
test1: a, b = 1 gives dia = 1 circle, perimeter/pi = 1.00000000000000
test2: a, b = 4,6 ellipse perimeter = 15.8654395892906
It's also possible to export the symbolic ds expression as a function and try it with other Python integration libraries:
import numpy as np
from scipy import integrate

func_dw = lambdify((w, a, b), ds)
print(integrate.quad(func_dw, 0, 2*np.pi, args=(4, 6)))
(15.865439589290586, 2.23277254813499e-12)
scipy.integrate.quad(func, a, b, args=(), ...)
Returns:
    y : float
        The integral of func from a to b.
    abserr : float
        An estimate of the absolute error in the result.

As Mark stated in a comment, you can simply use scipy.special.ellipe. This implementation uses the complete elliptic integral of the second kind as approximated in the original C function ellpe.c. As described in scipy's docs:
the computation uses the approximation,
E(m) ~ P(1-m) - (1-m) log(1-m) Q(1-m)
where P and Q are tenth-order polynomials
from scipy.special import ellipe
a = 3.5
b = 2.1
# eccentricity squared
e_sq = 1.0 - b**2/a**2
# circumference formula
C = 4 * a * ellipe(e_sq)
print(C)
17.868899204378693

This is kind of a meta answer comparing the ones above.
Actually, Ramanujan's second approximation is more accurate and a bit more complex than the formula in Rezwan4029's answer (which uses Ramanujan's first approximation). The second approximation is:
π * ((a+b) + (3(a-b)²) / (10*(a+b) + sqrt(a² + 14ab + b²)))
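For reference, here is a direct Python transcription of that formula (my own sketch; a and b are the semi-axes, matching the convention of the first answer):

import math

def perimeter_ramanujan2(a, b):
    # Ramanujan's second approximation; a, b are the semi-axes
    return math.pi * ((a + b) + 3*(a - b)**2 /
                      (10*(a + b) + math.sqrt(a**2 + 14*a*b + b**2)))

print(perimeter_ramanujan2(2, 3))  # ~15.8655, very close to the values above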
But I looked at all the answers above and compared their results. For good reasons which will become apparent later I chose Gabriel's version as the truth source, i.e. the value to compare the others against.
For the answer Rezwan4029 gave, I plotted the error in percent over a grid of 2**(-10) .. 2**9. This is the result (both axes are the power, so the point (3|5) shows the error for an ellipse of radii 2**3, 2**5):
It is obvious that only the difference in the power is relevant for the error, so I also plotted this:
What emerges in any case is that the error ranges from 0 for circles to 0.45% for extremely eccentric ellipses. Depending on your application this might be completely acceptable or render the solution unusable.
For Ramanujan's 2nd approximation formula the situation is very similar, the error is about 1/10 of the former:
The sympy solution of Mark Dickinson and the scipy solution of Gabriel still show some differences, but those are at most in the range of 1e-6, so a different ballpark altogether. The sympy solution is, however, extremely slow, so the scipy version should probably be used in most cases.
For the sake of completeness, here's a distribution of the error (this time the logarithm of the error is on the z-axis, otherwise it wouldn't tell us very much, so the height corresponds roughly with the negative of the number of valid digits):
Conclusion: Use the scipy method. It's fast and very likely very accurate, maybe even the most accurate of the three proposed methods.

Use the improvement made by a Russian mathematician a few years ago (not an infinite-series calculation but a convergent calculation using the AGM and MAGM): http://www.ams.org/notices/201208/rtx120801094p.pdf or
https://indico-hlit.jinr.ru/event/187/contributions/1769/attachments/543/931/SAdlaj.pdf
One use case: surface plots in matplotlib of a function z = f(x,y) where f cannot be written in terms of standard functions, e.g. a script for drawing a surface including iso-perimeter curves (all X-Y points on one curve are the semi-axes of ellipses having the same perimeter). Or contact the mathematician directly, or buy the article "An Arithmetic-Geometric Mean of a Third Kind!" at springernature.com: Semjon Adlaj, Federal Research Center “Informatics and Control” of the Russian Academy of Sciences, Vavilov St. 44, Moscow 119333, Russia, SemjonAdlaj#gmail.com
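For readers who want to try the AGM idea without chasing the paper, here is a minimal sketch (my own, using the classical AGM evaluation of the complete elliptic integral E, not Adlaj's MAGM refinement):

import math

def perimeter_agm(a, b, tol=1e-15):
    # a, b are the semi-axes; C = 4*a*E(m) with m = 1 - (b/a)**2, a >= b,
    # where E(m) is evaluated with the textbook AGM scheme (quadratic convergence).
    if a < b:
        a, b = b, a
    m = 1.0 - (b / a)**2
    x, y, c = 1.0, math.sqrt(1.0 - m), math.sqrt(m)
    s, p = 0.5 * c * c, 0.5
    while c > tol:
        x, y, c = 0.5 * (x + y), math.sqrt(x * y), 0.5 * (x - y)
        p *= 2.0
        s += p * c * c
    E = (math.pi / (2.0 * x)) * (1.0 - s)
    return 4.0 * a * E

print(perimeter_agm(3.5, 2.1))  # ~17.8689, agrees with the scipy.special.ellipe result above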

There are some good answers but I wanted to clarify things in terms of exact/approximate calculations, as well as computational speed.
For the exact circumference using pure python, check out my pyellipse code https://gist.github.com/TimSC/4be20baeac7890e15773d31efb752d23 The approach I implemented was proposed by Adlaj 2012 (as suggested by #floppy_molly).
Alternatively, for the exact circumference, use scipy.special.ellipe as described by #Gabriel. This is twice as slow as Adlaj 2012.
For a good approximation that is fast to compute and has no scipy dependency, see Ramanujan's 2nd approximation as described by #Alfe.
For another good approximation that is fast to compute (that avoids using square root), use the Padé approximation by Jacobsen and Waadeland 1985 http://www.numericana.com/answer/ellipse.htm#hudson
import math

h = pow(a-b, 2.0) / pow(a+b, 2.0)
C = (math.pi * (a+b) * (256.0 - 48.0 * h - 21.0 * h*h)
     / (256.0 - 112.0 * h + 3.0 * h*h))
There are many other approaches but these are the most useful for normal applications.


Double antiderivative computation in python

I have the following problem. I have a function f defined in python using numpy functions. The function is smooth and integrable on positive reals. I want to construct the double antiderivative of the function (assuming that both the value and the slope of the antiderivative at 0 are 0) so that I can evaluate it on any positive real smaller than 100.
Definition of antiderivative of f at x:
integrate f(s) with s from 0 to x
Definition of double antiderivative of f at x:
integrate (integrate f(t) with t from 0 to s) with s from 0 to x
The actual form of f is not important, so I will use a simple one for convenience. But please note that even though my example has a known closed form, my actual function does not.
import numpy as np
f = lambda x: np.exp(-x)*x
My solution is to construct the antiderivative as an array using naive numerical integration:
N = 10000
delta = 100/N
xs = np.linspace(0,100,N+1)
vs = f(xs)
avs = np.cumsum(vs)*delta
aavs = np.cumsum(avs)*delta
This of course works but it gives me arrays instead of functions. But this is not a big problem as I can interpolate aavs using a spline to get a function and get rid of the arrays.
from scipy.interpolate import UnivariateSpline
aaf = UnivariateSpline(xs, aavs)
The function aaf is approximately the double antiderivative of f.
The problem is that even though it works, there is quite a bit of overhead before I can get my function and precision is expensive.
My other idea was to interpolate f by a spline and take the antiderivative of that; however, this introduces numerical errors that are too big for the purpose I want to use the function for.
Is there any better way to do that? By better I mean faster without sacrificing accuracy.
Edit: What I hope is possible is to use some kind of Fourier transform to avoid integrating twice. I hope that there is some convenient transform of vs that allows me to multiply the values component-wise with xs and transform back to get the double antiderivative. I played with this a bit, but I got lost.
Edit: I figured out that using the trapezoidal rule instead of a naive sum increases the accuracy quite a bit. Using Simpson's rule should increase the accuracy further, but it's somewhat fiddly to do with numpy arrays.
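For illustration, a minimal sketch of the trapezoidal variant (my own, assuming SciPy >= 1.6; older versions call the function cumtrapz):

import numpy as np
from scipy.integrate import cumulative_trapezoid

f = lambda x: np.exp(-x)*x
N = 10000
xs = np.linspace(0, 100, N+1)
vs = f(xs)
avs = cumulative_trapezoid(vs, xs, initial=0)    # antiderivative, equals 0 at x=0
aavs = cumulative_trapezoid(avs, xs, initial=0)  # double antiderivative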
Edit: As #user202729 rightfully complains, this seems off. The reason it seems off is because I have skipped some details. I explain here why what I say makes sense, but it does not affect my question.
My actual goal is not to find the double antiderivative of f, but to find a transformation of this. I have skipped that because I think it only confuses the matter.
The function f decays exponentially as x approaches 0 or infinity. I minimize the numerical error in the integration by starting the sum from 0 and going up to approximately the peak of f. This ensures that the relative error is approximately constant. Then I start from the opposite direction, from some very big x, and go back to the peak. Then I do the same for the antiderivative values.
Then I transform the aavs by another function which is sensitive to numerical errors. Then I find the region where the errors are big (the values oscillate violently) and drop these values. Finally I approximate what I believe are good values by a spline.
Now if I use spline to approximate f, it introduces an absolute error which is the dominant term in a rather large interval. This gets "integrated" twice and it ends up being a rather large relative error in aavs. Then once I transform aavs, I find that the 'good region' has shrunk considerably.
EDIT: The actual form of f is something I'm still looking into. However, it is going to be a generalisation of the lognormal distribution. Right now I am playing with the following family.
I start by defining a generalization of the normal distribution:
import numpy as np
from scipy import special

def pdf_n(params, center=0.0, slope=8):
    scale, min, diff = params
    if diff > 0:
        r = min
        l = min + diff
    else:
        r = min - diff
        l = min
    def retfun(m):
        x = (m - center)/scale
        E = special.expit(slope*x)*(r - l) + l
        return np.exp( -np.power(1 + x*x, E)/2 )
    return np.vectorize(retfun)
It may not be obvious what is happening here, but the result is quite simple. The function decays as exp(-x^(2l)) on the left and as exp(-x^(2r)) on the right. For min=1 and diff=0, this is the normal distribution. Note that this is not normalized. Then I define
g = pdf_n(params)
f = np.vectorize(lambda x:g(np.log(x))/x/area)
where area is the normalization constant.
Note that this is not the actual code I use. I stripped it down to the bare minimum.
You can compute the two np.cumsum (and the scaling by delta) at once more efficiently using Numba. This is significantly faster since there is no need for several temporary arrays to be allocated, filled, read again and freed. Here is a naive implementation:
import numpy as np
import numba as nb

@nb.njit('float64[::1](float64[::1], float64)')  # assume vs is contiguous
def doubleAntiderivative_naive(vs, delta):
    res = np.empty(vs.size, dtype=np.float64)
    sum1, sum2 = 0.0, 0.0
    for i in range(vs.size):
        sum1 += vs[i] * delta
        sum2 += sum1 * delta
        res[i] = sum2
    return res
However, this summation is not very good in terms of numerical stability. A Kahan summation is needed to improve the accuracy (or possibly the alternative Kahan–Babuška–Klein algorithm if you are paranoid about accuracy and performance does not matter so much). Note that NumPy uses a pairwise summation algorithm, which is quite good but far from perfect in terms of accuracy (it is a good compromise between performance and accuracy).
Moreover, delta can be factored out of the summation (i.e. the result just needs to be multiplied by delta**2 at the end).
Here is an implementation using the more accurate Kahan summation:
@nb.njit('float64[::1](float64[::1], float64)')
def doubleAntiderivative_accurate(vs, delta):
    res = np.empty(vs.size, dtype=np.float64)
    delta2 = delta * delta
    sum1, sum2 = 0.0, 0.0
    c1, c2 = 0.0, 0.0
    for i in range(vs.size):
        # Kahan summation of the antiderivative of vs
        y1 = vs[i] - c1
        t1 = sum1 + y1
        c1 = (t1 - sum1) - y1
        sum1 = t1
        # Kahan summation of the double antiderivative of vs
        y2 = sum1 - c2
        t2 = sum2 + y2
        c2 = (t2 - sum2) - y2
        sum2 = t2
        res[i] = sum2 * delta2
    return res
Here is the performance of the approaches on my machine (with an i5-9600KF processor):
    Numpy cumsum:      51.3 us
    Naive Numba:       11.6 us
    Accurate Numba:    37.2 us
Here is the relative error of the approaches (based on the provided input function):
    Numpy cumsum:      1e-13
    Naive Numba:       5e-14
    Accurate Numba:    2e-16
    Perfect precision: 1e-16 (assuming 64-bit numbers are used)
If f can be easily computed using Numba (this is the case here), then vs[i] can be replaced by calls to f (inlined by Numba). This helps to reduce the memory consumption of the computation (N can be huge without saturating your RAM).
As for the interpolation, splines often give good numerical results, but they are quite expensive to compute and AFAIK they require the whole array to be computed (each item of the array affects the whole spline, although some items may have a negligible impact on their own). Regarding your needs, you could consider using Lagrange polynomials instead. You should be careful when using Lagrange polynomials near the edges; in your case you can easily solve the divergence issue at the edges by extending the array with the border values (since you know the derivative at each edge of vs is 0). With this method you can apply the interpolation on the fly, which can be good for both performance (typically if the computation is parallelized) and memory usage.
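To make the Lagrange suggestion concrete, here is a minimal sketch (my own; lagrange_local is a hypothetical helper) that evaluates, on the fly, the Lagrange polynomial through the few grid points nearest the query:

import numpy as np

def lagrange_local(xq, xs, ys, npts=4):
    # Evaluate the Lagrange polynomial through the npts grid points nearest xq.
    i0 = int(np.searchsorted(xs, xq)) - npts // 2
    i0 = min(max(i0, 0), len(xs) - npts)          # clamp the window to the grid
    xn, yn = xs[i0:i0 + npts], ys[i0:i0 + npts]
    val = 0.0
    for i in range(npts):
        li = 1.0
        for j in range(npts):
            if j != i:
                li *= (xq - xn[j]) / (xn[i] - xn[j])
        val += yn[i] * li
    return val

# e.g. lagrange_local(50.24, xs, aavs) for the arrays from the question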
First, I created a version of the code I find more intuitive. Here I multiply the cumulative sum values by the bin width. I believe there is a small error in the original version of the code related to the bin width.
import numpy as np
f = lambda x: np.exp(-x)*x
N = 1000
xs = np.linspace(0,100,N+1)
domainwidth = ( np.max(xs) - np.min(xs) )
binwidth = domainwidth / N
vs = f(xs)
avs = np.cumsum(vs)*binwidth
aavs = np.cumsum(avs)*binwidth
Next, for visualization here is some very simple plotting code:
import matplotlib
import matplotlib.pyplot as plt
plt.figure()
plt.scatter( xs, vs )
plt.figure()
plt.scatter( xs, avs )
plt.figure()
plt.scatter( xs, aavs )
plt.show()
The first integral matches the known result of the example expression and can be seen on wolfram
Below is a simple function that extracts an element from the double antiderivative. Note that int truncates rather than rounds, so this is a crude lookup. I assume this is what you have implemented already.
def extract_double_antideriv_value(x):
    return aavs[int(x/binwidth)]
singleresult = extract_double_antideriv_value(50.24)
print('singleresult', singleresult)
Whatever full computation steps are required, we need to know them before we can start optimizing. Do you have a million different functions to integrate? If you only need to query a single double anti-derivative many times, your original solution should be fairly ideal.
Symbolic Approximation:
Have you considered approximations to the original function f which have closed-form integrals? You have a limited domain on which the function lives. Perhaps approximate f with a Taylor series (which can be constructed with a known maximum error) and then integrate exactly? (Consider Padé, Taylor, Fourier, Chebyshev, Lagrange (as suggested by another answer), etc.) A sketch of this idea follows.
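As a small illustration of that idea (my own sketch, using a Chebyshev fit from the list above; the degree is an arbitrary choice you would have to validate against your real f):

import numpy as np

f = lambda x: np.exp(-x)*x
xs = np.linspace(0, 100, 2001)
cheb = np.polynomial.chebyshev.Chebyshev.fit(xs, f(xs), deg=60)
# The fitted polynomial can be integrated exactly; the constants are fixed so that
# the value and slope of the double antiderivative are 0 at x = 0.
aaf_cheb = cheb.integ(2, lbnd=0)
print(aaf_cheb(50.24))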
Log Tricks:
Another alternative for dealing with spiky errors would be to take the log of your original function. Is f always positive? Is the integration error caused by the neighborhood around the max being very small? If so, you can study ln(f) or even ln(ln(f)) instead. It would really help to understand more what f looks like.
Approximation Integration Tricks
There exist countless integration tricks in general which can yield approximate closed-form solutions to otherwise intractable integrals. A very common one when exponential functions are involved (I think yours is exponential?) is Laplace's method. But which trick to pull out of the bag is highly dependent on the conditions f satisfies.
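As a tiny illustration of Laplace's method (my own, on the toy f from the question, so the peak is not sharp and the approximation is only rough):

import numpy as np
from scipy.integrate import quad

# f(x) = x*exp(-x) = exp(h(x)) with h(x) = log(x) - x, maximal at x0 = 1,
# so Laplace's method gives: integral ~ exp(h(x0)) * sqrt(2*pi/|h''(x0)|)
f = lambda x: x*np.exp(-x)
x0 = 1.0
laplace = np.exp(np.log(x0) - x0) * np.sqrt(2*np.pi / (1.0/x0**2))
exact = quad(f, 0, np.inf)[0]     # equals 1 exactly (Gamma(2))
print(laplace, exact)             # ~0.92 vs 1.0: rough, but it shows the idea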

expand 1 dim vector by using taylor series of log(1+e^x) in python

I need to non-linearly expand each pixel value from a 1-dim pixel vector using the Taylor series expansion of a specific non-linear function (e^x, log(x), or log(1+e^x)), but my current implementation is not right, at least based on Taylor series concepts. The basic intuition is to take the pixel array as the input neurons of a CNN model, where each pixel should be non-linearly expanded with the Taylor series expansion of a non-linear function.
new update 1:
From my understanding of Taylor series, a Taylor series is written for a function F of a variable x in terms of the value of F and its derivatives at another value x0. In my problem, F is the non-linear transformation of the features (a.k.a. pixels), x is each pixel value, and x0 = 0, i.e. a Maclaurin series approximation.
new update 2
If we use the Taylor series of log(1+e^x) with approximation order 2, each pixel value will yield two new pixels, by taking the first and second expansion terms of the Taylor series.
graphic illustration
Here is the graphical illustration of the above formulation:
Where X is pixel array, p is approximation order of taylor series, and α is the taylor expansion coefficient.
I want to non-linearly expand pixel vectors with the Taylor series expansion of a non-linear function, as the above illustration demonstrates.
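For instance, here is a minimal NumPy sketch (my own illustration, separate from the model code below) of what I mean, using the first two Maclaurin terms of log(1+e^x) = log(2) + x/2 + x^2/8 + ..., so that each pixel yields two new values:

import numpy as np

def taylor_log1pexp_terms(pixels):
    # first- and second-order Maclaurin terms of log(1 + e^x), per pixel
    # (the constant log(2) term is dropped here)
    return np.stack([pixels / 2.0, pixels**2 / 8.0], axis=-1)

x = np.random.rand(4, 4, 3)           # toy 4x4 RGB image
expanded = taylor_log1pexp_terms(x)   # shape (4, 4, 3, 2): two terms per pixel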
My current attempt
This is my current attempt, which does not work correctly for pixel arrays. I was thinking about how to make the same idea applicable to pixel arrays.
import tensorflow as tf
from tensorflow.keras.layers import Input, Lambda

def taylor_func(x, approx_order=2):
    x_ = x[..., None]
    x_ = tf.tile(x_, multiples=[1, 1, approx_order + 1])
    pows = tf.range(0, approx_order + 1, dtype=tf.float32)
    x_p = tf.pow(x_, pows)
    x_p_ = x_p[..., None]
    return x_p_

x = Input(shape=(4, 4, 3))
x_new = Lambda(lambda x: taylor_func(x, max_pow))(x)
my new updated attempt:
import math

x_input = Input(shape=(32, 32, 3))

def maclurin_exp(x, powers=2):
    out = 0
    for k in range(powers):
        out += ((-1)**k) * (x ** (2*k)) / (math.factorial(2 * k))
    return out

x_input_new = Lambda(lambda x: maclurin_exp(x, max_pow))(x_input)
This attempt doesn't yield what the above mathematical formulation describes. I bet I missed something while doing the expansion. Can anyone point me to how to make this correct? Any better ideas?
goal
I want to take a pixel vector and have it non-linearly distributed or expanded with the Taylor series expansion of a certain non-linear function. Is there any possible way to do this? Any thoughts? Thanks.
This is a really interesting question but I can't say that I'm clear on it as of yet. So, while I have some thoughts, I might be missing the thrust of what you're looking to do.
It seems like you want to develop your own activation function instead of using something like ReLU or softmax. Certainly no harm there. And you gave three candidates: e^x, log(x), and log(1+e^x).
Notice that log(x) asymptotically approaches negative infinity as x --> 0. So log(x) is right out. If that was intended as a check on the answers you get, or was something jotted down as you were falling asleep, no worries. But if it wasn't, you should spend some time and make sure you understand the underpinnings of what you are doing, because the consequences can be quite high.
You indicated you were looking for a canonical answer and you get a two for one here. You get both a canonical answer and highly performant code.
Consider that you're not likely to be able to write faster, more streamlined code than the folks behind SciPy, NumPy, or pandas. Or PyPy. Or Cython, for that matter. Their stuff is the standard. So don't try to compete against them by writing your own, less performant (and possibly buggy) version which you will then have to maintain as time passes. Instead, maximize your development and run times by using them.
Let's take a look at the implementation of e^x in SciPy and give you some code to work with. I know you don't need a graph at this stage, but they're pretty and can help you understand how the Taylor (or Maclaurin, i.e. Taylor about 0) approximation will work as the order of the approximation changes. It just so happens that SciPy has Taylor approximation built in.
import scipy
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import approximate_taylor_polynomial
x = np.linspace(-10.0, 10.0, num=100)
plt.plot(x, np.exp(x), label="e^x", color = 'black')
for degree in np.arange(1, 4, step=1):
    e_to_the_x_taylor = approximate_taylor_polynomial(np.exp, 0, degree, 1, order=degree + 2)
    plt.plot(x, e_to_the_x_taylor(x), label=f"degree={degree}")
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.0, shadow=True)
plt.tight_layout()
plt.axis([-10, 10, -10, 10])
plt.show()
That produces this:
But let's say you're good with 'the maths', so to speak, and are willing to go with something slightly slower if it's more 'mathy', in that it handles symbolic notation well. For that, let me suggest SymPy.
And with that in mind here is a bit of SymPy code with a graph because, well, it looks good AND because we need to go back and hit another point again.
from sympy import series, Symbol, log, E
from sympy.functions import exp
from sympy.plotting import plot
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 13,10
plt.rcParams['lines.linewidth'] = 2
x = Symbol('x')
def taylor(function, x0, n):
    """ Defines Taylor approximation of a given function
    function -- is our function which we want to approximate
    x0 -- point where to approximate
    n -- order of approximation
    """
    return function.series(x, x0, n).removeO()
# I get eyestain; feel free to get rid of this
plt.rcParams['figure.figsize'] = 10, 8
plt.rcParams['lines.linewidth'] = 1
c = log(1 + pow(E, x))
plt = plot(c, taylor(c,0,1), taylor(c,0,2), taylor(c,0,3), taylor(c,0,4), (x,-5,5),legend=True, show=False)
plt[0].line_color = 'black'
plt[1].line_color = 'red'
plt[2].line_color = 'orange'
plt[3].line_color = 'green'
plt[4].line_color = 'blue'
plt.title = 'Taylor Series Expansion for log(1 +e^x)'
plt.show()
I think either option will get you where you need go.
Ok, now for the other point. You clearly stated after a bit of revision that log(1+e^x) was your first choice. But the others don't pass the sniff test: e^x vacillates wildly as the degree of the polynomial changes. Because of the opaqueness of algorithms and how few people can conceptually understand this stuff, data scientists can screw things up to a degree people can't even imagine. So make sure you're very solid on the theory for this.
One last thing: consider looking at the CDF of the Erlang distribution as an activation function (assuming I'm right and you're looking to roll your own activation function as an area of research). I don't think anyone has looked at that, but it strikes me as promising. I think you could break out each channel of the RGB as one of the two parameters, with the other being the physical coordinate.
You can use tf.tile and tf.math.pow to generate the elements of the series expansion. Then you can use tf.math.cumsum to compute the partial sums s_i. Eventually you can multiply with the weights w_i and compute the final sum.
Here is a code sample:
import math
import tensorflow as tf
x = tf.keras.Input(shape=(32, 32, 3)) # 3-channel RGB.
# The following is determined by your series expansion and its order.
# For example: log(1 + exp(x)) to 3rd order.
# https://www.wolframalpha.com/input/?i=taylor+series+log%281+%2B+e%5Ex%29
order = 3
alpha = tf.constant([1/2, 1/8, -1/192]) # Series coefficients.
power = tf.constant([1.0, 2.0, 4.0])
offset = math.log(2)
# These are the weights of the network; using a constant for simplicity here.
# The shape must coincide with the above order of series expansion.
w_i = tf.constant([1.0, 1.0, 1.0])
elements = offset + alpha * tf.math.pow(
    tf.tile(x[..., None], [1, 1, 1, 1, order]),
    power
)
s_i = tf.math.cumsum(elements, axis=-1)
y = tf.math.reduce_sum(w_i * s_i, axis=-1)

equation system with fsolve

I am trying to find a solution for a system of equations using scipy.optimize.fsolve in Python 2.7. The goal is to calculate equilibrium concentrations for a chemical system. Due to the nature of the problem, some of the constants are very small. For some combinations of parameters I do get a proper solution; for some parameters I don't find a solution. Either the solutions are negative, which is not reasonable from a physical point of view, or fsolve produces:
ier = 3, 'xtol=0.000000 is too small, no further improvement in the approximate\n solution is possible.')
ier = 4, 'The iteration is not making good progress, as measured by the \n improvement from the last five Jacobian evaluations.')
ier = 5, 'The iteration is not making good progress, as measured by the \n improvement from the last ten iterations.')
It seems to me, based on my research, that the failure to find proper solutions to the equation system is connected to the float64 datatype not being precise enough. As a friend pointed out, the system is ill-conditioned, with parameters differing by several orders of magnitude.
So I tried to use fsolve with the mpfr type provided by the gmpy2 module, but that resulted in the following error:
TypeError: Cannot cast array data from dtype('O') to dtype('float64') according to the rule 'safe'
Here is a small example with parameters which lead to a solution if the randomized starting values happen to be good. However, if the constant C_HCL is chosen to be something like 1e-4 or bigger, then I never find a proper solution.
from numpy import *
from scipy.optimize import *
K_1 = 1e-8
K_2 = 1e-8
K_W = 1e-30
C_HCL = 1e-11
C_NAOH = K_W/C_HCL
C_HL = 1e-6
if C_HCL-C_NAOH > 0:
    Saeure_Base = C_HCL-C_NAOH+sqrt(K_W)
    OH_init = K_W/(Saeure_Base)
elif C_HCL-C_NAOH < 0:
    OH_init = C_NAOH-C_HCL+sqrt(K_W)
    Saeure_Base = K_W/OH_init
# some randomized start parameters
G1 = random.uniform(0, 2)*Saeure_Base
G2 = random.uniform(0, 2)*OH_init
G3 = random.uniform(1, 2)*C_HL*(sqrt(K_W))/(Saeure_Base+OH_init)
G4 = random.uniform(0.1, 1)*(C_HL - G3)/2
G5 = C_HL - G3 - G4
zGuess = array([G1,G2,G3,G4,G5])
#equation system / 5 variables --> H3O, OH, HL, H2L, L
def myFunction(z):
    H3O = z[0]
    OH = z[1]
    HL = z[2]
    H2L = z[3]
    L = z[4]
    F = empty((5))
    F[0] = H3O*L/HL - K_1
    F[1] = OH*H2L/HL - K_2
    F[2] = K_W - OH*H3O
    F[3] = C_HL - HL - H2L - L
    F[4] = OH+L+C_HCL-H2L-H3O-C_NAOH
    return F
z = fsolve(myFunction,zGuess, maxfev=10000, xtol=1e-15, full_output=1,factor=0.1)
print z
So the questions are: Is this problem caused by the precision of float64, and
if yes, (how) can it be solved with Python? Is fsolve the way to go? Would I need to change the fsolve function so it accepts a different data type?
The root of your problem is either theoretical or numerical.
The scipy.optimize.fsolve function is based on the MINPACK Fortran solver (http://www.netlib.org/minpack/). This solver uses a Newton-Raphson-type optimisation algorithm to provide the solution.
There are underlying assumptions about the smoothness of the function when you use this algorithm. For example, the Jacobian matrix at the solution point x is supposed to be invertible. The one you should be more concerned about is the basins of attraction.
In order to converge, the starting point of the algorithm needs to be near the actual solution, i.e. in the basin of attraction. This condition is always met for convex functions; however, it is easy to find functions for which this algorithm behaves badly. Your function is one of these, as you have fractions of your input parameters.
To address this issue you should just change the starting point. The starting point also becomes very important for functions with multiple solutions: this picture from the Wikipedia article shows the solution found depending on the starting point (five colours for five solutions), so you should be careful with your solution and actually check its "physical" aspects.
For the numerical aspects, the Newton-Raphson algorithm needs the value of the Jacobian matrix (the matrix of derivatives). If it is not provided to the MINPACK solver, the Jacobian is estimated with a finite-difference formula. The perturbation step for the finite-difference formula can be provided via epsfcn (its default of None only makes sense when fprime is provided, since no Jacobian estimation is needed in that case). So first you should look into that. You could also specify the Jacobian directly by differentiating your function by hand.
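For example, here is a minimal sketch (mine, not part of the original answer) of an analytic Jacobian for the system in the question, passed to fsolve via fprime; it reuses myFunction, zGuess and the constants defined above (variable order: H3O, OH, HL, H2L, L):

import numpy as np

def myJacobian(z):
    H3O, OH, HL, H2L, L = z
    return np.array([
        [L/HL,  0.0,    -H3O*L/HL**2,  0.0,    H3O/HL],  # d of F[0] = H3O*L/HL - K_1
        [0.0,   H2L/HL, -OH*H2L/HL**2, OH/HL,  0.0   ],  # d of F[1] = OH*H2L/HL - K_2
        [-OH,  -H3O,     0.0,          0.0,    0.0   ],  # d of F[2] = K_W - OH*H3O
        [0.0,   0.0,    -1.0,         -1.0,   -1.0   ],  # d of F[3] = C_HL - HL - H2L - L
        [-1.0,  1.0,     0.0,         -1.0,    1.0   ],  # d of F[4] = OH+L+C_HCL-H2L-H3O-C_NAOH
    ])

z = fsolve(myFunction, zGuess, fprime=myJacobian, full_output=1)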
However, the minimum value for the step size will be the machine precision, also called machine epsilon. For your problem, you have very small input values, which can be a problem. I would suggest multiplying every one of them by the same value (like 10^6); it is equivalent to a change of units but will avoid rounding errors and problems with machine precision.
This problem is also apparent when you look at the parameter xtol=1e-15 you provided. In your error message it shows up as xtol=0.000000, as it is below machine precision and cannot be taken into account. Also, if you look at your line F[2] = K_W - OH*H3O, given the machine precision it does not matter whether K_W is 1e-15 or 1e-30: compared to machine precision, 0 is a solution in both cases. To avoid this problem, just multiply everything by a bigger value.
So to sum up:
For the Newton-Raphson algorithm, the initialisation point matters!
For this algorithm, you should specify how you compute the Jacobian!
In numerical computation, never work with very small values. You can easily change the dimensions to something different: it is a basic unit conversion, like working in grams instead of kilograms.

Integral of Intensity function in python

There is a function which determines the intensity of the Fraunhofer diffraction pattern of a circular aperture... (more information)
The integral of the function over the distance x = [-3.8317, 3.8317] should be about 83.8% (if we assume that I0 is 100), and when you increase the distance to [-13.33, 13.33] it should be about 95%.
But when I use integral in python, the answer is wrong.. I don't know what's going wrong in my code :(
from scipy.integrate import quad
from scipy import special as sp
I0=100.0
dist=3.8317
I= quad(lambda x:( I0*((2*sp.j1(x)/x)**2)) , -dist, dist)[0]
print I
The result of the integral can't be bigger than 100 (I0) because this is the diffraction of I0... I don't know, maybe scaling... maybe the method! :(
The problem seems to be in the function's behaviour near zero. If the function is plotted, it looks smooth:
However, scipy.integrate.quad complains about round-off errors, which is very strange for such a nice curve. The reason is that the function is not defined at 0 (of course, you are dividing by zero!), hence the integration does not go well.
You may use a simpler integration method or do something about your function. You may also be able to integrate it to very close to zero from both sides. Still, with these numbers the integral does not look right, judging from your results.
However, I think I have a hunch of what your problem is. As far as I remember, the integral you have shown is actually the intensity (power/area) of Fraunhofer diffraction as a function of distance from the center. If you want to integrate the total power within some radius, you will have to do it in two dimensions.
By simple area integration rules you should multiply your function by 2 pi r before integrating (or x instead of r in your case). Then it becomes:
f = lambda r: r*(sp.j1(r)/r)**2
or
f = lambda r: sp.j1(r)**2/r
or even better:
f = lambda r: r * (sp.j0(r) + sp.jn(2,r))**2
The last form is best as it does not suffer from any singularities. It is based on Jaime's comment to the original answer (see the comment below this answer!).
(Note that I omitted a couple of constants.) Now you can integrate it from zero to infinity (no negative radii):
import numpy as np
fullpower = quad(f, 1e-9, np.inf)[0]
Then you can integrate from some other radius and normalize by the full intensity:
pwr = quad(f, 1e-9, 3.8317)[0] / fullpower
And you get 0.839 (which is quite close to 84 %). If you try the farther radius (13.33):
pwr = quad(f, 1e-9, 13.33)[0] / fullpower
which gives 0.954.
It should be noted that we introduce a small error by starting the integration from 1e-9 instead of 0. The magnitude of the error can be estimated by trying different values for the starting point. The integration result changes very little between 1e-9 and 1e-12, so they seem to be safe. Of course, you could use, e.g., 1e-30, but then there may be numerical instability in the division. (In this case there isn't, but in general singularities are numerically evil.)
Let us do one thing still:
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0.01, 20, 1000)
intg = np.array([ quad(f, 1e-9, xx)[0] for xx in x])
plt.plot(x, intg/fullpower)
plt.grid('on')
plt.show()
And this is what we get:
At least this looks right, the dark fringes of the Airy disk are clearly visible.
As for the last part of the question: I0 defines the maximum intensity (the units may be, e.g., W/m2), whereas the integral gives the total power (if the intensity is in W/m2, the total power is in W). Setting the maximum intensity to 100 does not guarantee anything about the total power. That is why it is important to calculate the total power.
There actually exists a closed form equation for the total power radiated onto a circular area:
P(x) = P0 ( 1 - J0(x)^2 - J1(x)^2 ),
where P0 is the total power.
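As a quick cross-check of that closed form (my own sketch using scipy.special):

import scipy.special as sp

frac = lambda x: 1.0 - sp.j0(x)**2 - sp.j1(x)**2   # P(x)/P0
print(frac(3.8317), frac(13.33))   # roughly 0.838 and 0.95, as quoted in the question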
Note that you also can get a closed form solution for your integration using Sympy:
import sympy as sy
sy.init_printing() # LaTeX like pretty printing in IPython
x,d = sy.symbols("x,d", real=True)
I0=100
dist=3.8317
f = I0*((2*sy.besselj(1,x)/x)**2) # the integrand
F = f.integrate((x, -d, d)) # symbolic integration
print(F.evalf(subs={d:dist})) # numeric evalution
F evaluates to:
1600*d*besselj(0, Abs(d))**2/3 + 1600*d*besselj(1, Abs(d))**2/3 - 800*besselj(1, Abs(d))**2/(3*d)
with besselj(0,r) corresponding to sp.j0(r).
There might be a singularity in the integration algorithm at x = 0. You can exclude this point from the integration with the points argument:
f = lambda x:( I0*((2*sp.j1(x)/x)**2))
I = quad(f, -dist, dist, points = [0])
I then get the following result (is this your desired result?):
331.4990321315221

Plotting Jacobi Elliptic Function

I've written a program in Python using the pygame library for plotting the phase and modulus graphs of complex functions.
I'm not a programmer and I don't have any math background. But now I want to know how I can numerically evaluate the value of a Jacobi elliptic function at some point z. I've found the definition of the function on Wikipedia (Jacobi elliptic functions), and there is an integral, but I don't understand how I can use it to evaluate the function at a point z of the complex plane. I know how to numerically evaluate a path integral from some point a to b in the complex plane, but there are theta and phi parameters and I don't understand them.
Could you help me?
I don't need Python code (I'll write it myself if I understand the principle); it would be enough if you explained the algorithm step by step.
You could just use mpmath.
from mpmath import ellipfun
print(ellipfun('cd', 1.0 + 2.0j, 0.5))
(1.90652944795345 + 0.225277477847159j)
scipyx, my collection of extensions to SciPy, has support for complex-valued arguments in Jacobi elliptic functions.
Install with
pip install scipyx
and use as
import scipyx as spx
u = 1.0 + 2.0j
m = 0.8
# sn, cn, dn, ph = scipy.special.ellipj(x, m) # not working
sn, cn, dn, ph = spx.ellipj(u, m)
If you're after plotting those, take a look at cplot (also by me):
Having read the Wikipedia article on Jacobi elliptic functions and the one at http://mysite.du.edu/~jcalvert/math/jacobi.htm, I believe this to be an interpretation.
z is a point in the complex plane; z' is its complementary modulus, where z'^2 = 1 - z^2.
It seems to be the convention that for the Jacobi elliptic functions k is used instead of z, m is used for k^2, and k is such that k^2 is real and 0 < k^2 < 1.
The integral is a function u of two parameters, k and phi:
u(k, phi) = the integral as given
Note then that instead of starting with a z in the complex plane you are starting with a real m, 0 < m < 1, and the results relate to the complex solutions of z^2 = m.
So for a given m you could numerically integrate for a range of values of phi (for example 0 to 6π in steps of π/12), giving u.
Now for a given m you have a data set relating u and phi.
The elliptic function sn is the inverse of this, i.e. given u, which phi gives this u?
So looking up u in the data gives the phi results.
Note that for a given u there can be more than one phi.
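A minimal numerical sketch of this procedure (my own, restricted to real u; sn_table is a hypothetical helper):

import numpy as np
from scipy.integrate import quad
from scipy.special import ellipj

def sn_table(u, m, nphi=801):
    # tabulate u = F(phi | m) = integral_0^phi dtheta / sqrt(1 - m*sin(theta)^2),
    # invert u(phi) by interpolation, then sn(u) = sin(phi)
    integrand = lambda th: 1.0/np.sqrt(1.0 - m*np.sin(th)**2)
    phis = np.linspace(0.0, 2.0*np.pi, nphi)
    us = np.array([quad(integrand, 0.0, p)[0] for p in phis])
    phi = np.interp(u, us, phis)
    return np.sin(phi)

print(sn_table(1.0, 0.8), ellipj(1.0, 0.8)[0])   # the two values should agree closely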
