The Laguerre polynomials don't seem to be converging at some orders, as can be demonstrated by running the following code.
import numpy as np
from sympy import mpmath as mp
from scipy.special import genlaguerre as genlag
from sympy.mpmath import laguerre as genlag2
from matplotlib import pyplot as plt

def laguerre(x, r_ord, phi_ord, useArbitraryPrecision=False):
    if (r_ord < 30 and phi_ord < 30) and not useArbitraryPrecision:
        polyCoef = genlag(r_ord, phi_ord)
        out = np.polyval(polyCoef, x)
    else:
        fun = lambda arg: genlag2(r_ord, phi_ord, arg)
        fun2 = np.frompyfunc(genlag2, 3, 1)
        # fun2 = np.vectorize(fun)
        out = fun2(r_ord, phi_ord, x)
    return out

r_ord = 29
phi_ord = 29
f = lambda x, useArb: mp.log10(laguerre(x, r_ord, phi_ord, useArb))
mp.plot(lambda y: f(y, True) - f(y, False), [0, 200], points=1e3)
plt.show()
I was wondering if anyone knew what is going on, or of any accuracy limitations of the scipy function. Do you recommend I simply use the mpmath function? At first I thought it might stop working after a certain order, but for (100, 100) it seems to work just fine.
By running
mp.plot([lambda y: f(y, True), lambda y: f(y, False)], [0, 200], points=1e3)
you get the following image, where the discrepancy becomes pretty clear.
Any help appreciated.
Let me know if anything needs clarification.
Using polyval with high-order polynomials (above roughly n = 20) is in general a bad idea, because evaluating a polynomial from its coefficients (in the power basis) gives large floating-point errors at high orders. The warning in the Scipy documentation tries to tell you that.
You should use scipy.special.eval_genlaguerre(r_ord, phi_ord, float(x)) instead of genlaguerre + polyval; it uses a more stable numerical algorithm for evaluating the polynomial.
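For example, a minimal sketch of the stable evaluation (the orders and grid here are just the ones from the question):

import numpy as np
from scipy.special import eval_genlaguerre

x = np.linspace(0, 200, 1000)
# Evaluates L_29^(29)(x) by a stable recurrence, without power-basis coefficients
vals = eval_genlaguerre(29, 29, x)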
Instead of using scipy.special.eval_genlaguerre to evaluate a high-degree polynomial as pv suggested, you can also use numpy.polynomial.Laguerre as explained in the NumPy documentation.
Unfortunately, it doesn't seem to provide a function for generalized Laguerre polynomials.
import numpy as np
from numpy.polynomial import Laguerre

p = Laguerre([1, -2, 1])
x = np.arange(5)
p(x)
Output:
array([0. , 0.5, 2. , 4.5, 8. ])
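If you only need the ordinary (non-generalized) Laguerre polynomial at high degree, a minimal sketch using the basis constructor (degree 29 chosen to match the question):

p29 = Laguerre.basis(29)       # degree-29 Laguerre polynomial
p29(np.linspace(0, 200, 5))    # evaluated via a stable recurrence, not power-basis coefficients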
I am trying to reimplement Matlab's lsqcurvefit in Python using curve_fit, with no success. Below is the Matlab code I am trying to port to Python:
myfun = @(x,xdata)(exp(x(1)) ./ xdata.^exp(x(2))) - x(3);
xstart = [4, -2, 54];
pX = [2, 3, 13, 12, 38, 39];
pY = [12.7595, 8.7857, -11.8802, -10.9528, -15.4390, -15.3083];
try
    fittedmodel = lsqcurvefit(myfun, xstart, double(pX), double(pY), [], [], optimset('Display', 'off'));
    disp("fitted model:");
    disp(fittedmodel);
catch
end
Below is my Matlab output:
fitted model:
4.8389 3.3577 -2.0000
Below is my Python code:
from scipy.optimize import curve_fit
import numpy as np

pX = [2, 3, 13, 12, 38, 39]
pY = [12.7595, 8.7857, -11.8802, -10.9528, -15.4390, -15.3083]

def myfun(x, xdata):
    temp_val_1 = np.exp(x[0])
    temp_val_2 = np.exp(x[1])
    temp_val_3 = x[2]
    temp_val_4 = np.power(xdata, temp_val_2)
    temp_val_5 = np.divide(temp_val_1, temp_val_4)
    temp_val_6 = temp_val_5 - temp_val_3
    return temp_val_6

popt, pcov = curve_fit(myfun, pX, pY, p0=([4, -2, 54]))
print(popt, "\n", pcov)
and below is my Python output:
myfun() takes 2 positional arguments but 4 were given
I understand that there is something wrong with the inputs, but I don't understand what to change in order to solve this and get the same results as I do with Matlab.
Here are a few hints to get you started:
Note that curve_fit expects a function with signature f(xdata, *x), where x contains the optimization variables, i.e. the coefficients being searched for. It's just the other way around compared to Matlab's lsqcurvefit. The notation *x is Python-specific and denotes a variable number of arguments.
Additionally, you don't need the np.power and np.divide functions. The usual mathematical operators are overloaded for np.arrays and are applied elementwise, so for two np.arrays, a / b is equivalent to Matlab's a ./ b. Consequently, it's more convenient to write (and to read):
def myfun(xdata, *x):
    return np.exp(x[0]) / xdata**np.exp(x[1]) - x[2]
I obtain the following coefficients:
[ 4.01234549 -0.47409326 21.70045585]
However, there seems to be an overflow in the term np.exp(x[1]), so it might be worth reformulating the objective function or increasing the floating-point precision, i.e. using long doubles (dtype=np.float128).
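Putting it together, a minimal sketch of the full fit. Note that p0 is required here, since curve_fit cannot infer the number of parameters from a *x signature:

from scipy.optimize import curve_fit
import numpy as np

pX = np.array([2, 3, 13, 12, 38, 39], dtype=float)
pY = np.array([12.7595, 8.7857, -11.8802, -10.9528, -15.4390, -15.3083])

def myfun(xdata, *x):
    # coefficients follow xdata, the reverse of Matlab's lsqcurvefit
    return np.exp(x[0]) / xdata**np.exp(x[1]) - x[2]

popt, pcov = curve_fit(myfun, pX, pY, p0=[4, -2, 54])
print(popt)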
I am using quadpy to integrate a function in python.
Function
import numpy as np

T = 2*np.pi

def ex1(t):
    return np.where(np.logical_and((t % T >= 0), (t % T < np.pi)), t % T, np.pi)
The function is periodic, this is its plot:
from matplotlib import pyplot as plt

x = np.linspace(0, 6*T, 1000)
plt.plot(x, ex1(x))
plt.grid(True)
plt.show()
Problem
I am trying to integrate this function:
from scipy.integrate import quad
import quadpy
print(quadpy.quad(ex1, 0, 3))
print(quad(ex1, 0, 3))
produces
(array(4.5), array(1.41692995e-19))
(4.5, 4.9960036108132044e-14)
On the interval from 0 to 3, everything works fine.
However, if I increase the interval to e.g. 4, scipy still works:
print(quad(ex1, 0, 4))
produces
(7.631568411183528, 1.0717732083155035e-08)
but
print(quadpy.quad(ex1, 0, 4))
produces
IntegrationError: Tolerances (abs: 1.49e-08, rel: 1.49e-08) could not be reached with the given max_num_subintervals (= 50).
Questions
How do I prevent this error? I tried adding an argument called max_num_subintervals but that did not seem to work.
Am I using quadpy correctly for what I am trying to do? I started using it because I wanted to integrate complex-valued functions, which scipy does not support, and I would like a one-size-fits-all solution, hence using quadpy even for these easier examples where scipy would be enough.
This was a quadpy bug after all. Fixed now:
import numpy as np
from scipy.integrate import quad
import quadpy

T = 2 * np.pi

def ex1(t):
    return np.where(np.logical_and((t % T >= 0), (t % T < np.pi)), t % T, np.pi)

print(quadpy.__version__)
print(quadpy.quad(ex1, 0, 4))
print(quad(ex1, 0, 4))
0.14.7
(array(7.63156841), array(1.13827355e-16))
(7.631568411183528, 1.0717732083161812e-08)
You're probably using an outdated quadpy version.
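Assuming a pip-based install, upgrading should pick up the fix:

pip install --upgrade quadpy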
After some more experimenting, I have found that quadpy's quad method takes the same arguments as scipy's quad method, which are documented here:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.quad.html#scipy.integrate.quad
Using the epsabs, epsrel and limit optional arguments, I can thus prevent the error:
print(quadpy.quad(ex1, 0, 4, epsabs=1e-1, epsrel=1e-1, limit=100))
produces
(array(7.6323447), array(0.01666253))
However, scipy's quad method reaches an error tolerance of 1e-08, which I really can't reproduce with quadpy, even when setting the limit to a really high value like 1000000.
Why is that?
Update: It seems the behaviour I experienced is a bug, now reported in: https://github.com/nschloe/quadpy/issues/255
In general, however, this answers my initial question.
I am using Scipy's odeint (scipy.integrate.odeint) to solve some ODEs, and all is working well. However, I'd now like to include another time-dependent set of data in my calculations: for t = [0, 1, 2, 3] I've got data z = [0.1, 0.2, 0.25, 0.22] to be included. I can pass the vector as an argument, but that gives me the entire vector at every time step. Is there an efficient way of getting the current step (iterator) of the calculation? That way I could obtain z[i] for the i-th time step. Note that z has the length of t, and that both can contain several thousand elements.
Thanks
A very simple example:
import numpy as np
from scipy.integrate import odeint

def func(y, t, z):
    # I'd like to get the i-th element
    # of z, corresponding to t[i]
    return y + z[i]

result = odeint(func, [0], t, (z,))
The work-around for this problem is to use the more generic scipy.integrate.ode function. It has several integration schemes built-in, and you have more control over what happens during each iteration. See the example below:
import numpy as np
from scipy.integrate import ode

def func(t, y, z):
    return y + z

t = np.linspace(0, 1.0, 100)
dt = t[1] - t[0]
z = np.random.rand(100)
output = np.empty_like(t)

r = ode(func).set_integrator("dop853")
r.set_initial_value(0, 0).set_f_params(z[0])
for i in range(len(t)):
    r.set_f_params(z[i])
    r.integrate(r.t + dt)
    output[i] = r.y
During each iteration, the solver's value of z is updated accordingly.
For ode solvers, you can use an interpolation for time-varying inputs. In this case:
import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d

t_arr = np.array([0, 1, 2, 3])
z_arr = np.array([0.1, 0.2, 0.25, 0.22])
# Create an interpolation function for z(t)
finterp = interp1d(t_arr, z_arr, fill_value='extrapolate')

def func(y, t, z):
    print(t)
    zt = finterp(t)  # evaluate the interpolant at time t
    return y + zt

result = odeint(func, [0], t_arr, (z_arr,))
However, the solver in odeint may request time instants beyond t_arr (in your case at t ≈ 3.028), so you have to specify values for z beyond t = 3 or allow extrapolation with fill_value (as shown above). Just be careful when selecting the kind of interpolation, so that it reflects the behavior you expect.
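For instance, a smoother interpolant is just a matter of the kind argument (cubic chosen arbitrarily here):

finterp = interp1d(t_arr, z_arr, kind='cubic', fill_value='extrapolate')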
I am trying to calculate the definite integral of a function with multiple variables over just one variable in scipy.
This is roughly what my code looks like:
from scipy.integrate import quad
import numpy as np

def integrand(x, y):
    return x*np.exp(x/y)

quad(integrand, 1, 2, args=())
And it returns this type error:
TypeError: integrand() takes exactly 2 arguments (1 given)
However, it works if I put a number into args. But I don't want to, because I want y to remain as y and not a number. Does anyone know how this can be done?
EDIT: Sorry, don't think I was clear. I want the end result to be a function of y, with y still being a symbol.
Thanks to mdurant, here's what works:
from sympy import integrate, Symbol, exp
from sympy.abc import x
y=Symbol('y')
f=x*exp(x/y)
integrate(f, (x, 1, 2))
Answer:
-(-y**2 + y)*exp(1/y) + (-y**2 + 2*y)*exp(2/y)
You probably just want the result to be a function of y, right?
from scipy.integrate import quad
import numpy as np

def integrand(x, y):
    return x*np.exp(x/y)

partial_int = lambda y: quad(integrand, 1, 2, args=(y,))
print(partial_int(5))
# (2.050684698584342, 2.2767173686148355e-14)
The best you can do is use functools.partial to bind the arguments you have at the moment. But fundamentally, one cannot numerically evaluate a definite integral if the entire domain isn't specified yet; in that case the resulting expression will necessarily still contain symbolic parts, so the intermediate result isn't numerical.
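A minimal sketch of that binding, with y fixed to an arbitrary illustrative value of 10.0:

import functools
from scipy.integrate import quad
import numpy as np

def integrand(x, y):
    return x * np.exp(x / y)

# Bind y now; x remains the integration variable
f_fixed = functools.partial(integrand, y=10.0)
print(quad(f_fixed, 1, 2))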
(Assuming that you are talking about computing the definite integral over x given a specific, fixed value of y.)
You could use a lambda:
quad(lambda x: integrand(x, 10), 1, 2)
or functools.partial():
import functools
quad(functools.partial(integrand, y=10), 1, 2)
Another option is to vectorize the partial integration over an array of y values:
from scipy.integrate import quad
import numpy as np

def integrand(x, y):
    return x*np.exp(x/y)

def partial_int(y):
    # definite integral over x from 1 to 2, for one fixed y
    return quad(integrand, 1, 2, args=(y,))[0]

vec_int = np.vectorize(partial_int)
y = np.linspace(0.1, 10, 100)  # start above 0 to avoid dividing by zero in x/y
vec_int(y)
I want to integrate the product of two time- and frequency-shifted Hermite functions using scipy.integrate.quad.
However, since large-order polynomials are involved, numerical errors occur. Here's my code:
import numpy as np
import scipy.integrate
import scipy.special as sp
from math import pi

def makeFuncs():
    # Create the 0th, 4th, 8th, 12th and 16th order Hermite functions
    return [lambda t, n=n: np.exp(-0.5*t**2)*sp.hermite(n)(t) for n in np.arange(5)*4]

def ambgfun(funcs, i, k, tau, f):
    # Integrate f1(t)*f2(t+tau)*exp(-j*2*pi*f*t) over t from -inf to inf
    f1 = funcs[i]
    f2 = funcs[k]
    func = lambda t: np.real(f1(t) * f2(t+tau) * np.exp(-1j*(2*pi)*f*t))
    return scipy.integrate.quad(func, -np.inf, np.inf)

def main():
    f = makeFuncs()
    print("A00(0,0):", ambgfun(f, 0, 0, 0, 0))
    print("A01(0,0):", ambgfun(f, 0, 1, 0, 0))
    print("A34(0,0):", ambgfun(f, 3, 4, 0, 0))

if __name__ == '__main__':
    main()
The Hermite functions are orthogonal, thus the cross-integrals (A01, A34) should be equal to zero. However, they are not, as the output shows:
A00(0,0): (1.7724538509055159, 1.4202636805184462e-08)
A01(0,0): (8.465450562766819e-16, 8.862237123626351e-09)
A34(0,0): (-10.1875, 26.317246925873935)
How can I make this calculation more accurate? The Hermite function from scipy contains a weights variable which should be used for Gaussian quadrature, as given in the documentation (http://docs.scipy.org/doc/scipy/reference/special.html#orthogonal-polynomials). However, I have not found a hint in the docs on how to use these weights.
I hope you can help :)
Thanks, Max
The answer is that the result you get is numerically as close to zero as it gets. I don't think it's really possible to get much better results if you work with floating-point numbers --- you are facing a general problem of numerical integration.
Consider this:
import numpy as np
from scipy import integrate, special

f = lambda t: np.exp(-t**2) * special.eval_hermite(12, t) * special.eval_hermite(16, t)
abs_ig, abs_err = integrate.quad(lambda t: abs(f(t)), -np.inf, np.inf)
ig, err = integrate.quad(f, -np.inf, np.inf)

print(ig)
# -10.203125
print(abs_ig)
# 2.22488114805e+15
print(ig / abs_ig, err / abs_ig)
# -4.58591912155e-15 1.18053770382e-14
The integral has therefore been computed to an accuracy comparable to the floating-point epsilon, relative to the overall magnitude of the integrand. Because of the rounding error incurred when subtracting values of a large-magnitude oscillating integrand, it's not really possible to get better results.
So how to proceed? In my experience, what you'd need to do now is to approach the problem not numerically, but analytically. Importantly, the Fourier transform of Hermite polynomials times the weight function is known, so you can work in Fourier space the whole time here.
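As for the weights mentioned in the question: a minimal sketch of Gauss-Hermite quadrature, which integrates polynomial-times-exp(-t**2) products exactly in exact arithmetic, although the same floating-point cancellation still limits the achievable accuracy:

import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.special import eval_hermite

# n-point Gauss-Hermite quadrature is exact for p(t)*exp(-t**2)
# with deg(p) <= 2n - 1; here deg = 12 + 16 = 28, so n = 15 suffices.
t, w = hermgauss(15)
ig = np.sum(w * eval_hermite(12, t) * eval_hermite(16, t))
print(ig)  # zero up to cancellation in the weighted sum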