Calculate Integral over array in Python with output array - python

I'd like to calculate an integral of the form
∫₀^∞ g(x) e^(-iωx) dx,
where I want the results as an array (to eventually plot them as a function of omega). I have
import numpy as np
import pylab as plt
from scipy import integrate
w = np.linspace(-5, 5, 1000)
def g(x):
    return np.exp(-2*x)

def complexexponential(x, w):
    return np.exp(-1j*w*x)

def integrand(x, w):
    return g(x)*complexexponential(x, w)

integrated = np.real(integrate.quad(integrand, 0, np.inf, args=(w)))
which gives me the error "supplied function does not return a valid float". I am not very familiar with SciPy's integrate functions. Many thanks for your help in advance!

Scipy integrate.quad doesn't seem to support vector output. If you loop over all your values of w and pass only one of them at a time as args, your code seems to work fine.
Also, it doesn't handle complex integration, which you can get around using the procedure outlined in this answer (integrating the real and imaginary parts separately).
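A minimal sketch of that approach (looping over w and splitting the integrand into real and imaginary parts; the helper name fourier_component is just for illustration):
import numpy as np
from scipy import integrate

def g(x):
    return np.exp(-2*x)

def fourier_component(w):
    # quad only handles real-valued integrands, so integrate the real and imaginary parts separately
    re, _ = integrate.quad(lambda x: g(x)*np.cos(w*x), 0, np.inf)
    im, _ = integrate.quad(lambda x: -g(x)*np.sin(w*x), 0, np.inf)
    return re + 1j*im

w = np.linspace(-5, 5, 1000)
integrated = np.array([fourier_component(wi) for wi in w])
# np.real(integrated) can then be plotted against w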

Related

python: Plotting and optimizing the same function

Lets say I have the following function:
def f(x):
    return log(3*exp(3*x) + 7*exp(7*x))
I want to do two things:
1) plot the function over a range of x-values
2) find the root of the function using the Newton method from scipy
My problem is that plotting seems best done with a NumPy array x = np.linspace(-2, 2, 1000), but evaluating the function then raises the error TypeError: only size-1 arrays can be converted to Python scalars. I can fix this by simply changing log and exp to np.log and np.exp, respectively.
But doing so then makes scipy.optimize.newton unhappy.
It seems like I need to define the function twice, once for use in plotting (with np. ...) and once for optimizing in the form given above.
I can't imagine that this is actually the case. Any hints would be greatly appreciated.
Seems legit, you just need to use numpy functions instead of base math functions:
import numpy as np
from scipy import optimize
import matplotlib.pyplot as plt
%matplotlib inline
def f(x):
    return np.log(3*np.exp(3*x) + 7*np.exp(7*x))
x = np.linspace(-2,2,1000)
y = f(x)
plt.scatter(x, y)
optimize.root(f, 1)
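As a usage note (not part of the answer above): the same NumPy-based f should also work directly with scipy.optimize.newton, since the root finder only evaluates it at scalar points; the starting guess 1.0 is arbitrary.
import numpy as np
from scipy import optimize

def f(x):
    return np.log(3*np.exp(3*x) + 7*np.exp(7*x))

root = optimize.newton(f, 1.0)   # secant/Newton iteration from a scalar starting guess
print(root, f(root))             # f(root) should be close to zero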

Integrating special function and plotting

I am trying to plot an integration of a special (eg. Bessel) function and my minimal code is the following.
#!/usr/bin/env python
import matplotlib.pyplot as plt
import numpy as np
import scipy.integrate as integrate
import scipy.special as sp
from scipy.special import jn
#x = np.arange(0.0, 10.0, 0.1)
U = np.linspace(0,10,1000)
#Delta = U**2
#Delta = U-4+8*integrate.quad(lambda x: sp.jv(1,x)/(x*(1.0+np.exp(U*x*0.5))), 0, 100)
Delta = U-4+8*integrate.quad(lambda x: jn(1,x)/(x*(1.0+np.exp(U*x*0.5))), 0.1, 1000)
plt.plot(U,Delta)
plt.xlabel('U')
plt.ylabel('$\Delta$')
plt.show()
However, this gives me an error message saying quadpack.error: Supplied function does not return a valid float, whereas the function is easily plotted in Mathematica. Do Python's Bessel functions have limitations?
I have used this documentation for my plotting.
It is difficult to provide an answer that solves the problem before understanding what exactly you are trying to do. However, let me list a number of issues and provide an example that may not achieve what you are trying to do but at least it will provide a path forward.
Because your lambda function multiplies x by an array U, it returns an array instead of a number. A function that needs to be integrated should return a single number. You could fix this, for example, by replacing U by u:
f = lambda x, u: jn(1,x)/(x*(1.0+np.exp(u*x*0.5)))
Make Delta a function of u, have quad pass the additional argument u to f (defined in the previous point), and extract only the value of the integral from the tuple that quad returns (quad returns several values: the integral, the error estimate, etc.):
Delta = lambda u: -4+8*integrate.quad(f, 0.1, 1000, args=(u,))[0]
Compute Delta for each u:
deltas = np.array([Delta(u) for u in U])
Plot the data:
plt.plot(U, deltas)
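Putting those pieces together, a minimal self-contained sketch might look like the following (restoring the leading U term from the question's expression for Delta; the integration limits 0.1 and 1000 are taken from the question, and whether they are physically appropriate is a separate matter):
import numpy as np
import scipy.integrate as integrate
from scipy.special import jn
import matplotlib.pyplot as plt

# integrand as a function of the integration variable x and the parameter u
f = lambda x, u: jn(1, x)/(x*(1.0 + np.exp(u*x*0.5)))

# Delta(u): pass u through to f via args and keep only the value of the integral
Delta = lambda u: u - 4 + 8*integrate.quad(f, 0.1, 1000, args=(u,))[0]

U = np.linspace(0, 10, 1000)
deltas = np.array([Delta(u) for u in U])

plt.plot(U, deltas)
plt.xlabel('U')
plt.ylabel(r'$\Delta$')
plt.show()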

Laguerre polynomials in python using scipy, lack of convergence?

The Laguerre polynomials don't seem to converge at some orders, as can be demonstrated by running the following code.
import numpy as np
from sympy import mpmath as mp
from scipy.special import genlaguerre as genlag
from sympy.mpmath import laguerre as genlag2
from matplotlib import pyplot as plt
def laguerre(x, r_ord, phi_ord, useArbitraryPrecision=False):
    if (r_ord < 30 and phi_ord < 30) and not useArbitraryPrecision:
        polyCoef = genlag(r_ord, phi_ord)
        out = np.polyval(polyCoef, x)
    else:
        fun = lambda arg: genlag2(r_ord, phi_ord, arg)
        fun2 = np.frompyfunc(genlag2, 3, 1)
        # fun2 = np.vectorize(fun)
        out = fun2(r_ord, phi_ord, x)
    return out
r_ord = 29
phi_ord = 29
f = lambda x, useArb : mp.log10(laguerre(x, 29, 29, useArb))
mp.plot(lambda y : f(y, True) - f(y, False), [0, 200], points = 1e3)
plt.show()
I was wondering if anyone knows what is going on, or knows of any accuracy limitations of the SciPy function. Do you recommend I simply use the mpmath function? At first I thought it might stop working after a certain order, but for (100, 100) it seems to work just fine.
By running
mp.plot([lambda y : f(y, True), lambda y: f(y, False)], [0, 200], points = 1e3)
you get the following image, where the discrepancy becomes pretty clear.
Any help appreciated.
Let me know if anything needs clarification.
Using polyval with high-order polynomials (roughly n > 20) is in general a bad idea, because evaluating a polynomial from its coefficients (in the power basis) starts giving large floating-point errors at high orders. The warning in the SciPy documentation tries to tell you that.
You should use scipy.special.eval_genlaguerre(r_ord, phi_ord, float(x)) instead of genlaguerre + polyval; it uses a more stable numerical algorithm for evaluating the polynomial.
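A quick way to see the difference is a sketch comparing the two evaluation routes on the same points (eval_genlaguerre lives in scipy.special):
import numpy as np
from scipy.special import genlaguerre, eval_genlaguerre

n, alpha = 29, 29
x = np.linspace(0, 200, 5)

# power-basis route: expand into coefficients, then evaluate (loses accuracy at high order)
via_polyval = np.polyval(genlaguerre(n, alpha), x)

# recurrence-based route: numerically stable evaluation
via_eval = eval_genlaguerre(n, alpha, x)

print(np.abs(via_polyval - via_eval))   # the discrepancy between the two routes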
Instead of using scipy.special.eval_genlaguerre to evaluate a high-degree polynomial as pv suggested, you can also use numpy.polynomial.Laguerre as explained in the NumPy documentation.
Unfortunately, it doesn't seem to provide a function for generalized Laguerre polynomials.
import numpy as np
from numpy.polynomial import Laguerre
p = Laguerre([1, -2, 1])
x = np.arange(5)
p(x)
NumPy output: array([0. , 0.5, 2. , 4.5, 8. ])
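One more note (my own addition, not from the answer above): for a single plain Laguerre polynomial of a given degree, you can also ask numpy.polynomial for the basis element directly, which is evaluated with the same stable series machinery rather than power-basis coefficients:
import numpy as np
from numpy.polynomial import Laguerre

L29 = Laguerre.basis(29)        # the degree-29 Laguerre polynomial L_29(x)
x = np.linspace(0, 200, 5)
print(L29(x))                   # evaluated without expanding into power-basis coefficients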

Definite integral over one variable in a function with two variables in Scipy

I am trying to calculate the definite integral of a function with multiple variables over just one variable in scipy.
This is kind of like what my code looks like-
from scipy.integrate import quad
import numpy as np
def integrand(x, y):
    return x*np.exp(x/y)

quad(integrand, 1, 2, args=())
And it returns this type error:
TypeError: integrand() takes exactly 2 arguments (1 given)
However, it works if I put a number into args. But I don't want to, because I want y to remain as y and not a number. Does anyone know how this can be done?
EDIT: Sorry, don't think I was clear. I want the end result to be a function of y, with y still being a symbol.
Thanks to mdurant, here's what works:
from sympy import integrate, Symbol, exp
from sympy.abc import x
y=Symbol('y')
f=x*exp(x/y)
integrate(f, (x, 1, 2))
Answer:
-(-y**2 + y)*exp(1/y) + (-y**2 + 2*y)*exp(2/y)
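If you also need numbers from that symbolic result (for plotting, say), one option, as a sketch rather than part of the original answer, is to lambdify it:
import numpy as np
from sympy import integrate, Symbol, exp, lambdify
from sympy.abc import x

y = Symbol('y')
F = integrate(x*exp(x/y), (x, 1, 2))     # symbolic definite integral as an expression in y

F_num = lambdify(y, F, modules='numpy')  # turn the sympy expression into a NumPy-callable function
ys = np.linspace(0.5, 5, 10)             # avoid y = 0, which divides by zero in x/y
print(F_num(ys))                         # the definite integral evaluated for each y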
You probably just want the result to be a function of y, right?
from scipy.integrate import quad
import numpy as np
def integrand(x, y):
    return x*np.exp(x/y)

partial_int = lambda y: quad(integrand, 1, 2, args=(y,))
print(partial_int(5))
# (2.050684698584342, 2.2767173686148355e-14)
The best you can do is use functools.partial to bind the arguments you have at the moment. But one fundamentally cannot numerically evaluate a definite integral if the entire domain isn't specified yet; in that case the resulting expression will necessarily still contain symbolic parts, so the intermediate result isn't numerical.
(Assuming that you are talking about computing the definite integral over x given a specific, fixed value of y.)
You could use a lambda:
quad(lambda x: integrand(x, 10), 1, 2)
or functools.partial() (after import functools):
quad(functools.partial(integrand, y=10), 1, 2)
from scipy.integrate import quad
import numpy as np

def integrand(x, y):
    return x*np.exp(x/y)

def integrated(y):
    # definite integral over x for a single value of y
    return quad(integrand, 1, 2, args=(y,))[0]

vec_int = np.vectorize(integrated)
y = np.linspace(0.1, 10, 100)   # start above 0 to avoid dividing by zero in x/y
vec_int(y)

Method signature for Jacobian of a least squares function in scipy

Can anyone provide an example of providing a Jacobian to a least squares function in scipy?
I can't figure out the method signature they want: they say it should be a function, yet it's very hard to figure out which input parameters, in what order, this function should accept.
Here's the exponential decay fitting that I got to work with this:
import numpy as np
from scipy.optimize import leastsq
def f(var, xs):
    return var[0]*np.exp(-var[1]*xs) + var[2]

def func(var, xs, ys):
    return f(var, xs) - ys

def dfunc(var, xs, ys):
    v = np.exp(-var[1]*xs)
    return [v, -var[0]*xs*v, np.ones(len(xs))]

xs = np.linspace(0, 4, 50)
ys = f([2.5, 1.3, 0.5], xs)
yn = ys + 0.2*np.random.normal(size=len(xs))

fit = leastsq(func, [10, 10, 10], args=(xs, yn), Dfun=dfunc, col_deriv=1)
If I wanted to use col_deriv=0, I think that I would have to basically take the transpose of what I return with dfunc. You're quite right though: the documentation on this isn't so great.
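For completeness, a sketch of that col_deriv=0 variant (dfunc_rows is an illustrative name; with the default col_deriv=0, leastsq expects the Jacobian with one row per residual and one column per parameter):
import numpy as np
from scipy.optimize import leastsq

def f(var, xs):
    return var[0]*np.exp(-var[1]*xs) + var[2]

def func(var, xs, ys):
    return f(var, xs) - ys

def dfunc_rows(var, xs, ys):
    # Jacobian with shape (len(xs), 3): one row per data point, one column per parameter
    v = np.exp(-var[1]*xs)
    return np.column_stack([v, -var[0]*xs*v, np.ones(len(xs))])

xs = np.linspace(0, 4, 50)
yn = f([2.5, 1.3, 0.5], xs) + 0.2*np.random.normal(size=len(xs))
fit = leastsq(func, [10, 10, 10], args=(xs, yn), Dfun=dfunc_rows, col_deriv=0)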
