scipy.special.gammainc cannot take negative values for its first argument. Are there any other Python implementations that can? I could do the integration manually, of course, but I'd like to know whether good alternatives already exist.
Correct result: 1 - Gamma[-1, 1] = 0.85
Using SciPy: scipy.special.gammainc(-1, 1) = 0
Thanks.
I typically reach for mpmath whenever I need special functions and I'm not too concerned about performance. (Although its performance in many cases is pretty good anyway.)
For example:
>>> import mpmath
>>> mpmath.gammainc(-1,1)
mpf('0.14849550677592205')
>>> 1-mpmath.gammainc(-1,1)
mpf('0.85150449322407795')
>>> mpmath.mp.dps = 50 # arbitrary precision!
>>> 1-mpmath.gammainc(-1,1)
mpf('0.85150449322407795208164000529866078158523616237514084')
I just had the same issue and ended up using the recurrence relations for the function when a<0.
http://en.wikipedia.org/wiki/Incomplete_gamma_function#Properties
Note also that the SciPy functions give the regularized forms: gammainc is the lower regularized gamma(a,x)/Gamma(a), and gammaincc is the upper regularized Gamma(a,x)/Gamma(a).
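For reference, the recurrence from the Wikipedia page can be turned into a small extension of SciPy's upper incomplete gamma to negative a. This is a sketch (the function name upper_gamma is mine, not SciPy's); it uses Gamma(a, x) = (Gamma(a+1, x) - x^a e^(-x)) / a to step up until the first argument is positive, and Gamma(0, x) = E1(x) for the a = 0 case:

```python
import numpy as np
from scipy.special import gammaincc, gamma, exp1

def upper_gamma(a, x):
    """Un-regularized upper incomplete gamma Gamma(a, x), extended to
    negative (non-positive-integer-safe via the a == 0 branch) values of a
    using the recurrence Gamma(a, x) = (Gamma(a+1, x) - x**a * exp(-x)) / a."""
    if a > 0:
        # gammaincc is regularized, so multiply Gamma(a) back in
        return gammaincc(a, x) * gamma(a)
    if a == 0:
        return exp1(x)  # Gamma(0, x) equals the exponential integral E1(x)
    # step the first argument upward until it is non-negative
    return (upper_gamma(a + 1, x) - x**a * np.exp(-x)) / a

print(upper_gamma(-1.0, 1.0))      # Gamma(-1, 1), approx. 0.1485
print(1 - upper_gamma(-1.0, 1.0))  # approx. 0.8515, as in the question
```

This reproduces the value quoted in the question, 1 - Gamma[-1, 1] = 0.85, without leaving SciPy.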
Still an issue in 2021, and this has not been improved in SciPy. It is especially frustrating that SciPy does not even provide unregularized versions of the upper and lower incomplete gamma functions. I also ended up using mpmath, which uses its own data type (mpf, for mpmath float, which supports arbitrary precision). To cook up something quick for the upper and lower incomplete gamma functions that works with numpy arrays, and that behaves as one would expect from evaluating those integrals, I came up with the following:
import numpy as np
from mpmath import gammainc
"""
In both functinos below a is a float and z is a numpy.array.
"""
def gammainc_up(a,z):
return np.asarray([gammainc(a, zi, regularized=False)
for zi in z]).astype(float)
def gammainc_low(a,z):
return np.asarray([gamainc(a, 0, zi, regularized=False)
for zi in z]).astype(float)
Note again, this is for the unregularized functions (Eqs. 8.2.1 and 8.2.2 in the DLMF); the regularized functions (Eqs. 8.2.3 and 8.2.4) can be obtained in mpmath by setting the keyword regularized=True.
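As an alternative to the explicit list comprehension, np.frompyfunc can vectorize the scalar mpmath call. A sketch (the name gammainc_up_vec is mine; the object-dtype result still needs a cast to float):

```python
import numpy as np
from mpmath import gammainc

# wrap the scalar mpmath call as a ufunc-like callable (2 inputs, 1 output)
gammainc_up_vec = np.frompyfunc(
    lambda a, z: float(gammainc(a, z, regularized=False)), 2, 1)

z = np.array([0.5, 1.0, 2.0])
print(gammainc_up_vec(-1.0, z).astype(float))
```

The middle entry is Gamma(-1, 1), the same value computed in the first answer above.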
I am trying to implement the upper incomplete gamma function of order zero in Python. Normally we would use the gammaincc function, but according to the docs it's defined only for positive a. Is there any way to handle the a = 0 case in Python? Thanks.
SciPy implements the regularized incomplete gamma function, i.e. the one divided by Gamma(a). This division makes no sense when a = 0, but the non-regularized upper incomplete gamma is still well defined there. Unfortunately, SciPy has no flag like regularized=False.
However, in the special case a=0 the upper incomplete gamma function agrees with the exponential integral exp1 which is available in SciPy:
>>> from scipy.special import exp1
>>> exp1(1.3)
0.13545095784912914
(Compare to Wolfram Alpha).
Alternatively, the mpmath library computes non-regularized incomplete gammas by default.
>>> import mpmath
>>> mpmath.gammainc(0, 1.3)
mpf('0.13545095784912914')
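For array inputs, SciPy's exp1 is already vectorized, and the identity Gamma(0, x) = E1(x) can be sanity-checked against mpmath directly. A quick sketch:

```python
import numpy as np
from scipy.special import exp1
import mpmath

# Gamma(0, x) equals the exponential integral E1(x)
x = np.array([0.5, 1.3, 2.0])
scipy_vals = exp1(x)  # vectorized over numpy arrays
mp_vals = [float(mpmath.gammainc(0, xi)) for xi in x]
print(np.allclose(scipy_vals, mp_vals))  # True
```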
I need to perform a 2D integration (one dimension has an infinite bound). In MATLAB, I did it with integral2:
int_x = integral2(fun, 0, inf, 0, a, 'abstol', 0, 'reltol', 1e-6);
In Python, I've tried scipy's dblquad:
int_x = scipy.integrate.dblquad(fun, 0, numpy.inf, lambda x: 0, lambda x: a, epsabs=0, epsrel=1e-6)
and have also tried using nested single quads. Unfortunately, both SciPy options take ~80x longer than MATLAB's.
My question is: is there a different implementation of 2D integrals within Python that might be faster (I've tried quadpy without much benefit)? Alternatively, could I compile MATLAB's integral2 function and call it from Python without needing the MATLAB runtime (and is that even kosher)?
Thanks in advance!
Brad
Update:
Turns out I don't have the "reputation" to post an image of the equation, so please bear with the formatting: fun(N, t) = P(N) N^2 S(N, t), where P(N) is a lognormal probability distribution and S(N, t) is fairly convoluted: an exponential in its simplest form, and a hypergeometric function (truncated series) in its most complex form. N is integrated from 0 to infinity, and t from 0 to pi.
First, profile. If the profile tells you that the time is spent in evaluations of fun, then your best bet is to either numba.jit it or rewrite it in Cython.
I created quadpy once because the SciPy quadrature functions were too slow for me. If you can bring your integrand into one of the respective forms (e.g., the 2D plane with weight function exp(-x) or exp(-x^2)), you should take a look.
I encountered a problem when using SymPy to solve an equation.
Here is my code:
from math import pi, hypot
from sympy import solve, solveset, sqrt, Symbol
one_x=-0.08
one_y=1.28
second_x=0
second_y=0
second_r=7
one_r=7.3
slope=-16.0000000000 (maybe more trailing 0s)
intercept=0.0
x=Symbol('x')
solveset(sqrt((x-second_x)**2+(slope*x+intercept-second_y)**2)+second_r-one_r-sqrt((x-one_x)**2+(slope*x+intercept-one_y)**2),x)
That's only part of my code, but it raises a lot of errors.
However, when I replaced all of the variables with their values, like
x=Symbol('x')
solveset(sqrt((x)**2+((-16)*x)**2)+7-7.3-sqrt((x+0.08)**2+((-16)*x-1.28)**2),x)
it works nicely and I get the output {-0.0493567429232771}.
I think it's because of the type of slope (-16 compared with -16.000000). I really want to know why an equation with floating-point numbers cannot be solved, and how I can fix it (I need the result to be precise, so I cannot just ignore the digits after the decimal point).
Thanks so much!
SymPy + algebraic equation + floating point numbers => trouble. Floating point math does not work like normal math, and SymPy is designed for the latter. Small things like 16 (integer) versus 16.0 (float) make a lot of difference in solving equations with SymPy: ideally, you would have no floating point numbers there, creating exact rational numbers instead, like this.
from sympy import S
one_x = S('-0.08')
However, you have floating point data and are looking for a floating point solution. This makes SymPy the wrong tool for the job. SymPy is for doing math with symbols, not for crunching floating point numbers. The correct solution is to use an appropriate solver from SciPy, such as brentq. It takes a bracketing interval as input (an interval at whose two ends the function has different signs). For example:
from scipy.optimize import brentq
import numpy as np
# slope, intercept, one_x, one_y, second_x, second_y, one_r, second_r as in the question
eq = lambda x: np.sqrt((x-second_x)**2 + (slope*x+intercept-second_y)**2) + second_r - one_r - np.sqrt((x-one_x)**2 + (slope*x + intercept - one_y)**2)
brentq(eq, -10, 10) # returns -0.049356742923277075
If you stick with SymPy, your equation gets outsourced to the mpmath library, which is much more limited in numerical root finding and optimization. To get its methods to converge, you'll need a really good starting point: apparently, one_x/2 is such a point.
from sympy import sqrt, Symbol, nsolve
# ... as in your code
nsolve(sqrt((x-second_x)**2+(slope*x+intercept-second_y)**2)+second_r-one_r-sqrt((x-one_x)**2+(slope*x+intercept-one_y)**2), one_x/2)
returns -0.0493567429232771.
By using sympy.solveset, which is intended for symbolic solutions, you deprive yourself not only of SciPy's powerful numeric solvers, but also of the opportunity to set a good starting value for the numeric search, which sympy.nsolve provides. Hence the lack of convergence in this numerically tricky problem. By the way, here is what makes it numerically tricky: the function is nearly constant most of the time, with one rapid change.
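For completeness, here is a fully self-contained version of the brentq approach from this answer, plugging in the values from the question so it runs as-is:

```python
import numpy as np
from scipy.optimize import brentq

# values from the question
one_x, one_y = -0.08, 1.28
second_x, second_y = 0.0, 0.0
second_r, one_r = 7.0, 7.3
slope, intercept = -16.0, 0.0

def eq(x):
    return (np.sqrt((x - second_x)**2 + (slope*x + intercept - second_y)**2)
            + second_r - one_r
            - np.sqrt((x - one_x)**2 + (slope*x + intercept - one_y)**2))

print(brentq(eq, -10, 10))  # approx. -0.0493567429232771
```

The bracket [-10, 10] works because eq changes sign across it; any smaller bracket with a sign change would do as well.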
I have a problem when using the MATLAB python engine.
I want to get approximated solutions to ODEs (using something like the ode45 function in MATLAB) from Python, but the problem is that ODE approximation requires an ODE function specification that I can't seem to create from the MATLAB Python engine.
It works fine calling MATLAB functions, such as isprime, from Python but there seems to be no way of specifying a MATLAB function in Python.
My question is therefore:
Is there any way of generating MATLAB function code from Python, or a way to specify MATLAB functions from Python?
The odefun passed to ode45, according to the docs, has to be a function handle.
Solve the ODE
y' = 2t
Use a time interval of [0,5] and the initial condition y0 = 0.
tspan = [0 5];
y0 = 0;
[t,y] = ode45(@(t,y) 2*t, tspan, y0);
@(t,y) 2*t returns a function handle to an anonymous function.
Unfortunately, function handles are listed among the data types unsupported in MATLAB <-> Python conversion:
Unsupported MATLAB Types
The following MATLAB data types are not supported by the MATLAB Engine API for Python:
Categorical array
char array (M-by-N)
Cell array (M-by-N)
Function handle
Sparse array
Structure array
Table
MATLAB value objects (for a discussion of handle and value classes see Comparison of Handle and Value Classes)
Non-MATLAB objects (such as Java® objects)
To sum up, it seems there is no straightforward way of doing it.
A potential workaround may involve some combination of engine.workspace and engine.eval, as shown in the Use MATLAB Engine Workspace in Python example.
Workaround with engine.eval (first demo):
import matlab.engine
import matplotlib.pyplot as plt
e = matlab.engine.start_matlab()
tr, yr = e.eval('ode45(@(t,y) 2*t, [0 5], 0)', nargout=2)
plt.plot(tr, yr)
plt.show()
By doing so, you avoid passing a function handle across the MATLAB/Python barrier. You pass a string (bytes) and let MATLAB evaluate it there. What's returned is pure numeric arrays. After that, you may operate on the result vectors, e.g. plot them.
Since passing arguments as literals would quickly become a pain, engine.workspace may be used to avoid it:
import matlab.engine
import matplotlib.pyplot as plt
e = matlab.engine.start_matlab()
e.workspace['tspan'] = matlab.double([0.0, 5.0])
e.workspace['y0'] = 0.0
tr, yr = e.eval('ode45(@(t,y) 2*t, tspan, y0)', nargout=2)
plt.plot(tr, yr)
plt.show()
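If the goal is just the ODE solution rather than MATLAB interop, the same toy problem can also be solved natively with SciPy. A sketch using scipy.integrate.solve_ivp (SciPy's closest analogue to ode45, which likewise uses a Runge-Kutta 4(5) scheme by default):

```python
from scipy.integrate import solve_ivp

# same ODE as the MATLAB example: y' = 2*t, y(0) = 0, on t in [0, 5]
sol = solve_ivp(lambda t, y: [2 * t], [0, 5], [0.0], rtol=1e-8, atol=1e-10)

# the exact solution is y = t**2, so y(5) = 25
print(sol.y[0, -1])
```

This avoids the MATLAB engine entirely, at the cost of rewriting the ODE right-hand side in Python.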
I have a problem with the confluent hypergeometric function in SciPy.
The code is
from scipy import special
print(special.hyp1f1(-0.5, 0.5, -705))
print(special.hyp1f1(-0.5, 0.5, -706))
and I obtain the output
47.0619041347
inf
I don't understand why the function diverges. The confluent hypergeometric function has an asymptotic expansion for large x, and there shouldn't be any pole for these values of the parameters. Am I wrong, or is this a bug? Thanks in advance for your help!
A (known) bug: scipy.special.hyp1f1(0.5, 1.5, -1000) fails.
See also the pull request "hyp1f1: better handling of large negative arguments" for the reasons (namely, exponent overflow).
Kummer's confluent hypergeometric function has poles only at non-positive integer values of its second parameter, so it is well defined for your use case.
Just FYI: you can use the excellent mpmath library to get a hyp1f1 function that does not have this issue. Without GMP/MPIR + gmpy2 installed, the library will be a bit slower than the SciPy function, but you will have arbitrary precision available.
mpmath example:
In [19]: hyp1f1(-0.5, 0.5, -706)
Out[19]: mpf('47.1')
In [20]: mp.dps = 25
In [21]: hyp1f1(-0.5, 0.5, -706)
Out[21]: mpf('47.09526954413143632617966605')
IMPORTANT NOTE
This SciPy function will not always return inf when it cannot handle the size of the return value. The arguments (-0.5, 0.5, 706) simply return an incorrect answer.
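To fold the mpmath workaround above into something reusable, a small wrapper (the name hyp1f1_safe is mine) can temporarily raise the working precision and hand back an ordinary float:

```python
import mpmath

def hyp1f1_safe(a, b, z, dps=25):
    """Evaluate 1F1(a; b; z) with mpmath at `dps` decimal digits,
    returning an ordinary Python float."""
    with mpmath.workdps(dps):  # temporarily raise the working precision
        return float(mpmath.hyp1f1(a, b, z))

print(hyp1f1_safe(-0.5, 0.5, -706))  # approx. 47.0952695441, not inf
```

Unlike a global mp.dps assignment, workdps restores the previous precision on exit, so the wrapper doesn't perturb other mpmath code.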