Confluent hypergeometric function divergence with scipy - python

I have a problem with the confluent hypergeometric function in scipy.
The code is
from scipy import special
print(special.hyp1f1(-0.5, 0.5, -705))
print(special.hyp1f1(-0.5, 0.5, -706))
and I obtain the output
47.0619041347
inf
I don't understand why the function diverges. The confluent hypergeometric function has an asymptotic expansion for large x, and there shouldn't be any pole for these values of the parameters. Am I wrong, or is this a bug? Thanks in advance for your help!

A (known) bug: scipy.special.hyp1f1(0.5, 1.5, -1000) fails.
See also the pull request "hyp1f1: better handling of large negative arguments" for the underlying reason (exponent overflow).
Kummer's confluent hypergeometric function has poles only where the second parameter is a non-positive integer, so it is well defined for your use case.
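As a quick sanity check (my own snippet, not part of scipy), the leading term of the asymptotic expansion for x -> -inf, M(a, b, x) ~ Gamma(b)/Gamma(b - a) * (-x)**(-a), is finite for these parameters and already lands close to the correct value:
from scipy.special import gamma

a, b, x = -0.5, 0.5, -706.0
leading = gamma(b) / gamma(b - a) * (-x) ** (-a)   # leading asymptotic term only
print(leading)   # ~47.095, so the true value is clearly finite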

Just FYI: you can use the excellent mpmath library to get a hyp1f1 function that does not have this issue. Without GMP/MPIR and gmpy2 installed, the library will be a bit slower than the scipy function, but you will have arbitrary precision available.
mpmath example:
In [18]: from mpmath import mp, hyp1f1
In [19]: hyp1f1(-0.5, 0.5, -706)
Out[19]: mpf('47.1')
In [20]: mp.dps = 25
In [21]: hyp1f1(-0.5, 0.5, -706)
Out[21]: mpf('47.09526954413143632617966605')
IMPORTANT NOTE
This scipy function will not always return inf when it cannot handle the size of the return value. With the arguments (-0.5, 0.5, 706) it simply returns an incorrect answer.

Related

Upper Incomplete Gamma Function of order 0 in scipy

I am trying to implement the upper incomplete gamma function of order zero in Python. Normally we would use the gammaincc function, but according to the docs it is defined only for positive a. Is there any way to implement the a = 0 case in Python? Thanks.
SciPy implements the regularized incomplete gamma function, the one divided by Gamma(a). This division makes no sense when a = 0, but the non-regularized upper incomplete gamma function is still well defined there. Unfortunately there is no flag like regularized=False in SciPy.
However, in the special case a=0 the upper incomplete gamma function agrees with the exponential integral exp1 which is available in SciPy:
>>> from scipy.special import exp1
>>> exp1(1.3)
0.13545095784912914
(Compare to Wolfram Alpha).
Alternatively, the mpmath library computes non-regularized incomplete gammas by default.
>>> import mpmath
>>> mpmath.gammainc(0, 1.3)
mpf('0.13545095784912914')
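If you need this over a whole array, exp1 is already vectorized; here is a small cross-check against mpmath (my own snippet, not from the docs):
import numpy as np
import mpmath
from scipy.special import exp1

x = np.array([0.5, 1.3, 4.0])
print(exp1(x))                                      # Gamma(0, x) via E1, vectorized
print([float(mpmath.gammainc(0, xi)) for xi in x])  # same values from mpmath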

Quicker calculation of double integral in python (like MatLab's integral2)

I need to perform a 2D integration (one dimension has an infinite bound). In MATLAB, I have done it with integral2:
int_x = integral2(fun, 0, inf, 0, a, 'abstol', 0, 'reltol', 1e-6);
In Python, I've tried scipy's dblquad:
int_x = scipy.integrate.dblquad(fun, 0, numpy.inf, lambda x: 0, lambda x: a, epsabs=0, epsrel=1e-6)
and have also tried using nested single quads. Unfortunately, both scipy options take ~80x longer than MATLAB's.
My question is: is there a different implementation of 2D integrals in Python that might be faster (I've tried quadpy without much benefit)? Alternatively, could I compile MATLAB's integral2 function and call it from Python without needing the MATLAB runtime (and is that even kosher)?
Thanks in advance!
Brad
Update:
It turns out that I don't have the "reputation" to post an image of the equation, so please bear with the formatting: fun(N, t) = P(N) N^2 S(N, t), where P(N) is a lognormal probability distribution and S(N, t) is fairly convoluted: it is an exponential in its simplest form and a hypergeometric function (a truncated series) in its most complex form. N is integrated from 0 to infinity and t from 0 to pi.
First, profile. If the profile tells you that the bottleneck is the evaluation of fun, then your best bet is to either numba.jit it or rewrite it in Cython.
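For instance, here is a minimal sketch of the numba route with a made-up placeholder integrand (the real fun isn't posted, and a is just a placeholder upper limit for t); whether this helps depends on how expensive each evaluation of fun is:
import numpy as np
from numba import njit
from scipy.integrate import dblquad

a = np.pi  # placeholder upper limit for t

@njit
def fun(t, N):
    # Placeholder for P(N) * N**2 * S(N, t); substitute the real integrand here.
    return np.exp(-N) * N**2 * np.cos(t)**2

int_x, err = dblquad(fun, 0, np.inf, lambda x: 0.0, lambda x: a,
                     epsabs=0, epsrel=1e-6)
print(int_x, err)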
I created quadpy once because the scipy quadrature functions were too slow for me. If you can bring your integrand into one of the supported forms (e.g., the 2D plane with weight function exp(-x) or exp(-x^2)), you should take a look.

RuntimeWarning: overflow encountered in np.exp(x**2)

I need to calculate exp(x**2) where x = numpy.arange(30,90). This raises the warning:
RuntimeWarning: overflow encountered in exp
inf
I cannot safely ignore this warning, and neither SymPy nor mpmath is a solution for me. I need to perform array operations, so a NumPy solution would be my dream.
Does anyone know how to handle this problem?
You could use a data type that has the necessary range, for example decimal.Decimal:
>>> import numpy as np
>>> from decimal import Decimal
>>> x = np.arange(Decimal(30), Decimal(90))
>>> y = np.exp(x ** 2)
>>> y[-1]
Decimal('1.113246031563799750400684712E+3440')
But what are you using these numbers for? Could you avoid the exponentiation and work with logarithms? More detail about your problem would be helpful.
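For instance, if what you ultimately need are ratios or sums of these huge numbers, one option (a sketch, not the only way) is to keep the logarithms x**2 and only exponentiate differences via the log-sum-exp trick:
import numpy as np

x = np.arange(30, 90).astype(float)
log_y = x ** 2                        # log of exp(x**2); no overflow here
# normalized weights exp(x**2) / sum(exp(x**2)) without ever forming exp(x**2):
weights = np.exp(log_y - np.logaddexp.reduce(log_y))
print(weights[-3:])                   # the largest term dominates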
I think you can solve this problem with normalization; that is how I overcame it. Before normalizing, my classification accuracy was 86%; after normalizing, it was 96%!
Two common ways to implement normalization are:
first: Min-Max scaling
second: Z-score standardization
I use the first method, slightly altered: I divide by one tenth of the maximum, so the largest value of the result is 10 and exp(-10) does not overflow.
I hope my answer helps you! (^_^)
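For what it's worth, here is a literal sketch of the scaling described above (with a made-up data array); note that rescaling changes the values being exponentiated, so it only makes sense when the inputs are features you are free to normalize:
import numpy as np

x = np.random.rand(100) * 500       # made-up feature values on an arbitrary scale
x_scaled = x / (x.max() / 10.0)     # largest value becomes 10
y = np.exp(-x_scaled)               # worst case exp(-10), no overflow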

incomplete gamma function in python?

scipy.special.gammainc cannot take negative values for the first argument. Are there any other implementations in Python that can? I could certainly do the integration manually, but I'd like to know whether good alternatives already exist.
Correct result: 1 - Gamma[-1, 1] = 0.85
Using SciPy: scipy.special.gammainc(-1, 1) = 0
Thanks.
I typically reach for mpmath whenever I need special functions and I'm not too concerned about performance. (Although its performance in many cases is pretty good anyway.)
For example:
>>> import mpmath
>>> mpmath.gammainc(-1,1)
mpf('0.14849550677592205')
>>> 1-mpmath.gammainc(-1,1)
mpf('0.85150449322407795')
>>> mpmath.mp.dps = 50 # arbitrary precision!
>>> 1-mpmath.gammainc(-1,1)
mpf('0.85150449322407795208164000529866078158523616237514084')
I just had the same issue and ended up using the recurrence relations for the function when a<0.
http://en.wikipedia.org/wiki/Incomplete_gamma_function#Properties
Note also that the scipy functions gammainc and gammaincc give the regularized forms, i.e. the incomplete gamma functions divided by Gamma(a).
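For example, here is a rough sketch of that downward recurrence, Gamma(a, x) = (Gamma(a+1, x) - x**a * exp(-x)) / a, for negative non-integer a, seeded with scipy's regularized gammaincc at a shifted positive parameter (this helper is my own, not a library function):
import numpy as np
from scipy.special import gammaincc, gamma

def upper_gamma_neg(a, x):
    # Non-regularized Gamma(a, x) for a < 0, a not an integer, x > 0.
    n = int(np.ceil(-a))            # shift a up into (0, 1]
    s = a + n
    g = gammaincc(s, x) * gamma(s)  # non-regularized Gamma(s, x) for s > 0
    for _ in range(n):              # recurse back down to a
        s -= 1.0
        g = (g - x**s * np.exp(-x)) / s
    return g

print(upper_gamma_neg(-0.5, 1.0))   # compare with mpmath.gammainc(-0.5, 1)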
Still an issue in 2021, and scipy still hasn't improved this. It is especially frustrating that scipy does not even provide unregularised versions of the upper and lower incomplete gamma functions. I also ended up using mpmath, which uses its own data type (mpf, for mpmath floating-point, which supports arbitrary precision). To cook up something quick for the upper and lower incomplete gamma functions that works with numpy arrays and behaves the way one would expect from evaluating those integrals, I came up with the following:
import numpy as np
from mpmath import gammainc

"""
In both functions below a is a float and z is a numpy array.
"""

def gammainc_up(a, z):
    return np.asarray([gammainc(a, zi, regularized=False)
                       for zi in z]).astype(float)

def gammainc_low(a, z):
    return np.asarray([gammainc(a, 0, zi, regularized=False)
                       for zi in z]).astype(float)
Note again, this is for the unregularised functions (Eqs. 8.2.1 and 8.2.2 in the DLMF); the regularised functions (Eqs. 8.2.3 and 8.2.4) can be obtained in mpmath by setting the keyword regularized=True.

Does Python's random module have a substitute for numpy.random.exponential?

I've been using Numpy's numpy.random.exponential function for a while. I now see that Python's random module has many functions that I didn't know about. Does it have something that replaces numpy.random.exponential? It would be nice to drop the numpy requirement from my project.
If anything about random.expovariate() does not suit your needs, it's also easy to roll your own version:
import math
import random

def exponential(beta):
    return -beta * math.log(1.0 - random.random())
It seems a bit of overkill to have a dependency on NumPy just for this functionality.
Note that this function accepts the mean beta as a parameter, as does the NumPy version, whereas the parameter lambd of random.expovariate() is the inverse of beta.
http://docs.python.org/library/random.html#random.expovariate
random.expovariate(lambd)
Exponential distribution. lambd is 1.0 divided by the desired mean. It should be nonzero. (The parameter would be called "lambda", but that is a reserved word in Python.) Returned values range from 0 to positive infinity if lambd is positive, and from negative infinity to 0 if lambd is negative.
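For reference, here are the two parameterizations side by side (mean beta for NumPy, rate lambd = 1/beta for the standard library); this is just an illustration, not from the docs:
import random
import numpy as np

beta = 2.5
sample_stdlib = random.expovariate(1.0 / beta)   # stdlib takes the rate 1/beta
sample_numpy = np.random.exponential(beta)       # numpy takes the mean beta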
