Numerical calculations involving exponential terms often become painful because of overflow errors. For example, suppose you have a probability density P(x) = C*exp(f(x)/k), where k is a very small number, say of the order of 10^(-5).
To find the value of C one has to integrate P(x), and that is where the overflow appears. I know it also depends on the form of f(x), but for the moment let us assume f(x) = sin(x).
How to deal with such problems?
What are the tricks we may use to avoid them?
Is the severity of such problems language dependent? If so, in which language should one write such code?
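For concreteness, here is a minimal demonstration of the issue in double precision (plain NumPy, using the sin(x)/k example above):

import numpy as np

k = 1e-5
f = np.sin                          # the example f(x) from the question

x = np.linspace(0, np.pi, 5)
print(np.exp(f(x) / k))             # RuntimeWarning: overflow; the interior values become inf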
As I mentioned in the comments, I strongly advise using analytical methods as far as you can. However, if you want to compute integrals of the form
I=Integral[Exp[f(x)],{x,a,b}]
where f(x) could potentially overflow the exponential, then you might want to renormalize the system a bit in the following way:
Assume that f(c) is the maximum of f(x) on the domain [a,b]. Then you can write:
I=Exp[f(c)] Integral[Exp[f(x)-f(c)],{x,a,b}]
It's an ugly trick, but at least your exponents will be small in the integral.
Note: I just realized this is the same as roygvib's comment.
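A minimal Python sketch of this rescaling, assuming the f(x) = sin(x)/k example from the question (the use of scipy.integrate.quad and the hand-picked maximum at pi/2 are just illustrative choices):

import numpy as np
from scipy import integrate

k = 1e-5
f = lambda x: np.sin(x) / k         # the exponent from the question
a, b = 0.0, np.pi

# On [0, pi], f is maximised at c = pi/2; in general you would locate the
# maximum numerically (e.g. with scipy.optimize.minimize_scalar on -f).
c = np.pi / 2
fc = f(c)

# The rescaled integrand exp(f(x) - f(c)) is at most 1, so it cannot overflow.
val, err = integrate.quad(lambda x: np.exp(f(x) - fc), a, b, points=[c])

# exp(f(c)) itself would overflow a double here, so keep the result on the
# log scale: log(I) = f(c) + log(integral of the rescaled integrand).
log_I = fc + np.log(val)
print("log(I) =", log_I)

If you only need the normalisation constant C = 1/I, working with log(C) = -log(I) avoids ever forming the overflowing prefactor.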
One option is to use GSL, the GNU Scientific Library (Python and Fortran wrappers are available).
There is a function gsl_sf_exp_e10_e which, according to the documentation,
computes the exponential \exp(x) using the gsl_sf_result_e10 type to return a result with extended range. This function may be useful if the value of \exp(x) would overflow the numeric range of double.
However, I would like to note that it is slower due to the additional checks performed during evaluation.
P.S. As was said earlier, it's better to use analytical solutions where possible.
Related
I have a relatively complicated function and I have calculated the analytical form of the Jacobian of this function. However, sometimes, I mess up this Jacobian.
MATLAB has a nice way to check for the accuracy of the Jacobian when using some optimization technique as described here.
The problem, though, is that it looks like MATLAB solves the optimization problem and only then reports whether the Jacobian was correct. This is extremely time consuming, especially considering that some of my optimization problems take hours or even days to compute.
Python has a somewhat similar function in scipy as described here which just compares the analytical gradient with a finite difference approximation of the gradient for some user provided input.
Is there anything I can do to check the accuracy of the Jacobian in MATLAB without having to solve the entire optimization problem?
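For reference, the scipy utility alluded to above is, I believe, scipy.optimize.check_grad; a minimal usage sketch with a made-up objective:

import numpy as np
from scipy.optimize import check_grad

def func(x):
    return np.sum(x**2) + np.sin(x[0])

def grad(x):
    g = 2.0 * x
    g[0] += np.cos(x[0])
    return g

x0 = np.array([0.3, -1.2, 2.0])
print(check_grad(func, grad, x0))   # a small value (roughly 1e-7) suggests the gradient is consistent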
A laborious but useful method I've used for this sort of thing is to check that the (numerical) integral of the purported derivative equals the difference of the function values at the end points. I have found this more convenient than comparing quotients like (f(x+h)-f(x))/h with f'(x), because of the difficulty of choosing h: on the one hand, h must not be so small that the quotient is dominated by rounding error; on the other hand, h must be small enough that the quotient is actually close to f'(x).
In the case of a function F of a single variable, the assumption is that you have code f to evaluate F and, say, fd to evaluate F'. The test is then, for various intervals [a,b], to look at the difference, which the fundamental theorem of calculus says should be 0,
Integral{ a<=x<=b | fd(x)} - (f(b)-f(a))
with the integral being computed numerically. There is no need for the intervals to be small.
Part of the error will, of course, be due to the error in the numerical approximation to the integral. For this reason I tend to use, for example, an order-40 Gauss-Legendre integrator.
For functions of several variables, you can test one variable at a time. For several functions, these can be tested one at a time.
I've found that these tests, which are of course by no means exhaustive, show up the kinds of mistakes that occur in computing derivatives quite readily.
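A rough Python sketch of this check, with a made-up f and fd and an order-40 Gauss-Legendre rule (the same idea translates directly to MATLAB):

import numpy as np

def f(x):
    return np.exp(np.sin(x))                  # function under test

def fd(x):
    return np.cos(x) * np.exp(np.sin(x))      # hand-coded derivative to verify

def ftc_residual(f, fd, a, b, order=40):
    # Residual of the fundamental theorem of calculus:
    # integral_a^b fd(x) dx - (f(b) - f(a)), via Gauss-Legendre quadrature.
    nodes, weights = np.polynomial.legendre.leggauss(order)
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)  # map [-1, 1] to [a, b]
    integral = 0.5 * (b - a) * np.dot(weights, fd(x))
    return integral - (f(b) - f(a))

for a, b in [(0.0, 1.0), (-2.0, 3.0), (1.0, 10.0)]:
    print(a, b, ftc_residual(f, fd, a, b))     # residuals near rounding error if fd is correct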
Have you considered using complex-step differentiation to check your gradient? See this description.
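For what it's worth, a minimal sketch of the complex-step check (it requires that the function is implemented with operations that extend analytically to complex arguments; the example function here is made up):

import numpy as np

def f(x):
    return np.exp(np.sin(x))

def fd(x):
    return np.cos(x) * np.exp(np.sin(x))       # analytical derivative being checked

# Complex-step derivative: f'(x) ~ Im(f(x + i*h)) / h. There is no subtractive
# cancellation, so h can be taken extremely small.
h = 1e-20
x0 = 0.7
csd = np.imag(f(x0 + 1j * h)) / h
print(csd, fd(x0), csd - fd(x0))                # difference should be near machine precision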
I am trying to expand a function of the form (X + Y + Z) ^ N where N is sufficiently large so that the expanded product will contain terms with coefficients much greater than 2 ^ 64; for the sake of this discussion let's just say that N is greater than 200. This is an issue because I am hoping to do an analysis of the expanded form of this function, and this analysis requires exact precision for all of the terms and their coefficients.
To expand the function I am using the Python module SymPy, which has seemed very promising thus far and been able to expand functions where N is > 150 in a relatively short amount of time. My concern though is that after looking through some of the expanded functions, I am seeing coefficients with more trailing zeroes than I might expect. I know that I can run everything through mpmath for my analysis after the function has been expanded, but as of now, I am unsure as to whether or not some of the larger coefficients are even exactly correct in the first place.
Under the documentation for SymPy's expand function, there is no clarification of how precise the coefficients of the expansion are when working with very large numbers. I know for a fact that SymPy uses the mpmath module for some of its functions, so I know that it is capable of arbitrary precision; I just don't know whether arbitrary precision applies to this case.
I know that I could also confirm whether the expand function is arbitrarily precise by summing all of the coefficients of a given expansion and checking whether that sum is equal to 3^N (the value of the expression at X = Y = Z = 1), but I'd rather not spend a few hours coding all the necessary pieces to make that assessment, only to find out that expand is imprecise.
If direct confirmation of its precision cannot be given, I would appreciate any suggestions for easier ways to confirm the precision of expand.
Although PR 18960 has not yet been merged, you can use it to confirm that the coefficients are correct:
>>> multinomial(15,16,14)
50151543548788717200
>>> ((x+y+z)**(15+16+14)).expand().coeff(x**15*y**16*z**14)
50151543548788717200
>>> _ > 2**64
True
Since Python supports arbitrary-precision integers and the coefficients are integers, I don't know of any reason they would not be exact.
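For an independent sanity check along the lines suggested in the question (which turns out to be a few lines rather than hours of coding), one can compare against 3**N and against the multinomial formula directly; a modest N is used here so the expansion is quick:

from math import factorial
from sympy import symbols, expand

x, y, z = symbols('x y z')
N = 45
poly = expand((x + y + z)**N)

# The sum of all coefficients is the value at x = y = z = 1, which must be exactly 3**N.
assert poly.subs({x: 1, y: 1, z: 1}) == 3**N

# Spot-check one coefficient against N!/(i! j! k!).
i, j, k = 15, 16, 14
coeff = poly.coeff(x**i * y**j * z**k)
assert coeff == factorial(N) // (factorial(i) * factorial(j) * factorial(k))
print(coeff)

Trailing zeros in such coefficients are not by themselves a sign of rounding; exact multinomial coefficients often end in zeros.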
I have a factorial lookup table that contains the first 30 integer factorials. This table is used in a function that is compiled with numba.njit. The issue is, above 20!, the number is larger than a 64-bit signed integer (9,223,372,036,854,775,807), which causes numba to raise a TypingError. If the table is reduced to only include the first 20 integer factorials the function runs fine.
Is there a way to get around this in numba? Perhaps by declaring larger integer types in the jit compiled function where the lookup table is used?
There may be some way to handle large integers in Numba, but it's not a method that I'm aware of.
But, since we know that you're trying to hand-code the evaluation of the Beta distribution in Numba, I have some other suggestions.
First though, we must be careful with our language so we don't confuse the Beta distribution and the Beta function.
What I'd actually recommend is moving all your computations on to the log scale. That is, instead of computing the pdf of the Beta distribution you'd compute the log of the pdf of the Beta distribution.
This trick is commonly used in statistical computing, as the log of the pdf is more numerically stable than the pdf itself. The Stan project, for example, works exclusively on the log scale, computing the log posterior density.
From your post history I also know that you're interested in MCMC; it is also common practice to use log pdfs to perform MCMC. In the case of MCMC, instead of having the posterior proportional to the prior times the likelihood, on the log scale you would have the log-posterior proportional to the log-prior plus the log-likelihood.
I'd recommend you use log distributions as this allows you to avoid having to ever compute $\Gamma(n)$ for large n, which is prone to integer overflow. Instead, you compute $\log(\Gamma(n))$. But don't you need to compute $\Gamma(n)$ to compute $\log(\Gamma(n))$? Actually, no. You can take a look at the scipy.special function gammaln which avoids having to compute $\Gamma(n)$ at all. One way forward then would be to look at the source code in scipy.special.gammaln and make your own numba implementation from this.
In your comment you also mention using Spouge's Approximation to approximate the Gamma function. I've not used Spouge's approximation before, but I have had success with Stirling's approximation. If you want to use one of these approximations, working on the log scale you would now take the log of the approximation. You'll want to use the rules of logs to rewrite these approximations.
With all the above considered, what I'd recommend is moving computations from the pdf to the log of the pdf. To compute the log pdf of the Beta distribution I'd make use of this approximation of the Beta function, using the rules of logs to rewrite both the approximation and the Beta pdf. You could then implement this in Numba without having to worry about integer overflow.
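To make this concrete, here is a rough sketch of a log-scale Beta log-pdf in Numba, assuming (as I believe is the case) that math.lgamma is supported in nopython mode; the scipy call is only there as an ordinary-scale sanity check:

import math
from numba import njit
from scipy.stats import beta

@njit
def log_beta_pdf(x, a, b):
    # log of the Beta(a, b) density at x, computed entirely on the log scale,
    # so no factorial table and no large integers are ever formed.
    log_beta_fn = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return (a - 1.0) * math.log(x) + (b - 1.0) * math.log(1.0 - x) - log_beta_fn

print(log_beta_pdf(0.3, 40.0, 60.0), beta.logpdf(0.3, 40.0, 60.0))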
Edit
Apologies, I'm not sure how to format maths on stack overflow.
Is it possible to get an approximate solution to a mixed integer linear programming problem with PuLP? My problem is complex and the exact resolution takes too long.
You probably do not mean Linear Programming but rather Mixed Integer Programming. (The original question asked about LPs).
LPs usually solve quite fast and I don't know a good way to find an approximate solution for them. You may want to try an interior point or barrier method and set an iteration or time limit. For Simplex methods this typically does not work very well.
MIP models can take a lot of time to solve. Solvers allow you to terminate earlier by setting a gap tolerance (gap = 0 means solving to optimality). E.g.
model.solve(GLPK(options=['--mipgap', '0.01']))
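A self-contained sketch with the bundled CBC solver as an alternative (the option names gapRel and timeLimit are from recent PuLP versions, if I remember them correctly; older versions spelled them differently):

import pulp

# A tiny illustrative MIP; the point is only the early-termination options.
prob = pulp.LpProblem("example", pulp.LpMaximize)
x = pulp.LpVariable("x", lowBound=0, cat="Integer")
y = pulp.LpVariable("y", lowBound=0, cat="Integer")
prob += 3 * x + 2 * y                # objective
prob += 2 * x + y <= 10
prob += x + 3 * y <= 15

# Stop when the relative MIP gap drops below 1% or after 60 seconds.
prob.solve(pulp.PULP_CBC_CMD(gapRel=0.01, timeLimit=60, msg=False))
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))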
Usually I use Mathematica, but I am now trying to shift to Python, so this question might be a trivial one; I am sorry about that.
Anyway, is there any built-in function in Python similar to Mathematica's Interval[{min,max}]? The link is: http://reference.wolfram.com/language/ref/Interval.html
What I am trying to do is: I have a function and I am trying to minimize it, but it is a constrained minimization, by which I mean the parameters of the function are only allowed within particular intervals.
For a very simple example, let's say f(x) is a function with parameter x and I am looking for the value of x which minimizes the function, but x is constrained within an interval (min,max). [Obviously the actual problem is not one-dimensional but a multi-dimensional optimization, so different parameters may have different intervals.]
Since it is an optimization problem, of course I do not want to pick the parameter randomly from an interval.
Any help will be highly appreciated , thanks!
If it's a highly non-linear problem, you'll need to use an algorithm such as the Generalized Reduced Gradient (GRG) Method.
The idea of the generalized reduced gradient algorithm (GRG) is to solve a sequence of subproblems, each of which uses a linear approximation of the constraints. (Ref)
You'll need to ensure that certain conditions known as the KKT conditions are met, etc. but for most continuous problems with reasonable constraints, you'll be able to apply this algorithm.
This is a good reference for such problems with a few examples provided. Ref. pg. 104.
Regarding implementation:
While I am not familiar with Python, I have built solver libraries in C++ using templates as well as function pointers, so that you can pass functions (for the objective as well as the constraints) as arguments to the solver and get your result, hopefully in polynomial time for convex problems or in cases where the initial values are reasonable.
If an ability to do that exists in Python, it shouldn't be difficult to build a generalized GRG solver.
The Python Solution:
Edit: Here is the Python solution to your problem: Python constrained non-linear optimization
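For what it's worth, the usual way to express per-parameter intervals in Python is the bounds argument of scipy.optimize.minimize (bound-constrained quasi-Newton rather than GRG); a minimal sketch with a made-up objective:

import numpy as np
from scipy.optimize import minimize

def f(p):
    # toy objective in two parameters
    x, y = p
    return (x - 2.0)**2 + (y + 1.0)**2 + np.sin(3.0 * x)

bounds = [(0.0, 1.5),    # allowed interval for x
          (-3.0, 0.0)]   # allowed interval for y

res = minimize(f, x0=[0.5, -0.5], method="L-BFGS-B", bounds=bounds)
print(res.x, res.fun)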