How to deal with exponent overflow of float64 precision in Python?

I am a newbie in Python, so sorry for the simple question.
In the following code, I want to compute the exponential and then take the log:
Y = numpy.log(1 + numpy.exp(1000))
The problem is that when I take the exponential of 710 or larger, numpy.exp() returns inf; even if I print it as float64, it prints inf.
Any help regarding the problem will be appreciated.

You can use the function np.logaddexp() to do such operations. It computes logaddexp(x1, x2) == log(exp(x1) + exp(x2)) without explicitly computing the intermediate exp() values. This avoids the overflow. Since exp(0.0) == 1, you would compute np.logaddexp(0.0, 1000.0) and get the result of 1000.0, as expected.
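For example, here is a quick sketch with the value from the question; the naive expression still overflows to inf, while logaddexp stays finite:
import numpy as np

y_naive = np.log(1 + np.exp(1000))   # RuntimeWarning: overflow in exp; result is inf
y_safe = np.logaddexp(0.0, 1000.0)   # log(exp(0) + exp(1000)) without forming exp(1000)
print(y_naive, y_safe)               # inf 1000.0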

Check this out:
>>> x = numpy.exp(100)
>>> y = x+1
>>> y==x
True
So even with exp(100) (which computes fine), adding 1 (or even a rather big number) has no effect: the smaller value is absorbed, and the two results compare strictly equal.
Playing with sys.float_info.epsilon, I tested the following:
>>> numpy.log(1e20+numpy.exp(100))==numpy.log(numpy.exp(100))
True
>>> numpy.log(1e30+numpy.exp(100))==numpy.log(numpy.exp(100))
False
so even a value like 1e20 is absorbed by exp(100) ...
So you would get exactly 1000.0 as your result even if it worked.

Use the decimal library:
>>> import numpy as np
>>> np.exp(1000)
inf
>>> from decimal import Decimal
>>> x = Decimal(1000)
>>> np.exp(x)
Decimal('1.970071114017046993888879352E+434')
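Decimal also has its own exp() method (which is likely what NumPy falls back to here for an object input); a minimal sketch of calling it directly, with precision controlled by the decimal context:
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 28    # default precision: 28 significant digits
>>> Decimal(1000).exp()
Decimal('1.970071114017046993888879352E+434')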

Round arithmetic during evaluation

Question
When evaluating arithmetic, multiple steps (PEMDAS) are taken during evaluation. I know you can evaluate an operation and then round it, but at times you need to round your data so that it never exceeds a certain precision throughout the evaluation. This brings me to my question: how can you round at every step of the evaluation instead of just at the end?
Examples
For our first example, we will be using the simple operation 0.125/0.375 and rounding to 2 decimals.
# This operation evaluates to 1/3
>>> 0.125/0.375
0.3333333333333333
# If we wanted to round it we could just do
>>> round(0.125/0.375, 2)
0.33
# But if we wanted to round at every step of PEMDAS the following would be necessary
>>> round(round(0.125, 2)/round(0.375, 2), 2)
0.32
# Same equation as above but written as (1/8)/(3/8)
>>> round(round(round(1, 2)/round(8, 2), 2)/round(round(3, 2)/round(8, 2), 2), 2)
0.32
As you can see, you get a different result when rounding is performed at every step rather than just at the end.
Although a bit cumbersome, this approach gets the job done. Problems arise, though, when the equation is not hardcoded but rather received from the user:
# Rounding cannot be applied here in the same way that we did above
>>> eval(input("Arithmetic: "))
Arithmetic: (1/8)/(3/8)
0.3333333333333333
Use cases
This may seem pretty useless at first but can actually be very valuable for many things.
Here is a simple example where rounding at each step would be necessary for finding the holes of a function:
# undefined.py
from math import *
import numpy as np

function = input("Function in terms of x: ")

def is_undefined(x):
    x = round(x, 2)  # To deal with minor Python inaccuracies (ex: 1.000000000000001)
    try:
        eval(function)
        return False
    except ZeroDivisionError:
        return True

undefined = [x for x in np.linspace(-5, 5, 1001) if is_undefined(float(x))]
print(undefined)
# Works perfectly!
>>> python undefined.py
Function in terms of x: (x**2)*(x-2)/(x-2)
[2.0]
# Unable to find the hole at x=pi
>>> python undefined.py
Function in terms of x: (x**2)*(2*x - 2*pi)/(x - pi)
[]
The decimal module provides a Decimal type which can be configured so that all arithmetic operations are rounded to a certain number of decimal places:
>>> import decimal as d
>>> d.setcontext(d.Context(prec=2))
>>> x = d.Decimal(0.125)
>>> y = d.Decimal(0.375)
>>> x / y
Decimal('0.33')
You can force rounding of the numbers before the division by using the unary + operation, which normally does nothing, but in this case it applies the precision from the current context, changing the result (to be more inaccurate, of course):
>>> (+x) / (+y)
Decimal('0.32')
So a solution for an expression coming from user input could be to replace all number literals and instances of the variable x with Decimal objects of the same values. Here I've used a regular expression to do that, together with a unary + to force rounding before each operation:
import decimal as d
import re
d.setcontext(d.Context(prec=2))
function = input("Function in terms of x: ")
function = re.sub(r'([0-9]+(\.[0-9]+)?|x)', r'(+d.Decimal(\1))', function)
# ...
Note there is no longer a need to write x = round(x, 2), because the expression itself forces x to be rounded.
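As a usage sketch (with a hard-coded expression standing in for input(), and assuming the setup above), the rewritten string can then be evaluated directly:
import decimal as d
import re

d.setcontext(d.Context(prec=2))
expr = re.sub(r'([0-9]+(\.[0-9]+)?|x)', r'(+d.Decimal(\1))', "(1/8)/(3/8)")
print(expr)        # ((+d.Decimal(1))/(+d.Decimal(8)))/((+d.Decimal(3))/(+d.Decimal(8)))
print(eval(expr))  # 0.32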
You may be looking for symbolic math instead, such as SymPy, which can probably do what you're really looking for:
specifically, it does not alias transcendental numbers (like pi and e), and it waits to reduce irreducible fractions into decimal space until asked to evaluate to a decimal.
>>> from sympy import *
>>> expr = "(1/8)/(3/8)"
>>> simplify(expr) # precise value
1/3
>>> simplify(expr).evalf() # decimal aliasing
0.333333333333333
>>> N("(1/8)/(3/8)") # using sympy.N()
0.333333333333333
This can also be used to solve equations
>>> x = symbols("x", real=True)
>>> solve(x**2 - 1) # simple solution
[-1, 1]
>>> solve(x**2 - pi) # more complex solution
[-sqrt(pi), sqrt(pi)]
>>> [N(expr) for expr in solve(x**2 - pi)] # decimal approximation
[-1.77245385090552, 1.77245385090552]
This can also be used (perhaps evilly) with Python constructs
>>> [N(x * pi) for x in range(10)] # lots of approximations!
[0, 3.14159265358979, 6.28318530717959, 9.42477796076938, 12.5663706143592, 15.7079632679490, 18.8495559215388, 21.9911485751286, 25.1327412287183, 28.2743338823081]

How to add 1 to very large numbers in Python

So imagine I have
>>> a = 725692137865927813642341235.00
If I do
>>> sum = a + 1
and afterwards
>>> sum == a
True
This is because a is bigger than a certain threshold.
Is there any trick like the logsumexp to perform this?
PS: a is an np.float64.
If a has to be specifically of type float, then no, that's not possible. In fact, the imprecision is much greater:
>>> a = 725692137865927813642341235.00
>>> a + 10000 == a
True
However, there are other data types that can be used to represent (almost) arbitrary precision decimal values or fractions.
>>> import decimal, fractions
>>> d = decimal.Decimal(a)
>>> d + 1 == d
False
>>> f = fractions.Fraction(a)
>>> f + 1 == f
False
(Note: of course, doing Decimal(a) or Fraction(a) does not magically restore the already lost precision of a; if you want to preserve that, you should pass the full value as a string.)
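A quick sketch of that last point, building the Decimal from the full string so no precision is lost (with a context precision large enough to hold the result):
>>> import decimal
>>> decimal.getcontext().prec = 50
>>> d = decimal.Decimal("725692137865927813642341235.00")
>>> d + 1
Decimal('725692137865927813642341236.00')
>>> d + 1 == d
False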
0) import decimal
1) set up an appropriate precision on decimal.getcontext() (the .prec attribute)
2) declare your number as a decimal.Decimal() instance
>>> import decimal
>>> decimal.getcontext().prec
28
>>> decimal.getcontext().prec = 300
>>> dec_a = decimal.Decimal( '725692137865927813642341235.0' )
It is a pleasure to use the decimal module for solvers that need extremely extended numerical precision.
BONUS:
The decimal module has very powerful context methods that preserve its strengths: .add(), .subtract(), .multiply(), .fma(), .power(). With these you can build almost arbitrary-precision solver methods ...
Definitely worth mastering these decimal.getcontext() methods - your solvers spring into another league in precision and un-degraded convergence.
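A minimal sketch of those context methods, continuing the session above:
>>> ctx = decimal.getcontext()
>>> ctx.add( dec_a, decimal.Decimal( '1' ) )
Decimal('725692137865927813642341236.0')
>>> ctx.power( decimal.Decimal( '2' ), decimal.Decimal( '100' ) )
Decimal('1267650600228229401496703205376')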
What about dividing a by 100,000, then adding 0.00001, then multiplying it back up again?
E.g.
a=725692137865927813642341235.00
a /= 100000
a += 0.00001
a *= 100000

Summation of large numbers in python yields the maximal parameter

In my program I use numpy to exponentiate some numbers, then I use the sum function to add them up.
I've noticed that summing those large numbers, with or without numpy, just returns the largest value, unchanged.
exp_joint_probabilities = np.array([1.57171938e+81, 1.60451506e+56, 1.00000000e+00])
exp_joint_probabilities.sum()
=> 1.571719381352921e+81
The same with just python:
(1.57171938e+81+1.60451506e+56+1.00000000e+00)==1.57171938e+81
=>True
Is this a problem with approximation? Should I use a larger datatype to represent the numbers?
How can I get a more accurate result for this kind of calculation?
You could use the decimal standard library:
from decimal import Decimal
a = Decimal(1.57171938e+81)
b = Decimal(1.60451506e+56)
d = a + b
print(d)
print(d > a and d > b)
Output:
1.571719379999999945626903708E+81
True
You could convert it back to a float afterwards, but this will cause the same problem as before.
f = float(d)
print(f)
print(f > a and f > b)
Output:
1.57171938e+81
False
Note that if you store Decimals in your numpy arrays, you will lose fast vectorized operations, as numpy does not recognize Decimal objects. Though it does work:
import numpy as np
a = np.array([1.57171938e+81, 1.60451506e+56, 1.00000000e+00])
d = np.vectorize(Decimal)(a) # convert values to Decimal
print(d.sum())
print(d.sum() > d[0])
Output:
1.571719379999999945626903708E+81
True
1.57171938e+81 is a number with 82 digits, of which you only specify the first 9. 1.60451506e+56 is a much, much smaller number, with only 57 digits.
What kind of answer are you expecting? The first utterly dwarfs the second. If you want something of a similar precision to your original numbers (and that's what you get using floats), then the answer is simply correct.
You could use ints:
>>> a = int(1.57171938e+81)
>>> b = int(1.60451506e+56)
>>> a
1571719379999999945626903548020224083024251666384876684446269499489505292916359168
>>> b
160451506000000001855754747064077065047170486040598151168L
>>> a+b
1571719379999999945626903708471730083024253522139623748523334546659991333514510336L
But how useful that is is up to you.
It does seem to be a problem with approximation:
>>> 1.57171938e+81 + 1.60451506e+65 > 1.57171938e+81
True
>>> 1.57171938e+81 + 1.60451506e+64 > 1.57171938e+81
False
You can get around this by casting to int:
>>> int(1.57171938e+81) + int(1.60451506e+64) > int(1.57171938e+81)
True

Python - Flooring floats

This is a really simple question. Let's denote the following:
>>> x = 1.2876
Now, round has this great optional second parameter that will round at that decimal place:
>>> round(x,3)
1.288
I was wondering if there is a simple way to round the number down instead. math.floor(x, 3) raises an error rather than returning 1.287.
This may be the easiest, if by "rounding down" you mean "toward minus infinity" (as floor() does):
>>> x = 1.2876
>>> x - x % .001
1.287
>>> x = -1.1111
>>> x - x % .001
-1.112
This is prone to lots of shallow surprises, though, because most decimal values cannot be represented exactly as binary floating-point values. If those bother you, do something similar with decimal.Decimal values instead.
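A small sketch of that Decimal variant, using quantize() with ROUND_FLOOR (rounding toward minus infinity, like floor()):
>>> from decimal import Decimal, ROUND_FLOOR
>>> Decimal("1.2876").quantize(Decimal("0.001"), rounding=ROUND_FLOOR)
Decimal('1.287')
>>> Decimal("-1.1111").quantize(Decimal("0.001"), rounding=ROUND_FLOOR)
Decimal('-1.112')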
This is just something that came to mind: why not convert it to a string, and then floor it?
import math

def floor_float(x, index):
    sx = str(x)
    sx = sx[:index] + str(math.floor(float(sx[index] + "." + sx[index+1])))
    return float(sx)
A small advantage is that it's more representation-error-proof; it represents the numbers more accurately (since it works on the string):
>>> floor_float(10.8976540981, 8)
10.897654
This may not be the most Pythonic solution, though, but it works quite well :)
Update
In Python 2.x, math.floor returns a float instead of an integer. To make this work there, you'll need to convert the result to an integer:
sx = sx[:index]+str(int(math.floor(float(sx[index]+"."+sx[index+1]))))
Update2
To be honest, the code above is basically nonsense, and too complicated ;)
Since it's flooring, you can just truncate the string, and float it back:
def floor_float(x, i):
    return float(str(x)[:i])
There's always floor(x*10**3)*10**-3.
Another approach, building on the decimal module's more elaborate facilities. Like the builtin round(), this also supports negative "digits":
>>> round(1234.5, -1) # builtin behavior for negative `ndigits`
1230.0
>>> round(1234.5, -2)
1200.0
>>> round(1234.5, -3)
1000.0
and you can use any of the 8(!) rounding modes defined in decimal.
from decimal import ROUND_DOWN

def rfloat(x, ndigits=0, rounding=ROUND_DOWN):
    from decimal import Decimal as D
    proto = D("1e%d" % -ndigits)
    return float(D(str(x)).quantize(proto, rounding))
Example:
for i in range(-4, 6):
    print(i, "->", rfloat(-55555.55555, i))
produces:
-4 -> -50000.0
-3 -> -55000.0
-2 -> -55500.0
-1 -> -55550.0
0 -> -55555.0
1 -> -55555.5
2 -> -55555.55
3 -> -55555.555
4 -> -55555.5555
5 -> -55555.55555
Try to parse strings instead at your own risk ;-)
def roundDown(num, places):
    return int(num * (10**places)) / float(10**places)
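For example (a quick check; the displayed values rely on Python's usual shortest-repr float printing):
>>> roundDown(1.2876, 3)
1.287
>>> roundDown(10.8976540981, 6)
10.897654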

Compare decimals in python

I want to be able to compare Decimals in Python. For the sake of making calculations with money, clever people told me to use Decimals instead of floats, so I did. However, if I want to verify that a calculation produces the expected result, how would I go about it?
>>> a = Decimal(1./3.)
>>> a
Decimal('0.333333333333333314829616256247390992939472198486328125')
>>> b = Decimal(2./3.)
>>> b
Decimal('0.66666666666666662965923251249478198587894439697265625')
>>> a == b
False
>>> a == b - a
False
>>> a == b - Decimal(1./3.)
False
So in this example a = 1/3 and b = 2/3, and obviously b - a = 1/3 = a; however, that comparison fails with Decimals.
I guess a way to do it is to say that I expect the result to be 1/3, and in Python I write this as
Decimal(1./3.).quantize(...)
and then I can compare it like this:
(b-a).quantize(...) == Decimal(1./3.).quantize(...)
So, my question is: Is there a cleaner way of doing this? How would you write tests for Decimals?
You are not using Decimal the right way.
>>> from decimal import *
>>> Decimal(1./3.) # Your code
Decimal('0.333333333333333314829616256247390992939472198486328125')
>>> Decimal("1")/Decimal("3") # My code
Decimal('0.3333333333333333333333333333')
In "your code", you actually perform "classic" floating point division -- then convert the result to a decimal. The error introduced by floats is propagated to your Decimal.
In "my code", I do the Decimal division. Producing a correct (but truncated) result up to the last digit.
Concerning rounding: if you work with monetary data, you must know the rules your business uses for rounding. If not, using Decimal will not automagically solve all your problems. Here is an example: $100 to be shared between 3 shareholders.
>>> TWOPLACES = Decimal(10) ** -2
>>> dividende = Decimal("100.00")
>>> john = (dividende / Decimal("3")).quantize(TWOPLACES)
>>> john
Decimal('33.33')
>>> paul = (dividende / Decimal("3")).quantize(TWOPLACES)
>>> georges = (dividende / Decimal("3")).quantize(TWOPLACES)
>>> john+paul+georges
Decimal('99.99')
Oops: a missing $0.01 (a free gift for the bank?)
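One common policy (just an illustration, not the only possible rule) is to let the last share absorb the rounding remainder, continuing the session above:
>>> georges = dividende - john - paul
>>> georges
Decimal('33.34')
>>> john + paul + georges
Decimal('100.00')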
Your question states you want to do monetary calculations while minding your round-off error. Decimals are a good choice, as they yield EXACT results under addition, subtraction, and multiplication with other Decimals.
Oddly, your example shows working with the fraction "1/3". I've never deposited exactly "one-third of a dollar" in my bank... it isn't possible, as there is no such monetary unit!
My point is: if you are doing any DIVISION, then you need to understand what you are TRYING to do and what your organization's policies are on this sort of thing... in which case it should be possible to implement what you want with Decimal quantizing.
Now -- if you DO really want to do division of Decimals, and you want to carry arbitrary "exactness" around, you really don't want to use the Decimal object... You want to use the Fraction object.
With that, your example would work like this:
>>> from fractions import Fraction
>>> a = Fraction(1,3)
>>> a
Fraction(1, 3)
>>> b = Fraction(2,3)
>>> b
Fraction(2, 3)
>>> a == b
False
>>> a == b - a
True
>>> a + b == Fraction(1, 1)
True
>>> 2 * a == b
True
OK, well, even a caveat there: Fraction objects are the ratio of two integers, so you'd need to multiply by the right power of 10 and carry that around ad-hoc.
Sound like too much work? Yes... it probably is!
So, head back to the Decimal object; implement quantization/rounding upon Decimal division and Decimal multiplication.
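A minimal sketch of that idea (the helper name money_div and the choice of ROUND_HALF_UP are illustrative, not prescribed):
>>> from decimal import Decimal, ROUND_HALF_UP
>>> CENTS = Decimal("0.01")
>>> def money_div(a, b):
...     return (a / b).quantize(CENTS, rounding=ROUND_HALF_UP)
...
>>> money_div(Decimal("100.00"), Decimal("3"))
Decimal('33.33')
>>> money_div(Decimal("100.00"), Decimal("7"))
Decimal('14.29')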
Floating-point arithmetic is not exact:
Decimal numbers can be represented exactly. In contrast, numbers like 1.1 and 2.2 do not have exact representations in binary floating point. End users typically would not expect 1.1 + 2.2 to display as 3.3000000000000003, as it does with binary floating point.
You have to choose a resolution and truncate everything past it:
>>> from decimal import *
>>> getcontext().prec = 6
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')
>>> getcontext().prec = 28
>>> Decimal(1) / Decimal(7)
Decimal('0.1428571428571428571428571429')
You will obviously get some rounding error, which will grow with the number of operations, so you have to choose your resolution carefully.
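For tests, one option (just a sketch) is to quantize both sides to the tolerance you care about before comparing:
>>> getcontext().prec = 28
>>> a = Decimal(1) / Decimal(3)
>>> b = Decimal(2) / Decimal(3)
>>> CENT = Decimal('0.01')
>>> (b - a).quantize(CENT) == a.quantize(CENT)
True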
There is another approach that may work for you:
Continue to do all your calculations in floating point values
When you need to compare for equality, use round(val, places)
For example:
>>> a = 1./3
>>> a
0.33333333333333331
>>> b = 2./3
>>> b
0.66666666666666663
>>> b-a
0.33333333333333331
>>> round(a,2) == round(b-a, 2)
True
If you'd like, create a function equals_to_the_cent():
>>> def equals_to_the_cent(a, b):
... return round(a, 2) == round(b, 2)
...
>>> equals_to_the_cent(a, b)
False
>>> equals_to_the_cent(a, b-a)
True
>>> equals_to_the_cent(1-a, b)
True
