Float precision breakdown in python/numpy when adding numbers - python

I have been having problems with very small numbers in numpy. It took me several weeks to trace my persistent problems with numerical integration back to the fact that float64 precision is lost when I add up floats inside a function. Performing the mathematically identical calculation with a product instead of a sum gives values that are fine.
Here is a code sample and a plot of the results:
from matplotlib.pyplot import *
from numpy import vectorize, arange
import math

def func_product(x):
    return math.exp(-x)/(1+math.exp(x))

def func_sum(x):
    return math.exp(-x)-1/(1+math.exp(x))
#mathematically, both functions are the same

vecfunc_sum = vectorize(func_sum)
vecfunc_product = vectorize(func_product)

x = arange(0.,300.,1.)
y_sum = vecfunc_sum(x)
y_product = vecfunc_product(x)

plot(x,y_sum, 'k.-', label='sum')
plot(x,y_product,'r--',label='product')
yscale('symlog', linthreshy=1E-256)
legend(loc='lower right')
show()
As you can see, the summed values that are quite low are scattered around zero or are exactly zero, while the multiplied values are fine...
Please, could someone help/explain? Thanks a lot!

Floating point precision is pretty sensitive to addition/subtraction due to roundoff error. Eventually, 1+exp(x) gets so big that adding 1 to exp(x) gives the same thing as exp(x). In double precision that's somewhere around exp(x) == 1e16:
>>> (1e16 + 1) == (1e16)
True
>>> (1e15 + 1) == (1e15)
False
Note that math.log(1e16) is approximately 37 -- which is roughly where things go crazy on your plot.
You can have the same problem, but on different scales:
>>> (1e-16 + 1.) == (1.)
True
>>> (1e-15 + 1.) == (1.)
False
For a vast majority of the points in your regime, your func_product is actually calculating:
exp(-x)/exp(x) == exp(-2*x)
Which is why your graph has a nice slope of -2.
Taking it to the other extreme, your other version is calculating (at least approximately):
exp(-x) - 1./exp(x)
which is approximately
exp(-x) - exp(-x)
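To see numerically where the breakdown begins, you can look for the first x at which adding 1 to exp(x) no longer changes it. A small sketch, assuming standard IEEE-754 doubles:

import math

# Find the first integer x where 1 + exp(x) rounds back to exp(x),
# i.e. where func_sum starts collapsing to zero.
x = 0
while math.exp(x) + 1.0 != math.exp(x):
    x += 1
print(x)                 # around 37 on a typical binary64 platform
print(math.log(2**53))   # ~36.7: past this, exp(x) exceeds 2**53 and 1 is lost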

This is an example of catastrophic cancellation.
Let's look at the first point where the calculation goes awry, when x = 36.0
In [42]: np.exp(-x)
Out[42]: 2.3195228302435691e-16
In [43]: - 1/(1+np.exp(x))
Out[43]: -2.3195228302435691e-16
In [44]: np.exp(-x) - 1/(1+np.exp(x))
Out[44]: 0.0
The calculation using func_product does not subtract nearly equal numbers, so it avoids the catastrophic cancellation.
By the way, if you change math.exp to np.exp, you can get rid of np.vectorize (which is slow):
def func_product(x):
    return np.exp(-x)/(1+np.exp(x))

def func_sum(x):
    return np.exp(-x)-1/(1+np.exp(x))

y_sum = func_sum(x)
y_product = func_product(x)

The problem is that your func_sum is numerically unstable because it involves a subtraction between two very close values.
In the calculation of func_sum(200), for example, math.exp(-200) and 1/(1+math.exp(200)) have the same value, because adding 1 to math.exp(200) has no effect: at that magnitude, 1 is far below the resolution of 64-bit floating point:
>>> math.exp(200).hex()
'0x1.73f60ea79f5b9p+288'
>>> (math.exp(200) + 1).hex()
'0x1.73f60ea79f5b9p+288'
>>> (1/(math.exp(200) + 1)).hex()
'0x1.6061812054cfap-289'
>>> math.exp(-200).hex()
'0x1.6061812054cfap-289'
This explains why func_sum(200) gives zero, but what about the points that lie off the x axis? These are also caused by floating-point imprecision: it occasionally happens that math.exp(-x) is not equal to 1/math.exp(x). Ideally, math.exp(x) is the closest floating-point value to e^x, and 1/math.exp(x) is then the closest floating-point value to the reciprocal of the floating-point number computed by math.exp(x), which is not necessarily the closest value to e^-x. Indeed, math.exp(-100) and 1/(1+math.exp(100)) are very close and in fact differ only in the last unit:
>>> math.exp(-100).hex()
'0x1.a8c1f14e2af5dp-145'
>>> (1/math.exp(100)).hex()
'0x1.a8c1f14e2af5cp-145'
>>> (1/(1+math.exp(100))).hex()
'0x1.a8c1f14e2af5cp-145'
>>> func_sum(100).hex()
'0x1.0000000000000p-197'
So what you have actually calculated is the difference, if any, between math.exp(-x) and 1/math.exp(x). You can trace the line of the function math.pow(2, -52) * math.exp(-x) to see that it passes through the positive values of func_sum (recall that a 64-bit float has a 53-bit significand, i.e. 52 fraction bits).
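A quick check of the one-ULP story, using math.ulp (available in Python 3.9+; on older versions you would scale by 2**-52 by hand):

import math

def func_sum(x):
    return math.exp(-x) - 1/(1 + math.exp(x))

# The nonzero "noise" values of func_sum are exactly one ULP of exp(-x):
# a last-bit difference between math.exp(-x) and 1/(1 + math.exp(x)).
print(func_sum(100).hex())               # 0x1.0000000000000p-197
print(math.ulp(math.exp(-100)).hex())    # the same value: one ULP of exp(-100)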


Numpy float mean calculation precision

I happen to have a numpy array of floats:
a.dtype, a.shape
#(dtype('float64'), (32769,))
The values are:
a[0]
#3.699822718929953
all(a == a[0])
True
However:
a.mean()
3.6998227189299517
The mean is off in the 15th and 16th significant figures.
Can anybody show how this difference accumulates over the roughly 30K-element mean, and whether there is a way to avoid it?
In case it matters my OS is 64 bit.
Here is a rough approximation of a bound on the maximum error. This will not be representative of average error, and it could be improved with more analysis.
Consider calculating a sum using floating-point arithmetic with round-to-nearest ties-to-even:
sum = 0;
for (i = 0; i < n; ++i)
    sum += a[i];
where each a[i] is in [0, m).
Let ULP(x) denote the unit of least precision in the floating-point number x. (For example, in the IEEE-754 binary64 format with 53-bit significands, if the largest power of 2 not greater than |x| is 2^p, then ULP(x) = 2^(p−52).) With round-to-nearest, the maximum error in any operation with result x is ½ULP(x).
If we neglect rounding errors, the maximum value of sum after i iterations is i•m. Therefore, a bound on the error in the addition in iteration i is ½ULP(i•m). (Actually zero for i=1, since that case adds to zero, which has no error, but we neglect that for this approximation.) Then the total of the bounds on all the additions is the sum of ½ULP(i•m) for i from 1 to n. This is approximately ½•n•(n+1)/2•ULP(m) = ¼•n•(n+1)•ULP(m). (This is an approximation because it moves i outside the ULP function, but ULP is a discontinuous function. It is "approximately linear," but there are jumps. Since the jumps are by factors of two, the approximation can be off by at most a factor of two.)
So, with 32,769 elements, we can say the total rounding error will be at most about ¼•32,769•32,770•ULP(m), about 2.7•10^8 times the ULP of the maximum element value. The ULP is 2^−52 times the greatest power of two not less than m, so that is about 2.7•10^8•2^−52 ≈ 6•10^−8 times m.
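Plugging in the numbers, as a rough check of the arithmetic above:

# Evaluate the bound for n = 32,769 binary64 elements.
n = 32769
ulps = 0.25 * n * (n + 1)     # bound in units of ULP(m)
print(ulps)                   # ~2.7e8
print(ulps * 2**-52)          # ~6e-8, relative to the maximum element value m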
Of course, the likelihood that 32,768 sums (not 32,769 because the first necessarily has no error) all round in the same direction by chance is vanishingly small but I conjecture one might engineer a sequence of values that gets close to that.
An Experiment
Here is a chart of (in blue) the mean error over 10,000 samples of summing arrays with sizes 100 to 32,800 by 100s and elements drawn randomly from a uniform distribution over [0, 1). The error was calculated by comparing the sum calculated with float (IEEE-754 binary32) to that calculated with double (IEEE-754 binary64). (The samples were all multiples of 2^−24, and double has enough precision so that the sum of up to 2^29 such values is exact.)
The green line is c n √n with c set to match the last point of the blue line. We see it tracks the blue line over the long term. At points where the average sum crosses a power of two, the mean error increases faster for a time. At these points, the sum has entered a new binade, and further additions have higher average errors due to the increased ULP. Over the course of the binade, this fixed ULP decreases relative to n, bringing the blue line back to the green line.
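A rough way to reproduce the flavor of that experiment in Python: sum random [0, 1) samples sequentially in binary32 and compare against a binary64 reference. The seed and array sizes here are arbitrary, and this is only a sketch, not the original experiment's code:

import numpy as np

rng = np.random.default_rng(0)                  # arbitrary seed
for n in (1000, 8000, 32800):
    a = rng.random(n, dtype=np.float32)         # uniform [0, 1) samples
    s32 = np.float32(0.0)
    for x in a:                                 # sequential (non-pairwise) sum
        s32 = np.float32(s32 + x)
    ref = a.astype(np.float64).sum()            # effectively exact reference here
    print(n, abs(float(s32) - ref))             # error grows with n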
This is due to the inability of the float64 type to store the sum of your float numbers with full precision. To get around this problem you need to use a larger data type. Numpy has a longdouble dtype that you can use in such cases:
In [23]: np.mean(a, dtype=np.longdouble)
Out[23]: 3.6998227189299530693
Also, note:
In [25]: print(np.longdouble.__doc__)
Extended-precision floating-point number type, compatible with C
``long double`` but not necessarily with IEEE 754 quadruple-precision.
Character code: ``'g'``.
Canonical name: ``np.longdouble``.
Alias: ``np.longfloat``.
Alias *on this platform*: ``np.float128``: 128-bit extended-precision floating-point number type.
The mean is (by definition):
a.sum()/a.size
Unfortunately, adding all those values up and dividing accumulates floating point errors. They are usually around the magnitude of:
np.finfo(np.float64).eps
Out[]: 2.220446049250313e-16
Yeah, e-16, about where you get them. You can make the error smaller by using higher-accuracy floats like float128 (if your system supports it), but errors will always accumulate whenever you're summing a large number of floats together. If you truly want the identity, you'll have to hardcode it:
def mean_(arr):
    if np.all(arr == arr[0]):
        return arr[0]
    else:
        return arr.mean()
In practice, you never really want to use == between floats. Generally in numpy we use np.isclose or np.allclose to compare floats for exactly this reason. There are ways around it using other packages and leveraging arcane machine-level methods of calculating numbers to get (closer to) exact equality, but it's rarely worth the performance and clarity hit.
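For example, a small sketch of the tolerance-based comparison mentioned above, using the question's value:

import numpy as np

a = np.full(32769, 3.699822718929953)
print(a.mean() == a[0])             # may be False: exact equality is fragile
print(np.isclose(a.mean(), a[0]))   # True within the default rtol/atol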

How to avoid math.sin(math.pi*2*VERY LARGE NUMBER) having a much larger error margin than math.sin(math.pi*2)?

I've read in other questions that, for example, sin(2π) is not exactly zero due to floating-point representation, but is very close. This very small error is no issue in my code, as I can just round to, say, 5 decimals.
However, when multiplying 2π by a very large number, the error is magnified a lot. The answer should be zero (or close), but is far from it.
Am I doing something fundamentally wrong in my thinking? If not, how can I keep the error in the floating-point value of π from being "magnified" as the number of periods (2*PI*X) grows?
Notice below that the last three results are all the same. Can anyone explain why this happens even though one argument is exactly PI/2 larger than another? Even with a huge offset on the sine curve, an increase of PI/2 should still produce a different number, right?
Checking small number: SIN(2*PI)
print math.sin(math.pi*2)
RESULT: -2.44929359829e-16 AS EXPECTED -- this error margin is OK for my purpose
Adding PI/2 to the code above: SIN(2*PI + PI/2)
print math.sin((math.pi*2)+(math.pi/2))
RESULT: 1.0 AS EXPECTED
Checking very large number: SIN(2*PI*VERY LARGE NUMBER) (still expecting close to zero)
print math.sin(math.pi*2*(415926535897932384626433832795028841971693993751))
RESULT: -0.759488037749 NOT AS EXPECTED -- this error margin is NOT OK for my purpose
Adding PI/2 to the code above: SIN(2*PI*VERY LARGE NUMBER + PI/2) (expecting close to one)
print math.sin((math.pi*2*(415926535897932384626433832795028841971693993751))+(math.pi/2))
RESULT: -0.759488037749 NOT AS EXPECTED -- why the same result as above when I added PI/2 (it should move a quarter period along the sine curve)
Adding a random number (8) to the very large number, expecting neither 1 nor 0
print math.sin(math.pi*2*(415926535897932384626433832795028841971693993759))
RESULT: -0.759488037749 NOT AS EXPECTED -- why the same result as above when I added 8
This simply isn't going to work with double-precision variables.
The value of math.pi is correct only to about 16 places of decimals (53 bits in binary), so when you multiply it by a number like 415926535897932384626433832795028841971693993751 (159 bits), it would be impossible to get meaningful results.
You need to use an arbitrary precision math library instead. Try using mpmath for example. Tell it you want 1000 bits of precision, and then try your sums again:
>>> import mpmath
>>> mpmath.mp.prec=1000
>>> print(mpmath.sin((mpmath.pi*2*(415926535897932384626433832795028841971693993751))+(mpmath.pi/2)))
1.0
How to avoid math.sin(math.pi*2*VERY LARGE NUMBER) having a much larger error margin than math.sin(math.pi*2)?
You could % 1 that very large number:
>>> math.sin(math.pi*2*(415926535897932384626433832795028841971693993751))
-0.8975818793257183
>>> math.sin(math.pi*2*(415926535897932384626433832795028841971693993751 % 1))
0.0
>>> math.sin((math.pi*2*(415926535897932384626433832795028841971693993751))+(math.pi/2))
-0.8975818793257183
>>> math.sin((math.pi*2*(415926535897932384626433832795028841971693993751 % 1))+(math.pi/2))
1.0
The algorithms used are approximate, and the values (e.g. pi) are approximate. So pi*SomeLargeNumber will have a largeish error (as pi's value is approximate). The function used (by the hardware?) will reduce the argument, perhaps using a slightly different value of pi.
Note that floating point arithmetic does not satisfy the axioms for real arithmetic.
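Another way to avoid the magnification entirely, when the huge factor is a whole number of periods, is to reduce it before it ever meets 2*pi. A sketch (the function name is just for illustration):

import math

def sin_periods(periods, phase=0.0):
    """sin(2*pi*periods + phase), reducing whole periods before multiplying
    so that pi's representation error is never scaled by a huge factor."""
    frac = periods - int(periods)        # exact when periods is an int
    return math.sin(2 * math.pi * frac + phase)

big = 415926535897932384626433832795028841971693993751
print(sin_periods(big))                  # 0.0
print(sin_periods(big, math.pi / 2))     # 1.0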

Python thinks Euler has identity issues (cmath returning funky results)

My code:
import math
import cmath
print "E^ln(-1)", cmath.exp(cmath.log(-1))
What it prints:
E^ln(-1) (-1+1.2246467991473532E-16j)
What it should print:
-1
(For Reference, Google checking my calculation)
According to the documentation at python.org, cmath.exp(x) returns e^x, and cmath.log(x) returns ln(x), so unless I'm missing a semicolon or something, this is a pretty straightforward three-line program.
When I test cmath.log(-1) it returns πi (technically 3.141592653589793j), which is right. Euler's identity says e^(πi) = -1, yet when I compute e^(πi), Python gives me some kind of crazy talk (specifically -1+1.2246467991473532E-16j).
Why does Python hate me, and how do I appease it?
Is there a library to include to make it do math right, or a sacrifice I have to offer to van Rossum? Is this some kind of floating point precision issue perhaps?
The big problem I'm having is that the precision is off enough that some nonzero values appear closer to 0 than the values that are actually zero in the final function (not shown), so boolean tests are worthless (i.e. if (x == 0)) and so are local minima, etc...
For example, in an iteration below:
X = 2 Y= (-2-1.4708141202500006E-15j)
X = 3 Y= -2.449293598294706E-15j
X = 4 Y= -2.204364238465236E-15j
X = 5 Y= -2.204364238465236E-15j
X = 6 Y= (-2-6.123233995736765E-16j)
X = 7 Y= -2.449293598294706E-15j
3 & 7 are both actually equal to zero, yet they appear to have the largest imaginary parts of the bunch, and 4 and 5 don't have their real parts at all.
Sorry for the tone. Very frustrated.
As you've already demonstrated, cmath.log(-1) doesn't return exactly i*pi. Of course, returning pi exactly is impossible as pi is an irrational number...
Now you raise e to the power of something that isn't exactly i*pi and you expect to get exactly -1. However, if cmath returned that, you would be getting an incorrect result. (After all, exp(i*pi+epsilon) shouldn't equal -1 -- Euler doesn't make that claim!).
For what it's worth, the result is very close to what you expect -- the real part is -1 with an imaginary part close to floating point precision.
It appears to be a rounding issue. While -1+1.22460635382e-16j is not exactly the value you expected, the imaginary part 1.22460635382e-16j is extremely close to zero. I don't know how you could fix this, but a quick and dirty way could be rounding the number to a certain number of digits after the decimal point (14, maybe?).
Anything smaller in magnitude than about 10^-15 is effectively zero here. Computer calculations carry a certain error that is often in that range. Floating-point representations are representations, not exact values.
The problem is inherent to representing irrational numbers (like π) in finite space as floating points.
The best you can do is filter your result and set it to zero if its value is within a given range.
>>> tolerance = 1e-15
>>> def clean_complex(c):
...     real, imag = c.real, c.imag
...     if -tolerance < real < tolerance:
...         real = 0
...     if -tolerance < imag < tolerance:
...         imag = 0
...     return complex(real, imag)
...
>>> clean_complex( cmath.exp(cmath.log(-1)) )
(-1+0j)
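Alternatively (assuming Python 3.5+), the standard library already provides a tolerance-based comparison for complex numbers, cmath.isclose, which avoids the exact-equality trap:

import cmath

z = cmath.exp(cmath.log(-1))
print(z == -1)               # False: the imaginary part is ~1.22e-16, not 0
print(cmath.isclose(z, -1))  # True: within the default relative tolerance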

Unittest (sometimes) fails because floating-point imprecision

I have a class Vector that represents a point in 3-dimensional space. This vector has a method normalize(self, length=1) which scales the vector up or down so that vec.normalize(length).length == length.
The unittest for this method sometimes fails because of the imprecision of floating-point numbers. My question is, how can I make sure this test does not fail when the methods are implemented correctly? Is it possible to do it without testing for an approximate value?
Additional information:
def testNormalize(self):
    vec = Vector(random.random(), random.random(), random.random())
    self.assertEqual(vec.normalize(5).length, 5)
This sometimes results in either AssertionError: 4.999999999999999 != 5 or AssertionError: 5.000000000000001 != 5.
Note: I am aware that the floating-point issue may be in the Vector.length property or in Vector.normalize().
1) How can I make sure the test works?
Use assertAlmostEqual, assertNotAlmostEqual.
From the official documentation:
assertAlmostEqual(first, second, places=7, msg=None, delta=None)
Test that first and second are approximately equal by computing the difference, rounding to the given number of decimal places (default 7), and comparing to zero.
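A minimal, self-contained illustration using plain floats (rather than the question's Vector class, which isn't shown here):

import unittest

class TestFloatComparison(unittest.TestCase):
    def test_almost_equal(self):
        # 0.1 + 0.2 is not exactly 0.3 in binary64, but agrees to 7 places.
        self.assertNotEqual(0.1 + 0.2, 0.3)      # exact comparison fails
        self.assertAlmostEqual(0.1 + 0.2, 0.3)   # tolerance-based comparison passes

if __name__ == "__main__":
    unittest.main()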
2) Is it possible to do it without testing for an approximate value?
Essentially, no.
The floating-point issue can't be bypassed, so you either have to "round" the result given by vec.normalize or accept an almost-equal result (either way is an approximation).
By using a floating point value, you accept a small possible imprecision. Therefore, your tests should test if your computed value falls in an acceptable range such as:
theoreticalValue - epsilon < normalizedValue < theoreticalValue + epsilon
where epsilon is a very small value that you define as acceptable for a variation due to floating point imprecision.
I suppose one possibility is to apply the function to test cases for which all inputs, the results of all intermediate calculations, and the output are exactly representable by float.
To illustrate:
In [2]: import math
In [4]: def norm(x, y):
   ...:     return math.sqrt(x*x + y*y)
   ...:
In [6]: norm(3, 4) == 5
Out[6]: True
Not sure how practical this is though...
In general, you should not assert equality for floats. Instead, ensure that the result is within certain bounds, e.g.:
self.assertTrue(abs(vec.normalize(5).length - 5) < 0.001)

Function to determine if two numbers are nearly equal when rounded to n significant decimal digits

I have been asked to test a library provided by a 3rd party. The library is known to be accurate to n significant figures. Any less-significant errors can safely be ignored. I want to write a function to help me compare the results:
def nearlyequal( a, b, sigfig=5 ):
The purpose of this function is to determine if two floating-point numbers (a and b) are approximately equal. The function will return True if a==b (exact match) or if a and b have the same value when rounded to sigfig significant-figures when written in decimal.
Can anybody suggest a good implementation? I've written a mini unit-test. Unless you can see a bug in my tests then a good implementation should pass the following:
assert nearlyequal(1, 1, 5)
assert nearlyequal(1.0, 1.0, 5)
assert nearlyequal(1.0, 1.0, 5)
assert nearlyequal(-1e-9, 1e-9, 5)
assert nearlyequal(1e9, 1e9 + 1 , 5)
assert not nearlyequal( 1e4, 1e4 + 1, 5)
assert nearlyequal( 0.0, 1e-15, 5 )
assert not nearlyequal( 0.0, 1e-4, 6 )
Additional notes:
Values a and b might be of type int, float or numpy.float64. Values a and b will always be of the same type. It's vital that conversion does not introduce additional error into the function.
Let's keep this numerical, so functions that convert to strings or use non-mathematical tricks are not ideal. This program will be audited by a mathematician who will want to be able to prove that the function does what it is supposed to do.
Speed... I've got to compare a lot of numbers so the faster the better.
I've got numpy, scipy and the standard-library. Anything else will be hard for me to get, especially for such a small part of the project.
As of Python 3.5, the standard way to do this (using the standard library) is with the math.isclose function.
It has the following signature:
isclose(a, b, rel_tol=1e-9, abs_tol=0.0)
An example of usage with absolute error tolerance:
from math import isclose
a = 1.0
b = 1.00000001
assert isclose(a, b, abs_tol=1e-8)
If you want a tolerance of roughly n decimal places, simply replace the last line with:
assert isclose(a, b, abs_tol=10**-n)
(For agreement to roughly n significant figures, use the relative tolerance instead, e.g. rel_tol=10**-n.)
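Checking a few of the question's own test cases with isclose, with tolerances chosen to roughly match 5 significant figures plus an absolute floor for comparisons against zero:

from math import isclose

print(isclose(1e9, 1e9 + 1, rel_tol=1e-5))               # True
print(isclose(1e4, 1e4 + 1, rel_tol=1e-5))               # False
print(isclose(0.0, 1e-15, rel_tol=1e-5, abs_tol=1e-9))   # True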
There is a function assert_approx_equal in numpy.testing (source here) which may be a good starting point.
def assert_approx_equal(actual,desired,significant=7,err_msg='',verbose=True):
    """
    Raise an assertion if two items are not equal up to significant digits.

    .. note:: It is recommended to use one of `assert_allclose`,
              `assert_array_almost_equal_nulp` or `assert_array_max_ulp`
              instead of this function for more consistent floating point
              comparisons.

    Given two numbers, check that they are approximately equal.
    Approximately equal is defined as the number of significant digits
    that agree.
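A usage sketch: the function is silent on success and raises AssertionError on failure.

import numpy as np

np.testing.assert_approx_equal(1.00000001, 1.00000002, significant=5)  # passes
try:
    np.testing.assert_approx_equal(1.0, 1.001, significant=5)
except AssertionError as exc:
    print(exc)   # reports the mismatch at 5 significant digits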
Here's a take.
def nearly_equal(a, b, sig_fig=5):
    return (a == b or
            int(a*10**sig_fig) == int(b*10**sig_fig)
            )
I believe your question is not defined well enough, and the unit-tests you present prove it:
If by 'round to N sig-fig decimal places' you mean 'N decimal places to the right of the decimal point', then the test assert nearlyequal(1e9, 1e9 + 1 , 5) should fail, because even when you round 1000000000 and 1000000001 to 0.00001 accuracy, they are still different.
And if by 'round to N sig-fig decimal places' you mean 'The N most significant digits, regardless of the decimal point', then the test assert nearlyequal(-1e-9, 1e-9, 5) should fail, because 0.000000001 and -0.000000001 are totally different when viewed this way.
If you meant the first definition, then the first answer on this page (by Triptych) is good.
If you meant the second definition, please say it, I promise to think about it :-)
There are already plenty of great answers, but here's a think:
def closeness(a, b):
    """Returns measure of equality (for two floats), in unit
    of decimal significant figures."""
    if a == b:
        return float("infinity")
    difference = abs(a - b)
    avg = (a + b)/2
    return math.log10( avg / difference )

if closeness(1000, 1000.1) > 3:
    print "Joy!"
This is a fairly common issue with floating point numbers. I solve it based on the discussion in Section 1.5 of Demmel[1]. (1) Calculate the roundoff error. (2) Check that the roundoff error is less than some epsilon. I haven't used python in some time and only have version 2.4.3, but I'll try to get this correct.
Step 1. Roundoff error
def roundoff_error(exact, approximate):
    return abs(approximate/exact - 1.0)
Step 2. Floating point equality
def float_equal(float1, float2, epsilon=2.0e-9):
    return (roundoff_error(float1, float2) < epsilon)
There are a couple obvious deficiencies with this code.
Division by zero error if the exact value is Zero.
Does not verify that the arguments are floating point values.
Revision 1.
def roundoff_error(exact, approximate):
    if (exact == 0.0 or approximate == 0.0):
        return abs(exact + approximate)
    else:
        return abs(approximate/exact - 1.0)

def float_equal(float1, float2, epsilon=2.0e-9):
    if not isinstance(float1, float):
        raise TypeError, "First argument is not a float."
    elif not isinstance(float2, float):
        raise TypeError, "Second argument is not a float."
    else:
        return (roundoff_error(float1, float2) < epsilon)
That's a little better. If either the exact or the approximate value is zero, then the error is taken to be the absolute value of the other. If something besides a floating-point value is provided, a TypeError is raised.
At this point, the only difficult thing is setting the correct value for epsilon. I noticed in the documentation for version 2.6.1 that there is an epsilon attribute in sys.float_info, so I would use twice that value as the default epsilon. But the correct value depends on both your application and your algorithm.
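For reference, the epsilon mentioned above is available directly; the choice of twice that value as a default is the suggestion above, not a universal rule:

import sys

# Machine epsilon for Python floats (IEEE-754 binary64): the gap between
# 1.0 and the next representable float.
print(sys.float_info.epsilon)        # 2.220446049250313e-16
print(2 * sys.float_info.epsilon)    # the default tolerance suggested above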
[1] James W. Demmel, Applied Numerical Linear Algebra, SIAM, 1997.
"Significant figures" in decimal is a matter of adjusting the decimal point and truncating to an integer.
>>> int(3.1415926 * 10**3)
3141
>>> int(1234567 * 10**-3)
1234
Oren Shemesh got part of the problem with the problem as stated but there's more:
assert nearlyequal( 0.0, 1e-15, 5 )
also fails the second definition (and that's the definition I learned in school.)
No matter how many digits you are looking at, 0 will never equal a non-zero value. This could prove to be a headache for such tests if you have a case whose correct answer is zero.
There is an interesting solution to this by B. Dawson (with C++ code)
at "Comparing Floating Point Numbers". His approach relies on the strict IEEE representation of the two numbers and the lexicographical ordering that is enforced when those numbers are reinterpreted as unsigned integers.
I have been asked to test a library provided by a 3rd party
If you are using the default Python unittest framework, you can use assertAlmostEqual
self.assertAlmostEqual(a, b, places=5)
There are lots of ways of comparing two numbers to see if they agree to N significant digits. Roughly speaking you just want to make sure that their difference is less than 10^-N times the largest of the two numbers being compared. That's easy enough.
But, what if one of the numbers is zero? The whole concept of relative-differences or significant-digits falls down when comparing against zero. To handle that case you need to have an absolute-difference as well, which should be specified differently from the relative-difference.
I discuss the problems of comparing floating-point numbers -- including a specific case of handling zero -- in this blog post:
http://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/
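A bare-bones sketch of that combined relative-plus-absolute test; the function name and the default tolerances here are illustrative, not taken from the blog post:

def agree_to_n_digits(a, b, n=5, abs_tol=1e-12):
    """True if a and b agree to roughly n significant digits, or differ by
    less than abs_tol (which handles comparisons against zero)."""
    return abs(a - b) <= max(10.0**-n * max(abs(a), abs(b)), abs_tol)

print(agree_to_n_digits(1e9, 1e9 + 1))   # True
print(agree_to_n_digits(1e4, 1e4 + 1))   # False
print(agree_to_n_digits(0.0, 1e-15))     # True, via the absolute tolerance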
