Inconsistency in division of large numbers in python3

When I calculated 24! using the math library, the result is different from 24! obtained by dividing 25! by 25. Why is this?
>>> import math
>>> f=math.factorial(25)
>>> int(f/25)
620448401733239409999872
>>> math.factorial(24)
620448401733239439360000

/ performs "true division": the result is a floating-point number, which does not have enough precision to represent the exact quotient. Calling int() cannot reverse that precision loss; the rounding in the float math is what causes the discrepancy.
// is floor (integer) division, which is what you want:
>>> f = math.factorial(25)
>>> f/25
6.204484017332394e+23
>>> int(f/25)
620448401733239409999872
>>> math.factorial(24)
620448401733239439360000
>>> f//25
620448401733239439360000 # correct answer
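To see why the float route drops digits: a Python float has a 53-bit mantissa, while 24! needs 80 bits, so no float can hold the exact quotient:
>>> import math
>>> f24 = math.factorial(24)
>>> f24.bit_length()
80
>>> float(f24) == f24  # the nearest float is not equal to the exact integer
False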

You must not use the / operator followed by int(): true division rounds the exact quotient through a float. When Python computes factorial(24), by contrast, it uses exact integer * operations.
>>> from math import factorial
>>> f25 = factorial(25)
>>> f25
15511210043330985984000000
Here you can use // instead of /.
>>> f24 = factorial(24)
>>> f24
620448401733239439360000
>>> f25 // 25
620448401733239439360000


Round arithmetic during evaluation

Question
When evaluating an arithmetic expression, multiple steps (PEMDAS) are taken in order. I know you can evaluate an operation and then round the result, but at times you need to round your data so it never exceeds a certain precision throughout the evaluation. This brings me to my question: how can you round at every step of the evaluation instead of just at the end?
Examples
For our first example, we will be using the simple operation 0.125/0.375 and rounding to 2 decimals.
# This operation evaluates to 1/3
>>> 0.125/0.375
0.3333333333333333
# If we wanted to round it we could just do
>>> round(0.125/0.375, 2)
0.33
# But if we wanted to round at every step of PEMDAS the following would be necessary
>>> round(round(0.125, 2)/round(0.375, 2), 2)
0.32
# Same equation as above but written as (1/8)/(3/8)
>>> round(round(round(1, 2)/round(8, 2), 2)/round(round(3, 2)/round(8, 2), 2), 2)
0.32
As you can see, you get a different result if rounding is performed at every step rather than just at the end.
Although a bit cumbersome, this approach gets the job done. Problems arise, though, when the equation is not hardcoded but received from the user:
# Rounding cannot be applied here in the same way that we did above
>>> eval(input("Arithmetic: "))
Arithmetic: (1/8)/(3/8)
0.3333333333333333
Use cases
This may seem pretty useless at first, but it can actually be very valuable for many things.
Here is a simple example where rounding at each step would be necessary for finding the holes of a function:
# undefined.py
from math import *
import numpy as np

function = input("Function in terms of x: ")

def is_undefined(x):
    x = round(x, 2)  # To deal with minor Python inaccuracies (ex: 1.000000000000001)
    try:
        eval(function)
        return False
    except ZeroDivisionError:
        return True

undefined = [x for x in np.linspace(-5, 5, 1001) if is_undefined(float(x))]
print(undefined)
# Works perfectly!
$ python undefined.py
Function in terms of x: (x**2)*(x-2)/(x-2)
[2.0]
# Unable to find the hole at x=pi
$ python undefined.py
Function in terms of x: (x**2)*(2*x - 2*pi)/(x - pi)
[]
The decimal module provides a Decimal type which can be configured so that all arithmetic operations are rounded to a given precision. Note that prec counts significant digits; for the values below 1 used here, two significant digits behave like two decimal places:
>>> import decimal as d
>>> d.setcontext(d.Context(prec=2))
>>> x = d.Decimal(0.125)
>>> y = d.Decimal(0.375)
>>> x / y
Decimal('0.33')
You can force rounding of the numbers before the division by using the unary + operation, which normally does nothing; in this context it applies the precision from the current context, changing the result (making it less accurate, of course):
>>> (+x) / (+y)
Decimal('0.32')
So a solution for an expression from user input could be to replace all number literals and occurrences of the variable x with Decimal objects of the same values. Here I've used a regular expression to do that, with a unary + to force rounding before each operation:
import decimal as d
import re

d.setcontext(d.Context(prec=2))

function = input("Function in terms of x: ")
# Wrap every number literal and every occurrence of x in a rounding Decimal.
function = re.sub(r'([0-9]+(\.[0-9]+)?|x)', r'(+d.Decimal(\1))', function)
# ...
Note there is no longer a need to write x = round(x, 2), because the expression itself forces x to be rounded.
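As a quick sanity check, here is the same substitution applied to the hardcoded example from the question:
>>> import decimal as d
>>> import re
>>> d.setcontext(d.Context(prec=2))
>>> expr = re.sub(r'([0-9]+(\.[0-9]+)?|x)', r'(+d.Decimal(\1))', "(1/8)/(3/8)")
>>> expr
'((+d.Decimal(1))/(+d.Decimal(8)))/((+d.Decimal(3))/(+d.Decimal(8)))'
>>> eval(expr)
Decimal('0.32')
This matches the 0.32 obtained by rounding manually at every step.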
You may be looking for symbolic math instead, such as SymPy, which can probably do what you're really looking for. Specifically, it does not alias transcendental numbers (like pi and e), and it holds irreducible fractions in exact form until asked to evaluate to a decimal:
>>> from sympy import *
>>> expr = "(1/8)/(3/8)"
>>> simplify(expr) # precise value
1/3
>>> simplify(expr).evalf() # decimal aliasing
0.333333333333333
>>> N("(1/8)/(3/8)") # using sympy.N()
0.333333333333333
This can also be used to solve equations
>>> x = symbols("x", real=True)
>>> solve(x**2 - 1) # simple solution
[-1, 1]
>>> solve(x**2 - pi) # more complex solution
[-sqrt(pi), sqrt(pi)]
>>> [N(expr) for expr in solve(x**2 - pi)] # decimal approximation
[-1.77245385090552, 1.77245385090552]
This can also be used (perhaps evilly) with Python constructs
>>> [N(x * pi) for x in range(10)] # lots of approximations!
[0, 3.14159265358979, 6.28318530717959, 9.42477796076938, 12.5663706143592, 15.7079632679490, 18.8495559215388, 21.9911485751286, 25.1327412287183, 28.2743338823081]
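Applied to the earlier use case, this also handles the hole at x = pi that the float-based script missed (a sketch; f is just an illustrative name):
>>> from sympy import symbols, pi, simplify, limit
>>> x = symbols("x", real=True)
>>> f = (x**2) * (2*x - 2*pi) / (x - pi)
>>> simplify(f)     # the (x - pi) factor cancels exactly, no float error
2*x**2
>>> f.subs(x, pi)   # the unsimplified expression is undefined at x = pi
nan
>>> limit(f, x, pi) # the value the hole approaches
2*pi**2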

How to deal with rounding errors in python math.ceil

The following code snippet is giving 6 as a result:
import math
number = (1 - 0.99) * 500
math.ceil(number)
while the (mathematically) correct answer would be 5. Presumably this is a rounding problem - what is the best way to enforce the correct solution?
Presumably this is a rounding problem
Yes:
>>> 1 - 0.99
0.010000000000000009
>>> (1 - 0.99) * 500
5.000000000000004
what is the best way to enforce the correct solution?
You could use a decimal.Decimal instead of a float:
>>> from decimal import Decimal
>>> import math
>>> (1 - Decimal("0.99")) * 500
Decimal('5.00')
>>> math.ceil((1 - Decimal("0.99")) * 500)
5.0
It's a floating-point error, since some numbers can't be represented exactly (infinitely many numbers have to be represented using a finite number of bits, so there have to be some trade-offs). This is why you lose some precision with floating-point operations:
>>> 1-0.99
0.010000000000000009
Try Decimal:
>>> import math
>>> from decimal import Decimal as d
>>> result = (1 - d("0.99")) * 500
>>> result
Decimal('5.00')
>>> math.ceil(result)
5.0
Edit
It may look like all the numbers have exact representations:
>>> a = 1.0; b = 0.99; c = 0.01
>>> a, b, c
(1.0, 0.99, 0.01)
So this result might seem surprising:
>>> a - b
0.010000000000000009
>>> a - b == c
False
But it's just the precision and rounding errors that accumulate. Here are the same numbers and calculation, but showing more digits:
>>> def o(f): return "%.30f" % f
>>> o(a)
'1.000000000000000000000000000000'
>>> o(b)
'0.989999999999999991118215802999'
>>> o(c)
'0.010000000000000000208166817117'
>>> o(a-b)
'0.010000000000000008881784197001'
Python 2.7 displays floats rounded to at most 17 significant digits, so they can look exact when they are not. Binary floating point is a different model from real-number math.
The given answers are correct; this is a case of rounding error. However, I think it would be useful to include why this happens.
In hardware, floating-point numbers are base 2 (binary). The problem is that most decimal fractions cannot be represented exactly as binary fractions. The upshot is that, in general, the decimal floating-point numbers you write are only approximated by the binary floating-point numbers actually stored in the machine.
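If you must stay in binary floats, a common workaround is to snap the value to a sensible number of digits before taking the ceiling. This is a sketch, not a universal fix: the tolerance (here 9 digits) has to suit your data.
import math

def ceil_with_tolerance(value, ndigits=9):
    # Round away accumulated float noise before taking the ceiling.
    return math.ceil(round(value, ndigits))

print(ceil_with_tolerance((1 - 0.99) * 500))  # 5 (5.0 on Python 2)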

Compare decimals in python

I want to be able to compare Decimals in Python. For the sake of making calculations with money, clever people told me to use Decimals instead of floats, so I did. However, if I want to verify that a calculation produces the expected result, how would I go about it?
>>> a = Decimal(1./3.)
>>> a
Decimal('0.333333333333333314829616256247390992939472198486328125')
>>> b = Decimal(2./3.)
>>> b
Decimal('0.66666666666666662965923251249478198587894439697265625')
>>> a == b
False
>>> a == b - a
False
>>> a == b - Decimal(1./3.)
False
So in this example a = 1/3 and b = 2/3, and obviously b - a = 1/3 = a; however, that comparison fails with these Decimals.
I guess a way to do it is to say that I expect the result to be 1/3, and in Python I write this as
Decimal(1./3.).quantize(...)
and then I can compare it like this:
(b-a).quantize(...) == Decimal(1./3.).quantize(...)
So, my question is: Is there a cleaner way of doing this? How would you write tests for Decimals?
You are not using Decimal the right way.
>>> from decimal import *
>>> Decimal(1./3.) # Your code
Decimal('0.333333333333333314829616256247390992939472198486328125')
>>> Decimal("1")/Decimal("3") # My code
Decimal('0.3333333333333333333333333333')
In "your code", you actually perform "classic" floating point division -- then convert the result to a decimal. The error introduced by floats is propagated to your Decimal.
In "my code", I do the Decimal division. Producing a correct (but truncated) result up to the last digit.
Concerning rounding: if you work with monetary data, you must know the rules your business uses for rounding. If you don't, using Decimal will not automagically solve all your problems. Here is an example: $100 to be shared among 3 shareholders.
>>> TWOPLACES = Decimal(10) ** -2
>>> dividende = Decimal("100.00")
>>> john = (dividende / Decimal("3")).quantize(TWOPLACES)
>>> john
Decimal('33.33')
>>> paul = (dividende / Decimal("3")).quantize(TWOPLACES)
>>> georges = (dividende / Decimal("3")).quantize(TWOPLACES)
>>> john+paul+georges
Decimal('99.99')
Oops: $0.01 is missing (a free gift for the bank?).
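A standard way out (a sketch of one possible policy, not the only one) is to round every share down and then assign the leftover cents explicitly, so the shares always sum back to the total:
from decimal import Decimal, ROUND_FLOOR

TWOPLACES = Decimal("0.01")

def split_evenly(total, n):
    # Round each share down, then give the leftover cents to the first share.
    share = (total / n).quantize(TWOPLACES, rounding=ROUND_FLOOR)
    shares = [share] * n
    shares[0] += total - share * n
    return shares

print(split_evenly(Decimal("100.00"), 3))       # [Decimal('33.34'), Decimal('33.33'), Decimal('33.33')]
print(sum(split_evenly(Decimal("100.00"), 3)))  # Decimal('100.00')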
Your question states that you want to do monetary calculations while minding your round-off error. Decimals are a good choice, as they yield EXACT results under addition, subtraction, and multiplication with other Decimals.
Oddly, your example works with the fraction 1/3. I've never deposited exactly "one-third of a dollar" in my bank... it isn't possible, as there is no such monetary unit!
My point is: if you are doing any DIVISION, then you need to understand what you are TRYING to do and what your organization's policies are on this sort of thing... in which case it should be possible to implement what you want with Decimal quantizing.
Now -- if you DO really want to do division of Decimals, and you want to carry arbitrary "exactness" around, you really don't want to use the Decimal object... You want to use the Fraction object.
With that, your example would work like this:
>>> from fractions import Fraction
>>> a = Fraction(1,3)
>>> a
Fraction(1, 3)
>>> b = Fraction(2,3)
>>> b
Fraction(2, 3)
>>> a == b
False
>>> a == b - a
True
>>> a + b == Fraction(1, 1)
True
>>> 2 * a == b
True
OK, well, even a caveat there: Fraction objects are the ratio of two integers, so you'd need to multiply by the right power of 10 and carry that around ad-hoc.
Sound like too much work? Yes... it probably is!
So, head back to the Decimal object; implement quantization/rounding upon Decimal division and Decimal multiplication.
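A minimal sketch of that approach (the helper names mul and div are illustrative, not a standard API): every multiplication and division is immediately quantized to cents, so no hidden extra digits survive between steps.
from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal("0.01")

def mul(a, b):
    # Multiply, then immediately round to cents.
    return (a * b).quantize(CENT, rounding=ROUND_HALF_UP)

def div(a, b):
    # Divide, then immediately round to cents.
    return (a / b).quantize(CENT, rounding=ROUND_HALF_UP)

print(div(Decimal("100.00"), Decimal("3")))  # Decimal('33.33')
print(mul(Decimal("33.33"), Decimal("3")))   # Decimal('99.99')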
Floating-point arithmetic is not exact:
Decimal numbers can be represented exactly. In contrast, numbers like 1.1 and 2.2 do not have exact representations in binary floating point. End users typically would not expect 1.1 + 2.2 to display as 3.3000000000000003 as it does with binary floating point.
You have to choose a resolution and truncate everything past it:
>>> from decimal import *
>>> getcontext().prec = 6
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')
>>> getcontext().prec = 28
>>> Decimal(1) / Decimal(7)
Decimal('0.1428571428571428571428571429')
You will obviously get some rounding error which will grow with the number of operations so you have to choose your resolution carefully.
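For instance, at six significant digits the truncation is visible after a single round trip:
>>> getcontext().prec = 6
>>> x = Decimal(1) / Decimal(7)
>>> x * 7
Decimal('0.999999')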
There is another approach that may work for you:
Continue to do all your calculations in floating point values
When you need to compare for equality, use round(val, places)
For example:
>>> a = 1./3
>>> a
0.33333333333333331
>>> b = 2./3
>>> b
0.66666666666666663
>>> b-a
0.33333333333333331
>>> round(a,2) == round(b-a, 2)
True
If you'd like, create a function equals_to_the_cent():
>>> def equals_to_the_cent(a, b):
... return round(a, 2) == round(b, 2)
...
>>> equals_to_the_cent(a, b)
False
>>> equals_to_the_cent(a, b-a)
True
>>> equals_to_the_cent(1-a, b)
True

How to properly truncate a float/decimal to a specific place after the decimal in python?

In Python 2.7.3, this is the current behavior:
>>> 8./9.
0.8888888888888888
>>> '%.1f' % (8./9.)
'0.9'
Same appears to be true for Decimals:
>>> from decimal import Decimal
>>> Decimal(8) / Decimal(9)
Decimal('0.8888888888888888888888888889')
>>> '%.1f' % (Decimal(8) / Decimal(9))
'0.9'
I would have expected truncation; however, it appears to round. So what are my options for truncating to the tenths place?
FYI, I ask because my current solution seems hacky (but maybe it's the best practice?): it makes a string of the result, finds the period, and simply takes the X digits after the period that I want.
You are looking for the math.floor() function instead:
>>> import math
>>> math.floor(8./9. * 10) / 10
0.8
So my options to truncating to the tenths place?
The Decimal.quantize() method rounds a number to a fixed exponent and it provides control over the rounding mode:
>>> from decimal import Decimal, ROUND_FLOOR
>>> Decimal('0.9876').quantize(Decimal('0.1'), rounding=ROUND_FLOOR)
Decimal('0.9')
Don't use math.floor on Decimal values, because it first coerces them to a binary float, introducing representation error and losing precision:
>>> x = Decimal('1.999999999999999999998')
>>> x.quantize(Decimal('0.1'), rounding=ROUND_FLOOR)
Decimal('1.9')
>>> math.floor(x * 10) / 10
2.0
Multiply by 10, then floor the value.
In some language:
float f = 1.0 / 3;   // note: 1/3 would be integer division, yielding 0
print(f);            // prints 0.3333333333
float q = Math.floor(f * 10) / 10;
print(q);            // prints 0.3
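In Python, the same idea generalizes to any number of places. This is a sketch; as the Decimal answer above warns, the multiply-by-a-power-of-10 trick can itself introduce float error for some inputs.
import math

def truncate(value, places):
    # Scale up, drop the fraction (toward negative infinity), scale back down.
    factor = 10 ** places
    return math.floor(value * factor) / factor

print(truncate(8.0 / 9.0, 1))  # 0.8
print(truncate(0.9876, 2))     # 0.98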

Why do I get 0 as the answer for (1/10) in Python?

I am trying this in the Python 2.7 interpreter.
>>> print 1/10
0
>>> print float(1/10)
0.0
>>> print 1/5
0
>>> a = 1/5
>>> a
0
>>> float(a)
0.0
I want a floating-point value when I divide. Any idea on the logic behind this zero output? Please let me know how to do it correctly.
The float conversion is happening after the integer division. Try this:
>>> print(1.0/10)
0.1
In Python 2.x, / means integer (floor) division by default when used with two integer arguments. In Python 3.x, / always means float division, and // always means floor division. You can get the Python 3.x behaviour in 2.x using:
from __future__ import division
See PEP 238 for a full discussion of the division operators.
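For example, in a Python 2.x session with the future import active:
>>> from __future__ import division
>>> 1/10
0.1
>>> 1//10
0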
You need to do something like:
print (1.0/10)
When you do:
print 1/10
it evaluates to 0 because both operands are integers, so integer division is performed.
And when you do:
print float(1/10)
the expression inside float() is evaluated first, so in essence you get:
print float(0)
which is 0.0.
The integer division 1/10 returns the integer 0, and converting 0 to float just returns 0.0. What you need to do is divide a float by an integer (or vice versa) to get a float. This will work:
>>> 1/10.0
0.1
or
>>> 1.0/10
0.1
The result of integer division is an integer. From the Python docs:
The / (division) and // (floor division) operators yield the quotient of their arguments. The numeric arguments are first converted to a common type. Plain or long integer division yields an integer of the same type; the result is that of mathematical division with the 'floor' function applied to the result.
Try
>>> 1.0/5
0.20000000000000001
>>> 1/5.0
0.20000000000000001
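Note that the 'floor' part matters for negative operands too (Python 2.x):
>>> -1/10
-1
>>> -7/2
-4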
You want to do the below:
float(1)/10
Division in Python 2 between integers returns an integer result. (In Python 3, it returns a float result as you might expect.)
To get a float result, convert at least one of the integers into a float:
>>> print(1/10)
0
>>> print(1/float(10))
0.1
>>> print(float(1)/10)
0.1
>>> print(float(1)/float(10))
0.1
