I am new to Python, and on this new journey I have encountered this behaviour of the decimal module:
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 4; print(Decimal(7)/Decimal(9));
0.7778 # everything ok
>>>
>>> getcontext().prec = 4; print(Decimal(2).sqrt());
1.414 # why 3 and not 4?
>>>
>>> getcontext().prec = 10;
>>> print(Decimal(10).log10()/Decimal(2).log10());
3.321928094 # why 9 if precision is set to 10?
Looking at https://docs.python.org/2/library/decimal.html I didn't find any mention of that.
Why does it happen?
Thanks for the attention!
At a guess, it's the number of significant digits: there is also a digit in front of the decimal point in the second and third examples (the leading 0 in the first example is not significant).
Note that the fourth bullet in the documentation says:
The decimal module incorporates a notion of significant places so that
1.30 + 1.20 is 2.50.
Compare (using your first example, but with a larger number):
>>> getcontext().prec = 8
>>> print Decimal(7000)/Decimal(9)
777.77778
>>> getcontext().prec = 4
>>> print Decimal(7000)/Decimal(9)
777.8
>>> getcontext().prec = 2
>>> print Decimal(7000)/Decimal(9)
7.8E+2
Unfortunately, most examples in the documentation are restricted to numbers of order 1, so this behaviour doesn't show up clearly.
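A quick way to convince yourself that the context precision counts every digit, not just the digits after the decimal point (a small illustrative check, not taken from the original posts):
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 4
>>> len(Decimal(2).sqrt().as_tuple().digits)              # 1.414 -> four significant digits
4
>>> len((Decimal(7000) / Decimal(9)).as_tuple().digits)   # 777.8 -> still four
4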
I have this value:
a = 1.01010101
I need to take all the digits after the point, convert them into an int, and put that int into a new variable.
So I need an output like this:
b = 01010101
I can't just do this:
a -= 1
b = a*(10**8)
because I don't know the number before the point.
Is it also possible to do it without writing a new function?
Sorry for my English.
Have a good day
The math.trunc() function will give you the integer part:
>>> import math
>>> math.trunc(1.01010101)
1
You can then subtract; however, you'll likely run into IEEE floating-point issues that may be surprising:
>>> a = 1.01010101
>>> a -= math.trunc(a)
>>> a
0.010101010000000077
>>> b = a * 10**8
>>> b
1010101.0000000077
In many cases you can just truncate the last digit to get the expected integer, but I'd suggest reading https://docs.python.org/2/tutorial/floatingpoint.html to get a deeper understanding.
Python has a decimal module that handles base-10 arithmetic more faithfully:
>>> from decimal import Decimal as D
>>> a = D('1.01010101')
>>> a
Decimal('1.01010101')
>>> math.trunc(a)
1
>>> a -= math.trunc(a)
>>> a
Decimal('0.01010101')
>>> a * 10**8
Decimal('1010101.00000000')
>>> b = int(a * 10**8)
>>> b
1010101
In this version there are no floating-point artifacts in the b = ... line.
You can do this:
a = 1.01010101
b = str(a).split('.')[1]
This should give you "01010101".
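If you then need the result as an int (as the question asks), note that the conversion drops the leading zero -- a small follow-up sketch, not part of the answer above:
a = 1.01010101
digits = str(a).split('.')[1]  # '01010101' -- as a string, the leading zero survives
b = int(digits)                # 1010101   -- as an int, the leading zero is gone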
So imagine I have
>>> a = 725692137865927813642341235.00
If I do
>>> sum = a + 1
and afterwards
>>> sum == a
True
This is because a is bigger than a certain threshold.
Is there any trick, like logsumexp, to perform this addition correctly?
PS: a is an np.float64.
If a has to be specifically of type float, then no, that's not possible. In fact, the imprecision is much greater:
>>> a = 725692137865927813642341235.00
>>> a + 10000 == a
True
However, there are other data types that can be used to represent (almost) arbitrary precision decimal values or fractions.
>>> import decimal, fractions
>>> d = decimal.Decimal(a)
>>> d + 1 == d
False
>>> f = fractions.Fraction(a)
>>> f + 1 == f
False
(Note: of course, doing Decimal(a) or Fraction(a) does not magically restore the already lost precision of a; if you want to preserve that, you should pass the full value as a string.)
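For illustration, a rough sketch of the string-versus-float difference (the precision of 50 here is only an assumption, chosen to be large enough to hold the whole value):
from decimal import Decimal, getcontext

getcontext().prec = 50
a = 725692137865927813642341235.00                     # already imprecise as a float
from_float = Decimal(a)                                # inherits the float's representation error
from_str = Decimal('725692137865927813642341235.00')  # exact

print(from_float == from_str)   # False: the float never held the exact value
print(from_str + 1 > from_str)  # True: the Decimal addition does not lose the 1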
0) import decimal
1) set up an appropriate precision on decimal.getcontext() (the .prec attribute)
2) declare your values as decimal.Decimal() instances
>>> import decimal
>>> decimal.getcontext().prec
28
>>> decimal.getcontext().prec = 300
>>> dec_a = decimal.Decimal( '725692137865927813642341235.0' )
It is a pleasure to use the decimal module for solvers that need extremely extended numerical precision.
BONUS:
The decimal module has very powerful context methods that preserve the module's strengths: .add(), .subtract(), .multiply(), .fma(), .power(). With these you can build solver methods of almost unlimited precision, as sketched below.
These decimal.getcontext() methods are definitely worth mastering: your solvers move into another league in precision and undegraded convergence.
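A small sketch of what using the context methods might look like (the precision of 300 mirrors the snippet above; the values are only illustrative):
import decimal

ctx = decimal.getcontext()
ctx.prec = 300

a = decimal.Decimal('725692137865927813642341235.0')
b = ctx.add(a, decimal.Decimal(1))    # context-aware addition, honours prec and rounding
print(b - a)                          # 1.0 -- the increment is not swallowed

# fused multiply-add: computes a*2 + 1 with a single rounding step
c = ctx.fma(a, decimal.Decimal(2), decimal.Decimal(1))
print(c)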
What about dividing a by 100,000, adding the scaled-down 1, and then multiplying it back up again?
E.g.:
a=725692137865927813642341235.00
a /= 100000
a += 0.00001
a *= 100000
I want to be able to compare Decimals in Python. For the sake of making calculations with money, clever people told me to use Decimals instead of floats, so I did. However, if I want to verify that a calculation produces the expected result, how would I go about it?
>>> a = Decimal(1./3.)
>>> a
Decimal('0.333333333333333314829616256247390992939472198486328125')
>>> b = Decimal(2./3.)
>>> b
Decimal('0.66666666666666662965923251249478198587894439697265625')
>>> a == b
False
>>> a == b - a
False
>>> a == b - Decimal(1./3.)
False
So in this example a = 1/3 and b = 2/3, and mathematically b - a = 1/3 = a; however, that comparison fails with these Decimals.
I guess a way to do it is to say that I expect the result to be 1/3, and in Python I write this as
Decimal(1./3.).quantize(...)
and then I can compare it like this:
(b-a).quantize(...) == Decimal(1./3.).quantize(...)
So, my question is: Is there a cleaner way of doing this? How would you write tests for Decimals?
You are not using Decimal the right way.
>>> from decimal import *
>>> Decimal(1./3.) # Your code
Decimal('0.333333333333333314829616256247390992939472198486328125')
>>> Decimal("1")/Decimal("3") # My code
Decimal('0.3333333333333333333333333333')
In "your code", you actually perform "classic" floating point division -- then convert the result to a decimal. The error introduced by floats is propagated to your Decimal.
In "my code", I do the Decimal division. Producing a correct (but truncated) result up to the last digit.
Concerning the rounding. If you work with monetary data, you must know the rules to be used for rounding in your business. If not so, using Decimal will not automagically solve all your problems. Here is an example: $100 to be share between 3 shareholders.
>>> TWOPLACES = Decimal(10) ** -2
>>> dividende = Decimal("100.00")
>>> john = (dividende / Decimal("3")).quantize(TWOPLACES)
>>> john
Decimal('33.33')
>>> paul = (dividende / Decimal("3")).quantize(TWOPLACES)
>>> georges = (dividende / Decimal("3")).quantize(TWOPLACES)
>>> john+paul+georges
Decimal('99.99')
Oops: $0.01 is missing (a free gift for the bank?)
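A common way to handle that missing cent (a sketch, not part of the original answer) is to let the last shareholder absorb the rounding remainder:
>>> TWOPLACES = Decimal(10) ** -2
>>> dividende = Decimal("100.00")
>>> john = (dividende / Decimal("3")).quantize(TWOPLACES)
>>> paul = (dividende / Decimal("3")).quantize(TWOPLACES)
>>> georges = dividende - john - paul   # the rounding remainder lands here
>>> georges
Decimal('33.34')
>>> john + paul + georges == dividende
True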
You state that you want to do monetary calculations while minding your round-off error. Decimals are a good choice, as they yield EXACT results under addition, subtraction, and multiplication with other Decimals.
Oddly, your example shows working with the fraction "1/3". I've never deposited exactly "one-third of a dollar" in my bank... it isn't possible, as there is no such monetary unit!
My point is: if you are doing any DIVISION, then you need to understand what you are TRYING to do and what your organization's policies are on this sort of thing; given that, it should be possible to implement what you want with Decimal quantizing.
Now -- if you DO really want to do division of Decimals, and you want to carry arbitrary "exactness" around, you really don't want to use the Decimal object... You want to use the Fraction object.
With that, your example would work like this:
>>> from fractions import Fraction
>>> a = Fraction(1,3)
>>> a
Fraction(1, 3)
>>> b = Fraction(2,3)
>>> b
Fraction(2, 3)
>>> a == b
False
>>> a == b - a
True
>>> a + b == Fraction(1, 1)
True
>>> 2 * a == b
True
OK, well, there is a caveat even there: Fraction objects are the ratio of two integers, so you'd need to multiply by the right power of 10 and carry that around ad hoc.
Sound like too much work? Yes... it probably is!
So, head back to the Decimal object; implement quantization/rounding upon Decimal division and Decimal multiplication.
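A minimal sketch of what that might look like; the two-decimal-place, half-up rounding policy is an assumption here, not something prescribed by the answer:
from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal('0.01')

def money_div(a, b):
    """Divide two Decimals and round the result to the cent."""
    return (a / b).quantize(CENT, rounding=ROUND_HALF_UP)

def money_mul(a, b):
    """Multiply two Decimals and round the result to the cent."""
    return (a * b).quantize(CENT, rounding=ROUND_HALF_UP)

print(money_div(Decimal('100.00'), Decimal('3')))    # 33.33
print(money_mul(Decimal('19.99'), Decimal('0.21')))  # 4.20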
Floating-point arithmetic is not accurate:
Decimal numbers can be represented exactly. In contrast, numbers like
1.1 and 2.2 do not have exact representations in binary floating point. End users typically would not expect 1.1 + 2.2 to display as
3.3000000000000003 as it does with binary floating point
You have to choose a resolution and truncate everything past it:
>>> from decimal import *
>>> getcontext().prec = 6
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')
>>> getcontext().prec = 28
>>> Decimal(1) / Decimal(7)
Decimal('0.1428571428571428571428571429')
You will obviously get some rounding error, which will grow with the number of operations, so you have to choose your resolution carefully.
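For example (a small illustration of how the division error propagates, not part of the answer above):
>>> getcontext().prec = 6
>>> seventh = Decimal(1) / Decimal(7)   # 0.142857 -- already rounded to 6 digits
>>> seventh * 7
Decimal('0.999999')
>>> seventh * 7 == Decimal(1)
False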
There is another approach that may work for you:
Continue to do all your calculations in floating point values
When you need to compare for equality, use round(val, places)
For example:
>>> a = 1./3
>>> a
0.33333333333333331
>>> b = 2./3
>>> b
0.66666666666666663
>>> b-a
0.33333333333333331
>>> round(a,2) == round(b-a, 2)
True
If you'd like, create a function equals_to_the_cent():
>>> def equals_to_the_cent(a, b):
... return round(a, 2) == round(b, 2)
...
>>> equals_to_the_cent(a, b)
False
>>> equals_to_the_cent(a, b-a)
True
>>> equals_to_the_cent(1-a, b)
True
In Python 2.7.3, this is the current behavior:
>>> 8./9.
0.8888888888888888
>>> '%.1f' % (8./9.)
'0.9'
The same appears to be true for Decimals:
>>> from decimal import Decimal
>>> Decimal(8) / Decimal(9)
Decimal('0.8888888888888888888888888889')
>>> '%.1f' % (Decimal(8) / Decimal(9))
'0.9'
I would have expected truncation; however, it appears to round. So what are my options for truncating to the tenths place?
FYI, I ask because my current solution seems hacky (but maybe it's the best practice?): it makes a string of the result, finds the period, and simply takes the X digits after the period that I want.
You are looking for the math.floor() function instead:
>>> import math
>>> math.floor(8./9. * 10) / 10
0.8
So what are my options for truncating to the tenths place?
The Decimal.quantize() method rounds a number to a fixed exponent and it provides control over the rounding mode:
>>> from decimal import Decimal, ROUND_FLOOR
>>> Decimal('0.9876').quantize(Decimal('0.1'), rounding=ROUND_FLOOR)
Decimal('0.9')
Don't use math.floor on Decimal values, because it first coerces them to a binary float, introducing representation error and losing precision:
>>> x = Decimal('1.999999999999999999998')
>>> x.quantize(Decimal('0.1'), rounding=ROUND_FLOOR)
Decimal('1.9')
>>> math.floor(x * 10) / 10
2.0
Multiply by 10, then floor the value.
In some language (pseudocode):
float f = 1.0 / 3;
print(f)  // prints 0.3333333333
float q = Math.floor(f * 10) / 10;
print(q)  // prints 0.3
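The same idea in Python, for reference (a small sketch, not from the original answer):
import math

f = 1.0 / 3
print(f)                     # roughly 0.3333333333333333
q = math.floor(f * 10) / 10
print(q)                     # 0.3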
I have 3 questions pertaining to decimal arithmetic in Python, all 3 of which are best asked inline:
1)
>>> from decimal import getcontext, Decimal
>>> getcontext().prec = 6
>>> Decimal('50.567898491579878') * 1
Decimal('50.5679')
>>> # How is this a precision of 6? If the decimal counts whole numbers as
>>> # part of the precision, is that actually still precision?
>>>
and
2)
>>> from decimal import getcontext, Decimal
>>> getcontext().prec = 6
>>> Decimal('50.567898491579878')
Decimal('50.567898491579878')
>>> # Shouldn't that have been rounded to 6 digits on instantiation?
>>> Decimal('50.567898491579878') * 1
Decimal('50.5679')
>>> # Instead, it only follows my precision setting set when operated on.
>>>
3)
>>> # Now I want to save the value to my database as a "total" with 2 places.
>>> from decimal import Decimal
>>> # Is the following the correct way to get the value into 2 decimal places,
>>> # or is there a "better" way?
>>> x = Decimal('50.5679').quantize(Decimal('0.00'))
>>> x # Just wanted to see what the value was
Decimal('50.57')
>>> foo_save_value_to_db(x)
>>>
1) Precision follows significant figures, not fractional digits. The former is more useful in scientific applications.
2) Raw data is never mangled on instantiation; instead, the rounding to the context precision happens when the value is operated upon.
3) Yes, quantize() is how it's done.
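One extra detail worth knowing (documented behaviour of the decimal module, though not mentioned in the answer above): unary plus applies the current context to a value, so you can force the rounding without multiplying by 1:
>>> from decimal import getcontext, Decimal
>>> getcontext().prec = 6
>>> Decimal('50.567898491579878')    # stored exactly on instantiation
Decimal('50.567898491579878')
>>> +Decimal('50.567898491579878')   # unary plus applies the context precision
Decimal('50.5679')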