Python Decimal vs C# decimal precision [duplicate] - python

I know this has been asked numerous times and I've come across many blogs and SO answers, but this one's making me pull my hair out. I just want to multiply a number that has two decimal places by 100 to get rid of its decimals:
>>> 4321.90 * 100
432189.99999999994
>>> Decimal(4321.90) * Decimal(100)
Decimal('432189.9999999999636202119291')
I'm scared to use rounding for such a seemingly trivial operation. Would it be safe? What if the precision problem plays tricks on me and the result lands close to xxx.5? Can that happen? I do understand the problem at the binary level, but I come from C# and I don't have this problem with .NET's decimal type:
decimal x = 4321.90m;
decimal y = 100m;
Console.WriteLine(x * y);
432190,00
I thought Python's decimal module was supposed to fix that. I'm about to convert the initial value to string and do the math with string manipulations, and I feel bad about it...

The main reason it fails in Python is that 4321.90 is interpreted as a float (you lose precision at that point) and only then converted to a Decimal at runtime. In C#, 4321.90m is interpreted as a decimal to begin with. Python simply has no built-in decimal literal.
But there's an easy way to fix that with Python. Simply use strings:
>>> Decimal('4321.90') * Decimal('100')
Decimal('432190.00')
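And once both operands are exact Decimals, the near-.5 worry goes away: the arithmetic here is exact, and you control any rounding explicitly. A minimal sketch using quantize with an explicit rounding mode (the variable name x is ours):
>>> from decimal import Decimal, ROUND_HALF_UP
>>> x = Decimal('4321.90') * Decimal('100')
>>> x.quantize(Decimal('1'), rounding=ROUND_HALF_UP)
Decimal('432190')
>>> int(x)
432190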

I'm about to convert the initial value to string
Yes! (But don't do it by calling str() on the float; use a string literal.)
and do the math with string manipulations
No!
When hardcoding a decimal value into your source code, you should initialize it from a string literal, not a float literal. With 4321.90, floating-point rounding has already occurred, and building a Decimal won't undo that. With "4321.90", Decimal has the original text you wrote available to perform an exact initialization:
Decimal('4321.90')

Floating point inaccuracy again.
Decimal(number) doesn't change a thing: the value has already been altered before it ever reaches Decimal.
You can avoid that by passing strings to Decimal, though:
Decimal("4321.90") * Decimal("100")
result:
Decimal('432190.00')
(Decimal parses the strings itself, so binary floating point registers and operations never enter the picture at all.)
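One small addition: once one operand is an exact Decimal, you can also mix in plain ints and the arithmetic stays exact, since ints convert to Decimal losslessly:
>>> from decimal import Decimal
>>> Decimal('4321.90') * 100
Decimal('432190.00')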

Related

What is the meaning of the following piece of code ({:.3f}) used in a Decision Tree tutorial? [duplicate]

I don't understand why, when formatting a float value into a string, its precision is not preserved. E.g.:
'%f' % 38.2551994324
returns:
'38.255199'
(4 digits lost!)
For the moment I've worked around it by specifying the precision:
'%.10f' % 38.2551994324
which returns '38.2551994324' as expected… but should I really have to specify manually how many decimal digits I want? Is there a way to simply tell Python to keep all of them? (What should I do, for example, if I don't know how many decimals my number has?)
but should I really have to specify manually how many decimal digits I want?
Yes.
And even when specifying 10 decimal digits, you are still not printing all of them. Floating point numbers don't have that kind of decimal precision anyway: they are binary fractions that merely approximate most decimal numbers. Try this:
>>> format(38.2551994324, '.32f')
'38.25519943239999776096738060005009'
There are many more decimals there than you ever specified.
When formatting a floating point number (be it with '%f' % number, '{:f}'.format(number) or format(number, 'f')), a default number of decimal places is displayed. This is no different from using str() (or '%s' % number, '{}'.format(number) or format(number), which all essentially use str() under the hood); only the number of digits included by default differs. Python versions prior to 3.2 used 12 significant digits for str().
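A quick contrast of those defaults (a sketch; the outputs assume a recent CPython, where str() picks the shortest string that round-trips to the same float):
>>> x = 38.2551994324
>>> '%f' % x            # fixed-point, 6 decimal places by default
'38.255199'
>>> str(x)              # shortest string that round-trips
'38.2551994324'
>>> format(x, '.17g')   # 17 significant digits always round-trip a double
'38.255199432399998'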
If you expect your rational number calculations to work with a specific, precise number of digits, then don't use floating point numbers. Use the decimal.Decimal type instead:
Decimal “is based on a floating-point model which was designed with people in mind, and necessarily has a paramount guiding principle – computers must provide an arithmetic that works in the same way as the arithmetic that people learn at school.” – excerpt from the decimal arithmetic specification.
Decimal numbers can be represented exactly. In contrast, numbers like 1.1 and 2.2 do not have exact representations in binary floating point. End users typically would not expect 1.1 + 2.2 to display as 3.3000000000000003 as it does with binary floating point.
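That difference is easy to demonstrate:
>>> from decimal import Decimal
>>> 1.1 + 2.2
3.3000000000000003
>>> Decimal('1.1') + Decimal('2.2')
Decimal('3.3')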
I would use the modern str.format() method:
>>> '{}'.format(38.2551994324)
'38.2551994324'
The modulo-style formatting is superseded by str.format() as per PEP 3101, though it has never been formally deprecated or removed.

Python - round a float to 2 digits

I need a float variable rounded to 2 decimal places, with the result stored in a new variable (or the same one as before, it doesn't matter), but this is what happens:
>>> a
981.32000000000005
>>> b= round(a,2)
>>> b
981.32000000000005
I need this result in a variable that cannot be a string, since I have to insert it as a float...
>>> print b
981.32
Actually, truncating would also work; I don't need extreme precision in this case.
What you are trying to do is in fact impossible. That's because 981.32 is not exactly representable as a binary floating point value. The closest double precision binary floating point value is:
981.3200000000000500222085975110530853271484375
I suspect that this may come as something of a shock to you. If so, then I suggest that you read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
You might choose to tackle your problem in one of the following two ways (both are sketched after the list):
1. Accept that binary floating point numbers cannot represent such values exactly, and continue to use them. Don't do any rounding at all and keep the full value; when you wish to display the value as text, format it so that only two decimal places are emitted.
2. Use a data type that can represent your number exactly. That means a decimal rather than a binary type. In Python you would use decimal.
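Both options in a short sketch (reusing the value from the question; the names are ours):
>>> a = 981.32000000000005
>>> format(a, '.2f')   # option 1: round only when displaying
'981.32'
>>> from decimal import Decimal
>>> Decimal('981.32')  # option 2: a decimal type stores the value exactly
Decimal('981.32')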
Try this:
Round = lambda x, n: '%.*f' % (int(n), x)  # note: returns a string, not a float
print Round(0.1, 2)
0.10
print Round(0.1, 4)
0.1000
print Round(981.32000000000005, 2)
981.32
Just indicate the number of digits you want as the second argument.
I wrote a solution to this problem. Please try:
from decimal import *
from autorounddecimal.core import adround,decimal_round_digit
decimal_round_digit(Decimal("981.32000000000005")) #=> Decimal("981.32")
adround(981.32000000000005) # just wrap decimal_round_digit
More detail can be found in https://github.com/niitsuma/autorounddecimal
There is a difference between the way Python prints floats and the way it stores floats. For example:
>>> a = 1.0/5.0
>>> a
0.20000000000000001
>>> print a
0.2
It's not actually possible to store an exact representation of many floats, as David Heffernan points out. It can be done if, looking at the float as a fraction, the denominator is a power of 2 (such as 1/4, 3/8, 5/64). Otherwise, due to the inherent limitations of binary, it has to make do with an approximation.
Python recognizes this, and when you use the print function, it will use the nicer representation seen above. This may make you think that Python is storing the float exactly, when in fact it is not, because it's not possible with the IEEE standard float representation. The difference in calculation is pretty insignificant, though, so for most practical purposes it isn't a problem. If you really really need those significant digits, though, use the decimal package.
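A quick check of the power-of-2 point above:
>>> 0.5 + 0.25    # 1/2 and 1/4 are exact binary fractions
0.75
>>> 0.1 + 0.2     # 1/10 and 1/5 have no finite binary expansion
0.30000000000000004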

Inaccuracy in decimals [duplicate]

I'm in the process of converting a programme I've made from using floats to decimals.
Obviously the main reason I'm doing this is for accuracy.
I haven't used decimal before so thought I'd have a play first. The first thing I did was this:
>>> x = Decimal(7.2)
>>> x
Decimal('7.20000000000000017763568394002504646778106689453125')
Now, considering decimals are meant to be accurate and avoid the long trailing digits of floats, I was pretty surprised to see that happen. It has also gone to 50 decimal places despite the standard precision of 28 (and it doesn't matter what you set the precision to).
Is this a bug (or a feature)? And why is it happening?
Decimal(7.2) creates a decimal from the exact value of the float 7.2. Since the float cannot hold 7.2 precisely while a Decimal can, the float's inaccuracy is carried over into the Decimal, yielding the result you see there. Note also that constructing a Decimal is always exact and ignores the context precision (the precision only applies to the results of arithmetic), which is why changing the preset made no difference.
To create the exact decimal of 7.2, you need to specify it as a string:
Decimal('7.2')
This happens because you feed it a float literal that cannot be represented exactly in binary. You should provide a string:
Decimal('7.2')
or use integers:
Decimal(72) / 10
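Both spellings give the same exact value; the division by 10 is exact well within the default 28-digit context:
>>> from decimal import Decimal
>>> Decimal(72) / 10
Decimal('7.2')
>>> Decimal('7.2') == Decimal(72) / 10
True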

How can I make numbers more precise in Python? [duplicate]

I'm just learning the basics of Python at the moment and I thought that, as a learning exercise, I'd try writing something that would approximate the number "e". Anyway, it always gives the answer to 11 decimal places and I want it to give something more like 1000 decimal places. How do I do this?
Are you sure you need to make them "more precise"? Or do you just need to see more digits than Python shows by default?
>>> import math
>>> math.pi
3.141592653589793
>>>
>>> '{0:0.2f}'.format(math.pi)
'3.14'
>>>
>>> '{0:0.30f}'.format(math.pi)
'3.141592653589793115997963468544'
>>>
>>> '{0:0.60f}'.format(math.pi)
'3.141592653589793115997963468544185161590576171875000000000000'
However, note that
Floating point numbers are usually implemented using double in C; information about the precision and internal representation of floating point numbers for the machine on which your program is running is available in sys.float_info
I assure you that pi doesn't go to zero after 48 digits :-)
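You can inspect those limits directly; the values below are what you'll see on the usual IEEE-754 double platform:
>>> import sys
>>> sys.float_info.mant_dig   # bits in the significand
53
>>> sys.float_info.dig        # decimal digits guaranteed to round-trip
15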
Almost all machines today use IEEE-754 floating point arithmetic, and almost all platforms map Python floats to IEEE-754 “double precision”.
An IEEE-754 double has 64 bits (8 bytes), of which 52 bits explicitly store the fraction of the significand; the total precision is roughly 15 to 17 significant decimal digits.
So to represent a number with higher precision than that, you should use Decimal:
import decimal
decimal.getcontext().prec = 100
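As a sketch of how that applies to the original goal (summing the series e = sum of 1/n! with decimal; the names, guard digits and cutoff below are our own choices, not a canonical recipe):
from decimal import Decimal, getcontext
getcontext().prec = 1010          # work with guard digits beyond 1000
e = Decimal(1)                    # running sum, starts at 1/0!
term = Decimal(1)
n = 0
while term.adjusted() > -1006:    # stop once 1/n! can no longer matter
    n += 1
    term /= n                     # term is now 1/n!
    e += term
getcontext().prec = 1000
print(+e)                         # unary plus re-rounds e to 1000 digits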
If you want it to be a number with a precision of a thousand digits, the short answer is: you can't.
As a workaround, you can use the decimal module. Here is an example:
import decimal
a = decimal.Decimal('2387324895172987120570935712093570921579217509185712093')
In this case, however, a is not a float anymore; it's an instance of the decimal.Decimal class. You can still do arithmetic with it, though.
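One thing to watch: arithmetic results are rounded to the context precision (28 digits by default), so for a value this long you would raise the precision before computing with it:
import decimal
decimal.getcontext().prec = 60   # enough digits for this 55-digit value
a = decimal.Decimal('2387324895172987120570935712093570921579217509185712093')
print(a + 1)   # exact: 2387324895172987120570935712093570921579217509185712094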
