Inaccuracy in decimals [duplicate] - python

This question already has answers here:
Why python decimal.Decimal precision differs with equable args?
(2 answers)
Closed 8 years ago.
I'm in the process of converting a programme I've made from using floats to decimals.
Obviously the main reason I'm doing this is for accuracy.
I haven't used decimal before so thought I'd have a play first. The first thing I did was this:
>>> x = Decimal(7.2)
>>> x
Decimal('7.20000000000000017763568394002504646778106689453125')
Now, considering decimals are meant to be accurate and avoid the long trailing digits you get with floats, I was pretty surprised to see that happen. It's also gone to 50 decimal places despite the standard precision of 28 (and it doesn't matter what you set the precision to).
Is this a bug (or a feature)? And why is it happening?

Decimal(7.2) creates a decimal from the exact value of the float 7.2. Since that float is already an inexact binary approximation of 7.2, the Decimal faithfully records the approximation, and the inaccuracy is carried over into the result you see there.
To create the exact decimal of 7.2, you need to specify it as a string:
Decimal('7.2')

This happens because you feed in a float literal, which cannot be represented exactly in binary. You should provide a string:
Decimal('7.2')
or use integers:
Decimal(72) / 10
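For illustration, here is a short interactive comparison of the three constructions (the long digit string is the same one shown in the question, since it is the exact value of the float):
>>> from decimal import Decimal
>>> Decimal(7.2)          # from a float: the binary approximation is carried over exactly
Decimal('7.20000000000000017763568394002504646778106689453125')
>>> Decimal('7.2')        # from a string: parsed as the exact decimal 7.2
Decimal('7.2')
>>> Decimal(72) / 10      # from integers: an exact division in Decimal arithmetic
Decimal('7.2')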

Related

How to convert all floats to two decimal places regardless of number of decimal places float has without repeating convert code?

Goal
I want to convert all floats to two decimal places regardless of number of decimal places float has without repeating convert code.
For example, I want to convert
50 to 50.00
50.5 to 50.50
without repeating the convert code again and again. What I mean is explained in the following section - research.
Not what this question is about
This question is NOT about:
Only limiting floats to two decimal places - if there are fewer than two decimal places, I want the number padded out to two decimal places with zeros.
Flooring the decimal or ceiling it.
Rounding the decimal off.
This question is not a duplicate of this question.
That question only answers the first part of my question - convert floats to two decimal places regardless of number of decimal places float has, not the second part - without repeating convert code.
Nor this question.
That is just how to add units before the decimal place. My question is: how to convert all floats to two decimal places, regardless of the number of decimal places the float has, without repeating the convert code.
Research
I found two ways I can achieve the conversion. One is using the decimal module:
from decimal import *
TWOPLACES = Decimal(10) ** -2
print(Decimal('9.9').quantize(TWOPLACES))
Another, without using any other modules:
print(f"{9.9:.2f}")
However, that does not fully answer my question: the conversion code keeps having to be repeated. I keep writing the same conversion again and again. Sadly, my whole program is already almost complete, and it would be quite a waste of time to add this code here and there just so the format comes out right. Is there any way to convert all floats to two decimal places, regardless of the number of decimal places the float has, without repeating the conversion code?
Clarification
What I mean by convert is something like what Dmytro Chasovskyi said: I want all the places with floats in my program, without extra changes, to start operating like decimals. For example, if I had the operation 1.2345 + 2.7 + 3 + 56.1223183 it should be 1.23 + 2.70 + 3.00 + 56.12.
Also, float is a number, not a function.
The bad news is: there is no "float" with "two decimal places".
Floating point numbers are represented internally with a fixed number of digits in base 2. https://floating-point-gui.de/basic/ .
And these are both efficient and accurate enough for almost all calculations we perform with any modern program.
What we normally want is for the human-readable text representation of a number, in all outputs of a program, to show only two digits. And this is controlled wherever your program writes the value to a text file, to the screen, or renders it to an HTML template (which is "writing it to a text file" again).
So it happens that the same syntax that converts a number to text, embedded in another string, also lets you control the exact output of the number. You gave print(f"{9.9:.2f}") as an example. The only thing that looks impractical there is that the number is hardcoded along with its conversion. Typically, the number will be in a variable.
Then, all you have to do is write, wherever you output the number:
print(f"The value is: {myvar:.02f}")
instead of
print(f"The value is: {myvar}")
Or pass that rendered version to whatever function you are calling instead of print. Notice that the word "rendered" here is deliberate: while your program is running, the number is stored in memory in an efficient form, directly usable by the CPU, that is not human readable. Any time you want to "see" the number, you have to convert it into text. It is just that some calls do this implicitly, like print(myvar). So just convert it explicitly in those places: print(f"{myvar:.02f}").
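If you would rather not repeat the format spec at every call site, one option is to wrap it in a tiny helper and call that helper wherever you output a number (a minimal sketch; the name fmt2 is made up here for illustration):
def fmt2(value):
    # render any number as text with exactly two decimal places
    return f"{value:.2f}"

myvar = 9.9
print(f"The value is: {fmt2(myvar)}")        # The value is: 9.90
print(f"Total: {fmt2(1.2345 + 2.7 + 3)}")    # Total: 6.93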
really having 2 decimal places in memory
If you use decimal.Decimal, then yes, there are ways to keep the internal representation of the number at 2 decimal digits;
but then, instead of just converting the number on output, you must convert it into a "2 decimal place" value on every input as well.
That means that whenever ingesting a number into your program, be it typed by the user, read from a binary file or database, or received via wire from a sensor, you have to apply a similar transform to the one used in the output as detailed above. More precisely: you convert your float to a properly formatted string, and then convert that to a decimal.Decimal.
And this will prevent your program from accumulating errors due to base conversion, but you will still need to force the format to 2 decimal places on every output, just like above.
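A minimal sketch of that input-side conversion, assuming the values arrive as plain floats (the helper name to_cents is invented here for illustration):
from decimal import Decimal

TWOPLACES = Decimal('0.01')

def to_cents(x):
    # go through a formatted string so the float's binary noise is not carried over,
    # then pin the Decimal to exactly two decimal places
    return Decimal(f"{x:.2f}").quantize(TWOPLACES)

total = to_cents(1.2345) + to_cents(2.7) + to_cents(3) + to_cents(56.1223183)
print(total)   # 63.05, i.e. 1.23 + 2.70 + 3.00 + 56.12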
Use this function.
def cvt_decimal(value):
    # format any number as a string with exactly two decimal places
    number = float(value)
    return "%.2f" % number

print(cvt_decimal(50))
print(cvt_decimal(50.5))
Output is :
50.00
50.50
You can modify the decimal precision of the context; it applies to any operation between two Decimal values:
import decimal
from decimal import Decimal
decimal.getcontext().prec = 2
a = Decimal('0.12345')
b = Decimal('0.12345')
print(a + b)
Decimal calculations are precise, but they take more time than float calculations; keep that in mind.
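One caveat, easy to verify in a REPL: prec counts significant digits, not digits after the decimal point, so the example above only looks like "two decimal places" because the result is smaller than 1:
import decimal
from decimal import Decimal

decimal.getcontext().prec = 2
print(Decimal('0.12345') + Decimal('0.12345'))   # 0.25   (two significant digits)
print(Decimal('123.45') + Decimal('123.45'))     # 2.5E+2 (still two significant digits)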

How to get the long version of a float?

I am trying to solve problem 26 from Project Euler and I am wondering how to show the long version of a floating-point number. For example, if we have 1/19, how do we get 64, 128, or more digits of that value in Python? An even more useful built-in function would be one that returns the digits after the decimal point until they start repeating. I know that floats technically store digits only up to a certain point and then round off to keep things efficient, memory-wise, but is there a way to override that until you get the repeating part? I would guess that such a function would raise an exception for an irrational number, but is there a function that works for at least rational numbers?
See the Decimal datatype.
from decimal import *
getcontext().prec = 64
print(Decimal(1) / Decimal(19))
https://docs.python.org/3/library/decimal.html
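For 1/19 specifically, the repeating block is 18 digits long, so even a modest precision is enough to watch the cycle start over (the precision of 40 here is just an illustrative choice):
from decimal import Decimal, getcontext

getcontext().prec = 40
print(Decimal(1) / Decimal(19))
# 0.05263157894736842105263157894736842105263
# the 18-digit block 052631578947368421 repeats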

python Decimal() with extreme precision acting funky [duplicate]

This question already has answers here:
Why python decimal.Decimal precision differs with equable args?
(2 answers)
Closed 5 years ago.
I know this has been asked numerous times and I've come across many blogs and SO answers, but this one's making me pull my hair out. I just want to multiply a two-decimal-place number by 100 so I get rid of its decimals:
>>> 4321.90 * 100
432189.99999999994
>>> Decimal(4321.90) * Decimal(100)
Decimal('432189.9999999999636202119291')
I'm scared to use rounding for such a seemingly trivial operation. Would it be safe? What if the precision problem plays tricks on me and the result is close to xxx.5? Can that happen? I do understand the problem at the binary level, but I come from C# and I don't have that problem with .NET's decimal type:
decimal x = 4321.90m;
decimal y = 100m;
Console.WriteLine(x * y);
432190,00
I thought Python's decimal module was supposed to fix that. I'm about to convert the initial value to string and do the math with string manipulations, and I feel bad about it...
The main reason it fails in Python is that 4321.90 is interpreted as a float (you lose precision at that point) and only then converted to Decimal at runtime. In C#, 4321.90m is interpreted as a decimal to begin with. Python simply doesn't support decimals as a built-in type.
But there's an easy way to fix that with Python. Simply use strings:
>>> Decimal('4321.90') * Decimal('100')
Decimal('432190.00')
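If the goal is then to end up with an integer number of "cents", the exact Decimal result converts safely (a small sketch, assuming the value really is hardcoded as a string literal):
from decimal import Decimal

cents = Decimal('4321.90') * 100   # Decimal('432190.00'), exact
print(int(cents))                  # 432190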
I'm about to convert the initial value to string
Yes! (but don't do it by calling str - use a string literal)
and do the math with string manipulations
No!
When hardcoding a decimal value into your source code, you should initialize it from a string literal, not a float literal. With 4321.90, floating-point rounding has already occurred, and building a Decimal won't undo that. With "4321.90", Decimal has the original text you wrote available to perform an exact initialization:
Decimal('4321.90')
Floating point inaccuracy again.
Decimal(number) doesn't change a thing: the value is modified before it hits Decimal.
You can avoid that by passing strings to Decimal, though:
Decimal("4321.90") * Decimal("100")
result:
Decimal('432190.00')
(when it is given strings, Decimal does the parsing itself, so binary floating-point representation and operations are never involved at all)

How can I make numbers more precise in Python? [duplicate]

This question already has answers here:
Is floating point arbitrary precision available?
(5 answers)
Closed 3 years ago.
I'm just learning the basics of Python at the moment and I thought that, as a learning exercise, I'd try writing something that would approximate the number "e". Anyway, it always gives the answer to 11 decimal places, and I want it to give something more like 1000 decimal places. How do I do this?
Are you sure you need to make them "more precise"? Or do you just need to see more digits than Python shows by default?
>>> import math
>>> math.pi
3.141592653589793
>>>
>>> '{0:0.2f}'.format(math.pi)
'3.14'
>>>
>>> '{0:0.30f}'.format(math.pi)
'3.141592653589793115997963468544'
>>>
>>> '{0:0.60f}'.format(math.pi)
'3.141592653589793115997963468544185161590576171875000000000000'
However, note that
Floating point numbers are usually implemented using double in C; information about the precision and internal representation of floating point numbers for the machine on which your program is running is available in sys.float_info
I assure you that pi doesn't go to zero after 48 digits :-)
Almost all machines today use IEEE-754 floating point arithmetic, and almost all platforms map Python floats to IEEE-754 “double precision”.
An IEEE-754 double has 64 bits (8 bytes); with 52 bits of the significand's fraction stored in the memory format (53 bits of effective precision), the total precision is approximately 16 significant decimal digits.
So, to represent a number with higher precision than that, you should use Decimal.
import decimal
decimal.getcontext().prec = 100
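As a concrete sketch of the original exercise, here is one way to approximate e from the series e = 1/0! + 1/1! + 1/2! + ... using Decimal (the guard digits and the cut-off value are choices made for this sketch, not anything prescribed by the module):
from decimal import Decimal, getcontext

getcontext().prec = 110              # work with guard digits beyond the ~100 we want to show
tiny = Decimal(10) ** -105           # once terms drop below this they cannot affect the result

e = Decimal(0)
term = Decimal(1)                    # term holds 1/n!
n = 0
while term > tiny:
    e += term
    n += 1
    term /= n

getcontext().prec = 100
print(+e)                            # unary plus re-rounds the sum to 100 significant digits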
If you want it to be a plain float with a precision of a thousand digits, the short answer is: you can't.
A workaround is to use the decimal module. Here is an example:
import decimal
a = decimal.Decimal('2387324895172987120570935712093570921579217509185712093')
In this case, however, a is not a plain number anymore; it's an instance of the decimal.Decimal class. You can still do math operations with it, though.
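A quick illustration of those operations (the precision of 60 significant digits is just an illustrative choice; the context precision governs how each result is rounded):
import decimal
from decimal import Decimal

decimal.getcontext().prec = 60
a = Decimal('2387324895172987120570935712093570921579217509185712093')
print(a + 1)       # exact: the 55-digit integer fits within 60 significant digits
print(a / 7)       # division, rounded to 60 significant digits
print(a.sqrt())    # square root, also rounded to 60 significant digits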
