I have written a program in Python that does currency calculations, among other things, and I have now run into the IEEE 754 floating-point problem.
Is there a way to do mathematical operations like addition in Python and get the mathematically correct value?
For example:
0.1 + 0.1 + 0.1 = 0.3
I already tried the decimal module, but it does not seem to give me correctly rounded digits at the decimal places I round to.
Greetings.
When using the decimal module, you need to initialize with strings; initializing with floats will preserve in high resolution the imprecision of the underlying float type. For example, using Decimal(0.1) actually produces a value:
Decimal('0.1000000000000000055511151231257827021181583404541015625')
because that's the exact decimal expansion of the binary value that the float 0.1 actually stores.
So instead of testing Decimal(0.1) + Decimal(0.1) + Decimal(0.1) == Decimal(0.3) (where you compute in fixed decimal precision, but initialize with non-decimal values), use Decimal("0.1") + Decimal("0.1") + Decimal("0.1") == Decimal("0.3") and you'll get the expected results.
Aside from that, you'll want to adjust the decimal context's .prec attribute to ensure you're preserving the intended amount of precision, but usually it's okay to preserve the full precision and just round to the correct precision later with the quantize method.
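For instance, here is a minimal sketch of that approach (string initialization, full default precision, and a final quantize only for display):
>>> from decimal import Decimal
>>> total = Decimal("0.1") + Decimal("0.1") + Decimal("0.1")
>>> total
Decimal('0.3')
>>> total.quantize(Decimal("0.01"))   # round to two places for display
Decimal('0.30')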
Did you try setting the context? For instance, the code below prints 0.3 as expected:
import decimal
decimal.getcontext().prec = 1
a = decimal.Decimal(0.1)
print(a + a + a)
Related
I know that most decimals don't have an exact floating point representation (Is floating point math broken?).
But I don't see why 4*0.1 is printed nicely as 0.4, but 3*0.1 isn't, when
both values actually have ugly decimal representations:
>>> 3*0.1
0.30000000000000004
>>> 4*0.1
0.4
>>> from decimal import Decimal
>>> Decimal(3*0.1)
Decimal('0.3000000000000000444089209850062616169452667236328125')
>>> Decimal(4*0.1)
Decimal('0.40000000000000002220446049250313080847263336181640625')
The simple answer is because 3*0.1 != 0.3 due to quantization (roundoff) error (whereas 4*0.1 == 0.4 because multiplying by a power of two is usually an "exact" operation). Python tries to find the shortest string that would round to the desired value, so it can display 4*0.1 as 0.4 as these are equal, but it cannot display 3*0.1 as 0.3 because these are not equal.
You can use the .hex method in Python to view the internal representation of a number (basically, the exact binary floating point value, rather than the base-10 approximation). This can help to explain what's going on under the hood.
>>> (0.1).hex()
'0x1.999999999999ap-4'
>>> (0.3).hex()
'0x1.3333333333333p-2'
>>> (0.1*3).hex()
'0x1.3333333333334p-2'
>>> (0.4).hex()
'0x1.999999999999ap-2'
>>> (0.1*4).hex()
'0x1.999999999999ap-2'
0.1 is 0x1.999999999999a times 2^-4. The "a" at the end means the digit 10 - in other words, 0.1 in binary floating point is very slightly larger than the "exact" value of 0.1 (because the final 0x0.99 is rounded up to 0x0.a). When you multiply this by 4, a power of two, the exponent shifts up (from 2^-4 to 2^-2) but the number is otherwise unchanged, so 4*0.1 == 0.4.
However, when you multiply by 3, the tiny little difference between 0x0.99 and 0x0.a0 (0x0.07) magnifies into a 0x0.15 error, which shows up as a one-digit error in the last position. This causes 0.1*3 to be very slightly larger than the rounded value of 0.3.
Python 3's float repr is designed to be round-trippable, that is, the value shown should be exactly convertible into the original value (float(repr(f)) == f for all floats f). Therefore, it cannot display 0.3 and 0.1*3 exactly the same way, or the two different numbers would end up the same after round-tripping. Consequently, Python 3's repr engine chooses to display one with a slight apparent error.
repr (and str in Python 3) will put out as many digits as required to make the value unambiguous. In this case the result of the multiplication 3*0.1 isn't the closest value to 0.3 (0x1.3333333333333p-2 in hex), it's actually one LSB higher (0x1.3333333333334p-2) so it needs more digits to distinguish it from 0.3.
On the other hand, the multiplication 4*0.1 does get the closest value to 0.4 (0x1.999999999999ap-2 in hex), so it doesn't need any additional digits.
You can verify this quite easily:
>>> 3*0.1 == 0.3
False
>>> 4*0.1 == 0.4
True
I used hex notation above because it's nice and compact and shows the bit difference between the two values. You can do this yourself using e.g. (3*0.1).hex(). If you'd rather see them in all their decimal glory, here you go:
>>> Decimal(3*0.1)
Decimal('0.3000000000000000444089209850062616169452667236328125')
>>> Decimal(0.3)
Decimal('0.299999999999999988897769753748434595763683319091796875')
>>> Decimal(4*0.1)
Decimal('0.40000000000000002220446049250313080847263336181640625')
>>> Decimal(0.4)
Decimal('0.40000000000000002220446049250313080847263336181640625')
Here's a simplified conclusion from other answers.
If you check a float on Python's command line or print it, it goes through the repr function, which creates its string representation.
Starting with version 3.2, Python's str and repr use a complex rounding scheme which prefers nice-looking decimals if possible, but uses more digits where necessary to guarantee a bijective (one-to-one) mapping between floats and their string representations.
This scheme guarantees that repr(float(s)) looks nice for simple decimals, even if they can't be represented precisely as floats (e.g. when s = "0.1").
At the same time it guarantees that float(repr(x)) == x holds for every float x.
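A quick way to convince yourself of that round-trip guarantee is a random spot-check (my own sketch, not part of the scheme itself):
import random
import struct

# Spot-check the round-trip guarantee float(repr(x)) == x on random doubles.
for _ in range(100_000):
    bits = random.getrandbits(64)
    x = struct.unpack("<d", bits.to_bytes(8, "little"))[0]
    if x != x or x in (float("inf"), float("-inf")):
        continue  # skip NaN and infinities
    assert float(repr(x)) == x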
This is not really specific to Python's implementation; it should apply to any float-to-decimal-string conversion function.
A floating point number is essentially a binary number, but in scientific notation with a fixed limit of significant figures.
The inverse of any number that has a prime factor not shared with the base will always have a recurring representation after the radix point. For example 1/7 has a prime factor, 7, that is not shared with 10, and therefore has a recurring decimal representation; likewise 1/10 has prime factors 2 and 5, and since 5 is not shared with the base 2, 0.1 cannot be exactly represented by a finite number of bits after the radix point.
Since 0.1 has no exact representation, a function that converts the approximation to a decimal string will usually try to approximate certain values so that they don't produce unintuitive results like 0.1000000000004121.
Since floating point is in scientific notation, any multiplication by a power of the base only affects the exponent part of the number. For example 1.231e+2 * 100 = 1.231e+4 in decimal notation, and likewise 1.00101010e11 * 100 = 1.00101010e101 in binary notation (where 100 is the binary value 4). Multiplying by a non-power of the base also affects the significant digits, e.g. 1.2e1 * 3 = 3.6e1.
Depending on the algorithm used, it may try to guess common decimals based on the significant figures only. Both 0.1 and 0.4 have the same significant figures in binary, because their floats are essentially truncations of (8/5)(2^-4) and (8/5)(2^-2) respectively. If the algorithm identifies the 8/5 sigfig pattern as the decimal 1.6, then it will work on 0.1, 0.2, 0.4, 0.8, etc. It may also have magic sigfig patterns for other combinations, such as the float 3 divided by the float 10 and other magic patterns statistically likely to be formed by division by 10.
In the case of 3*0.1, the last few significant figures will likely be different from those of dividing a float 3 by a float 10, causing the algorithm to fail to recognize the magic number for the 0.3 constant, depending on its tolerance for precision loss.
Edit:
https://docs.python.org/3.1/tutorial/floatingpoint.html
Interestingly, there are many different decimal numbers that share the same nearest approximate binary fraction. For example, the numbers 0.1 and 0.10000000000000001 and 0.1000000000000000055511151231257827021181583404541015625 are all approximated by 3602879701896397 / 2 ** 55. Since all of these decimal values share the same approximation, any one of them could be displayed while still preserving the invariant eval(repr(x)) == x.
There is no tolerance for precision loss, if float x (0.3) is not exactly equal to float y (0.1*3), then repr(x) is not exactly equal to repr(y).
>>> .1+.1+.1+.1 ==.4
True
>>> .1+.1+.1 ==.3
False
>>>
The above is output from the Python interpreter. I understand that floating-point arithmetic is done in base 2 and stored in binary in the machine, and that this is why differences like the ones above arise in calculations.
Now I found that .4 = .011(0011) [the part in parentheses repeats infinitely; this is the binary representation of this fraction]. Since this cannot be stored exactly, an approximate value will be stored.
Similarly, 0.3 = .01(0011).
So neither 0.4 nor 0.3 can be stored exactly internally.
But then what is the reason for Python to return True for the first comparison and False for the second, if neither value can be compared exactly?
_______________________________________________________________________________
I did some research and found the following :
>>> Decimal(.4)
Decimal('0.40000000000000002220446049250313080847263336181640625')
>>> Decimal(.1+.1+.1+.1)
Decimal('0.40000000000000002220446049250313080847263336181640625')
>>> Decimal(.1+.1+.1)
Decimal('0.3000000000000000444089209850062616169452667236328125')
>>> Decimal(.3)
Decimal('0.299999999999999988897769753748434595763683319091796875')
>>> Decimal(.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
This probably explains why the additions behave the way they do, assuming that Decimal shows the exact value of the number stored underneath.
But then what is the reason for Python to return True for the first comparison and False for the second, if neither value can be compared exactly?
Floating-point numbers absolutely can be compared for equality. Problems arise only when you expect exact equality to be preserved by an approximate computation. But the semantics of floating-point equality comparison is perfectly well defined.
When you write 0.1 in a program, this is rounded to the nearest IEEE 754 binary64 floating-point number, which is the real number 0.1000000000000000055511151231257827021181583404541015625, or 0x1.999999999999ap−4 in hexadecimal notation (the ‘p−4’ part means × 2⁻⁴). Every (normal) binary64 floating-point number is a real number of the form ±2ⁿ × (1 + 𝑓/2⁵²), where 𝑛 and 𝑓 are integers with −1022 ≤ 𝑛 ≤ 1023 and 0 ≤ 𝑓 < 2⁵²; this one is the nearest such number to 0.1.
When you add that to itself three times in floating-point arithmetic, the exact result 0.3000000000000000166533453693773481063544750213623046875 is rounded to 0.3000000000000000444089209850062616169452667236328125 or 0x1.3333333333334p−2 (since there are only 53 bits of precision available), but when you write 0.3, you get 0.299999999999999988897769753748434595763683319091796875 or 0x1.3333333333333p−2 which is slightly closer to 0.3.
However, four times 0.1000000000000000055511151231257827021181583404541015625 or 0x1.999999999999ap−4 is 0.4000000000000000222044604925031308084726333618164062500 or 0x1.999999999999ap−2, which is also the closest floating-point number to 0.4 and hence is what you get when you write 0.4 in a program. So when you write 4*0.1, the result is exactly the same floating-point number as when you write 0.4.
Now, you didn't write 4*0.1—instead you wrote .1 + .1 + .1 + .1. But it turns out there is a theorem in binary floating-point arithmetic that x + x + x + x—that is, fl(fl(fl(𝑥 + 𝑥) + 𝑥) + 𝑥)—always yields exactly 4𝑥 without rounding (except when it overflows), in spite of the fact that x + x + x or fl(fl(𝑥 + 𝑥) + 𝑥) = fl(3𝑥) may be rounded and not exactly equal to 3𝑥. (Note that fl(𝑥 + 𝑥) = fl(2𝑥) is always equal to 2𝑥, again ignoring overflow, because it's just a matter of adjusting the exponent.)
It just happens that any rounding error committed by adding the fourth term cancels out whatever rounding error may have been committed by adding the third!
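If you'd like to see that theorem in action rather than take it on faith, here is a quick empirical spot-check (my own sketch; it exercises the claim, it does not prove it):
import random

# Empirical spot-check of the claim above: x + x + x + x always lands exactly
# on 4*x (no overflow happens in this value range).
for _ in range(100_000):
    x = random.uniform(-1e6, 1e6)
    assert x + x + x + x == 4 * x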
I want to round the number below to two decimal places. The result should be 33.39, but Python gives 33.38, presumably because the 5 rounds to even and hence down to 8.
round(33.385, 2)
This actually has nothing to do with round-to-nearest/even. 33.385 is a decimal number, but is represented in your hardware as an approximation in binary floating point. The decimal module can show you the exact decimal value of that binary approximation:
>>> import decimal
>>> decimal.Decimal(33.385)
Decimal('33.38499999999999801048033987171947956085205078125')
That's why it rounds to 33.38: the exact value stored is slightly closer to 33.38 than to 33.39.
If you need exact decimal results, don't use your hardware's binary floating point. For example, you could use the decimal module for this with the ROUND_HALF_UP rounding mode.
For example,
>>> import decimal
>>> from decimal import Decimal as D
>>> x = D("33.385")
>>> x
Decimal('33.385')
>>> twodigits = D("0.01")
>>> x.quantize(twodigits) # nearest/even is the default
Decimal('33.38')
>>> x.quantize(twodigits, rounding=decimal.ROUND_HALF_EVEN) # same thing
Decimal('33.38')
>>> x.quantize(twodigits, rounding=decimal.ROUND_HALF_UP) # what you want
Decimal('33.39')
>>> float(_) # back to binary float
33.39
>>> D(_) # but what the binary 33.39 _really_ is
Decimal('33.3900000000000005684341886080801486968994140625')
Actually, round() rounds to the closest value at the decimal place you specified. There are some inconsistencies with floating-point numbers in programming languages, because the computer cannot represent most decimals exactly with the given bits, so the language already shows you what you need to see rather than the real stored number. You can see the real number by using the decimal module; that way you can see why numbers ending in 5 sometimes round up and sometimes round down.
If you want to round down then you can use the math module to do it easily.
import math
math.floor(33.385 * 100) / 100
And if you want to round up then you can do the same with math.ceil
import math
math.ceil(33.385 * 100) / 100
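If you need this for an arbitrary number of decimal places, here is a small generalization of the two snippets above (the floor_to / ceil_to helper names are my own):
import math

def floor_to(value, dec_places=2):
    factor = 10 ** dec_places
    return math.floor(value * factor) / factor

def ceil_to(value, dec_places=2):
    factor = 10 ** dec_places
    return math.ceil(value * factor) / factor

print(floor_to(33.385))   # 33.38
print(ceil_to(33.385))    # 33.39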
Or, if you want, you can still use the round() function, but tweak the input a bit.
Round up:
decimal_point = 2
change = 0.3 / 10**decimal_point
round(33.385 + change, decimal_point)
Round down:
decimal_point = 2
change = 0.3 / 10**decimal_point
round(33.385 - change, decimal_point)
Just a result of floating-point approximation by computers.
Computers work in binary, and not all floating-point values have an exact binary representation.
A famous example:
print(0.1 + 0.2 == 0.3)
False
Wonder why?
The result is 0.30000000000000004, because 0.2 in binary is the recurring fraction 0.001100110011...
Use #Rfroes87's suggestion to convert your numbers to an integer scale and then round off to the nearest integer (a sketch follows below).
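For example, a minimal sketch of that integer-scale idea for currency, using integer cents as the unit (my own example values):
# Work in integer cents instead of fractional currency units; integer
# arithmetic is exact, so the comparison behaves as expected.
a_cents = 10    # 0.10
b_cents = 20    # 0.20
total_cents = a_cents + b_cents
print(total_cents == 30)            # True
print(f"{total_cents / 100:.2f}")   # 0.30, formatting only for display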
You can do the following:
import math
math.ceil(33.385 * 100.0) / 100.0
Reference: Round up to Second Decimal Place in Python
I know that questions about rounding in Python have been asked multiple times already, but the answers did not help me. I'm looking for a method that rounds a float half up and returns a float. The method should also accept a parameter that defines the decimal place to round to. I wrote a method that implements this kind of rounding, but I don't think it looks elegant at all.
import decimal

def round_half_up(number, dec_places):
    s = str(number)
    d = decimal.Decimal(s).quantize(
        decimal.Decimal(10) ** -dec_places,
        rounding=decimal.ROUND_HALF_UP)
    return float(d)
I don't like that I have to convert the float to a string (to avoid floating-point inaccuracy) and then work with the decimal module. Do you have any better solutions?
Edit: As pointed out in the answers below, the solution to my problem is not that obvious, as correct rounding requires a correct representation of the numbers in the first place, and that is not the case with float. So I would expect the following code
def round_half_up(number, dec_places):
    d = decimal.Decimal(number).quantize(
        decimal.Decimal(10) ** -dec_places,
        rounding=decimal.ROUND_HALF_UP)
    return float(d)
(which differs from the code above only in that the float is converted directly into a Decimal instead of going through a string first) to return 2.18 when used like this: round_half_up(2.175, 2). But it doesn't, because Decimal(2.175) returns Decimal('2.17499999999999982236431605997495353221893310546875'), which is how the float is represented by the computer.
Surprisingly, the first version returns 2.18 because the float is converted to a string first. It seems that str() performs an implicit rounding back to the number that was originally meant, so there are effectively two roundings taking place. Even though this is the result I would expect, it is technically wrong.
Rounding is surprisingly hard to do right, because you have to handle floating-point calculations very carefully. If you are looking for an elegant solution (short, easy to understand), what you have looks like a good starting point. To be correct, you should replace decimal.Decimal(str(number)) with creating the decimal from the number itself, which will give you a decimal version of its exact representation:
d = Decimal(number).quantize(...)...
Decimal(str(number)) effectively rounds twice, as formatting the float into its string representation performs its own rounding. This is because str() of a float value won't try to print the full decimal representation of the float; it only prints enough digits to ensure that you get the same float back if you pass those exact digits to the float constructor.
If you want to retain correct rounding but avoid depending on the big and complex decimal module, you can certainly do it, but you'll still need some way to implement the exact arithmetic needed for correct rounding. For example, you can use fractions:
import fractions, math

def round_half_up(number, dec_places=0):
    sign = math.copysign(1, number)
    number_exact = abs(fractions.Fraction(number))
    shifted = number_exact * 10**dec_places
    shifted_trunc = int(shifted)
    if shifted - shifted_trunc >= fractions.Fraction(1, 2):
        result = (shifted_trunc + 1) / 10**dec_places
    else:
        result = shifted_trunc / 10**dec_places
    return sign * float(result)

assert round_half_up(1.49) == 1
assert round_half_up(1.5) == 2
assert round_half_up(1.51) == 2
assert round_half_up(2.49) == 2
assert round_half_up(2.5) == 3
assert round_half_up(2.51) == 3
Note that the only tricky part in the above code is the precise conversion of a floating-point to a fraction, and that can be off-loaded to the as_integer_ratio() float method, which is what both decimals and fractions do internally. So if you really want to remove the dependency on fractions, you can reduce the fractional arithmetic to pure integer arithmetic; you stay within the same line count at the expense of some legibility:
def round_half_up(number, dec_places=0):
    sign = math.copysign(1, number)
    exact = abs(number).as_integer_ratio()
    shifted = (exact[0] * 10**dec_places), exact[1]
    shifted_trunc = shifted[0] // shifted[1]
    difference = (shifted[0] - shifted_trunc * shifted[1]), shifted[1]
    if difference[0] * 2 >= difference[1]:  # difference >= 1/2
        shifted_trunc += 1
    return sign * (shifted_trunc / 10**dec_places)
Note that testing these functions highlights the approximations performed when creating floating-point numbers. For example, print(round_half_up(2.175, 2)) prints 2.17 because the decimal number 2.175 cannot be represented exactly in binary, so it is replaced by an approximation that happens to be slightly smaller than the 2.175 decimal. The function receives that value, finds it smaller than the actual fraction corresponding to the 2.175 decimal, and decides to round it down. This is not a quirk of the implementation; the behavior derives from properties of floating-point numbers and is also present in the round built-in of Python 3 and 2.
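You can see the approximation the function actually receives directly (an illustrative check, matching the value quoted in the question's edit):
>>> from fractions import Fraction
>>> from decimal import Decimal
>>> Fraction(2.175) < Fraction("2.175")   # the stored float really is below 2.175
True
>>> Decimal(2.175)
Decimal('2.17499999999999982236431605997495353221893310546875')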
I don't like that I have to convert the float to a string (to avoid floating-point inaccuracy) and then work with the decimal module. Do you have any better solutions?
Yes; use Decimal to represent your numbers throughout your whole program, if you need to represent numbers such as 2.675 exactly and have them round to 2.68 instead of 2.67.
There is no other way. The floating point number which is shown on your screen as 2.675 is not the real number 2.675; in fact, it is very slightly less than 2.675, which is why it gets rounded down to 2.67:
>>> 2.675 - 2
0.6749999999999998
It only shows in string form as '2.675' because that happens to be the shortest string such that float(s) == 2.6749999999999998. Note that this longer representation (with lots of 9s) isn't exact either.
However you write your rounding function, it is not possible for my_round(2.675, 2) to round up to 2.68 and also for my_round(2 + 0.6749999999999998, 2) to round down to 2.67; because the inputs are actually the same floating point number.
So if your number 2.675 ever gets converted to a float and back again, you have already lost the information about whether it should round up or down. The solution is not to make it float in the first place.
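In practice that means accepting the user's input as text and never converting it to float; for example (a small sketch, with a hypothetical user_input variable):
>>> from decimal import Decimal, ROUND_HALF_UP
>>> user_input = "2.675"                      # keep the text the user typed
>>> Decimal(user_input).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
Decimal('2.68')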
After trying for a very long time to produce an elegant one-line function, I ended up getting something that is comparable to a dictionary in size.
I would say the simplest way to do this is just to
def round_half_up(inp, dec_places):
    return round(inp + 0.0000001, dec_places)
I acknowledge that this is not accurate in every case, but it should work if you just want a simple, quick workaround.
Python Decimal doesn't support being constructed from float; it expects that you have to convert float to a string first.
This is very inconvenient since standard string formatters for float require that you specify number of decimal places rather than significant places. So if you have a number that could have as many as 15 decimal places you need to format as Decimal("%.15f" % my_float), which will give you garbage at the 15th decimal place if you also have any significant digits before decimal (Decimal("%.15f" % 100000.3) == Decimal('100000.300000000002910')).
Can someone suggest a good way to convert from float to Decimal preserving value as the user has entered, perhaps limiting number of significant digits that can be supported?
Python <2.7
"%.15g" % f
Or in Python 3.0:
format(f, ".15g")
Python 2.7+, 3.2+
Just pass the float to Decimal constructor directly, like this:
from decimal import Decimal
Decimal(f)
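For example, using the 100000.3 value from the question (both branches shown side by side):
>>> from decimal import Decimal
>>> Decimal("%.15g" % 100000.3)      # older Pythons: round-trip via a string
Decimal('100000.3')
>>> Decimal(100000.3)                # newer Pythons: the exact value of the float
Decimal('100000.300000000002910383045673370361328125')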
I suggest this
>>> a = 2.111111
>>> a
2.1111110000000002
>>> str(a)
'2.111111'
>>> decimal.Decimal(str(a))
Decimal('2.111111')
You said in your question:
Can someone suggest a good way to convert from float to Decimal preserving value as the user has entered
But every time the user enters a value, it is entered as a string, not as a float. You are converting it to a float somewhere. Convert it to a Decimal directly instead and no precision will be lost.
You can convert and then quantize to keep 5 digits after the decimal point via
Decimal(float).quantize(Decimal("1.00000"))
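A quick illustration, assuming the default ROUND_HALF_EVEN context:
>>> from decimal import Decimal
>>> Decimal(0.1).quantize(Decimal("1.00000"))
Decimal('0.10000')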
Python does support creating a Decimal from a float; you just pass it through str() first. And the string conversion doesn't lose any precision: the float you are converting never had that extra precision in the first place (otherwise you wouldn't need Decimal).
I think the confusion here is that we can create float literals in decimal format, but as soon as the interpreter consumes that literal the inner representation becomes a floating point number.
The "official" string representation of a float is given by the repr() built-in:
>>> repr(1.5)
'1.5'
>>> repr(12345.678901234567890123456789)
'12345.678901234567'
You can use repr() instead of a formatted string; the result won't contain any unnecessary garbage.
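For example, combining repr() with the Decimal constructor (a short illustration using the value above):
>>> from decimal import Decimal
>>> Decimal(repr(12345.678901234567890123456789))
Decimal('12345.678901234567')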
When you say "preserving value as the user has entered", why not just store the user-entered value as a string, and pass that to the Decimal constructor?
The main answer is slightly misleading. The g format ignores any leading zeroes after the decimal point, so format(0.012345, ".2g") returns '0.012', which has three decimal places. If you need a hard limit on the number of decimal places, use the f formatter: format(0.012345, ".2f") returns '0.01'.
I've come across the same problem/question today, and I'm not completely satisfied with any of the answers given so far. The core of the question seems to be:
Can someone suggest a good way to convert from float to Decimal [...] perhaps limiting number of significant digits that can be supported?
Short answer / solution: Yes.
from decimal import Decimal

def ftod(val, prec = 15):
    return Decimal(val).quantize(Decimal(10)**-prec)
Long Answer:
As nosklo pointed out, it is not possible to preserve the user's input after it has been converted to float.
It is possible, though, to round that value with a reasonable precision and convert it into a Decimal.
In my case I only need 2 to 4 digits after the separator, but they need to be accurate. Let's consider the classic 0.1 + 0.2 == 0.3 check.
>>> 0.1 + 0.2 == 0.3
False
Now let's do this with conversion to decimal (complete example):
>>> from decimal import Decimal
>>> def ftod(val, prec = 15): # float to Decimal
... return Decimal(val).quantize(Decimal(10)**-prec)
...
>>> ftod(0.1) + ftod(0.2) == ftod(0.3)
True
The answer by Ryabchenko Alexander was really helpful for me. It only lacks a way to dynamically set the precision, a feature I want (and maybe also need). The Decimal documentation FAQ gives an example of how to construct the required quantize() argument:
>>> Decimal(10)**-4
Decimal('0.0001')
Here's how the numbers look when printed with 18 digits after the decimal separator (coming from C programming, I like the fancy Python format expressions):
>>> for x in [0.1, 0.2, 0.3, ftod(0.1), ftod(0.2), ftod(0.3)]:
... print("{:8} {:.18f}".format(type(x).__name__+":", x))
...
float: 0.100000000000000006
float: 0.200000000000000011
float: 0.299999999999999989
Decimal: 0.100000000000000000
Decimal: 0.200000000000000000
Decimal: 0.300000000000000000
And lastly, I want to know for which precision the comparison still works:
>>> for p in [15, 16, 17]:
... print("Rounding precision: {}. Check 0.1 + 0.2 == 0.3 is {}".format(p,
... ftod(0.1, p) + ftod(0.2, p) == ftod(0.3, p)))
...
Rounding precision: 15. Check 0.1 + 0.2 == 0.3 is True
Rounding precision: 16. Check 0.1 + 0.2 == 0.3 is True
Rounding precision: 17. Check 0.1 + 0.2 == 0.3 is False
15 seems to be a good default for maximum precision. That should work on most systems. If you need more info, try:
>>> import sys
>>> sys.float_info
sys.float_info(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308, min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15, mant_dig=53, epsilon=2.220446049250313e-16, radix=2, rounds=1)
With float having a 53-bit mantissa on my system, I calculated the number of decimal digits:
>>> import math
>>> math.log10(2**53)
15.954589770191003
That tells me that with 53 bits we get almost 16 digits, so 15 is fine for the precision value and should always work; 16 is error-prone and 17 definitely causes trouble (as seen above).
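Incidentally, sys.float_info already exposes that figure directly (the dig field in the output above):
>>> import sys
>>> sys.float_info.dig
15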
Anyway ... in my specific case I only need 2 to 4 digits of precision, but as a perfectionist I enjoyed investigating this :-)
Any suggestions / improvements / complaints are welcome.
The "right" way to do this was documented in 1990 by Steele and White's and
Clinger's PLDI 1990 papers.
You might also look at this SO discussion about Python Decimal, including my suggestion to try using something like frap to rationalize a float.
You can use JSON to accomplish it
import json
from decimal import Decimal
float_value = 123456.2365
decimal_value = json.loads(json.dumps(float_value), parse_float=Decimal)
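If I'm not mistaken, the dumps/loads round-trip hands the shortest repr string to Decimal, so you get back the literal you wrote:
print(repr(decimal_value))   # expected: Decimal('123456.2365')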
Inspired by this answer, I found a workaround that shortens the construction of a Decimal from a float, bypassing (only apparently) the string step:
import decimal
class DecimalBuilder(float):
    def __or__(self, a):
        return decimal.Decimal(str(a))
>>> d = DecimalBuilder()
>>> x = d|0.1
>>> y = d|0.2
>>> x + y # works as desired
Decimal('0.3')
>>> d|0.1 + d|0.2 # does not work as desired; needs parentheses
TypeError: unsupported operand type(s) for |: 'decimal.Decimal' and 'float'
>>> (d|0.1) + (d|0.2) # works as desired
Decimal('0.3')
It's a workaround, but it certainly saves typing and is quite readable.
The question is based on the wrong assertion that 'Python Decimal doesn't support being constructed from float'.
In Python 3, the Decimal class can do it:
from decimal import *
getcontext().prec = 128 #high precision set
print(Decimal(100000.3))
A: 100000.300000000002910383045673370361328125 #SUCCESS (over 64 bit precision)
That's the exact value with all decimal digits included, so there is no garbage after the 15th decimal place.
You can verify this online with an IEEE 754 converter such as:
https://www.binaryconvert.com/convert_double.html
A: Most accurate representation = 1.00000300000000002910383045673E5 (64 bit precision)
or directly in Python 3:
print(f'{100000.3:.128f}'.strip('0'))
A: 100000.300000000002910383045673370361328125
To preserve the value as the user entered it, use string conversion:
Decimal(str(100000.3))
A: 100000.3