When using Python's decimal.Decimal class we now have a need to drop extraneous decimal places: for instance, '0.00' becomes '0' and '0.50' becomes '0.5'. Is there a cleaner way of doing this than converting to a string and manually stripping trailing zeros and full stops?
To clarify: we need to round the result dynamically, without knowing the number of decimal places in advance, and potentially output an integer (or a string representation of one) if no decimal places are needed. Is this already built into Python?
Use Decimal.normalize:
>>> Decimal('0.00').normalize()
Decimal('0')
>>> Decimal('0.50').normalize()
Decimal('0.5')
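One caveat worth knowing: normalize() also strips zeros to the left of the decimal point, so large round values come out in scientific notation:
>>> Decimal('500.00').normalize()
Decimal('5E+2')
If that matters, a workaround along the lines of the recipe in the decimal module's FAQ is to quantize whole numbers back to integer form and normalize everything else:
def remove_exponent(d):
    # whole numbers: drop the fraction entirely; everything else: strip zeros
    return d.quantize(Decimal(1)) if d == d.to_integral() else d.normalize()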
Related
I want to convert some floats to Decimal, retaining 5 digits after the decimal point regardless of how many digits appear before it. Is using string formatting the most efficient way to do this?
I see in the docs:
The significance of a new Decimal is determined solely by the number of digits input. Context precision and rounding only come into play during arithmetic operations.
So that means I need to add 0 to force it to use the specified prec, but prec counts total significant digits, not digits after the decimal point, so it doesn't actually help.
The best thing I can come up with is
a = [1.132434, 22.2334, 99.33999434]
[Decimal("%.5f" % round(x, 5)) for x in a]
to get [Decimal('1.13243'), Decimal('22.23340'), Decimal('99.33999')]
Is there a better way? It feels like turning floats into strings just to convert them back to a numeric format isn't very good, although I can't articulate why.
Do all the formatting on the way out from your code, inside the print and write statements. There is no reason I can think of to lose precision (and convert the numbers to some fixed format) while doing numeric calculations inside the code.
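For what it's worth, here is a sketch that avoids the float-to-string round trip, assuming a Python version (2.7/3.2 or later) where Decimal accepts floats directly, and assuming banker's rounding is acceptable: let Decimal capture the float's exact binary value, then quantize it to five decimal places.
from decimal import Decimal, ROUND_HALF_EVEN

a = [1.132434, 22.2334, 99.33999434]
# Decimal(x) is exact; quantize then rounds to exactly five decimal places
[Decimal(x).quantize(Decimal('0.00001'), rounding=ROUND_HALF_EVEN) for x in a]
# -> [Decimal('1.13243'), Decimal('22.23340'), Decimal('99.33999')]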
How can I format a float with f-strings such that the output doesn't change, unless there is a trailing .0, in which case it is simply dropped?
I am asking because most answers simply tell us to use the g format spec, but that is not the correct answer: with g, if the number gets too large, the representation can drop information and/or switch to scientific notation, as you can see here:
>>> f"{202105.35}"
'202105.35'
>>> f"{202105.35:g}"
'202105'
>>> f"{202105.0}"
'202105.0'
>>> f"{202105.0:g}"
'202105'
I want the best of both worlds: the convenience of the format parameters, but without losing precision.
In my research I found that you can specify a precision with digit+, but I don't want to set a specific precision. If that's the only way to do it, though, then I'll have to set one.
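For the record, a minimal sketch of one way to get this behaviour without g (the function name is ours, purely illustrative): render with the default repr-based formatting, which never loses precision, then drop a literal '.0' suffix if present.
def drop_point_zero(x):
    # f"{x}" uses repr()-style shortest round-trip formatting
    s = f"{x}"
    return s[:-2] if s.endswith(".0") else s

>>> drop_point_zero(202105.35)
'202105.35'
>>> drop_point_zero(202105.0)
'202105'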
I know this has been asked numerous times and I've come across many blogs and SO answers, but this one's making me pull my hair out. I just want to multiply a number with two decimal places by 100 so I get rid of its decimals:
>>> 4321.90 * 100
432189.99999999994
>>> Decimal(4321.90) * Decimal(100)
Decimal('432189.9999999999636202119291')
I'm scared to use rounding for such a seemingly trivial operation. Would it be safe? What if the precision problem plays tricks on me and the result is close to xxx.5? Can that happen? I do understand the problem at the binary level, but I come from C# and I don't have that problem with .NET's decimal type:
decimal x = 4321.90m;
decimal y = 100m;
Console.WriteLine(x * y);
432190,00
I thought Python's decimal module was supposed to fix that. I'm about to convert the initial value to string and do the math with string manipulations, and I feel bad about it...
The main reason it fails in Python is that 4321.90 is interpreted as a float (you lose precision at that point) and then cast to Decimal at runtime. In C#, 4321.90m is interpreted as a decimal to begin with. Python simply has no built-in decimal literal.
But there's an easy way to fix that with Python. Simply use strings:
>>> Decimal('4321.90') * Decimal('100')
Decimal('432190.00')
I'm about to convert the initial value to string
Yes! (but don't do it by calling str - use a string literal)
and do the math with string manipulations
No!
When hardcoding a decimal value into your source code, you should initialize it from a string literal, not a float literal. With 4321.90, floating-point rounding has already occurred, and building a Decimal won't undo that. With "4321.90", Decimal has the original text you wrote available to perform an exact initialization:
Decimal('4321.90')
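To make the contrast concrete (the first output is abbreviated here):
Decimal(4321.90)       # the float's exact binary value: Decimal('4321.8999999999996...')
Decimal(str(4321.90))  # Decimal('4321.9') - shortest round-tripping text, trailing zero lost
Decimal('4321.90')     # Decimal('4321.90') - exact, and keeps both decimal places
Note that even str() would lose the trailing zero, which matters if, say, you rely on the exponent for currency formatting.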
Floating point inaccuracy again.
Decimal(number) doesn't change a thing: the value is modified before it hits Decimal.
You can avoid that by passing strings to Decimal, though:
Decimal("4321.90") * Decimal("100")
result:
Decimal('432190.00')
(so Decimal then does the arithmetic in decimal, without involving binary floating-point representations or operations at all)
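It's also worth noting that Decimal mixes freely with int but deliberately refuses to mix with float, so the multiplier doesn't even need to be a Decimal:
>>> Decimal('4321.90') * 100
Decimal('432190.00')
>>> Decimal('4321.90') * 100.0
Traceback (most recent call last):
  ...
TypeError: unsupported operand type(s) for *: 'decimal.Decimal' and 'float'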
I am depending on some code that uses the Decimal class because it needs precision to a certain number of decimal places. Some of the functions allow inputs to be floats because of the way that it interfaces with other parts of the codebase. To convert them to decimal objects, it uses things like
mydec = decimal.Decimal(str(x))
where x is the float taken as input. My question is, does anyone know what the standard is for the 'str' method as applied to floats?
For example, take the number 2.1234512. Because of how floats are represented, it is stored internally as approximately 2.1234511999999999.
>>> x = 2.12345119999999999
>>> x
2.1234511999999999
>>> str(x)
'2.1234512'
Ok, str(x) in this case is doing something like '%.12g' % x. This is a problem with the way my code converts to decimals. Take the following:
>>> d = decimal.Decimal('2.12345119999999999')
>>> ds = decimal.Decimal(str(2.12345119999999999))
>>> d - ds
Decimal('-1E-17')
So if I have the float 2.12345119999999999 and I want to pass it to Decimal, converting it to a string using str() gets me the wrong answer. I need to know the rules for str(x) that determine the formatting, so I can decide whether this code needs to be rewritten to avoid this error (it might be OK; for example, the code might round to the 10th decimal place once we have a Decimal object).
There must be some set of rules in Python's docs that hopefully someone here can point me to. Thanks!
In the Python source, look in "Include/floatobject.h". The precision for the string conversion is set a few lines from the top, after a comment with some explanation of the choice:
/* The str() precision PyFloat_STR_PRECISION is chosen so that in most cases,
the rounding noise created by various operations is suppressed, while
giving plenty of precision for practical use. */
#define PyFloat_STR_PRECISION 12
You have the option of rebuilding, if you need something different. Any changes will change formatting of floats and complex numbers. See ./Objects/complexobject.c and ./Objects/floatobject.c. Also, you can compare the difference between how repr and str convert doubles in these two files.
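Note that the above describes older CPython versions (the 2.x line). Since Python 3.1 (and for repr() since 2.7), float-to-string conversion instead produces the shortest string that round-trips back to the same double, and str() and repr() agree:
>>> x = 2.12345119999999999   # parses to the nearest double
>>> str(x)
'2.1234512'
>>> float(str(x)) == x
True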
There are a couple of issues worth discussing here, but the summary is: you cannot extract information that is not stored on your system already.
If you've taken a decimal number and stored it as a floating point, you'll have lost information, since most decimal (base 10) numbers with a finite number of digits cannot be stored using a finite number of digits in base 2 (binary).
As was mentioned, str(a_float) will really call a_float.__str__(). As the documentation states, the purpose of that method is to
return a string containing a nicely printable representation of an object
There's no particular definition for the float case. My opinion is that, for your purposes, you should consider __str__'s behavior to be undefined, since there's no official documentation on it - the current implementation can change anytime.
If you don't have the original strings, there's no way to extract the missing digits of the decimal representation from the float objects. All you can do is round predictably, using string formatting (which you mention):
Decimal( "{0:.5f}".format(a_float) )
You can also remove 0s on the right with resulting_string.rstrip("0").
Again, this method does not recover the information that has been lost.
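A quick illustration of the rstrip approach, also stripping the dot when nothing remains after it:
>>> "{0:.5f}".format(2.5).rstrip("0").rstrip(".")
'2.5'
>>> "{0:.5f}".format(2.0).rstrip("0").rstrip(".")
'2'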