Removing extra decimals in Python operations [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 6 months ago.
I want to do operations like
500.55%10 and get a value of 0.55 in return.
But instead Python sometimes returns, for example, 0.5500000000000114 (which in terms of magnitude is basically the same). I'm guessing this is because of the numerical way these calculations are done.
When I input a value like 500.55, I want it to be treated as 500.55000000000000... with an infinite number of zeros. So basically I want to get rid of the ...00114 at the end.
print(500.55%10)
0.5500000000000114
Thanks.

Try the decimal module:
>>> from decimal import Decimal
>>> float(Decimal('500.55') % 10)
0.55
Documentation: https://docs.python.org/3/library/decimal.html
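One detail worth stressing, as a quick sketch using the question's numbers: build the Decimal from a string, not a float, or it inherits the float's binary error.
>>> from decimal import Decimal
>>> float(Decimal(500.55) % 10)    # built from a float: keeps the binary error
0.5500000000000114
>>> float(Decimal('500.55') % 10)  # built from a string: exact
0.55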

You can use the round() function. Alternatively, you can use the decimal module, which handles extra decimals in the way you ask.
Here are both methods:
import decimal
# normal way
print('normal way:', 500.55%10)
# do some rounding
print('rounding:', round(500.55%10, 10) )
# use the decimal module (Decimal(500.55) is built from the float, so it
# inherits the binary error; the 10-digit context rounds it away)
decimal.getcontext().prec = 10
print('decimal module:', decimal.Decimal(500.55) % decimal.Decimal(10))
result:
normal way: 0.5500000000000114
rounding: 0.55
decimal module: 0.5500000000

You can use the built-in decimal module and its decimal.Decimal object.
decimal.Decimal(value='0', context=None)
from the documentation:
Construct a new Decimal object based from value.
value can be an integer, string, tuple, float, or another Decimal object. If no value is given, returns Decimal('0'). If value is a string, it should conform to the decimal numeric string syntax after leading and trailing whitespace characters, as well as underscores throughout, are removed
Example implementation:
>>> import decimal
>>> float(decimal.Decimal('500.55') % 10)
0.55
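To illustrate the constructor forms the quote mentions, a short interactive sketch; the last line shows why float inputs are usually avoided:
>>> import decimal
>>> decimal.Decimal(5)                   # from an integer
Decimal('5')
>>> decimal.Decimal('3.14')              # from a string
Decimal('3.14')
>>> decimal.Decimal((0, (3, 1, 4), -2))  # from a (sign, digits, exponent) tuple
Decimal('3.14')
>>> decimal.Decimal(0.1)                 # from a float: the exact binary value, not 0.1
Decimal('0.1000000000000000055511151231257827021181583404541015625')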


Python Round not working as expected for some values [duplicate]

This question already has answers here:
How to properly round-up half float numbers?
(21 answers)
Closed 7 months ago.
In Python 3, I'm trying to round the value 4800.5, so I was expecting it to round to 4801,
but it's giving me 4800. I'm not able to work out why this is happening.
Any help will be appreciated.
That's by design.
If you have a look at the round() function documentation (https://docs.python.org/3/library/functions.html#round) you will find that:
For the built-in types supporting round(), values are rounded to
the closest multiple of 10 to the power minus ndigits; if two
multiples are equally close, rounding is done toward the even choice
(so, for example, both round(0.5) and round(-0.5) are 0, and
round(1.5) is 2).
In simple words, a .5 tie is a special case that is always rounded toward the even neighbour.
But there is more to it. Have a look at this note from the same documentation:
The behavior of round() for floats can be surprising: for
example, round(2.675, 2) gives 2.67 instead of the expected 2.68. This
is not a bug: it’s a result of the fact that most decimal fractions
can’t be represented exactly as a float.
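Both effects are easy to verify interactively; these are standard CPython 3 results:
>>> round(0.5), round(1.5), round(2.5), round(3.5)
(0, 2, 2, 4)
>>> round(4800.5)    # a tie, so it goes to the even neighbour
4800
>>> round(2.675, 2)  # 2.675 is actually stored as 2.67499999999999982..., so it rounds down
2.67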
What you might want to do is to use Decimal for more conventional rounding logic: https://docs.python.org/3/library/decimal.html
For example:
>>> Decimal('7.325').quantize(Decimal('.01'), rounding=ROUND_DOWN)
Decimal('7.32')
>>> Decimal('7.325').quantize(Decimal('.01'), rounding=ROUND_UP)
Decimal('7.33')
There are a lot of ways to round a number. round() behaves according to a particular rounding strategy (round half to even), which may or may not be the one you need for a given situation.
If you always want to round up, you can use math.ceil; note that it rounds every fractional part up, not just .5:
import math
n = 4800.5
print(math.ceil(n))
You can do something like this:
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(decimal_number, places=0):
    # Build the quantize target: Decimal('1') for whole numbers,
    # Decimal('0.01')-style values for `places` decimal places.
    if places == 0:
        exp = Decimal('1')
    else:
        exp_str = '0' * places
        exp_str = exp_str[:-1] + '1'
        exp = Decimal('.{}'.format(exp_str))
    # Decimal(decimal_number) converts the float exactly, binary error and all,
    # before quantize rounds it with ROUND_HALF_UP.
    return Decimal(decimal_number).quantize(exp, rounding=ROUND_HALF_UP)

print(round_half_up(4800.5))       # -> 4801
print(round_half_up(4800.555, 2))  # -> 4800.56
The round() function rounds up to the next integer when the fractional part is greater than .5 and down when it is less than .5; an exact .5 tie is rounded to the nearest even integer, as described above.

What is the difference between rounding Decimals with quantize vs the built in round function?

When working with the built-in decimal module in Python, I can round decimals as follows.
Decimal(50.212345).quantize(Decimal('0.01'))
> Decimal('50.21')
But I can also round the same number with the built-in round function:
round(Decimal(50.212345), 2)
> Decimal('50.21')
Why would I use one instead of the other when rounding Decimals? In previous answers about rounding decimals, users suggested using quantize because the built-in round function would return a value of type float. Based on my testing, these both return a Decimal. Other than syntax, is there a reason to choose one over the other?
The return types aren't always the same. round() used with a single argument actually returns an int:
>>> round(5.3)
5
>>> round(decimal.Decimal("5.3"))
5
Other than that, suit yourself. quantize() is especially handy if you want a decimal rounded to "the same" precision as another decimal you already have.
>>> x = decimal.Decimal("123.456")
>>> x*x
Decimal('15241.383936')
>>> (x*x).quantize(x)
Decimal('15241.384')
See? The code doing this doesn't have to know that x originally had 3 digits after the decimal point. Just passing x to quantize() forces the function to round back to the same precision as the original x, regardless of what that may be.
quantize() is also necessary if you want to use a rounding mode other than the default nearest/even.
>>> (x*x).quantize(x, decimal.ROUND_FLOOR)
Decimal('15241.383')
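For reference, a sketch cycling through a few of the rounding modes decimal defines; the constants are plain strings, so they print by name:
>>> import decimal
>>> d = decimal.Decimal('7.325')
>>> for mode in (decimal.ROUND_HALF_EVEN, decimal.ROUND_HALF_UP,
...              decimal.ROUND_FLOOR, decimal.ROUND_CEILING):
...     print(mode, d.quantize(decimal.Decimal('0.01'), rounding=mode))
...
ROUND_HALF_EVEN 7.32
ROUND_HALF_UP 7.33
ROUND_FLOOR 7.32
ROUND_CEILING 7.33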

In Python, are there hidden rules that control how the precision of a decimal number is displayed?

For Python, do read this link: https://docs.python.org/3/tutorial/floatingpoint.html, "Floating Point Arithmetic: Issues and Limitations".
I understand that there is a mismatch (a tiny difference) between a binary-represented float and the exact decimal value, e.g.
exact decimal value: 1.005
Python binary-represented float: 1.00499999999999989341858963598497211933135986328125
here is what I typed in python:
>>> 1.005
1.005
>>> from decimal import Decimal
>>> Decimal(1.005)
Decimal('1.00499999999999989341858963598497211933135986328125')
Here are my questions:
Why does Python show 1.005 when I type in 1.005? Why is it not 1.00499999999999989341858963598497211933135986328125?
If Python rounds the result to some number of digits after the decimal point, what is the rounding rule in my situation? It looks like there is a default rounding rule applied when Python starts; if this default rule exists, how do I change it?
Thanks
When asked to convert the float value 1.0049999999999999 to string, Python displays it with rounding:
>>> x = 1.0049999999999999; print(x)
1.005
Python uses David Gay's algorithm to decide how many digits to show when printing a float: the shortest string of digits that converts back to exactly the same value. Usually around 16 digits are shown, which makes sense, since 64-bit floats can represent 15 to 17 significant decimal digits.
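The round-trip property is easy to check: repr picks the shortest decimal string that converts back to exactly the same float, while a fixed-precision format reveals the stored value.
>>> x = 1.0049999999999999
>>> repr(x)             # shortest string that round-trips to the same float
'1.005'
>>> float('1.005') == x
True
>>> format(x, '.50f')   # the stored binary value written out in full
'1.00499999999999989341858963598497211933135986328125'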
If you want to print a float with some other number of digits shown, use an f-string or string interpolation with a precision specifier (see e.g. Input and Output in The Python Tutorial). For instance, to print x with 20 significant digits:
>>> print(f'{x:.20}')
1.0049999999999998934
>>> print('%.20g' % x)
1.0049999999999998934

Rounding logic in Python? [duplicate]

This question already has answers here:
Python float to int conversion
(6 answers)
Closed 8 years ago.
In my original code I was trying to compute some indices out of some float values and I faced the following problem:
>>> print int((1.40-.3)/.05)
21
But:
>>> print ((1.40-.3)/.05)
22.0
I am speechless about what is going on. Can somebody please explain?
This is caused by floating point inaccuracy:
>>> print repr((1.40-.3)/.05)
21.999999999999996
You could try using the Decimal type instead:
>>> from decimal import Decimal
>>> Decimal
<class 'decimal.Decimal'>
and then
>>> (Decimal('1.40') - Decimal('.3')) / Decimal('.05')
Decimal('22')
The fractions.Fraction class would work too. Or, you could just round:
>>> round((1.40-.3)/.05, 10) # round to 10 decimal places
22.0
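A minimal sketch of the fractions.Fraction route mentioned above; with exact rational arithmetic the string inputs matter here too:
>>> from fractions import Fraction
>>> (Fraction('1.40') - Fraction('0.3')) / Fraction('0.05')
Fraction(22, 1)
>>> int((Fraction('1.40') - Fraction('0.3')) / Fraction('0.05'))
22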
Drop the print and you'll see that the actual value is:
>>> (1.40-.3)/.05
21.999999999999996
Python 2's print (more accurately, float.__str__) lies to you by rounding to 12 significant digits. Python 3's print (again, actually float.__str__) doesn't do that; it always gives a faithful representation of the actual value (it abbreviates, but only when doing so doesn't change the value).
This inaccuracy is inherent to floating point numbers (including Decimal, though its inaccuracies occur in different cases). This is a fundamental problem: exactly representing arbitrary real numbers is not possible. See Is floating point math broken? for explanations.
I think this explains it straightforwardly:
>>> import decimal
>>> (decimal.Decimal(1.40) - decimal.Decimal(.3)) / decimal.Decimal(.05)
Decimal('21.99999999999999722444243843')
>>> (decimal.Decimal('1.40') - decimal.Decimal('.3')) / decimal.Decimal('.05')
Decimal('22')

Is there a more readable or Pythonic way to format a Decimal to 2 places?

What the heck is going on with the syntax to fix a Decimal to two places?
>>> from decimal import Decimal
>>> num = Decimal('1.0')
>>> num.quantize(Decimal(10) ** -2) # seriously?!
Decimal('1.00')
Is there a better way that doesn't look so esoteric at a glance? 'Quantizing a decimal' sounds like technobabble from an episode of Star Trek!
Use string formatting:
>>> from decimal import Decimal
>>> num = Decimal('1.0')
>>> format(num, '.2f')
'1.00'
The format() function applies string formatting to values. Decimal() objects can be formatted like floating point values.
You can also use this to interpolate the formatted decimal value into a larger string:
>>> 'Value of num: {:.2f}'.format(num)
'Value of num: 1.00'
See the format string syntax documentation.
Unless you know exactly what you are doing, expanding the number of significant digits through quantisation is not the way to go; quantisation is the purview of accountancy packages and normally aims to round results to fewer significant digits instead.
Quantize is used to set the number of places that are actually held internally within the value, before it is converted to a string. As Martijn points out this is usually done to reduce the number of digits via rounding, but it works just as well going the other way. By specifying the target as a decimal number rather than a number of places, you can make two values match without knowing specifically how many places are in them.
It looks a little less esoteric if you use a decimal value directly instead of trying to calculate it:
num.quantize(Decimal('0.01'))
You can set up some constants to hide the complexity:
places = [Decimal('0.1') ** n for n in range(16)]
num.quantize(places[2])
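A practical way to choose between the two answers above: format() produces a str for display, while quantize() returns a Decimal that stays usable in further arithmetic. A quick comparison:
>>> from decimal import Decimal
>>> num = Decimal('1.0')
>>> format(num, '.2f')             # a string, ready to print
'1.00'
>>> num.quantize(Decimal('0.01'))  # still a Decimal, ready for more math
Decimal('1.00')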
