Python decimal.Decimal producing result in scientific notation

I'm dividing a very long number into a much smaller one. Both are of type decimal.Decimal().
The result is coming out in scientific notation. How do I stop this? I need to print the number in full.
>>> decimal.getcontext().prec
50
>>> val
Decimal('1000000000000000000000000')
>>> units
Decimal('1500000000')
>>> units / val
Decimal('1.5E-15')

The precision is kept internally - you just have to explicitly call for the number of decimal places you want at the point you are exporting your decimal value to a string.
So, if you are going to print the value, or insert it into an HTML template, the first step is to use the string format method (or f-strings) to ensure the number is written out in full:
In [29]: print(f"{units/val:.50f}")
0.00000000000000150000000000000000000000000000000000
Unfortunately, the string-format mini-language has no way by itself to eliminate the redundant zeros on the right-hand side. (The left side can be padded with "0", " ", custom characters, whatever one wants, but all the precision after the decimal separator is filled out with trailing 0s.)
Since finding the least significant non-zero digit is complicated (otherwise we could use a parameter extracted from the number instead of the hard-coded "50" for precision in the format expression), the simpler thing is to remove those zeros after formatting takes place, with the string .rstrip method:
In [30]: print(f"{units/val:.50f}".rstrip("0"))
0.0000000000000015
In short, this seems to be the only way to go: at every interface point where the number leaves the core for an output where it is represented as a string, format it with an excess of precision in fixed-point notation, and strip out the trailing zeros:
return template.render(number=f"{number:.50f}".rstrip("0"), ...)
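One caveat with this approach, easy to check in the REPL: if the value happens to come out as a whole number, stripping the zeros leaves a dangling decimal point behind, so it is safer to strip the separator as well:
>>> import decimal
>>> f"{decimal.Decimal(2):.50f}".rstrip("0")
'2.'
>>> f"{decimal.Decimal(2):.50f}".rstrip("0").rstrip(".")
'2'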

Render the decimal into a formatted string with the fixed-point type indicator f (here combined with a thousands separator, {:,f}), and it will display just the right number of digits to express the whole number, regardless of whether it is a very large integer or a very long decimal fraction.
>>> val
Decimal('1000000000000000000000000')
>>> units
Decimal('1500000000')
>>> "{:,f}".format(units / val)
'0.0000000000000015'
# very large decimal integer, formatted as float-type string, appears without any decimal places at all when it has none! Nice!
>>> "{:,f}".format(units * val)
'1,500,000,000,000,000,000,000,000,000,000,000'
You don't need to specify the decimal places. It will display only as many as required to express the number, omitting the trail of useless zeros that would appear after the final decimal digit under a fixed format width. And you don't get any decimal places if the number has no fraction part.
Very large numbers are therefore accommodated without having to second-guess how large they will be. And you don't have to second-guess whether they will have decimal places either.
Any thousands separator specified ({:,f}) will likewise only have a visible effect if the integer part turns out to be large.
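For instance, a quick REPL check of a value with both thousands and a fraction:
>>> import decimal
>>> "{:,f}".format(decimal.Decimal("1234567.125"))
'1,234,567.125'
The separator groups the integer part only; the fractional digits are untouched.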
Proviso
Decimal(), however, has this idea of significant places, by which it will add trailing zeros if it thinks you want them.
The idea is that it intelligently handles situations where you might be dealing with currency digits such as £ 10.15. To use the example from the documentation:
>>> decimal.Decimal('1.30') + decimal.Decimal('1.20')
Decimal('2.50')
It makes no difference if you format the Decimal() - you still get the trailing zero if the Decimal() deems it to be significant:
>>> "{:,f}".format( decimal.Decimal('1.30') + decimal.Decimal('1.20'))
'2.50'
The same thing happens (perhaps for some good reason?) when you treat thousands and fractions together:
>>> decimal.Decimal(2500) * decimal.Decimal('0.001')
Decimal('2.500')
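The reason is that Decimal multiplication adds the operands' exponents (0 and -3 here), which the as_tuple() method makes visible:
>>> (decimal.Decimal(2500) * decimal.Decimal('0.001')).as_tuple()
DecimalTuple(sign=0, digits=(2, 5, 0, 0), exponent=-3)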
Remove significant trailing zeros with the Decimal().normalize() method:
>>> (2500 * decimal.Decimal('0.001')).normalize()
Decimal('2.5')
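Be aware, though, that normalize() has a quirk of its own in this thread's context: for a whole number with trailing zeros it shifts to exponential form, so you may still want to run the result through an f format rather than str():
>>> decimal.Decimal('2500').normalize()
Decimal('2.5E+3')
>>> "{:f}".format(decimal.Decimal('2500').normalize())
'2500'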

Related

Python f-string surprising results on floats

I am trying to format float numbers in a fixed point notation: x.xxx, three digits following the decimal point regardless of the value of the number. I am getting surprising results. The first in particular would suggest that it is giving me three significant places rather than three digits after the decimal point. How do I tell it what I really want?
>>> print(f"{.0987:5.03}")
0.0987
*expected: 0.099*
>>> print(f"{0.0:05.03}")
000.0
*expected: 0.000*
>>> print(f"{0.0:5.3}")
  0.0
# added "f" to the spec so the precision counts decimal places
>>> print(f"{.0987:5.3f}")
0.099
>>> print(f"{0.0:05.3f}")
0.000
>>> print(f"{0.0:5.3f}")
0.000

How to dynamically format string representation of float number in python?

Hi, I would like to dynamically adjust the displayed decimal places of a string representation of a floating point number, but I couldn't find any information on how to do it.
E.g:
precision = 8
n = 7.12345678911
str_n = '{0:.{precision}}'.format(n)
print(str_n)  # should display -> 7.12345678
But instead i'm getting a "KeyError". What am i missing?
You need to specify where precision in your format string comes from:
precision = 8
n = 7.12345678911
print('{0:.{precision}}'.format(n, precision=precision))
The first time, you specified which argument you'd like to be the number using an index ({0}), so the formatting function knows where to get the argument from, but when you specify a placeholder by some key, you have to explicitly specify that key.
It's a little unusual to mix these two systems; I'd recommend sticking with one:
print('{number:.{precision}}'.format(number=n, precision=precision)) # most readable
print('{0:.{1}}'.format(n, precision))
print('{:.{}}'.format(n, precision)) # automatic indexing, least obvious
It is notable that these precision values include the digits before the decimal point, so
>>> f"{123.45:.3}"
'1.23e+02'
will drop the decimals and give only the first three significant digits of the number.
Instead, f can be supplied as the type of the format (see the documentation) to get fixed-point formatting with precision digits after the decimal point.
print('{number:.{precision}f}'.format(number=n, precision=precision)) # most readable
print('{0:.{1}f}'.format(n, precision))
print('{:.{}f}'.format(n, precision)) # automatic indexing, least obvious
In addition to @Talon's answer, for those interested in f-strings, this also works:
precision = 8
n = 7.12345678911
print(f'{n:.{precision}f}')
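Both the width and the precision can be nested the same way, in case they also need to vary at runtime:
>>> width, precision = 12, 4
>>> n = 7.12345678911
>>> f'{n:{width}.{precision}f}'
'      7.1235'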

Python Format Strings and Floating Point Representation

I'm working with some in-place code dealing with formatting user-stored floating point numbers for human display.
The current implementation does this:
"{0:.24f}".format(some_floating_point).rstrip('0')
which makes sense and works just fine for the most part. But when faced with a value of such as 0.0003 things don't go as well.
>>> "{0:.24f}".format(0.0003).rstrip('0')
'0.000299999999999999973719'
Some further investigation indicates that Python seems to change the underlying representation based on the number of digits requested?
>>> "{0:.15f}".format(0.0003)
'0.000300000000000'
>>> "{0:.20f}".format(0.0003)
'0.00029999999999999997'
My assumption is single precision vs double.
The user enters these values, which are stored in the database as doubles, and when the form is rendered again later the same value is prepopulated in the field. Therefore I need a 1:1 mapping of these representations.
My question is therefore: What is an elegant, and more importantly safe way to deal with this behavior? My best efforts so far have involved log10 and are less than ideal to put it nicely.
EDIT: As Prune points out the value is not actually changing, but rather the rounding done by format will carry over causing a set of 9s to become 0s (d'oh). The behavior makes sense then, but the solution is still escaping me.
You are receiving the number as stored. 0.0003 cannot be stored exactly as a binary fraction. To illustrate:
>>> 0.00029999999999999997 == 0.0003
True
Print formatting rounds the number at the least significant digit. Double precision merely pushes the problem farther to the right. To fully "solve" the problem to base-10 eyes, you need to switch to decimal arithmetic, or perhaps build your own string handler for numbers that are sufficiently close to a simpler value (a suspicious string of 9's or 0's in the fractional part).
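The decimal-arithmetic route is straightforward if you control the point of entry (a minimal sketch, assuming you can keep the user's input as a string): build the Decimal from that string, never from the float, so no binary approximation enters the pipeline.
>>> from decimal import Decimal
>>> "{:f}".format(Decimal("0.0003"))
'0.0003'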
Here's the start of a function for you. I tested it with 0.0004, which stores as a hair more than 0.0004; the 9's case is left as an exercise :-) .
def str_round(x):
    size = 6
    nines = '9' * size
    zeros = '0' * size
    s = "{0:.24f}".format(x).rstrip('0')
    s_len = len(s)
    print(s, s_len)
    if nines in s:
        # replace leading digit with one more
        pos = s.index(nines)
        # ADD CODE HERE
        # Turn the leading portion into an integer;
        # increment and convert back to a zero-leading string.
        # Fill out the rest with zeros.
    elif zeros in s:
        # Change all trailing digits to 0
        pos = s.index(zeros)
        s = s[:pos] + '0' * (s_len - pos)
    return s

print(str_round(0.0004))
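It is also worth knowing that Python 3's repr() (and str()) of a float already yields the shortest decimal string that round-trips to the same double, which may cover the 1:1 mapping requirement without any hand-rolled cleanup:
>>> repr(0.0003)
'0.0003'
>>> float(repr(0.0003)) == 0.0003
True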

What is the difference between {:.4e} and {:2.4} in Python scientific notation

I can't quite understand what the difference is between the two print statements below for the number I am trying to express in scientific notation. I thought the bottom one was supposed to allow 2 spaces for the printed result and move the decimal place 4 times, but the result I get does not corroborate that understanding. As for the first one, what does 4e mean?
>>> print('{:.4e}'.format(3454356.7))
3.4544e+06
>>> print('{:2.4}'.format(3454356.7))
3.454e+06
All help greatly appreciated.
In the first example, .4e means 4 decimal places in scientific notation. You can verify that by doing:
>>> print('{:.4e}'.format(3454356.7))
3.4544e+06
>>> print('{:.5e}'.format(3454356.7))
3.45436e+06
>>> print('{:.6e}'.format(3454356.7))
3.454357e+06
In the second example, .4 means 4 significant figures, and the 2 is the minimum field width.
>>> print('{:2.4}'.format(3454356.7))
3.454e+06
>>> print('{:2.5}'.format(3454356.7))
3.4544e+06
>>> print('{:2.6}'.format(3454356.7))
3.45436e+06
Testing with a different value for the width:
>>> print('-{:20.6}'.format(3454356.7))
-         3.45436e+06
You can learn more from the Python documentation on format.
If you want to produce a float, you will have to specify the float type:
>>> '{:2.4f}'.format(3454356.7)
'3454356.7000'
Otherwise, if you don't specify a type, Python will choose g as the type, for which precision means the number of significant digits, counting digits both before and after the decimal point. And since you have a precision of 4, it will only display 4 digits, falling back to scientific notation so it doesn't add false precision.
The precision is a decimal number indicating how many digits should be displayed after the decimal point for a floating point value formatted with 'f' and 'F', or before and after the decimal point for a floating point value formatted with 'g' or 'G'. For non-number types the field indicates the maximum field size - in other words, how many characters will be used from the field content. The precision is not allowed for integer values.
(source, emphasis mine)
Finally, note that the width (the 2 in the above format string) is the full width, including digits before the decimal point, digits after it, the decimal point itself, and the components of the scientific notation. The float result above ('3454356.7000') has a width of 12, so in this case the width in the format string is simply ignored.
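To actually see the width take effect, it has to exceed the natural length of the result:
>>> '{:15.4f}'.format(3454356.7)
'   3454356.7000'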

Is there a more readable or Pythonic way to format a Decimal to 2 places?

What the heck is going on with the syntax to fix a Decimal to two places?
>>> from decimal import Decimal
>>> num = Decimal('1.0')
>>> num.quantize(Decimal(10) ** -2) # seriously?!
Decimal('1.00')
Is there a better way that doesn't look so esoteric at a glance? 'Quantizing a decimal' sounds like technobabble from an episode of Star Trek!
Use string formatting:
>>> from decimal import Decimal
>>> num = Decimal('1.0')
>>> format(num, '.2f')
'1.00'
The format() function applies string formatting to values. Decimal() objects can be formatted like floating point values.
You can also use this to interpolate the formatted decimal value into a larger string:
>>> 'Value of num: {:.2f}'.format(num)
'Value of num: 1.00'
See the format string syntax documentation.
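The same spec works directly in an f-string, if that reads better in context:
>>> f'{num:.2f}'
'1.00'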
Unless you know exactly what you are doing, expanding the number of significant digits through quantisation is not the way to go; quantisation is the province of accountancy packages and normally has the aim of rounding results to fewer significant digits instead.
Quantize is used to set the number of places that are actually held internally within the value, before it is converted to a string. As Martijn points out this is usually done to reduce the number of digits via rounding, but it works just as well going the other way. By specifying the target as a decimal number rather than a number of places, you can make two values match without knowing specifically how many places are in them.
It looks a little less esoteric if you use a decimal value directly instead of trying to calculate it:
num.quantize(Decimal('0.01'))
You can set up some constants to hide the complexity:
places = [Decimal('0.1') ** n for n in range(16)]
num.quantize(places[2])
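quantize() also takes an explicit rounding mode, which is worth spelling out in the accountancy situations mentioned above (ROUND_HALF_EVEN is the Decimal default):
>>> from decimal import Decimal, ROUND_HALF_UP
>>> Decimal('2.665').quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
Decimal('2.67')
>>> Decimal('2.665').quantize(Decimal('0.01'))
Decimal('2.66')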
