I'm writing a program where I need to pass in very accurate decimal representations of fractions (i.e. accurate to over 200 decimal places). However, simply telling Python to include more decimal places (using %.50f, for instance) often just appends a run of 0s to certain decimals.
Is there a way to get python to display accurately an arbitrary number of decimal places for a fraction? Do I need to install a package/module?
Python 2.7 and above can do the following, using the decimal module from the Python standard library:
from decimal import Decimal, localcontext

with localcontext() as context:
    context.prec = 210  # significant digits to carry; any value over 200 works here
    print(Decimal(1) / Decimal(7))  # e.g. the fraction 1/7 to 210 digits
Check the decimal module, which is included with Python. It can show the exact decimal representation of any value stored in a float or decimal variable.
This is not quite the same as showing arbitrarily many decimal places for a fraction, but see if it meets your needs.
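For instance, a minimal sketch of what that looks like (the digits below are the true stored value of the float 0.1):

from decimal import Decimal

# Passing a float (rather than a string) to Decimal exposes the exact
# binary value the float actually stores:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625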
Goal
I want to convert all floats to two decimal places, regardless of how many decimal places the float has, without repeating the conversion code.
For example, I want to convert
50 to 50.00
50.5 to 50.50
without repeating the conversion code again and again. What I mean is explained in the following section, Research.
Not what this question is about
This question is NOT about:
Only limiting floats to two decimal places - if there are fewer than two decimal places, then I want it to have two decimal places, with zeros filling the unused positions.
Flooring the decimal or ceiling it.
Rounding the decimal off.
This question is not a duplicate of this question.
That question only answers the first part of my question - converting floats to two decimal places regardless of how many decimal places the float has - not the second part - doing so without repeating the conversion code.
Nor this question.
That is just how to add units before the decimal place. My question is: how to convert all floats to two decimal places, regardless of how many decimal places the float has, without repeating the conversion code.
Research
I found two ways to achieve the conversion. One uses the decimal module:
from decimal import Decimal

TWOPLACES = Decimal(10) ** -2
print(Decimal('9.9').quantize(TWOPLACES))
Another, without using any other modules:
print(f"{9.9:.2f}")
However, that does not fully answer my question. Notice that the conversion code keeps having to be repeated? Sadly, my whole program is already almost complete, and it would be quite a waste of time to add this code here and there just so the format comes out right. Is there any way to convert all floats to two decimal places, regardless of how many decimal places the float has, without repeating the conversion code?
Clarification
What I mean by convert is, something like what Dmytro Chasovskyi said: I want every place in my program that uses floats to start operating like decimals, without extra changes. For example, the operation 1.2345 + 2.7 + 3 + 56.1223183 should behave as 1.23 + 2.70 + 3.00 + 56.12.
Also, float is a number, not a function.
The bad news is: there is no "float" with "two decimal places".
Floating point numbers are represented internally with a fixed number of digits in base 2 (see https://floating-point-gui.de/basic/).
And these are both efficient and accurate enough for almost all calculations we perform with any modern program.
What we normally want is that the human-readable text representation of a number, in all outputs of a program, shows only two digits. And this is controlled wherever your program writes the value to a text file, to the screen, or renders it into an HTML template (which is "writing it to a text file", again).
So it happens that the same syntax that converts a number to text, embedded in another string, also allows you to control the exact output of the number. You gave print(f"{9.9:.2f}") as an example. The only thing that looks impractical there is that the number is hardcoded along with its conversion. Typically, the number will be in a variable.
Then, all you have to do is write, wherever you output the number:
print(f"The value is: {myvar:.02f}")
instead of
print(f"The value is: {myvar}")
Or pass the rendered version of the number to whatever function you are calling instead of print. Notice that the use of the word "rendered" here is deliberate: while your program is running, the number is stored in memory in an efficient, CPU-friendly form that is not human readable. At any point you want to "see" the number, you have to convert it into text. It is just that some calls do it implicitly, like print(myvar). In those places, just resort to converting it explicitly: print(f"{myvar:.02f}").
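If that same two-place rendering is needed in many output sites, one option is a tiny helper so the format string lives in exactly one place (the name render is just for illustration):

def render(value):
    # Hypothetical one-liner: change the format here, once, if it ever changes.
    return f"{value:.2f}"

total = 56.1223183
print(f"The value is: {render(total)}")  # The value is: 56.12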
really having 2 decimal places in memory
If you use decimal.Decimal, then yes, there are ways to keep the internal representation of the number with 2 decimal digits. But then, instead of just converting the number on output, you must convert it into a "2 decimal place" value on all inputs as well.
That means that whenever ingesting a number into your program, be it typed by the user, read from a binary file or database, or received via wire from a sensor, you have to apply a similar transform to the one used in the output as detailed above. More precisely: you convert your float to a properly formatted string, and then convert that to a decimal.Decimal.
This will prevent your program from accumulating errors due to base conversion, but you will still need to force the format to 2 decimal places on every output, just like above.
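A minimal sketch of that input-side conversion, following the string route described above (the helper name ingest is purely illustrative):

from decimal import Decimal

def ingest(value):
    # Format the float to 2 places first, then build the Decimal from that
    # string, so the float's binary representation error is not inherited.
    return Decimal(f"{float(value):.2f}")

a = ingest(1.2345)  # Decimal('1.23')
b = ingest(2.7)     # Decimal('2.70')
print(a + b)        # 3.93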
Use this function.
def cvt_decimal(value):
    # Accept a number or numeric string, return a two-decimal-place string.
    number = float(value)
    return "%.2f" % number
print(cvt_decimal(50))
print(cvt_decimal(50.5))
Output is:
50.00
50.50
You can modify the decimal precision, and it applies even when you do operations between two Decimal values:
import decimal
from decimal import Decimal

decimal.getcontext().prec = 2  # prec counts significant digits, not decimal places
a = Decimal('0.12345')
b = Decimal('0.12345')
print(a + b)  # 0.25 -- the exact sum 0.24690 rounded to two significant digits
Decimal calculations are precise, but they take more time than float calculations; keep that in mind.
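Keep in mind that prec sets significant digits, not digits after the point. A small sketch of the difference, using quantize when a fixed number of decimal places is wanted:

from decimal import Decimal, getcontext

getcontext().prec = 2
print(Decimal('123.456') + Decimal(0))  # 1.2E+2 -- only 2 significant digits survive

getcontext().prec = 28  # restore the default precision
print(Decimal('123.456').quantize(Decimal('0.01')))  # 123.46 -- fixed 2 decimal places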
I am still a beginner at Python, and using the built-in decimal module in Python, it's "slightly" off.
Here's the code:
import decimal
print(decimal.Decimal(0.02))
And here's the output:
0.0200000000000000004163336342344337026588618755340576171875
I begin to wonder, how is this possible? 0.02 is not 0.0200000000000000004163336342344337026588618755340576171875, but the decimal module perceives 0.02 as 0.0200000000000000004163336342344337026588618755340576171875. Is the decimal module bugged, or am I doing something wrong?
Decimal can't undo anything if your input is already a float: the representation error was introduced the moment the float literal was parsed, so there is nothing left to round away. The only way for Decimal to avoid that is to be given a string:
print(decimal.Decimal("0.02"))
The decimal module implements fixed and floating point arithmetic using the model familiar to most people, rather than the IEEE floating point version implemented by most computer hardware. A Decimal instance can represent any number exactly, round up or down, and apply a limit to the number of significant digits.
Decimal values are represented as instances of the Decimal class. The constructor takes as argument an integer, or a string. Floating point numbers must be converted to a string before being used to create a Decimal, letting the caller explicitly deal with the number of digits for values that cannot be expressed exactly using hardware floating point representations.
So, in your case, pass a string as the input to Decimal. Hope this helps.
import decimal
print(decimal.Decimal('0.02'))
Check the documentation here
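To summarise the construction routes side by side, a small sketch (standard library only):

from decimal import Decimal

print(Decimal('0.02'))     # 0.02          -- exact, as intended
print(Decimal(str(0.02)))  # 0.02          -- str() gives the shortest float repr
print(Decimal(0.02))       # 0.0200000...  -- the exact binary value of the float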
Is there some type, like long in Python 2, that can handle a very small number like 8.5e-350?
Or is there any way around this situation, since Python suppresses floats past roughly 320 decimal places to 0?
The standard library comes with the decimal module
The decimal module provides support for fast correctly-rounded decimal floating point arithmetic. It offers several advantages over the float datatype.
...
Unlike hardware based binary floating point, the decimal module has a user alterable precision (defaulting to 28 places) which can be as large as needed for a given problem:
>>> from decimal import *
>>> getcontext().prec = 500
>>> Decimal(10) / Decimal(3)
Decimal('3.3333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333')
>>> len('3.3333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333')
501
Quick start tutorial
Python uses IEEE 754, so you don't get 350 decimal places; you get about 15 significant figures and a limit on the decimal exponent of roughly ±308 (subnormals reach down to about 5e-324). If that number of significant figures is OK, and you don't need more dynamic range (i.e. all your values are small), express your quantities in different units so your values land in a suitable range.
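A quick sketch contrasting the two for the 8.5e-350 value from the question:

from decimal import Decimal

tiny = Decimal("8.5e-350")  # decimal carries exponents far beyond float's reach
print(tiny * 2)             # 1.70E-349
print(float("8.5e-350"))    # 0.0 -- underflows to zero in binary floating point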
I don't understand why, when formatting a float into a string, the float's precision is not respected. I.e.:
'%f' % 38.2551994324
returns:
'38.255199'
(4 digits lost!)
At the moment I solved specifying:
'%.10f' % 38.2551994324
which returns '38.2551994324' as expected… but should I really have to specify manually how many decimal places I want? Is there a way to simply tell Python to keep all of them?! (What should I do, for example, if I don't know how many decimals my number has?)
"but should I really force manually how many decimal numbers I want?" Yes.
And even with 10 decimal digits specified, you are still not printing all of them. Floating point numbers don't have that kind of precision anyway; they are mostly approximations of decimal numbers (they are really binary fractions added up). Try this:
>>> format(38.2551994324, '.32f')
'38.25519943239999776096738060005009'
There are many more decimals there that you didn't even specify.
When formatting a floating point number (be it with '%f' % number, '{:f}'.format(number) or format(number, 'f')), a default number of decimal places is displayed. This is no different from when using str() (or '%s' % number, '{}'.format(number) or format(number), which essentially use str() under the hood), only the number of decimals included by default differs; Python versions prior to 3.2 use 12 digits for the whole number when using str().
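To illustrate those defaults (output shown for CPython 3):

x = 38.2551994324
print('%f' % x)        # 38.255199      -- %f defaults to 6 decimal places
print('{}'.format(x))  # 38.2551994324  -- str()/repr() shortest round-trip form
print('%.12g' % x)     # 38.2551994324  -- 12 significant digits, stated explicitly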
If you expect your rational number calculations to work with a specific, precise number of digits, then don't use floating point numbers. Use the decimal.Decimal type instead:
Decimal “is based on a floating-point model which was designed with people in mind, and necessarily has a paramount guiding principle – computers must provide an arithmetic that works in the same way as the arithmetic that people learn at school.” – excerpt from the decimal arithmetic specification.
Decimal numbers can be represented exactly. In contrast, numbers like 1.1 and 2.2 do not have exact representations in binary floating point. End users typically would not expect 1.1 + 2.2 to display as 3.3000000000000003 as it does with binary floating point.
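A short sketch of that contrast:

from decimal import Decimal

print(1.1 + 2.2)                        # 3.3000000000000003 (binary float)
print(Decimal('1.1') + Decimal('2.2'))  # 3.3 (exact decimal arithmetic)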
I would use the modern str.format() method:
>>> '{}'.format(38.2551994324)
'38.2551994324'
PEP 3101 introduced str.format() as the intended successor to modulo-style string formatting, although %-formatting has never actually been removed from the language.
Does anyone know of a faster decimal implementation in python?
As the example below demonstrates, the standard library's decimal module is ~100 times slower than float.
from timeit import Timer
def run(val, the_class):
    test = the_class(1)
    for c in xrange(10000):
        d = the_class(val)
        d + test
        d - test
        d * test
        d / test
        d ** test
        str(d)
        abs(d)

if __name__ == "__main__":
    a = Timer("run(123.345, float)", "from decimal_benchmark import run")
    print "FLOAT", a.timeit(1)
    a = Timer("run('123.345', Decimal)", "from decimal_benchmark import run; from decimal import Decimal")
    print "DECIMAL", a.timeit(1)
Outputs:
FLOAT 0.040635041427
DECIMAL 3.39666790146
You can try cdecimal:
from cdecimal import Decimal
As of Python 3.3, the cdecimal implementation is now the built-in implementation of the decimal standard library module, so you don't need to install anything. Just use decimal.
For Python 2.7, installing cdecimal and using it instead of decimal should provide a speedup similar to what Python 3 gets by default.
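On Python 3 you can verify, in a rough way, that the C implementation is active (this assumes CPython's layout, where decimal.py pulls in the _decimal accelerator):

import sys
import decimal

# True on CPython 3.3+ when the C accelerator backs the decimal module
print('_decimal' in sys.modules)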
The GMP library is one of the best arbitrary precision math libraries around, and there is a Python binding available at GMPY. I would try that method.
You should compare Decimal to long integer performance, not to floating point. Floating point is mostly done in hardware these days. Decimal is for decimal precision, while floating point is for wider range. Use the decimal package for monetary calculations.
To quote the decimal package manual:
Decimal numbers can be represented exactly. In contrast, numbers like 1.1 do not have an exact representation in binary floating point. End users typically would not expect 1.1 to display as 1.1000000000000001 as it does with binary floating point.
The exactness carries over into arithmetic. In decimal floating point, "0.1 + 0.1 + 0.1 - 0.3" is exactly equal to zero. In binary floating point, the result is 5.5511151231257827e-017. While near zero, the difference prevents reliable equality testing, and differences can accumulate. For this reason, decimal would be preferred in accounting applications which have strict equality invariants.
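That difference is easy to reproduce:

from decimal import Decimal

print(0.1 + 0.1 + 0.1 - 0.3)  # 5.551115123125783e-17 in binary floating point
print(Decimal('0.1') + Decimal('0.1') + Decimal('0.1') - Decimal('0.3'))  # 0.0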
Use cdecimal.
Adding the following to your benchmark:
a = Timer("run('123.345', Decimal)", "import sys; import cdecimal; sys.modules['decimal'] = cdecimal; from decimal_benchmark import run; from decimal import Decimal")
print "CDECIMAL", a.timeit(1)
My results are:
FLOAT 0.0257983528473
DECIMAL 2.45782495288
CDECIMAL 0.0687125069413
(Python 2.7.6/32, Win7/64, AMD Athlon II 2.1GHz)
Python's Decimal is very slow; one can use float, or a faster implementation of Decimal, cdecimal.