How can I make numbers more precise in Python? [duplicate]

This question already has answers here:
Is floating point arbitrary precision available?
(5 answers)
Closed 3 years ago.
I'm just learning the basics of Python at the moment and I thought that, as a learning exercise, I'd try writing something that would approximate the number "e". Anyway, it always gives the answer to 11 decimal places and I want it to give something more like 1000 decimal places. How do I do this?

Are you sure you need to make them "more precise"? Or do you just need to see more digits than Python shows by default?
>>> import math
>>> math.pi
3.141592653589793
>>>
>>> '{0:0.2f}'.format(math.pi)
'3.14'
>>>
>>> '{0:0.30f}'.format(math.pi)
'3.141592653589793115997963468544'
>>>
>>> '{0:0.60f}'.format(math.pi)
'3.141592653589793115997963468544185161590576171875000000000000'
However, note that
Floating point numbers are usually implemented using double in C; information about the precision and internal representation of floating point numbers for the machine on which your program is running is available in sys.float_info
I assure you that pi doesn't go to zero after 48 digits :-)
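Since the quoted passage mentions sys.float_info, here is a quick look at what it reports on a typical platform (the values below assume a standard IEEE-754 64-bit double):
>>> import sys
>>> sys.float_info.dig        # decimal digits that are reliably representable
15
>>> sys.float_info.mant_dig   # bits in the significand
53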

Almost all machines today use IEEE-754 floating point arithmetic, and almost all platforms map Python floats to IEEE-754 “double precision”.
An IEEE-754 double occupies 64 bits (8 bytes); with 52 explicit fraction bits stored in the significand (53 bits of effective precision), the total precision is roughly 15-17 significant decimal digits. So to represent a number with higher precision than that, you should use Decimal:
import decimal
decimal.getcontext().prec = 100
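To tie this back to the original question about approximating e: here is a hedged sketch (not part of the original answer) that sums the series e = 1/0! + 1/1! + 1/2! + ... with the decimal module. The precision is set to 1010 so that the first 1000 digits are trustworthy:
from decimal import Decimal, getcontext

getcontext().prec = 1010             # 1000 digits plus guard digits

e = Decimal(0)
term = Decimal(1)                    # the n = 0 term, 1/0!
n = 0
while term > Decimal(10) ** -1009:   # stop once further terms cannot matter
    e += term
    n += 1
    term /= n                        # term becomes 1/n!

print(e)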

If you want it to be a plain number with a precision of a thousand digits, the short answer is: you can't. A workaround is to use the decimal module. Here is an example:
import decimal
a = decimal.Decimal('2387324895172987120570935712093570921579217509185712093')
In this case, however, a is no longer a plain number; it's an instance of the decimal.Decimal class. You can still do math operations with it, though; see the sketch below.
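For instance, a minimal sketch (the precision setting here is just an illustrative choice):
from decimal import Decimal, getcontext

getcontext().prec = 60
a = Decimal('2387324895172987120570935712093570921579217509185712093')
print(a + 1)   # integer-valued Decimals add exactly at this precision
print(a / 7)   # division is rounded to the context precision (60 digits)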

Related

How to work with very small float numbers in python3 (e.g. 8.5e-350)

Is there some type, like long in Python 2, that can handle a very small number like 8.5e-350?
Or is there any way around this situation, given that Python underflows floats to 0 somewhere past 320 decimal places?
The standard library comes with the decimal module
The decimal module provides support for fast correctly-rounded decimal
floating point arithmetic. It offers several advantages over the float
datatype.
...
Unlike hardware based binary floating point, the decimal module has a user alterable precision (defaulting to 28 places) which can be as
large as needed for a given problem:
>>> from decimal import *
>>> getcontext().prec = 500
>>> Decimal(10) / Decimal(3)
Decimal('3.3333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333')
>>> len('3.3333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333')
501
Quick start tutorial
Python uses IEEE 754 doubles, so you don't get 350 decimal places; you get about 15 significant figures, and the exponent is limited to roughly +/-308 (subnormals reach down to about 5e-324). If that number of significant figures is OK, and you don't need more dynamic range (i.e. all your values are small), express your quantities in different units so your values land in a representable range, as sketched below.
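A hedged sketch of that "different units" advice (the 1e-300 scale factor is a hypothetical choice for this example):
x = 8.5e-350
print(x)               # 0.0 -- the literal already underflowed

# Store the value in units of 1e-300 instead:
x_in_units = 8.5e-50   # the physical value is x_in_units * 1e-300
print(x_in_units)      # 8.5e-50, comfortably representable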

What is the meaning of this following piece of code({:.3f}) used in a Decision Tree tutorial? [duplicate]

I don't understand why, when formatting a string containing a float value, the precision of the latter is not respected. I.e.:
'%f' % 38.2551994324
returns:
'38.255199'
(4 digits lost!)
At the moment I solved specifying:
'%.10f' % 38.2551994324
which returns '38.2551994324' as expected… but should I really force manually how many decimal numbers I want? Is there a way to simply tell to python to keep all of them?! (what should I do for example if I don't know how many decimals my number has?)
but should I really force manually how many decimal numbers I want? Yes.
And even when specifying 10 decimal digits, you are still not printing all of them. Floating point numbers don't have that kind of precision anyway; they are mostly approximations of decimal numbers (really binary fractions added up). Try this:
>>> format(38.2551994324, '.32f')
'38.25519943239999776096738060005009'
There are many more decimals there than you even specified.
When formatting a floating point number (be it with '%f' % number, '{:f}'.format(number), or format(number, 'f')), a default number of decimal places is displayed. This is no different from using str() (or '%s' % number, '{}'.format(number), or format(number), which essentially use str() under the hood); only the number of decimals included by default differs. Python versions prior to 3.2 use 12 digits for the whole number when using str().
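A quick illustration of those defaults on Python 3:
>>> x = 38.2551994324
>>> '%f' % x         # fixed-point formatting defaults to 6 decimal places
'38.255199'
>>> format(x, 'f')   # same default for '{:f}' and format(..., 'f')
'38.255199'
>>> str(x)           # Python 3 shows the shortest repr that round-trips
'38.2551994324'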
If you expect your rational number calculations to work with a specific, precise number of digits, then don't use floating point numbers. Use the decimal.Decimal type instead:
Decimal “is based on a floating-point model which was designed with people in mind, and necessarily has a paramount guiding principle – computers must provide an arithmetic that works in the same way as the arithmetic that people learn at school.” – excerpt from the decimal arithmetic specification.
Decimal numbers can be represented exactly. In contrast, numbers like 1.1 and 2.2 do not have exact representations in binary floating point. End users typically would not expect 1.1 + 2.2 to display as 3.3000000000000003 as it does with binary floating point.
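That 1.1 + 2.2 example, side by side with Decimal:
>>> 1.1 + 2.2
3.3000000000000003
>>> from decimal import Decimal
>>> Decimal('1.1') + Decimal('2.2')
Decimal('3.3')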
I would use the modern str.format() method:
>>> '{}'.format(38.2551994324)
'38.2551994324'
The printf-style (modulo) method of string formatting is the older style; PEP-3101 introduced str.format() as its successor (though %-formatting has never actually been removed from the language).

python Decimal() with extreme precision acting funky [duplicate]

This question already has answers here:
Why python decimal.Decimal precision differs with equable args?
(2 answers)
Closed 5 years ago.
I know this has been asked numerous times and I've come across many blogs and SO answers, but this one's making me pull my hair out. I just want to multiply a number that has two decimal places by 100 to get rid of its decimals:
>>> 4321.90 * 100
432189.99999999994
>>> Decimal(4321.90) * Decimal(100)
Decimal('432189.9999999999636202119291')
I'm scared to use rounding for such a seemingly trivial operation. Would it be safe? What if the precision problem plays tricks on me and the result is close to xxx.5? Can that happen? I do understand the problem at the binary level, but I come from C#, and I don't have that problem with .NET's decimal type:
decimal x = 4321.90m;
decimal y = 100m;
Console.WriteLine(x * y);
432190,00
I thought Python's decimal module was supposed to fix that. I'm about to convert the initial value to string and do the math with string manipulations, and I feel bad about it...
The main reason it fails in Python is that 4321.90 is interpreted as a float (you lose precision at that point) and then cast to Decimal at runtime. In C#, 4321.90m is interpreted as a decimal to begin with. Python simply has no built-in decimal literal.
But there's an easy way to fix that with Python. Simply use strings:
>>> Decimal('4321.90') * Decimal('100')
Decimal('432190.00')
I'm about to convert the initial value to string
Yes! (but don't do it by calling str - use a string literal)
and do the math with string manipulations
No!
When hardcoding a decimal value into your source code, you should initialize it from a string literal, not a float literal. With 4321.90, floating-point rounding has already occurred, and building a Decimal won't undo that. With "4321.90", Decimal has the original text you wrote available to perform an exact initialization:
Decimal('4321.90')
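To make the difference concrete (a minimal sketch):
>>> from decimal import Decimal
>>> Decimal(4321.90) == Decimal('4321.90')   # the float was rounded first
False
>>> Decimal('4321.90') * 100                 # integers mix in exactly
Decimal('432190.00')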
Floating point inaccuracy again.
Decimal(number) doesn't change a thing: the value is modified before it hits Decimal.
You can avoid that by passing strings to Decimal, though:
Decimal("4321.90") * Decimal("100")
result:
Decimal('432190.00')
(Decimal then does all of the arithmetic in decimal, without using binary floating-point operations at all.)

Inaccuracy in decimals [duplicate]

This question already has answers here:
Why python decimal.Decimal precision differs with equable args?
(2 answers)
Closed 8 years ago.
I'm in the process of converting a programme I've made from using floats to decimals.
Obviously the main reason I'm doing this is for accuracy.
I haven't used decimal before so thought I'd have a play first. The first thing I did was this:
>>> x = Decimal(7.2)
>>> x
Decimal('7.20000000000000017763568394002504646778106689453125')
Now, considering decimals are meant to be accurate and avoid long trailing digits like floats, I was pretty surprised to see that happen. It's also gone to 50 decimal places despite the standard preset of 28 (and it doesn't matter what you set the preset to).
Is this a bug (or a feature)? And why is it happening?
Decimal(7.2) will create a decimal from the exact value of the float 7.2. Since the float is not precise, while Decimal is, creating the decimal will carry over the inaccuracies from the float into the decimal, yielding the result you see there.
To create the exact decimal of 7.2, you need to specify it as a string:
Decimal('7.2')
This happens because you feed in a float literal that cannot be represented exactly in binary. You should provide a string:
Decimal('7.2')
or use integers:
Decimal(72) / 10
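Both fixes side by side, next to the problematic float version:
>>> from decimal import Decimal
>>> Decimal('7.2')
Decimal('7.2')
>>> Decimal(72) / 10
Decimal('7.2')
>>> Decimal(7.2)   # the float's binary approximation leaks in
Decimal('7.20000000000000017763568394002504646778106689453125')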
