Maximum decimal places in Python

I know that Python's integers are unbounded, but how many significant digits can a floating point number contain?

I think this is what you want:
>>> import sys
>>> sys.float_info.max
1.7976931348623157e+308
The docs: https://docs.python.org/3/library/sys.html#sys.float_info
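Since the question is really about significant digits rather than magnitude, note that sys.float_info also exposes those limits directly: dig is the number of decimal digits guaranteed to round-trip through a float, and mant_dig is the size of the binary significand (53 bits for an IEEE 754 double):
>>> sys.float_info.dig
15
>>> sys.float_info.mant_dig
53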
Part 2:
The answer is tricky, as several factors can get in the way, and the practical use of float in an application is itself a limiting factor (what do you want to do?). Rounding and the OS may change this, but that is a whole different discussion.
Here is a pedestrian means to get a simple and imperfect answer:
>>> import math
>>> format(math.pi, '.48g')
'3.14159265358979311599796346854418516159057617188'
>>> format(math.pi, '.49g')
'3.141592653589793115997963468544185161590576171875'
>>> format(math.pi, '.50g')
'3.141592653589793115997963468544185161590576171875'
>>> format(math.pi, '.51g')
'3.141592653589793115997963468544185161590576171875'
>>> len(str(141592653589793115997963468544185161590576171875))
48
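The output stops changing at 49 significant digits because every finite double is an exact binary fraction, so its decimal expansion terminates. Decimal can display that exact value directly, which makes a handy cross-check of the format() experiment above:
>>> from decimal import Decimal
>>> Decimal(math.pi)
Decimal('3.141592653589793115997963468544185161590576171875')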
Some info to look at:
Use of float in various systems: https://press.rebus.community/programmingfundamentals/chapter/floating-point-data-type/
Intro to the standard: https://stackoverflow.com/a/36720298/860715
The standard: https://en.wikipedia.org/wiki/IEEE_754
There are several ways I could see attacking the question, but any result would be easy for a comp-sci or serious math person to bust. In PHP I have been burned by the OS and float many times. Python is much better, but it still defers a lot to the underlying operating system's floating-point operations, and that can get unpredictable when pushing the limits.
I hope this helps, as any answer provided is sure to be riddled with debate. In the long run you probably need to get something running and want to know what you can use reliably. I hope my demonstration gets you closer to that conclusion.

python [...] how many significant digits can a floating point number contain
767 is the most (when it uses double-precision, which I think it does for pretty much everyone).
One such number is (2^53 - 1) / 2^1074.
Demo:
>>> len(('%.2000f' % ((2**53 - 1) / 2**1074)).strip('0.'))
767
Got it from this article which goes into more detail.
The number in its full beauty (broken into lines for readability):
>>> import textwrap
>>> print('\n'.join(textwrap.wrap(('%.2000f' % ((2**53 - 1) / 2**1074)).rstrip('0'))))
0.00000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000044501477170144022721148195934182639518696
3909270329129604685221944964444404215389103305904781627017582829831782
6079242213740172877389189291055314414815641243486759976282126534658507
1045737627442980259622449029037796981144446145705102663115100318287949
5279596682360399864792509657803421416370138126133331198987655154514403
1526125381326665295130600018491776632866075559583739224098994780755659
4098101021612198814605258742579179000071675999344145086087205681577915
4359230189103349648694206140521828924314457976051636509036065141403772
1744226256159024466852576737244643007551333245007965068671949137768847
8005309963967709758965844137894433796621993967316936280457084866613206
7970177289160800206986794085513437288676754097207572324554347709124613
17493580281734466552734375
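If you'd rather not count digits by string formatting, the same 767 can be cross-checked through Decimal, which converts a float exactly; a small sketch:
>>> from decimal import Decimal
>>> len(Decimal((2**53 - 1) / 2**1074).as_tuple().digits)
767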

Related

Replicating C Fixed-Point Math in Python

I am attempting to replicate a DSP algorithm in Python that was originally written in C. The trick is I also need to retain the same behavior of the 32 bit fixed point variables from the C version, including any numerical errors that the limited precision would introduce.
The options I think are currently available are these:
I know the Python Decimal type can be used for fixed-point arithmetic; however, from what I can tell there is no way to adjust the size of a Decimal variable. To my knowledge numpy does not support fixed-point operations.
I did a quick experiment to see how fiddling with the Decimal precision affected things:
>>> import sys
>>> import decimal as dc
>>> a = dc.Decimal(1.1)
>>> a
Decimal('1.100000000000000088817841970012523233890533447265625')
>>> sys.getsizeof(a)
104
>>> dc.getcontext().prec = 16
>>> a = dc.Decimal(1.1)
>>> a
Decimal('1.100000000000000088817841970012523233890533447265625')
>>> sys.getsizeof(a)
104
There is no change before/after the precision change, because constructing a Decimal from a float is exact and ignores the context precision. The variable is still the same size and still carries a large number of decimal places.
How can I best go about achieving the original objective? I know that Python's ctypes has the C float type, but I do not know if that will be useful in this case, and I do not know if there is even a way to accurately mimic C-style fixed-point math in Python.
Thanks!
I recommend the fxpmath module for fixed-point operations in Python. With it you can emulate fixed-point arithmetic, defining the precision in bits (the fractional part). It supports arrays and some arithmetic operations.
Repo at: https://github.com/francof2a/fxpmath
Here is an example:
from fxpmath import Fxp
x = Fxp(1.1, True, 32, 16) # (val, signed, n_word, n_frac)
print(x)
print(x.precision)
results in:
1.0999908447265625
1.52587890625e-05
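If adding a dependency is not an option, the same idea can be emulated with plain ints. Below is a minimal Q16.16 sketch; wrap, to_fixed, from_fixed and fx_mul are hypothetical helper names, and 32-bit two's-complement wraparound (matching the C variables) is an assumption about the original algorithm:
FRAC_BITS = 16   # fractional bits (Q16.16)
WORD_BITS = 32   # total word size, like the 32-bit C variables

def wrap(n):
    # Hypothetical helper: wrap an int to signed 32-bit two's complement.
    n &= (1 << WORD_BITS) - 1
    return n - (1 << WORD_BITS) if n >= (1 << (WORD_BITS - 1)) else n

def to_fixed(x):
    # Float to Q16.16; int() truncates toward zero, like a C cast.
    return wrap(int(x * (1 << FRAC_BITS)))

def from_fixed(n):
    # Q16.16 back to float, for display only.
    return n / (1 << FRAC_BITS)

def fx_mul(a, b):
    # Fixed-point multiply: full product, shift out extra fraction bits, wrap.
    return wrap((a * b) >> FRAC_BITS)

x = to_fixed(1.1)
print(from_fixed(x))             # 1.0999908447265625, same as the Fxp example
print(from_fixed(fx_mul(x, x)))  # ~1.2099, quantization error retained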

Why isn't "getcontext().prec = 2" actually setting it so the use of Decimal() goes to two decimal places?

I am writing a program that deals with monetary values, so I want it to output to two decimal places using decimal. Unfortunately, this isn't working, and it is outputting much longer numbers.
I've looked around online, but so far it doesn't look like there are any answers for this specific issue. In case it was something happening in the rest of the program, I ran a test piece of code using it on a basic number, and the same result occurred, so it isn't the rest of my code. Test below:
from decimal import *
getcontext().prec = 2
test = Decimal(5.859)
print(test)
I expected this to print 5.86, or in the worst case at least 5.85 if it didn't round in the desired direction.
Instead, it printed:
5.8589999999999999857891452847979962825775146484375
the same output as though getcontext().prec = 2 was never used.
Please be aware that I am moderately new to Python, so please don't assume I'll know exactly what you mean if you throw random code at me without an explanation.
As per this documentation, context precision and rounding only come into play during arithmetic operations, not during construction. Note also that prec counts significant digits, not decimal places, which is why it is set to 3 below to get two decimal places out of 5.859. So, in order to achieve your goal, you can do something like this:
from decimal import *
getcontext().prec = 3
test = Decimal('5.859')/Decimal('1')
print(test)
output:
5.86
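For money specifically, quantize() is usually a better fit than context precision, since it rounds to a fixed number of decimal places rather than significant digits:
>>> from decimal import Decimal, ROUND_HALF_UP
>>> Decimal('5.859').quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
Decimal('5.86')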

Dealing with large numbers in R [Inf] and Python

I am learning Python these days, and this is probably my first post on Python. I am relatively new to R as well, and have been using R for about a year. I am comparing both the languages while learning Python. I apologize if this question is too basic.
I am unsure why R outputs Inf for something python doesn't. Let's take 2^1500 as an example.
In R:
nchar(2^1500)
[1] 3
2^1500
[1] Inf
In Python:
len(str(2**1500))
Out[7]: 452
2**1500
Out[8]: 3507466211043403874...
I have two questions:
a) Why is it that R returns Inf when Python doesn't?
b) I looked at the "How to work with large numbers in R?" thread. It seems that Brobdingnag could help with dealing with large numbers. However, even in that case, I am unable to compute nchar. How do I compute the above expression, i.e. 2^1500, in R?
2^Brobdingnag::as.brob(500)
[1] +exp(346.57)
> nchar(2^Brobdingnag::as.brob(500))
Error in nchar(2^Brobdingnag::as.brob(500)) :
no method for coercing this S4 class to a vector
In answer to your questions:
a) They use different representations for numbers. Most numbers in R are represented as double precision floating point values. These are all 64 bits long, and give about 15 digits of precision throughout the range, which goes from -.Machine$double.xmax to .Machine$double.xmax, then switches to signed infinite values. R also sometimes uses 32 bit integer values, which cover a range of roughly +/- 2 billion. R chooses these types because it is geared towards statistical and numerical methods, and those rarely need more precision than double precision gives. (They often need a bigger range, but usually taking logs solves that problem.)
Python is more of a general purpose platform, and it has types discussed in MichaelChirico's comment.
b) Besides Brobdingnag, the gmp package can handle arbitrarily large integers. For example,
> library(gmp)
> as.bigz(2)^1500
Big Integer ('bigz') :
[1] 35074662110434038747627587960280857993524015880330828824075798024790963850563322203657080886584969261653150406795437517399294548941469959754171038918004700847889956485329097264486802711583462946536682184340138629451355458264946342525383619389314960644665052551751442335509249173361130355796109709885580674313954210217657847432626760733004753275317192133674703563372783297041993227052663333668509952000175053355529058880434182538386715523683713208549376
> nchar(as.character(as.bigz(2)^1500))
[1] 452
I imagine the as.character() call would also be needed with Brobdingnag.
Apparently python uses arbitrary precision integers by default when needed. R does not. However, there are many useful R packages to perform arbitrary precision arithmetic. Which package to pick depends on the use case.
To bring up a package that hasn't been discussed yet, consider the Rmpfr package:
> library(Rmpfr)
> a <- 2^mpfr(1500, 10000)
> a
1 'mpfr' number of precision 10000 bits
[1] 35074662110434038747627587960280857993524015880330828824075798024790963850563322203657080886584969261653150406795437517399294548941469959754171038918004700847889956485329097264486802711583462946536682184340138629451355458264946342525383619389314960644665052551751442335509249173361130355796109709885580674313954210217657847432626760733004753275317192133674703563372783297041993227052663333668509952000175053355529058880434182538386715523683713208549376
It requires you to set a precision, but if you make it large enough it can hold 2^1500 as an integer.
However, it also doesn't seem to define an as.character() function:
> as.character(a)
[1] "<S4 object of class \"mpfr1\">"
So if your problem is specifically to count digits, then the gmp package as discussed in this answer is probably the way to go. On the other hand, if you're interested in arbitrary precision floating point arithmetic, Rmpfr might be a better choice.

How to avoid floating point arithmetic issues?

Python (and almost everything else) has known limitations when working with floating point numbers (a nice overview is provided here).
While the problem is described well in the documentation, the documentation avoids suggesting any way to fix it. With this question I am seeking a more or less robust way to avoid situations like the following:
print(math.floor(0.09/0.015)) # >> 6
print(math.floor(0.009/0.0015)) # >> 5
print(99.99-99.973) # >> 0.016999999999825377
print(.99-.973) # >> 0.017000000000000015
var = 0.009
step = 0.0015
print(var < math.floor(var/step)*step+step) # False
print(var < (math.floor(var/step)+1)*step) # True
And unlike what is suggested in this question, their solution does not help to fix a problem like the next piece of code failing randomly:
total_bins = math.ceil((data_max - data_min) / width) # round to upper
new_max = data_min + total_bins * width
assert new_max >= data_max
# fails, because for example 1.9459999999999997 < 1.946
If you deal in discrete quantities, use int.
Sometimes people use float in places where they definitely shouldn't. If you're counting something (like number of cars in the world) as opposed to measuring something (like how much gasoline is used per day), floating-point is probably the wrong choice. Currency is another example where floating point numbers are often abused: if you're storing your bank account balance in a database, it's really not 123.45 dollars, it's 12345 cents. (But also see below about Decimal.)
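As a sketch of that cents-only approach (the 8% tax rate is just an illustrative value):
>>> price_cents = 12345                 # $123.45, stored as an int
>>> tax_cents = price_cents * 8 // 100  # 8% tax in pure integer arithmetic
>>> dollars, cents = divmod(price_cents + tax_cents, 100)
>>> f'${dollars}.{cents:02d}'
'$133.32'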
Most of the rest of the time, use float.
Floating-point numbers are general-purpose. They're extremely accurate; they just can't represent certain fractions, just as finite decimal numbers can't represent 1/3. Floats are generally suited to any kind of analog quantity where the measurement has error bars: length, mass, frequency, energy. If there's uncertainty on the order of 2^(-52) or greater, there's probably no good reason not to use float.
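One practical consequence: compare measured floats with a tolerance rather than with exact equality, e.g. via math.isclose:
>>> 0.1 + 0.2 == 0.3
False
>>> import math
>>> math.isclose(0.1 + 0.2, 0.3)
True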
If you need human-readable numbers, use float but format it.
"This number looks weird" is a bad reason not to use float. But that doesn't mean you have to display the number to arbitrary precision. If a number with only three significant figures comes out to 19.99909997918947, format it to one decimal place and be done with it.
>>> from math import e, pi
>>> print('{:0.1f}'.format(e**pi - pi))
20.0
If you need precise decimal representation, use Decimal.
Sraw's answer refers to the decimal module, which is part of the standard library. I already mentioned currency as a discrete quantity, but you may need to do calculations on amounts of currency in which not all numbers are discrete, for example calculating interest. If you're writing code for an accounting system, there will be rules that say when rounding is applied and to what accuracy various calculations are done, and those specifications will be written in terms of decimal places. In this situation and others where the decimal representation is inherent to the problem specification, you'll want to use a decimal type.
>>> from decimal import Decimal
>>> rate = Decimal('0.0345')
>>> principal = Decimal('3412.65')
>>> interest = rate*principal
>>> interest
Decimal('117.736425')
>>> interest.quantize(Decimal('0.01'))
Decimal('117.74')
But most importantly, use data types and operations that make sense in context.
Several of your examples use math.floor, which takes a float and chops off the fractional part. In any situation where you should use math.floor, floating-point error doesn't matter. (If you want to round to the nearest integer, use round instead.) Yes, there are ways to use floating-point operations that have wrong results from a mathematical standpoint. But real-world quantities usually fall into one of these categories:
Exact, and therefore should not be put in a float;
Imprecise to a degree far exceeding the likely accumulation of floating-point error.
As a programmer, it's part of your job to know the quantities you're dealing with and choose appropriate data types. So there's no "fix" for floating point numbers, because there's no "problem" really -- just people using the wrong type for the wrong thing.
Let's talk about decimal. This library turns a number into a string-like object of its digits, and then does arithmetic operations digit by digit.
So it can handle significantly huge numbers with almost perfect decimal precision.
But, as it calculates digit by digit, it costs much more than native float arithmetic.
Further, if you want to use decimal, you need to use it consistently to preserve precision: if you mix Decimal with normal types such as float, it may cause unexpected problems.
Finally, when you construct a Decimal object, it is better to pass a string rather than a float:
>>> from decimal import Decimal
>>> print(Decimal(99.99) - Decimal(99.973))
0.01699999999999590727384202182
>>> print(Decimal("99.99") - Decimal("99.973"))
0.017
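On the point about mixing types: Decimal at least fails loudly rather than silently losing precision, since arithmetic between Decimal and float raises a TypeError:
>>> Decimal('99.99') - 0.1
Traceback (most recent call last):
  ...
TypeError: unsupported operand type(s) for -: 'decimal.Decimal' and 'float'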
It depends what your end goal is: there is no way to "perfectly" store floating point numbers, only "good enough".
If you are working with money, for example (dollars and cents), it is common practice not to store dollars but only cents (1 dollar = 100 cents); this is how PayPal stores your account balance on their servers.
There is also the Python Decimal class for fixed-point arithmetic.

Exponential of very small number in python

I am trying to calculate the exponential of -1200 in python (it's an example, I don't need -1200 in particular but a collection of numbers that are around -1200).
>>> math.exp(-1200)
0.0
It is giving me an underflow; how can I get around this problem?
Thanks for any help :)
In the standard library, you can look at the decimal module:
>>> import decimal
>>> decimal.Decimal(-1200)
Decimal('-1200')
>>> decimal.Decimal(-1200).exp()
Decimal('7.024601888177132554529322758E-522')
If you need more functions than decimal supports, you could look at the library mpmath, which I use and like a lot:
>>> import mpmath
>>> mpmath.exp(-1200)
mpf('7.0246018881771323e-522')
>>> mpmath.mp.dps = 200
>>> mpmath.exp(-1200)
mpf('7.0246018881771325545293227583680003334372949620241053728126200964731446389957280922886658181655138626308272350874157946618434229308939128146439669946631241632494494046687627223476088395986988628688095132e-522')
but if possible, you should see if you can recast your equations to work entirely in the log space.
Try calculating in the logarithmic domain as long as possible, i.e. avoid calculating exact values and keep working with exponents instead.
exp(-1200) IS a very, very small number (just as exp(1200) is a very, very big one), so maybe the exact value is not really what you are interested in. If you only need to compare such numbers, then log space should be enough.
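A small illustration of staying in log space: keep only the exponents, which lets you compare and multiply such quantities without ever calling exp:
>>> log_a, log_b = -1200.0, -1201.5  # stand-ins for exp(-1200) and exp(-1201.5)
>>> log_a > log_b                    # exp is monotonic, so comparing logs suffices
True
>>> log_a + log_b                    # multiply the numbers by adding their logs
-2401.5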
