Exponential of a very small number in Python

I am trying to calculate the exponential of -1200 in Python (it's just an example; I don't need -1200 in particular, but rather a collection of numbers that are around -1200).
>>> math.exp(-1200)
0.0
It is giving me an underflow; how can I work around this problem?
Thanks for any help :)

In the standard library, you can look at the decimal module:
>>> import decimal
>>> decimal.Decimal(-1200)
Decimal('-1200')
>>> decimal.Decimal(-1200).exp()
Decimal('7.024601888177132554529322758E-522')
If you need more functions than decimal supports, you could look at the library mpmath, which I use and like a lot:
>>> import mpmath
>>> mpmath.exp(-1200)
mpf('7.0246018881771323e-522')
>>> mpmath.mp.dps = 200
>>> mpmath.exp(-1200)
mpf('7.0246018881771325545293227583680003334372949620241053728126200964731446389957280922886658181655138626308272350874157946618434229308939128146439669946631241632494494046687627223476088395986988628688095132e-522')
But if possible, you should see whether you can recast your equations to work entirely in log space.

Try calculating in the logarithmic domain as long as possible, i.e. avoid computing the exact value and keep working with the exponents.
exp(-1200) IS a very, very small number (just as exp(1200) is a very, very big one), so maybe the exact value is not really what you are interested in. If you only need to compare these numbers, then logarithmic space should be enough.
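For illustration, here is a minimal sketch of comparing and combining such quantities while staying in log space (plain math module; the values are arbitrary stand-ins, not from the question):
>>> import math
>>> log_a, log_b = -1200.0, -1205.0      # stand-ins for exp(-1200) and exp(-1205)
>>> log_a > log_b                        # compare without ever exponentiating
True
>>> round(math.exp(log_a - log_b), 3)    # their ratio, exp(5), is perfectly representable
148.413
>>> m = max(log_a, log_b)                # log(exp(a) + exp(b)) via the log-sum-exp trick
>>> round(m + math.log(math.exp(log_a - m) + math.exp(log_b - m)), 6)
-1199.993285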

Related

Maximum decimal places in Python

I know that Python's integers are unbounded, but how many significant digits can a floating-point number contain?
I think this is what you want:
import sys
sys.float_info.max
I get 1.7976931348623157e+308
The docs: https://docs.python.org/3/library/sys.html#sys.float_info
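For the significant-digits half of the question, sys.float_info also exposes the relevant fields directly (the values shown assume the usual IEEE 754 double):
>>> import sys
>>> sys.float_info.mant_dig   # bits in the significand of a C double
53
>>> sys.float_info.dig        # decimal digits guaranteed to survive a round trip
15
>>> sys.float_info.epsilon    # gap between 1.0 and the next representable float
2.220446049250313e-16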
Part 2:
The answer is tricky, as several factors can get in the way, and the practical use of float in an application is itself a limiting factor (what do you want to do?). Rounding and the OS may change things, but that is a whole different discussion.
Here is a pedestrian way to get a simple, imperfect answer:
>>> import math
>>> format(math.pi, '.48g')
'3.14159265358979311599796346854418516159057617188'
>>> format(math.pi, '.49g')
'3.141592653589793115997963468544185161590576171875'
>>> format(math.pi, '.50g')
'3.141592653589793115997963468544185161590576171875'
>>> format(math.pi, '.51g')
'3.141592653589793115997963468544185161590576171875'
>>> len(str(141592653589793115997963468544185161590576171875))
48
Some info to look at:
Use of float in various systems: https://press.rebus.community/programmingfundamentals/chapter/floating-point-data-type/
Intro to the standard: https://stackoverflow.com/a/36720298/860715
The standard: https://en.wikipedia.org/wiki/IEEE_754
There are several ways I could see attacking this question, but any result would be easy for a computer scientist or a serious math person to pick apart. In PHP I have been burned by the OS and floats many times. Python is much better, but it still relies on the underlying platform's floating-point implementation, and that can get unpredictable when you push the limits.
I hope this helps, as any answer provided is sure to be riddled with debate. In the long run you probably need to get something running and want to know what you can rely on. I hope my demonstration gets you closer to that conclusion.
python [...] how many significant digits can a floating point number contain
767 is the most (when it uses double-precision, which I think it does for pretty much everyone).
One such number is (2**53 - 1) / 2**1074.
Demo:
>>> len(('%.2000f' % ((2**53 - 1) / 2**1074)).strip('0.'))
767
Got it from this article which goes into more detail.
The number in its full beauty (broken into lines for readability):
>>> import textwrap
>>> print('\n'.join(textwrap.wrap(('%.2000f' % ((2**53 - 1) / 2**1074)).rstrip('0'))))
0.00000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000044501477170144022721148195934182639518696
3909270329129604685221944964444404215389103305904781627017582829831782
6079242213740172877389189291055314414815641243486759976282126534658507
1045737627442980259622449029037796981144446145705102663115100318287949
5279596682360399864792509657803421416370138126133331198987655154514403
1526125381326665295130600018491776632866075559583739224098994780755659
4098101021612198814605258742579179000071675999344145086087205681577915
4359230189103349648694206140521828924314457976051636509036065141403772
1744226256159024466852576737244643007551333245007965068671949137768847
8005309963967709758965844137894433796621993967316936280457084866613206
7970177289160800206986794085513437288676754097207572324554347709124613
17493580281734466552734375

Replicating C Fixed-Point Math in Python

I am attempting to replicate a DSP algorithm in Python that was originally written in C. The trick is that I also need to retain the same behavior as the 32-bit fixed-point variables in the C version, including any numerical errors that the limited precision would introduce.
The options I think are available: the Python Decimal type can be used for fixed-point arithmetic, but from what I can tell there is no way to adjust the size of a Decimal variable; and to my knowledge numpy does not support fixed-point operations.
I did a quick experiment to see how fiddling with the Decimal precision affected things:
>>> a = dc.Decimal(1.1)
>>> a
Decimal('1.100000000000000088817841970012523233890533447265625')
>>> sys.getsizeof(a)
104
>>> dc.getcontext().prec = 16
>>> a = dc.Decimal(1.1)
>>> a
Decimal('1.1999999999999999555910790149937383830547332763671875')
>>> sys.getsizeof(a)
104
There is a change before and after the precision change, but there is still a large number of decimal places, and the variable is still the same size.
How can I best go about achieving the original objective? I know that Python's ctypes exposes the C float types, but I am not sure whether that helps here, or whether there is even a way to accurately mimic C-style fixed-point math in Python.
Thanks!
I recommend the fxpmath module for fixed-point operations in Python. With it you can emulate fixed-point arithmetic, defining the precision in bits (word size and fractional part). It supports arrays and the common arithmetic operations.
Repo at: https://github.com/francof2a/fxpmath
Here is an example:
from fxpmath import Fxp
x = Fxp(1.1, True, 32, 16) # (val, signed, n_word, n_frac)
print(x)
print(x.precision)
results in:
1.0999908447265625
1.52587890625e-05
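If you would rather avoid an extra dependency, below is a bare-bones sketch of the same idea with plain integers. The Q16.16 layout and helper names are my own, and whether the C code truncates or rounds (and how it shifts negative values) depends on your original algorithm, so adjust accordingly:
FRAC_BITS = 16                      # Q16.16: 16 integer bits, 16 fractional bits
WORD_MASK = 0xFFFFFFFF

def wrap32(x):
    # Keep only 32 bits and reinterpret as a signed two's-complement value,
    # mimicking the overflow behaviour of a 32-bit C variable.
    x &= WORD_MASK
    return x - 0x100000000 if x & 0x80000000 else x

def to_fixed(value):
    # Convert a float to Q16.16, truncating extra precision like a C cast.
    return wrap32(int(value * (1 << FRAC_BITS)))

def fixed_mul(a, b):
    # Widen, multiply, shift back down, then wrap to 32 bits.
    return wrap32((a * b) >> FRAC_BITS)

def to_float(x):
    return x / (1 << FRAC_BITS)

a = to_fixed(1.1)
b = to_fixed(2.5)
print(to_float(fixed_mul(a, b)))    # 2.749969482421875, carrying the Q16.16 truncation error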

Most efficient way to store scientific notation in Python and MySQL

So I'm trying to store a LOT of numbers, and I want to optimize storage space.
A lot of the numbers generated have pretty high-precision floating points, e.g.:
0.000000213213 or 323224.23125523 - long, memory-hungry floats.
I want to figure out the best way, either in Python or in MySQL (MariaDB), to store each number with the smallest data size.
So 2.132e-7 or 3.232e5: basically, store it with as small a footprint as possible, to a number of decimals I can specify, discarding the information after n decimals.
I assume storing as a DOUBLE is the way to go, but can I truncate the precision and save on space too?
I'm thinking some number formatting / truncating in Python followed by normal storage as a DOUBLE would work, but would that actually save any space compared to just storing the double with all N decimals attached?
Thanks!
All Python floats have the same precision and take the same amount of storage. If you want to reduce overall storage, numpy arrays should do the trick.
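For example, a rough sketch of the storage difference (nbytes is the exact size of the array's data buffer; float32 keeps only about 7 significant digits, so check that this is enough precision for your numbers):
>>> import numpy as np
>>> values = [0.000000213213, 323224.23125523] * 1000
>>> np.array(values, dtype=np.float64).nbytes   # 8 bytes per number
16000
>>> np.array(values, dtype=np.float32).nbytes   # 4 bytes per number, ~7 significant digits
8000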
If, on the other hand, you are trying to minimize the textual representation of the numbers, say for transmission via JSON or XML, you could use f-strings:
>>> from math import pi
>>> pi
3.141592653589793
>>> f'{pi:3.2}'
'3.1'
>>> bigpi = pi*10e+100
>>> bigpi
3.141592653589793e+101
>>> f'{bigpi:3.2}'
'3.1e+101'

Increase Accuracy of float division (Python)

I'm writing a bit of code in PyCharm, and I want the division to be much more accurate than it currently is (40-50 digits instead of about 15). How can I accomplish this?
Thanks.
Check out the decimal module:
>>> from decimal import *
>>> getcontext().prec = 50
>>> Decimal(1)/Decimal(7)
Decimal('0.14285714285714285714285714285714285714285714285714')
If you're interested in more sophisticated operations than decimal provides, you can also look at libraries like bigfloat or mpmath (which I use and like a lot).
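If you go the mpmath route, the equivalent is a quick sketch like the following (assuming the package is installed): set mp.dps to the number of decimal digits you want and divide as usual.
import mpmath
mpmath.mp.dps = 50          # working precision, in decimal digits
print(mpmath.mpf(1) / 7)    # prints 0.142857... carried out to roughly 50 digits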

Python: Decimals with trigonometric functions

I'm having a little problem, take a look:
>>> import math
>>> math.sin(math.pi)
1.2246467991473532e-16
This is not what I learned in my calculus class (it was 0, actually).
So, now, my question:
I need to perform some heavy trigonometric calculations with Python. What library can I use to get correct values?
Can I use Decimal?
EDIT:
Sorry, what I meant is something else.
What I want is some way to do:
>>> awesome_lib.sin(180)
0
or this:
>>> awesome_lib.sin(Decimal("180"))
0
I need a library that performs accurate trigonometric calculations. Everybody knows that sin 180° is 0; I need a library that can do that too.
1.2246467991473532e-16 is close to 0 -- there are 15 zeroes between the decimal point and the first significant digit -- much as 3.1415926535897931 (the value of math.pi) is close to pi. The answer is correct to fifteen decimal places!
So if you want sin(pi) to equal 0, simply round it to a reasonable number of decimal places. 15 looks good to me and should be plenty for any application:
print(round(math.sin(math.pi), 15))
Pi is an irrational number, so it can't be represented exactly using a finite number of bits. However, you can use a library for symbolic computation such as sympy.
>>> sympy.sin(sympy.pi)
0
Regarding the second part of your question, if you want to use degrees instead of radians you can define a simple conversion function:
def radians(x):
    return x * sympy.pi / 180
and use it as follows:
>>> sympy.sin(radians(180))
0
If you find the result unexpected, I dare suggest that you have a look at this text:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
It's really worth it.
You can also try gmpy or real.
In gmpy you can specify the precision explicitly:
gmpy.pi(256)
In real.py you could use the pa() function:
from real import pa,pi
pa(pi)
Short Answer -
Decimal cos() and sin() can both be implemented from the Decimal exp() implementation by splitting the even terms of its series into cos() and the odd terms into sin(), and alternating the sign of each term between positive and negative in both of those series. No change is needed in the loop, which only computes as many terms as the configured precision requires (decimal.getcontext().prec).
Long Answer -
Python's decimal.Decimal supports an exp() function that takes only a real argument (unlike exp() in the R language) and sums the infinite series only up to the number of terms dictated by the configured precision (decimal.getcontext().prec).
Currently the even terms compute cosh() and the odd terms compute sinh(); their sum is returned as the result of exp(). If the sign of each term were modified to alternate between positive and negative within each series, the even-term series would compute cos() and the odd-term series would compute sin().
Additionally, like the R language, this change could enable Decimal exp() to support complex arguments, so that exp(1j*x) could return cos(x) + 1j*sin(x).
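The decimal module's documentation in fact ships recipes that do exactly this; here is a condensed version of its cos() and sin() recipes (the same alternating-sign series, summed until adding another term no longer changes the result at the current precision):
from decimal import Decimal, getcontext

def cos(x):
    # 1 - x**2/2! + x**4/4! - ... : the even terms with alternating signs.
    getcontext().prec += 2              # extra guard digits while summing
    i, lasts, s, fact, num, sign = 0, 0, 1, 1, 1, 1
    while s != lasts:
        lasts = s
        i += 2
        fact *= i * (i - 1)
        num *= x * x
        sign *= -1
        s += num / fact * sign
    getcontext().prec -= 2
    return +s                           # unary plus rounds to the restored precision

def sin(x):
    # x - x**3/3! + x**5/5! - ... : the odd terms with alternating signs.
    getcontext().prec += 2
    i, lasts, s, fact, num, sign = 1, 0, x, 1, x, 1
    while s != lasts:
        lasts = s
        i += 2
        fact *= i * (i - 1)
        num *= x * x
        sign *= -1
        s += num / fact * sign
    getcontext().prec -= 2
    return +s

getcontext().prec = 40
print(cos(Decimal(0)))                  # 1
print(sin(Decimal(1)))                  # 0.8414709848..., to 40 significant digits
Note that, just as with floats, sin() of an approximation of pi will give a tiny nonzero result rather than exactly 0; for an exact symbolic zero, sympy as shown earlier is still the right tool.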
