I'm writing a bit of code in PyCharm, and I want the division to be much more accurate than it currently is (40-50 digits instead of about 15). How can I accomplish this?
Thanks.
Check out the decimal module:
>>> from decimal import *
>>> getcontext().prec = 50
>>> Decimal(1)/Decimal(7)
Decimal('0.14285714285714285714285714285714285714285714285714')
If you're interested in more sophisticated operations than decimal provides, you can also look at libraries like bigfloat or mpmath (which I use and like a lot).
I am attempting to replicate a DSP algorithm in Python that was originally written in C. The trick is that I also need to retain the behavior of the 32-bit fixed-point variables from the C version, including any numerical errors that the limited precision would introduce.
The options I think are currently available: I know the Python Decimal type can be used for fixed-point arithmetic, but from what I can tell there is no way to adjust the size of a Decimal variable. To my knowledge, numpy does not support fixed-point operations.
I did a quick experiment to see how fiddling with the Decimal precision affected things:
>>> import sys
>>> import decimal as dc
>>> a = dc.Decimal(1.1)
>>> a
Decimal('1.100000000000000088817841970012523233890533447265625')
>>> sys.getsizeof(a)
104
>>> dc.getcontext().prec = 16
>>> a = dc.Decimal(1.1)
>>> a
Decimal('1.1999999999999999555910790149937383830547332763671875')
>>> sys.getsizeof(a)
104
There is a change before/after the precision change, but the value still has a large number of decimal places, and the variable is still the same size.
How can I best go about achieving the original objective? I know that Python's ctypes exposes the C float types, but I do not know if that will be useful in this case. I do not know if there is even a way to accurately mimic C-style fixed-point math in Python.
Thanks!
I recommend the fxpmath module for fixed-point operations in Python. With it you can emulate fixed-point arithmetic, defining the precision in bits (the fractional part). It supports arrays and some arithmetic operations.
Repo at: https://github.com/francof2a/fxpmath
Here is an example:
from fxpmath import Fxp
x = Fxp(1.1, True, 32, 16) # (val, signed, n_word, n_frac)
print(x)
print(x.precision)
results in:
1.0999908447265625
1.52587890625e-05
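If pulling in a dependency is not an option, 32-bit fixed point can also be emulated directly with plain Python integers. A minimal sketch, assuming a signed Q16.16 format with two's-complement wraparound (the helper names here are illustrative, not from the original C code):

```python
FRAC_BITS = 16
SCALE = 1 << FRAC_BITS              # 2**16: the value 1.0 in Q16.16
MASK = (1 << 32) - 1                # keep results to 32 bits

def _wrap(v):
    """Reduce to 32 bits and reinterpret as a signed (two's-complement) int."""
    v &= MASK
    return v - (1 << 32) if v & (1 << 31) else v

def to_fixed(x):
    """Convert a float to signed Q16.16, wrapping like a C int32_t."""
    return _wrap(int(round(x * SCALE)))

def fixed_mul(a, b):
    """Multiply two Q16.16 values, shifting out the extra fraction bits
    (arithmetic shift, i.e. rounding toward negative infinity)."""
    return _wrap((a * b) >> FRAC_BITS)

def to_float(a):
    """Convert a Q16.16 value back to a float for display."""
    return a / SCALE
```

For example, `to_fixed(1.1)` gives 72090, i.e. about 1.1000061 rather than 1.1, which reproduces the quantization a C Q16.16 variable would show. Note that a real C port would also need to match the C code's exact rounding and overflow conventions, which can differ from this sketch.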
Look at these:
>>> from mpmath import mp
>>> 20988936657440586486151264256610222593863921 % 2623617082180073274358906876623916797788160
291280009243618888211558641
>>> 20988936657440586486151264256610222593863921 % 5247234164360146548717813753247833595576320
291280009243618888211558641 <<<HOW CAN THIS BE THE SAME ANSWER AS ABOVE?
>>> mp.fdiv(20988936657440586486151264256610222593863921, 5247234164360146548717813753247833595576320)
mpf('4.00000000000000005551115123125782779')
>>> mp.fdiv(20988936657440586486151264256610222593863921, 2623617082180073274358906876623916797788160)
mpf('8.00000000000000011102230246251565558')
I am trying to write a program that returns true if a large number is prime.
As you probably already know, Python has its limits, not just with floating-point numbers but also with large numbers above a decillion. If my Python 3.7 (64-bit) can support large integers up to:
>>> import sys
>>> int(sys.float_info.max)
179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368
...then why can I not do a simple modulo on the numbers above without getting inconsistencies? Can anyone refer me to some other tool, or is there something I'm just missing?
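For what it's worth, the two equal remainders above are not an inconsistency: Python's integer % is exact at any size. The second modulus is exactly twice the first, and the quotient by the smaller one happens to be even (8, as the fdiv results hint), so both divisions leave the same remainder. A quick check:

```python
a = 20988936657440586486151264256610222593863921
b1 = 2623617082180073274358906876623916797788160
b2 = 5247234164360146548717813753247833595576320

# Python integers are arbitrary precision, so // and % are exact here;
# only the mpmath fdiv results are rounded to float-like precision.
assert b2 == 2 * b1      # the second modulus is exactly twice the first
assert a // b1 == 8      # a = 8*b1 + r
assert a // b2 == 4      # a = 4*(2*b1) + r, the same r
print(a % b1 == a % b2)  # True
```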
Hi, I'm wondering how to get Python to output an answer to many more decimal places than the default.
For example:
Currently print(1/7) outputs: 0.14285714285
I want to be able to output: 0.142857142857142857142857142857142857142857142857
If you want completely arbitrary precision (which you will pretty much need to get at that level of precision), I recommend looking at the mpmath module.
>>> from mpmath import mp
>>> mp.dps = 100
>>> mp.fdiv(1.0,7.0)
mpf('0.1428571428571428571428571428571428571428571428571428571428571428571428571428571428571428571428571428579')
I suppose if all you want is to be able to do very simple arithmetic, the builtin decimal module will suffice. I would still recommend mpmath for anything more complex. You could try something like this:
>>> import decimal
>>> decimal.setcontext(decimal.Context(prec=100))
>>> decimal.Decimal(1.0) / decimal.Decimal(7.0)
Decimal('0.1428571428571428571428571428571428571428571428571428571428571428571428571428571428571428571428571429')
You need to use the decimal module: https://docs.python.org/3.5/library/decimal.html
>>> from decimal import Decimal
>>> Decimal(1) / Decimal(7)
Decimal('0.1428571428571428571428571429')
I am trying to calculate the exponential of -1200 in Python (it's an example; I don't need -1200 in particular, but a collection of numbers that are around -1200).
>>> math.exp(-1200)
0.0
It is giving me an underflow; how can I get around this problem?
Thanks for any help :)
In the standard library, you can look at the decimal module:
>>> import decimal
>>> decimal.Decimal(-1200)
Decimal('-1200')
>>> decimal.Decimal(-1200).exp()
Decimal('7.024601888177132554529322758E-522')
If you need more functions than decimal supports, you could look at the library mpmath, which I use and like a lot:
>>> import mpmath
>>> mpmath.exp(-1200)
mpf('7.0246018881771323e-522')
>>> mpmath.mp.dps = 200
>>> mpmath.exp(-1200)
mpf('7.0246018881771325545293227583680003334372949620241053728126200964731446389957280922886658181655138626308272350874157946618434229308939128146439669946631241632494494046687627223476088395986988628688095132e-522')
but if possible, you should see if you can recast your equations to work entirely in the log space.
Try calculating in the logarithmic domain as long as possible, i.e. avoid calculating the exact value and keep working with exponents.
exp(-1200) is a very, very small number (just as exp(1200) is a very, very big one), so maybe the exact value is not really what you are interested in. If you only need to compare these numbers, then logarithmic space should be enough.
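As a concrete illustration of staying in log space (the specific values below are made up for the example): represent each quantity by its natural log and only ever exponentiate differences, which are of moderate size:

```python
import math

# Represent p1 = exp(-1200) and p2 = exp(-1195) by their logs only.
lp1, lp2 = -1200.0, -1195.0

# The ratio p2/p1 never underflows, because only the difference is exponentiated:
ratio = math.exp(lp2 - lp1)   # exp(5), a perfectly ordinary float

# log(p1 + p2) via the log-sum-exp trick: factor out the larger term first.
m = max(lp1, lp2)
log_sum = m + math.log(math.exp(lp1 - m) + math.exp(lp2 - m))

print(ratio)     # about 148.41
print(log_sum)   # about -1194.9933
```

Comparisons also work directly on the logs: lp2 > lp1 if and only if p2 > p1.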
Does anyone know of a faster decimal implementation in python?
As the example below demonstrates, the standard library's decimal module is ~100 times slower than float.
from timeit import Timer
def run(val, the_class):
    test = the_class(1)
    for c in xrange(10000):
        d = the_class(val)
        d + test
        d - test
        d * test
        d / test
        d ** test
        str(d)
        abs(d)
if __name__ == "__main__":
    a = Timer("run(123.345, float)", "from decimal_benchmark import run")
    print "FLOAT", a.timeit(1)
    a = Timer("run('123.345', Decimal)", "from decimal_benchmark import run; from decimal import Decimal")
    print "DECIMAL", a.timeit(1)
Outputs:
FLOAT 0.040635041427
DECIMAL 3.39666790146
You can try cdecimal:
from cdecimal import Decimal
As of Python 3.3, the cdecimal implementation is now the built-in implementation of the decimal standard library module, so you don't need to install anything. Just use decimal.
For Python 2.7, installing cdecimal and using it instead of decimal should provide a speedup similar to what Python 3 gets by default.
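One way to confirm the accelerated version is active on Python 3 (this check assumes CPython; the C build of decimal, backed by libmpdec, exposes the library version, while a pure-Python fallback would not):

```python
import decimal

# CPython >= 3.3 ships decimal backed by libmpdec (the former cdecimal);
# the accelerated build exposes the underlying library's version string.
print(getattr(decimal, '__libmpdec_version__', 'pure-Python fallback'))
```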
The GMP library is one of the best arbitrary-precision math libraries around, and there is a Python binding available in GMPY. I would try that.
You should compare Decimal to long-integer performance, not floating point. Floating point is mostly done in hardware these days. Decimal is for decimal precision, while floating point is for a wider range. Use the decimal package for monetary calculations.
To quote the decimal package manual:
Decimal numbers can be represented exactly. In contrast, numbers like 1.1 do not have an exact representation in binary floating point. End users typically would not expect 1.1 to display as 1.1000000000000001 as it does with binary floating point.
The exactness carries over into arithmetic. In decimal floating point, "0.1 + 0.1 + 0.1 - 0.3" is exactly equal to zero. In binary floating point, the result is 5.5511151231257827e-017. While near to zero, the differences prevent reliable equality testing and differences can accumulate. For this reason, decimal would be preferred in accounting applications which have strict equality invariants.
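The quoted behavior is easy to check directly:

```python
from decimal import Decimal

# Binary floating point: 0.1 has no exact base-2 representation,
# so the rounding errors leave a tiny nonzero residue.
f = 0.1 + 0.1 + 0.1 - 0.3

# Decimal (constructed from strings): the same arithmetic is exact.
d = Decimal('0.1') + Decimal('0.1') + Decimal('0.1') - Decimal('0.3')

print(f)   # a tiny nonzero residue, about 5.55e-17
print(d)   # 0.0
```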
Use cdecimal.
Adding the following to your benchmark:
a = Timer("run('123.345', Decimal)", "import sys; import cdecimal; sys.modules['decimal'] = cdecimal; from decimal_benchmark import run; from decimal import Decimal")
print "CDECIMAL", a.timeit(1)
My results are:
FLOAT 0.0257983528473
DECIMAL 2.45782495288
CDECIMAL 0.0687125069413
(Python 2.7.6/32, Win7/64, AMD Athlon II 2.1GHz)
Python's Decimal is very slow; one can use float or cdecimal, a faster implementation of Decimal.