I have a question about Python's native math support. As far as I know, calculations in Python eventually stop being precise enough once the operands get large enough.
Is there some kind of unlimited-precision support in the Python language itself, rather than in an imported library like NumPy? I'm looking for something like BigDecimal in Java, which supports decimal calculation with unlimited precision.
For a quantitative performance comparison of Python's native decimal.Decimal and fractions.Fraction when used inside NumPy dense-matrix computations (with estimates for sparse matrices), see https://stackoverflow.com/a/26248202/3666197
Have you looked at the decimal module? https://docs.python.org/2/library/decimal.html
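For example, Python integers are arbitrary precision out of the box, and decimal lets you choose the precision of decimal arithmetic. A minimal sketch:
from decimal import Decimal, getcontext
print(2 ** 1000)                 # exact: Python ints never overflow
getcontext().prec = 50           # 50 significant digits
print(Decimal(1) / Decimal(7))
# 0.14285714285714285714285714285714285714285714285714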
Related
With Python's math library, an expression for an irrational number like
math.sin(math.pi/3)
will return a decimal approximation.
I've been looking at the documentation for a way to keep it at the exact value (i.e. returning sqrt(3)/2 instead of 0.8660254037844386), but I haven't found anything. I'm pretty new to Python, so I apologize if I missed it.
Symbolic mathematics is not something that most general-purpose programming languages do. Python works with integers and (real and complex) floating point values. For example the symbol math.pi is not a representation of the mathematical constant π, but just a Python variable containing a finite-precision floating point approximation of π.
However, some people do sometimes want to do symbolic mathematics, so tools exist as third-party libraries. For example, you can do symbolic maths in Python by installing the SymPy package.
Here is your answer using SymPy:
import sympy
sympy.sin(sympy.pi/3)
#> sqrt(3)/2
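If you later need a numeric value, the exact expression can be evaluated to arbitrary precision with evalf (continuing from the snippet above):
sympy.sin(sympy.pi/3).evalf(30)
#> 0.866025403784438646763723170753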
I'm working in Python with NumPy arrays of complex numbers that extend well past the normal floating-point limits of NumPy's default complex type (numbers greater than 10^500). I wanted to know if there is some way I could extend NumPy so that it can handle complex numbers of this magnitude. For example, is there a way to make a NumPy complex type that uses functionality from the Decimal module?
I know there are resources available such as mpmath that could probably do what I need. However, it is a requirement of my project that I use NumPy.
For anyone that's interested in why I need these enormous numbers, it's because I'm working on a numerical relativity simulation of the early universe.
Depending on your platform, you may have support for complex192 and/or complex256 (these are built on the platform's long double type, so availability varies). They are generally not available on Intel platforms under Windows, but they are on some others: if your code is running on Linux, Solaris, or a supercomputer cluster, one of these types may be supported. Alternatively, you can create your array with dtype=object and fill it with arbitrary-precision values such as bigfloat numbers or gmpy2.mpc.
complex256: numbers up to about 10^4932 + 10^4932j
complex192: the same range, but with less precision
I have even seen mention of NumPy and complex512...
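A minimal sketch of the dtype=object route with gmpy2 (assuming gmpy2 is installed; mpmath's mpc would work the same way):
import numpy as np
import gmpy2
gmpy2.get_context().precision = 256   # bits of precision for mpfr/mpc values
# magnitudes far beyond the ~1e308 limit of complex128
a = gmpy2.mpc(gmpy2.mpfr("1e500"), gmpy2.mpfr("2e500"))
b = gmpy2.mpc(gmpy2.mpfr("3e500"), gmpy2.mpfr("-1e500"))
arr = np.array([a, b], dtype=object)
print(arr * arr)    # elementwise ops dispatch to the objects' own operators
print(arr.sum())
Note that object arrays give up NumPy's vectorized speed: every operation falls back to a Python-level call.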
I am doing convolution operations involving some very small numbers, and am encountering a lot of underflow along the way.
You can take a look at the decimal package in the standard library. If it doesn't work for you, there are other alternatives like mpmath or bigfloat.
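For instance, a product that underflows to zero as a float stays representable as a Decimal, since the decimal context allows a far larger exponent range (a small sketch):
from decimal import Decimal
print(1e-300 * 1e-150)                         # 0.0 -- underflows as a float
print(Decimal("1e-300") * Decimal("1e-150"))   # 1E-450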
I was wondering if there is a type in NumPy that allows numbers with around 20 decimal places, besides the decimal type.
If not, do you have a suggestion for getting close to the speed I would get when performing the calculations with floats?
Thanks,
Blaise
mpmath at https://code.google.com/p/mpmath/
mpmath is a pure-Python library for multiprecision floating-point arithmetic. It provides an extensive set of transcendental functions, unlimited exponent sizes, complex numbers, interval arithmetic, numerical integration and differentiation, root-finding, linear algebra, and much more. Almost any calculation can be performed just as well at 10-digit or 1000-digit precision, and in many cases mpmath implements asymptotically fast algorithms that scale well for extremely high precision work. mpmath internally uses Python's built-in long integers by default, but automatically switches to GMP/MPIR for much faster high-precision arithmetic if gmpy is installed or if mpmath is imported from within Sage.
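A small sketch at 30 significant digits (comfortably more than the ~20 asked about):
from mpmath import mp, sin, pi
mp.dps = 30                 # working precision in decimal digits
print(sin(pi / 3))          # 0.866025403784438646763723170753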
I was looking at the Golden Ratio formula for finding the nth Fibonacci number, and it made me curious.
I know Python handles arbitrarily large integers, but what sort of precision do you get with decimals? Is it just built straight on top of a C double or something, or does it use a more accurate modified implementation too? (Obviously not with arbitrary accuracy. ;D)
Almost all platforms map Python floats to IEEE-754 “double precision”.
http://docs.python.org/tutorial/floatingpoint.html#representation-error
There's also the decimal module for arbitrary-precision floating-point math.
Python floats use the double type of the underlying C compiler. As Bwmat says, this is generally IEEE-754 double precision.
However, if you need more precision than that, you can use the Python decimal module, which was added in Python 2.4.
Python 2.6 also added the fractions module, which may be a better fit for some problems.
Both of these are going to be slower than using the float type, but that is the price for more precision.
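A quick sketch of the trade-off, using the golden ratio from the question:
from decimal import Decimal, getcontext
from fractions import Fraction
print(0.1 + 0.2)                    # 0.30000000000000004 -- float carries ~15-17 digits
getcontext().prec = 40              # ask decimal for 40 significant digits
print((1 + Decimal(5).sqrt()) / 2)  # 1.618033988749894848204586834365638117720
print(Fraction(1, 10) + Fraction(2, 10))   # 3/10 -- exact rational arithmetic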