With Python's math library, an expression for an irrational number like
math.sin(math.pi/3)
will return a decimal approximation.
I've been looking at the documentation for a way to keep it at the exact value (i.e. returning sqrt(3)/2 instead of 0.8660254037844386), but I haven't found anything. I'm pretty new to Python, so I apologize if I missed it.
Symbolic mathematics is not something that most general-purpose programming languages do. Python works with integers and (real and complex) floating point values. For example the symbol math.pi is not a representation of the mathematical constant π, but just a Python variable containing a finite-precision floating point approximation of π.
However, some people do sometimes want to do symbolic mathematics, so tools exist as third-party libraries. For example, you can do symbolic maths in Python by installing the SymPy package.
Here is your answer using SymPy:
import sympy
sympy.sin(sympy.pi/3)
#> sqrt(3)/2
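If you later need a numeric approximation of the exact symbolic value, SymPy can evaluate it on demand (assuming the sympy package is installed):

```python
import sympy

exact = sympy.sin(sympy.pi / 3)  # stays symbolic: sqrt(3)/2
print(exact)
print(exact.evalf())             # decimal approximation, only when asked for
```

The expression stays exact until you explicitly call evalf(), so no precision is lost in intermediate steps.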
math is a Python module many use for more advanced mathematical functions, alongside the decimal module. With plain floats, 1.2 - 1.1 = 0.0999..., but using the Decimal type the result is exactly 0.1.
My problem is that these two modules don't work well with each other. For example, log(1000, 10) = 2.9999..., and passing in a Decimal gives the same result. How can I make these two work together? Do I need to implement my own functions, or is there another way?
You have Decimal.log10, Decimal.ln and Decimal.logb methods on each Decimal instance, and many more (max, sqrt):
from decimal import Decimal
print(Decimal('0.001').log10())
# Decimal('-3')
print(Decimal('0.001').ln())
# Decimal('-6.907755278982137052053974364')
There are no trigonometry functions, though.
A more advanced alternative for arbitrary-precision calculations is mpmath, available on PyPI. Decimal won't provide you with exact sin or tan either, of course, because sin(x) is irrational for most x values, so storing the exact value as a Decimal doesn't make sense. However, at a fixed precision you can compute these functions via Taylor series etc., with mpmath's help. You can also do Decimal(math.sin(0.17)) to get a Decimal holding something close to the sine of 0.17 radians, limited to double precision.
Also refer to the official Decimal recipes.
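As a sketch of the fixed-precision Taylor-series idea using only the stdlib (dec_sin is a hypothetical helper name, not part of the decimal module):

```python
from decimal import Decimal, getcontext

def dec_sin(x, prec=28):
    """Approximate sin(x) with a Taylor series in Decimal arithmetic."""
    getcontext().prec = prec + 2          # a couple of guard digits
    x = Decimal(x)
    term = x                              # first term: x / 1!
    total = x
    n = 1
    while abs(term) > Decimal(10) ** -(prec + 1):
        # next odd-order term: multiply by -x^2 / ((n+1)(n+2))
        term *= -x * x / ((n + 1) * (n + 2))
        total += term
        n += 2
    getcontext().prec = prec
    return +total                         # unary plus rounds to current precision

print(dec_sin('0.17'))
```

Passing x as a string keeps the input exact; the loop stops once the terms drop below the requested precision.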
There is a log10 method in Decimal.
from decimal import Decimal
a = Decimal('1000')
print(a.log10())
# Decimal('3')
However, if you're trying to solve logarithms with exact integer solutions, it would make more sense to use a function that finds them directly, since logarithm functions are generally expected to return an irrational result in typical usage. You could instead use a loop and repeated division.
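A sketch of that repeated-division approach (int_log is a hypothetical helper name):

```python
def int_log(n, base):
    """Exact integer logarithm by repeated division.
    Returns k if base**k == n exactly, else None."""
    if n < 1 or base < 2:
        raise ValueError("need n >= 1 and base >= 2")
    k = 0
    while n % base == 0:
        n //= base
        k += 1
    return k if n == 1 else None

print(int_log(1000, 10))  # 3
```

Because everything stays in integer arithmetic, there is no rounding error at all: you either get the exact exponent or None.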
I'm aware of Decimal, however I am working with a lot of code written by someone else, and I don't want to go through a large amount of code to change every initialization of a floating point number to Decimal. It would be more convenient if there was some kind of package where I could put SetPrecision(128) or such at the top of my scripts and be off to the races. I suspect no such thing exists but I figured I would ask just in case I'm wrong.
To head off XY Problem comments, I'm solving differential equations which are supposed to be positive invariant, and one quantity which has an equilibrium on the order of 1e-12 goes negative regardless of the error tolerance I specify (using scipy's interface to LSODA).
Yes, but no.
"The bigfloat package is a Python wrapper for the GNU MPFR library for arbitrary-precision floating-point reliable arithmetic. The MPFR library is a well-known portable C library for arbitrary-precision arithmetic on floating-point numbers. It provides precise control over precisions and rounding modes and gives correctly-rounded reproducible platform-independent results."
https://pythonhosted.org/bigfloat
You would then need to coerce the builtin float to be bigfloat everywhere, which would likely be non-trivial.
LSODA exposed through scipy.integrate is double precision only.
You might want to look into some rescaling of variables, so that the quantity which is of order 1e-12 becomes closer to unity.
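The rescaling idea can be illustrated on a toy decay equation (a hypothetical example just to show the substitution; k, S, f_y and f_z are made-up names, not the asker's actual system):

```python
# Hypothetical system: y' = -k*y with y of order 1e-12.
# Substituting z = y / S (with scale S = 1e-12) gives z' = f_y(S*z) / S,
# so z evolves near unity instead of near the double-precision noise floor.
S = 1e-12
k = 2.0

def f_y(y):
    """Original right-hand side."""
    return -k * y

def f_z(z):
    """Rescaled right-hand side: dz/dt = f_y(S*z) / S."""
    return f_y(S * z) / S

# One explicit Euler step in each formulation gives consistent results.
y0, dt = 3e-12, 1e-3
z0 = y0 / S
y1 = y0 + dt * f_y(y0)
z1 = z0 + dt * f_z(z0)
assert abs(z1 * S - y1) < 1e-24
```

The solver then works with numbers near 1 instead of near 1e-12, which makes absolute error tolerances meaningful again.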
EDIT. In the comments, you indicated
As I've stated three times, I am open to rewriting to avoid LSODA
Then what you can try doing is to look over the code of solve_ivp, which is pure python. Feed it with decimals or mpmath high-precision floats. Observe where it fails, look for where it assumes double precision. Rewrite, remove this assumption. Rinse and repeat. Whether it'll work in the end, I don't know. Whether it's worth it, I suspect not, but YMMV.
The python floating point docs (eg https://docs.python.org/3/tutorial/floatingpoint.html) state
Interestingly, there are many different decimal numbers that share the same nearest approximate binary fraction. For example, the numbers 0.1 and 0.10000000000000001 and 0.1000000000000000055511151231257827021181583404541015625 are all approximated by 3602879701896397 / 2 ** 55. Since all of these decimal values share the same approximation, any one of them could be displayed while still preserving the invariant eval(repr(x)) == x.
Historically, the Python prompt and built-in repr() function would choose the one with 17 significant digits, 0.10000000000000001. Starting with Python 3.1, Python (on most systems) is now able to choose the shortest of these and simply display 0.1.
Is there a way I can get that shortest representation as a decimal.Decimal (or other exact representation)?
Obviously one way would be decimal.Decimal(repr(0.1)) but I'm wondering if there is something explicit that doesn't rely on the vague "on most systems" caveat and possibly is available as a package that would work with earlier version of python.
(Also, functions that do this in other languages may be of interest if there is nothing in python, as this is really a general floating point question)
The following may be useful:
Python provides PyOS_double_to_string, but the code comment says this is only used if _Py_dg_dtoa is not available.
_Py_dg_dtoa is available in the source, but I'm not sure if it can be accessed publicly. (In particular, it would be nice if there were a way to 'prove' that the Python interpreter we are using relies on it internally.) This function has detailed documentation about the way the string conversion is done, and it explains which flags to use to get the shortest decimal representation. If this were available from the interpreter it might be the best option, as using it directly would give certainty about what was happening.
The implementation of Errol can be found on git and the paper found here has references to many other implementations along with a summary of their pros and cons.
Any one of the float -> string methods mentioned above would be a reasonable way to proceed, as would the standard repr method, but it will require digging into the implementation details to be sure that a particular method guarantees to give you what you are looking for.
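The repr round trip from the question can be checked directly against the exact stored value:

```python
from decimal import Decimal

x = 0.1
# the exact binary value the float actually stores
print(Decimal(x))
# the shortest round-tripping decimal string (Python >= 3.1 repr)
shortest = Decimal(repr(x))
print(shortest)
# the invariant from the docs: the short form still round-trips
assert float(shortest) == x
```

Decimal(x) converts the float exactly (all 55 binary digits), while Decimal(repr(x)) captures the shortest string that maps back to the same float.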
I have an LP with integer constraints that I want to solve in exact arithmetic, using Python.
In fact, I only need a feasible point.
Edit: "Exact arithmetic" here means rational numbers, with unbounded numerator and denominator.
Previous attempts:
Find exact solutions to Linear Program mentions qsoptex, but when I try to import it, I get ImportError: libqsopt_ex.so.2: cannot open shared object file: No such file or directory, although to my knowledge I gave the path to that library.
SoPlex works on the console, but I could not find a Python interface.
PySCIPOpt (https://github.com/SCIP-Interfaces/PySCIPOpt) is the Python interface for SCIP, including SoPlex, but I don't see how to call a specific solver (with specific options).
cdd (https://pycddlib.readthedocs.io/en/latest/linprog.html) does something, calling it LP, but I have no idea which problem they actually solve.
Speed is only a moderate issue. My larger instances have about 500 variables with box-constraints and 40 equalities, but the numbers involved might be large.
Maybe I am missing the point, but any linear programming task where you want a rational-number solution is in fact an integer programming problem: find the LCD (least common denominator) of all the fractional variables and work with the numerators as integers. So it seems the problem only needs reformulation and you can get the exact solution.
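For the equality constraints at least, exact rational arithmetic is straightforward with the stdlib fractions module. Here is a sketch of Gaussian elimination over the rationals (solve_exact is a hypothetical helper; it assumes a square nonsingular system and ignores the box constraints, so it is not a full LP solver):

```python
from fractions import Fraction

def solve_exact(A, b):
    """Solve A x = b exactly over the rationals by Gauss-Jordan elimination."""
    n = len(A)
    # build the augmented matrix with exact Fraction entries
    M = [[Fraction(v) for v in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        # find a nonzero pivot and move it into place
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        inv = M[col][col]
        M[col] = [v / inv for v in M[col]]
        # eliminate this column from every other row
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [row[-1] for row in M]

# 2x + 3y = 1, x - y = 1  ->  exact rational solution
print(solve_exact([[2, 3], [1, -1]], [1, 1]))  # [Fraction(4, 5), Fraction(-1, 5)]
```

Since Fraction never rounds, the result is exact no matter how large the numerators and denominators grow, at the cost of speed.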
I have a question about Python's support for exact mathematics. As far as I know, calculations in Python stop being precise enough once the operands get large enough.
Is there some kind of unlimited precision support in Python the language itself, rather than importing a library like numpy? I'm asking for something like BigDecimal in Java, which supports unlimited precision of decimal calculation.
For the performance of native decimal.Decimal and fractions.Fraction once harnessed inside numpy dense-matrix computations (sparse-matrix estimated), see this quantitative comparison: https://stackoverflow.com/a/26248202/3666197
Have you looked at the decimal module? https://docs.python.org/2/library/decimal.html
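For integers, no import is needed at all: Python's built-in int is already arbitrary-precision. For decimals, the stdlib decimal module plays the role of Java's BigDecimal:

```python
from decimal import Decimal, getcontext

# built-in int never overflows, no library required
print(2 ** 200)

# Decimal precision is configurable, like BigDecimal's MathContext
getcontext().prec = 50
print(Decimal(1) / Decimal(7))
```

Note that float remains fixed double precision; only int and Decimal (and fractions.Fraction) grow as needed.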