Solving a rational number Linear Programming problem in Python

I have an LP with integer constraints that I want to solve in exact arithmetic, using Python.
In fact, I only need a feasible point.
Edit: "Exact arithmetic" here means rational numbers, of unbounded enumerator and denominator.
Previous attempts:
Find exact solutions to Linear Program mentions qsoptex, but when I try to import it, I get ImportError: libqsopt_ex.so.2: cannot open shared object file: No such file or directory, although to my knowledge I have given the path to that library (see the note after this question).
SoPlex works on the console, but I could not find a Python interface.
PySCIPOpt (https://github.com/SCIP-Interfaces/PySCIPOpt) is the Python interface for SCIP, including SoPlex, but I don't see how to call a specific solver (with specific options).
cdd (https://pycddlib.readthedocs.io/en/latest/linprog.html) offers something it calls LP, but I have no idea which problem it actually solves.
Speed is only a moderate issue. My larger instances have about 500 variables with box-constraints and 40 equalities, but the numbers involved might be large.
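One thing worth checking for the qsoptex ImportError above: on Linux the dynamic linker picks up LD_LIBRARY_PATH when the Python process starts, so exporting it inside an already-running interpreter is too late. A minimal sketch of the two usual workarounds (the path /opt/qsopt_ex/lib is only a placeholder for wherever libqsopt_ex.so.2 actually lives, and whether the preload trick is enough depends on how the extension was linked):

# In the shell, before starting Python:
#   export LD_LIBRARY_PATH=/opt/qsopt_ex/lib:$LD_LIBRARY_PATH
#   python my_script.py

# Or preload the shared object explicitly before importing the wrapper:
import ctypes
ctypes.CDLL("/opt/qsopt_ex/lib/libqsopt_ex.so.2")  # raises OSError if this path is wrong
import qsoptex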

Maybe I am missing the point, but any Linear Programming task where you want a rational-number solution is in fact an Integer Programming problem: you find the LCD (least common denominator) for all of the fractional variables and work with the numerators, which you then use as integers. So it seems the problem only needs reformulation and you can get the exact solution.
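A small, related illustration of handling rational data exactly (a sketch with made-up coefficients, not the full reformulation above): a constraint row with rational coefficients can be scaled to integers by multiplying through by the LCM of the denominators, which is the kind of denominator clearing this answer relies on.

from fractions import Fraction
from math import lcm   # math.lcm needs Python 3.9+

# hypothetical constraint row: (1/2)x + (3/4)y <= 5/6
row = [Fraction(1, 2), Fraction(3, 4)]
rhs = Fraction(5, 6)

# multiply the row and right-hand side by the LCM of all denominators
scale = lcm(*(c.denominator for c in row + [rhs]))
int_row = [int(c * scale) for c in row]
int_rhs = int(rhs * scale)
print(scale, int_row, int_rhs)   # -> 12 [6, 9] 10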

Related

Getting the most accurate precision with a function equating factorial, division and squaring [duplicate]

I'm using the Decimal class for operations that require precision.
I would like to use 'largest possible' precision. With this, I mean as precise as the system on which the program runs can handle.
To set a certain precision it's simple:
import decimal
decimal.getcontext().prec = 123 #123 decimal precision
I tried to figure out the maximum precision the 'Decimal' class can compute:
print(decimal.MAX_PREC)
>> 999999999999999999
So I tried to set the precision to the maximum precision (knowing it probably won't work..):
decimal.getcontext().prec = decimal.MAX_PREC
But, of course, this throws a MemoryError (on division).
So my question is: How do I figure out the maximum precision the current system can handle?
Extra info:
import sys
print(sys.maxsize)
>> 9223372036854775807
Trying to do this is a mistake. Throwing more precision at a problem is a tempting trap for newcomers to floating-point, but it's not that useful, especially to this extreme.
Your operations wouldn't actually require the "largest possible" precision even if that was a well-defined notion. Either they require exact arithmetic, in which case decimal.Decimal is the wrong tool entirely and you should look into something like fractions.Fraction or symbolic computation, or they don't require that much precision, and you should determine how much precision you actually need and use that.
If you still want to throw all the precision you can at your problem, then how much precision that actually is will depend on what kind of math you're doing, and how many absurdly precise numbers you're attempting to store in memory at once. This can be determined by analyzing your program and the memory requirements of Decimal objects, or you can instead take the precision as a parameter and binary search for the largest precision that doesn't cause a crash.
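To make the fractions.Fraction suggestion above concrete, here is a tiny comparison (a sketch, not tied to any particular application): Fraction stays exact under division, while Decimal rounds to whatever the current context precision is.

from decimal import Decimal, getcontext
from fractions import Fraction

getcontext().prec = 50                 # 50 significant digits
d = Decimal(1) / Decimal(7) * 7        # each step rounded to the context precision
f = Fraction(1, 7) * 7                 # exact rational arithmetic
print(d == 1)                          # False (d is 0.99...98 after rounding)
print(f == 1)                          # True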
I'd like to suggest a function that lets you estimate the maximum precision for a given operation in a brute-force way:
import decimal

def find_optimum(a, b, max_iter):
    # binary search between a (known-good precision) and b (known-bad precision)
    for i in range(max_iter):
        print(i)
        c = int((a + b) / 2)
        decimal.getcontext().prec = c
        try:
            dummy = decimal.Decimal(1) / decimal.Decimal(7)  # your operation
            a = c            # c worked, search higher
            print("no fail")
        except MemoryError:
            print("fail")
            dummy = 1        # make sure dummy exists so the del below succeeds
            b = c            # c failed, search lower
        print(c)
        del dummy            # free the huge intermediate before the next iteration
This just halves the interval one step at a time and checks whether an error occurs. Calling it with max_iter=10, a=int(1e9), and b=int(1e11) gives:
>>> find_optimum(int(1e9), int(1e11), 10)
0
fail
50500000000
1
no fail
25750000000
2
no fail
38125000000
3
no fail
44312500000
4
fail
47406250000
5
fail
45859375000
6
no fail
45085937500
7
no fail
45472656250
8
no fail
45666015625
9
no fail
45762695312
This may give a rough idea of what you are dealing with. It took approximately half an hour on an i5-3470 with 16 GB of RAM, so you would really only use it for testing purposes.
I don't think there is an exact way of getting the maximum precision for your operation, as you would have to know exactly how its memory consumption depends on the precision. I hope this helps you at least a bit, and I would really like to know what you need that kind of precision for.
EDIT: I feel like this really needs to be added, since I read your comments under the top-rated post here. Using arbitrarily high precision in this manner is not the way people calculate constants. You would write something that uses disk space in a smart way (for example, calculating a batch of digits in RAM and writing that batch to a text file), but never rely on RAM/swap alone, because that will always limit your results. With modern algorithms for calculating pi, you don't need infinite RAM; you just put another 4 TB hard drive in the machine and let it write the next digits. So much for mathematical constants.
Now for physical constants: they are not exact. They rely on measurement. I'm not quite sure at the moment (will edit), but I think the most precisely known physical constant has a relative error of about 10**(-8). Throwing more precision at it doesn't make it more exact; you just calculate more wrong digits.
As an experiment though, this was a fun idea, which is why I even posted the answer in the first place.
The maximum precision of the Decimal class is a function of the memory on the device, so there's no good way to set it for the general case. Basically, you're allocating all of the memory on the machine to one variable to get the maximum precision.
If the mathematical operation supports it, long integers will give you unlimited precision. However, you are limited to whole numbers.
Addition, subtraction, multiplication, and simple exponents can be performed exactly with long integers.
Prior to Python 3, the built-in long data type would perform arbitrary precision calculations.
https://docs.python.org/2/library/functions.html#long
In Python >=3, the int data type now represents long integers.
https://docs.python.org/3/library/functions.html#int
One example of 64-bit integer math in practice is bitcoind, where transaction calculations require exact values. However, the precision of Bitcoin transactions is limited to 1 "Satoshi"; each Bitcoin is defined as 10^8 (integer) Satoshi.
The Decimal class works similarly under the hood. A Decimal precision of 10^-8 is similar to the Bitcoin-Satoshi paradigm.
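A minimal sketch of that fixed-point idea, i.e. storing amounts as integers in the smallest unit so that addition and comparison stay exact (sign handling and input validation are omitted for brevity):

SATOSHI_PER_BTC = 10**8                # 1 Bitcoin = 10^8 satoshi, as stated above

def btc_to_satoshi(text):
    # parse a decimal string exactly, without going through float
    whole, _, frac = text.partition(".")
    frac = (frac + "0" * 8)[:8]        # pad/truncate to 8 fractional digits
    return int(whole) * SATOSHI_PER_BTC + int(frac)

a = btc_to_satoshi("0.10")
b = btc_to_satoshi("0.20")
print(a + b == btc_to_satoshi("0.30"))   # True, unlike 0.1 + 0.2 == 0.3 with floats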
From your reply above:
What if I just wanted to find more digits in pi than already found? What if I wanted to test the irrationality of e or Mills' constant?
I get it. I really do. My one SO question, several years old, is about arbitrary-precision floating point libraries for Python. If those are the types of numerical representations you want to generate, be prepared for the deep dive. Decimal/FP arithmetic is notoriously tricky in Computer Science.
Some programmers, when confronted with a problem, think “I know, I’ll use floating point arithmetic.” Now they have 1.999999999997 problems. – #tomscott
I think when others have said it's a "mistake" or "it depends" to wonder what the max precision is for a Python Decimal type on a given platform, they're taking your question more literally than I'm guessing it was intended. You asked about the Python Decimal type, but if you're interested in FP arithmetic for educational purposes -- "to find more digits in pi" -- you're going to need more powerful, more flexible tools than Decimal or float. These built-in Python types don't even come close. Those are good enough for NASA maybe, but they have limits... in fact, the very limits you are asking about.
That's what multiple-precision (or arbitrary-precision) floating point libraries are for: arbitrarily-precise representations. Want to compute pi for the next 20 years? Python's Decimal type won't even get you through the day.
The fact is, multi-precision binary FP arithmetic is still kinda fringe science. For Python, you'll need to install the GNU MPFR library on your Linux box, then you can use the Python library gmpy2 to dive as deep as you like.
Then, the question isn't, "What's the max precision my program can use?"
It's, "How do I write my program so that it'll run until the electricity goes out?"
And that's a whole other problem, but at least it's restricted by your algorithm, not the hardware it runs on.
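For example, a minimal gmpy2 sketch (assuming gmpy2 and the underlying MPFR library are installed; note that the precision is given in bits, not decimal digits, and the numbers here are only illustrative):

import gmpy2

gmpy2.get_context().precision = 10000    # about 3000 decimal digits
pi = gmpy2.const_pi()                    # pi correctly rounded to the current precision
print(str(pi)[:50])                      # first characters of the decimal expansion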

Is there a way of setting a default precision that differs from double in Python?

I'm aware of Decimal, however I am working with a lot of code written by someone else, and I don't want to go through a large amount of code to change every initialization of a floating point number to Decimal. It would be more convenient if there was some kind of package where I could put SetPrecision(128) or such at the top of my scripts and be off to the races. I suspect no such thing exists but I figured I would ask just in case I'm wrong.
To head off XY Problem comments, I'm solving differential equations which are supposed to be positive invariant, and one quantity which has an equilibrium on the order of 1e-12 goes negative regardless of the error tolerance I specify (using scipy's interface to LSODA).
Yes, but no.
From the bigfloat documentation:
"The bigfloat package is a Python wrapper for the GNU MPFR library for arbitrary-precision floating-point reliable arithmetic. The MPFR library is a well-known portable C library for arbitrary-precision arithmetic on floating-point numbers. It provides precise control over precisions and rounding modes and gives correctly-rounded reproducible platform-independent results."
https://pythonhosted.org/bigfloat
You would then need to coerce the builtin float to be bigfloat everywhere, which would likely be non-trivial.
LSODA exposed through scipy.integrate is double precision only.
You might want to look into some rescaling of variables, so that the quantity which is of order 1e-12 becomes closer to unity; see the sketch below.
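A sketch of what such a rescaling can look like (the ODE here is a made-up stand-in, not your actual system): substitute y = 1e-12 * z, integrate the equation for z, which is of order one, and convert back at the end.

import numpy as np
from scipy.integrate import solve_ivp

SCALE = 1e-12                      # the small equilibrium scale mentioned above

def rhs_y(t, y):
    # original form, for comparison only: decay towards an equilibrium of order 1e-12
    return -(y - SCALE)

def rhs_z(t, z):
    # the same equation in the scaled variable z = y / SCALE
    return -(z - 1.0)

sol = solve_ivp(rhs_z, (0.0, 10.0), [5.0], method="LSODA", rtol=1e-10, atol=1e-12)
y = SCALE * sol.y                  # convert back to the original variable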
EDIT. In the comments, you indicated
As I've stated three times, I am open to rewriting to avoid LSODA
Then what you can try is to look over the code of solve_ivp, which is pure Python. Feed it decimals or mpmath high-precision floats. Observe where it fails, and look for where it assumes double precision. Rewrite, remove that assumption. Rinse and repeat. Whether it'll work in the end, I don't know. Whether it's worth it, I suspect not, but YMMV.

How to avoid numerical overflow error with exponential terms?

Dealing with numerical calculations that have exponential terms often becomes painful, thanks to overflow errors. For example, suppose you have a probability density P(x)=C*exp(f(x)/k), where k is a very small number, say of the order of 10^(-5).
To find the value of C one has to integrate P(x). Here comes the overflow error. I know it also depends on the form of f(x), but for the moment let us assume f(x)=sin(x).
How to deal with such problems?
What are the tricks we may use to avoid them?
Is the severity of such problems language dependent? If yes, in which language should one write his code?
As I mentioned in the comments, I strongly advise using analytical methods as far as you can. However, If you want to compute integrals of the form
I=Integral[Exp[f(x)],{x,a,b}]
Where f(x) could potentially overflow the exponent, then you might want to renormalize the system a bit in the following way:
Assume that f(c) is the maximum of f(x) in the domain [a,b], then you can write:
I=Exp[f(c)] Integral[Exp[f(x)-f(c)],{x,a,b}]
It's an ugly trick, but at least your exponents will be small in the integral.
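A sketch of that trick in Python for the example from the question, f(x) = sin(x) with k = 1e-5, so the exponent sin(x)/k reaches 1e5 and exp of it overflows a double. The idea is to work with log I and only exponentiate at the end if you really need I itself:

import numpy as np
from scipy.integrate import quad

k = 1e-5
a, b = 0.0, np.pi
f = np.sin                          # f(x) = sin(x); its maximum on [0, pi] is at c = pi/2
fc = f(np.pi / 2)                   # f(c) = 1, so exp(f(c)/k) = exp(1e5) would overflow

# the renormalized integrand exp((f(x) - f(c))/k) is at most 1, so it never overflows
val, err = quad(lambda x: np.exp((f(x) - fc) / k), a, b, points=[np.pi / 2])

log_I = fc / k + np.log(val)        # log of Integral[exp(f(x)/k), {x, a, b}]
print(log_I)                        # the normalization constant is C = exp(-log_I)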
Note: I just realized this is the same as roygvib's comment.
One of the options is to use GSL - GNU Scientific Library (python and fortran wrappers are available).
There is a function gsl_sf_exp_e10_e that, according to the documentation,
computes the exponential \exp(x) using the gsl_sf_result_e10 type to return a result with extended range. This function may be useful if the value of \exp(x) would overflow the numeric range of double.
However, I would like to note that it is slow due to additional checks during evaluation.
P.S. As was said earlier, it's better to use analytical solutions where possible.

PuLP programming output

I am working with a collaborator on a certain optimization project involving linear programming. We both use Coin-OR branch-and-cut solver to solve the problem. I construct the .LP file using Python-based PuLP package. I am not entirely sure how the collaborator creates their .LP file (definitely not using Python), but essentially, we have two different systems generating .LP files for the exact same problem - i.e. the objective function, variables, constraints are exactly the same.
I typically solve my problem within Python (myProblm.solve()), but I have also been generating a .LP file and calling the CBC solver from the command line to solve this file (problem). Next, I compare the output I get from my system (either Python or command line) to what my collaborator obtains. [Please note that the output of the problem on my side is exactly the same whether solved in PuLP or on the command line.]
The values of the objective function match well between us, but the other details do not exactly match. For example, if we were to solve this Whiskas blending problem, the total cost of ingredients would be exactly same, but the ratios of ingredients differ. Any idea why that would be?
I manually compared our .LP files and noticed a few differences. For starters, the sequence of constraints and variables is different. In other words, if there are 5 constraints, my file lists them as C1,C2,C5,C4,C3, whereas the same constraints will be listed as C1,C2,C3,C4,C5. Also, my Python code rounds all numbers to 10's place, while his system rounds them to 1's place. Hence, the coefficients of some of the variables have slightly different values.
Do these differences play a role in the exact output of the solver?
Also, the next question by extension is: What should we do to get the exact same output when solving a linear programming optimization problem? Which factors influence the solution of LP problems? Do factors like the structure of an .LP file play a role? Will I get the exact same output if I run the same LP file with exact same conditions on different computers?
Because there can be multiple solutions to an LP problem with the same optimal objective function value, different solvers can't guarantee that they will return the same solution. This issue becomes even more complicated when MIP problems use branch and bound. Using multi-threading or multiprocessing makes it almost impossible.
In summary, to get the same solution, either generate exactly the same LP files and solve them with the same solver, or change your objective function so that there is only one optimal solution (perhaps prefer some ordering of ingredients, via a small change to the costs for the ingredients, as in the sketch below).
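A hedged sketch of that second suggestion in PuLP terms (the ingredient names, costs and the single constraint are made up, not the real Whiskas model): add a tiny, strictly increasing perturbation to the costs so that ties between alternative optima are broken the same way on both systems. The perturbation has to stay far below the real cost differences so it cannot change which blends are optimal.

import pulp

ingredients = ["CHICKEN", "BEEF", "MUTTON", "RICE", "WHEAT", "GEL"]
cost = {"CHICKEN": 0.013, "BEEF": 0.008, "MUTTON": 0.010,
        "RICE": 0.002, "WHEAT": 0.005, "GEL": 0.001}      # example data only

prob = pulp.LpProblem("whiskas_tie_broken", pulp.LpMinimize)
x = pulp.LpVariable.dicts("Ingr", ingredients, lowBound=0)

eps = 1e-7   # tie-breaking weight, far smaller than any real cost difference
prob += pulp.lpSum((cost[i] + eps * rank) * x[i]
                   for rank, i in enumerate(ingredients))

prob += pulp.lpSum(x[i] for i in ingredients) == 100      # blend to 100 units
prob.solve()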

Approximate solution to MILP with PuLP

Is it possible to get an approximate solution to a mixed integer linear programming problem with PuLP? My problem is complex and the exact resolution takes too long.
You probably do not mean Linear Programming but rather Mixed Integer Programming. (The original question asked about LPs).
LPs usually solve quite fast, and I don't know a good way to find an approximate solution for them. You may want to try an interior-point or barrier method and set an iteration or time limit. For simplex methods this typically does not work very well.
MIP models can take a lot of time to solve. Solvers allow you to terminate early by setting a gap (gap = 0 means solving to optimality), e.g.
model.solve(GLPK(options=['--mipgap', '0.01']))
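If you are using the bundled CBC solver rather than GLPK, its command wrapper takes similar options; the exact keyword names have changed between PuLP versions (older releases used fracGap and maxSeconds, newer ones gapRel and timeLimit), so treat this as a sketch to adapt to your version:

import pulp

# stop when within 1% of the best bound, or after 60 seconds, whichever comes first
solver = pulp.PULP_CBC_CMD(gapRel=0.01, timeLimit=60)
model.solve(solver)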
