I'm trying to find a way to perform an incomplete LU (ILU) factorization of a symbolic matrix in SymPy and cannot find anything helpful on my own. Some solvers offer ILU as a preconditioner option, but there seems to be no way to compute it on its own.
Am I missing it? Is it even possible/feasible for symbolic matrices of size 20x20 and larger?
I need this because I need to approximate the O(1) terms in the inverse of that symbolic matrix. I had luck with ILU and non-symbolic matrices, so I thought this may be the way. If this is relevant: the symbols are all binary variables, and the matrix entries are linear in them.
Update 1:
I tried to use the LU solver, but the number of variables in the matrix is much lower than the matrix dimension, so that's not an option (unless there is an efficient way to compute just the very first component of the solution vector?). I also tried a full LU decomposition with the additional simplification function
def simpfunc(E):
    # For binary variables, x**n == x, so replace every power with its base.
    E = E.replace(lambda e: e.is_Pow, lambda e: e.args[0])
    return E
which I hope is correctly formulated this way, since there seems to be no example in the documentation. The idea came from the answer to a previous question here. I could additionally provide an iszerofunc, because terms with more than n factors would automatically be zero, but I don't know how to check the degree of the terms (example: 0.5*x_0*x_1*x_2*x_4 would be zero, while 0.8*x_0*x_2*x_4 would not).
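For what it's worth, here is a minimal sketch of such an iszerofunc, using SymPy's total_degree (which counts the variable factors of a monomial) and assuming a caller-supplied cutoff n and variable list:

import sympy as sp

def make_iszerofunc(n, variables):
    # Treat an expression as zero when every one of its terms
    # contains more than n variable factors.
    def iszerofunc(e):
        if e is None:
            return None
        e = sp.expand(e)
        if e.is_zero:
            return True
        # total_degree counts the variable factors of a monomial,
        # e.g. 0.5*x0*x1*x2*x4 has total degree 4.
        return all(sp.total_degree(t, *variables) > n
                   for t in e.as_ordered_terms())
    return iszerofunc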
Related
I am new to integer optimization. I am trying to solve the following large (although not that large) binary linear optimization problem:
max_{x} x_1 + x_2 + ... + x_n
subject to: A*x <= b; x_i binary for all i = 1, ..., n
As you can see,

- the control variable is a vector x of length, say, n = 150; x_i is binary for all i = 1, ..., n;
- I want to maximize the sum of the x_i's;
- in the constraint, A is an n x n matrix and b is an n x 1 vector, so I have n = 150 linear inequality constraints.
I want to obtain a certain number of solutions, NS. Say, NS=100. (I know there is more than one solution, and there are potentially millions of them.)
I am using Google's OR-Tools for Python. I was able to write the problem and to obtain one solution. I have tried many different ways to obtain more solutions after that, but I just couldn't. For example:
I tried using the SCIP solver, and then I used the value of the objective function at the optimum, call it V, to add another constraint, x_1+x_2+...+x_n >= V, on top of the original "A*x <= b", and then used the CP-SAT solver to find NS feasible vectors (I followed the instructions in this guide). There is no optimization in this second step, just a search for feasibility. This didn't work: the solver produced NS replicas of the same vector. Still, when asked for the number of solutions found, it misleadingly reports that solution_printer.solution_count() is equal to NS. Here's a snippet of the code I used:
# Define the constraints (A and b are lists)
for j in range(n):
    constraint_expr = [int(A[j][l]) * x[l] for l in range(n)]
    model.Add(sum(constraint_expr) <= int(b[j][0]))

# Force the objective value to be at least the known optimum V
V = 112
constraint_obj_val = [-x[l] for l in range(n)]
model.Add(sum(constraint_obj_val) <= -V)

# Call the solver:
solver = cp_model.CpSolver()
solution_printer = VarArraySolutionPrinterWithLimit(x, NS)
solver.parameters.enumerate_all_solutions = True
status = solver.Solve(model, solution_printer)
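(For reference, a VarArraySolutionPrinterWithLimit along the lines of the CP-SAT guide; a sketch, not necessarily the exact class used:)

from ortools.sat.python import cp_model

class VarArraySolutionPrinterWithLimit(cp_model.CpSolverSolutionCallback):
    # Print each solution and stop the search once `limit` solutions
    # have been seen (adapted from the OR-Tools CP-SAT guide).
    def __init__(self, variables, limit):
        cp_model.CpSolverSolutionCallback.__init__(self)
        self.__variables = variables
        self.__solution_count = 0
        self.__solution_limit = limit

    def on_solution_callback(self):
        self.__solution_count += 1
        print([self.Value(v) for v in self.__variables])
        if self.__solution_count >= self.__solution_limit:
            self.StopSearch()

    def solution_count(self):
        return self.__solution_count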
I tried using the SCIP solver and then calling solver.NextSolution(), but each call produced a vector that was less optimal than the one before: the first corresponded to a value of, say, V=112 (the optimal one!); the second vector corresponded to a value of 111; the third, to 108; the fourth through sixth, to 103; etc.
My question is, unfortunately, a bit vague, but here it goes: what's the best way to obtain more than one solution to my optimization problem?
Please let me know if I'm not being clear enough or if you need more/other chunks of the code, etc. This is my first time posting a question here :)
Thanks in advance.
Is your matrix A integral? If not, you are not solving the same problem with SCIP and CP-SAT.
Furthermore, why use SCIP at all? You should solve both parts with the same solver.
Also, I believe the default solution pool implementation in SCIP returns all solutions found, in reverse order, and thus in decreasing quality.
In Gurobi, you can do something like this to get more than one optimal solution:
solver->SetSolverSpecificParametersAsString("PoolSearchMode=2"); // or-tools [Gurobi]
From the Gurobi Reference [Section 20.1]:

By default, the Gurobi MIP solver will try to find one proven optimal solution to your model. You can use the PoolSearchMode parameter to control the approach used to find solutions. In its default setting (0), the MIP search simply aims to find one optimal solution. Setting the parameter to 1 causes the MIP search to expend additional effort to find more solutions, but in a non-systematic way. You will get more solutions, but not necessarily the best solutions. Setting the parameter to 2 causes the MIP to do a systematic search for the n best solutions. For both non-default settings, the PoolSolutions parameter sets the target for the number of solutions to find.
Another way to find multiple optimal solutions could be to first solve the original problem to optimality and then add the objective function as a constraint with lower and upper bound as the optimal objective value.
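A sketch of that approach with CP-SAT, reusing A, b, n, NS, and the VarArraySolutionPrinterWithLimit callback from the question (all assumed to be defined):

from ortools.sat.python import cp_model

def build_model():
    # Rebuild the feasibility model: A*x <= b over binary x.
    model = cp_model.CpModel()
    x = [model.NewBoolVar('x[%i]' % i) for i in range(n)]
    for j in range(n):
        model.Add(sum(int(A[j][l]) * x[l] for l in range(n)) <= int(b[j][0]))
    return model, x

# Step 1: solve the original problem to optimality.
model, x = build_model()
model.Maximize(sum(x))
solver = cp_model.CpSolver()
solver.Solve(model)
V = round(solver.ObjectiveValue())

# Step 2: pin the objective value as a constraint (no objective this time)
# and enumerate feasible, hence optimal, solutions.
model, x = build_model()
model.Add(sum(x) == V)
solver = cp_model.CpSolver()
solver.parameters.enumerate_all_solutions = True
printer = VarArraySolutionPrinterWithLimit(x, NS)
solver.Solve(model, printer)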
I'm attempting to write a simple implementation of the Newton-Raphson method in Python. I've already done so using the SymPy library; however, what I'm working on now will (ultimately) end up running in an environment where only NumPy is available.
For those unfamiliar with the algorithm, it works (in my case) as follows:
I have a system of symbolic equations, which I "stack" to form a matrix F. The unknowns are X, Y, Z, T (which I wish to determine). Some additional values are initially unknown, until passed to my solver, which substitutes these known values for variables in the symbolic expressions.
Now, the Jacobian matrix (J) of F is computed. This, too, is a matrix of symbolic expressions.
Now, I iterate over some range (max_iter). With each iteration, I form a matrix A by substituting the current estimates (starting with some initial values) for the unknowns X, Y, Z, T into the Jacobian J. Similarly, I form a vector b by substituting the current estimates into F.
I then obtain new estimates by solving the matrix equation Ax = b for x. This vector x holds dT, dX, dY, dZ. I then add these to the current estimates for T, X, Y, Z and iterate again.
Thus far, my largest issue has been computing the Jacobian matrix. I only need to do this once; however, it will be different depending upon the coefficients fed to the solver (they are not unknowns, but they are only known once fed to the solver, so I can't simply hard-code the Jacobian).
While I'm not terribly familiar with NumPy, I know that it offers numpy.gradient. I'm not sure, however, that this is the same as SymPy's .jacobian; numpy.gradient estimates gradients numerically from sampled values rather than differentiating expressions.
How can the Jacobian matrix be found, either in "pure" Python, or with Numpy?
EDIT:
Should it be useful to you, more information on the problem can be found here. It can be formulated a few different ways; however, (as of now) I'm writing it as 4 equations of the form:
\sqrt{(X - x_i)^2 + (Y - y_i)^2 + (Z - z_i)^2} = c (t_i - T)
where X, Y, Z, and T are unknown.
This describes the solution to a localization problem, where we know (a) the location of n >= 4 observers in a 3-dimensional space, (b) the time at which each observer "saw" some signal, and (c) the velocity of the signal. The goal is to determine the coordinates of the signal source X,Y,Z (and, as a side effect, the time of emission, T).
Notice that I've tried (many) other approaches to solving this problem, and all leads point toward a combination of Newton-Raphson with regression.
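If SymPy is truly unavailable, one option is a finite-difference Jacobian in plain NumPy. A minimal sketch for the system above, where obs (observer positions), t (arrival times), and c are assumed inputs:

import numpy as np

def F(u, obs, t, c):
    # Residuals of sqrt((X - x_i)^2 + (Y - y_i)^2 + (Z - z_i)^2) - c*(t_i - T),
    # with u = [X, Y, Z, T], obs an (n, 3) array, and t a length-n vector.
    X, Y, Z, T = u
    return np.linalg.norm(obs - np.array([X, Y, Z]), axis=1) - c * (t - T)

def jacobian(f, u, *args, eps=1e-8):
    # Forward-difference approximation of the Jacobian of f at u.
    f0 = np.asarray(f(u, *args))
    J = np.zeros((f0.size, u.size))
    for j in range(u.size):
        du = np.zeros_like(u)
        du[j] = eps
        J[:, j] = (f(u + du, *args) - f0) / eps
    return J

# One Newton step (with exactly 4 observers; for n > 4, use np.linalg.lstsq):
# u = u + np.linalg.solve(jacobian(F, u, obs, t, c), -F(u, obs, t, c))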
The LU decomposition function provided by scipy returns a permutation matrix P
P,L,U = scipy.linalg.lu(A)
where A is a rectangular matrix. However, the size of my problem does not allow me to store P (even temporarily). I really need a function that computes a permutation vector instead (like [L,U,P] = lu(A,'vector') in Matlab). I found a LAPACK function
LU,p,info = scipy.linalg.lapack.dgetrf(A)
which seems to return a vector p, but I learned that the latter is not an actual permutation vector, since it can contain the same value twice (https://software.intel.com/en-us/forums/intel-math-kernel-library/topic/780655). I am thus looking for another function (possibly from another library) to perform this LU decomposition with pivoting. Since computation time is also very important, I don't think implementing the decomposition myself would be efficient.
Yes, it's a pivot vector, which is the standard LAPACK return. So you'll need to convert it into whatever other form you want yourself (and this is much easier than reimplementing the factorization).
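For illustration, here is a sketch of that conversion, assuming a square array A and that SciPy's wrapper returns 0-based pivot indices:

import numpy as np
from scipy.linalg import lapack

lu, piv, info = lapack.dgetrf(A)

# LAPACK's piv records "row i was swapped with row piv[i]", applied in
# order; replaying the swaps on an index array yields a true permutation.
perm = np.arange(A.shape[0])
for i, p in enumerate(piv):
    perm[i], perm[p] = perm[p], perm[i]

# Check: A[perm] should equal L @ U.
L = np.tril(lu, k=-1) + np.eye(A.shape[0])
U = np.triu(lu)
assert np.allclose(A[perm], L @ U)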
Assume that I have a square matrix M that I would like to invert.
I am trying to use the fractions mpq class within gmpy2 for the entries of my matrix M. If you are not familiar with these fractions, they are functionally similar to Python's built-in fractions module. The only problem is, no package will invert my matrix unless I take the entries out of fraction form. I require the numbers and the answers in fraction form, so I will have to write my own function to invert M.
There are known algorithms that I could program, such as Gaussian elimination. However, performance is an issue, so my question is as follows:
Is there a computationally fast algorithm that I could use to calculate the inverse of a matrix M?
Is there anything else you know about these matrices? For example, for symmetric positive definite matrices, Cholesky decomposition allows you to invert faster than the standard Gauss-Jordan method you mentioned.
For general matrix inversions, the Strassen algorithm will give you a faster result than Gauss-Jordan but slower than Cholesky.
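For concreteness, a fraction-exact Gauss-Jordan inversion over mpq entries might look like the following sketch (not tuned for performance; assumes M is invertible):

from gmpy2 import mpq

def invert_exact(M):
    # Gauss-Jordan elimination with the identity appended; all arithmetic
    # stays in mpq, so the result is exact.
    n = len(M)
    A = [list(row) + [mpq(1 if i == j else 0) for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        # Find a row with a nonzero pivot and swap it into place.
        pivot = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[pivot] = A[pivot], A[col]
        # Normalize the pivot row, then clear the column everywhere else.
        inv = 1 / A[col][col]
        A[col] = [v * inv for v in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]

# Example: invert a 2x2 matrix of exact fractions.
M = [[mpq(1, 2), mpq(1, 3)], [mpq(1, 4), mpq(1, 5)]]
print(invert_exact(M))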
It seems like you want exact results, but if you're fine with approximate inversions, then there are algorithms which approximate the inverse much faster than the previously mentioned algorithms.
However, you might want to ask yourself whether you need the entire matrix inverse for your specific application. Depending on what you are doing, it might be faster to use another matrix property. In my experience, computing the matrix inverse is an unnecessary step.
I hope that helps!
Let us assume I have a smooth, nonlinear function f: R^n -> R with a (known) maximum number of roots N. How can I find the roots efficiently? Right now I calculate the function on a grid over a preselected area, refine the grid where the function is below a predefined threshold, and continue that routine, but this does not seem very efficient: I have noticed that it is difficult to select the area correctly in advance and to define the threshold accordingly.
There are several ways to go about this, of course. SciPy is known to contain the safest and most efficient method for finding a single root, provided you know the interval that brackets it:
scipy.optimize.brentq
To find more roots starting from some estimate, you can use:
scipy.optimize.fsolve
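Hypothetical usage of both (the functions here are placeholders):

import numpy as np
from scipy.optimize import brentq, fsolve

# brentq: scalar root of cos(x) - x, bracketed by [0, 1]
root = brentq(lambda x: np.cos(x) - x, 0.0, 1.0)

# fsolve: multivariate root from an initial estimate
def g(v):
    x, y = v
    return [x**2 + y**2 - 1.0, x - y]

sol = fsolve(g, [1.0, 0.0])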
De Moivre's formula can also be used for root finding and is fairly quick in comparison to other methods (in case you would rather build your own): given a complex number

z = r(\cos\theta + i\sin\theta),

its n roots are given by:

z_k = r^{1/n} \left( \cos\frac{\theta + 2k\pi}{n} + i\sin\frac{\theta + 2k\pi}{n} \right)

where k varies over the integer values from 0 to n − 1.
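As a quick numeric illustration (here for the cube roots of 1 + 1j):

import numpy as np

z = 1 + 1j
n = 3
r, theta = abs(z), np.angle(z)
# de Moivre: the k-th n-th root has modulus r**(1/n) and
# argument (theta + 2*pi*k)/n.
roots = [r**(1.0 / n) * np.exp(1j * (theta + 2 * np.pi * k) / n)
         for k in range(n)]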
You can square the function and use global optimization software to locate all the minima inside a domain, then pick those with a value of zero.
Stochastic multistart global optimization methods with clustering are well suited to this task.
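A minimal multistart sketch of that idea with SciPy (the function, search box, number of starts, and tolerances are all placeholders):

import numpy as np
from scipy.optimize import minimize

def f(x):
    # placeholder with two isolated roots, at (1, 0) and (-1, 0)
    return ((x[0] - 1)**2 + x[1]**2) * ((x[0] + 1)**2 + x[1]**2)

rng = np.random.default_rng(0)
lo, hi = -3.0, 3.0            # search box, assumed known
roots = []
for _ in range(200):          # multistart loop
    x0 = rng.uniform(lo, hi, size=2)
    res = minimize(lambda x: f(x) ** 2, x0)
    # keep minima that are numerically zero and not near a known root
    if res.fun < 1e-8 and not any(np.allclose(res.x, r, atol=1e-3)
                                  for r in roots):
        roots.append(res.x)

print(len(roots), "distinct roots found")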