Computational cost of scipy.newton_krylov - python

I am trying to compare the computational cost of several methods of solving a nonlinear partial differential equation. One of the methods uses scipy.optimize.newton_krylov; the others use scipy.sparse.linalg.lgmres.
My first thought was to count the number of iterations of newton_krylov using the 'callback' keyword. This worked, to a point: the counter I set up counts the number of iterations made by the newton_krylov solver.
I would also like to count the number of steps it takes in each (Newton) iteration to solve the linear system. I thought I could use a similar counter with a keyword like 'inner_callback', but that gave the number of iterations the newton_krylov solver used rather than the number of steps taken to solve the inner linear system.
It is possible that I don't understand how newton_krylov is implemented and that it is impossible to separate the two types of iterations (Newton steps vs. lgmres steps), but it would be great if there were a way.
I tried posting this on SciComp Stack Exchange, but I didn't get any answers. I think the question might be too specific to SciPy to get a good answer there, so I hope that someone here can help. Thank you for your help!
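One possible way to separate the two counts (a minimal sketch, not an authoritative answer): newton_krylov is documented to forward keyword arguments carrying an inner_ prefix to the inner Krylov solver, and gmres supports a per-iteration callback, so selecting method='gmres' may let you count inner steps separately from Newton steps. The residual function below is a hypothetical toy stand-in for the PDE.

import numpy as np
from scipy.optimize import newton_krylov

newton_iters = 0
inner_iters = 0

def count_newton(x, f):
    # newton_krylov calls this once per (outer) Newton iteration
    global newton_iters
    newton_iters += 1

def count_inner(arg):
    # forwarded to gmres as its `callback`; called once per Krylov iteration
    # (what `arg` holds depends on the solver and SciPy version)
    global inner_iters
    inner_iters += 1

def residual(x):
    # hypothetical toy nonlinear system standing in for the PDE residual
    return x**3 - np.roll(x, 1) - 1.0

x0 = np.zeros(20)
sol = newton_krylov(residual, x0,
                    method='gmres',        # gmres accepts a per-iteration callback
                    callback=count_newton,
                    inner_callback=count_inner)
print(newton_iters, inner_iters)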

Related

How to obtain multiple solutions of a binary LP problem using Google's OR-Tools in Python?

I am new to integer optimization. I am trying to solve the following large (although not that large) binary linear optimization problem:
max_{x} x_1+x_2+...+x_n
subject to: A*x <= b ; x_i is binary for all i=1,...,n
As you can see:
- the control variable is a vector x of length, say, n = 150; x_i is binary for all i = 1, ..., n
- I want to maximize the sum of the x_i's
- in the constraint, A is an n x n matrix and b is an n x 1 vector, so I have n = 150 linear inequality constraints.
I want to obtain a certain number of solutions, NS. Say, NS=100. (I know there is more than one solution, and there are potentially millions of them.)
I am using Google's OR-Tools for Python. I was able to write the problem and to obtain one solution. I have tried many different ways to obtain more solutions after that, but I just couldn't. For example:
I tried using the SCIP solver, then took the value of the objective function at the optimum, call it V, and added the constraint x_1+x_2+...+x_n >= V on top of the original "Ax<=b". I then used the CP-SAT solver to find NS feasible vectors (I followed the instructions in this guide). There is no optimization in this second step, just a search for feasibility. This didn't work: the solver produced NS replicas of the same vector. Still, when asked for the number of solutions found, it misleadingly reports that solution_printer.solution_count() equals NS. Here's a snippet of the code that I used:
# Define the constraints (A and b are lists)
for j in range(n):
    constraint_expr = [int(A[j][l]) * x[l] for l in range(n)]
    model.Add(sum(constraint_expr) <= int(b[j][0]))

V = 112
constraint_obj_val = [-x[l] for l in range(n)]
model.Add(sum(constraint_obj_val) <= -V)

# Call the solver:
solver = cp_model.CpSolver()
solution_printer = VarArraySolutionPrinterWithLimit(x, NS)
solver.parameters.enumerate_all_solutions = True
status = solver.Solve(model, solution_printer)
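For completeness, VarArraySolutionPrinterWithLimit follows the standard pattern from the OR-Tools CP-SAT guide mentioned above; here is a minimal sketch of it:

from ortools.sat.python import cp_model

class VarArraySolutionPrinterWithLimit(cp_model.CpSolverSolutionCallback):
    # Prints each solution found and stops the search after a given limit.
    def __init__(self, variables, limit):
        cp_model.CpSolverSolutionCallback.__init__(self)
        self._variables = variables
        self._solution_count = 0
        self._solution_limit = limit

    def on_solution_callback(self):
        self._solution_count += 1
        print([self.Value(v) for v in self._variables])
        if self._solution_count >= self._solution_limit:
            self.StopSearch()

    def solution_count(self):
        return self._solution_count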
I tried using the SCIP solver and then calling solver.NextSolution(), but each call produced a vector that was less optimal than the one before: the first corresponded to a value of, say, V=112 (the optimal one!); the second corresponded to a value of 111; the third to 108; the fourth through sixth to 103; etc.
My question is, unfortunately, a bit vague, but here it goes: what's the best way to obtain more than one solution to my optimization problem?
Please let me know if I'm not being clear enough or if you need more/other chunks of the code, etc. This is my first time posting a question here :)
Thanks in advance.
Is your matrix A integral? If not, you are not solving the same problem with SCIP and CP-SAT.
Furthermore, why use SCIP? You should solve both parts with the same solver.
Also, I believe the default solution pool implementation in SCIP returns all solutions found, in reverse order, and thus in decreasing quality order.
In Gurobi, you can do something like this to get more than one optimal solution:
solver->SetSolverSpecificParametersAsString("PoolSearchMode=2"); // or-tools [Gurobi]
From the Gurobi Reference [Section 20.1]:
By default, the Gurobi MIP solver will try to find one proven optimal solution to your model. You can use the PoolSearchMode parameter to control the approach used to find solutions. In its default setting (0), the MIP search simply aims to find one optimal solution. Setting the parameter to 1 causes the MIP search to expend additional effort to find more solutions, but in a non-systematic way. You will get more solutions, but not necessarily the best solutions. Setting the parameter to 2 causes the MIP to do a systematic search for the n best solutions. For both non-default settings, the PoolSolutions parameter sets the target for the number of solutions to find.
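From Python, the same idea through OR-Tools' linear solver wrapper might look like this (a minimal sketch; whether the Gurobi backend is available, and the exact parameter-string syntax it accepts, depend on your OR-Tools build):

from ortools.linear_solver import pywraplp

solver = pywraplp.Solver.CreateSolver('GUROBI')  # requires a Gurobi-enabled build
if solver is not None:
    # ask Gurobi for a systematic search for the 100 best solutions
    solver.SetSolverSpecificParametersAsString('PoolSearchMode=2 PoolSolutions=100')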
Another way to find multiple optimal solutions could be to first solve the original problem to optimality and then add the objective function as a constraint, with both its lower and upper bound set to the optimal objective value.

Slight difference in objective function of linear programming makes program extremely slow

I am using the SCIP (Solving Constraint Integer Programs) solver from Google's OR-Tools to solve a mixed-integer programming problem in Python. The problem is a variant of the standard scheduling problem, with constraints ensuring that each worker works at most once per day and that every shift is covered by exactly one worker. The problem is modeled as follows (the model was given as an image in the original post), where n indexes the worker, d the day, and i the specific shift within a given day.
The problem comes when I change the objective function that I want to minimize from its original form to a new one (both forms were also shown as images). In the first case an optimal solution is found within 5 seconds. In the second case, after 20 minutes of running, the optimal solution had still not been reached. Any ideas as to why this happens?
How can I change the objective function without impacting performance this much?
Here is a sample of the values taken by the variables tier and acceptance used in the objective function (shown as an image in the original post).
You should ask the SCIP team.
Have you tried using the SAT backend with 8 threads?
The only thing I can spot from reading your post is that the objective function is no longer purely integer after adding the acceptance term. If you know that your objective is always integer, that helps during the solve, since you can round up all your dual bounds. This might be critical for your problem class.
Maybe you could also post a SCIP log (preferably with statistics) of the two runs?
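For what it's worth, switching to the SAT backend with multiple workers might look like this (a minimal sketch; num_search_workers is CP-SAT's text-format parameter name, which may differ across OR-Tools versions):

from ortools.linear_solver import pywraplp

solver = pywraplp.Solver.CreateSolver('SAT')
# run CP-SAT with 8 parallel workers
solver.SetSolverSpecificParametersAsString('num_search_workers:8')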

Long solving time for MILP and want to observe how result change with changing variable automatically

I am trying to solve a MILP problem, and it takes a relatively long time to find the optimal solution (around an hour or so). I have put a time limit in place and also changed the optimality gap (separately) to reduce the time consumed, but in those cases I cannot read the relaxed solution values, since the problem is not solved.
I need to find the values of some of the variables in the model while changing some parameters. However, since one solve takes so long, it is not efficient to change those parameters by hand after every solve (waking up in the middle of the night to change them). Is it possible, and how, to automate changing some parameters and re-solving the problem again and again until the whole parameter range I defined has been covered? (There are multiple parameters to change each time, so there are a lot of combinations.)
I have a server online 24/7, and after modifying my model into an automated version, I want to run it and collect all the results once they have all been solved.
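One way to script such a sweep (a minimal sketch using gurobipy, since the log below is Gurobi output; build_model is a hypothetical stand-in for however the model is constructed, and the parameter ranges are placeholders):

import csv
import itertools

p1_values = [10, 20, 30]          # hypothetical ranges for the parameters to vary
p2_values = [0.1, 0.5, 1.0]

with open('results.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['p1', 'p2', 'status', 'objective', 'gap'])
    for p1, p2 in itertools.product(p1_values, p2_values):
        model = build_model(p1, p2)       # hypothetical: builds a fresh gurobipy model
        model.Params.TimeLimit = 3600     # stop each run after an hour
        model.Params.MIPGap = 0.01        # or accept a 1% optimality gap
        model.optimize()
        if model.SolCount > 0:            # an incumbent exists even on timeout
            writer.writerow([p1, p2, model.Status, model.ObjVal, model.MIPGap])
        else:
            writer.writerow([p1, p2, model.Status, None, None])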
Here is part of the log of my problem, in case it gives a sense of the problem size:
Cutting planes:
MIR: 4
StrongCG: 2
Flow cover: 1
Explored 19098 nodes (6494268 simplex iterations) in 22971.46 seconds
Thread count was 32 (of 160 available processors)
Solution count 10: 2.03677e+07 4.57223e+06 4.01329e+06 ... -0
Pool objective bound 2.03677e+07
Optimal solution found (tolerance 1.00e-04)
Best objective 2.036765174668e+07, best bound 2.036765174668e+07, gap 0.0000%
If anyone is willing to help, I can also share my model. I am kind of desperate about this model because of the long solving time.
Thank you for your time!

Approximate solution to MILP with PuLP

Is it possible to get an approximate solution to a mixed-integer linear programming problem with PuLP? My problem is complex, and solving it exactly takes too long.
You probably do not mean linear programming but rather mixed-integer programming (the original question asked about LPs).
LPs usually solve quite fast, and I don't know a good way to find an approximate solution for them. You may want to try an interior-point or barrier method and set an iteration or time limit; for simplex methods this typically does not work very well.
MIP models can take a lot of time to solve. Solvers allow terminating earlier by setting a gap (gap = 0 means solving to optimality), e.g.:
model.solve(GLPK(options=['--mipgap', '0.01']))
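With PuLP's default CBC solver, a similar early stop can be requested like this (a minimal sketch; gapRel and timeLimit are the keyword names in recent PuLP versions, and `model` is assumed to already exist):

from pulp import PULP_CBC_CMD

# stop at a 1% relative gap or after 10 minutes, whichever comes first
model.solve(PULP_CBC_CMD(gapRel=0.01, timeLimit=600))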

Maximize a function with many parameters (python)

First, let me say that I lack experience with scientific math or statistics, so this might be a very well-known problem, but I don't know where to start.
I have a function f(x1, x2, ..., xn) where I need to guess the x's and find the highest value of f. The function has the following properties:
- the total number of parameters is usually around 40 to 60, so a brute-force approach is impossible;
- the possible values for each x range from 0.01 to 2.99;
- the function is steady, meaning that a higher f value indicates a better guess for the parameters and vice versa.
So far, I have implemented a pretty basic method in Python. It initially sets all parameters to 1, randomly guesses new values, and checks whether f is higher than before; if not, it rolls back to the previous values.
In a loop with 10,000 iterations this seems to work somehow, but the result is probably far from perfect.
Any suggestions on how to improve the search for the optimal parameters would be appreciated. When googling this issue, things like MCMC came up, but that seems like a very advanced method, and I would need a lot of time to even understand it.
Basic hints or concepts would help me more than elaborate methods and algorithms.
Don't do it yourself. Install SciPy and use its optimization routines; scipy.optimize.minimize looks like a good fit.
I think you want to take a look at scipy.optimize (http://docs.scipy.org/doc/scipy-0.10.0/reference/tutorial/optimize.html). A maximization is the minimization of -1 times the function.
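As a concrete starting point, the bounded maximization can be phrased as a minimization of the negated function (a minimal sketch; f below is a hypothetical stand-in for the real objective):

import numpy as np
from scipy.optimize import minimize

def f(x):
    # hypothetical objective; replace with the real function
    return -np.sum((x - 1.5) ** 2)

n = 50
x0 = np.ones(n)                   # start from all ones, as in the original approach
bounds = [(0.01, 2.99)] * n       # the stated range for each parameter
res = minimize(lambda x: -f(x), x0, method='L-BFGS-B', bounds=bounds)
print(res.x, -res.fun)            # best parameters and the corresponding f value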
