Linear programming (optimization) with PuLP - Python

I would like to ask you something regarding a linear program for optimization. I already successfully set up the model. However, I have problems in setting up a metaheuristic to reduce the computation time. The basic optimization model can be seen here:
In the metaheuristic algorithm there is a while loop with a condition as follows:
while $ \sum_{i=1}^I b_i y_i \leq \sum_{k=1}^K q_k $ do
I tried to realize this condition with the following code:
while lpSum(b[i] * y[i] for i in I) <= lpSum(q[k] for k in K):
If I calculate the two sums separately I get the right results for both. However, when I put them into this condition, the code runs into an endless loop, even when the condition is fulfilled and it should break the loop. I guess it has to do with the data type, and that the argument can't be an LpAffineExpression. However, I am really struggling to understand this problem.
I hope you understood my problem and I would really appreciate your ideas and explanations a lot! Please tell me if you need more information on something specific - sorry for being a beginner.
Thanks a lot in advance and best regards,
Bernhard

An lpSum does not have a value the way a regular sum does.
Any Python object can be compared to other objects through comparison methods such as __eq__; that is how I can say date(2000, 1, 1) < date(2000, 1, 2). However, LpAffineExpressions (which lpSums are) are meant to be used in constraints. Their contents are variables, which are solved by the LP solver, so they do not yet have any values.
Thus the return value of lpSum(x) <= lpSum(y) is not True or False, as with a normal comparison, but a constraint object. And a constraint object is not None, or False, or any other falsey value. What you wrote is equivalent to while <some object>:, which is always true. Hence your infinite loop.
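A tiny illustration of that point (a minimal sketch, assuming a plain PuLP install; the variable names are made up):
from pulp import LpVariable, lpSum
x = LpVariable("x")
y = LpVariable("y")
cond = lpSum([x]) <= lpSum([y])
print(type(cond))  # a PuLP constraint object, not a bool
# Using cond in "while cond:" only tests the object's truthiness;
# it never checks whether the solved values satisfy the inequality.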
I don't know what "using a metaheuristic to reduce computation time" implies in this context - maybe you run a few iterations of the LP solver and then employ your metaheuristic on the result.
If that is the case, use b[i].value() to get the value the variable b[i] was given in that solution, and be sure to compute the total in a regular sum.
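For example, a minimal sketch of that approach, assuming the model has already been solved once and y[i] are LpVariables while b[i] and q[k] are plain numbers (call .value() on whichever of them are actually variables):
lhs = sum(b[i] * y[i].value() for i in I)  # a plain float, not an LpAffineExpression
rhs = sum(q[k] for k in K)
while lhs <= rhs:
    # ... one metaheuristic step, re-solve the LP ...
    lhs = sum(b[i] * y[i].value() for i in I)  # refresh with the new solution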

Related

Is it possible to model a min-max-problem using pyomo

Is it possible to formulate a min-max optimization problem of the following form in pyomo:
$\min_x \max_m g_m(x) \quad \text{s.t.} \quad L$
where the g_m are nonlinear functions (actually constraints of another model) and L is a set of linear constraints?
How would I create the expression for the objective function of the model?
The problem is that using max() on a list of constraint objects returns only the constraint possessing the maximum value at a given point.
I think yes, but unless you find a clever way to reformulate your model, it might not be very efficient.
You could solve one subproblem for each candidate maximizer of max(g_m(x)), then select the solution with the lowest objective function value.
I fear that the max operation is not something you can add to a minimization model, since it is not a mathematical operation, but a solver operation. This operation is on the problem level. Keep in mind that when solving a model, Pyomo accepts only one sense of optimization (min or max), thus making it unable to understand a min-max sense. Even if it did, how could it know what to maximize or minimize? This is why I suggest you break your problem in two, unless you rework its formulation.
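For what it's worth, the usual reformulation the first answer alludes to is the epigraph trick: introduce an auxiliary variable t, minimize t, and require t >= g_m(x) for every m. A minimal sketch under those assumptions (the model, index set, and g below are illustrative placeholders, not from the question):
import pyomo.environ as pyo
model = pyo.ConcreteModel()
model.M = pyo.RangeSet(1, 3)          # index set for the g_m
model.x = pyo.Var(bounds=(-10, 10))
model.t = pyo.Var()                   # epigraph variable
def g(m, x):                          # placeholder nonlinear g_m
    return (x - m) ** 2
model.epi = pyo.Constraint(model.M, rule=lambda mdl, m: mdl.t >= g(m, mdl.x))
model.obj = pyo.Objective(expr=model.t, sense=pyo.minimize)
# add the linear constraints L here, then solve with a nonlinear solver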

Tolerance for termination in Nelder-Mead optimization

I am trying to optimize a certain function using the Nelder-Mead method and I need help understanding some of the arguments. I am fairly new to the world of numerical optimizations so, please, forgive my ignorance of what might be obvious to more experienced users. I note that I already looked at minimize(method=’Nelder-Mead’) and at scipy.optimize.minimize but it was not of as much help as I would have hoped. I am trying to optimize function $f$ under two conditions: (i) I want the optimization to stop once the $f$ value is below a certain value and (ii) once the argument is around the optimal value, I don't want the optimizer to increase the step again (i.e., once it gets below the threshold value and stays below for a couple of iterations, I would like the optimization to terminate). Here is the optimization code I use:
scipy.optimize.minimize(fun=f, x0=init_pos, method="nelder-mead",
                        options={"initial_simplex": simplex,
                                 "disp": True, "maxiter": 25,
                                 "fatol": 0.50, "adaptive": True})
where f is my function ($f : \mathbb{R} \times \mathbb{R} \to [0, \sqrt{2})$). I understand that x0=init_pos is the initial point for the search, "initial_simplex": simplex is the initial triangle (in my 2D case), and "maxiter": 25 means that the optimizer will run at most 25 iterations before terminating.
Here are things I do not understand/I am not sure about:
The documentation says "fatol: Absolute error in func(xopt) between iterations that is acceptable for convergence." Since the optimal value for my function is f(xopt)=0, does "fatol": 0.50 mean that the optimization will terminate once f(x) reaches a value of 0.5 or less? If not, how do I modify the condition for termination (in my case, how do I make sure it stops once f(x) <= 0.5)? I am ok if the optimizer runs a few more iterations around the region giving < 0.5, but right now it tends to jump out of the near-optimal region in a completely random way and I would like to be able to prevent that (if possible).
Likewise, as far as I understand, "xatol: Absolute error in xopt between iterations that is acceptable for convergence." means that the optimization will terminate once the difference between the optimal and the present arguments is at most xatol. Since in principle I do not know a priori what xopt is, does it mean in practice that the optimizer will stop once |x_n - x_(n+1)| < xatol? If not, is there a way of adding a constraint to stop the function once it is near the optimal point?
I will appreciate if someone can either answer or give me a better reference than the SciPy documentation.
fatol: this condition stops the algorithm as soon as |f(x_n) - f(x_(n+1))| < fatol between successive iterations.
xatol: likewise, this condition stops the algorithm as soon as |x_n - x_(n+1)| < xatol.
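Neither criterion compares f(x) to an absolute target, so stopping as soon as f drops below 0.5 has to be done by hand. One hedged workaround (not part of the answer above, and it assumes f, init_pos and simplex exist as in the question) is to wrap the objective and raise an exception once the target is reached:
from scipy.optimize import minimize

class GoodEnough(Exception):
    def __init__(self, x, fx):
        self.x, self.fx = x, fx

def f_wrapped(x):
    fx = f(x)
    if fx <= 0.5:
        raise GoodEnough(x, fx)  # abort as soon as the target is reached
    return fx

try:
    res = minimize(f_wrapped, x0=init_pos, method="nelder-mead",
                   options={"initial_simplex": simplex, "fatol": 0.50,
                            "maxiter": 25, "adaptive": True})
    best_x, best_f = res.x, res.fun
except GoodEnough as hit:
    best_x, best_f = hit.x, hit.fx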

Solving an equation iteratively on Python

The question is:
Determine at which point the function $y = x e^{x^2}$ crosses the unit circle in the x-positive, y-positive quadrant.
Rewrite the problem as a fixed-point problem, i.e. in the form $x = F(x)$.
This equation can be solved iteratively: $x_n = F(x_{n-1})$.
Implement the above equation in a function fixpoint that takes as arguments the initial guess x0 and the tolerance tol and returns the sequence xn of approximations to x.
I'm very new to python, and I've rewritten the equations as
xn=1/(np.sqrt(1+np.exp(2(x0)**2)))
and created a function, but I'm genuinely not too sure how to go about this.
It genuinely is not our issue if you don't understand the language or the problem you are trying to solve.
This looks like homework. You don't learn anything if somebody here does it for you.
Try to solve an iteration or two by hand with calculator, pencil, and paper before you program anything.
Your first equation looks wrong to me.
xn=1/(np.sqrt(1+np.exp(2*(x0)**2)))
I don't know if you forgot a multiplication sign inside the argument of the exponential function. You should check.
I would prefer x0*x0 to x0**2. Personal taste.
I would expect to see an equation that would take in x(n) and return x(n+1). Yours will never use the new value x(n) to get x(n+1). You're stuck with x(0) as written.
I would expect to see a loop where the initial value of x(n) is x(0). Inside the loop I'd calculate x(n+1) from x(n) and check to see if it's converged to a desired tolerance. If it has, I'd exit the loop. If it has not, I'd update x(n) to equal x(n+1) and loop again.
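A minimal sketch of the loop described above, using the asker's own fixed-point map with the multiplication sign added (function and argument names are illustrative):
import numpy as np

def F(x):
    # fixed-point map from intersecting y = x*exp(x**2) with x**2 + y**2 = 1
    return 1.0 / np.sqrt(1.0 + np.exp(2.0 * x ** 2))

def fixpoint(x0, tol, maxiter=100):
    xs = [x0]
    for _ in range(maxiter):
        xs.append(F(xs[-1]))            # x(n+1) from x(n)
        if abs(xs[-1] - xs[-2]) < tol:  # converged to the desired tolerance
            break
    return xs

# e.g. fixpoint(0.5, 1e-8) returns the sequence of approximations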

CPLEX finds optimal LP solution but returns no basis error

I am solving a series of LP problems using the CPLEX Python API.
Since many of the problems are essentially the same, save a handful of parameters, I want to use a warm start with the solution of the previous problem for most of them, by calling the function cpx.start.set_start(col_status, row_status, col_primal, row_primal, col_dual, row_dual), where cpx = cplex.Cplex().
This function is documented here. Two of the arguments, col_status and row_status, are obtained by calling cpx.solution.basis.get_col_basis() and cpx.solution.basis.get_row_basis().
However, despite cpx.solution.status[cpx.solution.get_status()] returning optimal and being able to obtain both cpx.solution.get_values() and cpx.solution.get_dual_values() ...
Calling cpx.solution.basis.get_basis() returns CPLEX Error 1262: No basis exists.
Now, according to this post one can call the warm start function with empty lists for the column and row basis statuses, as follows.
lastsolution = cpx.solution.get_values()
cpx.start.set_start(col_status=[], row_status=[],
                    col_primal=lastsolution, row_primal=[],
                    col_dual=[], row_dual=[])
However, this actually results in a few more CPLEX iterations. Why more is unclear, but the overall goal is obviously to need significantly fewer.
Version Info
Python 2.7.12
CPLEX 12.6.3
I'm not sure why you're getting the CPXERR_NO_BASIS. See my comment.
You may have better luck if you provide the values for row_primal, col_dual, and row_dual too. For example:
cpx2.start.set_start(col_status=[],
                     row_status=[],
                     col_primal=cpx.solution.get_values(),
                     row_primal=cpx.solution.get_linear_slacks(),
                     col_dual=cpx.solution.get_reduced_costs(),
                     row_dual=cpx.solution.get_dual_values())
I was able to reproduce the behavior you describe using the afiro.mps model that comes with the CPLEX examples (number of deterministic ticks actually increased when specifying col_primal alone). However, when doing the above, it did help (number of det ticks improved and iterations went to 0).
Finally, I don't believe that there is any guarantee that using set_start will always help (it may even be a bad idea in some cases). I don't have a reference for this.

Maximize a function with many parameters (python)

First, let me say that I lack experience with scientific math and statistics - so this might be a very well-known problem, but I don't know where to start.
I have a function f(x1, x2, ..., xn) where I need to guess the x's and find the highest value of f. The function has the following properties:
the total number of parameters is usually around 40 to 60, so a brute-force approach is impossible.
the possible values for each x range from 0.01 to 2.99
the function is steady, meaning that a higher f value means that the guess for the parameters is better and vice versa.
So far, I implemented a pretty basic method in Python. It initially sets all parameters to 1, randomly guesses new values and checks if f is higher than before. If not, it rolls back to the previous values.
In a loop with 10,000 iterations this seems to work somehow, but the result is probably far from being perfect.
Any suggestions on how to improve the search for the optimal parameters will be appreciated. When googling this issue things like MCMC came up, but that seems like a very advanced method and I would need a lot of time to even understand it.
Basic hints or concepts would help me more than elaborated methods and algorithms.
Don't do it yourself. Install SciPy and use its optimization routines. scipy.optimize.minimize looks like a good fit.
I think you want to take a look at scipy.optimize (http://docs.scipy.org/doc/scipy-0.10.0/reference/tutorial/optimize.html). A maximization is simply the minimization of -1 times the function.
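A minimal sketch of that suggestion, assuming the bounds from the question (the objective f below is only a placeholder for the real one):
import numpy as np
from scipy.optimize import minimize

n = 50                                  # 40-60 parameters in the question

def f(x):
    return -np.sum((x - 1.5) ** 2)      # placeholder objective to maximize

res = minimize(lambda x: -f(x),         # minimizing -f maximizes f
               x0=np.ones(n),           # start from all parameters = 1
               bounds=[(0.01, 2.99)] * n,
               method="L-BFGS-B")
best_params, best_value = res.x, -res.fun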
