How can I use the solver 'trust-constr' with only linear inequality constraints?
My constraint is c(x) > 0, where c(x) = g(x, theta) - ymin.
How can I pass only this linear inequality constraint to the 'trust-constr' solver?
The Python code for the c(x) constraint does interpolation, as below:
# pressure_cons_val is ymin; input_val is theta
input_val = (BHP, GOR, WC, GLR, LR)
# do_interpolation is defined elsewhere
interpol = self.file.do_interpolation(0)
res = interpol(input_val)  # g(x, theta)
cons = res - pressure_cons_val  # must be > 0
Please provide a minimal reproducible example.
You can find details about the solver here:
optimize.minimize-trustconstr
If the constraint is linear in one or more variables, you need to wrap it in SciPy's LinearConstraint object.
I cannot help you further until I can read and understand your code (what the input variables, the objective function, and the constraint(s) are; see also the general form of a continuous minimization problem).
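As a minimal sketch of the suggestion above (the quadratic objective and the numbers are made up for illustration; only LinearConstraint and the method name come from the question):

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

# Hypothetical objective: squared distance to the point (1, 2)
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

# Made-up linear inequality x0 + x1 >= 4, written as 4 <= A @ x <= inf
A = np.array([[1.0, 1.0]])
lin_con = LinearConstraint(A, lb=4.0, ub=np.inf)

res = minimize(f, x0=np.zeros(2), method="trust-constr", constraints=[lin_con])
print(res.x)  # optimum on the constraint boundary, approximately [1.5, 2.5]
```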
I remember that when using the linprog function in MATLAB, one of the outputs is a structure containing the Lagrange multipliers at the solution x (separated by constraint type).
Right now I'm programming in Python and using PuLP to solve a linear optimization problem with the PULP_CBC_CMD solver. Does PuLP have an equivalent function that returns the Lagrange multipliers at my optimal solution?
I've looked around quite a bit and haven't found any documentation on this.
For the constraints, PuLP can provide the duals:
con.pi
and for the variables, we have reduced costs:
x.dj
Together these form the Lagrange multipliers.
E.g. you can do something like:
for (name, c) in lpProb.constraints.items():
    print(name, c.pi)
for v in lpProb.variables():
    print(v.name, v.dj)
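A self-contained sketch (the tiny LP itself is invented for illustration; con.pi and v.dj are the attributes from the answer above):

```python
import pulp

# Made-up LP: maximize 3x + 2y subject to x + y <= 4 and x <= 3
prob = pulp.LpProblem("demo", pulp.LpMaximize)
x = pulp.LpVariable("x", lowBound=0)
y = pulp.LpVariable("y", lowBound=0)
prob += 3 * x + 2 * y               # objective
prob += x + y <= 4, "capacity"
prob += x <= 3, "x_limit"
prob.solve(pulp.PULP_CBC_CMD(msg=False))

for name, con in prob.constraints.items():
    print(name, con.pi)             # dual value (Lagrange multiplier)
for v in prob.variables():
    print(v.name, v.dj)             # reduced cost
```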
Is there any optimization method/solver in mystic or the scipy.optimize library that solves the problem in the matrix domain? In other words, is there any method/solver that accepts a matrix as an argument and minimizes its trace?
The trace of a matrix is:
tr(A) = sum(i, a[i,i])
So, depending on the rest of the model, (almost) any solver will allow you to use an objective like this. It is linear and is as easy and well-behaved as it gets. It is about the best objective you will ever see.
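Because the trace is linear in the matrix entries, the matrix can simply be flattened into a vector for any generic solver. A sketch with scipy's linprog (the [0, 1] entrywise bounds are made up, since the rest of the model isn't given):

```python
import numpy as np
from scipy.optimize import linprog

n = 3
c = np.eye(n).flatten()                          # objective weights: 1 on the diagonal, 0 elsewhere
res = linprog(c, bounds=[(0.0, 1.0)] * (n * n))  # minimize tr(A) over entrywise-bounded A
A_opt = res.x.reshape(n, n)
print(np.trace(A_opt))                           # -> 0.0: each diagonal entry hits its lower bound
```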
I have modeled a mathematical optimization problem in Pyomo (Python) as below.
I do not understand why only my "constraint2" is labeled as "indexed" in Pyomo.
Does that affect the solution? I ask because I use ipopt as the solver, but my results look quite different when I use the Gekko library with the same ipopt solver.
Thanks in advance.
I'm trying to solve the following problem.
In fact, this is a Least Absolute Deviation (LAD) regression problem. I want to know how to solve it with Python. I know that scipy has linprog, which solves linear programs with linear inequality constraints, but here there are two variables in the inequality constraints, t and x. So I want to know how to apply linprog, or whether there is another library that can solve this problem. Thanks.
You have to write your problem in standard form, splitting the second inequality constraint and concatenating the two optimization variables. Then, you can feed it to linprog.
It is more of a math problem than an implementation one.
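A sketch of that standard-form construction on synthetic data (the data and sizes are made up; the optimization variable is the concatenation z = [x, t], where the split second inequality says t_i >= |y_i - a_i @ x|):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, p = 50, 2
A = np.column_stack([np.ones(n), rng.normal(size=n)])   # design matrix: intercept + one feature
x_true = np.array([1.0, 3.0])
y = A @ x_true + rng.normal(scale=0.1, size=n)

# minimize sum(t)  s.t.  A x - t <= y   and   -A x - t <= -y
# (together the two inequalities say t_i >= |y_i - a_i @ x|)
c = np.concatenate([np.zeros(p), np.ones(n)])
A_ub = np.block([[A, -np.eye(n)], [-A, -np.eye(n)]])
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * p + [(0, None)] * n           # x free, t >= 0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x_hat = res.x[:p]
print(x_hat)   # close to [1.0, 3.0]
```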
I am having trouble solving an optimisation problem in python, involving ~20,000 decision variables. The problem is non-linear and I wish to apply both bounds and constraints to the problem. In addition to this, the gradient with respect to each of the decision variables may be calculated.
The bounds are simply that each decision variable must lie in the interval [0, 1], and there is a monotonic constraint placed upon the variables, i.e. each decision variable must be greater than the previous one.
I initially intended to use the L-BFGS-B method provided by the scipy.optimize package however I found out that, while it supports bounds, it does not support constraints.
I then tried the SLSQP method, which does support both constraints and bounds. However, because it requires more memory than L-BFGS-B and I have a large number of decision variables, I ran into memory errors fairly quickly.
The paper this problem comes from used the fmincon solver in MATLAB to optimise the function, which, to my knowledge, supports both bounds and constraints and is more memory-efficient than the SLSQP method provided by scipy. I do not have access to MATLAB, however.
Does anyone know of an alternative I could use to solve this problem?
Any help would be much appreciated.
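As an aside, the monotonic constraint described above has a sparse linear structure that constrained solvers can exploit; a sketch using a small n for illustration (only the [0, 1] bounds and the monotonicity come from the question):

```python
import numpy as np
from scipy.sparse import diags
from scipy.optimize import LinearConstraint

n = 5  # small for illustration; the same pattern scales to ~20,000 variables
# (n-1) x n sparse difference matrix: row i computes x[i+1] - x[i]
A = diags([-np.ones(n - 1), np.ones(n - 1)], offsets=[0, 1], shape=(n - 1, n))
mono = LinearConstraint(A, lb=0.0, ub=np.inf)  # x[i+1] - x[i] >= 0

x = np.linspace(0.0, 1.0, n)   # a monotone, [0, 1]-bounded point
print(A @ x)                   # all differences are non-negative
```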