I'm trying to use pyscipopt to solve a linear programming problem, but I am unable to express a piecewise linear function as a constraint.
The constraint is expressed as follows:

sum_t max(sum_i cf[i][t]*q[i] - L[t], 0) / sum_t L[t] <= cfm_max
I've tried to write it as the following:
cfm = quicksum(max(quicksum(cf[i][t] * q[i] for i in range(I)) - L[t], 0) for t in range(T)) / quicksum(L[t] for t in range(T)) <= cfm_max
where cfm_max = 0.15 in this case.
But it is probably very wrong, since it returns a NotImplementedError. I've seen examples in piecewise.py, found together with the package, but their usage seems different enough that it doesn't work in my case.
Would appreciate any help, thanks.
I think this can be written as:
sum(t,y[t]) <= 0.15*sum(t,L[t])
y[t] >= sum(i,CF[i,t]*q[i])-L[t]
where y[t] are non-negative variables. This is now completely linear (no division, no max()).
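For illustration, here is a minimal pyscipopt sketch of that linearization; the data (cf, L, I, T) are made-up placeholders, only the constraint structure matters:

from pyscipopt import Model, quicksum

# hypothetical placeholder data; substitute your own cf, L, I, T
I, T = 3, 4
cf = [[1.0 for _ in range(T)] for _ in range(I)]
L = [10.0 for _ in range(T)]
cfm_max = 0.15

model = Model("cfm_linearization")
q = [model.addVar(name="q_%d" % i) for i in range(I)]

# y[t] >= 0 stands in for max(sum_i cf[i][t]*q[i] - L[t], 0)
y = [model.addVar(lb=0.0, name="y_%d" % t) for t in range(T)]
for t in range(T):
    model.addCons(y[t] >= quicksum(cf[i][t] * q[i] for i in range(I)) - L[t])

# the original ratio constraint, with the denominator multiplied through
model.addCons(quicksum(y[t] for t in range(T)) <= cfm_max * sum(L))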
I want to solve the following (convex) minimization problem:
min ||x||_1 under the constraints sgn(A[x,R]) = y and ||x||_2 = 1
where A is an m-by-(N+1) matrix, x in R^N a vector, and [x,R] the vector created by appending a given number R to x. The objective is to find the optimal value for x.
A is a Fourier matrix, and fast matrix-vector, inversion, etc. algorithms are available for it. Since this matrix is really big, I need an optimization algorithm that exploits this structure.
Currently, I use the following implementation in cvxpy, which is way too slow:
import cvxpy as cvx

# rewrite the problem in the form x = x^+ - x^-
n = A.shape[1] - 1
vx = cvx.Variable(2 * n)
objective = cvx.Minimize(cvx.pnorm(vx, 1))  # min ||x||_1
constraints = [vx >= 0,
               cvx.multiply(A[:, :n] @ vx[:n] - A[:, :n] @ vx[n:] + A[:, n] * R, y) >= 0,
               cvx.norm(vx, 2) <= R]  # sgn(A[x,R]) = y, ||x||_2 <= R
x, solve_time = solve(vx, objective, constraints)  # solve() is the author's own helper
solution = x[:n] - x[n:]
Is there a way to use fast matrix computations in cvxpy? Or is there a better library? I found a few implementations that do this for one specific algorithm, but not in the general case, so I was not able to adapt them to my problem.
No. The solver will not call your matrix multiplication code. Solvers do their own linear algebra, which is very different in many ways. In a sense, your matrix multiplication is just notation for the problem statement.
Regarding performance, it depends heavily on where the bottleneck is. Is it in generating the model (in cvxpy itself) or in the solver? What solver are you using? Consider using a different solver. Obviously, we don't have enough information (and no reproducible example) to answer this question.
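If you want to locate the bottleneck yourself, one rough approach (a sketch; the choice of SCS as solver is just an example) is to time cvxpy's canonicalization separately from the solve, reusing objective and constraints from the code above:

import time
import cvxpy as cvx

prob = cvx.Problem(objective, constraints)

t0 = time.time()
prob.get_problem_data(cvx.SCS)  # model generation (canonicalization) only
t1 = time.time()
prob.solve(solver=cvx.SCS, verbose=True)
t2 = time.time()

print("model generation: %.2fs, solve: %.2fs" % (t1 - t0, t2 - t1))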
I'm trying to solve this equation using the nsolve function. Unfortunately, this error appears:
ValueError: Could not find root within given tolerance. (435239733.760000060718 > 2.16840434497100886801e-19)
Try another starting point or tweak arguments.
The code is:
import sympy
d=[0.3, 32.6, 33.4, 241.7, 396.2, 444.4, 480.8, 588.9, 1043.9, 1136.1, 1288.1, 1408.1, 1439.4, 1604.8]
N=len(d)
x = sympy.Symbol('x', real=True)
expr2 = sympy.Eq(d[13] + N * sympy.Pow(x, -1) - N * d[13] * sympy.Pow(1 - sympy.exp(-d[13] * N), -1), 0)
expr_2 = sympy.simplify(expr=expr2)
solution = sympy.nsolve(expr_2, -0.01)
s = round(solution, 6)
print(s)
The equation you are trying to solve has large derivatives and very abrupt changes, including a singularity at x == 0; a plot of the equation (made in Mathematica) makes this behaviour clear.
Numerical solvers struggle with such functions because most of them assume some amount of smoothness around the solution and can be confused by singularities. Almost all of them (solvers in general, not just SymPy's) benefit from regularization or a reformulation of the problem.
I would suggest simplifying the equation by multiplying both sides by x, which removes the division by x and leads to a smoother function (linear in this case), for which numerical solvers behave correctly.
With this reformulation, you should find that the solution is 0.000671064.
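A minimal SymPy sketch of that reformulation, reusing the data from the question:

import sympy

d = [0.3, 32.6, 33.4, 241.7, 396.2, 444.4, 480.8, 588.9, 1043.9, 1136.1, 1288.1, 1408.1, 1439.4, 1604.8]
N = len(d)
x = sympy.Symbol('x', real=True)

# original left-hand side, singular at x == 0
lhs = d[13] + N / x - N * d[13] / (1 - sympy.exp(-d[13] * N))

# multiply through by x: the 1/x term disappears and the equation is linear
expr_linear = sympy.Eq(sympy.expand(lhs * x), 0)
solution = sympy.nsolve(expr_linear, 0.01)
print(round(solution, 9))  # 0.000671064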
Moreover, I would also suggest rescaling the coefficients so that they all lie in [-1, 1]; this generally helps solvers too. In your case the solver finds the solution easily anyway, since the reformulated equation is linear, but more complex equations might cause problems.
My question is simple: is there an easy way to implement MATLAB's lsqlin in Python? Because, according to the documentation:
lsqlin Constrained linear least squares.
X = lsqlin(C,d,A,b) attempts to solve the least-squares problem
min_x 0.5*norm(C*x - d)^2   subject to   A*x <= b
where C is m-by-n.
In scipy there is scipy.optimize.lsq_linear, but, looking at the documentation, it solves:
minimize 0.5 * ||A x - b||**2
subject to lb <= x <= ub
So I could use it just by setting x <= A^-1*b as the upper bound and -inf as the lower bound, but this can't be the right approach (what if the inverse doesn't exist?). What else could I use?
I've found a solution: the cvxpy library (https://www.cvxpy.org/). It is easy to define a least-squares problem with this kind of constraint there. The resolution of the output is not equal (MATLAB gives 2.3e-16 while Python returns 0), but I think it is a good approximation. If I find out how to increase the resolution I will update the answer.
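For reference, here is a minimal sketch of the lsqlin-style problem in cvxpy; the random C, d, A, b below are stand-ins for your real data:

import cvxpy as cp
import numpy as np

# small random instance standing in for the real C, d, A, b
rng = np.random.default_rng(0)
C = rng.standard_normal((20, 5))
d = rng.standard_normal(20)
A = rng.standard_normal((8, 5))
b = rng.standard_normal(8)

x = cp.Variable(5)
objective = cp.Minimize(0.5 * cp.sum_squares(C @ x - d))  # 0.5*||Cx - d||^2
constraints = [A @ x <= b]
cp.Problem(objective, constraints).solve()
print(x.value)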
I would like to know how to define a complex objective function using or-tools (if it is possible).
The basic example below shows how to have basic linear problem with Or-tools in python:
solver = pywraplp.Solver('lp_pricing_problem', pywraplp.Solver.GLOP_LINEAR_PROGRAMMING)
# Define variables with a range from 0 to 1000.
x = solver.NumVar(0, 1000, 'Variable_x')
y = solver.NumVar(0, 1000, 'Variable_y')
# Define some constraints.
solver.Add(x >= 17)
solver.Add(x <= 147)
solver.Add(y >= 61)
solver.Add(y <= 93)
# Minimize 0.5*x + 2*y
objective = solver.Objective()
objective.SetCoefficient(x, 0.5)
objective.SetCoefficient(y, 2)
objective.SetMinimization()
status = solver.Solve()
# Print the solution
if status == solver.OPTIMAL:
    print("x: {}, y: {}".format(x.solution_value(), y.solution_value()))  # x: 17.0, y: 61.0
In this very basic example the objective function is Minimize(0.5*x + 2*y).
What would be the syntax to obtain, for example, the least squares Minimize(x^2 + y^2) or the absolute value of a variable Minimize(abs(x) + y)?
Is it possible to define a sub-function and call it into the objective function? Or should I proceed another way?
Many thanks in advance,
Romain
You've tagged this question with linear-programming, so you already have the ingredients to figure out the answer here.
If you check out this page, you'll see that OR-Tools solves linear programs, as well as a few other families of optimization problems.
So the first objective function you mention, Minimize(0.5*x + 2*y) is solvable because it is linear.
The second objective you mention---Minimize(x^2 + y^2)---cannot be solved with OR-Tools because it is nonlinear: those squared terms make it quadratic. To solve this problem you need something that can do quadratic programming, second-order cone programming, or quadratically constrained quadratic programming. All of these methods include linear programming as a special case. The tool I recommend for solving these sorts of problems is cvxpy, which offers a powerful and elegant interface. (Alternatively, you can approximate the quadratic as piecewise linear, but you will incur many more constraints.)
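As an illustration, a minimal cvxpy version of the quadratic objective, reusing the bounds from the question (a sketch, not OR-Tools code):

import cvxpy as cp

x = cp.Variable()
y = cp.Variable()
prob = cp.Problem(cp.Minimize(cp.square(x) + cp.square(y)),
                  [x >= 17, x <= 147, y >= 61, y <= 93])
prob.solve()
print(x.value, y.value)  # expect roughly x = 17, y = 61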
The last objective you mention, Minimize(c*abs(x) + y), can be solved as a linear program even though abs(x) itself is nonlinear. To do so, we substitute x = t1 - t2, add the constraints t1, t2 >= 0, and rewrite the objective as min(c*(t1 + t2) + y). This works as long as c is positive and you are minimizing (or c is negative and you are maximizing). A longer explanation is here.
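And a sketch of that abs-value trick in OR-Tools, building on the question's example; here x is allowed to be negative so that abs(x) is meaningful (the bounds are made up for illustration):

from ortools.linear_solver import pywraplp

solver = pywraplp.Solver('abs_example', pywraplp.Solver.GLOP_LINEAR_PROGRAMMING)

# x = t1 - t2 with t1, t2 >= 0; abs(x) is replaced by t1 + t2
t1 = solver.NumVar(0, 1000, 't1')
t2 = solver.NumVar(0, 1000, 't2')
y = solver.NumVar(0, 1000, 'Variable_y')

solver.Add(t1 - t2 >= -147)  # constraints on x become constraints on t1 - t2
solver.Add(t1 - t2 <= 147)
solver.Add(y >= 61)

# Minimize abs(x) + y  ==>  Minimize t1 + t2 + y
objective = solver.Objective()
objective.SetCoefficient(t1, 1)
objective.SetCoefficient(t2, 1)
objective.SetCoefficient(y, 1)
objective.SetMinimization()

status = solver.Solve()
if status == solver.OPTIMAL:
    print("x: {}, y: {}".format(t1.solution_value() - t2.solution_value(), y.solution_value()))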
There are many such transformations you can perform and one of the skills of a mathematical programmer/operations researcher is to have many of them memorized.
I am trying to fit a function to my data using scipy.optimize.curve_fit.
Q=optimization.curve_fit(func,X,Y, x0,ERR)
and it works well.
However, now I am trying to use an asymmetric error and I have no idea how to do that - or even if it is possible.
By asymmetric error I mean that the error is not for example: 3+-0.5 but 3 +0.6 -0.2.
So that ERR is an array with two columns.
It would be great if somebody had an idea how to do that, or could point me to a different Python routine which might be able to do it.
Here is a snippet of the code I am using, though I am not sure it makes things clearer:
import numpy
from scipy import optimize as optimization
from scipy.special import erf

A = numpy.genfromtxt('WF.dat')
cc = A[:, 4]

def func(A, a1, b1, c1):
    N = numpy.zeros(len(A))
    for i in range(len(A)):
        N[i] = 1.0 * erf(a1 * (A[i, 1] - c1 * A[i, 0] ** b1))
    return N

x0 = numpy.array([2.5, -0.07, -5.0])
Q = optimization.curve_fit(func, A, cc, x0, Error)
And Error=[ErP,ErM] (2 columns)
Least-squares algorithms like curve_fit or scipy.optimize.leastsq will not be able to do this, because the loss function is quadratic and therefore symmetric for positive and negative errors.
I haven't seen any models for this; maybe PAIDA can handle it, as DanHickstein mentioned.
Otherwise, you could use the nonlinear optimizers like optimize.fmin and construct your own asymmetric loss function.
def loss_function(params, x, y, sigma_low, sigma_upp):
    # the signature is one possible choice; pass in whatever func needs
    error = y - func(x, *params)
    error_neg = (error < 0)
    error_squared = error**2 / (error_neg * sigma_low + (1 - error_neg) * sigma_upp)
    return error_squared.sum()
and minimize this with fmin or fmin_bfgs.
(I never tried this.)
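For concreteness, a hypothetical call, assuming the arrays x, y, sigma_low, sigma_upp and the starting values x0 from the question:

from scipy import optimize

# minimize the asymmetric loss; x0 is the initial parameter guess
params_opt = optimize.fmin(loss_function, x0, args=(x, y, sigma_low, sigma_upp))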
In the current version, I am afraid it is not doable. curve_fit is a wrapper around the popular Fortran library MINPACK. Check the source code of \scipy_install_path\optimize\minpack.py and you will see (lines 498-509):
if sigma is None:
    func = _general_function
else:
    func = _weighted_general_function
    args += (1.0/asarray(sigma),)
Basically, what this means is that if sigma is not provided, the unweighted Levenberg-Marquardt method in MINPACK is called. If sigma is provided, the weighted LM is called. That implies that if sigma is to be provided, it must be an array of the same length as X or Y.
That means that if you want an asymmetric error residual on Y, you have to come up with some modification of your target function, as @Jaime suggested.
I'm not 100% sure, but it looks like the PAIDA package might do fits with asymmetric errors:
http://paida.sourceforge.net/documentation/fitter/index.html
A solution, which I've used frequently, is to draw realisations (say 100-1000) from a split-normal distribution, and run your fitting algorithm on each of these realisations with the error set to 0.0. You'll then have 100-1000 best-fitting parameters, from which you can simply take the median, along with any error estimate you want to use.
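A rough sketch of that procedure follows; func, x, y, err_plus, err_minus, and x0 stand in for your own model and data, and the toy values are only there to make the snippet runnable:

import numpy as np
from scipy import optimize

# toy stand-ins; replace with your model and data
def func(x, a, b):
    return a * x + b

x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 1.0
err_plus = np.full(20, 0.06)   # upper error bars
err_minus = np.full(20, 0.02)  # lower error bars
x0 = [1.0, 0.0]

def split_normal_draw(y, err_plus, err_minus, rng):
    # half-normal magnitude, attached to the upper or lower branch with
    # probability proportional to that branch's width (split normal)
    z = np.abs(rng.standard_normal(len(y)))
    upper = rng.random(len(y)) < err_plus / (err_plus + err_minus)
    return y + np.where(upper, z * err_plus, -z * err_minus)

rng = np.random.default_rng(0)
fits = []
for _ in range(500):
    y_draw = split_normal_draw(y, err_plus, err_minus, rng)
    popt, _ = optimize.curve_fit(func, x, y_draw, p0=x0)
    fits.append(popt)
fits = np.asarray(fits)

best = np.median(fits, axis=0)                  # central estimate
lo, hi = np.percentile(fits, [16, 84], axis=0)  # asymmetric interval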