cvxpy solve produces an empty answer - python

I am working with the following code:
import sys, numpy as np
import cvxpy as cvx

if __name__ == '__main__':
    sims = np.random.randint(20, 30, size=500)
    center = 30
    n = [500, 1]
    # minimize p'*log(p)
    # subject to
    # sum(p) = 1
    # sum(p'*a) = target1
    A = np.mat(np.vstack([np.ones(n[0]), sims]))
    b = np.mat([1.0, center]).T
    x = cvx.Variable(n)
    obj = cvx.Maximize(cvx.sum(cvx.entr(x)))
    constraints = [A @ x == b]
    prob = cvx.Problem(obj, constraints)
    prob.solve()
    weights = np.array(x.value)
Here x.value is None, so weights ends up empty. I am not sure how to modify the setup above. I am trying to readjust the mean of sims to a different value, defined by the variable center here.

Remember to check that prob.value is finite before accessing the variables' values after calling prob.solve(). Since you have a maximization problem and prob.value returns -inf (see the output below), your problem is infeasible:
import sys, numpy as np
import cvxpy as cvx

if __name__ == '__main__':
    sims = np.random.randint(20, 30, size=500)
    center = 30
    n = [500, 1]
    # minimize p'*log(p)
    # subject to
    # sum(p) = 1
    # sum(p'*a) = target1
    A = np.mat(np.vstack([np.ones(n[0]), sims]))
    b = np.mat([1.0, center]).T
    x = cvx.Variable(n)
    obj = cvx.Maximize(cvx.sum(cvx.entr(x)))
    constraints = [A @ x == b]
    prob = cvx.Problem(obj, constraints)
    prob.solve()
    print(prob.value)
    weights = np.array(x.value)
Output:
-inf
From "Variable values return 'None' after solving the problem":
Diagnosing infeasibility issues is a common task when using optimization models in practice. Usually you will find either a bug in your code, or you will see that the abstract mathematical model can be infeasible (even if coded up perfectly).
For a quick confirmation that it is your abstract mathematical model that is infeasible, rather than a bug in your code, you can try replacing
constraints = [A @ x == b]
with
constraints = [A @ x >= b]  # Outputs 183.9397...
or with
constraints = [A @ x <= b]  # Outputs 6.2146...
and you will see that your code works.
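The underlying reason is worth spelling out: np.random.randint(20, 30) draws integers from 20 to 29 inclusive, and entr(x) is only defined for x >= 0, so with sum(x) = 1 the weighted mean sims @ x can never exceed 29; a target center of 30 is unreachable. A minimal sketch (using 1-D shapes for simplicity) showing that a reachable target makes the same problem feasible:

import numpy as np
import cvxpy as cvx

sims = np.random.randint(20, 30, size=500)  # integers 20..29, so the mean can never be 30
center = 25  # any target between sims.min() and sims.max() is reachable
A = np.vstack([np.ones(500), sims])
b = np.array([1.0, center])
x = cvx.Variable(500)
prob = cvx.Problem(cvx.Maximize(cvx.sum(cvx.entr(x))), [A @ x == b])
prob.solve()
print(prob.status)  # optimal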

First, in the way of debugging:
Try using this to see what the issue is:
prob.solve(verbose=True)
and this to check whether a solution was found:
print(prob.status)
In your case the problem is infeasible: the linear system you are trying to satisfy does not always have a solution. You may introduce an "eps" tolerance to define the needed accuracy for your problem, or test in advance, using a linear algebra library, whether a solution exists.
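A hedged sketch of the "eps" idea, reusing A, b and x from the question's script (eps is a name introduced here for illustration): relax the exact equality to a tolerance band, which keeps the problem convex and gives the solver slack when the equality cannot be met exactly:

eps = 1.0  # tolerance; must be large enough to make the band reachable
constraints = [A @ x <= b + eps, A @ x >= b - eps]
prob = cvx.Problem(cvx.Maximize(cvx.sum(cvx.entr(x))), constraints)
prob.solve()
print(prob.status)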

Related

Filling coordinates from distance with pyomo solver

I have a dataset with the distances between points and some known coordinates. I want to "fill in" probable coordinates using the distances I know.
For this I am trying to solve the following optimisation problem:
minimize, over all edges (i, j), the sum of (distance_known(ij)² - ((xj - xi)² + (yj - yi)²)).
I wrote the following script in Python (using an example with a 6-node graph).
It seems to not work as I want, returning
WARNING: Loading a SolverResults object with a warning status into model.name="FillingCoordinates";
- termination condition: infeasible
- message from solver: Ipopt 3.11.1\x3a Converged to a locally
infeasible point. Problem may be infeasible.
I do not understand why. Can someone help me with this?
import numpy as np
import pyomo.environ as pyo

L = np.array([[1, 2, 3], [1, 3, np.sqrt(13)], [1, 4, 2], [1, 5, np.sqrt(8)],
              [1, 6, 5], [2, 3, 2], [3, 4, 3], [4, 5, 2], [5, 6, np.sqrt(13)]])
P1 = [1, 0, 0]
P2 = [4, 0, 2]
P3 = [3, -3, 2]
P4 = [4, 0, 2]
P5 = [5, 2, 2]
P6 = [6, 5, 0]
Constr = np.array([P1, P2, P3, P4, P6])

model = pyo.ConcreteModel()
model.name = "FillingCoordinates"
model.lines = pyo.Set(initialize=(i for i in range(len(L))))
model.nodes = pyo.Set(initialize=(i + 1 for i in range(6)))
model.x = pyo.Var(model.nodes, initialize=0, domain=pyo.Integers)
model.y = pyo.Var(model.nodes, initialize=0, domain=pyo.Integers)
model.constraint_x = pyo.ConstraintList()
model.constraint_y = pyo.ConstraintList()
for c in Constr:
    model.constraint_x.add(model.x[c[0]] == c[1])
    model.constraint_y.add(model.x[c[0]] == c[2])
for node in model.nodes:
    model.constraint_y.add(model.y[node] >= -1)
    model.constraint_y.add(model.y[node] <= 3)
    model.constraint_x.add(model.x[node] >= -4)
    model.constraint_x.add(model.x[node] <= 6)

def func_objective(model):
    objective_expr = sum(L[line][2]**2
                         - ((model.x[L[line][0]] - model.x[L[line][1]])**2
                            + (model.y[L[line][0]] - model.y[L[line][1]])**2)
                         for line in model.lines)
    return objective_expr

model.objective = pyo.Objective(rule=func_objective, sense=pyo.minimize)

solver = pyo.SolverFactory('ipopt')
solver.solve(model, tee=True)
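One observation on the posted model, offered as a hedged guess since no answer appears in the thread: both loops over Constr constrain model.x. The line model.constraint_y.add(model.x[c[0]] == c[2]) presumably meant model.y[c[0]]; as written, P2 = [4, 0, 2] pins x[4] to both 0 and 2, which by itself makes the model infeasible regardless of the solver. A sketch of the presumably intended loop:

for c in Constr:
    model.constraint_x.add(model.x[c[0]] == c[1])
    model.constraint_y.add(model.y[c[0]] == c[2])  # y, not x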

Optimization with constraint

I would like to solve a constrained optimization problem.
max {ln (c1) + ln (c2)}
s.t. 4(c1) + 6(c2) ≤ 40
I wrote this code:
import numpy as np
from scipy import optimize

def main():
    """
    solving a regular constrained optimization problem
    max ln(cons[0]) + ln(cons[1])
    st. prices[0]*cons[0] + prices[1]*cons[1] <= I
    """
    prices = np.array([4.0, 6.0])
    I = 40.0
    util = lambda cons: np.dot(np.log(cons))  # define utility function
    budget = lambda cons: I - np.dot(prices, cons)  # define the budget constraint
    initval = 40.0*np.ones(2)  # set the initial guess for the algorithm
    res = optimize.minimize(lambda x: -util(x), initval, method='slsqp',
                            constraints={'type': 'ineq', 'fun': budget},
                            tol=1e-9)
    assert res['success'] == True
    print(res)
Unfortunately, my code doesn't print any solution. Can you help me figure out why?
Your code raises a TypeError, since np.dot expects two arguments; see the definition of your util function. Hence, use
# the same as np.dot(np.ones(2), np.log(cons))
util = lambda cons: np.sum(np.log(cons))
instead.
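Putting it together, a minimal corrected sketch (my assembly of the fix, with a smaller initial guess and an explicit call to main(), which the original snippet was also missing):

import numpy as np
from scipy import optimize

def main():
    prices = np.array([4.0, 6.0])
    I = 40.0
    util = lambda cons: np.sum(np.log(cons))        # utility function
    budget = lambda cons: I - np.dot(prices, cons)  # budget constraint, must be >= 0
    initval = np.ones(2)                            # feasible initial guess
    res = optimize.minimize(lambda x: -util(x), initval, method='SLSQP',
                            constraints={'type': 'ineq', 'fun': budget},
                            tol=1e-9)
    print(res.x)  # roughly [5.0, 3.333]: log utility spends half the budget on each good

main()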

Pyomo.dae - Solving a system of DAEs with Casadi solver

I am trying to solve a system of DAEs using pyomo.
This is a toy example:
from pyomo.environ import *
from pyomo.dae import *

m = ConcreteModel()
m.r = ContinuousSet(bounds=(0., 1.))
m.t = ContinuousSet(bounds=(0., 5.))
m.c = Var(m.r, m.t)
m.dcdt = DerivativeVar(m.c, wrt=m.t)

discretizer = TransformationFactory('dae.finite_difference')
discretizer.apply_to(m, nfe=20, wrt=m.r, scheme='BACKWARD')

# setting initial conditions
m.c[:, 0].fix(5)

def _dae_rule(m, r, t):
    return 0 == -m.c[r, t] - m.dcdt[r, t]  # note that rewriting to ODE is not desired

m.ode = Constraint(m.r, m.t, rule=_dae_rule)

sim = Simulator(m, package='casadi')
tsim, profiles = sim.simulate(numpoints=100, integrator='idas')
Unfortunately, execution leads to the error message
DAE_Error: Currently the simulator may only be applied to Pyomo models with a single ContinuousSet
How so? Isn't m.t the only ContinuousSet?
Manually deleting that ContinuousSet, and instead using a discrete set in the first place, yields the error message
DAE_Error: Cannot simulate a differential equation with multiple DerivativeVars
I don't understand; every equation only depends on its own derivative?
Also, if I were to also discretize m.t, could I then use an alternative solver that might work?
Thank you very much :)
According to the documentation on Simulator, it only supports models with a single ContinuousSet, and you have two: m.r and m.t. Maybe you can define a system of DAEs as a function of t at discrete values of r, or vice versa.
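On the last sub-question: yes, once you discretize m.t as well, the model becomes an ordinary algebraic problem that any NLP solver can handle. A hedged sketch (assuming ipopt is installed; the dummy objective is my addition, since most solvers expect one, and the initial condition is fixed after discretization so it applies to all grid points):

from pyomo.environ import *
from pyomo.dae import *

m = ConcreteModel()
m.r = ContinuousSet(bounds=(0., 1.))
m.t = ContinuousSet(bounds=(0., 5.))
m.c = Var(m.r, m.t)
m.dcdt = DerivativeVar(m.c, wrt=m.t)

def _dae_rule(m, r, t):
    return 0 == -m.c[r, t] - m.dcdt[r, t]
m.ode = Constraint(m.r, m.t, rule=_dae_rule)

# discretize both continuous sets, then solve as a plain NLP
discretizer = TransformationFactory('dae.finite_difference')
discretizer.apply_to(m, nfe=20, wrt=m.r, scheme='BACKWARD')
discretizer.apply_to(m, nfe=50, wrt=m.t, scheme='BACKWARD')
m.c[:, 0].fix(5)           # initial condition, applied after discretization
m.obj = Objective(expr=0)  # dummy objective; we only need feasibility

SolverFactory('ipopt').solve(m, tee=True)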

Finding best weight value for smooth constrained least squares with Python?

I have a least squares problem to solve without any known estimates of a parameter. I impose the constraint that my desired solution be smooth (the model parameters vary slowly), so I minimize the differences between adjacent parameters (a traditional remedy for this geological problem).
The constraints are implemented by appending the constraining equations as extra rows to the original data equation d = Gm. The auxiliary parameter w is chosen by trial and error (some textbooks call w a Lagrange multiplier).
I have the following:
G = np.array([[1, 0, 1, 0, 0, 6],
              [1, 0, 0, 1, 0, 6.708],
              [1, 0, 0, 0, 1, 8.485],
              [0, 1, 1, 0, 0, 7.616],
              [0, 1, 0, 1, 0, 7],
              [0, 1, 0, 0, 1, 7.616]])
d = np.array([[2.323],
              [2.543],
              [2.857],
              [2.64],
              [2.529],
              [2.553]])
Now adding a constraint of an arbitrary w-weighted smoothness (w = 0.01):
w = 0.01
G = np.array([[1, 0, 1, 0, 0, 6],
              [1, 0, 0, 1, 0, 6.708],
              [1, 0, 0, 0, 1, 8.485],
              [0, 1, 1, 0, 0, 7.616],
              [0, 1, 0, 1, 0, 7],
              [0, 1, 0, 0, 1, 7.616],
              [w, -w, 0, 0, 0, 0],
              [0, w, -w, 0, 0, 0],
              [0, 0, w, -w, 0, 0],
              [0, 0, 0, w, -w, 0],
              [0, 0, 0, 0, w, -w]])
d = np.array([[2.323],
              [2.543],
              [2.857],
              [2.64],
              [2.529],
              [2.553],
              [0],
              [0],
              [0],
              [0],
              [0]])
However, choosing a proper value for w seems to be the key step in constraining a good solution for the model parameters.
So my question is: with Python, is there a way I can loop over many calculated solutions with different values for w and choose the value that was used to achieve the solution with the best quality?
In the presented solution I'll refer to G_0 as G without the additional constraint rows, and similarly d_0 is d without the additional zeros. I'm also assuming you're reading G_0 and d_0 from somewhere, so I treat them as known.
import numpy as np

def create_W(n_rows, w):
    # n_rows x (n_rows + 1) difference matrix: w on the main diagonal, -w above it,
    # matching the rows [w, -w, 0, ...], [0, w, -w, ...] from the question
    # (the original version ignored w and produced an extra row)
    return w * (np.eye(n_rows, n_rows + 1) - np.eye(n_rows, n_rows + 1, k=1))

def solution_quality_metric(m):
    # this needs to be implemented to determine what you mean by "best"
    raise NotImplementedError

n_rows = 5
d_w = np.zeros(n_rows)
# choose a range of w values, for example:
w_min, w_max, dw = 0.0, 1.0, 0.01
best_m = -np.inf
best_w = w_min
for w in np.arange(w_min, w_max, dw):
    W = create_W(n_rows, w)
    G = np.concatenate([G_0, W], axis=0)
    d = np.concatenate([d_0.ravel(), d_w])  # ravel so both pieces are 1-D
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    if solution_quality_metric(m) > best_m:
        best_m = solution_quality_metric(m)
        best_w = w
This code will obviously not work as is, since you didn't specify what you mean by "solution with the best quality". For that you'll need to implement the solution_quality_metric function.
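One hedged example of such a metric, purely illustrative and reusing G_0 and d_0 from above: trade the data misfit off against the roughness of m, in the spirit of an L-curve analysis (both the equal weighting and the choice of norms are assumptions, not the only option):

def solution_quality_metric(m):
    # larger is better: penalize both misfit on the original system and roughness of m
    misfit = np.linalg.norm(G_0 @ m - d_0.ravel())
    roughness = np.linalg.norm(np.diff(m))
    return -(misfit + roughness)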

Is my problem suited for convex optimization, and if so, how to express it with cvxpy?

I have an array of scalars of m rows and n columns. I have a Variable(m) and a Variable(n) that I would like to find solutions for.
The two variables represent values that need to be broadcast over the columns and rows respectively.
I was naively thinking of writing the variables as Variable((m, 1)) and Variable((1, n)), and adding them together as if they're ndarrays. However, that doesn't work, as broadcasting is not allowed.
import cvxpy as cp
import numpy as np
# Problem data.
m = 3
n = 4
np.random.seed(1)
data = np.random.randn(m, n)
# Construct the problem.
x = cp.Variable((m, 1))
y = cp.Variable((1, n))
objective = cp.Minimize(cp.sum(cp.abs(x + y + data)))
# or:
#objective = cp.Minimize(cp.sum_squares(x + y + data))
prob = cp.Problem(objective)
result = prob.solve()
print(x.value)
print(y.value)
This fails on the x + y expression: ValueError: Cannot broadcast dimensions (3, 1) (1, 4).
Now I'm wondering two things:
Is my problem indeed solvable using convex optimization?
If yes, how can I express it in a way that cvxpy understands?
I'm very new to the concept of convex optimization, as well as cvxpy, and I hope I described my problem well enough.
I offered to show you how to represent this as a linear program, so here goes. I'm using Pyomo, since I'm more familiar with it, but you could do something similar in PuLP.
To run this, you will need to first install Pyomo and a linear program solver like glpk. glpk should work for reasonable-sized problems, but if you are finding it's taking too long to solve, you could try a (much faster) commercial solver like CPLEX or Gurobi.
You can install Pyomo via pip install pyomo or conda install -c conda-forge pyomo. You can install glpk from https://www.gnu.org/software/glpk/ or via conda install glpk. (I think PuLP comes with a version of glpk built-in, so that might save you a step.)
Here's the script. Note that this calculates absolute error as a linear expression by defining one variable for the positive component of the error and another for the negative part. Then it seeks to minimize the sum of both. In this case, the solver will always set one to zero since that's an easy way to reduce the error, and then the other will be equal to the absolute error.
import random
import pyomo.environ as po

random.seed(1)

# ~50% sparse data set, big enough to populate every row and column
m = 10  # number of rows
n = 10  # number of cols
data = {
    (r, c): random.random()
    for r in range(m)
    for c in range(n)
    if random.random() >= 0.5
}

# define a linear program to find vectors
# x in R^m, y in R^n, such that x[r] + y[c] is close to data[r, c]

# create an optimization model object
model = po.ConcreteModel()

# create indexes for the rows and columns
model.ROWS = po.Set(initialize=range(m))
model.COLS = po.Set(initialize=range(n))

# create indexes for the dataset
model.DATAPOINTS = po.Set(dimen=2, initialize=data.keys())

# data values
model.data = po.Param(model.DATAPOINTS, initialize=data)

# create the x and y vectors
model.X = po.Var(model.ROWS, within=po.NonNegativeReals)
model.Y = po.Var(model.COLS, within=po.NonNegativeReals)

# create dummy variables to represent errors
model.ErrUp = po.Var(model.DATAPOINTS, within=po.NonNegativeReals)
model.ErrDown = po.Var(model.DATAPOINTS, within=po.NonNegativeReals)

# Force the error variables to match the error
def Calculate_Error_rule(model, r, c):
    pred = model.X[r] + model.Y[c]
    err = model.ErrUp[r, c] - model.ErrDown[r, c]
    return (model.data[r, c] + err == pred)
model.Calculate_Error = po.Constraint(
    model.DATAPOINTS, rule=Calculate_Error_rule
)

# Minimize the total error
def ClosestMatch_rule(model):
    return sum(
        model.ErrUp[r, c] + model.ErrDown[r, c]
        for (r, c) in model.DATAPOINTS
    )
model.ClosestMatch = po.Objective(
    rule=ClosestMatch_rule, sense=po.minimize
)

# Solve the model

# get a solver object
opt = po.SolverFactory("glpk")
# solve the model
# turn off "tee" if you want less verbose output
results = opt.solve(model, tee=True)

# show solution status
print(results)

# show verbose description of the model
model.pprint()

# show X and Y values in the solution
for r in model.ROWS:
    print('X[{}]: {}'.format(r, po.value(model.X[r])))
for c in model.COLS:
    print('Y[{}]: {}'.format(c, po.value(model.Y[c])))
Just to complete the story, here's a solution that's closer to your original example. It uses cvxpy, but with the sparse data approach from my solution.
I don't know the "official" way to do elementwise calculations with cvxpy, but it seems to work OK to just use the standard Python sum function with a lot of individual cp.abs(...) calculations.
This gives a solution that is very slightly worse than the linear program, but you may be able to fix that by adjusting the solution tolerance.
import cvxpy as cp
import random
random.seed(1)
# Problem data.
# ~50% sparse data set
m = 10 # number of rows
n = 10 # number of cols
data = {
    (i, j): random.random()
    for i in range(m)
    for j in range(n)
    if random.random() >= 0.5
}

# Construct the problem.
x = cp.Variable(m)
y = cp.Variable(n)
objective = cp.Minimize(
    sum(
        cp.abs(x[i] + y[j] + data[i, j])
        for (i, j) in data.keys()
    )
)
prob = cp.Problem(objective)
result = prob.solve()
print(x.value)
print(y.value)
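On "adjusting the solution tolerance" mentioned above, a hedged pointer: cvxpy forwards extra keyword arguments to the underlying solver, so with ECOS something like the following should tighten the solution (the argument names are ECOS's own, not cvxpy's):

result = prob.solve(solver=cp.ECOS, abstol=1e-10, reltol=1e-10)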
I did not fully get the idea, so here is just some hacky stuff based on the assumption that
you want a cvxpy equivalent of numpy's broadcasting-rules behaviour on arrays of shapes (m, 1) + (1, n).
So numpy-wise:
m = 3
n = 4
np.random.seed(1)
a = np.random.randn(m, 1)
b = np.random.randn(1, n)
a
array([[ 1.62434536],
[-0.61175641],
[-0.52817175]])
b
array([[-1.07296862, 0.86540763, -2.3015387 , 1.74481176]])
a + b
array([[ 0.55137674, 2.48975299, -0.67719333, 3.36915713],
[-1.68472504, 0.25365122, -2.91329511, 1.13305535],
[-1.60114037, 0.33723588, -2.82971045, 1.21664001]])
Let's mimic this with np.kron, which has a cvxpy-equivalent:
aLifted = np.kron(np.ones((1,n)), a)
bLifted = np.kron(np.ones((m,1)), b)
aLifted
array([[ 1.62434536, 1.62434536, 1.62434536, 1.62434536],
[-0.61175641, -0.61175641, -0.61175641, -0.61175641],
[-0.52817175, -0.52817175, -0.52817175, -0.52817175]])
bLifted
array([[-1.07296862, 0.86540763, -2.3015387 , 1.74481176],
[-1.07296862, 0.86540763, -2.3015387 , 1.74481176],
[-1.07296862, 0.86540763, -2.3015387 , 1.74481176]])
aLifted + bLifted
array([[ 0.55137674, 2.48975299, -0.67719333, 3.36915713],
[-1.68472504, 0.25365122, -2.91329511, 1.13305535],
[-1.60114037, 0.33723588, -2.82971045, 1.21664001]])
Let's check cvxpy semi-blindly (we only check dimensions; too lazy to set up a problem and fix variables to inspect the output :-D):
import cvxpy as cp
x = cp.Variable((m, 1))
y = cp.Variable((1, n))
cp.kron(np.ones((1,n)), x) + cp.kron(np.ones((m, 1)), y)
# Expression(AFFINE, UNKNOWN, (3, 4))
# looks good!
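For completeness, a small end-to-end sketch (my addition; the check above stopped at dimensions), plugging the kron form into the objective from the question:

import cvxpy as cp
import numpy as np

m, n = 3, 4
np.random.seed(1)
data = np.random.randn(m, n)

x = cp.Variable((m, 1))
y = cp.Variable((1, n))
# broadcastable sum via kron lifting, as above
broadcast_sum = cp.kron(np.ones((1, n)), x) + cp.kron(np.ones((m, 1)), y)
prob = cp.Problem(cp.Minimize(cp.sum(cp.abs(broadcast_sum + data))))
prob.solve()
print(x.value)
print(y.value)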
Now some caveats:
I don't know how efficiently cvxpy can reason about this matrix form internally.
It is unclear whether it is more efficient than a simple list-comprehension based form using cp.vstack and co (it probably is).
This operation itself kills all sparsity (if both vectors are dense, your matrix is dense).
cvxpy, and more or less all convex-optimization solvers, are based on some sparsity assumption.
Scaling this problem up to machine-learning dimensions will not make you happy.
There is probably a much more concise mathematical theory for your problem than (sparsity-assuming, pretty general) convex optimization (the DCP rules implemented in cvxpy cover a subset of it).
