cvxpy - How to obtain the variable value after each iteration? - python

I am using cvxpy to solve a second order cone program. I used the boilerplate code from the cvxpy website (the SOCP example page). I do not know how to obtain the variable value after each iteration...
Code from the link:
# Import packages.
import cvxpy as cp
import numpy as np
# Generate a random feasible SOCP.
m = 3
n = 10
p = 5
n_i = 5
f = np.random.randn(n)
A = []
b = []
c = []
d = []
x0 = np.random.randn(n)
for i in range(m):
    A.append(np.random.randn(n_i, n))
    b.append(np.random.randn(n_i))
    c.append(np.random.randn(n))
    d.append(np.linalg.norm(A[i] @ x0 + b[i], 2) - c[i].T @ x0)
# Define and solve the CVXPY problem.
x = cp.Variable(n)
# We use cp.SOC(t, x) to create the SOC constraint ||x||_2 <= t.
soc_constraints = [
    cp.SOC(c[i].T @ x + d[i], A[i] @ x + b[i]) for i in range(m)
]
prob = cp.Problem(cp.Minimize(f.T @ x), soc_constraints)
prob.solve()
# Print result.
print("The optimal value is", prob.value)
print("A solution x is")
print(x.value)
x.value here only gives the variable value after the solver has finished all of its iterations. I want x.value after each iteration.
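One workaround sketch (my own illustration, not from the original question, and assuming a first-order solver such as SCS is acceptable): CVXPY itself does not expose per-iteration values, but you can re-solve with an increasing iteration cap and record x.value each time. The solver restarts at every cap, so this only gives a rough picture of how the iterates evolve.
# Hypothetical workaround: cap SCS at k iterations and record the result.
trajectory = []
for k in range(1, 51):
    prob.solve(solver=cp.SCS, max_iters=k)
    if x.value is not None:
        trajectory.append(x.value.copy())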

Related

CVXPY numerical instability

I noticed that if you add to your objective a term multiplied by zero, it affects the solution significantly. I found this bug when using cvxpy on my dataset, which of course I can't upload. But here is an example from their official documentation: https://www.cvxpy.org/examples/basic/sdp.html
import cvxpy as cp
import numpy as np
# Generate a random SDP.
n = 3
p = 3
np.random.seed(1)
C = np.random.randn(n, n)
A = []
b = []
for i in range(p):
    A.append(np.random.randn(n, n))
    b.append(np.random.randn())
# Define and solve the CVXPY problem.
# Create a symmetric matrix variable.
X = cp.Variable((n,n), symmetric=True)
# The operator >> denotes matrix inequality.
constraints = [X >> 0]
constraints += [
    cp.trace(A[i] @ X) == b[i] for i in range(p)
]
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)),
                  constraints)
prob.solve()
# Print result.
print("The optimal value is", prob.value)
print("A solution X is")
print(X.value)
Now do the same, but change the objective to
prob = cp.Problem(cp.Minimize(cp.trace(C @ X) + 0 * cp.trace(C @ X)**9),
                  constraints)
You will see that the value of X changes.

Problem using GEKKO variables in a self-defined function in Python

I tried to use GEKKO variables in a self-defined function in Python, but I keep getting errors that I cannot find the reason for. Could you please help me out?
The whole code is too long, so I only picked out the important lines to show the problem.
index = 0
na = 3
inter = np.zeros((na,na))
intera = np.zeros(na*na)
# all the other unmentioned parameters are constant.
def myfunction(x1, x2):
    ...
    gvol_fv = A + B * x1 + C * x1 ** 2
    for i in range(na):
        for j in range(na):
            print(index, aij[index], bij[index], cij[index])
            intera[index] = aij[index] + bij[index] * x1 + cij[index] * x1**2
            inter[i][j] = math.exp((aij[index] + bij[index] * x1 + cij[index] * x1**2.0 / 1000.0) / x1)
            index = index + 1
    print(index)
    ...
    return [ac1, ac2]  # ac1 and ac2 are very complicated variables.
x1 = m.Const(300.0)
x21, x22 = [m.Var(0.01, 0.0, 1.0) for i in range(2)]
mf_x21_1 = myfunction(x1,x21)[0]
mf_x21_2 = myfunction(x1,x21)[1]
mf_x22_1 = myfunction(x1,x22)[0]
mf_x22_2 = myfunction(x1,x22)[1]
m.Equation(mf_x21_1==mf_x22_1)
m.Equation(mf_x21_2==mf_x22_2)
m.options.IMODE = 1
m.solve()
The errors are as follows:
For intera[index]:
    ValueError: setting an array element with a sequence.
For inter[i][j]:
    TypeError: a float is required
Unfortunately Gekko doesn't have full support for numpy arrays right now, and the error comes from trying to insert a Gekko variable into a numpy array. To make an array of Gekko variables, you need to use either Gekko arrays or nested lists. Here's an example of both approaches:
from gekko import GEKKO
m = GEKKO()
ni = 3 # number of rows
nj = 2 # number of columns
# best method: use m.Array function
x = m.Array(m.Var,(ni,nj))
m.Equations([x[i][j]==i*j+1 for i in range(ni) for j in range(nj)])
# another way: list comprehensions
y = [[m.Var() for j in range(nj)] for i in range(ni)]
for i in range(ni):
    for j in range(nj):
        m.Equation(x[i][j]**2 == y[i][j])
# summation
z = m.Var()
m.Equation(z==sum([sum([x[i][j] for i in range(ni)]) for j in range(nj)]))
m.solve()
print('x:')
print(x)
print('y=x**2:')
print(y)
print('z')
print(z.value)
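Applying this to the original question, here is a sketch (my own illustration with placeholder aij, bij, cij coefficients, since the full code was not posted) of building the inter matrix from Gekko expressions. The key changes are storing the entries in a Gekko array instead of a numpy float array, and using m.exp() instead of math.exp(), which cannot evaluate a Gekko expression and is likely what raised "TypeError: a float is required".
from gekko import GEKKO
import numpy as np
m = GEKKO()
na = 3
aij = np.ones(na * na)  # placeholder coefficients, not the original data
bij = np.ones(na * na)
cij = np.ones(na * na)
x1 = m.Const(300.0)
# Gekko array instead of np.zeros((na, na)), so it can hold Gekko expressions
inter = m.Array(m.Var, (na, na))
index = 0
for i in range(na):
    for j in range(na):
        # m.exp() instead of math.exp() so Gekko can build the expression
        m.Equation(inter[i][j] ==
                   m.exp((aij[index] + bij[index] * x1
                          + cij[index] * x1**2 / 1000.0) / x1))
        index += 1
m.solve(disp=False)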

Row and column restrictions in Python

I have a list of lists m which I need to modify.
I need the sum of each row to be greater than A and the sum of each column to be less than B.
I have something like this:
from random import randint

x = 5  # or other number, not relevant
rows = len(m)
cols = len(m[0])
for r in range(rows):
    while sum(m[r]) < A:
        c = randint(0, cols-1)
        m[r][c] += x
for c in range(cols):
    cant = sum([m[r][c] for r in range(rows)])
    while cant > B:
        r = randint(0, rows-1)
        if m[r][c] >= x:  # I don't want negatives
            m[r][c] -= x
My problem is: I need to satisfy both conditions, and this way, after the second for loop I can't be sure the first condition is still met.
Any suggestions on how to satisfy both conditions, ideally with good performance? I could definitely consider using numpy.
Edit (an example)
# input
m = [[0, 0, 0],
     [0, 0, 0]]
A = 20
B = 25
# one desired output (since it chooses random positions)
m = [[10, 0, 15],
     [15, 0, 5]]
I may need to add:
This is for generating the random initial population of a genetic algorithm; the restrictions make each individual a feasible solution, and I would need to run this about 80 times to get different possible solutions.
Something like this should do the trick:
import numpy
from scipy.optimize import linprog

A = 10
B = 20
m = 2
n = m * m

# the coefficients of a linear function to minimize.
# Setting this to all ones minimizes the sum of all variable
# values in the matrix, which solves the problem, but see below.
c = numpy.ones(n)

# the constraint matrix.
# This is matrix-multiplied with the current solution candidate
# to form the left hand side of a set of normalized
# linear inequality constraint equations, i.e.
#
# x_0 * A_ub[0][0] + x_1 * A_ub[0][1] <= b_0
# x_0 * A_ub[1][0] + x_1 * A_ub[1][1] <= b_1
# ...
A_ub = numpy.zeros((2 * m, n))

# row sums. Since the <= inequality is a fixed component,
# we just multiply everything by (-1), i.e. we demand that
# the negative sums are smaller than the negative limit -A.
#
# Assign row ranges all at once, because numpy can do this.
for r in range(0, m):
    A_ub[r][r * m:(r + 1) * m] = -1

# We want the sum of the x in each (flattened)
# column to be smaller than B.
#
# The manual stepping for the column sums in row-major encoding
# is a little bit annoying here.
for r in range(0, m):
    for j in range(0, m):
        A_ub[r + m][r + m * j] = 1

# the actual upper limits for the normalized inequalities.
b_ub = [-A] * m + [B] * m

# hand the linear program to scipy
solution = linprog(c, A_ub=A_ub, b_ub=b_ub)

# bring the solution into the desired matrix form
print(numpy.reshape(solution.x, (m, m)))
Caveats
I use <=, not < as indicated in your question, because that's what the solver supports.
This minimizes the total sum of all values in the target vector. For your use case, you probably want to minimize the distance to the original sample, which the linear program cannot handle, since neither the squared error nor the absolute difference can be expressed using a linear combination (which is what c stands for). For that, you will probably need to go to full minimize().
Still, this should give you a rough idea.
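As a rough sketch of that minimize() route (my own illustration, not part of the original answer), the row and column conditions can be written as inequality constraints and the squared distance to the original matrix used as the objective:
import numpy as np
from scipy.optimize import minimize

m0 = np.array([[0., 0., 0.],
               [0., 0., 0.]])
A, B = 20, 25
rows, cols = m0.shape

def objective(v):
    # squared distance to the original (flattened) matrix
    return np.sum((v - m0.ravel()) ** 2)

constraints = (
    # each row sum must be at least A ('ineq' means fun(v) >= 0)
    [{'type': 'ineq', 'fun': lambda v, r=r: v.reshape(rows, cols)[r].sum() - A}
     for r in range(rows)] +
    # each column sum must be at most B
    [{'type': 'ineq', 'fun': lambda v, c=c: B - v.reshape(rows, cols)[:, c].sum()}
     for c in range(cols)])

res = minimize(objective, x0=m0.ravel(), bounds=[(0, None)] * (rows * cols),
               constraints=constraints)
print(res.x.reshape(rows, cols))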
A NumPy solution:
import numpy as np

val = B / len(m)             # column sums <= B
assert val * len(m[0]) >= A  # row sums >= A

# create a float array shaped like m, filled with val
# (dtype=float avoids integer truncation, since m holds integers)
arr = np.empty_like(m, dtype=float)
arr[:] = val
I chose to ignore the original content of m - it's all zero in your example anyway.
from random import randint

m = [[0, 0, 0],
     [0, 0, 0]]
A = 20
B = 25
x = 1  # or other number, not relevant
rows = len(m)
cols = len(m[0])

def runner(list1, a1, b1, x1):
    # deep copy of the rows, so a retry really starts from the original data
    list1_backup = [row[:] for row in list1]
    rows = len(list1)
    cols = len(list1[0])
    for r in range(rows):
        while sum(list1[r]) <= a1:
            c = randint(0, cols-1)
            list1[r][c] += x1
    for c in range(cols):
        cant = sum([list1[r][c] for r in range(rows)])
        while cant >= b1:
            r = randint(0, rows-1)
            if list1[r][c] >= x1:  # I don't want negatives
                list1[r][c] -= x1
            # recompute the column sum, otherwise the loop never terminates
            cant = sum([list1[r][c] for r in range(rows)])
    good_a_int = 0
    for r in range(rows):
        test1 = sum(list1[r]) > a1
        good_a_int += 0 if test1 else 1
    if good_a_int == 0:
        return list1
    else:
        return runner(list1=list1_backup, a1=a1, b1=b1, x1=x1)

m2 = runner(m, A, B, x)
for row in m2:
    print(','.join(map(lambda v: "{:>3}".format(v), row)))

ValueError: x and y must have the same first dimension

I am trying to implement a finite difference approximation to solve the Heat Equation, u_t = k * u_{xx}, in Python using NumPy.
Here is a copy of the code I am running:
## This program is to implement a Finite Difference method approximation
## to solve the Heat Equation, u_t = k * u_xx,
## in 1D w/out sources & on a finite interval 0 < x < L. The PDE
## is subject to B.C: u(0,t) = u(L,t) = 0,
## and the I.C: u(x,0) = f(x).
import numpy as np
import matplotlib.pyplot as plt
# parameters
L = 1 # length of the rod
T = 10 # terminal time
N = 10
M = 100
s = 0.25
# uniform mesh
x_init = 0
x_end = L
dx = float(x_end - x_init) / N
x = np.arange(x_init, x_end, dx)
x[0] = x_init
# time discretization
t_init = 0
t_end = T
dt = float(t_end - t_init) / M
t = np.arange(t_init, t_end, dt)
t[0] = t_init
# Boundary Conditions
for m in range(0, M):
    t[m] = m * dt
# Initial Conditions
for j in range(0, N):
    x[j] = j * dx
# definition of solution u(x,t) to u_t = k * u_xx
u = np.zeros((N, M+1)) # array to store values of the solution
# Finite Difference Scheme:
u[:,0] = x**2 #initial condition
for m in range(0, M):
    for j in range(1, N-1):
        if j == 1:
            u[j-1, m] = 0  # Boundary condition
        elif j == N-1:
            u[j+1, m] = 0
        else:
            u[j, m+1] = u[j, m] + s * (u[j+1, m] -
                                       2 * u[j, m] + u[j-1, m])
print(u)  # t, x
plt.plot(u, t)  # this is the line that raises the ValueError
#plt.show()
I think my code is working properly and producing output. I want to plot the solution u versus t (my time vector). If I can plot the graph, I can check whether my numerical approximation agrees with the expected behaviour of the Heat Equation. However, I am getting the error "x and y must have the same first dimension". How can I correct this issue?
An additional question: am I better off making an animation with matplotlib.animation instead of using matplotlib.pyplot?
Thanks so much for any and all help! It is very greatly appreciated!
Okay, so I had a "brain dump" and tried plotting u vs. t, sort of forgetting that u, being the solution to the Heat Equation (u_t = k * u_{xx}), is defined as u(x,t), so it already has values for each time. I made the following correction to my code:
print(u)  # t, x
plt.plot(u)
plt.show()
And now my program finally displays an image. It is absolutely beautiful, isn't it?
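For reference, here is a sketch (my own addition, reusing the arrays defined in the question) that plots the solution explicitly against the time vector, which is what the original plt.plot(u, t) call was after. Since u has shape (N, M+1) and t has length M, dropping the last column of u makes the first dimensions agree and avoids the ValueError:
import matplotlib.pyplot as plt
# u is (N, M+1) and t is (M,); plot u[j, :M] against t so the lengths match
for j in range(N):
    plt.plot(t, u[j, :M], label="x = %.2f" % x[j])
plt.xlabel("t")
plt.ylabel("u(x, t)")
plt.legend()
plt.show()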

From CVX to CVXPY or CVXOPT

I've been trying to port some code from Matlab to Python. I have the same convex optimization problem working in Matlab, but I'm having problems porting it to either CVXPY or CVXOPT.
n = 1000;
i = 20;
y = rand(n,1);
A = rand(n,i);
cvx_begin
    variable x(n);
    variable lambda(i);
    minimize(sum_square(x-y));
    subject to
        x == A*lambda;
        lambda >= zeros(i,1);
        lambda'*ones(i,1) == 1;
cvx_end
This is what I tried with Python and CVXPY.
import numpy as np
from cvxpy import *
# Problem data.
n = 100
i = 20
np.random.seed(1)
y = np.random.randn(n)
A = np.random.randn(n, i)
# Construct the problem.
x = Variable(n)
lmbd = Variable(i)
objective = Minimize(sum_squares(x - y))
constraints = [x == np.dot(A, lmbd),
               lmbd <= np.zeros(itr),
               np.sum(lmbd) == 1]
prob = Problem(objective, constraints)
print("status:", prob.status)
print("optimal value", prob.value)
Nonetheless, it's not working. Do any of you have an idea how to make it work? I'm pretty sure my problem is in the constraints. It would also be nice to have it in CVXOPT.
I think I got it; I had one of the constraints wrong =). I added a random seed in order to compare the results and check that they are in fact the same in both languages. I leave the code here in case it is useful for somebody someday ;)
Matlab
rand('twister', 0);
n = 100;
i = 20;
y = rand(n,1);
A = rand(n,i);
cvx_begin
    variable x(n);
    variable lmbd(i);
    minimize(sum_square(x-y));
    subject to
        x == A*lmbd;
        lmbd >= zeros(i,1);
        lmbd'*ones(i,1) == 1;
cvx_end
CVXPY
import numpy as np
import cvxpy as cp
# random seed
np.random.seed(0)
# Problem data.
n = 100
i = 20
y = np.random.rand(n)
# A = np.random.rand(n, i) # normal
A = np.random.rand(i, n).T # in this order to test random numbers
# Construct the problem.
x = cp.Variable(n)
lmbd = cp.Variable(i)
objective = cp.Minimize(cp.sum_squares(x - y))
constraints = [x == A @ lmbd,
               lmbd >= np.zeros(i),
               cp.sum(lmbd) == 1]
prob = cp.Problem(objective, constraints)
result = prob.solve(verbose=True)
CVXOPT is pending.....
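As a rough starting point for the CVXOPT part (my own sketch, not from the original poster): substituting x = A*lmbd turns the problem into minimizing ||A*lmbd - y||^2 subject to lmbd >= 0 and sum(lmbd) == 1, which is a standard quadratic program of the form 0.5*lmbd'*P*lmbd + q'*lmbd that cvxopt.solvers.qp can handle directly.
import numpy as np
from cvxopt import matrix, solvers

np.random.seed(0)
n, i = 100, 20
y = np.random.rand(n)
A = np.random.rand(i, n).T

P = matrix(2.0 * A.T @ A)        # quadratic term: lmbd' (A'A) lmbd
q = matrix(-2.0 * A.T @ y)       # linear term:   -2 y'A lmbd
G = matrix(-np.eye(i))           # -lmbd <= 0, i.e. lmbd >= 0
h = matrix(np.zeros(i))
Aeq = matrix(np.ones((1, i)))    # sum(lmbd) == 1
beq = matrix(1.0)

sol = solvers.qp(P, q, G, h, Aeq, beq)
lmbd = np.array(sol['x']).flatten()
x = A @ lmbd                     # recover x from lmbd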
