CVXPY failing randomly on basic quadratic problem - python

I'm finding CVXPY is randomly failing with the following error:
ArpackError: ARPACK error 3: No shifts could be applied during a cycle of the Implicitly restarted
Arnoldi iteration. One possibility is to increase the size of NCV relative to NEV.
The code below is a minimal example: it just does mean-variance optimisation with no constraints, an identity correlation matrix, and a normally distributed mean vector. Roughly one run in every thousand fails. It doesn't seem to matter which solver I ask for, which makes me think the failure happens while setting up the problem.
import cvxpy as cp
import numpy as np

n = 199
np.random.seed(100)
mu = np.random.normal(size=n)
C = np.eye(n)

for repeat in range(1000):
    x = cp.Variable(n)
    mean = x.T @ mu
    variance = cp.quad_form(x, C)
    objective = cp.Maximize(mean - variance)
    constraints = []
    prob = cp.Problem(objective, constraints)
    result = prob.solve()
    print(repeat, end=" ")
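For what it's worth, this ARPACK error usually points at the eigenvalue-based PSD check that cp.quad_form runs on C, rather than at the solver itself, which would explain why changing solvers makes no difference. A sketch of two ways to sidestep that check (the psd_wrap atom is an assumption about a reasonably recent CVXPY version):

# Since C is the identity, the quadratic form is just a sum of squares,
# which avoids the PSD check entirely:
variance = cp.sum_squares(x)

# For a general matrix C that is known to be PSD, recent CVXPY versions
# provide psd_wrap to skip the check:
# variance = cp.quad_form(x, cp.psd_wrap(C))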

Related

Jacobian matrix (conductance matrix) back to nodal analysis equations (Python)

I'm currently doing my master's project on accelerating circuit simulators and I am learning some basics of circuit simulation. My "circuit simulator" code in Python can currently run OP analysis for linear resistive circuits; it builds the MNA matrix and uses an Ax=b solver to find x.
I am trying to add Newton-Raphson to the mix for OP analysis of non-linear resistive circuits. This is where I am a bit stuck. It seems to use the Jacobian matrix (the conductance matrix) and the normal nodal analysis equations for the Ax=b system to be solved. However, I can't find a way of making the code generic without hard-coding the nodal analysis equations. The only way I could think of is expanding the Jacobian matrix (which needs integration for each element inside the matrix). I have tried expanding the Jacobian matrix while multiplying by the variables to be solved, but it's not accurate, as it is not the direct integration of the elements. Is there another way of doing this, or should I just try integrating every element of the Jacobian matrix with respect to the variables of the x array?
The code below is only the Newton-Raphson test for the non-linear circuit; the generic code has not been added yet.
import numpy as np
import time as tm
import matplotlib.pyplot as plt

def F(x):
    return np.array(
        [(x[0] - x[1]) / 1000 + x[2],
         (x[1] - x[0]) / 1000 + 0.001 * x[1]**3,
         x[0] - 1])

def J(x):
    return np.array(
        [[1/1000, -1/1000, 1],
         [-1/1000, (1 + x[1]**2) / 1000, 0],  # Need to add 3 as coefficient of x[1] to simulate the hard-coded function
         [1, 0, 0]])

def Newton_system(x_guess, eps):
    """
    Solve nonlinear system F=0 by Newton's method.
    J is the Jacobian of F. Both F and J must be functions of x.
    At input, x holds the start value. The iteration continues
    until ||F|| < eps.
    """
    RHS = np.array([0, 0, 1])
    x_old = x_guess
    F_x = np.matmul(J(x_old), x_old) - RHS  # "Generic" expanded Jacobian input
    error = 9e9
    # F_value = F(x_old)  # Hard-coded input
    F_value = F_x  # "Generic" expanded Jacobian input
    iteration_counter = 0
    while error > eps and iteration_counter < 5000:
        delta = np.linalg.solve(J(x_old), F_value)
        x_new = x_old - delta
        # F_value = F(x_new)  # Hard-coded input
        F_value = np.matmul(J(x_new), x_new) - RHS  # "Generic" expanded Jacobian input
        error = np.max(np.abs(x_new - x_old))
        x_old = x_new
        iteration_counter += 1
        print(x_new, iteration_counter)
    # Here, either a solution is found, or too many iterations
    if error > eps:
        iteration_counter = -1
    return x_new, iteration_counter

expected = np.array([1, 0.7, -0.003])
st = tm.time()
ans, n = Newton_system(x_guess=np.array([1, 0.6823, -0.003]), eps=1e-9)
et = tm.time()
diff = et - st
print("The amount of time taken is", diff)
Please help :(
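One standard way to avoid hard-coding the Jacobian (a sketch, not necessarily the intended method): build only the residual F(x) generically from the element equations, and approximate J numerically with forward differences. The helper below is hypothetical and reuses the F defined above:

def numerical_jacobian(F, x, h=1e-8):
    # Forward-difference approximation of the Jacobian of F at x:
    # column i is (F(x + h*e_i) - F(x)) / h.
    n = len(x)
    J_approx = np.zeros((n, n))
    F0 = F(x)
    for i in range(n):
        x_step = np.array(x, dtype=float)
        x_step[i] += h
        J_approx[:, i] = (F(x_step) - F0) / h
    return J_approx

Inside the Newton loop, delta = np.linalg.solve(numerical_jacobian(F, x_old), F(x_old)) then replaces the hand-written J(x_old), at the cost of n extra evaluations of F per iteration.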

Python - solving for risk budget portfolio using cvxpy

I'm looking to write code that lets me set risk budget constraints on individual positions in a portfolio, i.e. each position contributes a set amount of risk to the portfolio. I'm looking to do it specifically in CVXPY, as I have noticed SciPy sometimes breaks the constraints.
I have the code below. I was wondering if you could give me some direction, as I have encountered CVXPY's "The objective is not DCP" error and I'm not sure how to correct it.
Please also let me know if I should rephrase my question to make it clearer.
import cvxpy as cp
import numpy as np

# covmat is a (31, 31) numpy array, a covariance matrix calculated from monthly returns
risk_budget = np.repeat(1/31, 31).reshape(-1, 1)

def risk_budget_objective(risk_budget, covmat):
    n = covmat.shape[0]
    # set equal weight
    equal_wts = np.repeat(1 / n, n)
    # weights as a column vector
    wts = cp.Variable((n, 1))
    constraints = [cp.sum(wts) == 1.0]  # weight constraints
    port_variance = cp.square(cp.quad_form(wts, covmat))  # portfolio variance, not volatility
    mrc = covmat @ wts * 12  # marginal risk contribution vector, annualised
    risk_contrib = cp.multiply(mrc, wts) / port_variance  # calculate risk contribution
    mean_square_diff = cp.sum(cp.square(risk_contrib - risk_budget))  # squared differences, summed
    prob = cp.Problem(cp.Minimize(mean_square_diff), constraints)  # minimise squared difference
    prob.solve(solver=cp.SCS)
    if prob.status not in ["infeasible", "unbounded"]:
        solution = wts.value
        return solution
    else:
        print('Problem not feasible... resorting to equal weight...')
        return equal_wts
Looking into the objects, it seems that the problem lies in the following code:
risk_contrib = cp.multiply(mrc, wts) / port_variance
As the curvature is "UNKNOWN" rather than convex.
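The division by port_variance and the squared quadratic form both break DCP, so CVXPY cannot certify the curvature. A common convex alternative is the log-barrier risk-parity formulation (a sketch, not a direct fix of the code above; the function name is hypothetical): minimise 0.5 * w'Σw - b'log(w). At the optimum each position's risk contribution is proportional to its budget b, and the weights can be rescaled to sum to one afterwards.

import cvxpy as cp
import numpy as np

def risk_parity_sketch(covmat, budgets):
    # budgets: 1-D array of risk budgets, e.g. np.repeat(1/31, 31)
    n = covmat.shape[0]
    w = cp.Variable(n, pos=True)
    # Convex, DCP-compliant objective: quadratic term plus log barrier
    objective = cp.Minimize(0.5 * cp.quad_form(w, covmat) - budgets @ cp.log(w))
    prob = cp.Problem(objective)
    prob.solve()
    return w.value / np.sum(w.value)  # rescale so the weights sum to 1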

How to use decision variable in the exponential term while using CVXPY?

The objective function of my optimization problem involves the exponential function of the decision variables. My code is
import cvxpy as cp
import numpy as np
import math
A = np.array([0.805156521,0.464522,0.00452762,0.00047562])
Zmax = 200
Zmin = 25
T = cp.Variable(2)
# Defining the objective function
objective = cp.Minimize((A[0]+A[1])*math.exp(T[0]) + (A[2]+A[3])*math.exp(T[1]))
# Defining the constraints
constraints = []
constraints += [cp.sum(T) == 250]
constraints += [Zmin <= T]
constraints += [T <= Zmax]
prob = cp.Problem(objective, constraints)
prob.solve()
# Print result.
print("\nThe optimal value is", prob.value)
print("A solution T is")
print(T.value)
I am getting the following error on the line where the objective function is defined:
TypeError: must be real number, not index
I am using the default solvers in the CVXPY package. Kindly advise how to tackle this problem. Thank you. (I have already asked the same question here but haven't received a response, so I am asking in this forum.)
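The immediate cause (assuming standard CVXPY behaviour): math.exp expects a plain float, but T[0] is a CVXPY expression. The objective has to be built from CVXPY's own exponential atom, which also keeps the problem DCP-compliant:

objective = cp.Minimize((A[0] + A[1]) * cp.exp(T[0]) + (A[2] + A[3]) * cp.exp(T[1]))

prob.solve() then needs a solver that handles the exponential cone, such as ECOS or SCS; CVXPY picks a suitable installed solver automatically.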

Performance issue with Scipy's solve_bvp and coupled differential equations

I'm facing a problem while trying to implement the coupled differential equation below (also known as single-mode coupling equation) in Python 3.8.3. As for the solver, I am using Scipy's function scipy.integrate.solve_bvp, whose documentation can be read here. I want to solve the equations in the complex domain, for different values of the propagation axis (z) and different values of beta (beta_analysis).
The problem is that it is extremely slow (not manageable) compared with an equivalent implementation in Matlab using the functions bvp4c, bvpinit and bvpset. Evaluating the first few iterations of both executions, they return the same result, except that the resulting mesh is much larger in the Scipy case. The mesh sometimes even saturates at the maximum value.
The system being solved is a'(z) = M(z) a(z) with M(z) = [[-1j*beta(z), ka(z)], [ka(z), 1j*beta(z)]], as implemented below along with the boundary conditions function.
import h5py
import numpy as np
from scipy import integrate

def coupling_equation(z_mesh, a):
    ka_z = k  # Global
    z_a = z   # Global
    a_p = np.empty_like(a).astype(complex)
    for idx, z_i in enumerate(z_mesh):
        beta_zf_i = np.interp(z_i, z_a, beta_zf)  # Get beta at the desired point of the mesh
        ka_z_i = np.interp(z_i, z_a, ka_z)        # Get ka at the desired point of the mesh
        coupling_matrix = np.empty((2, 2), complex)
        coupling_matrix[0] = [-1j * beta_zf_i, ka_z_i]
        coupling_matrix[1] = [ka_z_i, 1j * beta_zf_i]
        a_p[:, idx] = np.matmul(coupling_matrix, a[:, idx])  # Apply the coupling matrix
    return a_p

def boundary_conditions(a_a, a_b):
    return np.hstack(((a_a[0] - 1), a_b[1]))
Moreover, I couldn't find a way to pass k, z and beta_zf as arguments to coupling_equation, given that the fun argument of solve_bvp must be a callable with the parameters (x, y). My approach is to define some global variables, but I would appreciate any help on this too if there is a better solution.
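A closure or functools.partial avoids the globals (a sketch; the parameter-taking variant of the function is hypothetical, and its body uses the vectorised form from the answer at the end of this question). solve_bvp still only ever sees a callable of (x, y):

from functools import partial

def coupling_equation_params(z_mesh, a, k, z, beta_zf):
    # Same computation as coupling_equation, with k, z and beta_zf
    # passed in explicitly instead of read from globals.
    beta_zf_i = np.interp(z_mesh, z, beta_zf)
    ka_z_i = np.interp(z_mesh, z, k)
    a_p = np.empty(a.shape, dtype=np.complex128)
    a_p[0] = (-1j * beta_zf_i) * a[0] + ka_z_i * a[1]
    a_p[1] = ka_z_i * a[0] + (1j * beta_zf_i) * a[1]
    return a_p

# Bind the extra data, leaving a callable of (x, y) as solve_bvp requires:
fun_bound = partial(coupling_equation_params, k=k, z=z, beta_zf=beta_zf)
# or equivalently: fun_bound = lambda x, y: coupling_equation_params(x, y, k, z, beta_zf)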
The analysis function which I am trying to code is:
def analysis(k, z, beta_analysis, max_mesh):
    s11_analysis = np.empty_like(beta_analysis, dtype=complex)
    s21_analysis = np.empty_like(beta_analysis, dtype=complex)
    initial_mesh = np.linspace(z[0], z[-1], 10)  # Initial mesh of 10 samples along L
    mesh = initial_mesh
    # a_init must be complex in order to solve the problem in a complex domain
    a_init = np.vstack((np.ones(np.size(initial_mesh)).astype(complex),
                        np.zeros(np.size(initial_mesh)).astype(complex)))
    for idx, beta in enumerate(beta_analysis):
        print(f"Iteration {idx}: beta_analysis = {beta}")
        global beta_zf
        beta_zf = beta * np.ones(len(z))  # Global variable so as to use it in coupling_equation(x, y)
        a = integrate.solve_bvp(fun=coupling_equation,
                                bc=boundary_conditions,
                                x=mesh,
                                y=a_init,
                                max_nodes=max_mesh,
                                verbose=1)
        # mesh = a.x  # Mesh for the next iteration
        # a_init = a.y  # Initial guess for the next iteration, corresponding to the current solution
        s11_analysis[idx] = a.y[1][0]
        s21_analysis[idx] = a.y[0][-1]
    return s11_analysis, s21_analysis
I suspect that the problem has something to do with the initial guess that is being passed between iterations (see the commented lines inside the loop in the analysis function). I tried to set the solution of one iteration as the initial guess for the next (which should reduce the time needed by the solver), but it is even slower, which I don't understand. Maybe I missed something, because this is my first time trying to solve differential equations.
The parameters used for the execution are the following:
f2 = h5py.File(r'path/to/file', 'r')
k = np.array(f2['k']).squeeze()
z = np.array(f2['z']).squeeze()
f2.close()
analysis_points = 501
max_mesh = 1e6
beta_0 = 3e2
beta_low = 0      # Lower value of the frequency for the analysis
beta_up = beta_0  # Upper value of the frequency for the analysis
beta_analysis = np.linspace(beta_low, beta_up, analysis_points)
s11_analysis, s21_analysis = analysis(k, z, beta_analysis, max_mesh)
Any ideas on how to improve the performance of these functions? Thank you all in advance, and sorry if the question is not well-formulated, I accept any suggestions about this.
Edit: Added some information about performance and sizing of the problem.
In practice, I can't find a relation that determines the number of times coupling_equation is called; it must be a matter of the solver's internal operation. I counted the calls in one iteration by printing a line, and there were 133 (this was one of the fastest). That has to be multiplied by the number of iterations over beta. For the analysed one, the solver returned this:
Solved in 11 iterations, number of nodes 529.
Maximum relative residual: 9.99e-04
Maximum boundary residual: 0.00e+00
The shapes of a and z_mesh are correlated, since z_mesh is a vector whose length corresponds to the size of the mesh, recalculated by the solver each time it calls coupling_equation. Given that a contains the amplitudes of the progressive and regressive waves at each point of z_mesh, the shape of a is (2, len(z_mesh)).
In terms of computation time, I only managed to complete 19 iterations in about 2 hours with Python. The initial iterations were faster, but they take more time as the mesh grows, until the mesh saturates at the maximum allowed value. I think this is because of the value of the input coupling coefficients at that point, because it also happens when no loop over beta_analysis is executed (just the solve_bvp call for the intermediate value of beta). Matlab, in contrast, returned a solution for the entire problem in approximately 6 minutes. If I pass the result of the last iteration as the initial guess (the commented lines in the analysis function), the mesh overflows even faster and it is impossible to get more than a couple of iterations.
Based on semi-random inputs, we can see that max_mesh is sometimes reached. This means that coupling_equation can be called with quite big z_mesh and a arrays. The problem is that coupling_equation contains a slow pure-Python loop iterating over each column of the arrays. You can speed the computation up a lot using Numpy vectorization. Here is an implementation:
def coupling_equation_fast(z_mesh, a):
    ka_z = k  # Global
    z_a = z   # Global
    a_p = np.empty(a.shape, dtype=np.complex128)
    beta_zf_i = np.interp(z_mesh, z_a, beta_zf)  # Get beta at the desired points of the mesh
    ka_z_i = np.interp(z_mesh, z_a, ka_z)        # Get ka at the desired points of the mesh
    # Fast manual matrix multiplication
    a_p[0] = (-1j * beta_zf_i) * a[0] + ka_z_i * a[1]
    a_p[1] = ka_z_i * a[0] + (1j * beta_zf_i) * a[1]
    return a_p
This code provides a similar output with semi-random inputs compared to the original implementation but is roughly 20 times faster on my machine.
Furthermore, I do not know whether max_mesh happens to be big with your inputs too, or whether this is normal/intended. It may make sense to decrease the value of max_mesh in order to reduce the execution time even more.

Nonlinear constraints with scipy

The problem at hand is the optimization of a multivariate function with nonlinear constraints.
There is a differential equation (in its oversimplified form)
dy/dx = y(x)*t(x) + g(x)
I need to minimize the solution of the DE, y(x), by varying t(x).
Since there is physics under the hood, there are constraints on t(x). I have successfully implemented all of them except one:
0 < t(x) < 1 for any x in range [a,b]
For concreteness, t(x) is a general polynomial:
t(x) = a0 + a1*x + a2*x**2 + a3*x**3 + a4*x**4 + a5*x**5
Here x is a fixed numpy.ndarray of floats and the optimization is over the coefficients a. I use scipy.optimize with trust-constr.
What I have tried so far:
Root finding at each step: determine the minimal/maximal values of t(x) using optimize.root and checking for sign changes, then return 0.5 if the constraints are satisfied and numpy.inf (or -1, or anything outside the [0;1] range) if they are not. The optimizer stops early and the function is not minimized properly.
Since x has fixed length and is known, I tried to define a constraint for each point, giving N constraints where N = len(x). This works (or at least appears to), but takes forever even for moderately large N. Also, since x is discrete and non-uniform, I can't be sure that there are no violated constraints for some x in [a,b] between the grid points.
EDIT #1: the minimal reproducible example
import scipy.optimize as optimize
from scipy.optimize import Bounds
import numpy as np

# some function y(x)
x = np.linspace(-np.pi, np.pi, 100)
y = np.sin(x)

# polynomial t(z)
def t(a, z):
    v = 0.0
    for ii in range(len(a)):
        v += a[ii] * z**ii
    return v

# let's minimize the sum
def targetFn(a):
    return np.sum(y * t(a, x))

# polynomial order
polyord = 3

# simple bounds to have reliable results,
# otherwise the solution will grow toward +-infinity
bnd = 10.0
bounds = Bounds([-bnd for i in range(polyord + 1)],
                [bnd for i in range(polyord + 1)])

res = optimize.minimize(targetFn, [1.0 for i in range(polyord + 1)],
                        bounds=bounds)

if np.max(t(res.x, x)) > 200:
    print('max constraint violated!')
if np.min(t(res.x, x)) < -100:
    print('min constraint violated!')
In the reproducible example above, let the constraint be that the value of the polynomial t(a,x) lies in the range [-100; 200] for the given x.
So the question is: how does one properly define a constraint that tells the optimizer the function's values must stay within bounds over the given range of arguments?
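One option that scales much better than N separate scalar constraints (a sketch using scipy's trust-constr machinery): a single vector-valued NonlinearConstraint that evaluates the polynomial at every grid point at once, with scalar bounds broadcast over the whole vector. A denser x grid tightens the discrete approximation of the continuous constraint:

from scipy.optimize import NonlinearConstraint

# t(a, x) evaluated at all grid points, bounded elementwise by [-100, 200]
poly_constraint = NonlinearConstraint(lambda a: t(a, x), -100.0, 200.0)

res = optimize.minimize(targetFn, [1.0 for i in range(polyord + 1)],
                        method='trust-constr',
                        bounds=bounds,
                        constraints=[poly_constraint])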
