Mixed Integer Programming Constraints in CVXPY

Where:
Vc: a 3x3 matrix of complex constants
C: a complex scalar constant
The problem is to find a boolean matrix X that minimizes
Residules = cp.norm(cp.sum(cp.multiply(Vc, X)) - C)
The following code works:
import numpy as np
import cvxpy as cp
Vc = np.random.random((3, 3))*10 + np.random.random((3, 3))*10j
C = 3 + 4j
X = cp.Variable((3, 3), boolean=True)
Residules = cp.norm(cp.sum(cp.multiply(Vc, X)) - C)
Objective = cp.Minimize(Residules)
Prob1 = cp.Problem(Objective)
Prob1.solve()
X = np.array(X.value)
print(np.round(X))
print(Prob1.value)
The output:
[[ 1. 0. 0.]
[ 1. -0. -0.]
[-0. 1. -0.]]
1.39538277332097
My question:
I want to put a constraint on the problem so that in each column of the matrix X at most one element can be 1 and the rest must be 0.
I tried :
Const1 = cp.sum(X, 0) <= 1
Prob1 = cp.Problem(Objective, [Const1])
Prob1.solve()
The following error occurred:
File
"path\Anaconda3\lib\site-packages\cvxpy\reductions\complex2real\complex2real.py",
line 95, in invert
dvars[vid] = solution.dual_vars[cid]
KeyError: 11196
Is there another way to set this constraint?

I separated the real part from the imaginary part, and I think it works.
import numpy as np
import cvxpy as cp
Vr = np.random.random((3, 3))  # real parts of the complex constants
Vi = np.random.random((3, 3))  # imaginary parts
Cr = 3  # real part of the target scalar
Ci = 4  # imaginary part
X = cp.Variable((3, 3), boolean=True)
Real = cp.sum(cp.multiply(Vr, X)) - Cr
Imag = cp.sum(cp.multiply(Vi, X)) - Ci
Residules = cp.norm(cp.hstack([Real, Imag]), 2)
Objective = cp.Minimize(Residules)
const1 = [cp.sum(X, axis=0) <= 1]
Prob1 = cp.Problem(Objective, const1)
Prob1.solve()
X = np.array(X.value)
print(np.round(X))
print(Prob1.value)
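As a quick sanity check (a small sketch reusing the variables above), the optimal value can be recomputed with plain numpy, since the 2-norm of the stacked real and imaginary residuals equals the modulus of the complex residual:
Xv = np.round(X)  # X already holds the solved values as a numpy array here
res = abs(np.sum((Vr + 1j*Vi) * Xv) - (Cr + 1j*Ci))
print(res)  # should match Prob1.value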

Related

How to divide a numpy array elementwise by another numpy array of lower dimension

Let's say I have a numpy array [[0,1],[3,4],[5,6]] and want to divide it elementwise by [1,2,0].
The desired result is [[0,1],[1.5,2],[0,0]]: wherever the division is by zero, the result should be zero. I only found a way to do this for a pandas DataFrame with the div method, but couldn't find one for numpy arrays, and converting to a DataFrame does not seem like a good solution.
You could wrap your operation with np.where to assign the invalid values to 0 (here x is the 2-D array and d the divisor vector):
>>> import numpy as np
>>> x = np.array([[0, 1], [3, 4], [5, 6]])
>>> d = np.array([1, 2, 0])
>>> np.where(d[:,None], x/d[:,None], 0)
array([[0. , 1. ],
[1.5, 2. ],
[0. , 0. ]])
This will still raise a warning though because we're not avoiding the division by zero:
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:1:
RuntimeWarning: divide by zero encountered in `true_divide`
"""Entry point for launching an IPython kernel.
A better way is to provide a mask to np.divide with the where argument, together with an out array initialized to zero (without out, the masked-out entries are left undefined rather than set to 0):
>>> np.divide(x, d[:,None], out=np.zeros(x.shape), where=d[:,None] != 0)
array([[0. , 1. ],
[1.5, 2. ],
[0. , 0. ]])
I have worked out this solution (with a1 the 2-D array and a2 the divisor vector):
[list(x/y) if y != 0 else len(x)*[0] for x, y in zip(a1, a2)]

Difference between scipy.linalg.expm versus hand-coded one

I was trying to implement the matrix exponential function as in scipy.linalg.expm. I took inspiration from kaityo256's GitHub repository and wrote the following.
from scipy.linalg import expm
from scipy.linalg import eigh
from scipy.linalg import inv
from math import exp as math_exp
from numpy import array, zeros
from numpy.random import random_sample
from numpy.testing import assert_allclose
def diag2sqr(x):
    '''Makes a square matrix from a diagonal one.
    Takes a 1d matrix. Determines its data type.
    Finds out the shape of the 1d matrix.
    Makes an empty square matrix with both
    dimensions equal to the largest (nonzero) dimension of
    the 1d matrix. It then fills the elements of the
    1d matrix into the diagonal slots of the empty
    square one.
    Parameters
    ----------
    x : ndarray
        ndarray to be converted to a square ndarray
    Returns
    -------
    xsqr : ndarray
        ndarray with the same diagonal as x;
        all other elements are zero;
        dtype same as that of x
    '''
    x_flat = x.ravel()
    xsqr = zeros((x_flat.shape[0], x_flat.shape[0]), dtype=x.dtype)
    # Making the empty matrix
    for i in range(x_flat.shape[0]):
        xsqr[i, i] = x_flat[i]
        # filling up the ith element
    print('xsqr', xsqr)
    return xsqr
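As an aside, numpy's built-in diag builds the same square matrix from a 1-D array, so the helper above is essentially equivalent to this one-liner:
from numpy import diag
xsqr = diag(x.ravel())  # same diagonal matrix, same dtype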
def kaityo_expm(x):
    '''Exponentiates an ndarray (kaityo).
    Exponentiates an ndarray in the most naive way.
    Parameters
    ----------
    x : ndarray
        The ndarray to be exponentiated
    Returns
    -------
    kexpm : ndarray
        x after exponentiating
    '''
    rx, ux = eigh(x)
    # Find eigenvalues and eigenvectors;
    # the eigenvectors compose a unitary matrix
    ux_inv = inv(ux)
    # Inverse of the unitary
    # tx = diag([array([math_exp(i) for i in rx]).ravel()])
    # tx = array([math_exp(i) for i in rx])
    tx = diag2sqr(array([math_exp(i) for i in rx]))
    # Constructing the diagonal matrix of exponentiated eigenvalues
    kexpm1 = tx @ ux_inv
    kexpm = ux @ kexpm1
    return kexpm
Afterwards, I tried to test the above code versus scipy.linalg.expm.
x = random_sample((10, 10))
assert_allclose(expm(x), kaityo_expm(x))
This leads to the following output.
AssertionError:
Not equal to tolerance rtol=1e-07, atol=0
Mismatch: 100%
Max absolute difference: 7.04655733
Max relative difference: 0.59875635
x: array([[18.032424, 16.224408, 12.432163, 16.614248, 12.85653 , 13.705387,
15.096966, 10.577946, 18.399573, 17.938062],
[16.352809, 17.525898, 12.79079 , 16.295562, 13.512996, 14.407979,...
y: array([[18.649103, 13.157682, 11.264763, 16.099163, 15.2293 , 17.854499,
11.691586, 13.412066, 15.023189, 15.598455],
[13.157682, 13.612502, 9.628261, 12.659313, 13.559437, 13.382417,..
Obviously, the two implementations differ.
The questions are as follows:
Is it acceptable for them to differ?
Is my implementation wrong?
If my implementation is wrong, how do I fix it?
If my implementation is correct, when is it safe to use scipy.linalg.expm?
I have seen the following questions:
Matrix exponentiation with scipy: expm, expm2 and expm3
From a mathematical standpoint, the exponential of a matrix is defined through the Taylor series of the exponential:
exp(A) = I + A + A^2/2! + A^3/3! + ... = sum_{k>=0} A^k / k!
If A is a diagonal square matrix, this reduces to exponentiating the diagonal entries:
exp(A) = diag(e^{a_11}, ..., e^{a_nn})
The problem arises when A is a generic square matrix, so before taking the exponential you need to diagonalize it using its eigenvalues and eigenvectors:
A = U Lambda U^{-1}
with U the matrix of eigenvectors and Lambda the matrix with the eigenvalues on the diagonal.
At this point we have the exponential of a matrix:
exp(A) = U exp(Lambda) U^{-1}
Now let's implement this result in a simple script:
>>> import numpy as np
>>> import scipy.linalg as ln
>>> A = [[2/3, -4/3, 2],
[5/6, 4/3, -2],
[5/6, -2/3, 0]]
>>> A = np.matrix(A)
>>> print(A)
[[ 0.66666667 -1.33333333 2. ]
[ 0.83333333 1.33333333 -2. ]
[ 0.83333333 -0.66666667 0. ]]
>>> eigvalue, eigvectors = np.linalg.eig(A)
>>> print("eigvalue: ", eigvalue)
>>> print("eigvectors:")
>>> print(eigvectors)
eigvalue: [ 1. -1. 2.]
eigvectors:
[[ 0.81649658 0.27216553 0.87287156]
[ 0.40824829 -0.68041382 -0.21821789]
[ 0.40824829 -0.68041382 0.43643578]]
>>> e_Lambda = np.eye(np.size(A, 0))*(np.exp(eigvalue))
>>> print(e_Lambda)
[[2.71828183 0. 0. ]
[0. 0.36787944 0. ]
[0. 0. 7.3890561 ]]
>>> e_A = eigvectors*e_Lambda*eigvectors.I
>>> print(e_A)
[[ 2.3265481 -6.22769903 7.01116649]
[ 0.97933433 4.27520659 -3.51559341]
[ 0.97933433 -3.11384951 3.87346269]]
>>> e_A2 = ln.expm(A)
>>> print(e_A2)
[[ 2.3265481 -6.22769903 7.01116649]
[ 0.97933433 4.27520659 -3.51559341]
[ 0.97933433 -3.11384951 3.87346269]]
>>> np.testing.assert_allclose(e_A, e_A2)
>>> print(e_A - e_A2)
[[-1.77635684e-15 1.77635684e-15 -8.88178420e-16]
[ 4.44089210e-16 -1.77635684e-15 8.88178420e-16]
[-2.22044605e-16 0.00000000e+00 4.44089210e-16]]
We see that the result is basically the same, so I think it's safe to use scipy.linalg.expm for matrix exponentiation.
I created a repo with the notebook for further testing.
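One detail worth flagging about the failing test in the question: scipy.linalg.eigh assumes a symmetric/Hermitian input and silently uses only one triangle of the matrix, while random_sample((10, 10)) is almost surely not symmetric. A sketch of a fairer test, symmetrizing the matrix first:
x = random_sample((10, 10))
x = 0.5 * (x + x.T)  # symmetrize so eigh's assumption holds
assert_allclose(expm(x), kaityo_expm(x))  # should now agree within tolerance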

Why am I getting a "disciplined convex programming" error when using cvxpy?

import cvxpy as cp
import numpy as np
n = 3
PP = cp.Variable((n,n),"PP")
KK = np.array([[2, 1, 3], [1, 2, 1], [3, 1, 2]])
s = np.array([[.1, .4, .5]]).T
t = np.array([[.4, .2, .4]]).T
e = np.ones((n,1))
x = PP.T @ e - s
y = PP @ e - t
for b in range(1, 21):
    obj = (1/4/b) * (cp.quad_form(x, KK) + cp.quad_form(y, KK)) - cp.trace(KK @ PP)
    prob = cp.Problem(cp.Minimize(obj), [PP >= 0, cp.sum(PP) == 1])
    obj = prob.solve()
    print("status:", prob.status)
    print("obj:", obj)
    print(PP.value)
When I run this, I get
cvxpy.error.DCPError: Problem does not follow DCP rules. Specifically:
The objective is not DCP. Its following subexpressions are not:
QuadForm(PP.T * [[1.]
[1.]
[1.]] + -[[0.1]
[0.4]
[0.5]], [[2. 1. 3.]
[1. 2. 1.]
[3. 1. 2.]])
I don't see why I'm getting this error when my matrix KK is clearly PSD. Why is this happening?
Cross-posted at https://scicomp.stackexchange.com/q/34657/34383
EDIT: I spoke too soon. Your matrix KK is not PSD (it has an eigenvalue -1). For people who see this issue with a matrix that should mathematically be PSD, I've left my original answer below.
Your matrix is likely, numerically, not quite PSD, even though mathematically it is. This is a limitation of CVXPY's quad_form atom (that we may try to address later).
For now, you can take a (matrix) square root sqrt_K of K (using, e.g., scipy.linalg.sqrtm) and replace the quad_form atom with cp.sum_squares(sqrt_K @ y).
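A minimal sketch of that workaround (assuming K is a matrix that should mathematically be PSD; here one is built by construction, and tiny negative eigenvalues are clipped so the square root stays real):
import numpy as np
import scipy.linalg
import cvxpy as cp
M = np.random.random((3, 3))
K = M @ M.T  # PSD by construction; stand-in for your should-be-PSD matrix
w, Q = np.linalg.eigh(K)
K_psd = Q @ np.diag(np.clip(w, 0, None)) @ Q.T  # clip tiny negative eigenvalues
sqrt_K = np.real(scipy.linalg.sqrtm(K_psd))
y = cp.Variable(3)
quad = cp.sum_squares(sqrt_K @ y)  # DCP-compliant replacement for cp.quad_form(y, K)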

AgglomerativeClustering on precomputed Sparse Matrix

In my current approach, I have
from scipy.sparse import csr_matrix
from sklearn.cluster import AgglomerativeClustering
import pandas as pd
s = pd.DataFrame([[0.8, 0., 3.],
                  [1., 1., 2.],
                  [0.3, 3., 4.]], columns=['dist', 'v1', 'v2'])
N = 5  # number of items; must exceed the largest index in v1/v2
sparseD = csr_matrix((1 - s['dist'], (s['v1'].values, s['v2'].values)), shape=(N, N))
agg = AgglomerativeClustering(n_clusters=None, affinity='precomputed',
                              linkage='complete', distance_threshold=.25)
agg.fit_predict(sparseD)
The last line raises
TypeError: cannot take a sparse matrix.
If I cast the data with toarray(), the code works and produces the expected output, but it uses a lot of memory and is slow at the real data size, 61K x 61K.
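For reference, that dense workaround is just (fine for the toy example, prohibitive at 61K x 61K):
agg.fit_predict(sparseD.toarray())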
I am wondering if there is another library (or scikit-learn API) that can do the same linkage clustering on a precomputed, sparse distance matrix -- if there were no entry for a given (element1, element2) pair, the API would simply never link them, and everything else would work the same.

Eigenvectors in Numpy: Very bad numerics? Did I do something wrong?

For some calculations I need an eigenvalue decomposition. I tried out numpy's functions and noticed some very bad behavior! Look at this:
import numpy as np
N = 3
A = np.matrix(np.random.random([N,N]))
A = 0.5*(A.H + A)  # Hermitian part
la, V = np.linalg.eig(A)
VI = np.matrix(np.linalg.inv(V))
V = np.matrix(V)
Edit: I chose a Hermitian matrix now, so it is normal.
The mathematics says that we should have VI * V.H = 1 and V.H * A * V = VI * A * V = D, where D is the diagonal matrix of the eigenvalues. The result I got for a random matrix was:
print(A.H*A - A*A.H)
[[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]
this shows that A is normal.
print(V.H*A*V)
[[ 1.71513832e+00 5.55111512e-17 -1.11022302e-16]
[ -1.11022302e-16 -5.17694280e-01 0.00000000e+00]
[ -7.63278329e-17 -4.51028104e-17 1.28559996e-01]]
print(VI*A*V)
[[ 1.71513832e+00 -2.77555756e-16 -2.22044605e-16]
[ 7.49400542e-16 -5.17694280e-01 -4.16333634e-17]
[ -3.33066907e-16 1.70002901e-16 1.28559996e-01]]
These two work correctly, since the off-diagonal entries are very small and the diagonal holds the eigenvalues.
print(VI*V.H)
[[ 0.50868822 -0.57398479 0.64169912]
[ 0.16362266 0.79620605 0.58248052]
[-0.84525968 -0.19130446 0.49893755]]
This should be the identity, but it is far from it.
So folks, now tell me: what went wrong in computing the eigenvectors, even in this small example? Can anybody tell me when I have to be careful when using these functions, and what I can do about this large mismatch?
Quote from numpy.linalg.eig documentation:
Likewise, the (complex-valued) matrix of eigenvectors v is unitary if the matrix a is normal, i.e., if dot(a, a.H) = dot(a.H, a), where a.H denotes the conjugate transpose of a.
Obviously, in the example you have, A^H A != A A^H, so the matrix V is not unitary.
Therefore, V.T.conj() is not related to the inverse of V.
The most common case where this assumption is correct is for Hermitian matrices.
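A quick sketch of that check for a Hermitian (here real symmetric) matrix, assuming distinct eigenvalues; with repeated eigenvalues eig need not return an orthonormal basis, and numpy.linalg.eigh is the better tool:
import numpy as np
B = np.random.random((3, 3))
A = 0.5 * (B + B.T)  # real symmetric, hence normal
la, V = np.linalg.eig(A)
print(np.allclose(V.conj().T @ V, np.eye(3)))  # True: V is unitary
print(np.allclose(np.linalg.inv(V), V.conj().T))  # True: V.H is the inverse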
