MINRES implementation in Python

Is there any Python implementation of the MINRES pseudoinversion algorithm that can deal with Hermitian matrices?
I have found a few sources, but all of them are only capable of working with real matrices and do not seem to be easily generalizable to the complex case:
https://searchcode.com/codesearch/view/89958680/
https://github.com/pascanur/theano_optimize
(there are a couple of other links, but my reputation does not allow me to post them)

A Hermitian system of size $n$
$$\mathbf y = \mathbf H^{-1}\mathbf v$$
can be embedded in a real, symmetric system of size $2n$:
\begin{equation}
\begin{bmatrix}
\Re(\mathbf y)\\ \Im(\mathbf y)
\end{bmatrix}=
\begin{bmatrix}
\Re(\mathbf H)&-\Im(\mathbf H)\\Im(\mathbf H)&\Re(\mathbf H)
\end{bmatrix}^{-1}
\begin{bmatrix}
\Re(\mathbf v)\\Im(\mathbf v)
\end{bmatrix}.
\end{equation}
Minimum-residual methods are often used for large problems, where constructing $\mathbf H$ explicitly is impractical. In that case, we may instead have a function that computes the matrix-vector product, $f: \mathbb C^n \to \mathbb C^n,\ f(\mathbf v) = \mathbf H\mathbf v$. This function can be wrapped to operate on $\mathbf x \in \mathbb R^{2n}$ by converting $\mathbf x$ back to a complex vector, applying $f$, and then embedding the result back in $\mathbb R^{2n}$.
Here is an example in python / numpy / scipy:
from scipy.sparse.linalg import minres, LinearOperator
from pylab import *
# Problem size
N = 100
# error helper
er = lambda t, a, b: print('%s error:' % t, mean(abs(a - b)))
# random Hermitian matrix
Q = randn(N, N) + 1j * randn(N, N)
H = Q @ conj(Q.T)
# random complex vector
v = randn(N) + 1j * randn(N)
# ground-truth solution
x0 = inv(H) @ v
# Pack/unpack complex vector as stacked real vector
c2r = lambda v: block([real(v), imag(v)])
r2c = lambda v: kron([1, 1j], eye(N)) @ v
# Verify that we can embed C^N in R^(2N)
Hr = real(H)
Hi = imag(H)
Hs = block([[Hr, -Hi], [Hi, Hr]])
vs = c2r(v)
xs = inv(Hs) @ vs
x1 = r2c(xs)
er('Embed', x0, x1)
# Verify that minres works as expected in the R-embedding
x2 = r2c(minres(Hs, vs, tol=1e-12)[0])
er('Minres 1', x0, x2)
# Demonstrate using operators
Av = lambda u: c2r(H @ r2c(u))
A = LinearOperator((N * 2,) * 2, Av, Av)
# Minres with the operator, converting input/output to/from complex/real
x3 = r2c(minres(A, vs, tol=1e-12)[0])
er('Minres 2', x0, x3)
>>> Embed error: 5.317184726020268e-12
>>> Minres 1 error: 6.641342200989796e-11
>>> Minres 2 error: 6.641342200989796e-11

Related

fft normalization vs the norm-option

I often have to normalize the result of circular convolution, so I 'borrowed' and modified the following routine:
import numpy as np

def fft_normalize(x):
    # Normalize a vector x in the complex domain.
    c = np.fft.rfft(x)
    # Look at real and imag parts as if they were separate reals
    ri = np.vstack([c.real, c.imag])
    # Normalize the magnitude of each complex (real, imag) pair
    norm = np.linalg.norm(ri, axis=0)
    if np.any(norm == 0): norm[norm == 0] = np.float64(1e-308)  # !fixme
    ri = np.divide(ri, norm)
    c_proj = ri[0, :] + 1j * ri[1, :]
    rv = np.fft.irfft(c_proj, n=x.shape[-1])
    return rv

def fft_convolution(a, b):
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b))
def fft_convolution(a, b):
return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b))
so that I do this:
fft_normalize(fft_convolution(a, b))
I see in the numpy docs there is a 'norm' option. Is it equivalent to what I'm doing?
And if yes, which option should I use?
def fft_convolution2(a, b):
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), norm='ortho')
When I test it, fft_normalize() behaves better.
Second, I had to add this line, but it does not seem right. Any ideas?
if np.any(norm == 0): norm[norm == 0] = np.float64(1e-308)  # !fixme
As a side note, in case you know: the numpy docs mention that numpy promotes float32 to float64, and that scipy.fftpack does not.
Would fftpack be faster? scipy says fftpack is obsolete, and there is no info on the new scipy.fft.
@cris, are you suggesting I do it this way:
def fft_normalize(x):
    # Normalize a vector x in the complex domain.
    c = np.fft.rfft(x)
    ri = np.vstack([c.real, c.imag])
    norm = np.abs(c)
    if np.any(norm == 0): norm[norm == 0] = MIN  # !fixme
    ri = np.divide(ri, norm)
    c_proj = ri[0, :] + 1j * ri[1, :]
    rv = np.fft.irfft(c_proj, n=x.shape[-1])
    return rv
The norm argument to the FFT functions in NumPy determines whether the transform result is multiplied by 1, 1/N or 1/sqrt(N), with N the number of samples in the array. Normally, the inverse transform is normalized by dividing by N, and the forward transform is not. Specifying "ortho" causes both transforms to be normalized by 1/sqrt(N). Specifying "forward" causes only the forward transform to be normalized by 1/N.
These normalizations are very different from the one you apply, where you normalize each element in the frequency domain to unit magnitude.
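To make the distinction concrete, here is a small demonstration (a sketch with made-up data, not from the original answer) of what the norm option does versus the per-bin normalization in the question:

import numpy as np

x = np.random.randn(8)
N = x.size

# norm='ortho' only rescales the whole forward transform by 1/sqrt(N) ...
c_default = np.fft.rfft(x)
c_ortho = np.fft.rfft(x, norm='ortho')
print(np.allclose(c_ortho, c_default / np.sqrt(N)))   # True

# ... whereas fft_normalize rescales each frequency bin to unit magnitude,
# which is a different (and nonlinear) operation.
c_unit = c_default / np.abs(c_default)
print(np.allclose(np.abs(c_unit), 1.0))               # True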

How to include known parameter that changes over time in solve_bvp

I am trying to use scipy's solve_bvp in Python to solve differential equations that depend on a known parameter that changes over time. I have this parameter saved in a numpy array. However, when I try to use this array in the derivatives function, I get the following error: ValueError: operands could not be broadcast together with shapes (10,) (11,).
Below is a simplified version of my code. I want the variable d2 to take certain values at different times, according to the array d2_set_values. The differential equations for some of the 12 variables then depend on d2. I hope it's clear from this code what I'm trying to achieve.
import numpy as np
from scipy.integrate import solve_bvp
t = np.linspace(0, 10, 11)
# Known parameter that changes over time
d2_set_values = np.zeros(t.size)
d2_set_values[:4] = 0.1
d2_set_values[4:8] = 0.2
d2_set_values[8:] = 0.1
# Initialise y vector
y = np.zeros((12, t.size))
# ODEs
def fun(x, y):
    S1, I1, R1, S2, I2, R2, lamS1, lamI1, lamR1, lamS2, lamI2, lamR2 = y
    d1 = 0.5*(I1 + 0.1*I2)*(lamS1 - lamI1)
    d2 = d2_set_values
    dS1dt = -0.5*S1*(1-d1)*(I1 + 0.1*I2)
    dS2dt = -0.5*S2*(1-d2)*(I2 + 0.1*I1)
    dI1dt = 0.5*S1*(1-d1)*(I1 + 0.1*I2) - 0.2*I1
    dI2dt = 0.5*S2*(1-d2)*(I2 + 0.1*I1) - 0.2*I2
    dR1dt = 0.2*I1
    dR2dt = 0.2*I2
    dlamS1dt = 0.5*(1-d1)*S1*lamS1
    dlamS2dt = 0.5*(1-d2)*S2*lamS2
    dlamI1dt = 0.5*(1-d1)*I1*lamI1
    dlamI2dt = 0.5*(1-d2)*I2*lamI2
    dlamR1dt = lamR1
    dlamR2dt = lamR2
    return np.vstack((dS1dt, dI1dt, dR1dt, dS2dt, dI2dt, dR2dt, dlamS1dt, dlamI1dt, dlamR1dt, dlamS2dt, dlamI2dt, dlamR2dt))
# Boundary conditions
def bc(ya, yb):
    return np.array([ya[0]-0.99, ya[1]-0.01, ya[2]-0., ya[3]-1.0, ya[4]-0., ya[5]-0.,
                     yb[6]-0., yb[7]-1., yb[8]-0., yb[9]-0, yb[10]-0, yb[11]-0])
# Run the solver
sol = solve_bvp(fun, bc, t, y)
I have even tried reducing the size of d2_set_values by one, but that doesn't solve the issue.
Any help I can get would be much appreciated!
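For context (not part of the original post): solve_bvp evaluates fun both at the mesh nodes and at interval midpoints, and it may refine the mesh, so the x passed to fun generally does not have t.size entries; indexing d2_set_values directly therefore cannot work. A minimal sketch of one way around this, interpolating the known parameter onto whatever points the solver passes in:

import numpy as np

t = np.linspace(0, 10, 11)
d2_set_values = np.zeros(t.size)
d2_set_values[:4] = 0.1
d2_set_values[4:8] = 0.2
d2_set_values[8:] = 0.1

def d2_of(x):
    # Evaluate the known parameter at arbitrary mesh points, so that
    # inside fun(x, y) one could write: d2 = d2_of(x)
    return np.interp(x, t, d2_set_values)

x_refined = np.linspace(0, 10, 25)   # e.g. a mesh the solver might use
print(d2_of(x_refined).shape)        # (25,), matches x_refined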

Dissimilarity between the output of linear equation produced by Numpy Polynomial and Matlab polyfit

The objective is to find the coefficients of a polynomial P(x) of degree 1 (a line) that fits the data y best in a least-squares sense, and then compute the root of P.
In Matlab, the linear least squares fit can be calculated via
[p, S, mu] = polyfit(x, y, 1)
This produces the coefficients p = [-1.5810877, 6.0094824], which define the polynomial
y = p(1) * x + p(2)
mu contains the mean and the standard deviation of x.
In Python, this can be achieved via NumPy's polynomial module, as suggested here:
import numpy as np
x = [
6210, 6211, 6212, 6213, 6214, 6215, 6216, 6217, 6218, 6219,
6220, 6221, 6222, 6223, 6224, 6225, 6226, 6227, 6228, 6229,
6230, 6231, 6232, 6233, 6234, 6235, 6236, 6237, 6238, 6239,
6240, 6241, 6242, 6243, 6244, 6245, 6246, 6247, 6248, 6249,
6250, 6251, 6252, 6253, 6254, 6255, 6256, 6257, 6258, 6259,
6260, 6261, 6262, 6263, 6264, 6265, 6266, 6267, 6268, 6269,
6270, 6271, 6272, 6273, 6274, 6275, 6276, 6277, 6278, 6279,
6280, 6281, 6282, 6283, 6284, 6285, 6286, 6287, 6288
]
y = [
7.8625913, 7.7713094, 7.6833806, 7.5997391, 7.5211883,
7.4483986, 7.3819046, 7.3221073, 7.2692747, 7.223547,
7.1849418, 7.1533613, 7.1286001, 7.1103559, 7.0982385,
7.0917811, 7.0904517, 7.0936642, 7.100791, 7.1111741,
7.124136, 7.1389918, 7.1550579, 7.1716633, 7.1881566,
7.2039142, 7.218349, 7.2309117, 7.2410989, 7.248455,
7.2525721, 7.2530937, 7.249711, 7.2421637, 7.2302341,
7.213747, 7.1925621, 7.1665707, 7.1356878, 7.0998487,
7.0590014, 7.0131001, 6.9621005, 6.9059525, 6.8445964,
6.7779589, 6.7059474, 6.6284504, 6.5453324, 6.4564347,
6.3615761, 6.2605534, 6.1531439, 6.0391097, 5.9182019,
5.7901659, 5.6547484, 5.5117044, 5.360805, 5.2018456,
5.034656, 4.8591075, 4.6751242, 4.4826899, 4.281858,
4.0727611, 3.8556159, 3.6307325, 3.3985188, 3.1594861,
2.9142516, 2.6635408, 2.4081881, 2.1491354, 1.8874279,
1.6242117,1.3607255,1.0982931,0.83831298
]
p = np.polynomial.polynomial.Polynomial.fit(x, y, 1, domain=[-1, 1])
print(p)
436.53467443432453 - 0.0688950539698132·x¹
Both the mean and std can be calculated as follows:
x_mean = np.mean(x)
x_std = np.std(x)
I notice, however, that the coefficients produced by Matlab's polyfit and NumPy's Polynomial are different:
MATLAB: [-1.5810877, 6.0094824] vs Python: [436.53467443432453, -0.0688950539698132]
The difference is significant for my use-case, because I want to compute the root of the line.
In Matlab, this can be calculated as
>> x_m = p(1);
>> x_mean = mu(1);
>> x_c = p(2);
>> x_std = mu(2);
>> x_intercept = (x_m * x_mean - x_c * x_std) ./ x_m
x_intercept =
6336.2266
Whereas in Python
>>> x_mean = np.mean(x)
>>> x_std = np.std(x)
>>> x_c, x_m = p.coef
>>> x_intercept = (x_m * x_mean - x_c * x_std) / x_m
>>> x_intercept
150737.19742902054
Clearly, the difference between these two is big. How can I replicate the x_intercept calculated using Matlab in Python?
For the sake of numerical precision, it is handy to map polynomial data to the domain [-1, 1], or some similarly well-bounded region, when computing fits. A classic example is given in the numpy docs regarding fitting Chebyshev polynomials, which appear to lose all their interesting features if you zoom out far enough. The mapping is less interesting for lines, but highly useful for higher-order polynomials, where x**n may explode. The specific window [-1, 1] is used exactly to prevent such explosions for arbitrarily shifted data with large n (since (±1)**n is always just ±1, and x**n for |x| < 1 is always well bounded).
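As a quick illustration of why the mapping matters (a toy example, not from the original answer):

import numpy as np

x = np.array([6210.0, 6249.0, 6288.0])

# Raw powers of large x explode, which makes least-squares fits with
# higher-order terms badly conditioned ...
print(x**10)           # on the order of 1e38

# ... while powers of the mapped data stay small and well bounded.
x_mapped = (x - x.mean()) / x.std()
print(x_mapped**10)    # small, well-bounded values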
Let's start by looking at the data (which is always a good first step):
[figure: plot of the data with a line fit overlaid]
I've included a line fit here. Just by eye, I can tell that a root at ~6340 is reasonable for the line fit. It's also pretty clear that the data is actually more cubic in nature, with an "actual" root around 6290, but that's irrelevant to the question.
MATLAB takes the approach of mapping the x-data such that the 1-sigma portion of the domain fits within the window [-1, 1]. That means that the polynomial coefficients you get apply to (x - mean(xRight)) / std(xRight). If you choose to do scaling by requesting three output arguments from polyfit, you do not get the option of choosing a different scale:
>> [p_scaled, s, m] = polyfit(xRight, yRight, 1)
p_scaled =
-1.5811 6.0095
s =
struct with fields:
R: [2×2 double]
df: 77
normr: 8.5045
m =
1.0e+03 *
6.2490
0.0229
>> p_unscaled = polyfit(xRight, yRight, 1)
p_unscaled =
-0.0689 436.5347
You can compute the values of both fits at say x = 6250 using polyval:
>> polyval(p_scaled, 6250, s, m)
ans =
5.9406
>> polyval(p_unscaled, 6250)
ans =
5.9406
>> polyval(p_scaled, (6250 - mean(xRight)) / std(xRight))
ans =
5.9406
And of course manually:
>> (6250 - m(1)) / m(2) * p_scaled(1) + p_scaled(2)
ans =
5.9406
>> 6250 * p_unscaled(1) + p_unscaled(2)
ans =
5.9406
Python does something similar with the domain and window arguments to np.polynomial.Polynomial.fit. Just as MATLAB maps [-std(x), std(x)] + mean(x) to [-1, 1], domain is mapped to window. The biggest difference is that you can choose both domain and window. Here are some common options:
>>> p_nomap = np.polynomial.Polynomial.fit(xRight, yRight, 1, domain=[-1, 1]); print(p_nomap)
436.53467443435204 - 0.06889505396981752 x**1
>>> p_default = np.polynomial.Polynomial.fit(xRight, yRight, 1); print(p_default)
6.009482176962028 - 2.686907104822783 x**1
>>> p_minmax = np.polynomial.Polynomial.fit(xRight, yRight, 1, domain=[xRight_arr.min(), xRight_arr.max()]); print(p_minmax)
6.009482176962028 - 2.686907104822783 x**1
>>> p_matlab = np.polynomial.Polynomial.fit(xRight, yRight, 1, domain=np.std(xRight) * np.arange(-1, 2, 2) + np.mean(xRight)); print(p_matlab)
6.009482176962061 - 1.571048948945243 x**1
You can see the following equivalences:
Numpy's p_matlab <=> MATLAB's p_scaled
Numpy's p_nomap <=> MATLAB's p_unscaled
Numpy's p_minmax <=> Numpy's p_default
You can verify with a similar test as in MATLAB, except that instead of polyval, you can just call a Polynomial object directly:
>>> p_nomap(6250)
5.940587122992554
>>> p_default(6250)
5.940587122992223
>>> p_minmax(6250)
5.940587122992223
>>> p_matlab(6250)
5.94058712299223
Python has a nicer syntax in this case, and since you are using an object, it keeps track of all the scales and offsets for you internally.
You can estimate the root of the line in any number of ways now that you understand how the different choices of domain work. In MATLAB:
>> roots(p_unscaled)
ans =
6.3362e+03
>> roots(p_scaled) * m(2) + m(1)
ans =
6.3362e+03
>> -p_unscaled(2) / p_unscaled(1)
ans =
6.3362e+03
>> -p_scaled(2) / p_scaled(1) * m(2) + m(1)
ans =
6.3362e+03
You can do the same thing in numpy:
>>> p_nomap.roots()
array([6336.22661252])
>>> p_default.roots()
array([6336.22661252])
>>> p_minmax.roots()
array([6336.22661252])
>>> p_matlab.roots()
array([6336.22661252])
To evaluate manually, we can use Polynomial.mapparms, which is roughly equivalent to the output parameter m in MATLAB. The major difference is that mapparms outputs [-np.mean(xRight) / np.std(xRight), 1 / np.std(xRight)], which simplifies the evaluation process.
>>> -p_nomap.coef[0] / p_nomap.coef[1]
6336.22661251699
>>> (-p_default.coef[0] / p_default.coef[1] - p_default.mapparms()[0]) / p_default.mapparms()[1]
6336.226612516987
>>> (-p_minmax.coef[0] / p_minmax.coef[1] - p_minmax.mapparms()[0]) / p_minmax.mapparms()[1]
6336.226612516987
>>> (-p_matlab.coef[0] / p_matlab.coef[1] - p_matlab.mapparms()[0]) / p_matlab.mapparms()[1]
6336.226612516987
So the key to getting equivalent results is to either choose identical parameters (and understand their correspondence), or to use the provided functions to evaluate polynomials and their roots. In general I recommend the latter regardless of the former. The reason those functions were provided is that you can get accurate results from any choice of domain, as long as you pass the data along consistently (manually for polyval and automatically for Polynomial.__call__).

Is my problem suited for convex optimization, and if so, how to express it with cvxpy?

I have an array of scalars of m rows and n columns. I have a Variable(m) and a Variable(n) that I would like to find solutions for.
The two variables represent values that need to be broadcast over the columns and rows respectively.
I was naively thinking of writing the variables as Variable((m, 1)) and Variable((1, n)), and adding them together as if they're ndarrays. However, that doesn't work, as broadcasting is not allowed.
import cvxpy as cp
import numpy as np
# Problem data.
m = 3
n = 4
np.random.seed(1)
data = np.random.randn(m, n)
# Construct the problem.
x = cp.Variable((m, 1))
y = cp.Variable((1, n))
objective = cp.Minimize(cp.sum(cp.abs(x + y + data)))
# or:
#objective = cp.Minimize(cp.sum_squares(x + y + data))
prob = cp.Problem(objective)
result = prob.solve()
print(x.value)
print(y.value)
This fails on the x + y expression: ValueError: Cannot broadcast dimensions (3, 1) (1, 4).
Now I'm wondering two things:
Is my problem indeed solvable using convex optimization?
If yes, how can I express it in a way that cvxpy understands?
I'm very new to the concept of convex optimization, as well as cvxpy, and I hope I described my problem well enough.
I offered to show you how to represent this as a linear program, so here goes. I'm using Pyomo, since I'm more familiar with that, but you could do something similar in PuLP.
To run this, you will need to first install Pyomo and a linear program solver like glpk. glpk should work for reasonable-sized problems, but if you are finding it's taking too long to solve, you could try a (much faster) commercial solver like CPLEX or Gurobi.
You can install Pyomo via pip install pyomo or conda install -c conda-forge pyomo. You can install glpk from https://www.gnu.org/software/glpk/ or via conda install glpk. (I think PuLP comes with a version of glpk built-in, so that might save you a step.)
Here's the script. Note that this calculates absolute error as a linear expression by defining one variable for the positive component of the error and another for the negative part. Then it seeks to minimize the sum of both. In this case, the solver will always set one to zero since that's an easy way to reduce the error, and then the other will be equal to the absolute error.
import random
import pyomo.environ as po
random.seed(1)
# ~50% sparse data set, big enough to populate every row and column
m = 10 # number of rows
n = 10 # number of cols
data = {
    (r, c): random.random()
    for r in range(m)
    for c in range(n)
    if random.random() >= 0.5
}
# define a linear program to find vectors
# x in R^m, y in R^n, such that x[r] + y[c] is close to data[r, c]
# create an optimization model object
model = po.ConcreteModel()
# create indexes for the rows and columns
model.ROWS = po.Set(initialize=range(m))
model.COLS = po.Set(initialize=range(n))
# create indexes for the dataset
model.DATAPOINTS = po.Set(dimen=2, initialize=data.keys())
# data values
model.data = po.Param(model.DATAPOINTS, initialize=data)
# create the x and y vectors
model.X = po.Var(model.ROWS, within=po.NonNegativeReals)
model.Y = po.Var(model.COLS, within=po.NonNegativeReals)
# create dummy variables to represent errors
model.ErrUp = po.Var(model.DATAPOINTS, within=po.NonNegativeReals)
model.ErrDown = po.Var(model.DATAPOINTS, within=po.NonNegativeReals)
# Force the error variables to match the error
def Calculate_Error_rule(model, r, c):
    pred = model.X[r] + model.Y[c]
    err = model.ErrUp[r, c] - model.ErrDown[r, c]
    return (model.data[r, c] + err == pred)
model.Calculate_Error = po.Constraint(
    model.DATAPOINTS, rule=Calculate_Error_rule
)
# Minimize the total error
def ClosestMatch_rule(model):
    return sum(
        model.ErrUp[r, c] + model.ErrDown[r, c]
        for (r, c) in model.DATAPOINTS
    )
model.ClosestMatch = po.Objective(
    rule=ClosestMatch_rule, sense=po.minimize
)
# Solve the model
# get a solver object
opt = po.SolverFactory("glpk")
# solve the model
# turn off "tee" if you want less verbose output
results = opt.solve(model, tee=True)
# show solution status
print(results)
# show verbose description of the model
model.pprint()
# show X and Y values in the solution
for r in model.ROWS:
    print('X[{}]: {}'.format(r, po.value(model.X[r])))
for c in model.COLS:
    print('Y[{}]: {}'.format(c, po.value(model.Y[c])))
Just to complete the story, here's a solution that's closer to your original example. It uses cvxpy, but with the sparse data approach from my solution.
I don't know the "official" way to do elementwise calculations with cvxpy, but it seems to work OK to just use the standard Python sum function with a lot of individual cp.abs(...) calculations.
This gives a solution that is very slightly worse than the linear program, but you may be able to fix that by adjusting the solution tolerance.
import cvxpy as cp
import random
random.seed(1)
# Problem data.
# ~50% sparse data set
m = 10 # number of rows
n = 10 # number of cols
data = {
    (i, j): random.random()
    for i in range(m)
    for j in range(n)
    if random.random() >= 0.5
}
# Construct the problem.
x = cp.Variable(m)
y = cp.Variable(n)
objective = cp.Minimize(
    sum(
        cp.abs(x[i] + y[j] + data[i, j])
        for (i, j) in data.keys()
    )
)
prob = cp.Problem(objective)
result = prob.solve()
print(x.value)
print(y.value)
I did not fully get the idea, so here is just some hacky stuff based on the assumption that:
you want some cvxpy equivalent of numpy's broadcasting-rules behaviour on arrays of shape (m, 1) + (1, n)
So numpy-wise:
m = 3
n = 4
np.random.seed(1)
a = np.random.randn(m, 1)
b = np.random.randn(1, n)
a
array([[ 1.62434536],
[-0.61175641],
[-0.52817175]])
b
array([[-1.07296862, 0.86540763, -2.3015387 , 1.74481176]])
a + b
array([[ 0.55137674, 2.48975299, -0.67719333, 3.36915713],
[-1.68472504, 0.25365122, -2.91329511, 1.13305535],
[-1.60114037, 0.33723588, -2.82971045, 1.21664001]])
Let's mimic this with np.kron, which has a cvxpy-equivalent:
aLifted = np.kron(np.ones((1,n)), a)
bLifted = np.kron(np.ones((m,1)), b)
aLifted
array([[ 1.62434536, 1.62434536, 1.62434536, 1.62434536],
[-0.61175641, -0.61175641, -0.61175641, -0.61175641],
[-0.52817175, -0.52817175, -0.52817175, -0.52817175]])
bLifted
array([[-1.07296862, 0.86540763, -2.3015387 , 1.74481176],
[-1.07296862, 0.86540763, -2.3015387 , 1.74481176],
[-1.07296862, 0.86540763, -2.3015387 , 1.74481176]])
aLifted + bLifted
array([[ 0.55137674, 2.48975299, -0.67719333, 3.36915713],
[-1.68472504, 0.25365122, -2.91329511, 1.13305535],
[-1.60114037, 0.33723588, -2.82971045, 1.21664001]])
Let's check cvxpy semi-blindly (we only check dimensions; too lazy to set up a problem and fix variables to check the output :-D):
import cvxpy as cp
x = cp.Variable((m, 1))
y = cp.Variable((1, n))
cp.kron(np.ones((1,n)), x) + cp.kron(np.ones((m, 1)), y)
# Expression(AFFINE, UNKNOWN, (3, 4))
# looks good!
Now some caveats:
I don't know how efficiently cvxpy can reason about this matrix form internally
it's unclear whether this is more efficient than a simple list-comprehension-based form using cp.vstack and co (it probably is; see the sketch after this list)
this operation itself kills all sparsity (if both vectors are dense, your matrix is dense)
cvxpy, and more or less all convex-optimization solvers, are based on some sparsity assumption; scaling this problem up to machine-learning dimensions will not make you happy
there is probably a much more concise mathematical theory for your problem than using (sparsity-assuming, pretty general) convex optimization (the DCP implemented in cvxpy is a subset)
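A sketch of that list-comprehension alternative (not from the original answer; it assumes cvxpy's hstack/vstack stack scalar expressions the way their numpy counterparts do):

import cvxpy as cp
import numpy as np

m, n = 3, 4
np.random.seed(1)
data = np.random.randn(m, n)

x = cp.Variable(m)
y = cp.Variable(n)

# Build x[i] + y[j] + data[i, j] entry by entry, then stack into an
# (m, n) affine expression instead of lifting with kron.
rows = [cp.hstack([x[i] + y[j] + data[i, j] for j in range(n)]) for i in range(m)]
expr = cp.vstack(rows)
print(expr.shape)   # (3, 4), same as the kron-lifted version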

How to run a .py module?

I've got zero experience with Python. I have looked through some tutorial materials, but it seems difficult to understand such advanced code. So I came here for a more specific answer.
For me, the mission is to rerun this code on my computer.
Here is the scenario:
I'm a graduate student studying tensor factorization in relation learning. A paper [1] provides code to run this algorithm, as follows:
import logging, time
from numpy import dot, zeros, kron, array, eye, argmax
from numpy.linalg import qr, pinv, norm, inv
from scipy.linalg import eigh
from numpy.random import rand

__version__ = "0.1"
__all__ = ['rescal', 'rescal_with_random_restarts']

__DEF_MAXITER = 500
__DEF_INIT = 'nvecs'
__DEF_PROJ = True
__DEF_CONV = 1e-5
__DEF_LMBDA = 0

_log = logging.getLogger('RESCAL')

def rescal_with_random_restarts(X, rank, restarts=10, **kwargs):
    """
    Restarts RESCAL multiple times from random starting points and
    returns the factorization with the best fit.
    """
    models = []
    fits = []
    for i in range(restarts):
        res = rescal(X, rank, init='random', **kwargs)
        models.append(res)
        fits.append(res[2])
    return models[argmax(fits)]

def rescal(X, rank, **kwargs):
    """
    RESCAL

    Factors a three-way tensor X such that each frontal slice
    X_k = A * R_k * A.T. The frontal slices of a tensor are
    N x N matrices that correspond to the adjacency matrices
    of the relational graph for a particular relation.

    For a full description of the algorithm see:
    Maximilian Nickel, Volker Tresp, Hans-Peter Kriegel,
    "A Three-Way Model for Collective Learning on Multi-Relational Data",
    ICML 2011, Bellevue, WA, USA

    Parameters
    ----------
    X : list
        List of frontal slices X_k of the tensor X. The shape of each X_k is ('N', 'N')
    rank : int
        Rank of the factorization
    lmbda : float, optional
        Regularization parameter for A and R_k factor matrices. 0 by default
    init : string, optional
        Initialization method of the factor matrices. 'nvecs' (default)
        initializes A based on the eigenvectors of X. 'random' initializes
        the factor matrices randomly.
    proj : boolean, optional
        Whether or not to use the QR decomposition when computing R_k.
        True by default
    maxIter : int, optional
        Maximum number of iterations of the ALS algorithm. 500 by default.
    conv : float, optional
        Stop when residual of factorization is less than conv. 1e-5 by default

    Returns
    -------
    A : ndarray
        array of shape ('N', 'rank') corresponding to the factor matrix A
    R : list
        list of 'M' arrays of shape ('rank', 'rank') corresponding to the factor matrices R_k
    f : float
        function value of the factorization
    iter : int
        number of iterations until convergence
    exectimes : ndarray
        execution times to compute the updates in each iteration
    """
    # init options
    ainit = kwargs.pop('init', __DEF_INIT)
    proj = kwargs.pop('proj', __DEF_PROJ)
    maxIter = kwargs.pop('maxIter', __DEF_MAXITER)
    conv = kwargs.pop('conv', __DEF_CONV)
    lmbda = kwargs.pop('lmbda', __DEF_LMBDA)
    if not len(kwargs) == 0:
        raise ValueError('Unknown keywords (%s)' % (kwargs.keys()))

    sz = X[0].shape
    dtype = X[0].dtype
    n = sz[0]
    k = len(X)

    _log.debug('[Config] rank: %d | maxIter: %d | conv: %7.1e | lmbda: %7.1e' % (
        rank, maxIter, conv, lmbda))
    _log.debug('[Config] dtype: %s' % dtype)

    # precompute norms of X
    normX = [norm(M)**2 for M in X]
    Xflat = [M.flatten() for M in X]
    sumNormX = sum(normX)

    # initialize A
    if ainit == 'random':
        A = array(rand(n, rank), dtype=dtype)
    elif ainit == 'nvecs':
        S = zeros((n, n), dtype=dtype)
        T = zeros((n, n), dtype=dtype)
        for i in range(k):
            T = X[i]
            S = S + T + T.T
        evals, A = eigh(S, subset_by_index=(n - rank, n - 1))
    else:
        raise ValueError('Unknown init option ("%s")' % ainit)

    # initialize R
    if proj:
        Q, A2 = qr(A)
        X2 = __projectSlices(X, Q)
        R = __updateR(X2, A2, lmbda)
    else:
        R = __updateR(X, A, lmbda)

    # compute factorization
    fit = fitchange = fitold = f = 0
    exectimes = []
    ARAt = zeros((n, n), dtype=dtype)
    for iter in range(maxIter):
        tic = time.time()
        fitold = fit
        A = __updateA(X, A, R, lmbda)
        if proj:
            Q, A2 = qr(A)
            X2 = __projectSlices(X, Q)
            R = __updateR(X2, A2, lmbda)
        else:
            R = __updateR(X, A, lmbda)

        # compute fit value
        f = lmbda * (norm(A)**2)
        for i in range(k):
            ARAt = dot(A, dot(R[i], A.T))
            f += normX[i] + norm(ARAt)**2 - 2 * dot(Xflat[i], ARAt.flatten()) + lmbda * (R[i].flatten()**2).sum()
        f *= 0.5

        fit = 1 - f / sumNormX
        fitchange = abs(fitold - fit)

        toc = time.time()
        exectimes.append(toc - tic)
        _log.debug('[%3d] fit: %.5f | delta: %7.1e | secs: %.5f' % (
            iter, fit, fitchange, exectimes[-1]))
        if iter > 1 and fitchange < conv:
            break
    return A, R, f, iter + 1, array(exectimes)

def __updateA(X, A, R, lmbda):
    n, rank = A.shape
    F = zeros((n, rank), dtype=X[0].dtype)
    E = zeros((rank, rank), dtype=X[0].dtype)
    AtA = dot(A.T, A)
    for i in range(len(X)):
        F += dot(X[i], dot(A, R[i].T)) + dot(X[i].T, dot(A, R[i]))
        E += dot(R[i], dot(AtA, R[i].T)) + dot(R[i].T, dot(AtA, R[i]))
    A = dot(F, inv(lmbda * eye(rank) + E))
    return A

def __updateR(X, A, lmbda):
    r = A.shape[1]
    R = []
    At = A.T
    if lmbda == 0:
        ainv = dot(pinv(dot(At, A)), At)
        for i in range(len(X)):
            R.append(dot(ainv, dot(X[i], ainv.T)))
    else:
        AtA = dot(At, A)
        tmp = inv(kron(AtA, AtA) + lmbda * eye(r**2))
        for i in range(len(X)):
            AtXA = dot(At, dot(X[i], A))
            R.append(dot(AtXA.flatten(), tmp).reshape(r, r))
    return R

def __projectSlices(X, Q):
    q = Q.shape[1]
    X2 = []
    for i in range(len(X)):
        X2.append(dot(Q.T, dot(X[i], Q)))
    return X2
It's tedious to paste such long code, but there is no other way to show my problems. I'm sorry about this.
I import this module and pass it arguments according to the author's website:
import pickle, sys
from rescal import rescal

rank = int(sys.argv[1])              # sys.argv entries are strings
with open('us-presidents.pickle', 'rb') as fh:
    X = pickle.load(fh)              # pickle.load needs an open file object
A, R, f, iter, exectimes = rescal(X, rank, lmbda=1.0)
The dataset us-presidents.rdf can be found here.
My questions are:
According to the code note, the tensor X is a list. I don't quite understand this: how do I relate a list to a tensor in Python? Can I assume tensor = list in Python?
Should I convert the RDF format to a triple (subject, predicate, object) format first? I'm not sure of the data structure of X. How do I assign values to X by hand?
Then, how do I run it?
I pasted the author's code without his authorization; is this an act of infringement? If so, I am sorry and will delete it soon.
These problems may be a little boring, but they are important to me. Any help would be greatly appreciated.
[1] Maximilian Nickel, Volker Tresp, Hans-Peter Kriegel,
A Three-Way Model for Collective Learning on Multi-Relational Data,
in Proceedings of the 28th International Conference on Machine Learning, 2011 , Bellevue, WA, USA
To answer Q2: you need to transform the RDF and save it before you can load it from the file 'us-presidents.pickle'. The author of that code probably did that once, because Python's native pickle format loads faster. As the pickle format includes the datatype of the data, it is possible that X is some numpy class instance; you would need either an example pickle file as used by this code, or the code doing the pickle.dump, to figure out how to convert from RDF to the particular pickle file that rescal expects.
So this might answer Q1: the tensor consists of a list of elements. From the code you can see that the X parameter to rescal has a length (k = len(X)) and can be indexed (T = X[i]). So its elements are used as a list (even if X might be some other datatype that just behaves as such).
As an aside: If you are not familiar with Python and are just interested in the result of the computation, you might get more help contacting the author of the software.
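For illustration of the pickle point above, a minimal round-trip (file name and data invented here) showing the kind of dump that must have happened once, and that pickle.load expects an open file object rather than a file name:

import pickle
import numpy as np

# Hypothetical stand-in for the real dataset: a list of N x N slices.
X = [np.zeros((3, 3)), np.eye(3)]
with open('tensor.pickle', 'wb') as f:
    pickle.dump(X, f)
with open('tensor.pickle', 'rb') as f:
    X_loaded = pickle.load(f)
print(len(X_loaded), X_loaded[0].shape)   # 2 (3, 3)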
According to the code note, the tensor X is a list. I don't quite understand this: how do I relate a list to a tensor in Python? Can I assume tensor = list in Python?
Not necessarily, but the author of the code has decided to represent the tensor data as a list data structure. As the comments indicate, the list X contains:
List of frontal slices X_k of the tensor X. The shape of each X_k is ('N', 'N')
That means the tensor is represented as a list of N x N matrices, one frontal slice per relation: [X_0, ..., X_k-1].
I'm not sure of the data structure of X. How do I assign values to X by hand?
Now that we know the data structure of X, we can assign values to it using ordinary indexing. Each slice is an N x N array, so you index the list first and the matrix second (the first slice is at index 0, the second at index 1, et cetera). For example, the following sets the entry at row 1, column 3 of the first slice:
X[0][1, 3] = 1
Similarly, the following sets the entry at row 2, column 4 of the second slice:
X[1][2, 4] = 1
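Putting this together, a minimal sketch (sizes and values invented) of building such a tensor by hand:

import numpy as np

N = 3   # number of entities
k = 2   # number of relations
# one N x N frontal slice (adjacency matrix) per relation
X = [np.zeros((N, N)) for _ in range(k)]
X[0][0, 1] = 1   # relation 0 holds between entity 0 and entity 1
X[1][2, 0] = 1   # relation 1 holds between entity 2 and entity 0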
