Complex eigenvalues computation using scipy.sparse.linalg.eigs - python

Given the following input numpy 2d-array A, which can be retrieved through the file hill_mat.npy, I would like to compute only a subset of its eigenvalues using an iterative solver such as scipy.sparse.linalg.eigs.
First, a bit of context: this matrix A results from a quadratic eigenvalue problem of size N which has been linearized into an equivalent eigenvalue problem of size 2*N. A has the following structure (blue being zeros):
plt.imshow(np.where(A > 1e-15,1.,0), interpolation='None')
and the following features:
A shape = (748, 748)
A dtype = float64
A sparsity ratio = 77.64841716949297 %
The true dimensions of A are much bigger than this small reproducible example. I expect the real sparsity ratio and shape to be close to 95% and (5508, 5508) for this case.
The resulting eigenvalues of A are complex (they come in complex conjugate pairs) and I am mostly interested in the ones with the smallest imaginary part in modulus.
Problem: when using the direct solver:
w_dense = np.linalg.eigvals(A)
idx = np.argsort(abs(w_dense.imag))
w_dense = w_dense[idx]
calculation times rapidly become prohibitive. I am thus looking to use a sparse algorithm:
from scipy.sparse import csc_matrix, linalg as sla
w_sparse = sla.eigs(A, k=100, sigma=0+0j, which='SI', return_eigenvectors=False)
but it seems that ARPACK doesn't find any eigenvalues this way. According to the scipy/ARPACK tutorial, when looking for small eigenvalues (e.g. which='SI'), one should use the so-called shift-invert mode by specifying the sigma kwarg, so that the algorithm knows where to expect these eigenvalues. Nonetheless, none of my attempts yielded any results...
Could someone more experienced with this function give me a hand in order to make this work?
Here follows a whole code snippet:
import numpy as np
from matplotlib import pyplot as plt
from scipy.sparse import csc_matrix, linalg as sla
A = np.load('hill_mat.npy')
print('A shape =', A.shape)
print('A dtype =', A.dtype)
print('A sparsity ratio =', (np.prod(A.shape) - np.count_nonzero(A)) / np.prod(A.shape) * 100, '%')
# quick look at the structure of A
plt.imshow(np.where(A > 1e-15,1.,0), interpolation='None')
# direct
w_dense = np.linalg.eigvals(A)
idx = np.argsort(abs(w_dense.imag))
w_dense = w_dense[idx]
# sparse
w_sparse = sla.eigs(csc_matrix(A), k=100, sigma=0+0j, which='SI', return_eigenvectors=False)

Problem finally solved. I guess I should have read the documentation more carefully, but still, the following is quite counter-intuitive and could be better emphasized in my opinion:
... ARPACK contains a mode that allows a quick determination of non-external eigenvalues: shift-invert mode. As mentioned above, this mode involves transforming the eigenvalue problem to an equivalent problem with different eigenvalues. In this case, we hope to find eigenvalues near zero, so we'll choose sigma = 0. The transformed eigenvalues will then satisfy ν = 1/(λ − σ) = 1/λ, so our small eigenvalues λ become large eigenvalues ν.
This way, when looking for small eigenvalues, in order to help ARPACK do its work, one should activate shift-invert mode by specifying an appropriate sigma value while also reversing the subset requested in the which keyword argument, since the small eigenvalues of the original problem become the large eigenvalues of the shift-inverted one.
Thus, it is simply a matter of executing:
w_sparse = sla.eigs(csc_matrix(A), k=100, sigma=0+0j, which='LM', return_eigenvectors=False, maxiter=2000)
idx = np.argsort(abs(w_sparse.imag))
w_sparse = w_sparse[idx]
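As a sanity check (not part of the original solution), the shift-inverted results for the small 748x748 example can be compared against the dense solver; the two sets should agree up to ordering and round-off, although ARPACK may pick slightly different members right at the k-th boundary:
# compare against the 100 dense eigenvalues of smallest |imag| (sanity check only)
w_ref = np.linalg.eigvals(A)
w_ref = w_ref[np.argsort(abs(w_ref.imag))][:100]
print(np.allclose(np.sort_complex(w_ref), np.sort_complex(w_sparse), atol=1e-8))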
Therefore, I can only hope this mistake helps someone else :)

Related

How to apply the colamd algorithm on a sparse matrix in python?

I am trying to apply the Colamd (Column approximate minimum degree permutation) on a scipy sparse matrix.
An incomplete solution can be found in the scipy docs:
from scipy.sparse import csc_matrix, linalg as sla
A = csc_matrix([[1,2,0,4],[1,0,0,1],[1,0,2,1],[2,2,1,0.]])
lu = sla.splu(A,permc_spec = 'COLAMD')
print(lu.perm_c)
However, if A is a singular matrix like the one below, a 'RuntimeError: Factor is exactly singular' is raised.
A = csc_matrix([[0,1,0], [0,2,1], [0,3,0]])
To sum up, I am looking for a COLAMD code/algorithm in Python that works the way MATLAB's colamd(S) method does (i.e. also with singular matrices).
I cannot call MATLAB code from Python.
Thank you
Not sure if this will give you the exact same solution (since it's not clear to me how stable AMD/COLAMD is), but I was able to get a solution by adding a small epsilon times the identity matrix to make A nonsingular. As you make epsilon small (e.g. machine epsilon), I expect the permutation to remain reasonable.
import numpy as np
from scipy.sparse import csc_matrix, linalg as sla

epsilon = 1e-6
A = csc_matrix([[0, 1, 0], [0, 2, 1], [0, 3, 0]]) + epsilon * csc_matrix(np.eye(3))
lu = sla.splu(A, permc_spec='COLAMD')
print(lu.perm_c)
> [0 1 2]
(note for your example problem; pretty much any positive epsilon other than 1.0 gave this permutation)
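If this comes up repeatedly, the epsilon trick can be wrapped in a small helper. This is only a sketch; the name colamd_perm and the default eps are my own choices, not a scipy API:
from scipy.sparse import csc_matrix, identity, linalg as sla

def colamd_perm(A, eps=1e-6):
    # regularize the (possibly singular) matrix with eps*I so that splu can
    # factorize it and report the COLAMD column permutation
    A = csc_matrix(A, dtype=float)
    reg = csc_matrix(A + eps * identity(A.shape[0]))
    return sla.splu(reg, permc_spec='COLAMD').perm_c

print(colamd_perm([[0, 1, 0], [0, 2, 1], [0, 3, 0]]))  # [0 1 2] for this example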

numpy.sum transition to kahan but with masked arrays for increased precision

I have a multi-array stack of data that is masked to exclude 'bad' or problematic values along the 3rd dimension. The current code uses np.sum, but the limited precision (for both large and small numbers) has negatively impacted results. I've attempted to implement the kahan_sum referenced here, but it does not account for masked arrays, so the results are not comparable (because of the masking). My hope is that the added precision from a Kahan summation with a compensation accumulator will let downstream operations carry less error.
Source/research:
https://github.com/numpy/numpy/issues/8786
Kahan summation
Python floating point precision sum (I've jacked up the precision as far as possible but it doesn't help)
import numpy as np
import numpy.ma as ma

def kahan_sum(a, axis=None):
    # compensated (Kahan) summation along `axis`
    s = np.zeros(a.shape[:axis] + a.shape[axis+1:])
    c = np.zeros(s.shape)
    for i in range(a.shape[axis]):
        # http://stackoverflow.com/a/42817610/353337
        y = a[(slice(None),) * axis + (i,)] - c
        t = s + y
        c = (t - s) - y
        s = t.copy()
    return s

data = np.random.rand(5, 5, 5)
dd = ma.masked_array(data=data, mask=np.random.rand(5, 5, 5) < 0.2)
I want to sum along the 3rd dimension (axis=2), as that's essentially my 'stack' of photos.
The masks are not coming out as I expected. It's possible I'm just overtired...
np.sum(dd, axis=2)
kahan_sum(dd, axis=2)
np.sum provides a fully populated array of data and excludes the 'masked' values.
kahan_sum essentially OR'd all of the masked values together, and I've been unable to work out a pattern for it.
Printing the mask makes it pretty evident that that's where the problem is; I'm just not figuring out how to fix it or why it's behaving the way it is.
Thank you.
If you really need more precision, consider using math.fsum, which is accurate to fp resolution. If A is your 3D masked array, something like:
import math
import numpy as np

i, j, k = A.shape
np.frompyfunc(lambda i, j: math.fsum(A[i, j].compressed().tolist()), 2, 1)(*np.ogrid[:i, :j])
But before that I'd triple-check that np.sum really isn't good enough. As far as I know it uses pairwise summation along contiguous axes, which in practice tends to be pretty good.
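For completeness, here is one way the posted kahan_sum could be made mask-aware, assuming the intended semantics match np.sum on a masked array (masked entries contribute nothing, and positions masked everywhere along the axis stay masked) and a non-negative integer axis. A sketch only, not the asker's code:
import numpy as np
import numpy.ma as ma

def kahan_sum_masked(a, axis=0):
    a = ma.asanyarray(a)
    filled = a.filled(0.0)                       # masked entries contribute 0 to the sum
    s = np.zeros(filled.shape[:axis] + filled.shape[axis+1:])
    c = np.zeros_like(s)                         # running compensation term
    for i in range(filled.shape[axis]):
        y = np.take(filled, i, axis=axis) - c
        t = s + y
        c = (t - s) - y
        s = t
    # keep positions masked where every entry along `axis` was masked
    return ma.masked_array(s, mask=ma.getmaskarray(a).all(axis=axis))

dd = ma.masked_array(data=np.random.rand(5, 5, 5), mask=np.random.rand(5, 5, 5) < 0.2)
print(np.allclose(kahan_sum_masked(dd, axis=2).filled(0), np.sum(dd, axis=2).filled(0)))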

theano gradient with respect to matrix row

As the title suggests, I would like to compute the gradient with respect to a matrix row. In code:
import numpy.random as rng
import theano.tensor as T
from theano import function
t_x = T.matrix('X')
t_w = T.matrix('W')
t_y = T.dot(t_x, t_w.T)
t_g = T.grad(t_y[0,0], t_x[0]) # my wish, but DisconnectedInputError
t_g = T.grad(t_y[0,0], t_x) # no problems, but a lot of unnecessary zeros
f = function([t_x, t_w], [t_y, t_g])
y,g = f(rng.randn(2,5), rng.randn(7,5))
As the comments indicate, the code works without any problems when I compute the gradient with respect to the entire matrix. In that case the gradient is correctly computed, but the problem is that the result has non-zero entries only in row 0 (because the other rows of x obviously do not appear in the equations for the first row of y).
I have found this question, suggesting to store all rows of the matrix in separate variables and build graphs from these variables. In my setting though, I have no idea how many rows there might be in X.
Would anybody have an idea how to get the gradient with respect to a single row of a matrix or how I could omit the extra zeros in the output? If anybody would have suggestions how an arbitrary amount of vectors can be stacked, that should work as well, I guess.
I realised that it is possible to get rid of the zeros when computing derivatives with respect to the entries in row i:
t_g = T.grad(t_y[i,0], t_x)[i]
and for computing the Jacobian, I found out that
t_g = T.jacobian(t_y[i], t_x)[:,i]
does the trick. However it seems to have a rather heavy impact on computation speed.
It would also be possible to approach this problem mathematically. The Jacobian of the matrix multiplication t_y w.r.t. t_x is simply the transpose of t_w.T, which is t_w in this case (the transpose of the transpose is the original matrix). Thus, the computation would be as simple as
t_g = t_w
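A quick numerical check of that claim in plain numpy (my own verification, independent of theano): finite-differencing y[0] with respect to x[0] for y = x.dot(W.T) should reproduce W.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 5))
W = rng.standard_normal((7, 5))

eps = 1e-6
jac = np.empty((7, 5))
for k in range(5):                        # perturb one entry of row x[0] at a time
    dx = np.zeros_like(x)
    dx[0, k] = eps
    jac[:, k] = ((x + dx).dot(W.T)[0] - x.dot(W.T)[0]) / eps
print(np.allclose(jac, W, atol=1e-4))     # True up to finite-difference error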

Numpy Cholesky decomposition LinAlgError

In my attempt to perform Cholesky decomposition on a variance-covariance matrix for a 2D array with periodic boundary conditions, I always get, under certain parameter combinations, LinAlgError: Matrix is not positive definite - Cholesky decomposition cannot be computed. I'm not sure if it's a numpy.linalg issue or an implementation issue, as the script is straightforward:
import numpy as np

sigma = 3.
U = 4

def FromListToGrid(l_):
    i = np.floor(l_ / U)
    j = l_ - i * U
    return np.array((i, j))

Ulist = range(U**2)
Cov = []
for l in Ulist:
    di = np.array([np.abs(FromListToGrid(l)[0] - FromListToGrid(i)[0]) for i, x in enumerate(Ulist)])
    di = np.minimum(di, U - di)
    dj = np.array([np.abs(FromListToGrid(l)[1] - FromListToGrid(i)[1]) for i, x in enumerate(Ulist)])
    dj = np.minimum(dj, U - dj)
    d = np.sqrt(di**2 + dj**2)
    Cov.append(np.exp(-d / sigma))
Cov = np.vstack(Cov)
W = np.linalg.cholesky(Cov)
Attempts to remove potential singularities also failed to resolve the problem. Any help is much appreciated.
Digging a bit deeper into the problem, I tried printing the eigenvalues of the Cov matrix.
print(np.linalg.eigvalsh(Cov))
And the answer turns out to be this
[-0.0801339 -0.0801339 0.12653595 0.12653595 0.12653595 0.12653595 0.14847999 0.36269785 0.36269785 0.36269785 0.36269785 1.09439988 1.09439988 1.09439988 1.09439988 9.6772531 ]
Aha! Notice the first two negative eigenvalues? A matrix is positive definite if and only if all its eigenvalues are positive, so the problem with this matrix is not that it's close to 'zero', but that it has 'negative' eigenvalues. To extend duffymo's analogy, this is the linear-algebra equivalent of trying to take the square root of a negative number.
Now, let's try to perform the same operation, but this time with scipy.
scipy.linalg.cholesky(Cov, lower=True)
And that fails with a somewhat more informative message:
numpy.linalg.linalg.LinAlgError: 12-th leading minor not positive definite
That tells us a bit more (though I couldn't really understand why it's complaining about the 12-th leading minor).
Bottom line: the matrix is not merely 'close to zero', it is genuinely 'negative' (indefinite).
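One common work-around, if slightly perturbing the covariance is acceptable for the application (my own suggestion, not part of the diagnosis above): since Cov is symmetric, clip the small negative eigenvalues to zero, rebuild the matrix, and add a tiny diagonal jitter so the Cholesky factorization goes through.
# project Cov onto the positive semi-definite cone, then add jitter for a strict factorization
eigval, eigvec = np.linalg.eigh(Cov)
Cov_psd = eigvec.dot(np.diag(np.clip(eigval, 0.0, None))).dot(eigvec.T)
W = np.linalg.cholesky(Cov_psd + 1e-10 * np.eye(Cov_psd.shape[0]))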
The problem is the data you're feeding to it. The matrix is singular, according to the solver. That means a zero or near-zero diagonal element, so inversion is impossible.
It'd be easier to diagnose if you could provide a small version of the matrix.
Zero diagonals aren't the only way to create a singularity. If two rows are proportional to each other then you don't need both in the solution; they're redundant. It's more complex than just looking for zeroes on the diagonal.
If your matrix is correct, you have a non-empty null space. You'll need to change algorithms to something like SVD.
See my comment below.
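If the end goal is to draw correlated samples, a Cholesky factor is not strictly required; in the spirit of the SVD suggestion above, an eigendecomposition-based factor works even for positive semi-definite (or numerically indefinite) covariances. A sketch, assuming Cov is the symmetric matrix built in the question:
eigval, eigvec = np.linalg.eigh(Cov)
L = eigvec * np.sqrt(np.clip(eigval, 0.0, None))      # Cov is approximately L.dot(L.T)
samples = L.dot(np.random.standard_normal(Cov.shape[0]))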

resampling, interpolating matrix

I'm trying to interpolate some data for the purpose of plotting. For instance, given N data points, I'd like to be able to generate a "smooth" plot, made up of 10*N or so interpolated data points.
My approach is to generate an N-by-10*N matrix and compute the inner product of the original vector and the matrix I generated, yielding a 1-by-10*N vector. I've already worked out the math I'd like to use for the interpolation, but my code is pretty slow. I'm pretty new to Python, so I'm hopeful that some of the experts here can give me ideas of ways to speed up my code.
I think part of the problem is that generating the matrix requires 10*N^2 calls to the following function:
def sinc(x):
    import math
    try:
        return math.sin(math.pi * x) / (math.pi * x)
    except ZeroDivisionError:
        return 1.0
(This comes from sampling theory. Essentially, I'm attempting to recreate a signal from its samples, and upsample it to a higher frequency.)
The matrix is generated by the following:
def resampleMatrix(Tso, Tsf, o, f):
    from numpy import array as npar
    retval = []
    for i in range(f):
        retval.append([sinc((Tsf*i - Tso*j)/Tso) for j in range(o)])
    return npar(retval)
I'm considering breaking up the task into smaller pieces because I don't like the idea of an N^2 matrix sitting in memory. I could probably make 'resampleMatrix' into a generator function and do the inner product row-by-row, but I don't think that will speed up my code much until I start paging stuff in and out of memory.
Thanks in advance for your suggestions!
This is upsampling. See Help with resampling/upsampling for some example solutions.
A fast way to do this (for offline data, like your plotting application) is to use FFTs. This is what SciPy's native resample() function does. It assumes a periodic signal, though, so it's not exactly the same. See this reference:
Here's the second issue regarding time-domain real signal interpolation, and it's a big deal indeed. This exact interpolation algorithm provides correct results only if the original x(n) sequence is periodic within its full time interval.
Your function assumes the signal's samples are all 0 outside of the defined range, so the two methods will diverge away from the center point. If you pad the signal with lots of zeros first, it will produce a very close result (there are several more zeros past the edge of the plot, not shown here).
Cubic interpolation won't be correct for resampling purposes. This example is an extreme case (near the sampling frequency), but as you can see, cubic interpolation isn't even close. For lower frequencies it should be pretty accurate.
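Going back to the FFT-based route, a minimal illustration (my own sketch; the signal and the 10x factor are arbitrary):
import numpy as np
from scipy.signal import resample

N = 32
y = np.random.rand(N)            # original samples
y_up = resample(y, 10 * N)       # FFT-based (periodic) interpolation to 10*N points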
If you want to interpolate data in a quite general and fast way, splines or polynomials are very useful. Scipy has the scipy.interpolate module, which is very useful. You can find many examples in the official pages.
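For example (a minimal sketch, not taken from the docs): cubic interpolation of N samples onto a 10x denser grid.
import numpy as np
from scipy.interpolate import interp1d

x = np.arange(20)
y = np.sin(x / 3.0)
f = interp1d(x, y, kind='cubic')
y_dense = f(np.linspace(x[0], x[-1], 10 * len(x)))   # evaluate on a 10x denser grid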
Your question isn't entirely clear; you're trying to optimize the code you posted, right?
Re-writing sinc like this should speed it up considerably. This implementation avoids checking that the math module is imported on every call, doesn't do attribute access three times, and replaces exception handling with a conditional expression:
from math import sin, pi
def sinc(x):
return (sin(pi * x) / (pi * x)) if x != 0 else 1.0
You could also avoid creating the matrix twice (and holding two copies in memory in parallel) by filling a numpy array directly (rather than building it from a list of lists):
import numpy

def resampleMatrix(Tso, Tsf, o, f):
    retval = numpy.zeros((f, o))
    for i in xrange(f):
        for j in xrange(o):
            retval[i, j] = sinc((Tsf*i - Tso*j)/Tso)
    return retval
(replace xrange with range on Python 3.0 and above)
Finally, you can create entire rows with numpy.arange and then call numpy.sinc on each row, or even on the entire matrix:
def resampleMatrix(Tso, Tsf, o, f):
    retval = numpy.zeros((f, o))
    for i in xrange(f):
        retval[i] = numpy.arange(Tsf*i / Tso, Tsf*i / Tso - o, -1.0)
    return numpy.sinc(retval)
This should be significantly faster than your original implementation. Try different combinations of these ideas and test their performance, see which works out the best!
I'm not quite sure what you're trying to do, but there are some speedups you can apply when creating the matrix. Braincore's suggestion to use numpy.sinc is a first step, but the second is to realize that numpy functions want to work on numpy arrays, where they can run their loops at C speed, much faster than operating on individual elements.
def resampleMatrix(Tso, Tsf, o, f):
    retval = numpy.sinc((Tsf*numpy.arange(f)[:, numpy.newaxis]
                         - Tso*numpy.arange(o)[numpy.newaxis, :]) / Tso)
    return retval
The trick is that by indexing the aranges with numpy.newaxis, numpy converts the array of shape (f,) to one of shape (f, 1), and the array of shape (o,) to shape (1, o). At the subtraction step, numpy "broadcasts" each input so that it acts as an f x o shaped array and then does the subtraction. ("Broadcast" is numpy's term, reflecting the fact that no additional copy is made to stretch the f x 1 array to f x o.)
Now the numpy.sinc can iterate over all the elements in compiled code, much quicker than any for-loop you could write.
(There's an additional speed-up available if you do the division by Tso before the subtraction, since for the second term the division cancels the multiplication by Tso.)
The only drawback is that you now pay for an extra Nx10*N array to hold the difference. This might be a dealbreaker if N is large and memory is an issue.
Otherwise, you should be able to write this using numpy.convolve. From what little I just learned about sinc-interpolation, I'd say you want something like numpy.convolve(orig,numpy.sinc(numpy.arange(j)),mode="same"). But I'm probably wrong about the specifics.
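For what it's worth, that convolution idea roughly corresponds to zero-stuffing followed by filtering with a truncated sinc kernel. A sketch of that reading (the kernel half-width is an arbitrary choice, and this is not a verified equivalent of the matrix approach):
import numpy as np

def upsample_sinc(orig, factor=10, half_width=20):
    stuffed = np.zeros(len(orig) * factor)
    stuffed[::factor] = orig                               # insert factor-1 zeros between samples
    t = np.arange(-half_width * factor, half_width * factor + 1) / factor
    return np.convolve(stuffed, np.sinc(t), mode='same')   # truncated ideal interpolation filter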
If your only interest is to 'generate a "smooth" plot' I would just go with a simple polynomial spline curve fit:
For any two adjacent data points, the coefficients of a third-degree polynomial can be computed from the coordinates of those two data points and the two additional points to their left and right (disregarding boundary points). This will generate points on a nice smooth curve with a continuous first derivative. There's a straightforward formula for converting 4 coordinates to 4 polynomial coefficients, but I don't want to deprive you of the fun of looking it up ;o).
Here's a minimal example of 1d interpolation with scipy -- not as much fun as reinventing, but.
The plot looks like sinc, which is no coincidence: try googling spline resample "approximate sinc". (Presumably less local / more taps ⇒ better approximation, but I have no idea how local UnivariateSplines are.)
""" interpolate with scipy.interpolate.UnivariateSpline """
from __future__ import division
import numpy as np
from scipy.interpolate import UnivariateSpline
import pylab as pl
N = 10
H = 8
x = np.arange(N+1)
xup = np.arange( 0, N, 1/H )
y = np.zeros(N+1); y[N//2] = 100
interpolator = UnivariateSpline( x, y, k=3, s=0 ) # s=0 interpolates
yup = interpolator( xup )
np.set_printoptions( 1, threshold=100, suppress=True ) # .1f
print "yup:", yup
pl.plot( x, y, "green", xup, yup, "blue" )
pl.show()
Added feb 2010: see also basic-spline-interpolation-in-a-few-lines-of-numpy
Small improvement. Use the built-in numpy.sinc(x) function which runs in compiled C code.
Possible larger improvement: Can you do the interpolation on the fly (as the plotting occurs)? Or are you tied to a plotting library that only accepts a matrix?
I recommend that you check your algorithm, as it is a non-trivial problem. Specifically, I suggest you gain access to the article "Function Plotting Using Conic Splines" (IEEE Computer Graphics and Applications) by Hu and Pavlidis (1991). Their algorithm implementation allows for adaptive sampling of the function, such that the rendering time is smaller than with regularly spaced approaches.
The abstract follows:
A method is presented whereby, given a mathematical description of a function, a conic spline approximating the plot of the function is produced. Conic arcs were selected as the primitive curves because there are simple incremental plotting algorithms for conics already included in some device drivers, and there are simple algorithms for local approximations by conics. A split-and-merge algorithm for choosing the knots adaptively, according to shape analysis of the original function based on its first-order derivatives, is introduced.
