For some calculations I need an eigenvalue decomposition. I tried out the numpy functions and noticed some very bad behavior! Look at this:
import numpy as np
N = 3
A = np.matrix(np.random.random([N,N]))
A = 0.5*(A.H + A) #Hermitian part
la, V = np.linalg.eig(A)
VI = np.matrix(np.linalg.inv(V))
V = np.matrix(V)
/edit: I chose a Hermitian matrix now, so it is normal.
The mathematics says that we should have VI * V.H = 1 and V.H * A * V = VI * A * V = D, where D is the diagonal matrix of the eigenvalues. The result I got for a random matrix was:
print(A.H*A - A*A.H)
[[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]
This shows that A is normal.
print(V.H*A*V)
[[ 1.71513832e+00 5.55111512e-17 -1.11022302e-16]
[ -1.11022302e-16 -5.17694280e-01 0.00000000e+00]
[ -7.63278329e-17 -4.51028104e-17 1.28559996e-01]]
print(VI*A*V)
[[ 1.71513832e+00 -2.77555756e-16 -2.22044605e-16]
[ 7.49400542e-16 -5.17694280e-01 -4.16333634e-17]
[ -3.33066907e-16 1.70002901e-16 1.28559996e-01]]
These two work correctly, since the off-diagonal entries are very small and the diagonal holds the eigenvalues.
print(VI*V.H)
[[ 0.50868822 -0.57398479 0.64169912]
[ 0.16362266 0.79620605 0.58248052]
[-0.84525968 -0.19130446 0.49893755]]
This should be the identity, but it is far away from it.
So folks, now tell me: what has gone wrong in computing the eigenvectors, even in this small example? Can anybody tell me what I have to watch out for when using these functions, and what I can do about this big mismatch?
Quote from numpy.linalg.eig documentation:
Likewise, the (complex-valued) matrix of eigenvectors v is unitary if the matrix a is normal, i.e., if dot(a, a.H) = dot(a.H, a), where a.H denotes the conjugate transpose of a.
Obviously, in the example you have, A^H A != A A^H, so the matrix V is not unitary.
Therefore, V.T.conj() is not related to the inverse of V.
The most common case where this assumption is correct is for Hermitian matrices.
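For what it's worth, here is a small sketch of my own (not from the original answer) showing the Hermitian case: np.linalg.eigh is designed for symmetric/Hermitian matrices and returns an orthonormal set of eigenvectors, so the conjugate transpose really does act as the inverse of V:
import numpy as np

N = 3
A = np.random.random((N, N))
A = 0.5 * (A + A.T)                                   # real symmetric, hence Hermitian and normal

la, V = np.linalg.eigh(A)                             # eigh assumes a symmetric/Hermitian matrix
print(np.allclose(V.conj().T @ V, np.eye(N)))         # True: V is unitary/orthogonal
print(np.allclose(V.conj().T @ A @ V, np.diag(la)))   # True: V.H diagonalizes A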
I am trying to calculate the Pearson correlation coefficient between two vectors of length 2 using np.corrcoef. When the vectors have a length other than two, it works fine; see for example:
import numpy as np
x = np.random.uniform(-10, 10, 3)
y = np.random.uniform(-10, 10, 3)
print(x, y)
print(np.corrcoef(x,y))
Output:
[-6.59840638 -1.81100446 5.6158669 ] [ 6.7200348 -7.0373677 -2.11395157]
[[ 1. -0.53299763]
[-0.53299763 1. ]]
However, when the length is exactly two, the correlation is wrong, coming out as only 1 or -1:
import numpy as np
x = np.random.uniform(-10, 10, 2)
y = np.random.uniform(-10, 10, 2)
print(x, y)
print(np.corrcoef(x,y))
Output 1:
[-2.61268708 8.32602293] [6.42020314 3.43806504]
[[ 1. -1.]
[-1. 1.]]
Output 2:
[ 5.04249697 -3.6599369 ] [6.12936665 3.15827974]
[[1. 1.]
[1. 1.]]
Output 3:
[7.33503682 7.7145613 ] [-9.54304108 7.43840944]
[[1. 1.]
[1. 1.]]
Question: What's happening and how to solve it?
There are a couple of misunderstandings leading to your confusion:
I'll follow numpy's row-major convention: "Each row of x represents a variable, and each column a single observation of all those variables."
The Pearson correlation coefficient describes the linear relationship between 2 variables. If you only have 2 data points for each variable, you can always fit a perfect line through them, so after the normalization you'll always get 1 or -1.
A covariance or correlation matrix is usually calculated among the components of a random vector X = (X1, ..., Xn).T. When you say you want the correlation between 2 vectors, it is unclear whether you actually want the cross-correlation between X and Y, in which case you need np.correlate.
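To illustrate why two data points always give 1 or -1, here is a small sketch of my own using the numbers from Output 1: after centering, each variable consists of just two values that are mirror images of each other, so the normalized dot product can only be +1 or -1:
import numpy as np

x = np.array([-2.61268708, 8.32602293])
y = np.array([6.42020314, 3.43806504])

xc = x - x.mean()    # two centered values of the form [-a, +a]
yc = y - y.mean()    # two centered values of the form [+b, -b] here
r = (xc @ yc) / (np.linalg.norm(xc) * np.linalg.norm(yc))
print(r)                          # -1.0 (up to rounding)
print(np.corrcoef(x, y)[0, 1])    # same value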
I was trying to implement the matrix exponential function as in scipy.linalg.expm. I took inspiration from kaityo256's GitHub repository and wrote down the following.
from scipy.linalg import expm
from scipy.linalg import eigh
from scipy.linalg import inv
from math import exp as math_exp
from numpy import array, zeros
from numpy.random import random_sample
from numpy.testing import assert_allclose
def diag2sqr(x):
    '''Makes a square matrix from a diagonal one.

    Takes a 1d array, determines its data type and
    finds out its shape. Makes an empty square matrix
    with both dimensions equal to the largest (nonzero)
    dimension of the 1d array. It then fills the elements
    of the 1d array into the diagonal slots of the empty
    square one.

    Parameters
    ----------
    x : ndarray
        ndarray to be converted to a square ndarray

    Returns
    -------
    xsqr : ndarray
        ndarray with diagonal the same as that of x,
        all other elements zero,
        dtype the same as that of x
    '''
    x_flat = x.ravel()
    xsqr = zeros((x_flat.shape[0], x_flat.shape[0]), dtype=x.dtype)
    # Making the empty matrix
    for i in range(x_flat.shape[0]):
        xsqr[i, i] = x_flat[i]
        # filling up the ith element
    print('xsqr', xsqr)
    return xsqr
def kaityo_expm(x):
    '''Exponentiates an ndarray (kaityo).

    Exponentiates an ndarray in the most naive way.

    Parameters
    ----------
    x : ndarray
        The ndarray to be exponentiated

    Returns
    -------
    kexpm : ndarray
        x after exponentiating
    '''
    rx, ux = eigh(x)
    # Find eigenvalues and eigenvectors
    # eigenvectors composed to form a unitary
    ux_inv = inv(ux)
    # Inverse of the unitary
    # tx = diag([array([math_exp(i) for i in rx]).ravel()])
    # tx = array([math_exp(i) for i in rx])
    tx = diag2sqr(array([math_exp(i) for i in rx]))
    # Constructing the diagonal matrix
    kexpm1 = tx @ ux_inv
    kexpm = ux @ kexpm1
    return kexpm
Afterwards, I tried to test the above code versus scipy.linalg.expm.
x = random_sample((10, 10))
assert_allclose(expm(x), kaityo_expm(x))
This leads to the following output.
AssertionError:
Not equal to tolerance rtol=1e-07, atol=0
Mismatch: 100%
Max absolute difference: 7.04655733
Max relative difference: 0.59875635
x: array([[18.032424, 16.224408, 12.432163, 16.614248, 12.85653 , 13.705387,
15.096966, 10.577946, 18.399573, 17.938062],
[16.352809, 17.525898, 12.79079 , 16.295562, 13.512996, 14.407979,...
y: array([[18.649103, 13.157682, 11.264763, 16.099163, 15.2293 , 17.854499,
11.691586, 13.412066, 15.023189, 15.598455],
[13.157682, 13.612502, 9.628261, 12.659313, 13.559437, 13.382417,..
Obviously, the two implementations differ.
The questions are as follows:
Is it acceptable for them to differ?
Is my implementation wrong?
If my implementation is wrong, how do I fix it?
If my implementation is correct, when is it safe to use scipy.linalg.expm?
I have seen the following questions:
Matrix exponentiation with scipy: expm, expm2 and expm3
From a mathematical point of view, the exponential of a matrix is defined using the Taylor series of the exponential, so:
exp(A) = I + A + A^2/2! + A^3/3! + ...
If A is a diagonal square matrix, the exponential is simply the diagonal matrix of the exponentials of its diagonal entries:
exp(diag(l1, ..., ln)) = diag(exp(l1), ..., exp(ln))
The problem arises when A is a generic square matrix, so before taking the exponential you need to diagonalize it using its eigenvalues and eigenvectors:
A = U Λ U^-1
with U the matrix of eigenvectors and Λ (Lambda) the matrix with the eigenvalues on the diagonal. At this point we are close to finding the exponential of a matrix:
exp(A) = U exp(Λ) U^-1
Now let's implement this result in a simple script:
>>> import numpy as np
>>> import scipy.linalg as ln
>>> A = [[2/3, -4/3, 2],
...      [5/6, 4/3, -2],
...      [5/6, -2/3, 0]]
>>> A = np.matrix(A)
>>> print(A)
[[ 0.66666667 -1.33333333 2. ]
[ 0.83333333 1.33333333 -2. ]
[ 0.83333333 -0.66666667 0. ]]
>>> eigvalue, eigvectors = np.linalg.eig(A)
>>> print("eigvalue: ", eigvalue)
>>> print("eigvectors:")
>>> print(eigvectors)
eigvalue: [ 1. -1. 2.]
eigvectors:
[[ 0.81649658 0.27216553 0.87287156]
[ 0.40824829 -0.68041382 -0.21821789]
[ 0.40824829 -0.68041382 0.43643578]]
>>> e_Lambda = np.eye(np.size(A, 0))*(np.exp(eigvalue))
>>> print(e_Lambda)
[[2.71828183 0. 0. ]
[0. 0.36787944 0. ]
[0. 0. 7.3890561 ]]
>>> e_A = eigvectors*e_Lambda*eigvectors.I
>>> print(e_A)
[[ 2.3265481 -6.22769903 7.01116649]
[ 0.97933433 4.27520659 -3.51559341]
[ 0.97933433 -3.11384951 3.87346269]]
>>> e_A2 = ln.expm(A)
>>> print(e_A2)
[[ 2.3265481 -6.22769903 7.01116649]
[ 0.97933433 4.27520659 -3.51559341]
[ 0.97933433 -3.11384951 3.87346269]]
>>> np.testing.assert_allclose(e_A, e_A2)
>>> print(e_A - e_A2)
[[-1.77635684e-15 1.77635684e-15 -8.88178420e-16]
[ 4.44089210e-16 -1.77635684e-15 8.88178420e-16]
[-2.22044605e-16 0.00000000e+00 4.44089210e-16]]
We see that the result is basically the same, so I think it's safe to use scipy.linalg.expm for matrix exponentiation.
I created a repo with a notebook for further testing.
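One additional observation of my own (not part of the answer above): kaityo_expm in the question uses scipy.linalg.eigh, which assumes a symmetric/Hermitian input, while random_sample((10, 10)) is generally not symmetric; that alone makes the comparison with expm fail. A minimal sketch assuming the test matrix is symmetrized first:
import numpy as np
from scipy.linalg import expm, eigh

x = np.random.random_sample((10, 10))
x = 0.5 * (x + x.T)                        # symmetrize, as eigh expects

w, U = eigh(x)                             # orthonormal eigenvectors for a symmetric matrix
expm_eig = U @ np.diag(np.exp(w)) @ U.T    # U.T equals inv(U) here
print(np.allclose(expm_eig, expm(x)))      # True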
import cvxpy as cp
import numpy as np

n = 3
PP = cp.Variable((n, n), "PP")
KK = [[2, 1, 3], [1, 2, 1], [3, 1, 2]]
s = np.array([[.1, .4, .5]]).T
t = np.array([[.4, .2, .4]]).T
e = np.ones((n, 1))
x = PP.T @ e - s
y = PP @ e - t
for b in range(1, 21):
    obj = (1/4/b) * (cp.quad_form(x, KK) + cp.quad_form(y, KK)) - cp.trace(KK @ PP)
    prob = cp.Problem(cp.Minimize(obj), [PP >= 0, cp.sum(PP) == 1])
    obj = prob.solve()
    print("status:", prob.status)
    print("obj:", obj)
    print(PP.value)
When I run this, I get
cvxpy.error.DCPError: Problem does not follow DCP rules. Specifically:
The objective is not DCP. Its following subexpressions are not:
QuadForm(PP.T * [[1.]
[1.]
[1.]] + -[[0.1]
[0.4]
[0.5]], [[2. 1. 3.]
[1. 2. 1.]
[3. 1. 2.]])
I don't see why I'm getting this error when my matrix KK is clearly PSD. Why is this happening?
Duplicate here at
https://scicomp.stackexchange.com/q/34657/34383
EDIT: I spoke too soon. Your matrix KK is not PSD (it has an eigenvalue -1). For people who see this issue with a matrix that should mathematically be PSD, I've left my original answer below.
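As a quick check of that claim (a small sketch I am adding, not part of the original answer):
import numpy as np

KK = np.array([[2, 1, 3],
               [1, 2, 1],
               [3, 1, 2]])
print(np.linalg.eigvalsh(KK))   # one eigenvalue is -1, so KK is not PSD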
Your matrix is likely, numerically, not quite PSD, even though mathematically it is. This is a limitation of CVXPY's quad_form atom (which we may try to address later).
For now, you can take a (matrix) square root sqrt_K of K (using, e.g., scipy.linalg.sqrtm) and replace the quad_form atom with cp.sum_squares(sqrt_K @ y).
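Here is a minimal sketch of that workaround (a hypothetical example of mine, using a small matrix K that is genuinely PSD; it is not the KK from the question):
import cvxpy as cp
import numpy as np
from scipy.linalg import sqrtm

K = np.array([[2.0, 1.0],
              [1.0, 2.0]])                 # PSD by construction (eigenvalues 1 and 3)
sqrt_K = np.real(sqrtm(K))                 # matrix square root; drop tiny imaginary noise

y = cp.Variable(2)
# cp.quad_form(y, K) is mathematically equal to cp.sum_squares(sqrt_K @ y),
# but the sum_squares form is always recognized as DCP
prob = cp.Problem(cp.Minimize(cp.sum_squares(sqrt_K @ y) - 2 * y[0]))
prob.solve()
print(prob.status, y.value)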
I need to compute the Theil-Sen regression slope and intercept between values of two rasters in a GRASS GIS Python script. The two rasters in this example (xtile and ytile) both have the same dimensions, 250x250 pixels, and contain nodata (null) values.
So far I have used grass.script alone, so I am new to scipy. I read some tutorials and, based on them, came up with the following code, which I tried on the command line:
>>> from scipy import stats
>>> import grass.script as grass
>>> import grass.script.array as garray
>>> x = garray.array(mapname="xtile")
>>> y = garray.array(mapname="ytile")
>>> res_list = stats.theilslopes(y, x)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/scipy/stats/_stats_mstats_common.py", line 214, in theilslopes
deltax = x[:, np.newaxis] - x
MemoryError
Evidently, it would not be so simple.
Edit: I removed my thoughts about the problem being the dimensionality of the array; I was wrong. Now it seems the 250x250 array size is simply too much. Is that so? Any idea how to get around this?
Then there is another problem it seems. When I try to print the array x,
>>> print x
[[ 0. 0. 0. ... 0. 0. 0.]
[ 0. 0. 0. ... 0. 0. 0.]
[ 0. 402. 0. ... 0. 0. 0.]
...
[ 0. 0. 0. ... 0. 0. 0.]
[ 0. 0. 0. ... 0. 0. 0.]
[ 0. 0. 0. ... 0. 0. 0.]]
it appears all the nodata values from the raster were read into the array as zeroes. In the raster in question the majority of pixels are nodata (or null, as GRASS calls it), and these should be ignored in the regression, i.e. if any value within the rasters x or y is nodata, the corresponding x,y pair should not be used in the regression computation. Is it possible to mark nodata values in the arrays so that they are ignored directly in this manner, or is it necessary to first filter out the nodata pairs from the pair of arrays?
Thank you.
I am not sure whether it is appropriate to answer my own question, but maybe it will be useful for others with a similar problem.
I was finally able to make it work. The working (i.e. it computes something) modified example solves both problems mentioned in the original question:
>>> from scipy import stats
>>> import numpy as np
>>> import grass.script as grass
>>> import grass.script.array as garray
>>> x = garray.array(mapname="xtile").reshape(-1)
>>> y = garray.array(mapname="ytile").reshape(-1)
>>> # Now the null raster values are changed to zeroes.
>>> # Let us filter out any x, y pair which contains a zero value.
>>> xfiltered = np.array([])
>>> yfiltered = np.array([])
>>> i = 0
>>> for xi in np.nditer(x):
...     if x[i] > 0:
...         if y[i] > 0:
...             xfiltered = np.append(xfiltered, [x[i]])
...             yfiltered = np.append(yfiltered, [y[i]])
...     i += 1
...
>>> # Compute the regression.
>>> res_list = stats.theilslopes(yfiltered, xfiltered)
>>> res_list
(0.8738738738738738, -26.207207207207148, 0.8327338129496403, 0.9155844155844156)
Explanation: before computing the regression, I filtered out all zero values, which were nodata in the original rasters (and possibly negative values as well; there should not be any, and if there were, it would mean defective data, since the physical meaning of the data is quantized reflectance). The use of reshape was not necessary for stats.theilslopes, as I verified experimentally with test data, but it makes the filtering of the two arrays much easier.
Now, I am still not sure why the filtering was needed for stats.theilslopes to finish without errors (although with all the zeroes in the data the results would be wrong anyway). It could be that filtering out the zeroes just made the data set small enough to fit in memory, but I do not believe so. I think the majority of zero values make it impossible to compute the median slope of the x,y pairs: if the majority of points have identical x,y values, then the majority of their pairs have undefined slope, and the median falls in that majority. But it could be something else entirely.
Also, as I am not a very skilled Python programmer, maybe the way I did this is not the most efficient; others could improve on it, for example along the lines sketched below.
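For reference, a vectorized version of the same filtering (a sketch, assuming as above that zero marks the nodata pixels and that x and y are the flattened arrays from the session):
import numpy as np
from scipy import stats

# x and y are the flattened raster arrays from above
mask = (x > 0) & (y > 0)              # keep only pairs where both values are valid
res_list = stats.theilslopes(y[mask], x[mask])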
One last remark: I probably have the x and y data reversed, if y is the dependent and x the independent variable. Intuitively I felt they should be in y,x order, but I see that in all the tutorials and docs it is always x,y. I left it as it was in the original question, since in some cases you may want the inverse regression line x = f(y), and this is not part of the problem I was trying to solve.
Let us consider the following matrix:
2 4
1 3
0 0
0 0
Creating this matrix in Python and computing the corresponding singular value decomposition can be done in a simple way:
A = np.array([[2, 4], [1, 3], [0, 0], [0, 0]])
u, s, v = np.linalg.svd(A)
When I printed the corresponding matrices, I got the following:
print(u)
print(np.diag(s))
print(v)
[[-0.81741556 -0.57604844  0.          0.        ]
 [-0.57604844  0.81741556  0.          0.        ]
 [ 0.          0.          1.          0.        ]
 [ 0.          0.          0.          1.        ]]
[[5.4649857  0.        ]
 [0.         0.36596619]]
[[-0.40455358 -0.9145143 ]
 [-0.9145143   0.40455358]]
Therefore the following code for reconstructing the original matrix does not work:
print(u.dot(np.dot(np.diag(s),v)))
How can I fix this problem? Thanks in advance.
In the formal definition of the SVD, the shape of s should be (4, 2). However, NumPy's routine returns an array of singular values of shape (2,). Furthermore, np.diag() doesn't know anything about how big s "should" be in the full decomposition. It just takes an array of shape (n,) and returns a 2D array of shape (n, n). So your product ends up with shapes (4, 4) * (2, 2) * (2, 2), which fails because the shapes in the first multiplication are incompatible.
To fix this, just construct an array of the correct size for s:
>>> u, s, v = np.linalg.svd(A)
>>> true_s = np.zeros((u.shape[1], v.shape[0]))
>>> true_s[:s.size, :s.size] = np.diag(s)
>>> np.allclose(u.dot(true_s).dot(v), A)
True
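Alternatively, if you only need the reconstruction, you can request the reduced ("economy") SVD, which makes the shapes line up without any padding:
>>> u, s, v = np.linalg.svd(A, full_matrices=False)   # u: (4, 2), s: (2,), v: (2, 2)
>>> np.allclose(u.dot(np.diag(s)).dot(v), A)
True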