Interpolate between two matrices with numpy - python

I have two HxW matrices A and B. I'd like to get an NxHxW matrix C such that C[0]=A, C[-1]=B, and each of the remaining N-2 slices are linearly interpolated between A and B. Is there a single numpy function I can do this with, without needing a for loop?

Just use np.linspace if you are looking for linear interpolation between just two endpoints (note that array-valued endpoints require NumPy >= 1.16).
A = np.array([[0, 1],
              [2, 3]])
B = np.array([[1, 3],
              [-1, -2]])
C = np.linspace(A, B, 4)  # <- change 4 to N; gives N slices, N-2 of them interpolated
C
array([[[ 0.        ,  1.        ],   # <-- C[0] is the A matrix
        [ 2.        ,  3.        ]],

       [[ 0.33333333,  1.66666667],
        [ 1.        ,  1.33333333]],  # <-- elementwise equally spaced values

       [[ 0.66666667,  2.33333333],
        [ 0.        , -0.33333333]],

       [[ 1.        ,  3.        ],   # <-- C[-1] is the B matrix
        [-1.        , -2.        ]]])
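For a general N as in the question, the same call applies directly; a small runnable sketch:

```python
import numpy as np

A = np.array([[0, 1], [2, 3]])
B = np.array([[1, 3], [-1, -2]])

N = 5  # total number of slices, including the two endpoints
C = np.linspace(A, B, N)  # shape (N, H, W); N-2 interpolated slices

# endpoints match A and B exactly; the middle slice is the elementwise average
assert np.allclose(C[0], A) and np.allclose(C[-1], B)
```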

Related

Getting first principal component and reduction in variance with PCA using Numpy

I am following this example here: https://machinelearningmastery.com/calculate-principal-component-analysis-scratch-python/
from numpy import array, mean, cov
from numpy.linalg import eig

A = array([[1, 2], [3, 4], [5, 6]])
print(A)
# calculate the mean of each column
M = mean(A.T, axis=1)
print(M)
# center columns by subtracting column means
C = A - M
print(C)
# calculate covariance matrix of centered matrix
V = cov(C.T)
print(V)
# eigendecomposition of covariance matrix
values, vectors = eig(V)
print(vectors)
print(values)
# project data
P = vectors.T.dot(C.T)
print(P.T)
which gives:
original data
[[1 2]
[3 4]
[5 6]]
column mean
[ 3. 4.]
centered matrix
[[-2. -2.]
[ 0. 0.]
[ 2. 2.]]
covariance matrix
[[ 4. 4.]
[ 4. 4.]]
vectors
[[ 0.70710678 -0.70710678]
[ 0.70710678 0.70710678]]
values
[ 8. 0.]
projected data
[[-2.82842712 0. ]
[ 0. 0. ]
[ 2.82842712 0. ]]
If I want to find the first principal direction, do I simply take the eigenvector that corresponds to the largest eigenvalue? Therefore: [0.70710678, 0.70710678]?
Building on this, is the first principal component that highest-eigenvalue eigenvector projected onto the data? Something like:
vectors[:,:1].T.dot(C.T)
which gives:
array([[-2.82842712, 0. , 2.82842712]])
I just fear I have the terminology confused, or I'm oversimplifying things. Thanks in advance!
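To pin the terminology down with runnable code, here is a sketch using the same toy data, with explicit imports: the first principal direction is the eigenvector paired with the largest eigenvalue, and the first principal component is the centered data projected onto that direction (eigenvector signs may flip between numpy versions, so only magnitudes are checked below).

```python
import numpy as np

A = np.array([[1, 2], [3, 4], [5, 6]])
C = A - A.mean(axis=0)               # center the columns
V = np.cov(C.T)                      # covariance matrix of the features
values, vectors = np.linalg.eig(V)   # eigendecomposition

order = np.argsort(values)[::-1]          # indices sorted by eigenvalue, descending
first_direction = vectors[:, order[0]]    # first principal direction
first_component = C @ first_direction     # first principal component (scores)
```

So the answer to both sub-questions is essentially yes: vectors[:, :1].T.dot(C.T) computes the same projection, just transposed.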

Numpy covariance command returning matrix with more dimensions than input

I have an arbitrary row vector "u" and an arbitrary matrix "e" as follows:
u = np.resize(np.array([8,3]),[1,2])
e = np.resize(np.array([[2,2,5,5],[1, 6, 7, 4]]),[4,2])
np.cov(u,e)
array([[ 12.5, 0. , 0. , -12.5, 7.5],
[ 0. , 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. ],
[-12.5, 0. , 0. , 12.5, -7.5],
[ 7.5, 0. , 0. , -7.5, 4.5]])
The matrix that this returns is 5x5. This is confusing to me because the largest dimension of the inputs is only 4.
Thus, this may be less of a numpy question and more of a math question...not sure...
Please refer to the official numpy documentation (https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.cov.html) and check whether your usage of the numpy.cov function is consistent with what you are trying to achieve.
When looking at the signature
numpy.cov(m, y=None, rowvar=True, bias=False, ddof=None, fweights=None, aweights=None)
m : array_like
A 1-D or 2-D array containing multiple variables and observations.
Each row of m represents a variable, and each column a single observation of all those variables. Also see rowvar below.
y : array_like, optional
An additional set of variables and observations. y has the same form as that of m.
Note how m and y are combined as shown in the last example on the page
>>> x = [-2.1, -1, 4.3]
>>> y = [3, 1.1, 0.12]
>>> X = np.stack((x, y), axis=0)
>>> print(np.cov(X))
[[ 11.71 -4.286 ]
[ -4.286 2.14413333]]
>>> print(np.cov(x, y))
[[ 11.71 -4.286 ]
[ -4.286 2.14413333]]
>>> print(np.cov(x))
11.71
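This is exactly what happens in the original question: np.cov(u, e) stacks the single row of u on top of the four rows of e, giving five variables with two observations each, hence a 5x5 covariance matrix. A sketch:

```python
import numpy as np

u = np.resize(np.array([8, 3]), [1, 2])                        # 1 variable, 2 observations
e = np.resize(np.array([[2, 2, 5, 5], [1, 6, 7, 4]]), [4, 2])  # 4 variables, 2 observations

combined = np.vstack((u, e))  # 5 variables in total
# np.cov(u, e) is equivalent to the covariance of the stacked array
assert np.cov(u, e).shape == (5, 5)
assert np.allclose(np.cov(u, e), np.cov(combined))
```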

Numpy - Modal matrix and diagonal Eigenvalues

I wrote a simple linear-algebra snippet in Python/Numpy to calculate the diagonal matrix of eigenvalues by computing $M^{-1}.A.M$ (M is the modal matrix), and it's behaving strangely.
Here's the Code :
import numpy as np
array = np.arange(16)
array = array.reshape(4, -1)
print(array)
[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]
[12 13 14 15]]
eigenvalues, eigenvectors = np.linalg.eig(array)
print(eigenvalues)
[ 3.24642492e+01 -2.46424920e+00 1.92979794e-15 -4.09576009e-16]
print(eigenvectors)
[[-0.11417645 -0.7327781 0.54500164 0.00135151]
[-0.3300046 -0.28974835 -0.68602671 0.40644504]
[-0.54583275 0.15328139 -0.2629515 -0.8169446 ]
[-0.76166089 0.59631113 0.40397657 0.40914805]]
inverseEigenVectors = np.linalg.inv(eigenvectors) #M^(-1)
diagonal= inverseEigenVectors.dot(array).dot(eigenvectors) #M^(-1).A.M
print(diagonal)
[[ 3.24642492e+01 -1.06581410e-14 5.32907052e-15 0.00000000e+00]
[ 7.54951657e-15 -2.46424920e+00 -1.72084569e-15 -2.22044605e-16]
[ -2.80737213e-15 1.46768503e-15 2.33547852e-16 7.25592561e-16]
[ -6.22319863e-15 -9.69656080e-16 -1.38050658e-30 1.97215226e-31]]
The final 'diagonal' matrix should be a diagonal matrix with the eigenvalues on the main diagonal and zeros elsewhere, but it's not: the first two main-diagonal values ARE eigenvalues, but the last two aren't (although, just like the last two eigenvalues, they are nearly zero).
And by the way, a number like $-1.06581410e-14$ is effectively zero, so how can I make numpy display it as zero?
What am I doing wrong?
Thanks...
Just round the final result to the desired number of digits:
print(diagonal.round(5))
array([[ 32.46425, 0. , 0. , 0. ],
[ 0. , -2.46425, 0. , 0. ],
[ 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. ]])
Don't confuse precision of computation and printing policies.
>>> diagonal[np.abs(diagonal) < 1e-10] = 0
>>> print(diagonal)
[[ 32.4642492   0.          0.          0.       ]
 [  0.         -2.4642492   0.          0.       ]
 [  0.          0.          0.          0.       ]
 [  0.          0.          0.          0.       ]]
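If the goal is only to change how numpy displays near-zero values, np.set_printoptions(suppress=True) switches printing to fixed-point notation without modifying the array itself (a sketch):

```python
import numpy as np

np.set_printoptions(suppress=True)  # print tiny floats as 0. instead of 1e-14

diagonal = np.array([[3.24642492e+01, -1.06581410e-14],
                     [7.54951657e-15, -2.46424920e+00]])
print(diagonal)  # off-diagonal entries now display as 0. / -0.
```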

Numpy matrix combination

I have a rotation matrix and translation vector as corresponding numpy objects. What is the best way to combine them into a 4x4 transform matrix? Are there any functions that avoid explicit element-wise copying?
There are many ways to do this; here are two.
You can create an empty 4x4 array. Then the rotation matrix and the translation vector can each be copied into the 4x4 transform matrix with slice assignment. For example, R and t are the rotation matrix and translation vector, respectively.
In [23]: R
Out[23]:
array([[ 0.51456517, -0.25333656, 0.81917231],
[ 0.16196059, 0.96687621, 0.19727939],
[-0.8420163 , 0.03116053, 0.53855136]])
In [24]: t
Out[24]: array([ 1. , 2. , 0.5])
Create an empty 4x4 array M, and fill it with R and t.
In [25]: M = np.empty((4, 4))
In [26]: M[:3, :3] = R
In [27]: M[:3, 3] = t
In [28]: M[3, :] = [0, 0, 0, 1]
In [29]: M
Out[29]:
array([[ 0.51456517, -0.25333656, 0.81917231, 1. ],
[ 0.16196059, 0.96687621, 0.19727939, 2. ],
[-0.8420163 , 0.03116053, 0.53855136, 0.5 ],
[ 0. , 0. , 0. , 1. ]])
Or you can assemble the transform matrix with functions such as numpy.hstack and numpy.vstack:
In [30]: M = np.vstack((np.hstack((R, t[:, None])), [0, 0, 0, 1]))
In [31]: M
Out[31]:
array([[ 0.51456517, -0.25333656, 0.81917231, 1. ],
[ 0.16196059, 0.96687621, 0.19727939, 2. ],
[-0.8420163 , 0.03116053, 0.53855136, 0.5 ],
[ 0. , 0. , 0. , 1. ]])
Note that t[:, None] (which could also be spelled t[:, np.newaxis] or t.reshape(-1, 1)) creates a 2-d view of t with shape (3, 1). This makes the shape compatible with M in the call to np.hstack.
In [55]: t[:, None]
Out[55]:
array([[ 1. ],
[ 2. ],
[ 0.5]])
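A third option, assuming NumPy >= 1.13, is np.block, which assembles the matrix from a nested list of sub-blocks and makes the 4x4 layout explicit:

```python
import numpy as np

R = np.array([[ 0.51456517, -0.25333656, 0.81917231],
              [ 0.16196059,  0.96687621, 0.19727939],
              [-0.8420163 ,  0.03116053, 0.53855136]])
t = np.array([1.0, 2.0, 0.5])

# top row of blocks: rotation and translation; bottom row: [0, 0, 0, 1]
M = np.block([[R,                t[:, None]],
              [np.zeros((1, 3)), np.ones((1, 1))]])
```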

Numpy array scaling not returning proper values

I have a numpy array that I want to alter by scaling all of the columns (e.g. all the values in a column are divided by the maximum value in that column so that all values are at most 1).
A sample output of the array is
[ 2. 0. 367.877 ..., -0.358 51.547 -32.633]
[ 2. 0. 339.824 ..., -0.33 52.562 -27.581]
[ 3. 0. 371.438 ..., -0.406 55.108 -35.573]
I've tried scaling the array (data_in) by the following code:
#normalize the data_in array
data_in_normalized = data_in / data_in.max(axis=0)
However, the output of data_in_normalized is:
[ 0.5 0. 0.95437199 0.89363654 0.80751792 ]
[ 0.46931238 0.50660904 0.5003812 0.91250444 0.625 ]
[ 0.96229214 0.89483109 0.86989432 0.86491407 0.71287646 ]
[ -23.90909091 0.34346373 1.25110652 0. 0.8537859 1. 1.]
Clearly, it didn't normalize--there are multiple areas where the maximum value is >1. Is there a better way to scale the data, or am I using the max() function incorrectly (e.g. is the max() value being shared between columns?)
IIUC, it's not that the maximum value is shared between columns, it's that you probably want to divide by the maximum absolute value instead, because you have elements of both signs. 1 > -100, after all, and so if you divide by the maximum value of a column with [1, -100], nothing would change.
For example:
>>> data_in = np.array([[-3,-2],[2,1]])
>>> data_in
array([[-3, -2],
[ 2, 1]])
>>> data_in.max(axis=0)
array([2, 1])
>>> data_in / data_in.max(axis=0)
array([[-1.5, -2. ],
[ 1. , 1. ]])
but
>>> data_in / np.abs(data_in).max(axis=0)
array([[-1. , -1. ],
[ 0.66666667, 0.5 ]])
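If the goal is for every column to land in [0, 1] rather than [-1, 1], min-max scaling is the usual alternative (a sketch; it assumes no column is constant, or the denominator would be zero):

```python
import numpy as np

data_in = np.array([[-3.0, -2.0],
                    [ 2.0,  1.0]])

col_min = data_in.min(axis=0)
col_max = data_in.max(axis=0)
scaled = (data_in - col_min) / (col_max - col_min)  # each column spans [0, 1]
```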
