Linear Discriminant Analysis inverse transform - python

I try to use Linear Discriminant Analysis from scikit-learn library, in order to perform dimensionality reduction on my data which has more than 200 features. But I could not find the inverse_transform function in the LDA class.
I just wanted to ask, how can I reconstruct the original data from a point in LDA domain?
Edit base on #bogatron and #kazemakase answer:
I think the term "original data" was wrong and instead I should use "original coordinate" or "original space". I know without all PCAs we can't reconstruct the original data, but when we build the shape space we project the data down to lower dimension with help of PCA. The PCA try to explain the data with only 2 or 3 components which could capture the most of the variance of the data and if we reconstruct the data base on them it should show us the parts of the shape that causes this separation.
I checked the source code of the scikit-learn LDA again and I noticed that the eigenvectors are store in scalings_ variable. when we use the svd solver, it's not possible to inverse the eigenvectors (scalings_) matrix, but when I tried the pseudo-inverse of the matrix, I could reconstruct the shape.
Here, there are two images which are reconstructed from [ 4.28, 0.52] and [0, 0] points respectively:
I think that would be great if someone explain the mathematical limitation of the LDA inverse transform in depth.

The inverse of the LDA does not necessarily make sense beause it loses a lot of information.
For comparison, consider the PCA. Here we get a coefficient matrix that is used to transform the data. We can do dimensionality reduction by stripping rows from the matrix. To get the inverse transform, we first invert the full matrix and then remove the columns corresponding to the removed rows.
The LDA does not give us a full matrix. We only get a reduced matrix that cannot be directly inverted. It is possible to take the pseudo inverse, but this is much less efficient than if we had the full matrix at our disposal.
Consider a simple example:
C = np.ones((3, 3)) + np.eye(3) # full transform matrix
U = C[:2, :] # dimensionality reduction matrix
V1 = np.linalg.inv(C)[:, :2] # PCA-style reconstruction matrix
print(V1)
#array([[ 0.75, -0.25],
# [-0.25, 0.75],
# [-0.25, -0.25]])
V2 = np.linalg.pinv(U) # LDA-style reconstruction matrix
print(V2)
#array([[ 0.63636364, -0.36363636],
# [-0.36363636, 0.63636364],
# [ 0.09090909, 0.09090909]])
If we have the full matrix we get a different inverse transform (V1) than if we simple invert the transform (V2). That is because in the second case we lost all information about the discarded components.
You have been warned. If you still want to do the inverse LDA transform, here is a function:
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.utils.validation import check_is_fitted
from sklearn.utils import check_array, check_X_y
import numpy as np
def inverse_transform(lda, x):
if lda.solver == 'lsqr':
raise NotImplementedError("(inverse) transform not implemented for 'lsqr' "
"solver (use 'svd' or 'eigen').")
check_is_fitted(lda, ['xbar_', 'scalings_'], all_or_any=any)
inv = np.linalg.pinv(lda.scalings_)
x = check_array(x)
if lda.solver == 'svd':
x_back = np.dot(x, inv) + lda.xbar_
elif lda.solver == 'eigen':
x_back = np.dot(x, inv)
return x_back
iris = datasets.load_iris()
X = iris.data
y = iris.target
target_names = iris.target_names
lda = LinearDiscriminantAnalysis()
Z = lda.fit(X, y).transform(X)
Xr = inverse_transform(lda, Z)
# plot first two dimensions of original and reconstructed data
plt.plot(X[:, 0], X[:, 1], '.', label='original')
plt.plot(Xr[:, 0], Xr[:, 1], '.', label='reconstructed')
plt.legend()
You see, the result of the inverse transform does not have much to do with the original data (well, it's possible to guess the direction of the projection). A considerable part of the variation is gone for good.

There is no inverse transform because in general, you can not return from the lower dimensional feature space to your original coordinate space.
Think of it like looking at your 2-dimensional shadow projected on a wall. You can't get back to your 3-dimensional geometry from a single shadow because information is lost during the projection.
To address your comment regarding PCA, consider a data set of 10 random 3-dimensional vectors:
In [1]: import numpy as np
In [2]: from sklearn.decomposition import PCA
In [3]: X = np.random.rand(30).reshape(10, 3)
Now, what happens if we apply the Principal Components Transformation (PCT) and apply dimensionality reduction by keeping only the top 2 (out of 3) PCs, then apply the inverse transform?
In [4]: pca = PCA(n_components=2)
In [5]: pca.fit(X)
Out[5]:
PCA(copy=True, iterated_power='auto', n_components=2, random_state=None,
svd_solver='auto', tol=0.0, whiten=False)
In [6]: Y = pca.transform(X)
In [7]: X.shape
Out[7]: (10, 3)
In [8]: Y.shape
Out[8]: (10, 2)
In [9]: XX = pca.inverse_transform(Y)
In [10]: X[0]
Out[10]: array([ 0.95780971, 0.23739785, 0.06678655])
In [11]: XX[0]
Out[11]: array([ 0.87931369, 0.34958407, -0.01145125])
Obviously, the inverse transform did not reconstruct the original data. The reason is that by dropping the lowest PC, we lost information. Next, let's see what happens if we retain all PCs (i.e., we do not apply any dimensionality reduction):
In [12]: pca2 = PCA(n_components=3)
In [13]: pca2.fit(X)
Out[13]:
PCA(copy=True, iterated_power='auto', n_components=3, random_state=None,
svd_solver='auto', tol=0.0, whiten=False)
In [14]: Y = pca2.transform(X)
In [15]: XX = pca2.inverse_transform(Y)
In [16]: X[0]
Out[16]: array([ 0.95780971, 0.23739785, 0.06678655])
In [17]: XX[0]
Out[17]: array([ 0.95780971, 0.23739785, 0.06678655])
In this case, we were able to reconstruct the original data because we didn't throw away any information (since we retained all the PCs).
The situation with LDA is even worse because the maximum number of components that can be retained is not 200 (the number of features for your input data); rather, the maximum number of components you can retain is n_classes - 1. So if, for example, you were doing a binary classification problem (2 classes), the LDA transform would be going from 200 input dimensions down to just a single dimension.

Related

Is there a particular way to convert 3-d array to 2-d array for clustering?

I have a 3-d array of shape=(3, 60000, 10) which needs to be 2-D so as to be able to visualize it when clustering.
I was planning on implementing the k-means clustering from scikit-learn to the 3-d array and read that it only takes in 2-D shape , I just wanted some advice as to whether there is a right way to do it ? I was planning on making it (60000,30) , but wanted a clarification before I go ahead.
How I read it is that you have 10 features each consisting of 3d data. Do you intend to cluster all 10 features? If so reshape it such that you have 600000 x 3 points (assuming you want to separate in space). For example this
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt, numpy as np
# 3x points
data = np.random.rand(100, 3, 10) + np.arange(10) # add arbitrary offset for "difference" in real data
data = np.moveaxis(data, -1, 1).reshape(-1, 3)
n_clus = 10 # cluster in 10 --> fill in with your goal in mind
km = KMeans(n_clusters = n_clus).fit(data)
fig, ax = plt.subplots(subplot_kw = dict(projection = '3d'))
colors = plt.cm.tab20(np.linspace(0, 1, n_clus))
ax.scatter(*data.T, c = colors[km.labels_])
fig.show()
Yields
(600000 , 30) is probably not a great idea. K-means clustering uses a distance metrics to define clusters, Euclidean distance normally, but when you increase number of variables in the second dimension you fall into a curse of dimensionality where results of clustering will stop making sense.
You can of course try (600000, 30) and see if it works, but if it doesn't, you'll need to do reduce dimensionality, for example by doing a PCA and use principal components to do clustering.
EDIT
I'll probably try and explain what I mean by dimensionality and the issues it causes since there appears to be some confusion.
A 2d array of size (100, 2) is a 2-dimensional data, i.e. it's 100 observations of 2 variables. The trend line between those points would be a 1d object (line) and you can plot it on a 2d plane. Similarly, a (100, 3) array is 3-dimensional with a trendline being a 2d plane and you can plot those points on a 3d chart.
Then (100, 100) array is 100-dimensional. A trend would be a 99-dimensional hyperplane and you cannot visualise even in principle. Now let's see what issues this causes. Let's define a simple function calculating Euclidean distance:
def distance(x, y):
return sum((i - j)**2 for i, j in zip(x, y))**0.5
The function takes two iterables as arguments and calculates Euclidean distance between those. Now let's try with something simple:
v1 = (1, 1)
v2 = (2, 2)
v3 = (100, 100)
v4 = (120, 120)
>> distance(v1, v2)
Out: 1.4142135623730951
>> distance(v1, v3)
Out: 140.0071426749364
>> distance(v1, v4)
Out: 168.2914139223983
If we make these tuples 3 dimensional keeping the same values in all dimensions, distances become respectively: 1.73, 171.47, 206.11.
Now for the fun part - let's add a bunch of dimensions filled with "1"s:
v1 = [1, 1, 1] + list(1 for i in range(47))
v2 = [2, 2, 2] + list(1 for i in range(47))
v2 = [100, 100, 100] + list(1 for i in range(47))
v4 = [120, 120, 120] + list(1 for i in range(47))
>>> distance(v1, v2)
171.47302994931886
>>> distance(v1, v3)
175.16278143486988
>>> distance(v1, v4)
206.11404610069638
So here we increased dimensions without adding additional information to separate variables an suddenly what appeared as two distinct clusters are not so defined any more, in fact v1, v2 and v3 appear more like they belong together and v4 being an outsider.
This will also happen in most cases, unless the higher dimensions continue the pattern of the first three, i.e. (1, 1, 1...), (2, 2, 2,..), (100, 100, 100...), (120, 120, 120,...). But in most cases you will see distances shrink and clusters become indistinguishable.

PCA implementation on 3D numpy array

I have a feature set of size 2240*5*16. 2240 are number of samples, 5 represents number of channels and 16 shows # of statistical features extracted such as mean, variance, etc.
Now, I want to apply PCA. However, PCA is applicable on 2D array. I applied the following code:
from sklearn.decomposition import PCA
pca = PCA(n_components=5)
pca.fit(features)
I get the following error.
ValueError: Found array with dim 3. Estimator expected <= 2.
It doesn't support axis argument. As it is only applicable on 2D, how can I utilize it on my case (3D)? Any suggestion, if I want to reduce the dimensions from 2240*5*16 to 2240*5*5, please?
I would just loop over each channel and do PCA separately.
import numpy as np
from sklearn.decomposition import PCA
X = np.random.rand(1000, 5, 10)
X_transform = np.zeros((X.shape[0], 5, 5))
for i in range(X.shape[1]):
pca = PCA(n_components=5)
f = pca.fit_transform(X[:, i, :])
X_transform[:, i, :] = f
print((X_transform.shape))

use of numpy.newaxis in machine learning

I am trying to increase dimensionality of my inital array:
import matplotlib.pyplot as plt
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
x = 10*rng.rand(50)
y = np.sin(x) + 0.1*rng.rand(50)
poly = PolynomialFeatures(7, include_bias=False)
poly.fit_transform(x[:,np.newaxis])
First, I know np.newaxis is creating additional column. Why is this necessary?
Now I will train the updated x data(poly) with linear regression
test_x = np.linspace(0,10,1000)
from sklearn.linear_model import LinearRegression
model = LinearRegression()
# train with increased dimension(x=poly) with its target
model.fit(poly,y)
# testing
test_y = model.predict(x_test)
When I run this it give me :ValueError: Expected 2D array, got scalar array instead: on model.fit(poly,y) line. I've already added a dimension to poly, what is happening?
Also what's the difference between x[:,np.newaxis] Vs. x[:,None]?
In [55]: x=10*np.random.rand(5)
In [56]: x
Out[56]: array([6.47634068, 6.25520837, 7.58822106, 4.65466951, 2.35783624])
In [57]: x.shape
Out[57]: (5,)
newaxis does not add a column, it adds a dimension:
In [58]: x1 = x[:,np.newaxis]
In [59]: x1
Out[59]:
array([[6.47634068],
[6.25520837],
[7.58822106],
[4.65466951],
[2.35783624]])
In [60]: x1.shape
Out[60]: (5, 1)
np.newaxis has the value of None, so both work the same.
In[61]: x[:,None].shape
Out[61]: (5, 1)
One is a little clearer to human readers, the other a little easier to type.
https://www.numpy.org/devdocs/reference/constants.html
Whether x or x1 works depends on the expectations of the learning code. Some learning code expects inputs of the shape (samples, features). It could assume that a (50,) shape array is 50 samples, 1 feature, or 1 case, 50 features. But it's better if you tell exactly what you mean.
Look at the docs:
https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html#sklearn.preprocessing.PolynomialFeatures.fit_transform
poly.fit_transform
X : numpy array of shape [n_samples, n_features]
Sure looks like fit_transform expects a 2d input.
https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression.fit
Both X and y are supposed to be 2d.

Sparse Matrix error in MLPRegressor

Context
I'm running into an error when trying to use sparse matrices as an input to sklearn.neural_network.MLPRegressor. Nominally, this method is able to handle sparse matrices. I think this might be a bug in scikit-learn, but wanted to check on here before I submit an issue.
The Problem
When passing a scipy.sparse input to sklearn.neural_network.MLPRegressor I get:
ValueError: input must be a square array
The error is raised by the matrix_power function within numpy.matrixlab.defmatrix. It seems to occur because matrix_power passes the sparse matrix to numpy.asanyarray (L137), which returns an array of size=1, ndim=0 containing the sparse matrix object. matrix_power then performs some dimension checks (L138-141) to make sure the input is a square matrix, which fail because the array returned by numpy.asanyarray is not square, even though the underlying sparse matrix is square.
As far as I can tell, the problem stems from numpy.asanyarray preventing the dimensions of the sparse matrix being determined. The sparse matrix itself has a size attribute which would allow it to pass the dimension checks, but only if it's not run through asanyarray.
I think this might be a bug, but don't want to dive around filing issues until I've confirmed that I'm not just being an idiot! Please see below, to check.
If it is a bug, where would be the most appropriate place to raise an issue? NumPy? SciPy? or Scikit-Learn?
Minimal Example
Environment
Arch Linux
kernel 4.15.7-1
Python 3.6.4
numpy 1.14.1
scipy 1.0.0
sklearn 0.19.1
Code
import numpy as np
from scipy import sparse
from sklearn import model_selection
from sklearn.preprocessing import StandardScaler, Imputer
from sklearn.neural_network import MLPRegressor
## Generate some synthetic data
def fW(A, B, C):
return A * np.random.normal(.3, .1) + B * np.random.normal(.6, .1)
def fX(A, B, C):
return B * np.random.normal(-1, .1) + A * np.random.normal(-.9, .1) / C
# independent variables
N = int(1e4)
A = np.random.uniform(2, 12, N)
B = np.random.uniform(2, 12, N)
C = np.random.uniform(2, 12, N)
# synthetic data
mW = fW(A, B, C)
mX = fX(A, B, C)
# combine datasets
real = np.vstack([A, B, C]).T
meas = np.vstack([mW, mX]).T
# add noise to meas
meas *= np.random.normal(1, 0.0001, meas.shape)
## Make data sparse
prob_null = 0.2
real[np.random.choice([True, False], real.shape, p=[prob_null, 1-prob_null])] = np.nan
meas[np.random.choice([True, False], meas.shape, p=[prob_null, 1-prob_null])] = np.nan
# NB: problem persists whichever sparse matrix method is used.
real = sparse.csr_matrix(real)
meas = sparse.csr_matrix(meas)
# replace missing values with mean
rmnan = Imputer()
real = rmnan.fit_transform(real)
meas = rmnan.fit_transform(meas)
# split into test/training sets
real_train, real_test, meas_train, meas_test = model_selection.train_test_split(real, meas, test_size=0.3)
# create scalers and apply to data
real_scaler = StandardScaler(with_mean=False)
meas_scaler = StandardScaler(with_mean=False)
real_scaler.fit(real_train)
meas_scaler.fit(meas_train)
treal_train = real_scaler.transform(real_train)
tmeas_train = meas_scaler.transform(meas_train)
treal_test = real_scaler.transform(real_test)
tmeas_test = meas_scaler.transform(meas_test)
nn = MLPRegressor((100,100,10), solver='lbfgs', early_stopping=True, activation='tanh')
nn.fit(tmeas_train, treal_train)
## ERROR RAISED HERE
## The problem:
# the sparse matrix has a shape attribute that would pass the square matrix validation
tmeas_train.shape
# but not after it's been through asanyarray
np.asanyarray(tmeas_train).shape
MLPRegressor.fit() as given in documentation supports sparse matrix for X but not for y
Parameters:
X : array-like or sparse matrix, shape (n_samples, n_features)
The input data.
y : array-like, shape (n_samples,) or (n_samples, n_outputs)
The target values (class labels in classification, real numbers in regression).
I am able to successfully run your code with:
nn.fit(tmeas_train, treal_train.toarray())

Interpolating 2 numpy arrays

Is there any numpy or scipy or python function to interpolate between two 2D numpy array's? I have two 2D numpy arrays, and I want to apply changes to the first numpy array to make it similar to the second 2D array. The constraint is that I want the changes to be smooth. e.g., let the arrays be:
A
[[1 1 1
1 1 1
1 1 1]]
and
B
[[34 100 15
62 17 87
17 34 60]]
To make A similar to B, I could add 33 to the first grid cell of A and so on.. However, to make the changes smoother, I plan to compute a mean using a 2x2 window on array B and then apply the resulting changes to array A. Is there a built in numpy or scipy method to do this or follow this approach without using for loop.
You've just described a Kalman Filtering / data fusion problem. You have an initial state A that has some errors and you have some observations B that also have some noise. You want to improve your estimate of state A by injecting some information from B, all while accounting for spatially correlated errors in both datasets. We don't have any prior information about the errors in A and B, so we can just make it up. Here's an implementation:
import numpy as np
# Make a matrix of the distances between points in an array
def dist(M):
nx = M.shape[0]
ny = M.shape[1]
x = np.ravel(np.tile(np.arange(nx),(ny,1))).reshape((nx*ny,1))
y = np.ravel(np.tile(np.arange(ny),(nx,1))).reshape((nx*ny,1))
n,m = np.meshgrid(x,y)
d = np.sqrt((n-n.T)**2+(m-m.T)**2)
return d
# Turn a distance matrix into a covariance matrix. Here is a linear covariance matrix.
def covariance(d,scaling_factor):
c = (-d/np.amax(d) + 1)*scaling_factor
return c
A = np.array([[1,1,1],[1,1,1],[1,1,1]]) # background state
B = np.array([[34,100,15],[62,17,87],[17,34,60]]) # observations
x = np.ravel(A).reshape((9,1)) # vector representation
y = np.ravel(B).reshape((9,1)) # vector representation
P_a = np.eye(9)*50 # background error covariance matrix (set to diagonal here)
P_b = covariance(dist(B),2) # observation error covariance matrix (set to a function of distance here)
# Compute the Kalman gain matrix
K = P_a.dot(np.linalg.inv(P_a+P_b))
x_new = x + K.dot(y-x)
A_new = x_new.reshape(A.shape)
print(A)
print(B)
print(A_new)
Now, this method only works if your data are unbiased. So mean(A) must equal mean(B). But you'll still get okay results regardless. Also, you can play with the covariance matrices however you like. I'd recommend reading the Kalman filter wikipedia page for more details.
By the way, the example above yields:
[[ 27.92920141 90.65490699 7.17920141]
[ 55.92920141 7.65490699 79.17920141]
[ 10.92920141 24.65490699 52.17920141]]
One way of smoothing could be to use convolve2d:
import numpy as np
from scipy import signal
B = np.array([[34, 100, 15],
[62, 17, 87],
[17, 34, 60]])
kernel = np.full((2, 2), .25)
smoothed = signal.convolve2d(B, kernel)
# [[ 8.5 33.5 28.75 3.75]
# [ 24. 53.25 54.75 25.5 ]
# [ 19.75 32.5 49.5 36.75]
# [ 4.25 12.75 23.5 15. ]]
The above pads the matrix with zeros from all sides and then calculates the mean of each 2x2 window placing the value at the center of the window.
If the matrices were actually larger, then using a 3x3 kernel (such as np.full((3, 3), 1/9)) and passing mode='same' to convolve2d would give a smoothed B with its shape preserved and elements "matching" the original. Otherwise you may need to decide what to do with the boundary values to make the shapes the same again.
To move A towards the smoothed B, it can be set to a chosen affine combination of the matrices using standard arithmetic operations, for instance: A = .2 * A + .8 * smoothed.

Categories

Resources