I am trying to implement PCA using numpy to mimic the results from sklearn's decomposition.PCA class.
As input I am using N flattened images, each of fixed size M = 128x192 (the image dimensions), joined horizontally into a single matrix D of dimensions MxN.
I am aiming to use the Snapshot method, as other implementations (see here and here) crash when computing np.cov, since the covariance matrix would be C = D(D^T), of size MxM.
The snapshot method first computes C_acute = (D^T)D, then computes the eigenvectors and eigenvalues of this NxN matrix (the "acute" ones). The acute eigenvectors are (D^T)v, and the eigenvalues are the same as those of C.
To retrieve the eigenvectors v from the (acute) eigenvectors, we simply do v = (1/eigenvalue) * (D(v_acute)).
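For intuition, here is a minimal numerical check of these two relations, a sketch using made-up sizes M = 6, N = 4 rather than the real image data:

import numpy as np

rng = np.random.default_rng(0)
M, N = 6, 4
D = rng.standard_normal((M, N))              # stand-in for the (centered) MxN data matrix
evals, v_acute = np.linalg.eigh(D.T @ D)     # eigendecomposition of the small NxN matrix
v = (D @ v_acute) / evals                    # recover one candidate eigenvector per column
# Each recovered column satisfies (D D^T) v = eigenvalue * v, up to floating point error
# (note the recovered vectors are not unit-normalised):
print(np.allclose((D @ D.T) @ v, v * evals))   # True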
Here is the reference implementation I am using, adapted from this SO post (and known to work):
class TemplatePCA:
    def __init__(self, n_components=None):
        self.n_components = n_components

    def fit_transform(self, X):
        X -= np.mean(X, axis=0)
        R = np.cov(X, rowvar=False)
        # calculate eigenvectors & eigenvalues of the covariance matrix
        evals, evecs = np.linalg.eig(R)
        # sort eigenvalues in decreasing order
        idx = np.argsort(evals)[::-1]
        evals = evals[idx]
        # sort eigenvectors according to the same index
        evecs = evecs[:, idx]
        # select the first n eigenvectors (n is the desired dimension
        # of the rescaled data array, or dims_rescaled_data)
        evecs = evecs[:, :self.n_components]
        # carry out the transformation on the data using eigenvectors
        # and return the re-scaled data
        return -1 * np.dot(X, evecs)
Here is the implementation I have so far.
class MyPCA:
    def __init__(self, n_components=None):
        self.n_components = n_components

    def fit_transform(self, X):
        X -= np.mean(X, axis=0)
        D = X.T
        M, N = D.shape
        D_T = X  # D.T == (X.T).T == X
        C_acute = np.dot(D_T, D)
        eigen_values, eigen_vectors_acute = np.linalg.eig(C_acute)
        eigen_vectors = []
        for i in range(eigen_vectors_acute.shape[0]):  # for each eigenvector
            v = np.dot(D, eigen_vectors_acute[i]) / eigen_values[i]
            eigen_vectors.append(v)
        eigen_vectors = np.array(eigen_vectors)
        # sort eigenvalues and eigenvectors in decreasing order
        idx = np.argsort(eigen_values)[::-1]
        eigen_vectors = eigen_vectors[:, idx]
        eigen_values = eigen_values[idx]
        # select the first n_components eigenvectors
        eigen_vectors = eigen_vectors[:, :self.n_components]
        # carry out the transformation on the data using eigenvectors
        # return the re-scaled data (projection)
        return np.dot(C_acute, eigen_vectors)
The reference text I am using notes that:
The eigenvector is now (D^T)v, so to do face detection we first multiply our test image vector by (D^T) before projecting onto the eigenimages.
I am not sure whether it is possible to retrieve exactly the same principal components (i.e. eigenvectors) using this method. It even seems impossible to get the same eigenvectors back, since the size of eigen_vectors_acute is only (4, 6) (meaning there are only 4 vectors), compared to the other method where it is (6, 6) (there are 6).
Running both on an input:
x = np.array([
[0.387,123, 789,256, 4878, 5.42],
[0.723,9.78,1.90,1234, 12104,5.25],
[1,123, 67.98,7.91,12756,5.52],
[1.524,1.34,23.456,1.23,6787,3.94],
])
# These two are the same
print(sklearn.decomposition.PCA(n_components=3).fit_transform(x))
print(TemplatePCA(n_components=3).fit_transform(x))
# This one is different
print(MyPCA(n_components=3).fit_transform(x))
Output:
[[ 4282.20163145 147.84415964 -267.73483211]
[-3025.62452358 683.58580386 67.76941319]
[-3599.15380006 -569.33984612 -148.62757658]
[ 2342.57669218 -262.09011737 348.5929955 ]]
[[-4282.20163145 -147.84415964 267.73483211]
[ 3025.62452358 -683.58580386 -67.76941319]
[ 3599.15380006 569.33984612 148.62757658]
[-2342.57669218 262.09011737 -348.5929955 ]]
[[ 3.35535639e+15, -5.70493660e+17, -8.57482740e+17],
[-2.45510474e+15, 4.17428591e+17, 6.27417685e+17],
[-2.82475918e+15, 4.80278997e+17, 7.21885236e+17],
[ 1.92450753e+15, -3.27213928e+17, -4.91820181e+17]]
I'm trying to code a demonstration of compressed sensing for my final year project but am getting poor image reconstruction when using the Lasso algorithm. I've relied on the following as a reference: http://www.pyrunner.com/weblog/2016/05/26/compressed-sensing-python/
However, my code has some differences:
I use scikit-learn to perform a lasso optimisation (basis pursuit), as opposed to using cvxpy to perform an l_1 minimisation with an equality constraint as in the article.
I construct psi differently/more simply; testing seems to show that it's correct.
I use a different package to read and write the image.
import numpy as np
import scipy.fftpack as spfft
import scipy.ndimage as spimg
import imageio
from sklearn.linear_model import Lasso
x_orig = imageio.imread('gt40.jpg', pilmode='L') # read in grayscale
x = spimg.zoom(x_orig, 0.2) #zoom for speed
ny,nx = x.shape
k = round(nx * ny * 0.5) #50% sample
ri = np.random.choice(nx * ny, k, replace=False)
y = x.T.flat[ri] #y is the measured sample
# y = np.expand_dims(y, axis=1) ---- this doesn't seem to make a difference, was presumably required with cvxpy
psi = spfft.idct(np.identity(nx*ny), norm='ortho', axis=0) #my construction of psi
# psi = np.kron(
# spfft.idct(np.identity(nx), norm='ortho', axis=0),
# spfft.idct(np.identity(ny), norm='ortho', axis=0)
# )
# psi = 2*np.random.random_sample((nx*ny,nx*ny)) - 1
theta = psi[ri,:] #equivalent to phi*psi
lasso = Lasso(alpha=0.001, max_iter=10000)
lasso.fit(theta, y)
s = np.array(lasso.coef_)
x_recovered = psi @ s
x_recovered = x_recovered.reshape(nx, ny).T
x_recovered_final = x_recovered.astype('uint8') #recovered image is float64 and has negative values..
imageio.imwrite('gt40_recovered.jpg', x_recovered_final)
Unfortunately I'm not allowed to post images yet so here is a link to the original zoomed image, the image recovered with lasso and the image recovered with cvxpy (described later):
https://imgur.com/a/LROSug6
As you can see, not only is the recovery poor but the image is completely corrupted - the colours seem to be negative and the detail from the 50% sample is lost. I think I've managed to track the problem down to the Lasso regression - it returns a vector that, when inverse transformed, has values that are not necessarily in the 0-255 range expected for the image. So the conversion from dtype float64 to uint8 is effectively a wrap-around (e.g. -55 becomes 256-55=201).
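As a small illustration of that wrap-around (values outside 0-255 are reduced modulo 256 when cast to uint8), here is a sketch not run against the actual reconstruction:

import numpy as np

vals = np.array([-55, 300])                    # out-of-range values
print(vals.astype('uint8'))                    # [201  44] -- wrapped modulo 256
print(np.clip(vals, 0, 255).astype('uint8'))   # [  0 255] -- clipped first, no wrap-around

Clipping x_recovered to the range [0, 255] before the astype('uint8') call would at least avoid the wrap-around, although it does not by itself improve the underlying Lasso reconstruction.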
Following this I tried swapping out lasso for the same optimisation as in the article (minimising the l_1 norm subject to theta*s=y using cvxpy):
import cvxpy as cvx
x_orig = imageio.imread('gt40.jpg', pilmode='L') # read in grayscale
x = spimg.zoom(x_orig, 0.2)
ny,nx = x.shape
k = round(nx * ny * 0.5)
ri = np.random.choice(nx * ny, k, replace=False)
y = x.T.flat[ri]
psi = spfft.idct(np.identity(nx*ny), norm='ortho', axis=0)
theta = psi[ri,:] #equivalent to phi*psi
#NEW CODE STARTS:
vx = cvx.Variable(nx * ny)
objective = cvx.Minimize(cvx.norm(vx, 1))
constraints = [theta @ vx == y]
prob = cvx.Problem(objective, constraints)
result = prob.solve(verbose=True)
s = np.array(vx.value).squeeze()
x_recovered = psi @ s
x_recovered = x_recovered.reshape(nx, ny).T
x_recovered_final = x_recovered.astype('uint8')
imageio.imwrite('gt40_recovered_altopt.jpg', x_recovered_final)
This took nearly 6 hours, but I finally got a somewhat satisfactory result. However, I would like to demonstrate lasso if possible. Any help in getting the lasso to return appropriate values, or in converting its result appropriately, would be very much appreciated.
I am trying to implement polynomial regression with gradient descent. I want to fit the function y = (1/3)*x^3 - 2*x^2 + 2*x + 2 (with added Gaussian noise).
The code I use is:
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg
from sklearn.preprocessing import PolynomialFeatures
np.random.seed(seed=42)
def create_data():
    x = PolynomialFeatures(degree=5).fit_transform(np.linspace(-10, 10, 100).reshape(100, -1))
    l = lambda x_i: (1/3)*x_i**3 - 2*x_i**2 + 2*x_i + 2
    data = l(x[:, 1])
    noise = np.random.normal(0, 0.1, size=np.shape(data))
    y = data + noise
    y = y.reshape(100, 1)
    return {'x': x, 'y': y}

def plot_function(x, y):
    fig = plt.figure(figsize=(10, 10))
    plt.plot(x[:, 1], [(1/3)*x_i**3 - 2*x_i**2 + 2*x_i + 2 for x_i in x[:, 1]], c='lightgreen', linewidth=3, zorder=0)
    plt.scatter(x[:, 1], y)
    plt.show()

def w_update(y, x, batch, w_old, eta):
    derivative = np.sum([(y[i] - np.dot(w_old.T, x[i, :]))*x[i, :] for i in range(np.shape(x)[0])])
    print(derivative)
    return w_old + eta*(1/batch)*derivative
# initialize variables
w = np.random.normal(size=(6,1))
data = create_data()
x = data['x']
y = data['y']
plot_function(x,y)
# Update w
w_s = []
Error = []
for i in range(500):
    error = (1/2)*np.sum([(y[i] - np.dot(w.T, x[i, :]))**2 for i in range(len(x))])
    Error.append(error)
    w_prime = w_update(y, x, np.shape(x)[0], w, 0.001)
    w = w_prime
    w_s.append(w)
# Plot the predicted function
plt.plot(x[:,1],np.dot(x,w))
plt.show()
# Plot the error
fig3 = plt.figure()
plt.scatter(range(len(Error[10:])),Error[10:])
plt.show()
But as a result I receive something strange which is completely out of bounds... I have also tried altering the number of iterations as well as the step size eta, but it did not help. I assume I have made a mistake in the update of w.
I have found the solution. The problem is indeed in the part where I calculate the weights, specifically in:
np.sum([(y[d]-np.dot(w_old.T,x[d,:]))*x[d,:] for d in range(np.shape(x)[0])])
which should be like:
np.sum([-(y[d]-np.dot(w.T.copy(),x[d,:]))*x[d,:].reshape(np.shape(w)) for d in range(len(x))],axis=0)
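For reference, a short derivation in the same notation as the code: the error being minimised is E = (1/2) * sum_d (y_d - w^T x_d)^2, so its gradient with respect to w is dE/dw = -sum_d (y_d - w^T x_d) * x_d. This is where the leading minus sign comes from, and each summand (y_d - w^T x_d) * x_d has to be reshaped to a column matching w before summing.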
We have to add axis=0 to np.sum to get the dimensionality we want: the result must have the same dimensionality as w. The numpy sum documentation says:
The default, axis=None, will sum all of the elements of the input
array.
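As a quick shape check, consider a dummy array with the same dimensionality as the stacked per-sample gradients (the (100, 7, 1) shape used here is an assumption matching the corrected code below):

import numpy as np

a = np.ones((100, 7, 1))          # 100 per-sample gradients, each of shape (7, 1)
print(np.sum(a))                  # 700.0 -- a single scalar: every element summed
print(np.sum(a, axis=0).shape)    # (7, 1) -- one summed gradient, matching the shape of w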
The default (summing every element into a scalar) is not what we want to achieve here. Adding axis=0 sums over the first axis of our array, which has dimensionality (100, 7, 1): the 100 elements of dimensionality (7, 1) are summed up and the resulting array has dimensionality (7, 1), which is exactly what we want. Implementing this and cleaning up the code yields:
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import MinMaxScaler
np.random.seed(seed=42)
def create_data():
    x = PolynomialFeatures(degree=6).fit_transform(np.linspace(-2, 2, 100).reshape(100, -1))
    x[:, 1:] = MinMaxScaler(feature_range=(-2, 2), copy=False).fit_transform(x[:, 1:])
    l = lambda x_i: np.cos(0.8*np.pi*x_i)
    data = l(x[:, 1])
    noise = np.random.normal(0, 0.1, size=np.shape(data))
    y = data + noise
    y = y.reshape(100, 1)
    # Normalize Data
    return {'x': x, 'y': y}

def plot_function(x, y, w, Error, w_s):
    fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(40, 10))
    ax[0].plot(x[:, 1], [np.cos(0.8*np.pi*x_i) for x_i in x[:, 1]], c='lightgreen', linewidth=3, zorder=0)
    ax[0].scatter(x[:, 1], y)
    ax[0].plot(x[:, 1], np.dot(x, w))
    ax[0].set_title('Function')
    ax[1].scatter(range(iterations), Error)
    ax[1].set_title('Error')
    plt.show()
# initialize variables
data = create_data()
x = data['x']
y = data['y']
w = np.random.normal(size=(np.shape(x)[1],1))
eta = 0.1
iterations = 10000
batch = 10
def stochastic_gradient_descent(x, y, w, eta):
    derivative = -(y - np.dot(w.T, x))*x.reshape(np.shape(w))
    return eta*derivative

def batch_gradient_descent(x, y, w, eta):
    derivative = np.sum([-(y[d] - np.dot(w.T.copy(), x[d, :]))*x[d, :].reshape(np.shape(w)) for d in range(len(x))], axis=0)
    return eta*(1/len(x))*derivative

def mini_batch_gradient_descent(x, y, w, eta, batch):
    gradient_sum = np.zeros(shape=np.shape(w))
    for b in range(batch):
        choice = np.random.choice(list(range(len(x))))
        gradient_sum += -(y[choice] - np.dot(w.T, x[choice, :]))*x[choice, :].reshape(np.shape(w))
    return eta*(1/batch)*gradient_sum
# Update w
w_s = []
Error = []
for i in range(iterations):
    # Calculate error
    error = (1/2)*np.sum([(y[i] - np.dot(w.T, x[i, :]))**2 for i in range(len(x))])
    Error.append(error)
    # Stochastic Gradient Descent
    """
    for d in range(len(x)):
        w -= stochastic_gradient_descent(x[d, :], y[d], w, eta)
        w_s.append(w.copy())
    """
    # Minibatch Gradient Descent
    """
    w -= mini_batch_gradient_descent(x, y, w, eta, batch)
    """
    # Batch Gradient Descent
    w -= batch_gradient_descent(x, y, w, eta)
# Show predicted weights
print(w_s)
# Plot the predicted function and the Error
plot_function(x,y,w,Error,w_s)
As a result we receive a fit of the target function (plot not shown here), which surely can be improved by altering eta and the number of iterations, as well as by switching to Stochastic or Mini-Batch Gradient Descent or more sophisticated optimization algorithms.
I am learning about Linear Discriminant Analysis and am using the scikit-learn module. I am confused by the "coef_" attribute from the LinearDiscriminantAnalysis class. As far as I understand, these are the discriminant function coefficients (sklearn calls them weight vectors). Since there should be (n_classes-1) discriminant functions, I would expect the coef_ attribute to be an array with shape (n_components, n_features), but instead it prints an (n_classes, n_features) array. Below is an example of this using the Iris dataset example from sklearn. Since there are 3 classes and 2 components, I would expect print(lda.coef_) to give me a 2x4 array instead of a 3x4 array...
Maybe I'm misinterpreting what the weight vectors are, perhaps they are the coefficients for the classification function?
And how do I get the coefficients for each variable in each discriminant/canonical function?
Code here:
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
import numpy as np
iris = datasets.load_iris()
X = iris.data
y = iris.target
target_names = iris.target_names
lda = LinearDiscriminantAnalysis(n_components=2,store_covariance=True)
X_r = lda.fit(X, y).transform(X)
plt.figure()
colors = ['navy', 'turquoise', 'darkorange']  # hypothetical palette; the original notebook defined colors elsewhere
for color, i, target_name in zip(colors, [0, 1, 2], target_names):
    plt.scatter(X_r[y == i, 0], X_r[y == i, 1], alpha=.8, color=color,
                label=target_name)
plt.legend(loc='best', shadow=False, scatterpoints=1)
plt.xlabel('Function 1 (%.2f%%)' %(lda.explained_variance_ratio_[0]*100))
plt.ylabel('Function 2 (%.2f%%)' %(lda.explained_variance_ratio_[1]*100))
plt.title('LDA of IRIS dataset')
print(lda.coef_)
#output -> [[ 6.24621637 12.24610757 -16.83743427 -21.13723331]
# [ -1.51666857 -4.36791652 4.64982565 3.18640594]
# [ -4.72954779 -7.87819105 12.18760862 17.95082737]]
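One way to probe the "coefficients for the classification function" hypothesis is a quick sketch (reusing X and lda from the code above, so not standalone) that compares the linear scores built from coef_ and intercept_ with lda.decision_function:

scores = np.dot(X, lda.coef_.T) + lda.intercept_      # shape (150, 3): one score per class
print(np.allclose(scores, lda.decision_function(X)))  # expected True if coef_ parameterises the per-class decision functions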
You can calculate the coefficients with the following code:
import numpy as np
import pandas as pd

def LDA_coefficients(X, lda):
    nb_col = X.shape[1]
    matrix = np.zeros((nb_col + 1, nb_col), dtype=int)
    Z = pd.DataFrame(data=matrix, columns=X.columns)
    for j in range(0, nb_col):
        Z.iloc[j, j] = 1
    LD = lda.transform(Z)
    nb_funct = LD.shape[1]
    results = pd.DataFrame()
    index = ['const']
    for j in range(0, LD.shape[0] - 1):
        index = np.append(index, 'C' + str(j + 1))
    for i in range(0, LD.shape[1]):
        coef = [LD[-1][i]]
        for j in range(0, LD.shape[0] - 1):
            coef = np.append(coef, LD[j][i] - LD[-1][i])
        result = pd.Series(coef)
        result.index = index
        column_name = 'LD' + str(i + 1)
        results[column_name] = result
    return results
Before calling this function you need to complete the linear discriminant analysis:
lda = LinearDiscriminantAnalysis()
lda.fit(X,y)
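A hypothetical usage sketch (note that the helper assumes X is a pandas DataFrame, since it reads X.columns):

import pandas as pd
from sklearn import datasets
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

iris = datasets.load_iris()
X = pd.DataFrame(iris.data, columns=iris.feature_names)
y = iris.target

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# One column per discriminant function (LD1, LD2), with a 'const' row followed by
# one coefficient per original variable (C1..C4):
print(LDA_coefficients(X, lda))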
I am having trouble fitting a multivariate gaussian distribution to my dataset, more specifically, finding a mean vector (or multiple mean vectors). My dataset is an N x 8 matrix and currently I am using this code:
muVector = np.mean(Xtrain, axis=0) where Xtrain is my training data set.
For the covariance I am building it using an arbitrary variance value (0.5) and doing:
covariance = np.dot(.5, np.eye(N,N)) where N is the number of observations.
But when I construct my Phi matrix, I am getting all zeros. Here is my code:
muVector = np.mean(Xtrain, axis=0)
# get covariance matrix from Xtrain
cov = np.dot(var, np.eye(N,N))
cov = np.linalg.inv(cov)
# build Xtrain Phi
Phi = np.ones((N,M))
for row in range(N):
    temp = Xtrain[row, :] - muVector
    temp.shape = (1, M)
    temp = np.dot((-.5), temp)
    temp = np.dot(temp, cov)
    temp = np.dot(temp, (Xtrain[row, :] - muVector))
    Phi[row, :] = np.exp(temp)
Any help is appreciated. I think I might have to use np.random.multivariate_normal()? But I do not know how to use it in this case.
By "Phi" I believe that you mean the probability density function (pdf) that you want to estimate. In this case, the covariance matrix should be MxM and the output Phi will be Nx1:
# -*- coding: utf-8 -*-
import numpy as np
N = 1024
M = 8
var = 0.5
# Creating a Xtrain NxM observation matrix.
# Its muVector is [0, 1, 2, 3, 4, 5, 6, 7] and the variance for all
# independent random variables is 0.5.
Xtrain = np.random.multivariate_normal(np.arange(8), np.eye(8,8)*var, N)
# Estimating the mean vector.
muVector = np.mean(Xtrain, axis=0)
# Creating the estimated covariance matrix and its inverse.
cov = np.eye(M,M)*var
inv_cov = np.linalg.inv(cov)
# Normalization factor from the pdf.
norm_factor = 1/np.sqrt((2*np.pi)**M * np.linalg.det(cov))
# Estimating the pdf.
Phi = np.ones((N,1))
for row in range(N):
    temp = Xtrain[row, :] - muVector
    temp.shape = (1, M)
    temp = np.dot(-0.5*temp, inv_cov)
    temp = np.dot(temp, (Xtrain[row, :] - muVector))
    Phi[row] = norm_factor*np.exp(temp)
Alternatively, you can use the pdf method from scipy.stats.multivariate_normal:
# -*- coding: utf-8 -*-
import numpy as np
from scipy.stats import multivariate_normal
N = 1024
M = 8
var = 0.5
# Creating a Xtrain NxM observation matrix.
# Its muVector is [0, 1, 2, 3, 4, 5, 6, 7] and the variance for all
# independent random variables is 0.5.
Xtrain = np.random.multivariate_normal(np.arange(8), np.eye(8,8)*var, N)
# Estimating the mean vector.
muVector = np.mean(Xtrain, axis=0)
# Creating the estimated covariance matrix.
cov = np.eye(M,M)*var
Phi2 = multivariate_normal.pdf(Xtrain, mean=muVector, cov=cov)
Both Phi and Phi2 output arrays will be equal.
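A quick way to confirm that, assuming the second snippet reuses the same Xtrain instead of drawing a new one:

# Phi from the loop has shape (N, 1); Phi2 from scipy has shape (N,)
print(np.allclose(Phi.ravel(), Phi2))   # expected True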