I am trying to increase the dimensionality of my initial array:
import matplotlib.pyplot as plt
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
rng = np.random.RandomState(1)  # assumed: any seeded random generator works here
x = 10*rng.rand(50)
y = np.sin(x) + 0.1*rng.rand(50)
poly = PolynomialFeatures(7, include_bias=False)
poly.fit_transform(x[:,np.newaxis])
First, I know np.newaxis is creating an additional column. Why is this necessary?
Now I will train the updated x data (poly) with linear regression:
test_x = np.linspace(0,10,1000)
from sklearn.linear_model import LinearRegression
model = LinearRegression()
# train with the increased dimension (x = poly) and its target
model.fit(poly,y)
# testing
test_y = model.predict(test_x)
When I run this, it gives me ValueError: Expected 2D array, got scalar array instead on the model.fit(poly,y) line. I've already added a dimension to poly, so what is happening?
Also, what's the difference between x[:,np.newaxis] vs. x[:,None]?
In [55]: x=10*np.random.rand(5)
In [56]: x
Out[56]: array([6.47634068, 6.25520837, 7.58822106, 4.65466951, 2.35783624])
In [57]: x.shape
Out[57]: (5,)
newaxis does not add a column, it adds a dimension:
In [58]: x1 = x[:,np.newaxis]
In [59]: x1
Out[59]:
array([[6.47634068],
[6.25520837],
[7.58822106],
[4.65466951],
[2.35783624]])
In [60]: x1.shape
Out[60]: (5, 1)
np.newaxis has the value of None, so both work the same.
In [61]: x[:,None].shape
Out[61]: (5, 1)
One is a little clearer to human readers, the other a little easier to type.
https://www.numpy.org/devdocs/reference/constants.html
Whether x or x1 works depends on the expectations of the learning code. Some learning code expects inputs of shape (samples, features). Such code could treat a (50,)-shaped array as 50 samples with 1 feature, or as 1 case with 50 features. It's better to tell it exactly what you mean.
Look at the docs:
https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html#sklearn.preprocessing.PolynomialFeatures.fit_transform
poly.fit_transform
X : numpy array of shape [n_samples, n_features]
Sure looks like fit_transform expects a 2d input.
https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression.fit
X must be 2-D of shape (n_samples, n_features); y may be 1-D of shape (n_samples,) or 2-D of shape (n_samples, n_targets).
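Putting that together, here is a minimal sketch of the corrected pipeline from the question (X_poly and the call to poly.transform on the test inputs are my additions):
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(1)
x = 10 * rng.rand(50)
y = np.sin(x) + 0.1 * rng.rand(50)

poly = PolynomialFeatures(7, include_bias=False)
X_poly = poly.fit_transform(x[:, np.newaxis])  # shape (50, 7): 2-D, as fit expects

model = LinearRegression()
model.fit(X_poly, y)  # pass the transformed data, not the transformer; y may stay 1-D

test_x = np.linspace(0, 10, 1000)
test_y = model.predict(poly.transform(test_x[:, np.newaxis]))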
Let's consider this data:
import numpy as np
from sklearn.linear_model import LogisticRegression
x=np.linspace(0,2*np.pi,80)
x = x.reshape(-1,1)
y = np.sin(x)+np.random.normal(0,0.4,80)
y[y<1/2] = 0
y[y>1/2] = 1
clf=LogisticRegression(solver="saga", max_iter = 1000)
I want to fit a logistic regression where y is the dependent variable and x is the independent variable. But when I use:
clf.fit(x,y)
I see the error:
'y should be a 1d array, got an array of shape (80, 80) instead'.
I tried to reshape the data using
y=y.reshape(-1,1)
But I end up with an array of length 6400! (How come?)
Could you please give me a hand with performing this regression?
Change the order of your operations:
First generate x and y as 1-D arrays:
x = np.linspace(0, 2*np.pi, 80)
y = np.sin(x) + np.random.normal(0, 0.4, 80)
Then (after y has been generated) reshape x:
x = x.reshape(-1, 1)
Edit following a comment as of 2022-02-20
The source of the problem in the original code is this:
x = np.linspace(0,2*np.pi,80) - generates a 1-D array.
x = x.reshape(-1,1) - reshapes it into a 2-D array, with one column and as many rows as needed.
y = np.sin(x) + np.random.normal(0,0.4,80) - operates on a columnar (80, 1) array and a 1-D array (treated here as a single-row array), so broadcasting makes y a 2-D array of shape (80, 80).
Then the attempt to reshape y gives a single-column array with 6400 rows.
The proper solution is for both x and y to be initially 1-D (single-row) arrays, and my code does just this. Then both arrays can be reshaped as needed.
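To see the broadcasting that produces the (80, 80) array, here is a short demonstration (the names col, bad, and good are mine):
import numpy as np
x = np.linspace(0, 2*np.pi, 80)                    # shape (80,)
col = x.reshape(-1, 1)                             # shape (80, 1)
bad = np.sin(col) + np.random.normal(0, 0.4, 80)   # (80, 1) + (80,) broadcasts
print(bad.shape)                                   # (80, 80)
good = np.sin(x) + np.random.normal(0, 0.4, 80)    # (80,) + (80,)
print(good.shape)                                  # (80,)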
I encountered this error and tried solving it via reshape, but that didn't work:
ValueError: y should be a 1d array, got an array of shape () instead.
Actually, this was happening due to the wrong placement of the [] brackets around np.argmax. Below are the wrong code and the correct code; notice the positioning of [] around np.argmax in both snippets.
Wrong Code
ax[i, j].set_title(
    "Predicted Watch : " + str(le.inverse_transform([pred_digits[prop_class[count]]]))
    + "\n"
    + "Actual Watch : " + str(le.inverse_transform(np.argmax([y_test[prop_class[count]]])).reshape(-1, 1))
)
Correct Code
ax[i, j].set_title(
    "Predicted Watch : " + str(le.inverse_transform([pred_digits[prop_class[count]]]))
    + "\n"
    + "Actual Watch : " + str(le.inverse_transform([np.argmax(y_test[prop_class[count]])]))
)
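To illustrate why the bracket placement matters, here is a tiny sketch; y_row is a hypothetical one-hot label row standing in for y_test[prop_class[count]]:
import numpy as np
y_row = np.array([0, 0, 1, 0])   # hypothetical one-hot label row
np.argmax([y_row])               # 2, a 0-d scalar -> inverse_transform sees shape ()
[np.argmax(y_row)]               # [2], a one-element list -> a valid 1-d input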
I am trying to reshape an (N, 1) array d to an (N,) vector. According to this solution and my own experience with numpy, the following code should convert it to a vector:
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.datasets import make_circles
X, labels = make_circles(n_samples=150, noise=0.1, factor=0.2)
A = kneighbors_graph(X, n_neighbors=5)
d = np.sum(A, axis=1)
d = d.reshape(-1)
However, d.shape gives (1, 150).
The same happens when I exactly replicate the code from the linked solution. Why is the numpy array not reshaping?
The issue is that kneighbors_graph returns the nearest-neighbor graph as a scipy.sparse.csr_matrix. Applying np.sum to it returns a numpy.matrix, a data type that (in my opinion) should no longer exist. numpy.matrix objects are incompatible with just about everything, and numpy operations on them return unexpected results. In particular, a numpy.matrix is always 2-D, which is why reshape(-1) produces (1, 150) instead of (150,).
The solution was converting the scipy.sparse.csr_matrix to a numpy.ndarray:
A = kneighbors_graph(X, n_neighbors=5)
A = A.toarray()
d = np.sum(A, axis=1)
d = d.reshape(-1)
Now we have d.shape = (150,)
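If you would rather keep A sparse (e.g. for memory), an equivalent fix, as far as I can tell, is to convert just the summed result; the sparse matrix's own sum method also returns a numpy.matrix, which np.asarray turns back into a regular array:
d = np.asarray(A.sum(axis=1)).ravel()   # d.shape == (150,)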
I want to calculate the categorical crossentropy of two numpy arrays. Both arrays have the same length.
y_true contains around 10000 2D arrays, which are the labels
y_pred contains 10000 2D arrays, which are my predictions
The result should be a 1-D numpy array which contains all the categorical cross-entropy values for the arrays. The formula is:
loss = -sum_i(x_true_i * log(x_pred_i))
Here x_true_i is the i-th element of one true vector and x_pred_i is the i-th element of the prediction vector.
My implementation looks like this, but it is very slow. The reshaping is done to convert the 2-D arrays to 1-D arrays so I can simply iterate over them.
import math
import numpy as np

def categorical_cross_entropy(y_true, y_pred):
    losses = np.zeros(len(y_true))
    for i in range(len(y_true)):
        # flatten each 2-D array so it can be iterated element-wise
        single_sequence = y_true[i].reshape(y_true.shape[1] * y_true.shape[2])
        single_pred = y_pred[i].reshape(y_pred.shape[1] * y_pred.shape[2])
        total = 0
        for j in range(len(single_sequence)):
            total += single_sequence[j] * math.log(single_pred[j])
        losses[i] = -total
    return losses
A conversion to tensors is not possible, since tf.constant(y_pred) fails with a MemoryError, because every 2-D array in y_true and y_pred has roughly the dimensions 190 x 190. So, any ideas?
You can use scipy.special.xlogy. For example,
In [10]: import numpy as np
In [11]: from scipy.special import xlogy
Create some data:
In [12]: y_true = np.random.randint(1, 10, size=(8, 200, 200))
In [13]: y_pred = np.random.randint(1, 10, size=(8, 200, 200))
Compute the result using xlogy:
In [14]: -xlogy(y_true, y_pred).sum(axis=(1, 2))
Out[14]:
array([-283574.67634307, -283388.18672431, -284720.65206688,
-285517.06983709, -286383.26148469, -282200.33634505,
-285781.78641736, -285862.91148953])
Verify the result by computing it with your function:
In [15]: categorical_cross_entropy(y_true, y_pred)
Out[15]:
array([-283574.67634309, -283388.18672432, -284720.65206689,
-285517.0698371 , -286383.2614847 , -282200.33634506,
-285781.78641737, -285862.91148954])
If you don't want the dependence on scipy, you can do the same thing with np.log, but you might get a warning if any value in y_pred is 0:
In [20]: -(y_true*np.log(y_pred)).sum(axis=(1, 2))
Out[20]:
array([-283574.67634307, -283388.18672431, -284720.65206688,
-285517.06983709, -286383.26148469, -282200.33634505,
-285781.78641736, -285862.91148953])
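If zeros in y_pred are a concern and you still want to avoid SciPy, one option (assuming you want 0 * log(0) treated as 0, which is what xlogy does) is to mask the zero entries:
with np.errstate(divide='ignore', invalid='ignore'):
    terms = np.where(y_true != 0, y_true * np.log(y_pred), 0.0)
losses = -terms.sum(axis=(1, 2))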
I am trying to use Linear Discriminant Analysis from the scikit-learn library in order to perform dimensionality reduction on my data, which has more than 200 features. But I could not find an inverse_transform function in the LDA class.
I just wanted to ask: how can I reconstruct the original data from a point in the LDA domain?
Edit, based on bogatron's and kazemakase's answers:
I think the term "original data" was wrong and instead I should use "original coordinates" or "original space". I know that without all the PCs we can't reconstruct the original data, but when we build the shape space we project the data down to a lower dimension with the help of PCA. PCA tries to explain the data with only 2 or 3 components that capture most of its variance, and if we reconstruct the data based on them it should show us the parts of the shape that cause this separation.
I checked the source code of the scikit-learn LDA again and noticed that the eigenvectors are stored in the scalings_ attribute. When we use the svd solver it's not possible to invert the eigenvector (scalings_) matrix directly, but when I tried the pseudo-inverse of the matrix, I could reconstruct the shape.
Here are two images reconstructed from the points [4.28, 0.52] and [0, 0], respectively:
I think it would be great if someone explained the mathematical limitations of the LDA inverse transform in depth.
The inverse of the LDA does not necessarily make sense, because it loses a lot of information.
For comparison, consider the PCA. Here we get a coefficient matrix that is used to transform the data. We can do dimensionality reduction by stripping rows from the matrix. To get the inverse transform, we first invert the full matrix and then remove the columns corresponding to the removed rows.
The LDA does not give us a full matrix. We only get a reduced matrix that cannot be directly inverted. It is possible to take the pseudo-inverse, but the reconstruction is much less faithful than if we had the full matrix at our disposal.
Consider a simple example:
import numpy as np

C = np.ones((3, 3)) + np.eye(3)  # full transform matrix
U = C[:2, :] # dimensionality reduction matrix
V1 = np.linalg.inv(C)[:, :2] # PCA-style reconstruction matrix
print(V1)
#array([[ 0.75, -0.25],
# [-0.25, 0.75],
# [-0.25, -0.25]])
V2 = np.linalg.pinv(U) # LDA-style reconstruction matrix
print(V2)
#array([[ 0.63636364, -0.36363636],
# [-0.36363636, 0.63636364],
# [ 0.09090909, 0.09090909]])
If we have the full matrix, we get a different inverse transform (V1) than if we simply invert the reduced transform (V2). That is because in the second case we have lost all information about the discarded components.
You have been warned. If you still want to do the inverse LDA transform, here is a function:
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.utils.validation import check_is_fitted
from sklearn.utils import check_array, check_X_y
import numpy as np
def inverse_transform(lda, x):
    if lda.solver == 'lsqr':
        raise NotImplementedError("(inverse) transform not implemented for 'lsqr' "
                                  "solver (use 'svd' or 'eigen').")
    check_is_fitted(lda, ['xbar_', 'scalings_'], all_or_any=any)

    inv = np.linalg.pinv(lda.scalings_)

    x = check_array(x)
    if lda.solver == 'svd':
        x_back = np.dot(x, inv) + lda.xbar_
    elif lda.solver == 'eigen':
        x_back = np.dot(x, inv)

    return x_back
iris = datasets.load_iris()
X = iris.data
y = iris.target
target_names = iris.target_names
lda = LinearDiscriminantAnalysis()
Z = lda.fit(X, y).transform(X)
Xr = inverse_transform(lda, Z)
# plot first two dimensions of original and reconstructed data
plt.plot(X[:, 0], X[:, 1], '.', label='original')
plt.plot(Xr[:, 0], Xr[:, 1], '.', label='reconstructed')
plt.legend()
plt.show()
You see, the result of the inverse transform does not have much to do with the original data (well, it's possible to guess the direction of the projection). A considerable part of the variation is gone for good.
There is no inverse transform because in general, you can not return from the lower dimensional feature space to your original coordinate space.
Think of it like looking at your 2-dimensional shadow projected on a wall. You can't get back to your 3-dimensional geometry from a single shadow because information is lost during the projection.
To address your comment regarding PCA, consider a data set of 10 random 3-dimensional vectors:
In [1]: import numpy as np
In [2]: from sklearn.decomposition import PCA
In [3]: X = np.random.rand(30).reshape(10, 3)
Now, what happens if we apply the Principal Components Transformation (PCT) and apply dimensionality reduction by keeping only the top 2 (out of 3) PCs, then apply the inverse transform?
In [4]: pca = PCA(n_components=2)
In [5]: pca.fit(X)
Out[5]:
PCA(copy=True, iterated_power='auto', n_components=2, random_state=None,
svd_solver='auto', tol=0.0, whiten=False)
In [6]: Y = pca.transform(X)
In [7]: X.shape
Out[7]: (10, 3)
In [8]: Y.shape
Out[8]: (10, 2)
In [9]: XX = pca.inverse_transform(Y)
In [10]: X[0]
Out[10]: array([ 0.95780971, 0.23739785, 0.06678655])
In [11]: XX[0]
Out[11]: array([ 0.87931369, 0.34958407, -0.01145125])
Obviously, the inverse transform did not reconstruct the original data. The reason is that by dropping the lowest PC, we lost information. Next, let's see what happens if we retain all PCs (i.e., we do not apply any dimensionality reduction):
In [12]: pca2 = PCA(n_components=3)
In [13]: pca2.fit(X)
Out[13]:
PCA(copy=True, iterated_power='auto', n_components=3, random_state=None,
svd_solver='auto', tol=0.0, whiten=False)
In [14]: Y = pca2.transform(X)
In [15]: XX = pca2.inverse_transform(Y)
In [16]: X[0]
Out[16]: array([ 0.95780971, 0.23739785, 0.06678655])
In [17]: XX[0]
Out[17]: array([ 0.95780971, 0.23739785, 0.06678655])
In this case, we were able to reconstruct the original data because we didn't throw away any information (since we retained all the PCs).
The situation with LDA is even worse because the maximum number of components that can be retained is not 200 (the number of features for your input data); rather, the maximum number of components you can retain is n_classes - 1. So if, for example, you were doing a binary classification problem (2 classes), the LDA transform would be going from 200 input dimensions down to just a single dimension.
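As a quick check of that limit, here is a small sketch with made-up data (200 features, 2 classes; all names are mine):
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X = np.random.rand(100, 200)       # 100 samples, 200 features
y = np.random.randint(0, 2, 100)   # 2 classes
Z = LinearDiscriminantAnalysis().fit_transform(X, y)
print(Z.shape)                     # (100, 1): at most n_classes - 1 = 1 component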
I have trained a machine learning binary classifier on a 100x85 array in sklearn. I would like to be able to vary 2 of the features in the array, say column 0 and column 1, and generate a contour or surface graph showing how the predicted probability of falling in one category varies across the surface.
It seems reasonable to me that I would use something like the following:
# X: 100 x 85 array of data used for the training set
# clf: trained 2-class classifier
x = np.array(X)
y = np.array(X)
x[:, 0] = np.linspace(0, 100, 100)
y[:, 1] = np.linspace(0, 100, 100)
xx, yy = np.meshgrid(x, y)
The next step would be to use
clf.predict_proba(<input arrays>)
followed by plotting, but using meshgrid results in two 8500x8500 matrices that can't be used in my classifier.
How do I get the necessary 100x85 vector at each point in the grid to use predict_proba with my classifier?
Thanks for any help you can provide.
As @wflynny says above, you need to give np.meshgrid two 1-D arrays. We can use X.shape to create your x and y arrays, like this:
import numpy as np

X = np.zeros((100, 85))  # just to get the right shape here
print(X.shape)
# (100, 85)
x = np.arange(X.shape[0])
y = np.arange(X.shape[1])
print(x.shape)
# (100,)
print(y.shape)
# (85,)
xx, yy = np.meshgrid(x, y, indexing='ij')
print(xx.shape)
# (100, 85)
print(yy.shape)
# (100, 85)
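From there, one way to get probabilities over the grid (a sketch, assuming clf was fitted on the 85-feature X and that holding the remaining 83 features at their column means is acceptable for your use case) is to build a 2-D input with one row per grid point:
f0 = np.linspace(X[:, 0].min(), X[:, 0].max(), 100)
f1 = np.linspace(X[:, 1].min(), X[:, 1].max(), 100)
xx, yy = np.meshgrid(f0, f1)

grid = np.tile(X.mean(axis=0), (xx.size, 1))   # (10000, 85): every row is the mean sample
grid[:, 0] = xx.ravel()                        # vary feature 0 across the grid
grid[:, 1] = yy.ravel()                        # vary feature 1 across the grid

proba = clf.predict_proba(grid)[:, 1].reshape(xx.shape)
# proba can now be plotted with plt.contourf(xx, yy, proba)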