Related
This is my problem with sklearn's OneHotEncoder.
With an array a = [1,2,3,4,5,6,7,8,9,22], i.e. ALL UNIQUE values, of shape [10,1] (after reshape(-1,1)), a [10,10] matrix of one-hot-encoded values is returned.
array([[ 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
[ 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[ 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.]])
But with an array like a = [1,2,2,4,4,6,7,8,9,22], i.e. NON-UNIQUE values (only 8 distinct), of shape [10,1] (after reshape(-1,1)), a [10,8] matrix of one-hot-encoded values is returned.
array([[ 1., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 1., 0., 0., 0., 0., 0., 0.],
[ 0., 1., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 1., 0., 0., 0., 0., 0.],
[ 0., 0., 1., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 1., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 1., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 1., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 1., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 1.]])
But I cannot use this, as my input placeholder expects a [10,10] matrix as input. Can anyone help me handle non-unique values in sklearn's OneHotEncoder?
P.S. Adding the parameter n_values=10 gives an error: ValueError: Feature out of bounds for n_values=10
Do you know all the values your categorical feature can take? If so, you can do something like this:
import numpy as np
from sklearn.preprocessing import OneHotEncoder

enc = OneHotEncoder()
enc.fit(np.asarray([1, 2, 3, 4, 5, 6, 7, 8, 9, 22]).reshape(-1, 1))  # fit the encoder on every possible value
data_for_encoding = np.asarray([1, 2, 2, 4, 4, 6, 7, 8, 9, 22]).reshape(-1, 1)  # your data
sparse_matrix = enc.transform(data_for_encoding)  # encoded data, now [10, 10]
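transform returns a scipy sparse matrix, so call toarray() if you need the dense [10, 10] result. And on newer scikit-learn versions (0.20+, where the categories parameter exists and n_values was deprecated), the same idea can be expressed directly; a minimal sketch:
dense = sparse_matrix.toarray()  # dense [10, 10] one-hot matrix

enc2 = OneHotEncoder(categories=[[1, 2, 3, 4, 5, 6, 7, 8, 9, 22]])
dense2 = enc2.fit_transform(data_for_encoding).toarray()  # also [10, 10]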
New to Python and Numpy, I'm trying to create a 263-dimensional array.
I need that many dimensions for a machine learning model.
Of course, one way is to use numpy.zeros or numpy.ones and write code like this:
x = np.zeros((1,1,1,1,1,1,1,1,1,1,1))  # and more 1,1,1,1
Is there an easier way to create arrays with many dimensions?
You don't need 263 dimensions. If every dimension had only size 2, you'd still have 2 ** 263 elements, which is:
14821387422376473014217086081112052205218558037201992197050570753012880593911808
You wouldn't be able to do anything with such an array: you couldn't even initialize it, not even on Google's servers.
You more likely need either an array of 263 values:
>>> np.zeros(263)
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0.])
or a matrix with 263 vectors of M elements (let's say 3):
>>> np.zeros((263, 3))
array([[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.],
...
...
[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.]])
There are many advanced research centers that are perfectly happy with vanilla Numpy. Having fewer than 32 dimensions available doesn't seem to bother them much, whether for quantum mechanics or machine learning.
Let's start with the numpy documentation; help(np.zeros) gives:
zeros(shape, dtype=float, order='C')
Return a new array of given shape and type, filled with zeros.
Parameters
----------
shape : int or sequence of ints
Shape of the new array, e.g., ``(2, 3)`` or ``2``.
...
Returns
-------
out : ndarray
Array of zeros with the given shape, dtype, and order.
...
The shape argument is just a list of the sizes of each dimension (but you probably knew that). There are lots of ways to easily create such a list in Python; one quick way is
np.zeros(np.ones(263, dtype=int))
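If you prefer a plain tuple over an integer array, repeating with * builds the same kind of shape (a minimal sketch):
x = np.zeros((1,) * 10)  # ten dimensions, each of size 1
print(x.ndim)  # 10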
But, as others have mentioned, numpy has a somewhat arbitrary limitation of 32 dimensions. In my experience, you can get similar and more flexible behavior by keeping an index array showing which "dimension" each row belongs to.
Most likely, for ML applications you don't actually want this:
shape = np.random.randint(1,10,(263,))
arr = np.zeros(shape) # causes a ValueError anyway
You actually want something sparse:
for i, value in enumerate(nonzero_values):
    arr[tuple(idx[i])] = value  # tuple() so the 263 indices address a single element
idx in this case is a (num_samples, 263) array and nonzero_values is a (num_samples,) array.
ML algorithms usually work on these idx and value arrays (usually called X and Y) since the actual arrays would be enormous otherwise.
Sometimes you need a "one-hot" array of your dimensions, which will make idx.shape == (num_samples, shape.sum()), with idx containing only 0 or 1 values. But that's still smaller than any sort of high-dimensional array.
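For concreteness, here is a minimal sketch of that (idx, value) representation; num_samples, dim_size, idx, and values are illustrative names, not from any library, and a fixed dim_size stands in for the variable per-dimension sizes above:
import numpy as np

num_samples, num_dims, dim_size = 4, 263, 5
idx = np.random.randint(0, dim_size, size=(num_samples, num_dims))  # one coordinate per "dimension"
values = np.random.rand(num_samples)  # one stored value per sample

# One-hot layout: a 0/1 block of length dim_size per dimension,
# i.e. an array of shape (num_samples, num_dims * dim_size) holding only 0s and 1s.
one_hot = np.zeros((num_samples, num_dims * dim_size))
one_hot[np.arange(num_samples)[:, None],
        np.arange(num_dims) * dim_size + idx] = 1.0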
There is a new package called DimPy which can create multi-dimensional arrays in Python very easily. To install it, use
pip install dimpy
Usage example:
from dimpy import *
a = dim(4, 5, 6)  # a 3-dimensional array of 4x5x6 elements; use any number of dimensions, separated by commas
print(a)
By default every element will be zero. To change that, use dfv(a, 'New value').
To convert it into a numpy-style array, use
a = npary(a)
See more details here: https://www.respt.in/p/python-package-dimpy.html?m=1
Say I have two matrices A and B. For example,
A = np.zeros((5, 5))
B = np.eye(5)
Is there a way to append A and B?
It sounds to me like you're looking for np.hstack:
>>> import numpy as np
>>> a = np.zeros((5, 5))
>>> b = np.eye(5)
>>> np.hstack((a, b))
array([[ 0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.]])
np.vstack will work if you want to stack them downward:
>>> np.vstack((a, b))
array([[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 1., 0., 0., 0., 0.],
[ 0., 1., 0., 0., 0.],
[ 0., 0., 1., 0., 0.],
[ 0., 0., 0., 1., 0.],
[ 0., 0., 0., 0., 1.]])
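Both of these are convenience wrappers around np.concatenate, so you can also spell them with an explicit axis:
>>> np.concatenate((a, b), axis=1).shape  # same as np.hstack((a, b))
(5, 10)
>>> np.concatenate((a, b), axis=0).shape  # same as np.vstack((a, b))
(10, 5)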
I have 2 arrays:
np.array(y_pred_list).shape
# returns (5, 47151, 10)
np.array(y_val_lst).shape
# returns (5, 47151, 10)
np.array(y_pred_list)[:, 2, :]
# returns
array([[ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
np.array(y_val_lst)[:, 2, :]
# returns
array([[ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]], dtype=float32)
I would like to go through all 47151 examples and calculate the "accuracy", meaning the count of entries in y_pred_list that match y_val_lst, divided by 47151. What's the comparison function for this?
You can find a lot of useful classification scores in sklearn.metrics, particularly accuracy_score(). See the documentation; you would use it as:
from sklearn.metrics import accuracy_score

acc = accuracy_score(np.array(y_val_lst)[:, 2, :],
                     np.array(y_pred_list)[:, 2, :])
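Note that this scores only slice 2. With 2-D indicator arrays like these, accuracy_score computes subset accuracy: a row counts as correct only if all 10 entries match. So one way to score everything at once (a sketch; this averages over all 5 x 47151 rows rather than requiring all 5 predictions per example to agree) is:
acc_all = accuracy_score(np.array(y_val_lst).reshape(-1, 10),
                         np.array(y_pred_list).reshape(-1, 10))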
Sounds like you want something like this (converting the lists to the (5, 47151, 10) arrays shown above first):
accuracy = (np.array(y_pred_list) == np.array(y_val_lst)).all(axis=(0, 2)).mean()
...though since your arrays are clearly floating-point arrays, you might want to allow for numerical-precision error rather than insisting on exact equality:
accuracy = (np.abs(np.array(y_pred_list) - np.array(y_val_lst)) < tolerance).all(axis=(0, 2)).mean()
(where, for example, tolerance = 1e-10)
The .all(axis=(0, 2)) call records cases in which everything in its input is True (i.e. everything matches) along dimension 0 (the one of extent 5) and dimension 2 (the one of extent 10). It outputs a one-dimensional array of length 47151. The .mean() call then gives you the proportion of matches in that sequence, which is my best guess as to what you mean by "over 47151".
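If it helps to see the formula in action, here is a quick sanity check on scaled-down toy shapes (sizes and names here are purely illustrative):
import numpy as np

rng = np.random.default_rng(0)
y_val = rng.integers(0, 2, size=(5, 7, 10)).astype(float)  # stand-in for (5, 47151, 10)
y_pred = y_val.copy()
y_pred[0, 0, 0] = 1.0 - y_pred[0, 0, 0]  # corrupt one entry so example 0 no longer matches
accuracy = (y_pred == y_val).all(axis=(0, 2)).mean()
print(accuracy)  # 6/7 = 0.857...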
I'm trying to do the following with numpy (Python newbie here).
Create a zeroed matrix of the right dimensions:
num_rows = 80
num_cols = 23
A = numpy.zeros(shape=(num_rows, num_cols))
Operate on the matrix
k = 5
numpy.transpose(A)
U,s,V = linalg.svd(A)
Extract sub-matrix
sk = s[0:(k-1), 0:(k-1)]
This results in an error:
Traceback (most recent call last):
File "tdm2svd.py", line 40, in <module>
sk = s[0:(k-1), 0:(k-1)]
IndexError: too many indices
What am I doing wrong?
To answer your question: s is only a 1-D array. (And note that numpy.transpose(A) returns a transposed array rather than transposing A in place; since you never assigned the result, A was never actually transposed.)
>>> u,s,v = linalg.svd(A)
>>> s
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
>>>
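Since s is 1-D, the first k singular values are just s[:k]. If what you are after is a k x k diagonal matrix of them (e.g. for a rank-k reconstruction), np.diag builds it:
>>> k = 5
>>> sk = np.diag(s[:k])  # (5, 5) diagonal matrix of the top k singular values
>>> sk.shape
(5, 5)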
For selecting a submatrix, I think this does what you want (though there may be a better way):
>>> rows = range(10,15)
>>> cols = range(5,8)
>>> A[rows][:,cols]
array([[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.]])
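One such better way is np.ix_, which builds an open mesh so the row and column selections happen in a single indexing step:
>>> A[np.ix_(rows, cols)].shape  # same (5, 3) submatrix, one indexing operation
(5, 3)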
Or, if your rows and columns are contiguous, plain slicing is simpler still:
>>> A[15:32, 2:7]
array([[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.]])