Comparing Arrays for Accuracy - python

I have two arrays:
np.array(y_pred_list).shape
# returns (5, 47151, 10)
np.array(y_val_lst).shape
# returns (5, 47151, 10)
np.array(y_pred_list)[:, 2, :]
# returns
array([[ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
       [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
       [ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
       [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
np.array(y_val_lst)[:, 2, :]
# returns
array([[ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
       [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
       [ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
       [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]], dtype=float32)
I would like to go through all 47151 examples and calculate the "accuracy": that is, the number of examples in y_pred_list that match y_val_lst, divided by 47151. What's the comparison function for this?

You can find a lot of useful classification scores in sklearn.metrics, particularly accuracy_score(). See the docs; you would use it as:
from sklearn.metrics import accuracy_score

acc = accuracy_score(np.array(y_val_lst)[:, 2, :],
                     np.array(y_pred_list)[:, 2, :])

Sounds like you want something like this (converting the lists to arrays first):
y_pred = np.array(y_pred_list)
y_val = np.array(y_val_lst)
accuracy = (y_pred == y_val).all(axis=(0,2)).mean()
...though since your arrays are clearly floating-point arrays, you might want to allow for numerical-precision errors rather than insisting on exact equality:
accuracy = (np.abs(y_pred - y_val) < tolerance).all(axis=(0,2)).mean()
(where, for example, tolerance = 1e-10)
The .all(axis=(0,2)) call records the cases in which everything in its input is True (i.e. everything matches) along dimension 0 (the one with extent 5) and dimension 2 (the one with extent 10). It outputs a one-dimensional array of length 47151. The .mean() call then gives you the proportion of matches in that sequence, which is my best guess as to what you mean by "over 47151".
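As a minimal self-contained sketch of what those axis arguments do (the small stand-in arrays and the corrupted index are illustrative, not the OP's data):
import numpy as np

rng = np.random.default_rng(0)
pred = rng.integers(0, 2, size=(5, 7, 10)).astype(float)  # stand-in for the (5, 47151, 10) data
val = pred.copy()
val[0, 3, 0] += 1.0  # corrupt example 3 so it no longer matches

matches = (pred == val).all(axis=(0, 2))  # shape (7,): one True/False per example
print(matches.shape, matches.mean())      # (7,) 0.857... i.e. 6 of 7 examples match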

Related

Is there an efficient way of representing a 2D numpy array for the purpose of fitting a GMM to it?

I have been using Gaussian Mixture Models (GMM) to model a set of peaks in a 2D numpy array (a).
a = np.array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 1., 100., 1000., 100., 2., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0.],
              [0., 0., 0., 0., 1., 1., 1., 1., 1., 1., 1., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 1., 1., 100., 100., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
              [0., 0., 2., 1., 2., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 1., 1., 1., 0., 0., 0., 0., 0.],
              [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 0., 0.],
              [0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
The problem is that in order to fit a GMM to my data with sklearn I have to first generate a density_array, which holds a huge number of data points, depending on the height of the peaks in a.
def convert_to_density_array(array):
    """
    Convert an array to a density array
    """
    density_list = []
    # iterate over each i,j coordinate in the array
    for (i, j), value in np.ndenumerate(array):
        for x in range(int(value)):
            density_list.append((i, j))
    return np.array(density_list)

density_array = convert_to_density_array(a)
gmm = mixture.GaussianMixture(n_components=2, covariance_type='full').fit(density_array)
Is there an efficient way of representing a 2D numpy array for the purpose of fitting a GMM to it?
You can store the data at lower precision by adding dtype=np.float32 to your np.array call, which is fine as long as you can live with about 7 significant digits instead of about 16 (totally acceptable in your case), but that's the only way to store the same data in a smaller memory footprint and still pass it to the GMM.
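(As an aside, if building density_array is slow rather than too big, the same array can be produced without the Python loops; a minimal sketch, assuming integer-valued counts as in a:)
import numpy as np

coords = np.indices(a.shape).reshape(2, -1).T       # every (i, j) coordinate, in C order
counts = a.astype(int).ravel()                      # how many copies of each coordinate
density_array = np.repeat(coords, counts, axis=0)   # same rows as convert_to_density_array(a)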
What you are trying to do is curve fitting, not density modelling, so you can use SciPy's curve_fit on your original data without creating density_array at all: pass it a function that is the sum of two Gaussians and, in a loop, randomly perturb the initial estimate until you get the least error. Writing that code takes some time, though, so consider this approach only if you cannot get your data into memory any other way.
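A minimal sketch of that curve-fitting idea, assuming two isotropic 2D Gaussians; the function name, parameters, and initial guesses below are illustrative, not a fixed recipe:
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(coords, a1, r1, c1, s1, a2, r2, c2, s2):
    # sum of two isotropic 2D Gaussians evaluated at (row, col) coordinates
    r, c = coords
    g1 = a1 * np.exp(-((r - r1) ** 2 + (c - c1) ** 2) / (2 * s1 ** 2))
    g2 = a2 * np.exp(-((r - r2) ** 2 + (c - c2) ** 2) / (2 * s2 ** 2))
    return g1 + g2

rr, cc = np.indices(a.shape)
coords = np.vstack((rr.ravel(), cc.ravel()))
p0 = [1000, 0, 24, 1, 100, 1, 24, 1]  # rough guesses near the two visible peaks
popt, _ = curve_fit(two_gaussians, coords, a.ravel(), p0=p0, maxfev=10000)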

Finding the start/stop positions and length of the longest and shortest sequence of 1s or 0s in a numpy matrix

I have a numpy matrix that looks like:
matrix = [[0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
           0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 0., 0., 0.,
           0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
          [0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
           1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
           0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
          [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
           0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
           0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]
How would I get the length of the longest sequence of 1s or 0s? Also how would I get their start and stop positions?
Is there an easier numpy-way to get this done?
Output format is flexible as long as it denotes the inner list index, the length value, and value's list indices.
Example:
LONGEST ONES: 1, 16, 2, 17 (index of inner list, length, longest 1s sequence index start, longest 1s sequence end pos.).
or [1, 16, 2, 17]/(1, 16, 2, 17)
LONGEST ZEROS: 2, 45, 0, 45
Not a duplicate of these questions as this concerns a matrix:
find the start position of the longest sequence of 1's
The result (the longest) should be considered across all inner lists.
A sequence count does not continue when it reaches the end of an inner list.
Using Divakar's base answer, you can adapt it by using np.vectorize, setting the signature argument, and doing simple math to get what you're looking for.
Take, for instance,
m = np.array(matrix)
def get_longest_ones_matrix(b):
    # pad with False at both ends so every run of 1s yields a (start, end) index pair
    idx_pairs = np.where(np.diff(np.hstack(([False], b == 1, [False]))))[0].reshape(-1, 2)
    if not idx_pairs.size:
        return np.array([0, 0, 0])
    d = np.diff(idx_pairs, axis=1).argmax()   # which run is longest
    start_longest_seq = idx_pairs[d, 0]
    end_longest_seq = idx_pairs[d, 1]         # exclusive end
    l = end_longest_seq - start_longest_seq   # length of the run
    p = start_longest_seq % 45                # start position within the row
    e = end_longest_seq - 1                   # inclusive end position
    return np.array([l, p, e])

s = m.shape[-1]
v = np.vectorize(get_longest_ones_matrix, signature='(s)->(n)')
x = v(m)
Which yields
[[ 3 26 28]
 [16  2 17]
 [ 0  0  0]]
Then,
a = x[:,0].argmax()
print(a,x[a])
1 [16 2 17]
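As a quick cross-check, the per-row results can be reproduced with plain itertools.groupby; a minimal, unoptimised sketch (longest_run is an illustrative helper, not part of the answer above):
from itertools import groupby

def longest_run(row, target):
    best = (0, 0, 0)  # (length, start index, inclusive end index)
    pos = 0
    for value, group in groupby(row):
        run = len(list(group))
        if value == target and run > best[0]:
            best = (run, pos, pos + run - 1)
        pos += run
    return best

for i, row in enumerate(matrix):
    print(i, longest_run(row, 1))  # row 1 -> (16, 2, 17)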

Python sklearn OneHotEncoding categorical and sometimes repeated values

This is my problem with sklearn's OneHotEncoder.
With an array a = [1,2,3,4,5,6,7,8,9,22], i.e. ALL UNIQUE values, of a.shape = [10, 1] (after reshape(-1,1)), a [10,10] matrix of OneHotEncoded values is returned.
array([[ 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
       [ 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
       [ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
       [ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
       [ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [ 0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
       [ 0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
       [ 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
       [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
       [ 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.]])
But with an array like a = [1,2,2,4,4,6,7,8,9,22], i.e. NON-UNIQUE values, of a.shape = [10, 1] (after reshape(-1,1)), a [10,8] matrix of OneHotEncoded values is returned.
array([[ 1., 0., 0., 0., 0., 0., 0., 0.],
       [ 0., 1., 0., 0., 0., 0., 0., 0.],
       [ 0., 1., 0., 0., 0., 0., 0., 0.],
       [ 0., 0., 1., 0., 0., 0., 0., 0.],
       [ 0., 0., 1., 0., 0., 0., 0., 0.],
       [ 0., 0., 0., 1., 0., 0., 0., 0.],
       [ 0., 0., 0., 0., 1., 0., 0., 0.],
       [ 0., 0., 0., 0., 0., 1., 0., 0.],
       [ 0., 0., 0., 0., 0., 0., 1., 0.],
       [ 0., 0., 0., 0., 0., 0., 0., 1.]])
But I cannot use this as my input placeholder expects a [10,10] matrix as input. Can anyone help me handle non-unique values in sklearn's OneHotEncoder?
P.S. Adding the parameter n_values=10 gives an error: ValueError: Feature out of bounds for n_values=10
Do you know all the values your categorical feature can take? If so, you can do something like this:
import numpy as np
from sklearn.preprocessing import OneHotEncoder

enc = OneHotEncoder()
enc.fit(np.asarray([1,2,3,4,5,6,7,8,9,22]).reshape(-1, 1))  # fit your encoder to every value it can take
data_for_encoding = np.asarray([1,2,2,4,4,6,7,8,9,22]).reshape(-1, 1)  # your data
sparse_matrix = enc.transform(data_for_encoding)  # encoded data
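In more recent scikit-learn versions (0.20+), where n_values was replaced by the categories parameter, the same idea can be written by listing every possible value explicitly; a minimal sketch:
import numpy as np
from sklearn.preprocessing import OneHotEncoder

enc = OneHotEncoder(categories=[[1, 2, 3, 4, 5, 6, 7, 8, 9, 22]])
data = np.asarray([1, 2, 2, 4, 4, 6, 7, 8, 9, 22]).reshape(-1, 1)
encoded = enc.fit_transform(data).toarray()  # shape (10, 10); repeated values share a column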

N dimensional array in python

New to Python and NumPy, trying to create 263-dimensional arrays.
I need that many dimensions for a machine learning model.
Of course one way is using numpy.zeros or numpy.ones and writing code as below:
x=np.zeros((1,1,1,1,1,1,1,1,1,1,1)) #and more 1,1,1,1
Is there an easier way to create arrays with many dimensions?
You don't need 263 dimensions. If every dimension had only size 2, you'd still have 2 ** 263 elements, which is:
14821387422376473014217086081112052205218558037201992197050570753012880593911808
You wouldn't be able to do anything with such a matrix: not even initialize it on Google's servers.
You either need an array with 263 values:
>>> np.zeros(263)
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
        ...
        0., 0., 0.])
or a matrix with 263 vectors of M elements (let's say 3):
>>> np.zeros((263, 3))
array([[ 0., 0., 0.],
       [ 0., 0., 0.],
       [ 0., 0., 0.],
       [ 0., 0., 0.],
       ...
       [ 0., 0., 0.],
       [ 0., 0., 0.],
       [ 0., 0., 0.],
       [ 0., 0., 0.]])
There are many advanced research centers that are perfectly happy with vanilla NumPy. Having to use fewer than 32 dimensions doesn't seem to bother them much, whether for quantum mechanics or machine learning.
Let's start with the numpy documentation; help(np.zeros) gives
zeros(shape, dtype=float, order='C')
Return a new array of given shape and type, filled with zeros.
Parameters
----------
shape : int or sequence of ints
Shape of the new array, e.g., ``(2, 3)`` or ``2``.
...
Returns
-------
out : ndarray
Array of zeros with the given shape, dtype, and order.
...
The shape argument is just a list of the size of each dimension (but you probably knew that). There are lots of ways to easily create such a list in Python; one quick way is
np.zeros(np.ones(263, dtype=int))
But, as others have mentioned, numpy has a somewhat arbitrary limitation of 32 dimensions. In my experience, you can get similar and more flexible behavior by keeping an index array showing which "dimension" each row belongs to.
Most likely, for ML applications you don't actually want this:
shape = np.random.randint(1,10,(263,))
arr = np.zeros(shape) # causes a ValueError anyway
You actually want something sparse
for i, value in enumerate(nonzero_values):
    arr[idx[i]] = value
idx in this case is a (num_samples, 263) array and nonzero_values is a (num_samples,) array.
ML algorithms usually work on these idx and value arrays (usually called X and Y) since the actual arrays would be enormous otherwise.
Sometimes you need a "one-hot" array of your dimensions, which will make idx.shape == (num_samples, shape.sum()), with idx containing only 0 or 1 values. But that's still smaller than any sort of high-dimensional array.
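A minimal sketch of the idx/nonzero_values storage described above (the names follow the description and are illustrative, not a library API):
import numpy as np

rng = np.random.default_rng(0)
num_samples = 1000
shape = rng.integers(1, 10, size=263)                        # size of each "dimension"
idx = (rng.random((num_samples, 263)) * shape).astype(int)   # one coordinate per dimension
nonzero_values = rng.random(num_samples)                     # the value stored at each coordinate
# algorithms consume (idx, nonzero_values) directly; the dense array is never built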
There is a new package called DimPy which can create multi-dimensional arrays in Python very easily. To install it, use
pip install dimpy
Usage example:
from dimpy import *
a = dim(4,5,6) # a 3-dimensional array of 4x5x6 elements; use any number of dimensions within '()', separated by commas
print(a)
By default every element will be zero. To change it, use dfv(a, 'new value').
To convert it into a numpy-style array, use
a = npary(a)
See more details here: https://www.respt.in/p/python-package-dimpy.html?m=1

Replace values in bigger numpy array with smaller array

I have two numpy arrays: the bigger one is 10 x 10 and the smaller one is 2 x 2.
I would like to substitute the values in the bigger array with those from the smaller array at a user-specified location, e.g. replace four values around the center of the 10 x 10 array with the 2 x 2 array.
Right now, I am doing this by using a nested for loop, and figuring out which pixels in the bigger array overlap those of the smaller array. Is there a more pythonic way to do it?
In [1]: import numpy as np
In [2]: a = np.zeros(100).reshape(10,10)
In [3]: b = np.ones(4).reshape(2,2)
In [4]: a[4:6, 4:6] = b
In [5]: a
Out[5]:
array([[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [ 0., 0., 0., 0., 1., 1., 0., 0., 0., 0.],
       [ 0., 0., 0., 0., 1., 1., 0., 0., 0., 0.],
       [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
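For the general "user-specified location" case, the slice bounds can be computed from the small array's shape; a hedged sketch (paste_at is an illustrative helper, not a NumPy function, and it assumes the block fits inside the big array):
def paste_at(big, small, row, col):
    # overwrite a block of `big` in place, top-left corner at (row, col)
    r, c = small.shape
    big[row:row + r, col:col + c] = small
    return big

paste_at(a, b, 4, 4)  # same result as a[4:6, 4:6] = b above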
