How to convert a Python dict to a 3D numpy array?

I have a Python dict with n_keys keys where each value is a 2D array of shape (dim1, dim2).
I want to turn this into a 3D numpy array of shape (dim1, dim2, n_keys).
How can I do this quickly, without a lot of nested loops?
EDIT:
Example:
featureMatrix = np.empty((len(list(featureDict.values())[0]),
                          len(list(featureDict.values())[0][0, :]),
                          len(featureDict.keys())))
for k, key in enumerate(featureDict.keys()):
    value = featureDict[key]
    for i in range(len(value[:, 0])):
        for j in range(len(value[0, :])):
            featureMatrix[i, j, k] = value[i, j]

Dictionaries are unordered (before Python 3.7, where insertion order became guaranteed), so you probably don't want to rely on the order in which they happen to iterate, but you can nevertheless stack the values directly with array3d = np.dstack(somedict.values()).
Here is some example case:
>>> somedict = dict(a = np.arange(4).reshape(2,2),
...                 b = np.arange(4).reshape(2,2) + 10,
...                 c = np.arange(4).reshape(2,2) + 100,
...                 d = np.arange(4).reshape(2,2) + 1000)
>>> array3d = np.dstack(somedict.values())
>>> array3d.shape
(2, 2, 4)
>>> array3d # unordered because of dict unorderedness; the order depends, for all practical purposes, on chance
array([[[  10,    0, 1000,  100],
        [  11,    1, 1001,  101]],

       [[  12,    2, 1002,  102],
        [  13,    3, 1003,  103]]])
or, in case you want to stack them sorted by the keys of the dictionary:
>>> array3d = np.dstack((somedict[i] for i in sorted(somedict.keys())))
>>> array3d # sorted by the keys!
array([[[   0,   10,  100, 1000],
        [   1,   11,  101, 1001]],

       [[   2,   12,  102, 1002],
        [   3,   13,  103, 1003]]])

How to raise every element of a vector to the power of every element of another vector?

I would like to raise a vector to ascending powers from 0 to 4:
import numpy as np
a = np.array([1, 2, 3]) # list of 11 components
b = np.array([0, 1, 2, 3, 4]) # power
c = np.power(a,b)
desired results are:
c = [[1**0, 1**1, 1**2, 1**3, 1**4], [2**0, 2**1, ...], ...]
I keep getting this error:
ValueError: operands could not be broadcast together with shapes (3,) (5,)
One solution is to add a new dimension to your array a:
c = a[:,None]**b
# Using broadcasting:
# (3,1)**(5,) --> (3,5)
#
#     [[1],
# c =  [2],  ** [0,1,2,3,4]
#      [3]]
For more information, check the numpy broadcasting documentation.
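Another aside (my addition, not from the original answers): every binary NumPy ufunc exposes an outer method, so np.power.outer computes the same power table directly. A small sketch:

import numpy as np

a = np.array([1, 2, 3])
b = np.array([0, 1, 2, 3, 4])

# ufunc.outer applies power to every (a[i], b[j]) pair -> shape (3, 5),
# equivalent to a[:, None] ** b.
c = np.power.outer(a, b)
print(c)
# [[ 1  1  1  1  1]
#  [ 1  2  4  8 16]
#  [ 1  3  9 27 81]]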
Here's a solution:
import numpy as np

num_of_powers = 5
num_of_components = 11
a = []
for i in range(1, num_of_components + 1):
    a.append(np.repeat(i, num_of_powers))
b = list(range(num_of_powers))
c = np.power(a, b)
The output c would look like:
array([[ 1, 1, 1, 1, 1],
[ 1, 2, 4, 8, 16],
[ 1, 3, 9, 27, 81],
[ 1, 4, 16, 64, 256],
[ 1, 5, 25, 125, 625],
[ 1, 6, 36, 216, 1296],
[ 1, 7, 49, 343, 2401],
[ 1, 8, 64, 512, 4096],
[ 1, 9, 81, 729, 6561],
[ 1, 10, 100, 1000, 10000],
[ 1, 11, 121, 1331, 14641]], dtype=int32)
Your solution shows a broadcast error because as per the documentation:
If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).
Equivalently, with a plain list comprehension:
c = [[x**y for y in b] for x in a]
or with map:
c = np.asarray(list(map(lambda x: np.power(a, x), b))).transpose()
You need to first create a matrix where the rows are repetitions of each number. This can be done with np.tile:
mat = np.tile(a, (len(b), 1)).transpose()
And then raise that to the power of b elementwise:
np.power(mat, b)
All together:
import numpy as np
nums = np.array([1, 2, 3]) # list of 11 components
powers = np.array([0, 1, 2, 3, 4]) # power
print(np.power(np.tile(nums, (len(powers), 1)).transpose(), powers))
Which will give:
[[ 1 1 1 1 1] # == [1**0, 1**1, 1**2, 1**3, 1**4]
[ 1 2 4 8 16] # == [2**0, 2**1, 2**2, 2**3, 2**4]
[ 1 3 9 27 81]] # == [3**0, 3**1, 3**2, 3**3, 3**4]

Merging rows in numpy to form new array

This is a sample of what I am trying to accomplish. I am very new to python and have searched for hours to find out what I am doing wrong. I haven't been able to find what my issue is. I am still new enough that I may be searching for the wrong phrases. If so, could you please point me in the right direction?
I want to combine n arrays into one array, interleaving them by rows: the first row from x as the first row of combined, the first row from y as the second row, the first row from z as the third row, then the second row from x as the fourth row, and so on.
So it would look something like this:
x = [x1 x2 x3]
    [x4 x5 x6]
    [x7 x8 x9]

y = [y1 y2 y3]
    [y4 y5 y6]
    [y7 y8 y9]

z = [z1 z2 z3]
    [z4 z5 z6]
    [z7 z8 z9]

combined = [x1 x2 x3]
           [y1 y2 y3]
           [z1 z2 z3]
           [x4 x5 x6]
           [...]
           [z7 z8 z9]
The best I can come up with is this:
import numpy as np
x = np.random.rand(6,3)
y = np.random.rand(6,3)
z = np.random.rand(6,3)

combined = np.zeros((9,3))
for rows in range(len(x)):
    combined[0::3] = x[rows,:]
    combined[1::3] = y[rows,:]
    combined[2::3] = z[rows,:]

print(combined)
All this does is write the last value of the input array to every third row in the output array instead of what I wanted. I am not sure if this is even the best way to do this. Any advice would help out.
I just figured out that this works, but if someone knows a higher-performance method, please let me know:
import numpy as np
x = np.random.rand(6,3)
y = np.random.rand(6,3)
z = np.random.rand(6,3)

combined = np.zeros((18,3))
for rows in range(6):
    combined[rows*3,:] = x[rows,:]
    combined[rows*3+1,:] = y[rows,:]
    combined[rows*3+2,:] = z[rows,:]

print(combined)
You can do this using a list comprehension and zip:
combined = np.array([row for row_group in zip(x, y, z) for row in row_group])
Using vectorised operations only:
A = np.vstack((x, y, z))
idx = np.arange(A.shape[0]).reshape(-1, x.shape[0]).T.flatten()
A = A[idx]
Here's a demo:
import numpy as np
x, y, z = np.random.rand(3,3), np.random.rand(3,3), np.random.rand(3,3)
print(x, y, z)
[[ 0.88259564 0.17609363 0.01067734]
[ 0.50299357 0.35075811 0.47230915]
[ 0.751129 0.81839586 0.80554345]]
[[ 0.09469396 0.33848691 0.51550685]
[ 0.38233976 0.05280427 0.37778962]
[ 0.7169351 0.17752571 0.49581777]]
[[ 0.06056544 0.70273453 0.60681583]
[ 0.57830566 0.71375038 0.14446909]
[ 0.23799775 0.03571076 0.26917939]]
A = np.vstack((x, y, z))
idx = np.arange(A.shape[0]).reshape(-1, x.shape[0]).T.flatten()
print(idx) # [0 3 6 1 4 7 2 5 8]
A = A[idx]
print(A)
[[ 0.88259564 0.17609363 0.01067734]
[ 0.09469396 0.33848691 0.51550685]
[ 0.06056544 0.70273453 0.60681583]
[ 0.50299357 0.35075811 0.47230915]
[ 0.38233976 0.05280427 0.37778962]
[ 0.57830566 0.71375038 0.14446909]
[ 0.751129 0.81839586 0.80554345]
[ 0.7169351 0.17752571 0.49581777]
[ 0.23799775 0.03571076 0.26917939]]
I have changed your code a little bit to get the desired output
import numpy as np
x = np.random.rand(6,3)
y = np.random.rand(6,3)
z = np.random.rand(6,3)
combined = np.zeros((18,3))
combined[0::3] = x
combined[1::3] = y
combined[2::3] = z
print(combined)
You had the shape of the combined matrix wrong and there is no real need for the for loop.
This might not be the most pythonic way to do it, but you could:
for block in range(len(combined) // 3):
    for rows in range(len(x)):
        combined[block*3+0::3] = x[rows,:]
        combined[block*3+1::3] = y[rows,:]
        combined[block*3+2::3] = z[rows,:]
A simple numpy solution is to stack the arrays on a new middle axis, and reshape the result to 2d:
In [5]: x = np.arange(9).reshape(3,3)
In [6]: y = np.arange(9).reshape(3,3)+10
In [7]: z = np.arange(9).reshape(3,3)+100
In [8]: np.stack((x,y,z),axis=1).reshape(-1,3)
Out[8]:
array([[ 0, 1, 2],
[ 10, 11, 12],
[100, 101, 102],
[ 3, 4, 5],
[ 13, 14, 15],
[103, 104, 105],
[ 6, 7, 8],
[ 16, 17, 18],
[106, 107, 108]])
It may be easier to see what's happening if we give each dimension a different value; e.g. 2 3x4 arrays:
In [9]: x = np.arange(12).reshape(3,4)
In [10]: y = np.arange(12).reshape(3,4)+10
np.array combines them on a new 1st axis, making a 2x3x4 array. To get the interleaving you want, we can transpose the first 2 dimensions, producing a 3x2x4. Then reshape to a 6x4.
In [13]: np.array((x,y))
Out[13]:
array([[[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]],
[[10, 11, 12, 13],
[14, 15, 16, 17],
[18, 19, 20, 21]]])
In [14]: np.array((x,y)).transpose(1,0,2)
Out[14]:
array([[[ 0, 1, 2, 3],
[10, 11, 12, 13]],
[[ 4, 5, 6, 7],
[14, 15, 16, 17]],
[[ 8, 9, 10, 11],
[18, 19, 20, 21]]])
In [15]: np.array((x,y)).transpose(1,0,2).reshape(-1,4)
Out[15]:
array([[ 0, 1, 2, 3],
[10, 11, 12, 13],
[ 4, 5, 6, 7],
[14, 15, 16, 17],
[ 8, 9, 10, 11],
[18, 19, 20, 21]])
np.vstack produces a 6x4, but with the wrong order. We can't transpose that directly.
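(An aside, my addition: the vstack result can still be rearranged, but only by first reshaping it back to 3d, which is the same transpose trick in disguise. A sketch:)

import numpy as np

x = np.arange(12).reshape(3, 4)
y = np.arange(12).reshape(3, 4) + 10

# View the (6, 4) vstack result as (2, 3, 4), swap the first two axes,
# then flatten back to (6, 4) in interleaved row order.
out = np.vstack((x, y)).reshape(2, 3, 4).transpose(1, 0, 2).reshape(-1, 4)
print(out)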
np.stack with default axis behaves just like np.array. But with axis=1, it creates a 3x2x4, which we can reshape:
In [16]: np.stack((x,y), 1)
Out[16]:
array([[[ 0, 1, 2, 3],
[10, 11, 12, 13]],
[[ 4, 5, 6, 7],
[14, 15, 16, 17]],
[[ 8, 9, 10, 11],
[18, 19, 20, 21]]])
The list zip in the accepted answer is a list version of transpose, creating a list of 3 2-element tuples.
In [17]: list(zip(x,y))
Out[17]:
[(array([0, 1, 2, 3]), array([10, 11, 12, 13])),
(array([4, 5, 6, 7]), array([14, 15, 16, 17])),
(array([ 8, 9, 10, 11]), array([18, 19, 20, 21]))]
np.array(list(zip(x,y))) produces the same thing as the stack, a 3x2x4 array.
As for speed, I suspect the allocate and assign (as in Ash's answer) is fastest:
In [27]: z = np.zeros((6,4), int)
    ...: for i, arr in enumerate((x,y)):
    ...:     z[i::2,:] = arr
    ...:
In [28]: z
Out[28]:
array([[ 0, 1, 2, 3],
[10, 11, 12, 13],
[ 4, 5, 6, 7],
[14, 15, 16, 17],
[ 8, 9, 10, 11],
[18, 19, 20, 21]])
For serious timings, use much larger examples than this.
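Along those lines, a hedged timing sketch (the array sizes and the use of timeit here are my own choices, not from the answers):

import timeit
import numpy as np

x, y, z = (np.random.rand(100_000, 3) for _ in range(3))

def interleave_stack():
    return np.stack((x, y, z), axis=1).reshape(-1, 3)

def interleave_assign():
    out = np.empty((3 * len(x), 3))
    for i, arr in enumerate((x, y, z)):
        out[i::3] = arr
    return out

# Sanity check, then time both on arrays large enough to matter.
assert np.array_equal(interleave_stack(), interleave_assign())
for f in (interleave_stack, interleave_assign):
    print(f.__name__, timeit.timeit(f, number=100))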

python most common pair of indices in 3 x n array

I have a numpy array with shape (3, 600219), which is a list of indices.
i.e.
array([[ 0, 0, 0, ..., 2879, 2879, 2879],
[ 40, 40, 40, ..., 162, 165, 168],
[ 249, 250, 251, ..., 195, 196, 198]])
The first row are time indices, the second and third rows are indices of the coordinates. I am trying to figure out which pair of coordinates most frequently occurred, disregarding the time.
e.g. Was it (49,249) or (40,250)...etc.?
I just used a small sample of your data, but I think you'll get the point:
import numpy as np
from collections import Counter

array = np.array([[   0,    0,    0, 2879, 2879, 2879],
                  [  40,   40,   40,  162,  165,  168],
                  [ 249,  250,  251,  195,  196,  198]])

# Zip together only the second and third rows
only_coords = zip(array[1,:], array[2,:])
Counter(only_coords).most_common()
Produces:
Out[11]:
[((40, 249), 1),
((165, 196), 1),
((162, 195), 1),
((168, 198), 1),
((40, 251), 1),
((40, 250), 1)]
Here's one vectorized approach -
IDs = a[2]*(a[1].max()+1) + a[1]
unq, idx, count = np.unique(IDs, return_index=1, return_counts=1)
out = a[1:, idx[count.argmax()]]
If there could be negative coordinates, use a[2]*(a[1].max()-a[1].min()+1) + a[1] to compute IDs.
Sample run -
In [44]: a
Out[44]:
array([[8, 3, 6, 6, 8, 5, 1, 6, 6, 5],
[5, 2, 1, 1, 5, 1, 5, 1, 1, 4],
[8, 2, 3, 3, 8, 1, 7, 3, 3, 3]])
In [47]: IDs = a[2]*(a[1].max()+1) + a[1]
In [48]: unq, idx, count = np.unique(IDs, return_index=1,return_counts=1)
In [49]: a[1:,idx[count.argmax()]]
Out[49]: array([1, 3])
This might seem a little abstract, but you could try saving each coordinate pair as a single number, e.g. [2,1] becomes 2.1, and putting your data into a list of these numbers. For example, a 2nd row of [1,1,2] and a 3rd row of [2,2,1] would become [1.2, 1.2, 2.1]. You could then use this code:
from collections import Counter
list1 = [1.2, 1.2, 2.1]
data = Counter(list1)
print(data.most_common(1))  # returns the highest-occurring item
which prints the most common number, and how many times it occurs, then you can simply convert the number back to a co-ordinate if you need to use it in your code.
Here is some sample code that does the count:
import numpy as np
import collections

a = np.array([[0, 1, 2, 3], [10, 10, 30, 40], [25, 25, 10, 50]])

# You don't care about time
b = np.transpose(a[1:])

# convert the rows to tuples
c = map(tuple, b)
collections.Counter(c)
The output:
Counter({(10, 25): 2, (30, 10): 1, (40, 50): 1})
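A further option (my addition, not from the original answers): on NumPy 1.13+, np.unique accepts an axis argument, which makes the pair counting a two-liner. A sketch:

import numpy as np

a = np.array([[0, 1, 2, 3], [10, 10, 30, 40], [25, 25, 10, 50]])

# Treat each (row2, row3) column as one row and count the unique rows.
pairs = a[1:].T
uniq, counts = np.unique(pairs, axis=0, return_counts=True)
print(uniq[counts.argmax()])  # [10 25], the most frequent pair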

How can I create an numpy array from two different numpy arrays?

I want to create a numpy array from two different numpy arrays. For example:
Say I have 2 arrays a and b.
a = np.array([1,3,4])
b = np.array([[1,5,51,52],[2,6,61,62],[3,7,71,72],[4,8,81,82],[5,9,91,92]])
I want to loop through each index in array a, find it in the first column of array b, and save that row of b into c, like below:
c = np.array([[1,5,51,52],
[3,7,71,72],
[4,8,81,82]])
I have tried doing:
c = np.zeros(shape=(len(b),4))
for i in b:
    c[i] = a[b[i][:]]
but I get the error "arrays used as indices must be of integer (or boolean) type".
Approach #1
If a is sorted, we can use np.searchsorted, like so -
idx = np.searchsorted(a,b[:,0])
idx[idx==a.size] = 0
out = b[a[idx] == b[:,0]]
Sample run -
In [160]: a
Out[160]: array([1, 3, 4])
In [161]: b
Out[161]:
array([[ 1, 5, 51, 52],
[ 2, 6, 61, 62],
[ 3, 7, 71, 72],
[ 4, 8, 81, 82],
[ 5, 9, 91, 92]])
In [162]: out
Out[162]:
array([[ 1, 5, 51, 52],
[ 3, 7, 71, 72],
[ 4, 8, 81, 82]])
If a is not sorted, we need to use sorter argument with searchsorted.
Approach #2
We can also use np.in1d -
b[np.in1d(b[:,0],a)]
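A note of my own: in modern NumPy, np.isin is the recommended spelling of np.in1d for this kind of membership test. A minimal sketch:

import numpy as np

a = np.array([1, 3, 4])
b = np.array([[1, 5, 51, 52], [2, 6, 61, 62], [3, 7, 71, 72],
              [4, 8, 81, 82], [5, 9, 91, 92]])

# Keep the rows of b whose first column appears in a.
c = b[np.isin(b[:, 0], a)]
print(c)
# [[ 1  5 51 52]
#  [ 3  7 71 72]
#  [ 4  8 81 82]]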

How to reverse sklearn.OneHotEncoder transform to recover original data?

I encoded my categorical data using sklearn.OneHotEncoder and fed them to a random forest classifier. Everything seems to work and I got my predicted output back.
Is there a way to reverse the encoding and convert my output back to its original state?
A good systematic way to figure this out is to start with some test data and work through the sklearn.OneHotEncoder source with it. If you don't much care about how it works and simply want a quick answer, skip to the bottom.
X = np.array([
    [3, 10, 15, 33, 54, 55, 78, 79, 80, 99],
    [5,  1,  3,  7,  8, 12, 15, 19, 20,  8]
]).T
n_values_
Lines 1763-1786 determine the n_values_ parameter. This will be determined automatically if you set n_values='auto' (the default). Alternatively you can specify a maximum value for all features (int) or a maximum value per feature (array). Let's assume that we're using the default. So the following lines execute:
n_samples, n_features = X.shape # 10, 2
n_values = np.max(X, axis=0) + 1 # [100, 21]
self.n_values_ = n_values
feature_indices_
Next the feature_indices_ parameter is calculated.
n_values = np.hstack([[0], n_values]) # [0, 100, 21]
indices = np.cumsum(n_values) # [0, 100, 121]
self.feature_indices_ = indices
So feature_indices_ is merely the cumulative sum of n_values_ with a 0 prepended.
Sparse Matrix Construction
Next, a scipy.sparse.coo_matrix is constructed from the data. It is initialized from three arrays: the sparse data (all ones), the row indices, and the column indices.
column_indices = (X + indices[:-1]).ravel()
# array([ 3, 105, 10, 101, 15, 103, 33, 107, 54, 108, 55, 112, 78, 115, 79, 119, 80, 120, 99, 108])
row_indices = np.repeat(np.arange(n_samples, dtype=np.int32), n_features)
# array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9], dtype=int32)
data = np.ones(n_samples * n_features)
# array([ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
out = sparse.coo_matrix((data, (row_indices, column_indices)),
                        shape=(n_samples, indices[-1]),
                        dtype=self.dtype).tocsr()
# <10x121 sparse matrix of type '<type 'numpy.float64'>' with 20 stored elements in Compressed Sparse Row format>
Note that the coo_matrix is immediately converted to a scipy.sparse.csr_matrix. The coo_matrix is used as an intermediate format because it "facilitates fast conversion among sparse formats."
active_features_
Now, if n_values='auto', the sparse csr matrix is compressed down to only the columns with active features. The sparse csr_matrix is returned if sparse=True, otherwise it is densified before returning.
if self.n_values == 'auto':
    mask = np.array(out.sum(axis=0)).ravel() != 0
    active_features = np.where(mask)[0]  # array([ 3, 10, 15, 33, 54, 55, 78, 79, 80, 99, 101, 103, 105, 107, 108, 112, 115, 119, 120])
    out = out[:, active_features]  # <10x19 sparse matrix of type '<type 'numpy.float64'>' with 20 stored elements in Compressed Sparse Row format>
    self.active_features_ = active_features
return out if self.sparse else out.toarray()
Decoding
Now let's work in reverse. We'd like to know how to recover X given the sparse matrix that is returned along with the OneHotEncoder features detailed above. Let's assume we actually ran the code above by instantiating a new OneHotEncoder and running fit_transform on our data X.
from sklearn import preprocessing
ohc = preprocessing.OneHotEncoder() # all default params
out = ohc.fit_transform(X)
The key insight to solving this problem is understanding the relationship between active_features_ and out.indices. For a csr_matrix, the indices array contains the column numbers for each data point. However, these column numbers are not guaranteed to be sorted. To sort them, we can use the sorted_indices method.
out.indices # array([12, 0, 10, 1, 11, 2, 13, 3, 14, 4, 15, 5, 16, 6, 17, 7, 18, 8, 14, 9], dtype=int32)
out = out.sorted_indices()
out.indices # array([ 0, 12, 1, 10, 2, 11, 3, 13, 4, 14, 5, 15, 6, 16, 7, 17, 8, 18, 9, 14], dtype=int32)
We can see that before sorting, the indices are actually reversed along the rows. In other words, they are ordered with the last column first and the first column last. This is evident from the first two elements: [12, 0]. 0 corresponds to the 3 in the first column of X, since 3 is the minimum element it was assigned to the first active column. 12 corresponds to the 5 in the second column of X. Since the first row occupies 10 distinct columns, the minimum element of the second column (1) gets index 10. The next smallest (3) gets index 11, and the third smallest (5) gets index 12. After sorting, the indices are ordered as we would expect.
Next we look at active_features_:
ohc.active_features_ # array([ 3, 10, 15, 33, 54, 55, 78, 79, 80, 99, 101, 103, 105, 107, 108, 112, 115, 119, 120])
Notice that there are 19 elements, which corresponds to the number of distinct elements in our data (one element, 8, was repeated once). Notice also that these are arranged in order. The features that were in the first column of X are the same, and the features in the second column have simply been summed with 100, which corresponds to ohc.feature_indices_[1].
Looking back at out.indices, we can see that the maximum column number is 18, which is one minus the 19 active features in our encoding. A little thought about the relationship here shows that the column numbers in out.indices are indices into ohc.active_features_. With this, we can decode:
import numpy as np
decode_columns = np.vectorize(lambda col: ohc.active_features_[col])
decoded = decode_columns(out.indices).reshape(X.shape)
This gives us:
array([[ 3, 105],
[ 10, 101],
[ 15, 103],
[ 33, 107],
[ 54, 108],
[ 55, 112],
[ 78, 115],
[ 79, 119],
[ 80, 120],
[ 99, 108]])
And we can get back to the original feature values by subtracting off the offsets from ohc.feature_indices_:
recovered_X = decoded - ohc.feature_indices_[:-1]
array([[ 3, 5],
[10, 1],
[15, 3],
[33, 7],
[54, 8],
[55, 12],
[78, 15],
[79, 19],
[80, 20],
[99, 8]])
Note that you will need to have the original shape of X, which is simply (n_samples, n_features).
TL;DR
Given the sklearn.OneHotEncoder instance called ohc, the encoded data (scipy.sparse.csr_matrix) output from ohc.fit_transform or ohc.transform called out, and the shape of the original data (n_samples, n_feature), recover the original data X with:
recovered_X = (np.array([ohc.active_features_[col] for col in out.sorted_indices().indices])
               .reshape(n_samples, n_features) - ohc.feature_indices_[:-1])
Just compute the dot product of the encoded values with ohe.active_features_. It works for both the sparse and the dense representation. Example:
from sklearn.preprocessing import OneHotEncoder
import numpy as np
orig = np.array([6, 9, 8, 2, 5, 4, 5, 3, 3, 6])
ohe = OneHotEncoder()
encoded = ohe.fit_transform(orig.reshape(-1, 1)) # input needs to be column-wise
decoded = encoded.dot(ohe.active_features_).astype(int)
assert np.allclose(orig, decoded)
The key insight is that the active_features_ attribute of the OHE model represents the original values for each binary column. Thus we can decode the binary-encoded number by simply computing a dot product with active_features_. For each data point there is just a single 1, at the position of the original value.
Use numpy.argmax() with axis = 1.
Example:
ohe_encoded = np.array([[0, 0, 1], [0, 1, 0], [0, 1, 0], [1, 0, 0]])
ohe_encoded
> array([[0, 0, 1],
[0, 1, 0],
[0, 1, 0],
[1, 0, 0]])
np.argmax(ohe_encoded, axis = 1)
> array([2, 1, 1, 0], dtype=int64)
Since version 0.20 of scikit-learn, the active_features_ attribute of the OneHotEncoder class has been deprecated, so I suggest relying on the categories_ attribute instead.
The below function can help you recover the original data from a matrix that has been one-hot encoded:
import itertools

def reverse_one_hot(X, y, encoder):
    reversed_data = [{} for _ in range(len(y))]
    all_categories = list(itertools.chain(*encoder.categories_))
    category_names = ['category_{}'.format(i+1) for i in range(len(encoder.categories_))]
    category_lengths = [len(encoder.categories_[i]) for i in range(len(encoder.categories_))]
    for row_index, feature_index in zip(*X.nonzero()):
        category_value = all_categories[feature_index]
        category_name = get_category_name(feature_index, category_names, category_lengths)
        reversed_data[row_index][category_name] = category_value
        reversed_data[row_index]['target'] = y[row_index]
    return reversed_data

def get_category_name(index, names, lengths):
    counter = 0
    for i in range(len(lengths)):
        counter += lengths[i]
        if index < counter:
            return names[i]
    raise ValueError('The index is higher than the number of categorical values')
To test it, I have created a small data set that includes the ratings that users have given to movies:
data = [
    {'user_id': 'John', 'item_id': 'The Matrix', 'rating': 5},
    {'user_id': 'John', 'item_id': 'Titanic', 'rating': 1},
    {'user_id': 'John', 'item_id': 'Forrest Gump', 'rating': 2},
    {'user_id': 'John', 'item_id': 'Wall-E', 'rating': 2},
    {'user_id': 'Lucy', 'item_id': 'The Matrix', 'rating': 5},
    {'user_id': 'Lucy', 'item_id': 'Titanic', 'rating': 1},
    {'user_id': 'Lucy', 'item_id': 'Die Hard', 'rating': 5},
    {'user_id': 'Lucy', 'item_id': 'Forrest Gump', 'rating': 2},
    {'user_id': 'Lucy', 'item_id': 'Wall-E', 'rating': 2},
    {'user_id': 'Eric', 'item_id': 'The Matrix', 'rating': 2},
    {'user_id': 'Eric', 'item_id': 'Die Hard', 'rating': 3},
    {'user_id': 'Eric', 'item_id': 'Forrest Gump', 'rating': 5},
    {'user_id': 'Eric', 'item_id': 'Wall-E', 'rating': 4},
    {'user_id': 'Diane', 'item_id': 'The Matrix', 'rating': 4},
    {'user_id': 'Diane', 'item_id': 'Titanic', 'rating': 3},
    {'user_id': 'Diane', 'item_id': 'Die Hard', 'rating': 5},
    {'user_id': 'Diane', 'item_id': 'Forrest Gump', 'rating': 3},
]
import pandas
data_frame = pandas.DataFrame(data)
data_frame = data_frame[['user_id', 'item_id', 'rating']]
If we are building a prediction model, we have to remember to delete the dependent variable (in this case the rating) from the DataFrame before we encode it.
ratings = data_frame['rating']
data_frame.drop(columns=['rating'], inplace=True)
Then we proceed with the encoding:
from sklearn.preprocessing import OneHotEncoder
ohc = OneHotEncoder()
encoded_data = ohc.fit_transform(data_frame)
print(encoded_data)
Which results in:
(0, 2) 1.0
(0, 6) 1.0
(1, 2) 1.0
(1, 7) 1.0
(2, 2) 1.0
(2, 5) 1.0
(3, 2) 1.0
(3, 8) 1.0
(4, 3) 1.0
(4, 6) 1.0
(5, 3) 1.0
(5, 7) 1.0
(6, 3) 1.0
(6, 4) 1.0
(7, 3) 1.0
(7, 5) 1.0
(8, 3) 1.0
(8, 8) 1.0
(9, 1) 1.0
(9, 6) 1.0
(10, 1) 1.0
(10, 4) 1.0
(11, 1) 1.0
(11, 5) 1.0
(12, 1) 1.0
(12, 8) 1.0
(13, 0) 1.0
(13, 6) 1.0
(14, 0) 1.0
(14, 7) 1.0
(15, 0) 1.0
(15, 4) 1.0
(16, 0) 1.0
(16, 5) 1.0
After encoding, we can reverse it using the reverse_one_hot function we defined above, like this:
reverse_data = reverse_one_hot(encoded_data, ratings, ohc)
print(pandas.DataFrame(reverse_data))
Which gives us:
category_1 category_2 target
0 John The Matrix 5
1 John Titanic 1
2 John Forrest Gump 2
3 John Wall-E 2
4 Lucy The Matrix 5
5 Lucy Titanic 1
6 Lucy Die Hard 5
7 Lucy Forrest Gump 2
8 Lucy Wall-E 2
9 Eric The Matrix 2
10 Eric Die Hard 3
11 Eric Forrest Gump 5
12 Eric Wall-E 4
13 Diane The Matrix 4
14 Diane Titanic 3
15 Diane Die Hard 5
16 Diane Forrest Gump 3
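Also worth noting (my addition, not part of the original answer): scikit-learn 0.20+ ships a built-in OneHotEncoder.inverse_transform, which covers the common case directly. A self-contained sketch:

import numpy as np
from sklearn.preprocessing import OneHotEncoder

X = np.array([['John', 'The Matrix'],
              ['Lucy', 'Die Hard']])
ohc = OneHotEncoder()
encoded = ohc.fit_transform(X)

# inverse_transform maps the one-hot matrix straight back to the
# original categories, row for row.
print(ohc.inverse_transform(encoded))
# [['John' 'The Matrix']
#  ['Lucy' 'Die Hard']]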
If the feature values are dense integers, like [1, 2, 4, 5, 6] with a few numbers missing, we can map them to corresponding positions:
>>> import numpy as np
>>> from scipy import sparse
>>> def _sparse_binary(y):
...     # one-hot codes of y with a scipy.sparse matrix
...     row = np.arange(len(y))
...     col = y - y.min()
...     data = np.ones(len(y))
...     return sparse.csr_matrix((data, (row, col)))
...
>>> y = np.random.randint(-2, 2, 8).reshape([4, 2])
>>> y
array([[ 0, -2],
       [-2,  1],
       [ 1,  0],
       [ 0, -2]])
>>> yc = [_sparse_binary(y[:, i]) for i in range(2)]
>>> for i in yc: print(i.todense())
...
[[ 0.  0.  1.  0.]
 [ 1.  0.  0.  0.]
 [ 0.  0.  0.  1.]
 [ 0.  0.  1.  0.]]
[[ 1.  0.  0.  0.]
 [ 0.  0.  0.  1.]
 [ 0.  0.  1.  0.]
 [ 1.  0.  0.  0.]]
>>> [i.shape for i in yc]
[(4, 4), (4, 4)]
This is a simple compromise method, but it works and is easy to reverse with argmax(), e.g.:
>>> np.argmax(yc[0].todense(), 1) + y.min(0)[0]
matrix([[ 0],
        [-2],
        [ 1],
        [ 0]])
How to one-hot encode
See https://stackoverflow.com/a/42874726/562769
import numpy as np

nb_classes = 6
orig_data = [[2, 3, 4, 0]]

def indices_to_one_hot(data, nb_classes):
    """Convert an iterable of indices to one-hot encoded labels."""
    targets = np.array(data).reshape(-1)
    return np.eye(nb_classes)[targets]
How to reverse
def one_hot_to_indices(data):
    indices = []
    for el in data:
        indices.append(list(el).index(1))
    return indices

hot = indices_to_one_hot(orig_data, nb_classes)
indices = one_hot_to_indices(hot)
print(orig_data)
print(indices)
gives:
[[2, 3, 4, 0]]
[2, 3, 4, 0]
The short answer is "no". The encoder takes your categorical data and automagically transforms it to a reasonable set of numbers.
The longer answer is "not automatically". If you provide an explicit mapping using the n_values parameter, though, you can probably implement your own decoding on the other side. See the documentation for some hints on how that might be done.
That said, this is a fairly strange question. You may instead want to use a DictVectorizer.
Pandas approach:
To convert categorical variables to binary (dummy) variables, pd.get_dummies does that; to convert them back, you can find the index of the column containing the 1 using pd.Series.idxmax(). Then you can map that back to a list (indexed according to the original data) or to a dictionary.
import pandas as pd
import numpy as np
col = np.random.randint(1,5,20)
df = pd.DataFrame({'A': col})
df.head()
A
0 2
1 2
2 1
3 1
4 3
df_dum = pd.get_dummies(df['A'])
df_dum.head()
1 2 3 4
0 0 1 0 0
1 0 1 0 0
2 1 0 0 0
3 1 0 0 0
4 0 0 1 0
df_n = df_dum.apply(lambda x: x.idxmax(), axis = 1)
df_n.head()
0 2
1 2
2 1
3 1
4 3
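A side note of my own: pandas 1.5+ also provides pd.from_dummies, a built-in inverse of pd.get_dummies. A minimal sketch:

import pandas as pd

df = pd.DataFrame({'A': ['x', 'x', 'y', 'y', 'z']})
dummies = pd.get_dummies(df['A'], prefix='A')

# from_dummies (pandas >= 1.5) inverts get_dummies when given the
# separator that was used to build the dummy column names.
recovered = pd.from_dummies(dummies, sep='_')
print(recovered)
#    A
# 0  x
# 1  x
# 2  y
# 3  y
# 4  z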
