numpy wrong shape of imported data and separating the y value - python

I have a large csv file, ~90k rows and 355 columns. The first 354 columns indicate the presence of different words (a 1 or a 0), and the last column holds a numerical value.
E.g.:
table, box, cups, glasses, total
1,0,0,1,30
0,1,1,1,28
1,1,0,1,55
When I use:
d = np.recfromcsv('clean.csv', dtype=None, delimiter=',', names=True)
d.shape
# I get: (89460,)
So my questions are:
How do I get a 2D array/matrix? Does it matter?
How can I separate the 'total' column so I can create train, cross-validation, and test sets and train a model?

np.recfromcsv returns a 1-dimensional record array.
When you have a structured array, you can access the columns by field title:
d['total']
returns the totals column.
You can access rows using integer indexing:
d[0]
returns the first row, for example.
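Putting those together, a minimal sketch of pulling things out of the record array (field names taken from the example header; recfromcsv lower-cases them by default):
import numpy as np

d = np.recfromcsv('clean.csv', dtype=None, delimiter=',', names=True)
y = d['total']                 # the target column as a 1D array
first_row = d[0]               # a single record holding all 355 fields
word_fields = [n for n in d.dtype.names if n != 'total']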
If you wish to select all the columns except the last, you'd be better off with a plain 2D NumPy array. With a plain NumPy array (as opposed to a structured array) you can select all the columns except the last one using integer indexing.
You could use np.genfromtxt to load the data into a 2D array:
import numpy as np
d = np.genfromtxt('data', dtype=None, delimiter=',', skip_header=1)
print(d.shape)
# (3, 5)
print(d)
# [[ 1  0  0  1 30]
#  [ 0  1  1  1 28]
#  [ 1  1  0  1 55]]
This selects the last column:
print(d[:,-1])
# [30 28 55]
This selects everything but the last column:
print(d[:,:-1])
# [[1 0 0 1]
#  [0 1 1 1]
#  [1 1 0 1]]
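To get from there to the train, cross-validation, and test sets the question asks about, a minimal sketch using scikit-learn's train_test_split (assuming a 60/20/20 split is wanted):
from sklearn.model_selection import train_test_split

X = d[:, :-1]   # the word-presence columns
y = d[:, -1]    # the 'total' column

# first split off 40%, then halve it into cross-validation and test sets
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_cv, X_test, y_cv, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)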

OK, after much googling and time-wasting, here is what anyone who just wants NumPy out of the way so they can read a CSV and get on with scikit-learn needs to do:
# Say your csv file has 10 columns: 1-9 are features and 10
# is the Y you're trying to predict.
cols = range(0, 9)   # column indices 0-8, i.e. the 9 feature columns
X = np.loadtxt('data.csv', delimiter=',', dtype=float, usecols=cols, ndmin=2, skiprows=1)
Y = np.loadtxt('data.csv', delimiter=',', dtype=float, usecols=(9,), ndmin=2, skiprows=1)
# note how the usecols argument takes a sequence:
# even though I only want 1 column for Y, I have to give it a sequence.
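Alternatively, a sketch that reads the file only once and slices afterwards (same hypothetical 10-column layout as above):
data = np.loadtxt('data.csv', delimiter=',', dtype=float, skiprows=1)
X, Y = data[:, :9], data[:, 9:]   # columns 0-8 are features, column 9 is Y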

Related

Multiplying each row of a pandas dataframe by another row dataframe

So I want to multiply each row of a dataframe with a multiplier vector, and I am managing, but it looks ugly. Can this be improved?
import pandas as pd
import numpy as np
# original data
df_a = pd.DataFrame([[1,2,3],[4,5,6]])
print(df_a, '\n')
# multiplier vector
df_b = pd.DataFrame([2,2,1])
print(df_b, '\n')
# multiply by a list - it works
df_c = df_a*[2,2,1]
print(df_c, '\n')
# multiply by the dataframe - it works
df_c = df_a*df_b.T.to_numpy()
print(df_c, '\n')
"It looks ugly" is subjective, that said, if you want to multiply all rows of a dataframe with something else you either need:
a dataframe of a compatible shape (and compatible indices, as those are aligned before operations in pandas, which is why df_a*df_b.T would only work for the common index: 0)
a 1D vector, which in pandas is a Series
Using a Series:
df_a*df_b[0]
output:
   0   1  2
0  2   4  3
1  8  10  6
Of course, it's better to define a Series directly if you don't really need a 2D container:
s = pd.Series([2,2,1])
df_a*s
Just for the beauty, you can use Einstein summation:
>>> np.einsum('ij,ji->ij', df_a, df_b)
array([[ 2,  4,  3],
       [ 8, 10,  6]])
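If you'd rather be explicit about which axis the Series is aligned with, DataFrame.mul takes an axis argument; a small self-contained sketch:
import pandas as pd

df_a = pd.DataFrame([[1, 2, 3], [4, 5, 6]])
s = pd.Series([2, 2, 1])

# axis=1 aligns s with the columns of df_a, scaling each row element-wise
print(df_a.mul(s, axis=1))   # same values as df_a * s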

Filter only certain words from sklearn CountVectorizer sparse matrix

I have a pandas Series full of text. Using the CountVectorizer from the sklearn package, I have computed the sparse matrix and identified the top words. Now I want to filter my sparse matrix to keep only those top words.
The original data contains more than 7000 rows and more than 75000 words, so I am creating a small sample here:
from sklearn.feature_extraction.text import CountVectorizer
import pandas as pd
words = pd.Series(['This is first row of the text column',
                   'This is second row of the text column',
                   'This is third row of the text column',
                   'This is fourth row of the text column',
                   'This is fifth row of the text column'])
count_vec = CountVectorizer(stop_words='english')
sparse_matrix = count_vec.fit_transform(words)
I have created the sparse matrix for all the words in that column. Here, just to print my sparse matrix, I am converting it to an array using the .toarray() method.
print count_vec.get_feature_names()
print sparse_matrix.toarray()
[u'column', u'fifth', u'fourth', u'row', u'second', u'text']
[[1 0 0 1 0 1]
 [1 0 0 1 1 1]
 [1 0 0 1 0 1]
 [1 0 1 1 0 1]
 [1 1 0 1 0 1]]
Now I am looking for frequently appearing words using the following
# Get frequency count of all features
features_count = sparse_matrix.sum(axis=0).tolist()[0]
features_names = count_vec.get_feature_names()
features = pd.DataFrame(zip(features_names, features_count),
                        columns=['features', 'count']
                        ).sort_values(by=['count'], ascending=False)
  features  count
0   column      5
3      row      5
5     text      5
1    fifth      1
2   fourth      1
4   second      1
From the above result we know that the frequently appearing words are column, row & text. Now I want to filter my sparse matrix for only these words. I don't want to convert my sparse matrix to an array and then filter, because with my original data I get a memory error, since the number of words is quite high.
The only way I was able to get the filtered sparse matrix was to repeat the steps for those specific words using the vocabulary argument, like this:
countvec_subset = CountVectorizer(vocabulary= ['column', 'text', 'row'])
Instead I am looking for a better solution, where I can filter the sparse matrix directly for those words, instead of creating it again from scratch.
You can slice the sparse matrix directly; you just need to derive the column indices for the slice, as in sparse_matrix[:, columns]:
In [56]: feature_count = sparse_matrix.sum(axis=0)
In [57]: columns = tuple(np.where(feature_count == feature_count.max())[1])
In [58]: columns
Out[58]: (0, 3, 5)
In [59]: sparse_matrix[:, columns].toarray()
Out[59]:
array([[1, 1, 1],
       [1, 1, 1],
       [1, 1, 1],
       [1, 1, 1],
       [1, 1, 1]], dtype=int64)
In [60]: type(sparse_matrix[:, columns])
Out[60]: scipy.sparse.csr.csr_matrix
In [71]: np.array(features_names)[list(columns)]
Out[71]:
array([u'column', u'row', u'text'],
      dtype='<U6')
The sliced subset is still a scipy.sparse.csr.csr_matrix
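As an aside, if you want to slice by specific words rather than by frequency, the fitted vectorizer's vocabulary_ dict maps each word to its column index, so the column list can be built without refitting; a minimal sketch:
wanted = ['column', 'row', 'text']
cols = [count_vec.vocabulary_[w] for w in wanted]
subset = sparse_matrix[:, cols]   # still a scipy sparse matrix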

Transform Pandas DataFrame with n-level hierarchical index into n-D Numpy array

Question
Is there a good way to transform a DataFrame with an n-level index into an n-D Numpy array (a.k.a n-tensor)?
Example
Suppose I set up a DataFrame like
from pandas import DataFrame, MultiIndex
index = range(2), range(3)
value = range(2 * 3)
frame = DataFrame(value, columns=['value'],
                  index=MultiIndex.from_product(index)).drop((1, 0))
print(frame)
which outputs
     value
0 0      0
  1      1
  2      2
1 1      4
  2      5
The index is a 2-level hierarchical index. I can extract a 2-D Numpy array from the data using
print(frame.unstack().values)
which outputs
[[  0.   1.   2.]
 [ nan   4.   5.]]
How does this generalize to an n-level index?
Playing with unstack(), it seems that it can only be used to massage the 2-D shape of the DataFrame, but not to add an axis.
I cannot use e.g. frame.values.reshape(x, y, z), since this would require that the frame contains exactly x * y * z rows, which cannot be guaranteed. This is what I tried to demonstrate by drop()ing a row in the above example.
Any suggestions are highly appreciated.
Edit. This approach is much more elegant (and two orders of magnitude faster) than the one I gave below.
import numpy as np

# create an empty array of NaN of the right dimensions
shape = [len(level) for level in frame.index.levels]
arr = np.full(shape, np.nan)
# fill it using NumPy's advanced indexing
arr[tuple(frame.index.codes)] = frame.values.flat
# ...or in Pandas < 0.24.0, use
# arr[tuple(frame.index.labels)] = frame.values.flat
Original solution. Given a setup similar to above, but in 3-D,
from pandas import DataFrame, MultiIndex
from itertools import product

index = range(2), range(2), range(2)
value = range(2 * 2 * 2)
frame = DataFrame(value, columns=['value'],
                  index=MultiIndex.from_product(index)).drop((1, 0, 1))
print(frame)
we have
       value
0 0 0      0
    1      1
  1 0      2
    1      3
1 0 0      4
  1 0      6
    1      7
Now, we proceed using the reshape() route, but with some preprocessing to ensure that the length along each dimension will be consistent.
First, reindex the data frame with the full Cartesian product of all dimensions. NaN values will be inserted as needed. This operation can be slow and can consume a lot of memory, depending on the number of dimensions and the size of the data frame.
levels = map(tuple, frame.index.levels)
index = list(product(*levels))
frame = frame.reindex(index)
print(frame)
which outputs
       value
0 0 0      0
    1      1
  1 0      2
    1      3
1 0 0      4
    1    NaN
  1 0      6
    1      7
Now, reshape() will work as intended.
shape = [len(level) for level in frame.index.levels]
print(frame.values.reshape(shape))
which outputs
[[[  0.   1.]
  [  2.   3.]]

 [[  4.  nan]
  [  6.   7.]]]
The (rather ugly) one-liner is
frame.reindex(list(product(*map(tuple, frame.index.levels)))).values\
     .reshape([len(level) for level in frame.index.levels])
This can be done quite nicely using the Python xarray package which can be found here: http://xarray.pydata.org/en/stable/. It has great integration with Pandas and is quite intuitive once you get to grips with it.
If you have a multiindex series you can call the built-in method multiindex_series.to_xarray() (https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_xarray.html). This will generate a DataArray object, which is essentially a name-indexed numpy array, using the index values and names as coordinates. Following this you can call .values on the DataArray object to get the underlying numpy array.
If you need your tensor to conform to a set of keys in a specific order, you can also call .reindex(index_name = index_values_in_order) (http://xarray.pydata.org/en/stable/generated/xarray.DataArray.reindex.html) on the DataArray. This can be extremely useful and makes working with the newly generated tensor much easier!
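A minimal sketch of that route on the 3-D example above (assuming xarray is installed):
# frame['value'] is a Series with a 3-level MultiIndex; to_xarray()
# turns it into a DataArray, filling missing index combinations with NaN
arr = frame['value'].to_xarray().values
print(arr.shape)   # (2, 2, 2)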

Perform function on multiple columns in python

I have a data array of 30 trials (columns), each of 256 data points (rows), and would like to run a wavelet transform (which requires a 1D array) on each column, with the eventual aim of obtaining the mean coefficients of the 30 trials.
Can someone point me in the right direction please?
If you have a multidimensional numpy array then you can use a for loop:
import numpy as np
A = np.array([[1, 2, 3], [4, 5, 6]])
# A is the matrix: 1 2 3
#                  4 5 6
for col in A.transpose():
    print("Column:", col)
    # Perform your wavelet transform here; you can save the
    # results to another multidimensional array.
This gives you access to each column as a 1D array.
Output:
Column: [1 4]
Column: [2 5]
Column: [3 6]
If you want to access the rows rather than the columns then loop through A rather than A.transpose().
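For the original 256-point-by-30-trial setup, a sketch of the full pipeline, with a placeholder standing in for the actual wavelet transform (the real function, e.g. from PyWavelets, is an assumption here):
import numpy as np

def wavelet_transform(signal):
    # placeholder: substitute your real 1D wavelet transform here; it
    # must return coefficient arrays of equal length for every trial
    return signal

data = np.random.rand(256, 30)                            # 256 points x 30 trials
coeffs = np.array([wavelet_transform(col) for col in data.T])
mean_coeffs = coeffs.mean(axis=0)                         # average over the 30 trials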

Inflating a 1D array into a 2D array in numpy

Say I have a 1D array:
import numpy as np
my_array = np.arange(0,10)
my_array.shape
(10,)
In Pandas I would like to create a DataFrame with only one row and 10 columns using this array. For example:
import pandas as pd
import random, string
# Random list of characters to be used as columns
cols = [random.choice(string.ascii_uppercase) for x in range(10)]
But when I try:
pd.DataFrame(my_array, columns = cols)
I get:
ValueError: Shape of passed values is (1,10), indices imply (10,10)
I presume this is because Pandas expects a 2D array, and I have a (flat) 1D array. Is there a way to inflate my 1D array into a 2D array, or have Pandas use a 1D array in the creation of the dataframe?
Note: I am using the latest stable version of Pandas (0.11.0)
Pandas treats your 1D array as a single column of 10 values, while your cols list of 10 labels implies 10 columns: hence the shape mismatch in the error message. Reshape the array into a single row instead.
Try:
my_array = np.arange(10).reshape(1,10)
cols = [random.choice(string.ascii_uppercase) for x in range(10)]
pd.DataFrame(my_array, columns=cols)
Which results in:
   F  H  L  N  M  X  B  R  S  N
0  0  1  2  3  4  5  6  7  8  9
Either of these should do it:
my_array2 = my_array[None]  # same as my_array2 = my_array[np.newaxis]
or
my_array2 = my_array.reshape((1,10))
A single-row, many-columned DataFrame is unusual. A more natural, idiomatic choice would be a Series indexed by what you call cols:
pd.Series(my_array, index=cols)
But, to answer your question, the DataFrame constructor is assuming that my_array is a column of 10 data points. Try DataFrame(my_array.reshape((1, 10)), columns=cols). That works for me.
By using one of the alternate DataFrame constructors it is possible to create a DataFrame without needing to reshape my_array.
import numpy as np
import pandas as pd
import random, string
my_array = np.arange(0,10)
cols = [random.choice(string.ascii_uppercase) for x in range(10)]
pd.DataFrame.from_records([my_array], columns=cols)
Out[22]:
   H  H  P  Q  C  A  G  N  T  W
0  0  1  2  3  4  5  6  7  8  9
