Pandas get_dummies generates multiple columns for the same feature - python

I'm using a pandas series and trying to convert it to one hot encoding. I'm using the describe method in order to check how many unique categories the series has. The output is:
input['pattern'].describe(include='all')
count 9725
unique 7
top 1
freq 4580
Name: pattern, dtype: object
When I'm trying:
x = pd.get_dummies(input['pattern'])
x.describe(include= 'all')
I get 18 classes, 12 of which are completely zeros. How did get_dummies produce classes that did not occur even once in the input?

From a discussion in the comments, it was deduced that your column contained a mixture of strings and integers.
For example,
s = pd.Series(['0', 0, '0', '6', 6, '6', '3', '3'])
s
0 0
1 0
2 0
3 6
4 6
5 6
6 3
7 3
dtype: object
Now, calling pd.get_dummies results in multiple columns for the same feature.
pd.get_dummies(s)
0 6 0 3 6
0 0 0 1 0 0
1 1 0 0 0 0
2 0 0 1 0 0
3 0 0 0 0 1
4 0 1 0 0 0
5 0 0 0 0 1
6 0 0 0 1 0
7 0 0 0 1 0
The fix is to ensure that all elements are of the same type. I'd recommend, for this case, converting to str.
s.astype(str).str.get_dummies()
0 3 6
0 1 0 0
1 1 0 0
2 1 0 0
3 0 0 1
4 0 0 1
5 0 0 1
6 0 1 0
7 0 1 0
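Equivalently, you can do the cast before calling pd.get_dummies. A minimal sketch, assuming your column is named 'pattern' as in the question:
x = pd.get_dummies(input['pattern'].astype(str))  # cast to str first so 0 and '0' map to the same column
After the cast there is one dummy column per distinct category, matching the 7 uniques that describe reported.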

Related

How to separate entries based on rows and columns in pandas dataframe

I have a dataframe that looks like this:
'0' '1' '2'
0 5 4 0
1 3 0 0
2 1 0 2
Where the column names ('0', '1', '2', ...) represent user ids, the index represents movie ids, and each entry denotes the rating given by the user to that movie.
I would like to make a new dataframe, based on the previous one, that is like this:
user_id movie_id rating
0 0 0 5
1 0 1 3
2 0 2 1
3 1 0 4
4 1 1 0
5 1 2 0
6 2 0 0
7 2 1 0
8 2 2 2
I am new to pandas and was wondering how to do this without iterating through all the entries.
You can get it with stack() and then reset_index(). Note that after stack() the first index level is the original index (movie_id here) and the second level is the column name (user_id), so stack the transpose to get user_id first:
df = df.T.stack().reset_index()
df.columns = ['user_id', 'movie_id', 'rating']
print(df)
user_id movie_id rating
0 0 0 5
1 0 1 3
2 0 2 1
3 1 0 4
4 1 1 0
5 1 2 0
6 2 0 0
7 2 1 0
8 2 2 2
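As an alternative sketch (not from the original answer), melt can produce the same long format; the names 'movie_id', 'user_id' and 'rating' are just the ones requested in the question:
out = (df.rename_axis('movie_id')   # name the index so it survives reset_index
         .reset_index()
         .melt(id_vars='movie_id', var_name='user_id', value_name='rating')
         [['user_id', 'movie_id', 'rating']])
This yields the same nine rows, ordered by user_id.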

Group one column of dataframe by variable index

I have a dataframe of PartialRoutes (which together make up full routes) and a treatment variable, and I am trying to reduce the dataframe to full routes by grouping the partial routes together while keeping the treatment variable.
To make this more clear, the df looks like
PartialRoute Treatment
0 1
1 0
0 0
0 0
1 0
2 0
3 0
0 0
1 1
2 0
where every 0 in 'PartialRoute' starts a new group, which means I always want to group all values until a new route starts (i.e. until the next 0).
So in this example there are 4 groups:
PartialRoute Treatment
0 1
1 0
-----------------
0 0
-----------------
0 0
1 0
2 0
3 0
-----------------
0 0
1 1
2 0
-----------------
and the result should look like
Route Treatment
0 1
1 0
2 0
3 1
Is there an elegant way to solve this?
Create groups by comparing with Series.eq and taking the cumulative sum with Series.cumsum, then aggregate per group, e.g. with sum or max:
df1 = df.groupby(df['PartialRoute'].eq(0).cumsum())['Treatment'].sum().reset_index()
print (df1)
PartialRoute Treatment
0 1 1
1 2 0
2 3 0
3 4 1
Detail:
print (df['PartialRoute'].eq(0).cumsum())
0 1
1 1
2 2
3 3
4 3
5 3
6 3
7 4
8 4
9 4
Name: PartialRoute, dtype: int32
If the first value of the DataFrame is not 0, you get different groups, starting at 0:
print (df)
PartialRoute Treatment
0 1 1
1 1 0
2 0 0
3 0 0
4 1 0
5 2 0
6 3 0
7 0 0
8 1 1
9 2 0
print (df['PartialRoute'].eq(0).cumsum())
0 0
1 0
2 1
3 2
4 2
5 2
6 2
7 3
8 3
9 3
Name: PartialRoute, dtype: int32
df1 = df.groupby(df['PartialRoute'].eq(0).cumsum())['Treatment'].sum().reset_index()
print (df1)
PartialRoute Treatment
0 0 1
1 1 0
2 2 0
3 3 1
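If you want the output to use the 'Route' naming from the question and start numbering at 0, a small follow-up sketch (using max instead of sum, which is equivalent here because Treatment is 0/1):
groups = df['PartialRoute'].eq(0).cumsum()
df1 = (df.groupby(groups)['Treatment'].max()
         .rename_axis('Route')
         .reset_index())
df1['Route'] -= df1['Route'].min()   # groups start at 1 (or 0), shift so numbering begins at 0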

Create new column where two columns from two different data frames are the same

I have two dataframes.
1:
pid cluster
7993355 0 0
8180238 0 5
8174589 0 7
8168267 0 10
8264548 0 10
8252159 0 0
8388741 0 6
8346358 0 2
8194226 0 8
8187866 0 3
8133728 0 1
8215624 0 6
8124250 0 0
8382996 0 5
8151852 0 0
8130044 0 2
8017035 0 5
8108438 0 0
8245152 0 1
8047538 0 3
8070691 0 7
8344660 0 5
8148647 0 6
8157608 0 10
8352127 0 8
2:
pid cluster count
0 0 0 8
1 0 1 2
2 0 2 3
3 0 3 2
4 0 4 1
5 0 5 5
6 0 6 4
7 0 7 3
8 0 8 4
9 0 10 3
My goal is to join these two dataframes where both pid and cluster are the same. For example, if pid and cluster are both 0, I would like the resulting dataframe to have the value 8 for count.
I would like to do this automatically.
I have tried using a function: train['count'] = np.where(((sample['pid'] == train['pid']) & (sample['cluster'] == train['cluster'])), sample['count'], 0), but it doesn't work.
pd.merge etc. will not work as the two frames have different dimensions; I have only provided a small snippet of the dataframes.
Any help would be appreciated!!
Try this
df2[df2['pid'].isin(pd.unique(df1['pid'])) & df2['cluster'].isin(pd.unique(df1['cluster']))]
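If the actual goal is to attach count to the first dataframe row by row, a plain merge on both keys may be simpler; a sketch, assuming the first frame is df1 (pid, cluster) and the second is df2 (pid, cluster, count):
merged = df1.merge(df2[['pid', 'cluster', 'count']], on=['pid', 'cluster'], how='left')
Each row of df1 then carries the count of its (pid, cluster) pair; differing numbers of rows are not a problem for merge.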

Convert the last non-zero value to 0 for each row in a pandas DataFrame

I'm trying to modify my data frame so that the last non-zero value of a label encoded feature is converted to 0. For example, I have this data frame, with the top row being the column labels and the first column being the index:
df
1 2 3 4 5 6 7 8 9 10
0 0 1 0 0 0 0 0 0 1 1
1 0 0 0 1 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0 1 0
Columns 1-10 are the ones that have been encoded. What I want to convert this data frame to, without changing anything else is:
1 2 3 4 5 6 7 8 9 10
0 0 1 0 0 0 0 0 0 1 0
1 0 0 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0 0 0
So the last non-zero value in each row should be converted to 0. I was thinking of using the last_valid_index method, but that would take in the other remaining columns and change those as well, which I don't want. Any help is appreciated.
You can use cumsum to build a boolean mask, and set to zero.
v = df.cumsum(axis=1)
df[v.lt(v.max(axis=1), axis=0)].fillna(0, downcast='infer')
1 2 3 4 5 6 7 8 9 10
0 0 1 0 0 0 0 0 0 1 0
1 0 0 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0 0 0
Another similar option is reversing the columns before calling cumsum; this lets you do it in a single line.
df[~df.iloc[:, ::-1].cumsum(1).le(1)].fillna(0, downcast='infer')
1 2 3 4 5 6 7 8 9 10
0 0 1 0 0 0 0 0 0 1 0
1 0 0 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0 0 0
If you have more columns, just apply these operations to the slice, then assign back.
u = df.iloc[:, :10]
df[u.columns] = u[~u.iloc[:, ::-1].cumsum(1).le(1)].fillna(0, downcast='infer')
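A label-based alternative sketch (not from the original answer): find the last non-zero column per row with idxmax on the reversed columns and zero it out in place:
rev = df.iloc[:, ::-1]                    # reverse the column order
last_nonzero = rev.ne(0).idxmax(axis=1)   # first non-zero in reversed order = last non-zero overall
has_nonzero = df.ne(0).any(axis=1)        # skip rows that are all zeros
for row, col in last_nonzero[has_nonzero].items():
    df.at[row, col] = 0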

Efficiently finding (part of) pandas rows with unique values

Given a pandas dataframe with a row per individual/record. A row includes a property value and its evolution across time (0 to N).
A schedule includes the estimated values of a variable 'property' for a number of entities from day 1 to day 10 in the following example.
I want to filter entities whose property keeps a single value over a given period, and get those values.
csv=',property,1,2,3,4,5,6,7,8,9,10\n0,100011,0,0,0,0,3,3,3,3,3,0\n1,100012,0,0,0,0,2,2,2,8,8,0\n2, \
100012,0,0,0,0,2,2,2,2,2,0\n3,100012,0,0,0,0,0,0,0,0,0,0\n4,100011,0,0,0,0,2,2,2,2,2,0\n5, \
180011,0,0,0,0,2,2,2,2,2,0\n6,110012,0,0,0,0,0,0,0,0,0,0\n7,110011,0,0,0,0,3,3,3,3,3,0\n8, \
110012,0,0,0,0,3,3,3,3,3,0\n9,110013,0,0,0,0,0,0,0,0,0,0\n10,100011,0,0,0,0,3,3,3,3,4,0'
from StringIO import StringIO
import numpy as np
import pandas as pd
schedule = pd.read_csv(StringIO(csv), index_col=0)
print schedule
property 1 2 3 4 5 6 7 8 9 10
0 100011 0 0 0 0 3 3 3 3 3 0
1 100012 0 0 0 0 2 2 2 8 8 0
2 100012 0 0 0 0 2 2 2 2 2 0
3 100012 0 0 0 0 0 0 0 0 0 0
4 100011 0 0 0 0 2 2 2 2 2 0
5 180011 0 0 0 0 2 2 2 2 2 0
6 110012 0 0 0 0 0 0 0 0 0 0
7 110011 0 0 0 0 3 3 3 3 3 0
8 110012 0 0 0 0 3 3 3 3 3 0
9 110013 0 0 0 0 0 0 0 0 0 0
10 100011 0 0 0 0 3 3 3 3 4 0
I want to find records/individuals for whom property has not changed during a given period, and the corresponding unique values.
Here is what I came up with. I want to locate individuals with property in [100011, 100012, 1100012] between days 7 and 10:
props = [100011, 100012, 1100012]
begin = 7
end = 10
res = schedule['property'].isin(props)
df = schedule.ix[res, begin:end]
print "df \n%s " %df
We have :
df
7 8 9
0 3 3 3
1 2 8 8
2 2 2 2
3 0 0 0
4 2 2 2
10 3 3 4
res = df.apply(lambda x: np.unique(x).size == 1, axis=1)
print "res : %s\n" %res
df_f = df.ix[res,]
print "df filtered %s \n" % df_f
res = pd.Series(df_f.values.ravel()).unique().tolist()
print "unique values : %s " %res
Giving :
res :
0 True
1 False
2 True
3 True
4 True
10 False
dtype: bool
df filtered
7 8 9
0 3 3 3
2 2 2 2
3 0 0 0
4 2 2 2
unique values : [3, 2, 0]
As those operations need to be run many times (millions of times) on a million-row dataframe, I need to be able to run them as quickly as possible.
(@MaxU): schedule can be seen as a database/repository that is updated many times. The repository is then also queried many times for unique values.
Would you have any ideas for improvements or alternative approaches?
Given your df
7 8 9
0 3 3 3
1 2 8 8
2 2 2 2
3 0 0 0
4 2 2 2
10 3 3 4
You can simplify your code to:
df_f = df[df.apply(pd.Series.nunique, axis=1) == 1]
print(df_f)
7 8 9
0 3 3 3
2 2 2 2
3 0 0 0
4 2 2 2
And the final step to:
res = df_f.iloc[:,0].unique().tolist()
print(res)
[3, 2, 0]
It's not fully vectorised, but maybe this clarifies things a bit towards that?
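A fully vectorised sketch of the same idea (not benchmarked): compare every column of the sliced frame against its first column, so the row-wise apply disappears:
mask = df.eq(df.iloc[:, 0], axis=0).all(axis=1)   # True where all values in the row are identical
df_f = df[mask]
res = df_f.iloc[:, 0].unique().tolist()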
