My pandas dataframe is as follows:
df = pd.DataFrame({"PAR NAME":['abc','def','def','def','abc'], "value":[1,2,3,4,5],"DESTCD":['E','N','E','E','S']})
I need to pivot df for PAR NAME and find out what %age of its value comes from places where DESTCD is 'E'. Something like this (which obviously didnt work!)
df.pivot_table(index="PAR NAME",values=["value"],aggfunc={'value':lambda x: (x.sum() if x["DESTCD"]=="E")*100.0/x.sum()})
I am currently doing this through adding a conditional column and then summing it along with 'value' in pivot and then dividing, but my database is huge (1gb+) and there has got to be an easier way.
Edit: Expected Output
abc 16.67 (since abc and E is 1 out of total abc which is 6)
def 77.78 (since def and E is 7 out of total def of 9);
(Note: Please dont recommend slicing multiple dataframes as mentioned my data is huge and efficiency is critical :) )
I tried to solve the problem without specifically referencing 'E' so it is generalizable to any alphabet letter. The output is a dataframe that you can then index on E to get your answer. I simply did the aggregation separately and then used an efficient join method.
df = pd.DataFrame({"PAR NAME":['abc','def','def','def','abc'], "value":[1,2,3,4,5],"DESTCD":['E','N','E','E','S']})
# First groupby 'DESTCD' and 'PAR NAME'
gb = df.groupby(['DESTCD', 'PAR NAME'], as_index=False).sum()
print(gb)
DESTCD PAR NAME value
0 E abc 1
1 E def 7
2 N def 2
3 S abc 5
gb_parname = gb.groupby(['PAR NAME']).sum()
out = gb.join(gb_parname, on='PAR NAME', rsuffix='Total')
print(out)
DESTCD PAR NAME value valueTotal
0 E abc 1 6
1 E def 7 9
2 N def 2 9
3 S abc 5 6
out.loc[:, 'derived']= out.apply(lambda df: df.value/df.valueTotal, axis=1)
print(out)
DESTCD PAR NAME value valueTotal derived
0 E abc 1 6 0.166667
1 E def 7 9 0.777778
2 N def 2 9 0.222222
3 S abc 5 6 0.833333
It's also a relatively efficient operation
%%timeit
gb = df.groupby(['DESTCD', 'PAR NAME'], as_index=False).sum()
gb_parname = gb.groupby(['PAR NAME']).sum()
out = gb.join(gb_parname, on='PAR NAME', rsuffix='Total')
out.loc[:, 'derived']= out.apply(lambda df: df.value/df.valueTotal, axis=1)
100 loops, best of 3: 6.31 ms per loop
Instead of pivot table you can use multiple groupby methods based on PAR NAME and then apply the operation you want. i.e
new = df[df['DESTCD']=='E'].groupby('PAR NAME')['value'].sum()*100/df.groupby('PAR NAME')['value'].sum()
Output:
PAR NAME
abc 16.666667
def 77.777778
Name: value, dtype: float64
If you want timings
%%timeit
df[df['DESTCD']=='E'].groupby('PAR NAME')['value'].sum()*100/df.groupby('PAR NAME')['value'].sum()
100 loops, best of 3: 4.03 ms per loop
%%timeit
df = pd.concat([df]*10000)
df[df['DESTCD']=='E'].groupby('PAR NAME')['value'].sum()*100/df.groupby('PAR NAME')['value'].sum()
100 loops, best of 3: 15.6 ms per loop
I also found a way to answer the question via pivot which is equally efficient as the selected answer! Adding here for convenience of others:
df.pivot_table(index="PAR NAME",values=["value"],aggfunc={'value':lambda x: x[df.iloc[x.index]['DESTCD']=='E'].sum()*100.0/x.sum()})
Logic being that aggfunc only works with series in question and cannot reference any other series till you get them via indexing the main df.
Related
I have a pandas dataframe which contains duplicates values according to two columns (A and B):
A B C
1 2 1
1 2 4
2 7 1
3 4 0
3 4 8
I want to remove duplicates keeping the row with max value in column C. This would lead to:
A B C
1 2 4
2 7 1
3 4 8
I cannot figure out how to do that. Should I use drop_duplicates(), something else?
You can do it using group by:
c_maxes = df.groupby(['A', 'B']).C.transform(max)
df = df.loc[df.C == c_maxes]
c_maxes is a Series of the maximum values of C in each group but which is of the same length and with the same index as df. If you haven't used .transform then printing c_maxes might be a good idea to see how it works.
Another approach using drop_duplicates would be
df.sort('C').drop_duplicates(subset=['A', 'B'], take_last=True)
Not sure which is more efficient but I guess the first approach as it doesn't involve sorting.
EDIT:
From pandas 0.18 up the second solution would be
df.sort_values('C').drop_duplicates(subset=['A', 'B'], keep='last')
or, alternatively,
df.sort_values('C', ascending=False).drop_duplicates(subset=['A', 'B'])
In any case, the groupby solution seems to be significantly more performing:
%timeit -n 10 df.loc[df.groupby(['A', 'B']).C.max == df.C]
10 loops, best of 3: 25.7 ms per loop
%timeit -n 10 df.sort_values('C').drop_duplicates(subset=['A', 'B'], keep='last')
10 loops, best of 3: 101 ms per loop
You can do this simply by using pandas drop duplicates function
df.drop_duplicates(['A','B'],keep= 'last')
I think groupby should work.
df.groupby(['A', 'B']).max()['C']
If you need a dataframe back you can chain the reset index call.
df.groupby(['A', 'B']).max()['C'].reset_index()
You can do it with drop_duplicates as you wanted
# initialisation
d = pd.DataFrame({'A' : [1,1,2,3,3], 'B' : [2,2,7,4,4], 'C' : [1,4,1,0,8]})
d = d.sort_values("C", ascending=False)
d = d.drop_duplicates(["A","B"])
If it's important to get the same order
d = d.sort_index()
I have a dataset
category
cat a
cat b
cat a
I'd like to be able to return something like (showing unique values and frequency)
category freq
cat a 2
cat b 1
Use value_counts() as #DSM commented.
In [37]:
df = pd.DataFrame({'a':list('abssbab')})
df['a'].value_counts()
Out[37]:
b 3
a 2
s 2
dtype: int64
Also groupby and count. Many ways to skin a cat here.
In [38]:
df.groupby('a').count()
Out[38]:
a
a
a 2
b 3
s 2
[3 rows x 1 columns]
See the online docs.
If you wanted to add frequency back to the original dataframe use transform to return an aligned index:
In [41]:
df['freq'] = df.groupby('a')['a'].transform('count')
df
Out[41]:
a freq
0 a 2
1 b 3
2 s 2
3 s 2
4 b 3
5 a 2
6 b 3
[7 rows x 2 columns]
If you want to apply to all columns you can use:
df.apply(pd.value_counts)
This will apply a column based aggregation function (in this case value_counts) to each of the columns.
df.category.value_counts()
This short little line of code will give you the output you want.
If your column name has spaces you can use
df['category'].value_counts()
df.apply(pd.value_counts).fillna(0)
value_counts - Returns object containing counts of unique values
apply - count frequency in every column. If you set axis=1, you get frequency in every row
fillna(0) - make output more fancy. Changed NaN to 0
In 0.18.1 groupby together with count does not give the frequency of unique values:
>>> df
a
0 a
1 b
2 s
3 s
4 b
5 a
6 b
>>> df.groupby('a').count()
Empty DataFrame
Columns: []
Index: [a, b, s]
However, the unique values and their frequencies are easily determined using size:
>>> df.groupby('a').size()
a
a 2
b 3
s 2
With df.a.value_counts() sorted values (in descending order, i.e. largest value first) are returned by default.
Using list comprehension and value_counts for multiple columns in a df
[my_series[c].value_counts() for c in list(my_series.select_dtypes(include=['O']).columns)]
https://stackoverflow.com/a/28192263/786326
As everyone said, the faster solution is to do:
df.column_to_analyze.value_counts()
But if you want to use the output in your dataframe, with this schema:
df input:
category
cat a
cat b
cat a
df output:
category counts
cat a 2
cat b 1
cat a 2
you can do this:
df['counts'] = df.category.map(df.category.value_counts())
df
If your DataFrame has values with the same type, you can also set return_counts=True in numpy.unique().
index, counts = np.unique(df.values,return_counts=True)
np.bincount() could be faster if your values are integers.
You can also do this with pandas by broadcasting your columns as categories first, e.g. dtype="category" e.g.
cats = ['client', 'hotel', 'currency', 'ota', 'user_country']
df[cats] = df[cats].astype('category')
and then calling describe:
df[cats].describe()
This will give you a nice table of value counts and a bit more :):
client hotel currency ota user_country
count 852845 852845 852845 852845 852845
unique 2554 17477 132 14 219
top 2198 13202 USD Hades US
freq 102562 8847 516500 242734 340992
Without any libraries, you could do this instead:
def to_frequency_table(data):
frequencytable = {}
for key in data:
if key in frequencytable:
frequencytable[key] += 1
else:
frequencytable[key] = 1
return frequencytable
Example:
to_frequency_table([1,1,1,1,2,3,4,4])
>>> {1: 4, 2: 1, 3: 1, 4: 2}
I believe this should work fine for any DataFrame columns list.
def column_list(x):
column_list_df = []
for col_name in x.columns:
y = col_name, len(x[col_name].unique())
column_list_df.append(y)
return pd.DataFrame(column_list_df)
column_list_df.rename(columns={0: "Feature", 1: "Value_count"})
The function "column_list" checks the columns names and then checks the uniqueness of each column values.
#metatoaster has already pointed this out.
Go for Counter. It's blazing fast.
import pandas as pd
from collections import Counter
import timeit
import numpy as np
df = pd.DataFrame(np.random.randint(1, 10000, (100, 2)), columns=["NumA", "NumB"])
Timers
%timeit -n 10000 df['NumA'].value_counts()
# 10000 loops, best of 3: 715 µs per loop
%timeit -n 10000 df['NumA'].value_counts().to_dict()
# 10000 loops, best of 3: 796 µs per loop
%timeit -n 10000 Counter(df['NumA'])
# 10000 loops, best of 3: 74 µs per loop
%timeit -n 10000 df.groupby(['NumA']).count()
# 10000 loops, best of 3: 1.29 ms per loop
Cheers!
The following code creates frequency table for the various values in a column called "Total_score" in a dataframe called "smaller_dat1", and then returns the number of times the value "300" appears in the column.
valuec = smaller_dat1.Total_score.value_counts()
valuec.loc[300]
n_values = data.income.value_counts()
First unique value count
n_at_most_50k = n_values[0]
Second unique value count
n_greater_50k = n_values[1]
n_values
Output:
<=50K 34014
>50K 11208
Name: income, dtype: int64
Output:
n_greater_50k,n_at_most_50k:-
(11208, 34014)
your data:
|category|
cat a
cat b
cat a
solution:
df['freq'] = df.groupby('category')['category'].transform('count')
df = df.drop_duplicates()
I have a dataset
category
cat a
cat b
cat a
I'd like to be able to return something like (showing unique values and frequency)
category freq
cat a 2
cat b 1
Use value_counts() as #DSM commented.
In [37]:
df = pd.DataFrame({'a':list('abssbab')})
df['a'].value_counts()
Out[37]:
b 3
a 2
s 2
dtype: int64
Also groupby and count. Many ways to skin a cat here.
In [38]:
df.groupby('a').count()
Out[38]:
a
a
a 2
b 3
s 2
[3 rows x 1 columns]
See the online docs.
If you wanted to add frequency back to the original dataframe use transform to return an aligned index:
In [41]:
df['freq'] = df.groupby('a')['a'].transform('count')
df
Out[41]:
a freq
0 a 2
1 b 3
2 s 2
3 s 2
4 b 3
5 a 2
6 b 3
[7 rows x 2 columns]
If you want to apply to all columns you can use:
df.apply(pd.value_counts)
This will apply a column based aggregation function (in this case value_counts) to each of the columns.
df.category.value_counts()
This short little line of code will give you the output you want.
If your column name has spaces you can use
df['category'].value_counts()
df.apply(pd.value_counts).fillna(0)
value_counts - Returns object containing counts of unique values
apply - count frequency in every column. If you set axis=1, you get frequency in every row
fillna(0) - make output more fancy. Changed NaN to 0
In 0.18.1 groupby together with count does not give the frequency of unique values:
>>> df
a
0 a
1 b
2 s
3 s
4 b
5 a
6 b
>>> df.groupby('a').count()
Empty DataFrame
Columns: []
Index: [a, b, s]
However, the unique values and their frequencies are easily determined using size:
>>> df.groupby('a').size()
a
a 2
b 3
s 2
With df.a.value_counts() sorted values (in descending order, i.e. largest value first) are returned by default.
Using list comprehension and value_counts for multiple columns in a df
[my_series[c].value_counts() for c in list(my_series.select_dtypes(include=['O']).columns)]
https://stackoverflow.com/a/28192263/786326
As everyone said, the faster solution is to do:
df.column_to_analyze.value_counts()
But if you want to use the output in your dataframe, with this schema:
df input:
category
cat a
cat b
cat a
df output:
category counts
cat a 2
cat b 1
cat a 2
you can do this:
df['counts'] = df.category.map(df.category.value_counts())
df
If your DataFrame has values with the same type, you can also set return_counts=True in numpy.unique().
index, counts = np.unique(df.values,return_counts=True)
np.bincount() could be faster if your values are integers.
You can also do this with pandas by broadcasting your columns as categories first, e.g. dtype="category" e.g.
cats = ['client', 'hotel', 'currency', 'ota', 'user_country']
df[cats] = df[cats].astype('category')
and then calling describe:
df[cats].describe()
This will give you a nice table of value counts and a bit more :):
client hotel currency ota user_country
count 852845 852845 852845 852845 852845
unique 2554 17477 132 14 219
top 2198 13202 USD Hades US
freq 102562 8847 516500 242734 340992
Without any libraries, you could do this instead:
def to_frequency_table(data):
frequencytable = {}
for key in data:
if key in frequencytable:
frequencytable[key] += 1
else:
frequencytable[key] = 1
return frequencytable
Example:
to_frequency_table([1,1,1,1,2,3,4,4])
>>> {1: 4, 2: 1, 3: 1, 4: 2}
I believe this should work fine for any DataFrame columns list.
def column_list(x):
column_list_df = []
for col_name in x.columns:
y = col_name, len(x[col_name].unique())
column_list_df.append(y)
return pd.DataFrame(column_list_df)
column_list_df.rename(columns={0: "Feature", 1: "Value_count"})
The function "column_list" checks the columns names and then checks the uniqueness of each column values.
#metatoaster has already pointed this out.
Go for Counter. It's blazing fast.
import pandas as pd
from collections import Counter
import timeit
import numpy as np
df = pd.DataFrame(np.random.randint(1, 10000, (100, 2)), columns=["NumA", "NumB"])
Timers
%timeit -n 10000 df['NumA'].value_counts()
# 10000 loops, best of 3: 715 µs per loop
%timeit -n 10000 df['NumA'].value_counts().to_dict()
# 10000 loops, best of 3: 796 µs per loop
%timeit -n 10000 Counter(df['NumA'])
# 10000 loops, best of 3: 74 µs per loop
%timeit -n 10000 df.groupby(['NumA']).count()
# 10000 loops, best of 3: 1.29 ms per loop
Cheers!
The following code creates frequency table for the various values in a column called "Total_score" in a dataframe called "smaller_dat1", and then returns the number of times the value "300" appears in the column.
valuec = smaller_dat1.Total_score.value_counts()
valuec.loc[300]
n_values = data.income.value_counts()
First unique value count
n_at_most_50k = n_values[0]
Second unique value count
n_greater_50k = n_values[1]
n_values
Output:
<=50K 34014
>50K 11208
Name: income, dtype: int64
Output:
n_greater_50k,n_at_most_50k:-
(11208, 34014)
your data:
|category|
cat a
cat b
cat a
solution:
df['freq'] = df.groupby('category')['category'].transform('count')
df = df.drop_duplicates()
i have a dataframe
id store val1 val2
1 abc 20 30
1 abc 20 40
1 qwe 78 45
2 dfd 34 45
2 sad 43 45
from this i have to group by on id and create a new df, with column, total_store and unique stores and non-repeating_stores, which contains count of such store occurances.
my final output should be
id total_store unique stores non-repeating_stores
1 3 2 1
2 2 2 2
i can get total stores by doing
df.groupby('id')['store'].count()
But how do i get others and form a dataframe out of it
You can use a groupby + agg.
df = df.groupby('id').store.agg(['count', 'nunique', \
lambda x: x.drop_duplicates(keep=False).size])
df.columns = ['total_store', 'unique stores', 'non-repeating_stores']
df
total_store unique stores non-repeating_stores
id
1 3 2 1
2 2 2 2
For older pandas versions, passing a dict allows simplifying your code (deprecated in 0.20 and onwards):
agg_funcs = {'total_stores' : 'count', 'unique_stores' : 'nunique',
'non-repeating_stores' : lambda x: x.drop_duplicates(keep=False).size
}
df = df.groupby('id').store.agg(agg_funcs)
df
total_stores non-repeating_stores unique_stores
id
1 3 1 2
2 2 2 2
As a slight improvement with speed, you can employ the use of drop_duplicates' sister method, duplicated, in this fashion, as documented by jezrael:
lambda x: (~x.duplicated(keep=False)).sum()
This would replace the third function in agg, with a 20% speed boost over large data of size 1000000:
1 loop, best of 3: 7.31 s per loop
v/s
1 loop, best of 3: 5.19 s per loop
Use groupby with agg with count and nunique. Last function is a bit complicated - need count all non dupes using inverting duplicated with sum:
If need count NaNs use size instead count:
df = df.groupby('id')['store'].agg(['count',
'nunique',
lambda x: (~x.duplicated(keep=False)).sum()])
df.columns = ['total_store', 'unique stores', 'non-repeating_stores']
print (df)
total_store unique stores non-repeating_stores
id
1 3 2 1
2 2 2 2
Timings:
np.random.seed(123)
N = 1000000
L = np.random.randint(10000,size=N).astype(str)
df = pd.DataFrame({'store': np.random.choice(L, N),
'id': np.random.randint(10000, size=N)})
print (df)
In [120]: %timeit (df.groupby('id')['store'].agg(['count', 'nunique', lambda x: (~x.duplicated(keep=False)).sum()]))
1 loop, best of 3: 4.47 s per loop
In [122]: %timeit (df.groupby('id').store.agg(['count', 'nunique', lambda x: x.drop_duplicates(keep=False).size]))
1 loop, best of 3: 11 s per loop
I am trying to find the count of distinct values in each column using Pandas. This is what I did.
import pandas as pd
import numpy as np
# Generate data.
NROW = 10000
NCOL = 100
df = pd.DataFrame(np.random.randint(1, 100000, (NROW, NCOL)),
columns=['col' + x for x in np.arange(NCOL).astype(str)])
I need to count the number of distinct elements for each column, like this:
col0 9538
col1 9505
col2 9524
What would be the most efficient way to do this, as this method will be applied to files which have size greater than 1.5GB?
Based upon the answers, df.apply(lambda x: len(x.unique())) is the fastest (notebook).
%timeit df.apply(lambda x: len(x.unique()))
10 loops, best of 3: 49.5 ms per loop
%timeit df.nunique()
10 loops, best of 3: 59.7 ms per loop
%timeit df.apply(pd.Series.nunique)
10 loops, best of 3: 60.3 ms per loop
%timeit df.T.apply(lambda x: x.nunique(), axis=1)
10 loops, best of 3: 60.5 ms per loop
As of pandas 0.20 we can use nunique directly on DataFrames, i.e.:
df.nunique()
a 4
b 5
c 1
dtype: int64
Other legacy options:
You could do a transpose of the df and then using apply call nunique row-wise:
In [205]:
df = pd.DataFrame({'a':[0,1,1,2,3],'b':[1,2,3,4,5],'c':[1,1,1,1,1]})
df
Out[205]:
a b c
0 0 1 1
1 1 2 1
2 1 3 1
3 2 4 1
4 3 5 1
In [206]:
df.T.apply(lambda x: x.nunique(), axis=1)
Out[206]:
a 4
b 5
c 1
dtype: int64
EDIT
As pointed out by #ajcr the transpose is unnecessary:
In [208]:
df.apply(pd.Series.nunique)
Out[208]:
a 4
b 5
c 1
dtype: int64
A Pandas.Series has a .value_counts() function that provides exactly what you want to. Check out the documentation for the function.
Already some great answers here :) but this one seems to be missing:
df.apply(lambda x: x.nunique())
As of pandas 0.20.0, DataFrame.nunique() is also available.
Recently, I have same issues of counting unique value of each column in DataFrame, and I found some other function that runs faster than the apply function:
#Select the way how you want to store the output, could be pd.DataFrame or Dict, I will use Dict to demonstrate:
col_uni_val={}
for i in df.columns:
col_uni_val[i] = len(df[i].unique())
#Import pprint to display dic nicely:
import pprint
pprint.pprint(col_uni_val)
This works for me almost twice faster than df.apply(lambda x: len(x.unique()))
I found:
df.agg(['nunique']).T
much faster
df.apply(lambda x: len(x.unique()))
Need to segregate only the columns with more than 20 unique values for all the columns in pandas_python:
enter code here
col_with_morethan_20_unique_values_cat=[]
for col in data.columns:
if data[col].dtype =='O':
if len(data[col].unique()) >20:
....col_with_morethan_20_unique_values_cat.append(data[col].name)
else:
continue
print(col_with_morethan_20_unique_values_cat)
print('total number of columns with more than 20 number of unique value is',len(col_with_morethan_20_unique_values_cat))
# The o/p will be as:
['CONTRACT NO', 'X2','X3',,,,,,,..]
total number of columns with more than 20 number of unique value is 25
Adding the example code for the answer given by #CaMaDuPe85
df = pd.DataFrame({'a':[0,1,1,2,3],'b':[1,2,3,4,5],'c':[1,1,1,1,1]})
df
# df
a b c
0 0 1 1
1 1 2 1
2 1 3 1
3 2 4 1
4 3 5 1
for cs in df.columns:
print(cs,df[cs].value_counts().count())
# using value_counts in each column and count it
# Output
a 4
b 5
c 1