I am looking to update many columns based on the values in one column; this is easy with a loop but takes far too long for my application when there are many columns and many rows. What is the most elegant way to get the desired counts for each letter?
Desired Output:
Things count_A count_B count_C count_D
['A','B','C'] 1 1 1 0
['A','A','A'] 3 0 0 0
['B','A'] 1 1 0 0
['D','D'] 0 0 0 2
The most elegant solution is definitely CountVectorizer from sklearn.
I'll show you how it works first, then I'll do everything in one line, so you can see how elegant it is.
First, we'll do it step by step:
Let's create some data:
raw = ['ABC', 'AAA', 'BA', 'DD']
things = [list(s) for s in raw]
Then import the packages and initialize the CountVectorizer:
from sklearn.feature_extraction.text import CountVectorizer
import pandas as pd
cv = CountVectorizer(tokenizer=lambda doc: doc, lowercase=False)
Next we generate a matrix of counts
matrix = cv.fit_transform(things)
names = ["count_"+n for n in cv.get_feature_names()]
And save as a data frame
df = pd.DataFrame(data=matrix.toarray(), columns=names, index=raw)
Generating a data frame like this:
count_A count_B count_C count_D
ABC 1 1 1 0
AAA 3 0 0 0
BA 1 1 0 0
DD 0 0 0 2
Elegant version:
Everything above in one line
df = pd.DataFrame(data=cv.fit_transform(things).toarray(), columns=["count_"+n for n in cv.get_feature_names()], index=raw)
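A note for newer scikit-learn versions: get_feature_names() was deprecated in 1.0 and removed in 1.2 in favour of get_feature_names_out(), so on a current install the same one-liner becomes (a sketch, otherwise identical):
df = pd.DataFrame(data=cv.fit_transform(things).toarray(), columns=["count_"+n for n in cv.get_feature_names_out()], index=raw)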
Timing:
You mentioned that you're working with a rather large dataset, so I used the %%timeit magic to give a time estimate.
Previous response by @piRSquared (which otherwise looks very good!)
pd.concat([s, s.apply(lambda x: pd.Series(x).value_counts()).fillna(0)], axis=1)
100 loops, best of 3: 3.27 ms per loop
My answer:
pd.DataFrame(data=cv.fit_transform(things).toarray(), columns=["count_"+n for n in cv.get_feature_names()], index=raw)
1000 loops, best of 3: 1.08 ms per loop
According to my testing, CountVectorizer is about 3x faster.
option 1
apply + value_counts
s = pd.Series([list('ABC'), list('AAA'), list('BA'), list('DD')], name='Things')
pd.concat([s, s.apply(lambda x: pd.Series(x).value_counts()).fillna(0)], axis=1)
option 2
use pd.DataFrame(s.tolist()) + stack / groupby / unstack
pd.concat([s,
           pd.DataFrame(s.tolist()).stack()
             .groupby(level=0).value_counts()
             .unstack(fill_value=0)],
          axis=1)
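If you prefer staying inside pandas, here is one more sketch that produces the same count_* columns (requires pandas 0.25+ for explode; variable names are just illustrative):
exploded = s.explode()
counts = pd.crosstab(exploded.index, exploded).add_prefix('count_')
pd.concat([s, counts], axis=1)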
Related
I have a dataset
category
cat a
cat b
cat a
I'd like to be able to return something like (showing unique values and frequency)
category freq
cat a 2
cat b 1
Use value_counts() as @DSM commented.
In [37]:
df = pd.DataFrame({'a':list('abssbab')})
df['a'].value_counts()
Out[37]:
b 3
a 2
s 2
dtype: int64
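If you want relative frequencies instead of raw counts, value_counts also takes a normalize flag:
df['a'].value_counts(normalize=True)

b    0.428571
a    0.285714
s    0.285714
dtype: float64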
You can also use groupby and count. There are many ways to skin a cat here.
In [38]:
df.groupby('a').count()
Out[38]:
a
a
a 2
b 3
s 2
[3 rows x 1 columns]
See the online docs.
If you want to add the frequency back to the original dataframe, use transform to return an aligned index:
In [41]:
df['freq'] = df.groupby('a')['a'].transform('count')
df
Out[41]:
a freq
0 a 2
1 b 3
2 s 2
3 s 2
4 b 3
5 a 2
6 b 3
[7 rows x 2 columns]
If you want to apply this to all columns, you can use:
df.apply(pd.value_counts)
This will apply a column-based aggregation function (in this case value_counts) to each of the columns.
df.category.value_counts()
This short little line of code will give you the output you want.
If your column name has spaces you can use
df['category'].value_counts()
df.apply(pd.value_counts).fillna(0)
value_counts - returns an object containing counts of unique values
apply - counts the frequencies in every column; if you set axis=1, you get the frequencies in every row
fillna(0) - makes the output tidier by replacing NaN with 0
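One caveat worth noting: newer pandas releases deprecate the module-level pd.value_counts, so the same idea can be written through the Series method instead (an equivalent sketch):
df.apply(lambda s: s.value_counts()).fillna(0)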
In pandas 0.18.1, groupby together with count does not give the frequency of unique values:
>>> df
a
0 a
1 b
2 s
3 s
4 b
5 a
6 b
>>> df.groupby('a').count()
Empty DataFrame
Columns: []
Index: [a, b, s]
However, the unique values and their frequencies are easily determined using size:
>>> df.groupby('a').size()
a
a 2
b 3
s 2
With df.a.value_counts(), the counts are returned sorted in descending order by default (largest value first).
Using list comprehension and value_counts for multiple columns in a df
[my_series[c].value_counts() for c in list(my_series.select_dtypes(include=['O']).columns)]
https://stackoverflow.com/a/28192263/786326
As everyone has said, the fastest solution is:
df.column_to_analyze.value_counts()
But if you want to use the output in your dataframe, with this schema:
df input:
category
cat a
cat b
cat a
df output:
category counts
cat a 2
cat b 1
cat a 2
you can do this:
df['counts'] = df.category.map(df.category.value_counts())
df
If your DataFrame has values with the same type, you can also set return_counts=True in numpy.unique().
index, counts = np.unique(df.values,return_counts=True)
np.bincount() could be faster if your values are integers.
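A minimal sketch of both ideas, using the small category frame from the question (the integer codes passed to bincount are purely illustrative):
import numpy as np
import pandas as pd

df = pd.DataFrame({'category': ['cat a', 'cat b', 'cat a']})   # the frame from the question
values, counts = np.unique(df['category'].values, return_counts=True)
freq = pd.Series(counts, index=values)    # cat a -> 2, cat b -> 1

codes = np.array([0, 1, 0, 2, 2])         # hypothetical integer-coded column
np.bincount(codes)                        # array([2, 1, 2])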
You can also do this with pandas by casting your columns to the category dtype first, e.g. dtype="category":
cats = ['client', 'hotel', 'currency', 'ota', 'user_country']
df[cats] = df[cats].astype('category')
and then calling describe:
df[cats].describe()
This will give you a nice table of value counts and a bit more :):
client hotel currency ota user_country
count 852845 852845 852845 852845 852845
unique 2554 17477 132 14 219
top 2198 13202 USD Hades US
freq 102562 8847 516500 242734 340992
Without any libraries, you could do this instead:
def to_frequency_table(data):
    frequencytable = {}
    for key in data:
        if key in frequencytable:
            frequencytable[key] += 1
        else:
            frequencytable[key] = 1
    return frequencytable
Example:
to_frequency_table([1,1,1,1,2,3,4,4])
>>> {1: 4, 2: 1, 3: 1, 4: 2}
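For what it's worth, the standard library's collections.Counter produces the same table in one call, if an import is acceptable:
from collections import Counter
Counter([1, 1, 1, 1, 2, 3, 4, 4])   # gives the same counts: {1: 4, 2: 1, 3: 1, 4: 2}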
I believe this should work fine for any DataFrame's list of columns.
def column_list(x):
    column_list_df = []
    for col_name in x.columns:
        y = col_name, len(x[col_name].unique())
        column_list_df.append(y)
    return pd.DataFrame(column_list_df).rename(columns={0: "Feature", 1: "Value_count"})
The function "column_list" checks the columns names and then checks the uniqueness of each column values.
@metatoaster has already pointed this out.
Go for Counter. It's blazing fast.
import pandas as pd
from collections import Counter
import timeit
import numpy as np
df = pd.DataFrame(np.random.randint(1, 10000, (100, 2)), columns=["NumA", "NumB"])
Timers
%timeit -n 10000 df['NumA'].value_counts()
# 10000 loops, best of 3: 715 µs per loop
%timeit -n 10000 df['NumA'].value_counts().to_dict()
# 10000 loops, best of 3: 796 µs per loop
%timeit -n 10000 Counter(df['NumA'])
# 10000 loops, best of 3: 74 µs per loop
%timeit -n 10000 df.groupby(['NumA']).count()
# 10000 loops, best of 3: 1.29 ms per loop
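If you need the result in the same shape that value_counts returns, the Counter can be wrapped back into a Series (a small sketch):
pd.Series(Counter(df['NumA'])).sort_values(ascending=False)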
Cheers!
The following code creates a frequency table for the values in a column called "Total_score" in a dataframe called "smaller_dat1", and then returns the number of times the value 300 appears in the column.
valuec = smaller_dat1.Total_score.value_counts()
valuec.loc[300]
n_values = data.income.value_counts()
# First unique value count (positional access; on recent pandas prefer n_values.iloc[0])
n_at_most_50k = n_values[0]
# Second unique value count
n_greater_50k = n_values[1]
n_values
Output:
<=50K 34014
>50K 11208
Name: income, dtype: int64
Output of (n_greater_50k, n_at_most_50k):
(11208, 34014)
Your data:
category
cat a
cat b
cat a
Solution:
df['freq'] = df.groupby('category')['category'].transform('count')
df = df.drop_duplicates()
I have a dataframe
id store val1 val2
1 abc 20 30
1 abc 20 40
1 qwe 78 45
2 dfd 34 45
2 sad 43 45
From this I have to group by id and create a new df with the columns total_store, unique stores and non-repeating_stores, which contain the count of such store occurrences.
My final output should be
id total_store unique stores non-repeating_stores
1 3 2 1
2 2 2 2
I can get the total stores by doing
df.groupby('id')['store'].count()
But how do I get the others and form a dataframe out of it?
You can use a groupby + agg.
df = df.groupby('id').store.agg(['count', 'nunique',
                                 lambda x: x.drop_duplicates(keep=False).size])
df.columns = ['total_store', 'unique stores', 'non-repeating_stores']
df
total_store unique stores non-repeating_stores
id
1 3 2 1
2 2 2 2
For older pandas versions, passing a dict to agg allowed naming the result columns in one step (this was deprecated from 0.20 onwards):
agg_funcs = {'total_stores': 'count',
             'unique_stores': 'nunique',
             'non-repeating_stores': lambda x: x.drop_duplicates(keep=False).size}
df = df.groupby('id').store.agg(agg_funcs)
df
total_stores non-repeating_stores unique_stores
id
1 3 1 2
2 2 2 2
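Conversely, on pandas 0.25 and later, named aggregation gives labelled result columns without the deprecated dict form; a sketch with the same logic (column names chosen to match the desired output):
df.groupby('id').agg(
    total_store=('store', 'count'),
    unique_stores=('store', 'nunique'),
    non_repeating_stores=('store', lambda x: x.drop_duplicates(keep=False).size),
)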
As a slight speed improvement, you can use drop_duplicates' sister method, duplicated, in this fashion, as documented by jezrael:
lambda x: (~x.duplicated(keep=False)).sum()
This replaces the third function in agg, with roughly a 30% speed boost on data of size 1,000,000:
1 loop, best of 3: 7.31 s per loop
vs
1 loop, best of 3: 5.19 s per loop
Use groupby with agg, using count and nunique. The last function is a bit more involved: it counts all non-duplicates by inverting duplicated and summing.
If you need to count NaNs as well, use size instead of count:
df = df.groupby('id')['store'].agg(['count',
                                    'nunique',
                                    lambda x: (~x.duplicated(keep=False)).sum()])
df.columns = ['total_store', 'unique stores', 'non-repeating_stores']
print (df)
total_store unique stores non-repeating_stores
id
1 3 2 1
2 2 2 2
Timings:
np.random.seed(123)
N = 1000000
L = np.random.randint(10000,size=N).astype(str)
df = pd.DataFrame({'store': np.random.choice(L, N),
                   'id': np.random.randint(10000, size=N)})
print (df)
In [120]: %timeit (df.groupby('id')['store'].agg(['count', 'nunique', lambda x: (~x.duplicated(keep=False)).sum()]))
1 loop, best of 3: 4.47 s per loop
In [122]: %timeit (df.groupby('id').store.agg(['count', 'nunique', lambda x: x.drop_duplicates(keep=False).size]))
1 loop, best of 3: 11 s per loop
I have a large csv with three strings per row in this form:
a,c,d
c,a,e
f,g,f
a,c,b
c,a,d
b,f,s
c,a,c
I read in the first two columns, recode the strings to integers, and then remove duplicates, counting how many copies of each row there were, as follows:
import pandas as pd
df = pd.read_csv("test.csv", usecols=[0,1], prefix="ID_", header=None)
letters = set(df.values.flat)
df.replace(to_replace=letters, value=range(len(letters)), inplace=True)
df1 = df.groupby(['ID_0', 'ID_1']).size().rename('count').reset_index()
print(df1)
This gives:
ID_0 ID_1 count
0 0 1 2
1 1 0 3
2 2 4 1
3 4 3 1
which is exactly what I need.
However, as my data is large, I would like to make two improvements:
1. How can I do the groupby first and then recode, instead of the other way round? The problem is that I can't do df1[['ID_0', 'ID_1']].replace(to_replace=letters, value=range(len(letters)), inplace=True). This gives the error
"A value is trying to be set on a copy of a slice from a DataFrame"
2. How can I avoid creating df1? That is, do the whole thing in place.
I like to use sklearn.preprocessing.LabelEncoder to do the letter to digit conversion:
from sklearn.preprocessing import LabelEncoder
# Perform the groupby (before converting letters to digits).
df = df.groupby(['ID_0', 'ID_1']).size().rename('count').reset_index()
# Initialize the LabelEncoder.
le = LabelEncoder()
le.fit(df[['ID_0', 'ID_1']].values.flat)
# Convert to digits.
df[['ID_0', 'ID_1']] = df[['ID_0', 'ID_1']].apply(le.transform)
The resulting output:
ID_0 ID_1 count
0 0 2 2
1 1 3 1
2 2 0 3
3 3 4 1
If you want to convert back to letters at a later point in time, you can use le.inverse_transform:
df[['ID_0', 'ID_1']] = df[['ID_0', 'ID_1']].apply(le.inverse_transform)
Which maps back as expected:
ID_0 ID_1 count
0 a c 2
1 b f 1
2 c a 3
3 f g 1
If you just want to know which digit corresponds to which letter, you can look at the le.classes_ attribute. This will give you an array of letters, which is indexed by the digit it encodes to:
le.classes_
['a' 'b' 'c' 'f' 'g']
For a more visual representation, you can cast as a Series:
pd.Series(le.classes_)
0 a
1 b
2 c
3 f
4 g
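If you would rather stay inside pandas, factorize can do the same letter-to-digit recoding (a rough sketch on the original letter columns; note it assigns codes in order of first appearance rather than alphabetically, so the digits will differ from LabelEncoder's):
stacked = df[['ID_0', 'ID_1']].stack()
codes, uniques = pd.factorize(stacked)
df[['ID_0', 'ID_1']] = codes.reshape(-1, 2)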
Timings
Using a larger version of the sample data and the following setup:
df2 = pd.concat([df]*10**5, ignore_index=True)
def root(df):
    df = df.groupby(['ID_0', 'ID_1']).size().rename('count').reset_index()
    le = LabelEncoder()
    le.fit(df[['ID_0', 'ID_1']].values.flat)
    df[['ID_0', 'ID_1']] = df[['ID_0', 'ID_1']].apply(le.transform)
    return df

def pir2(df):
    unq = np.unique(df)
    mapping = pd.Series(np.arange(unq.size), unq)
    return df.stack().map(mapping).unstack() \
             .groupby(df.columns.tolist()).size().reset_index(name='count')
I get the following timings:
%timeit root(df2)
10 loops, best of 3: 101 ms per loop
%timeit pir2(df2)
1 loops, best of 3: 1.69 s per loop
New Answer
unq = np.unique(df)
mapping = pd.Series(np.arange(unq.size), unq)
df.stack().map(mapping).unstack() \
  .groupby(df.columns.tolist()).size().reset_index(name='count')
Old Answer
df.stack().rank(method='dense').astype(int).unstack() \
  .groupby(df.columns.tolist()).size().reset_index(name='count')
I am trying to find the count of distinct values in each column using Pandas. This is what I did.
import pandas as pd
import numpy as np
# Generate data.
NROW = 10000
NCOL = 100
df = pd.DataFrame(np.random.randint(1, 100000, (NROW, NCOL)),
                  columns=['col' + x for x in np.arange(NCOL).astype(str)])
I need to count the number of distinct elements for each column, like this:
col0 9538
col1 9505
col2 9524
What would be the most efficient way to do this, as this method will be applied to files which have size greater than 1.5GB?
Based upon the answers, df.apply(lambda x: len(x.unique())) is the fastest:
%timeit df.apply(lambda x: len(x.unique()))
10 loops, best of 3: 49.5 ms per loop
%timeit df.nunique()
10 loops, best of 3: 59.7 ms per loop
%timeit df.apply(pd.Series.nunique)
10 loops, best of 3: 60.3 ms per loop
%timeit df.T.apply(lambda x: x.nunique(), axis=1)
10 loops, best of 3: 60.5 ms per loop
As of pandas 0.20 we can use nunique directly on DataFrames, i.e.:
df.nunique()
a 4
b 5
c 1
dtype: int64
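nunique also accepts axis and dropna arguments, so per-row distinct counts (or NaN-aware counts) are one call away:
df.nunique(axis=1)          # distinct values per row
df.nunique(dropna=False)    # count NaN as its own value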
Other legacy options:
You could transpose the df and then use apply to call nunique row-wise:
In [205]:
df = pd.DataFrame({'a':[0,1,1,2,3],'b':[1,2,3,4,5],'c':[1,1,1,1,1]})
df
Out[205]:
a b c
0 0 1 1
1 1 2 1
2 1 3 1
3 2 4 1
4 3 5 1
In [206]:
df.T.apply(lambda x: x.nunique(), axis=1)
Out[206]:
a 4
b 5
c 1
dtype: int64
EDIT
As pointed out by @ajcr the transpose is unnecessary:
In [208]:
df.apply(pd.Series.nunique)
Out[208]:
a 4
b 5
c 1
dtype: int64
A pandas Series has a .value_counts() method that provides exactly what you want. Check out the documentation for it.
Already some great answers here :) but this one seems to be missing:
df.apply(lambda x: x.nunique())
As of pandas 0.20.0, DataFrame.nunique() is also available.
Recently, I had the same issue of counting the unique values of each column in a DataFrame, and I found another approach that runs faster than the apply function:
# Select how you want to store the output; it could be a pd.DataFrame or a dict. I will use a dict to demonstrate:
col_uni_val = {}
for i in df.columns:
    col_uni_val[i] = len(df[i].unique())

# Import pprint to display the dict nicely:
import pprint
pprint.pprint(col_uni_val)
For me this runs almost twice as fast as df.apply(lambda x: len(x.unique())).
I found:
df.agg(['nunique']).T
much faster than
df.apply(lambda x: len(x.unique()))
If you need to pick out only the columns with more than 20 unique values, across all the columns in pandas:
col_with_morethan_20_unique_values_cat = []
for col in data.columns:
    if data[col].dtype == 'O':
        if len(data[col].unique()) > 20:
            col_with_morethan_20_unique_values_cat.append(data[col].name)
    else:
        continue

print(col_with_morethan_20_unique_values_cat)
print('total number of columns with more than 20 unique values is', len(col_with_morethan_20_unique_values_cat))

# The output will be like:
# ['CONTRACT NO', 'X2', 'X3', ...]
# total number of columns with more than 20 unique values is 25
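A more compact sketch of the same filter, assuming data is the frame in question:
counts = data.select_dtypes(include='object').nunique()
cols_over_20 = counts[counts > 20].index.tolist()
print(cols_over_20)
print('number of object columns with more than 20 unique values:', len(cols_over_20))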
Adding the example code for the answer given by @CaMaDuPe85
df = pd.DataFrame({'a':[0,1,1,2,3],'b':[1,2,3,4,5],'c':[1,1,1,1,1]})
df
# df
a b c
0 0 1 1
1 1 2 1
2 1 3 1
3 2 4 1
4 3 5 1
for cs in df.columns:
    print(cs, df[cs].value_counts().count())

# using value_counts in each column and counting it
# Output
a 4
b 5
c 1