Sum based on grouping in pandas dataframe? - python

I have a pandas dataframe df which contains:
major      men  women  rank
Art          5      4     1
Art          3      5     3
Art          2      4     2
Engineer     7      8     3
Engineer     7      4     4
Business     5      5     4
Business     3      4     2
Basically I need to find the total number of students, men and women combined, per major, regardless of the rank column. For Art, for example, the total of all men + women should be 23; Engineer 26; Business 17.
I have tried
df.groupby(['major']).sum()
But this separately sums the men and women rather than combining their totals.

Just add both columns and then groupby:
(df.men+df.women).groupby(df.major).sum()
major
Art         23
Business    17
Engineer    26
dtype: int64
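An equivalent sketch that groups the two count columns first and then sums across them:
# Per-major totals: sum men and women within each major, then add the two column totals row-wise
df.groupby('major')[['men', 'women']].sum().sum(axis=1)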

melt() then groupby():
df.drop(columns='rank').melt('major').groupby('major', as_index=False)['value'].sum()
major value
0 Art 23
1 Business 17
2 Engineer 26
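For reference, a sketch of the intermediate frame that melt produces, which is what lets a single sum cover both columns:
# After melt, the men/women counts are stacked into one 'value' column
df.drop(columns='rank').melt('major')
#        major variable  value
# 0        Art      men      5
# 1        Art      men      3
# ...
# 13  Business    women      4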

Rename the less frequent categories as "OTHER" - python

In my dataframe I have some categorical columns with over 100 different categories. I want to rank the categories by frequency, keep the 9 most frequent, and automatically rename the less frequent ones to: OTHER
Example:
Here my df :
print(df)
Employee_number Jobrol
0 1 Sales Executive
1 2 Research Scientist
2 3 Laboratory Technician
3 4 Sales Executive
4 5 Research Scientist
5 6 Laboratory Technician
6 7 Sales Executive
7 8 Research Scientist
8 9 Laboratory Technician
9 10 Sales Executive
10 11 Research Scientist
11 12 Laboratory Technician
12 13 Sales Executive
13 14 Research Scientist
14 15 Laboratory Technician
15 16 Sales Executive
16 17 Research Scientist
17 18 Research Scientist
18 19 Manager
19 20 Human Resources
20 21 Sales Executive
valCount = df['Jobrol'].value_counts()
valCount
Sales Executive 7
Research Scientist 7
Laboratory Technician 5
Manager 1
Human Resources 1
Here I keep the first 3 categories and rename the rest to "OTHER"; how should I proceed?
Thanks.
Convert your series to categorical, extract categories whose counts are not in the top 3, add a new category e.g. 'Other', then replace the previously calculated categories:
df['Jobrol'] = df['Jobrol'].astype('category')
others = df['Jobrol'].value_counts().index[3:]
label = 'Other'
df['Jobrol'] = df['Jobrol'].cat.add_categories([label])
df['Jobrol'] = df['Jobrol'].replace(others, label)
Note: It's tempting to combine categories by renaming them via df['Jobrol'].cat.rename_categories(dict.fromkeys(others, label)), but this won't work as this will imply multiple identically labeled categories, which isn't possible.
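For illustration, a sketch of what that attempt looks like (the exact message may vary by pandas version):
# Expected to fail: several old categories would map to one identical new label
# df['Jobrol'].cat.rename_categories(dict.fromkeys(others, label))
# ValueError: Categorical categories must be unique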
The above solution can be adapted to filter by count. For example, to lump together only the categories that appear exactly once, you can define others like so:
counts = df['Jobrol'].value_counts()
others = counts[counts == 1].index
Use value_counts with numpy.where:
import numpy as np

need = df['Jobrol'].value_counts().index[:3]
df['Jobrol'] = np.where(df['Jobrol'].isin(need), df['Jobrol'], 'OTHER')
valCount = df['Jobrol'].value_counts()
print (valCount)
Research Scientist 7
Sales Executive 7
Laboratory Technician 5
OTHER 2
Name: Jobrol, dtype: int64
Another solution:
N = 3
s = df['Jobrol'].value_counts()
valCount = pd.concat([s.iloc[:N], pd.Series(s.iloc[N:].sum(), index=['OTHER'])])
print (valCount)
Research Scientist 7
Sales Executive 7
Laboratory Technician 5
OTHER 2
dtype: int64
One-line solution (here limit is a count threshold; with the sample data above, a limit of 2 keeps the three most frequent categories):
limit = 2
df['Jobrol'] = df['Jobrol'].map({k: k if v > limit else 'OTHER' for k, v in df['Jobrol'].value_counts().items()})

Get unique values from pandas series of lists

I have a column in a DataFrame containing lists of categories. For example:
0 [Pizza]
1 [Mexican, Bars, Nightlife]
2 [American, New, Barbeque]
3 [Thai]
4 [Desserts, Asian, Fusion, Mexican, Hawaiian, F...
6 [Thai, Barbeque]
7 [Asian, Fusion, Korean, Mexican]
8 [Barbeque, Bars, Pubs, American, Traditional, ...
9 [Diners, Burgers, Breakfast, Brunch]
11 [Pakistani, Halal, Indian]
I am attempting to do two things:
1) Get unique categories - My approach is to start with an empty set, iterate through the series, and union in each list.
my code:
unique_categories = {'Pizza'}
for lst in restaurant_review_df['categories_arr']:
    unique_categories = unique_categories | set(lst)
This gives me a set of the unique categories contained in all the lists in the column.
2) Generate a pie plot of category counts, where each restaurant can belong to multiple categories. For example: restaurant 11 belongs to the Pakistani, Indian and Halal categories. My approach is again to iterate through the categories, plus one more iteration through the series to get counts.
Are there simpler or elegant ways of doing this?
Thanks in advance.
Update: using pandas 0.25.0+ with explode:
df['category'].explode().value_counts()
Output:
Barbeque 3
Mexican 3
Fusion 2
Thai 2
American 2
Bars 2
Asian 2
Hawaiian 1
New 1
Brunch 1
Pizza 1
Traditional 1
Pubs 1
Korean 1
Pakistani 1
Burgers 1
Diners 1
Indian 1
Desserts 1
Halal 1
Nightlife 1
Breakfast 1
Name: category, dtype: int64
And with plotting:
df['category'].explode().value_counts().plot.pie(figsize=(8,8))
Output: (pie chart of the category counts)
For versions of pandas older than 0.25.0, try:
df['category'].apply(pd.Series).stack().value_counts()
Output:
Mexican 3
Barbeque 3
Thai 2
Fusion 2
American 2
Bars 2
Asian 2
Pubs 1
Burgers 1
Traditional 1
Brunch 1
Indian 1
Korean 1
Halal 1
Pakistani 1
Hawaiian 1
Diners 1
Pizza 1
Nightlife 1
New 1
Desserts 1
Breakfast 1
dtype: int64
With plotting:
df['category'].apply(pd.Series).stack().value_counts().plot.pie()
Output: (pie chart)
Per @coldspeed's comments:
from itertools import chain
from collections import Counter
pd.DataFrame.from_dict(Counter(chain(*df['category'])), orient='index').sort_values(0, ascending=False)
Output:
Barbeque 3
Mexican 3
Bars 2
American 2
Thai 2
Asian 2
Fusion 2
Pizza 1
Diners 1
Halal 1
Pakistani 1
Brunch 1
Breakfast 1
Burgers 1
Hawaiian 1
Traditional 1
Pubs 1
Korean 1
Desserts 1
New 1
Nightlife 1
Indian 1
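Part 1) of the question (unique categories) has a similarly compact answer; a sketch using the same exploded series, or a plain set union over the lists:
# Unique categories via explode (pandas 0.25.0+)
unique_categories = set(df['category'].explode().unique())
# Or without explode: union all the lists directly
unique_categories = set().union(*df['category'])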

How do I find the highest count of the same values with Pandas Python?

I am trying to find the most popular major for each university.
Here is a sample of the table:
Institution Major_Name Count Major
School 1 Art 2 First
School 1 English 12 First
School 1 Math 7 First
School 1 Art 6 Second
School 1 English 4 Second
School 1 Math 3 Second
School 2 Art 9
School 2 English 4
School 2 Math 13
I want the final outcome to look like this, where the rest of the rows disappear:
Institution Major_Name Count Major
School 1 English 12 First
School 1 Art 6 Second
School 2 Math 13
Thanks in advance. Very new to using Pandas!
You can do a groupby on Institution and then apply the max function:
In [547]: df.groupby('Institution', as_index=False).max()
Out[547]:
Institution Major Count
0 School 1 Math 12
1 School 2 Math 13
The as_index=False argument prevents the result from using Institution as the new index.
Based on your edit: To group by Institution as well as Major, you can specify multiple columns to group by:
In [563]: df.fillna('').groupby(['Institution', 'Major'], as_index=False).max()
Out[563]:
Institution Major Major_Name Count
0 School 1 First Math 12
1 School 1 Second Math 6
2 School 2 Math 13
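Note that max() is computed per column independently, so the Major_Name shown need not come from the same row as the Count (the desired output pairs English with 12 at School 1, not Math). A sketch that keeps whole rows by selecting the index of the maximum Count per group:
# Locate the row holding the highest Count within each (Institution, Major) group
idx = df.fillna('').groupby(['Institution', 'Major'])['Count'].idxmax()
df.loc[idx]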

Pandas: Fill in missing indexes with specific ordered values that are already in column.

I have extracted a one-column dataframe with specific values. Now this is what the dataframe looks like:
Commodity
0 Cocoa
4 Coffee
6 Maize
7 Rice
10 Sugar
12 Wheat
Now I want to fill each missing index with the value above it in the column, so it should look something like this:
Commodity
0 Cocoa
1 Cocoa
2 Cocoa
3 Cocoa
4 Coffee
5 Coffee
6 Maize
7 Rice
8 Rice
9 Rice
10 Sugar
11 Sugar
12 Wheat
I didn't find anything relevant in the pandas documentation section Working with Text Data. Thanks for your help!
I create a new index with pd.RangeIndex. It works like range so I need to pass it a number one greater than the max number in the current index.
df.reindex(pd.RangeIndex(df.index.max() + 1)).ffill()
Commodity
0 Cocoa
1 Cocoa
2 Cocoa
3 Cocoa
4 Coffee
5 Coffee
6 Maize
7 Rice
8 Rice
9 Rice
10 Sugar
11 Sugar
12 Wheat
First expand the index to include all numbers
s = pd.Series(['Cocoa', 'Coffee', 'Maize', 'Rice', 'Sugar', 'Wheat',], index=[0,4,6,7,10, 12], name='Commodity')
s = s.reindex(range(s.index.max() + 1))
Then forward-fill the gaps
s.ffill()
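The reindex and fill can also be combined, since reindex accepts a fill method directly:
# One step: build the full range and forward-fill in the same call
s.reindex(range(s.index.max() + 1), method='ffill')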

Selecting data based on number of occurences using Python / Pandas

My dataset is based on the results of Food Inspections in the City of Chicago.
import pandas as pd
df = pd.read_csv("C:/~/Food_Inspections.csv")
df.head()
Out[1]:
Inspection ID DBA Name \
0 1609238 JR'SJAMAICAN TROPICAL CAFE,INC
1 1609245 BURGER KING
2 1609237 DUNKIN DONUTS / BASKIN ROBINS
3 1609258 CHIPOTLE MEXICAN GRILL
4 1609244 ATARDECER ACAPULQUENO INC.
AKA Name License # Facility Type Risk \
0 NaN 2442496.0 Restaurant Risk 1 (High)
1 BURGER KING 2411124.0 Restaurant Risk 2 (Medium)
2 DUNKIN DONUTS / BASKIN ROBINS 1717126.0 Restaurant Risk 2 (Medium)
3 CHIPOTLE MEXICAN GRILL 1335044.0 Restaurant Risk 1 (High)
4 ATARDECER ACAPULQUENO INC. 1910118.0 Restaurant Risk 1 (High)
Here is how often each of the facilities appear in the dataset:
df['Facility Type'].value_counts()
Out[3]:
Restaurant 14304
Grocery Store 2647
School 1155
Daycare (2 - 6 Years) 367
Bakery 316
Children's Services Facility 262
Daycare Above and Under 2 Years 248
Long Term Care 169
Daycare Combo 1586 142
Catering 123
Liquor 78
Hospital 68
Mobile Food Preparer 67
Golden Diner 65
Mobile Food Dispenser 51
Special Event 25
Shared Kitchen User (Long Term) 22
Daycare (Under 2 Years) 18
I am trying to create a new dataset containing only those rows whose Facility Type has over 50 occurrences in the dataset. How would I approach this?
Please note that the real list of facility counts is MUCH LARGER; I have cut out most of it as it did not contribute to the question at hand (so simply removing occurrences of "Special Event", "Shared Kitchen User", and "Daycare" is not what I'm looking for).
IIUC then you want to filter:
df.groupby('Facility Type').filter(lambda x: len(x) > 50)
Example:
In [9]:
df = pd.DataFrame({'type':list('aabcddddee'), 'value':np.random.randn(10)})
df
Out[9]:
type value
0 a -0.160041
1 a -0.042310
2 b 0.530609
3 c 1.238046
4 d -0.754779
5 d -0.197309
6 d 1.704829
7 d -0.706467
8 e -1.039818
9 e 0.511638
In [10]:
df.groupby('type').filter(lambda x: len(x) > 1)
Out[10]:
type value
0 a -0.160041
1 a -0.042310
4 d -0.754779
5 d -0.197309
6 d 1.704829
7 d -0.706467
8 e -1.039818
9 e 0.511638
Not tested, but should work.
FT = df['Facility Type'].value_counts()
df[df['Facility Type'].isin(FT.index[FT > 50])]
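A variant of the same filter without a lambda, as a sketch: transform('size') broadcasts each group's row count back onto its rows, so a plain boolean mask suffices (and is often faster than groupby().filter on large frames):
# Keep rows whose Facility Type appears more than 50 times
counts = df.groupby('Facility Type')['Facility Type'].transform('size')
df[counts > 50]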
