Converting a multi-index pandas dataframe to single index - python

I have a data frame like below:
                                           i_id  q_id
month category_bucket
Aug   Algebra Tutoring                      187    64
      Balloon Artistry                      459   401
      Carpet Installation or Replacement    427   243
      Dance Lessons                         181    46
      Landscaping                           166    60
      Others                               9344  4987
      Tennis Instruction                    161    61
      Tree and Shrub Service                383   269
      Wedding Photography                   161    49
      Window Repair                         140    80
      Wiring                                439   206
July  Algebra Tutoring                      555   222
      Balloon Artistry                      229   202
      Carpet Installation or Replacement    140   106
      Dance Lessons                         354   115
      Landscaping                           511   243
      Others                               9019  4470
      Tennis Instruction                    613   324
      Tree and Shrub Service                130   100
      Wedding Photography                   425   191
      Window Repair                         444   282
      Wiring                                154    98
It's a multi-index data frame with month and category_bucket as the index, and i_id, q_id as the columns.
I got this by doing a groupby operation on a normal data frame like below
invites_combined.groupby(['month', 'category_bucket'])[["i_id","q_id"]].count()
I basically want a data frame with four columns, two each for i_id and q_id (one per month), plus a column for category_bucket. In other words, I want to convert the multi-index data frame above to a single index so that I can access the values.
Currently it's difficult for me to access the i_id and q_id values for a particular month and category value.
If you feel there is an easier way to access the i_id and q_id values for each category and month without converting to a single index, that is fine too. A single index would make it easier to loop over each value for each combination of month and category, though.

It seems you need reset_index to convert the MultiIndex to columns:
df = df.reset_index()
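If you also want the wide layout described in the question (one row per category with a separate i_id/q_id column per month), unstack can pivot the month level into the columns. A minimal sketch, assuming invites_combined is the original frame from the question:
import pandas as pd

# Recreate the grouped frame with its (month, category_bucket) MultiIndex
counts = invites_combined.groupby(['month', 'category_bucket'])[['i_id', 'q_id']].count()

# Flat version: MultiIndex levels become ordinary columns
flat = counts.reset_index()

# Wide version: pivot the month level into the columns, giving
# (i_id, Aug), (i_id, July), (q_id, Aug), (q_id, July) per category
wide = counts.unstack('month')
wide.columns = [f'{stat}_{month}' for stat, month in wide.columns]
wide = wide.reset_index()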

Related

Create new Pandas.DataFrame with .groupby(...).agg(sum) then recover unsummed columns

I'm starting with a dataframe of baseball seasons, a section of which looks similar to this:
                    Name  Season   AB    H  SB  playerid
13047    A.J. Pierzynski    2013  503  137   1       746
6891     A.J. Pierzynski    2006  509  150   1       746
1374           Rod Carew    1977  616  239  23   1001942
1422         Stan Musial    1948  611  230   7   1009405
1507         Todd Helton    2000  580  216   5       432
1508   Nomar Garciaparra    2000  529  197   5       190
1509       Ichiro Suzuki    2004  704  262  36      1101
From these seasons, I want to create a dataframe of career stats; that is, one row for each player which is a sum of their AB, H, etc. This dataframe should still include the names of the players. The playerid in the above is a unique key for each player and should either be an index or an unchanged value in a column after creating the career stats dataframe.
My hypothetical starting point is df_careers = df_seasons.groupby('playerid').agg(sum), but this leaves out all the non-numeric data. With numeric_only = False I can get some sort of mess in the name column, like 'Ichiro SuzukiIchiro SuzukiIchiro Suzuki' from concatenation, but that just requires a bunch of cleaning. This is something I'd like to be able to do with other data sets, and the actual data I have is more like 25 columns, so I'd rather understand a general routine for getting the Name data back, or preserving it from the outset, than write a specific function and use groupby('playerid').agg(func) (or a similar process) to do it, if possible.
I'm guessing there's a fairly simple way to do this, but I only started learning Pandas a week ago, so there are gaps in my knowledge.
You can write your own condition for how you want to handle the non-summed columns; just make sure the grouping key is excluded from the aggregation dictionary (which is what the col list is for):
col = df.columns.tolist()
col.remove('playerid')
df.groupby('playerid').agg({i: lambda x: x.iloc[0] if x.dtype == 'object' else x.sum() for i in col})
df:
                        Name  Season    AB    H  SB
playerid
190        Nomar Garciaparra    2000   529  197   5
432              Todd Helton    2000   580  216   5
746          A.J. Pierzynski    4019  1012  287   2
1101           Ichiro Suzuki    2004   704  262  36
1001942            Rod Carew    1977   616  239  23
1009405          Stan Musial    1948   611  230   7
(Season, being numeric, gets summed too, hence 4019 for A.J. Pierzynski's two seasons.)
If there is a one-to-one relationship between 'playerid' and 'Name', as appears to be the case, you can just include 'Name' in the groupby columns:
stat_cols = ['AB', 'H', 'SB']
groupby_cols = ['playerid', 'Name']
results = df.groupby(groupby_cols)[stat_cols].sum()
Results:
                             AB    H  SB
playerid Name
190      Nomar Garciaparra  529  197   5
432      Todd Helton        580  216   5
746      A.J. Pierzynski   1012  287   2
1101     Ichiro Suzuki      704  262  36
1001942  Rod Carew          616  239  23
1009405  Stan Musial        611  230   7
If you'd prefer to group only by 'playerid' and add the 'Name' data back in afterwards, you can instead create a 'playerid' to 'Name' mapping as a dictionary, and look it up using map:
results = df.groupby('playerid')[stat_cols].sum()
name_map = pd.Series(df.Name.to_numpy(), df.playerid).to_dict()
results['Name'] = results.index.map(name_map)
Results:
           AB    H  SB               Name
playerid
190       529  197   5  Nomar Garciaparra
432       580  216   5        Todd Helton
746      1012  287   2    A.J. Pierzynski
1101      704  262  36      Ichiro Suzuki
1001942   616  239  23          Rod Carew
1009405   611  230   7        Stan Musial
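As a small variant, the same mapping can also be built directly with drop_duplicates and set_index (equivalent under the one-to-one assumption):
# One row per playerid, then take its Name; Index.map accepts a Series
name_map = df.drop_duplicates('playerid').set_index('playerid')['Name']
results['Name'] = results.index.map(name_map)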
groupby.agg() can accept a dictionary that maps column names to functions. So, one solution is to pass a dictionary to agg, specifying which functions to apply to each column.
Using the sample data above, one might use
mapping = { 'AB': sum,'H': sum, 'SB': sum, 'Season': max, 'Name': max }
df_1 = df.groupby('playerid').agg(mapping)
The choice to use 'max' for those that shouldn't be summed is arbitrary. You could define a lambda function to apply to a column if you want to handle it in a certain way. DataFrameGroupBy.agg can work with any function that will work with DataFrame.apply.
To expand this to larger data sets, you might use a dictionary comprehension. This would work well:
dictionary = {x: sum for x in df.columns if x != 'playerid'}
dont_sum = {'Name': max, 'Season': max}
dictionary.update(dont_sum)
df_1 = df.groupby('playerid').agg(dictionary)
(The grouping key is left out of the dictionary, since it becomes the index and is no longer a column to aggregate.)
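To illustrate the lambda option mentioned above, here is a hedged variant of the same comprehension that keeps the first name and renders the seasons as a span (the span format is purely for illustration):
dictionary = {x: sum for x in df.columns if x != 'playerid'}
dictionary.update({
    'Name': 'first',                             # keep one representative name
    'Season': lambda s: f'{s.min()}-{s.max()}',  # e.g. '2006-2013'
})
df_1 = df.groupby('playerid').agg(dictionary)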

Using the first and last values of two columns to generate a new data frame based on conditions

Say I have the following data frame,
df.head()
ID  start   end  symbol  symbol_id   type
 1    146   291  bus     bus-201     CDS
 1    146   314  bus     bus-201     trans
 1    603   243  bus     bus-201     CDS
 1   1058  2123  car     car-203     CDS
 1    910    81  car     car-203     ex
 1   2623  2686  car     car-203     CDS
 1   5948  6043  car     car-203     CDS
 1   6348  6474  car     car-203     CDS
 1    910    81  car     car-201     ex
 1    910    81  car     car-201     ex
 1    636   650  car     car-203     CDS
 1    202   790  train   train-204   CDS
 1    200   314  train   train-204   CDS
 1    202   837  train   train-204   CDS
Now from the above data frame, I need to group items by column symbol_id where column type is CDS. Then I need to use the first value from column `start` as the value in the `start` column of the new data frame, and the last value from column `end` as the value in the `end` column.
Finally, df2 should look like:
start  end  symbol  symbol_id   type
  146  243  bus     bus-201     CDS
 1058  650  car     car-203     CDS
  202  837  train   train-204   CDS
I have tried using the list of values from df['symbol']:
sym_list = df['symbol'].drop_duplicates().tolist()
for symbol in df['symbol'].values:
    if symbol in tuple(sym_list):
        df_symbol = df['symbol'].isin(symbol)
which threw the following error,
TypeError: only list-like objects are allowed to be passed to isin(), you passed a [str]
I was trying to capture the first and last value for each symbol and symbol_id using:
start = df.query('type =="CDS"')[['start']].iloc[0]
end = df.query('type =="CDS"')[['end']].iloc[-1]
However, my data frame is quite big and I have more than 50,000 unique values for symbol, hence I need a better solution here.
Any help or suggestions are appreciated!!
You can do it using groupby and the first and last aggregate functions:
df[df["type"] == "CDS"].groupby("symbol_id", as_index=False).agg(
    {"start": "first", "end": "last", "symbol": "first", "type": "first"})
(The grouping key is excluded from the agg dict; as_index=False keeps it as a column.)
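On pandas 0.25+ the same aggregation reads a little more cleanly with named aggregation; a hedged equivalent of the line above:
result = (df[df['type'] == 'CDS']
          .groupby('symbol_id', as_index=False)
          .agg(start=('start', 'first'),    # first start in row order
               end=('end', 'last'),         # last end in row order
               symbol=('symbol', 'first'),
               type=('type', 'first')))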
Try:
df_group = df[df['type']=='CDS'].groupby(['symbol_id', 'symbol', 'type'])
df_new = pd.DataFrame(columns =['start', 'end'])
df_new[['start', 'end']] = df_group.agg({'start':'first', 'end': 'last'})
df_new.reset_index()
   symbol_id symbol type  start  end
0    bus-201    bus  CDS    146  243
1    car-203    car  CDS   1058  650
2  train-204  train  CDS    202  837
Edited to use agg, as in @Dev Khadka's answer.

Extract duplicates into new dataframe with Pandas

I have a large data frame with many columns. One of these columns is what's supposed to be a unique ID and another is a year. Unfortunately, there are duplicates in the unique ID column.
I know how to generate a list of all duplicates, but what I actually want to do is extract them out such that only the first entry (by year) remains. For example, the dataframe currently looks something like this (with a bunch of other columns):
ID Year
----------
123 1213
123 1314
123 1516
154 1415
154 1718
233 1314
233 1415
233 1516
And what I want to do is transform this dataframe into:
ID Year
----------
123 1213
154 1415
233 1314
While storing just those duplicates in another dataframe:
ID Year
----------
123 1314
123 1516
154 1718
233 1415
233 1516
I could drop duplicates by year to keep the oldest entry, but I am not sure how to get just the duplicates into a list that I can store as another dataframe.
How would I do this?
Use duplicated:
In [187]: d = df.duplicated(subset=['ID'], keep='first')
In [188]: df[~d]
Out[188]:
ID Year
0 123 1213
3 154 1415
5 233 1314
In [189]: df[d]
Out[189]:
ID Year
1 123 1314
2 123 1516
4 154 1718
6 233 1415
7 233 1516
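One caveat: duplicated marks rows in their existing order, so to guarantee the kept row is the earliest year, sort first. A minimal sketch, assuming the frame may not already be ordered:
# Oldest year first within each ID, then split on the duplicate mask
df = df.sort_values(['ID', 'Year'])
d = df.duplicated(subset=['ID'], keep='first')

firsts = df[~d]   # one row per ID: the earliest year
dupes = df[d]     # every later entry, stored as its own dataframe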

Python Pandas: Append Dataframe To Another Dataframe Only If Column Value is Unique

I have two data frames that I want to append together. Below are samples.
df_1:
Code Title
103 general checks
107 limits
421 horseshoe
319 scheduled
501 zonal
df_2
Code Title
103 hello
108 lucky eight
421 little toe
319 scheduled cat
503 new item
I want to append df_2 to df_1 ONLY IF the code number in df_2 does not exist already in df_1.
Below is the dataframe I want:
Code Title
103 general checks
107 limits
421 horseshoe
319 scheduled
501 zonal
108 lucky eight
503 new item
I have searched through Google and Stackoverflow but couldn't find anything on this specific case.
Just append the filtered data frame:
df3 = df_2.loc[~df_2.Code.isin(df_1.Code)]
df_1.append(df3)
Code Title
0 103 general checks
1 107 limits
2 421 horseshoe
3 319 scheduled
4 501 zonal
1 108 lucky eight
4 503 new item
Notice that you might end up with duplicated indexes, which may cause problems. To avoid that, you can call .reset_index(drop=True) to get a fresh df with no duplicated indexes.
df_1.append(df3).reset_index(drop=True)
Code Title
0 103 general checks
1 107 limits
2 421 horseshoe
3 319 scheduled
4 501 zonal
5 108 lucky eight
6 503 new item
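Note that DataFrame.append was deprecated in pandas 1.4 and removed in 2.0, so on current versions the same filter-then-stack idea is written with pd.concat:
import pandas as pd

# Keep only the df_2 rows whose Code is not already in df_1, then stack
df3 = df_2.loc[~df_2.Code.isin(df_1.Code)]
result = pd.concat([df_1, df3], ignore_index=True)  # ignore_index gives a fresh 0..n-1 index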
You can concat and then drop_duplicates. This assumes Code is unique within each dataframe.
res = pd.concat([df_1, df_2]).drop_duplicates('Code')
print(res)
   Code           Title
0   103  general checks
1   107          limits
2   421       horseshoe
3   319       scheduled
4   501           zonal
1   108     lucky eight
4   503        new item
Similar to concat, you could also use merge. With no join keys specified, the outer merge joins on all shared columns (Code and Title), so rows that differ in Title both survive the merge, and drop_duplicates('Code') then keeps the df_1 version:
df3 = pd.merge(df_1, df_2, how='outer').drop_duplicates('Code')
Code Title
0 103 general checks
1 107 limits
2 421 horseshoe
3 319 scheduled
4 501 zonal
6 108 lucky eight
9 503 new item

GroupBy and Cut in Pandas [duplicate]

This question already has an answer here:
applying pandas cut within a groupby
(1 answer)
Closed 1 year ago.
I am trying to group a set of things and perform cuts within the groups dynamically, based on each group's min, max, and the midpoint between min and max.
My dataset looks something like this:
Country Value
Uganda 210
Kenya 423
Kenya 315
Tanzania 780
Uganda 124
Uganda 213
Tanzania 978
Kenya 524
What I expect is the range each value falls into, above or below the mid-value:
Country Value Range
Uganda 210 (168.5, 213)
Uganda 124 (124, 168.5)
Uganda 213 (168.5, 213)
Kenya 423 (419.5, 524)
Kenya 315 (315, 419.5)
Kenya 524 (419.5, 524)
Tanzania 780 (780, 879)
Tanzania 978 (879, 980)
I am able to achieve this if I am doing it with a loop iterating over each group. I am also able to achieve the cuts based on the min and max value over the entire dataset but not individual groups. However, I was wondering if it can be done in a line or two using pandas and not use loops.
This is how I did it:
df['Range'] = df.groupby('Country')[['Value']].transform(lambda x: pd.cut(x, bins=2).astype(str))
Try this:
data['Range'] = data.groupby('Country').Value.apply(pd.cut, bins=2)
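For reference, a self-contained sketch using the sample data above; pd.cut with bins=2 splits each country's values at the midpoint between its min and max:
import pandas as pd

df = pd.DataFrame({
    'Country': ['Uganda', 'Kenya', 'Kenya', 'Tanzania',
                'Uganda', 'Uganda', 'Tanzania', 'Kenya'],
    'Value': [210, 423, 315, 780, 124, 213, 978, 524],
})

# Two equal-width bins per country: (min, mid] and (mid, max];
# cast to str so the interval labels survive the transform
df['Range'] = df.groupby('Country')['Value'].transform(
    lambda s: pd.cut(s, bins=2).astype(str))
print(df)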
