How to aggregate two largest values per group in pandas? - python

I was going through this link: Return top N largest values per group using pandas
and found multiple ways to find the top N values per group.
However, I prefer the dictionary method with the agg function and would like to know whether there is an equivalent of the dictionary method for the following problem.
import numpy as np
import pandas as pd
df = pd.DataFrame({'A': [1, 1, 1, 2, 2],
                   'B': [1, 1, 2, 2, 1],
                   'C': [10, 20, 30, 40, 50],
                   'D': ['X', 'Y', 'X', 'Y', 'Y']})
print(df)
   A  B   C  D
0  1  1  10  X
1  1  1  20  Y
2  1  2  30  X
3  2  2  40  Y
4  2  1  50  Y
I can do this:
df1 = df.groupby(['A'])['C'].nlargest(2).droplevel(-1).reset_index()
print(df1)
   A   C
0  1  30
1  1  20
2  2  50
3  2  40
# also this
df1 = df.sort_values('C', ascending=False).groupby('A', sort=False).head(2)
print(df1)
# also this
df.set_index('B').groupby('A')['C'].nlargest(2).reset_index()
Required
df.groupby('A', as_index=False).agg(
    {'C': lambda ser: ser.nlargest(2)}  # something like this
)
Is it possible to use the dictionary here?

If you want to get a dictionary mapping each value of A to the 2 top values from C,
you can run:
df.groupby(['A'])['C'].apply(lambda x: x.nlargest(2).tolist()).to_dict()
For your DataFrame, the result is:
{1: [30, 20], 2: [50, 40]}
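As for the agg-with-dictionary form itself: agg must reduce each group to a single value, so the closest equivalent (a sketch, not from the answer above) is to make that value the list of the two largest entries:
# A sketch: agg reduces each group to one value, so the top-2 list
# itself becomes the aggregated value.
df1 = df.groupby('A', as_index=False).agg({'C': lambda ser: ser.nlargest(2).tolist()})
print(df1)
#    A         C
# 0  1  [30, 20]
# 1  2  [50, 40]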

Related

pandas groupby ID and select row with minimal value of specific columns

I want to select the whole row in which the minimal value of 3 selected columns is found, in a dataframe like this:
It is supposed to look like this afterwards:
I tried something like
dfcheckminrow = dfquery[dfquery == dfquery['A':'C'].min().groupby('ID')]
but obviously it didn't work out well.
Thanks in advance!
Bkeesey's answer looks like it almost got you to your solution. I added one more step to get the overall minimum for each group.
import pandas as pd
# create sample df
df = pd.DataFrame({'ID': [1, 1, 2, 2, 3, 3],
                   'A': [30, 14, 100, 67, 1, 20],
                   'B': [10, 1, 2, 5, 100, 3],
                   'C': [1, 2, 3, 4, 5, 6],
                   })
# set "ID" as the index
df = df.set_index('ID')
# get the per-group min for columns A and B
mindf = df[['A','B']].groupby('ID').transform('min')
# get the row-wise min between those columns and add it to df
df['min'] = mindf.min(axis=1)
# filter df for when A or B matches the min
df2 = df.loc[(df['A'] == df['min']) | (df['B'] == df['min'])]
print(df2)
In my simplified example, I'm just finding the minimum between columns A and B. Here's the output:
      A    B  C  min
ID
1    14    1  2    1
2   100    2  3    2
3     1  100  5    1
One method to filter the initial DataFrame based on a groupby condition is to use transform to find the minimum for each "ID" group, and then use loc with any(axis=1) (checking across the rows) to filter the initial DataFrame.
# create sample df
df = pd.DataFrame({'ID': [1, 1, 2, 2, 3, 3],
                   'A': [30, 14, 100, 67, 1, 20],
                   'B': [10, 1, 2, 5, 100, 3]})
# set "ID" as the index
df = df.set_index('ID')
Sample df:
      A    B
ID
1    30   10
1    14    1
2   100    2
2    67    5
3     1  100
3    20    3
Use groupby and transform to find the minimum value for each "ID" group.
Then use loc to filter the initial df to the rows where any(axis=1) holds:
df.loc[(df == df.groupby('ID').transform('min')).any(axis=1)]
Output:
      A    B
ID
1    14    1
2   100    2
2    67    5
3     1  100
3    20    3
In this example only the first row is removed, since it is not the group minimum in either column for its "ID" group.
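A different sketch, not from the answers above, assuming the goal is exactly one row per "ID" (the row whose smallest value across A and B is lowest within its group): compute the row-wise min first, then take each group's idxmin.
# A sketch: one row per ID, namely the row whose smallest value
# across A and B is the lowest within its group.
df = df.reset_index()
row_min = df[['A', 'B']].min(axis=1)       # row-wise min across A and B
best = row_min.groupby(df['ID']).idxmin()  # index label of the min per group
print(df.loc[best])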

subtract multiple columns at once

I have two dataframes:
df_1 = pd.DataFrame({'a' : [7,8, 2], 'b': [6, 6, 11], 'c': [4, 8, 6]})
df_1
and
df_2 = pd.DataFrame({'d' : [8, 4, 12], 'e': [16, 2, 1], 'f': [9, 3, 4]})
df_2
My goal is something like:
In a way that 'in one shot' I can subtract the columns pairwise.
I'm trying a for loop, but I'm stuck!
You can subtract them as numpy arrays (using .values) and then put the result in a dataframe:
df_3 = pd.DataFrame(df_1.values - df_2.values, columns=list('xyz'))
#     x   y  z
# 0  -1 -10 -5
# 1   4   4  5
# 2 -10  10  2
Or rename df_1.columns and df_2.columns to ['x','y','z'] and you can subtract them directly:
df_1.columns = df_2.columns = list('xyz')
df_3 = df_1 - df_2
#     x   y  z
# 0  -1 -10 -5
# 1   4   4  5
# 2 -10  10  2
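A non-destructive variant (a sketch, not from the answers above): relabel df_2's columns to match df_1's on the fly with set_axis, so neither original frame is renamed in place.
# A sketch: align df_2's column labels to df_1's for the subtraction,
# leaving both originals untouched.
df_3 = df_1 - df_2.set_axis(df_1.columns, axis=1)
# The result keeps df_1's column names a, b, c.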

Pandas Dataframe row selection combined condition index- and column values

I want to select rows from a dataframe based on values in the index combined with values in a specific column:
df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [0, 20, 30], [40, 20, 30]],
                  index=[4, 5, 6, 7], columns=['A', 'B', 'C'])
    A   B   C
4   0   2   3
5   0   4   1
6   0  20  30
7  40  20  30
with
df.loc[df['A'] == 0, 'C'] = 99
I can select all rows with column A == 0 and replace the value in column C with 99, but how can I select all rows with column A == 0 and index < 6 (I want to combine selection on the index with selection on the column)?
You can use multiple conditions in your loc statement:
df.loc[(df.index < 6) & (df.A == 0), 'C'] = 99
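A quick check of the result on the question's frame: only the rows with index < 6 and A == 0 get the new C value.
print(df)
#     A   B   C
# 4   0   2  99
# 5   0   4  99
# 6   0  20  30
# 7  40  20  30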

Pandas ValueError when calling apply with axis=1 and setting lists of varying length as cell-value

While calling apply on a Pandas dataframe with axis=1, I get a ValueError when trying to set a list as a cell value.
Note: the lists in different rows are of varying lengths, and this seems to be the cause, but I am not sure how to overcome it.
import numpy as np
import pandas as pd
data = [{'a': 1, 'b': '3412', 'c': 0}, {'a': 88, 'b': '56\t23', 'c': 1},
        {'a': 45, 'b': '412\t34\t324', 'c': 2}]
df = pd.DataFrame.from_dict(data)
print("df: ")
print(df)
def get_rank_array(ids):
    ids = list(map(int, ids))
    return np.random.randint(0, 10, len(ids))

def get_rank_list(ids):
    ids = list(map(int, ids))
    return np.random.randint(0, 10, len(ids)).tolist()
df['rank'] = df.apply(lambda row: get_rank_array(row['b'].split('\t')), axis=1)
ValueError: could not broadcast input array from shape (2) into shape (3)
df['rank'] = df.apply(lambda row: get_rank_list(row['b'].split('\t')), axis=1)
print("df: ")
print(df)
df:
    a             b  c       rank
0   1          3412  0        [6]
1  88        56\t23  1     [0, 0]
2  45  412\t34\t324  2  [3, 3, 6]
get_rank_list works in producing the expected result above, but get_rank_array does not.
I understand the (3,) shape comes from the number of columns in the dataframe, and (2,) is from the length of the list after splitting 56\t23 in the second row.
But I do not get the reason behind the error itself.
When
data = [{'a': 45, 'b': '412\t34\t324', 'c': 2},
        {'a': 1, 'b': '3412', 'c': 0}, {'a': 88, 'b': '56\t23', 'c': 1}]
the error occurs with lists too.
Observe -
df.apply(lambda x: [0, 1, 2])
   a  b  c
0  0  0  0
1  1  1  1
2  2  2  2
df.apply(lambda x: [0, 1])
a    [0, 1]
b    [0, 1]
c    [0, 1]
dtype: object
Pandas does two things inside apply:
it special-cases np.arrays and lists, and
it attempts to snap the results into a DataFrame if the shape is compatible.
Note that arrays are special-cased a little differently from lists: if the shape is not compatible, the result for lists is a Series (as you see in the second output above), but for arrays,
df.apply(lambda x: np.array([0, 1, 2]))
   a  b  c
0  0  0  0
1  1  1  1
2  2  2  2
df.apply(lambda x: np.array([0, 1]))
ValueError: Shape of passed values is (3, 2), indices imply (3, 3)
In short, this is a consequence of the pandas internals. For more information, peruse the apply function code on GitHub.
To get your desired output, use a list comprehension and assign the result to df['new']. Don't use apply.
df['new'] = [
    np.random.randint(0, 10, len(x.split('\t'))).tolist() for x in df.b
]
df
    a             b  c        new
0   1          3412  0        [8]
1  88        56\t23  1     [4, 2]
2  45  412\t34\t324  2  [9, 0, 3]
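If you do want to stay with apply, a hedged sketch: DataFrame.apply takes a result_type parameter, and result_type='reduce' asks pandas to return a Series rather than broadcasting list-like results into columns, which may sidestep the shape error for the array case.
# A sketch, assuming result_type='reduce' skips the broadcast attempt:
df['rank'] = df.apply(lambda row: get_rank_array(row['b'].split('\t')),
                      axis=1, result_type='reduce')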

How to get back the index after groupby in pandas

I am trying to find the record with the maximum value among the first records of each group after a groupby, and then delete that record from the original dataframe.
import pandas as pd
df = pd.DataFrame({'item_id': ['a', 'a', 'b', 'b', 'b', 'c', 'd'],
                   'cost': [1, 2, 1, 1, 3, 1, 5]})
print(df)
t = df.groupby('item_id').first()  # lost track of the index
desired_row = t[t.cost == t.cost.max()]
#delete this row from df
         cost
item_id
d           5
I need to keep track of desired_row and delete this row from df and repeat the process.
What is the best way to find and delete the desired_row?
I am not sure of a general way, but this will work in your case since you are taking the first item of each group (it would also easily work on the last). In fact, because of the general nature of split-aggregate-combine, I don't think this is easily achievable without doing it yourself.
gb = df.groupby('item_id', as_index=False)
>>> gb.groups  # Index locations of each group.
{'a': [0, 1], 'b': [2, 3, 4], 'c': [5], 'd': [6]}

# Get the first index location from each group using a dictionary comprehension.
subset = {k: v[0] for k, v in gb.groups.items()}  # .iteritems() is Python 2 only
df2 = df.iloc[list(subset.values())]  # iloc needs a list in Python 3
# These are the first items in each groupby.
>>> df2
   cost item_id
0     1       a
5     1       c
2     1       b
6     5       d
# Exclude any items from above where the cost is equal to the max cost across the first item in each group.
>>> df[~df.index.isin(df2[df2.cost == df2.cost.max()].index)]
   cost item_id
0     1       a
1     2       a
2     1       b
3     1       b
4     3       b
5     1       c
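A more compact sketch of the same idea (not from the answer above): head(1) keeps the first row of each group with its original index, so idxmax and drop finish the job.
# A sketch: first row per group (original index preserved), then drop
# the one whose cost is the maximum among those first rows.
first_rows = df.groupby('item_id').head(1)
print(df.drop(index=first_rows['cost'].idxmax()))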
Try this:
import pandas as pd
df = pd.DataFrame({'item_id': ['a', 'a', 'b', 'b', 'b', 'c', 'd'],
                   'cost': [1, 2, 1, 1, 3, 1, 5]})
t = df.drop_duplicates(subset=['item_id'], keep='first')
desired_row = t[t.cost == t.cost.max()]
df[~df.index.isin([desired_row.index[0]])]
Out[186]:
   cost item_id
0     1       a
1     2       a
2     1       b
3     1       b
4     3       b
5     1       c
Or using "not in" (~isin).
Consider this df with a few more rows:
df = pd.DataFrame({'item_id': ['a', 'a', 'b', 'b', 'b', 'c', 'd', 'd', 'd'],
                   'cost': [1, 2, 1, 1, 3, 1, 5, 1, 7]})
df[~df.cost.isin(df.groupby('item_id').first().max().tolist())]
   cost item_id
0     1       a
1     2       a
2     1       b
3     1       b
4     3       b
5     1       c
7     1       d
8     7       d
Overview: Create a dataframe from a dictionary. Group by item_id and find the max value. Enumerate over the grouped result and use the key, which is a numeric position, to recover the alpha index value. Create a result_df dataframe if you desire.
df_temp = pd.DataFrame({'item_id': ['a', 'a', 'b', 'b', 'b', 'c', 'd'],
                        'cost': [1, 2, 1, 1, 3, 1, 5]})
grouped = df_temp.groupby(['item_id'])['cost'].max()
# Build the rows first and construct result_df once
# (DataFrame.append was removed in pandas 2.0).
rows = [{'item_id': grouped.index[key], 'cost': value}
        for key, value in enumerate(grouped)]
result_df = pd.DataFrame(rows, columns=['item_id', 'cost'])
print(result_df.head(5))
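For reference, a one-line sketch of the same result (not from the answer above): groupby with as_index=False already returns the per-group max with item_id as a column.
# Equivalent sketch: per-group max, with item_id kept as a column.
result_df = df_temp.groupby('item_id', as_index=False)['cost'].max()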
