Grouping index by specified group size in Pandas - python

I have a dataframe as shown below:
df =
index value1 value2 value3
001 0.3 1.3 4.5
002 1.1 2.5 3.7
003 0.1 0.9 7.8
....
365 3.4 1.2 0.9
The index represents the days of the year (so sometimes the last index is 366). I want to group it by a chosen number of days (for example 10 days or 30 days). I think the code would be something like this:
df_new = df.groupby( "method" ).mean()
In some questions I saw that a datetime dtype was used to group by; however, in my dataframe the index values are just numbers. Is there a better way to group it? Thanks in advance!

I think you need to floor-divide the index values and aggregate the mean:
df_new = df.groupby( df.index // 10).mean()
Another, more general solution if the index is not the default unique numeric one:
df_new = df.groupby( np.arange(len(df.index)) // 10).mean()
Sample:
c = 'val1 val2 val3'.split()
df = pd.DataFrame(np.random.randint(10, size=(20,3)), columns=c)
print (df)
val1 val2 val3
0 5 9 4
1 5 7 1
2 8 3 5
3 2 4 2
4 2 8 4
5 8 5 6
6 0 9 8
7 2 3 6
8 7 0 0
9 3 3 5
10 6 6 3
11 8 9 6
12 5 1 6
13 1 5 9
14 1 4 5
15 3 2 2
16 4 5 4
17 3 5 1
18 9 4 5
19 9 8 7
df_new = df.groupby( df.index // 10).mean()
print (df_new)
val1 val2 val3
0 4.2 5.1 4.1
1 4.9 4.9 4.8
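Mapped back onto the original question, the same floor division works directly on the 1-to-365 day index; here is a sketch with random data (the column names and the 30-day bin size are my own assumptions):

```python
import numpy as np
import pandas as pd

# hypothetical year of daily values, indexed 1..365
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.random((365, 3)),
                  columns=['value1', 'value2', 'value3'],
                  index=range(1, 366))

# subtract 1 so day 1 falls into bin 0, then floor-divide by the bin size
df_new = df.groupby((df.index - 1) // 30).mean()
print(df_new.shape)  # (13, 3): twelve full 30-day bins plus one 5-day remainder
```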

Just create a new index via the floor-division operator // and group by this index. Here is an example with 155 rows. You can drop the original index in the result.
df = pd.DataFrame({'index': list(range(1, 156)),
                   'val1': np.random.rand(155),
                   'val2': np.random.rand(155),
                   'val3': np.random.rand(155)})
df['new_index'] = df['index'] // 10
res = df.groupby('new_index', as_index=False).mean().drop('index', axis=1)
# new_index val1 val2 val3
# 0 0 0.315851 0.462080 0.491779
# 1 1 0.377690 0.566162 0.588248
# 2 2 0.314571 0.471430 0.626292
# 3 3 0.725548 0.572577 0.530589
# 4 4 0.569597 0.466964 0.443815
# 5 5 0.470747 0.394189 0.321107
# 6 6 0.362968 0.362278 0.415093
# 7 7 0.403529 0.626155 0.322582
# 8 8 0.555819 0.415741 0.525251
# 9 9 0.454660 0.336846 0.524158
# 10 10 0.435777 0.495191 0.380897
# 11 11 0.345916 0.550897 0.487255
# 12 12 0.676762 0.464794 0.612018
# 13 13 0.524610 0.450550 0.472724
# 14 14 0.466074 0.542736 0.680481
# 15 15 0.456921 0.565800 0.442543

Related

Why does pd.rolling and .apply() return multiple outputs from a function returning a single value?

I'm trying to create a rolling function that:
1. Divides two DataFrames with 3 columns each.
2. Calculates the mean of each row of the output from step 1.
3. Sums the averages from step 2.
This could be done with df.iterrows(), looping through each row, but that would be inefficient on larger datasets. Therefore, my objective is to express this as a pd.rolling operation that runs much faster.
What I would need help with is to understand why my approach below returns multiple values while the function I'm using only returns a single value.
EDIT : I have updated the question with the code that produces my desired output.
This is the test dataset I'm working with:
#import libraries
import pandas as pd
import numpy as np
#create two dataframes
values = {'column1': [7,2,3,1,3,2,5,3,2,4,6,8,1,3,7,3,7,2,6,3,8],
'column2': [1,5,2,4,1,5,5,3,1,5,3,5,8,1,6,4,2,3,9,1,4],
"column3" : [3,6,3,9,7,1,2,3,7,5,4,1,4,2,9,6,5,1,4,1,3]
}
df1 = pd.DataFrame(values)
df2 = pd.DataFrame([[2,3,4],[3,4,1],[3,6,1]])
print(df1)
print(df2)
column1 column2 column3
0 7 1 3
1 2 5 6
2 3 2 3
3 1 4 9
4 3 1 7
5 2 5 1
6 5 5 2
7 3 3 3
8 2 1 7
9 4 5 5
10 6 3 4
11 8 5 1
12 1 8 4
13 3 1 2
14 7 6 9
15 3 4 6
16 7 2 5
17 2 3 1
18 6 9 4
19 3 1 1
20 8 4 3
0 1 2
0 2 3 4
1 3 4 1
2 3 6 1
One method to achieve my desired output by looping through each row:
RunningSum = []
for index, rows in df1.iterrows():
    if index > 3:
        Div = abs((((df2 / df1.iloc[index-3+1:index+1].reset_index(drop=True).values) - 1) * 100))
        Average = Div.mean(axis=0)
        SumOfAverages = np.sum(Average)
        RunningSum.append(SumOfAverages)
#printing my desired output values
print(RunningSum)
[330.42328042328046,
212.0899470899471,
152.06349206349208,
205.55555555555554,
311.9047619047619,
209.1269841269841,
197.61904761904765,
116.94444444444444,
149.72222222222223,
430.0,
219.51058201058203,
215.34391534391537,
199.15343915343914,
159.6031746031746,
127.6984126984127,
326.85185185185185,
204.16666666666669]
However, this would be slow when working with large datasets. Therefore, I've tried to create a function to apply over a pd.rolling() window:
def SumOfAverageFunction(vals):
    Div = df2 / vals.reset_index(drop=True)
    Average = Div.mean(axis=0)
    SumOfAverages = np.sum(Average)
    return SumOfAverages

RunningSum = df1.rolling(window=3, axis=0).apply(SumOfAverageFunction)
The problem here is that my function returns multiple outputs. How can I solve this?
print(RunningSum)
column1 column2 column3
0 NaN NaN NaN
1 NaN NaN NaN
2 3.214286 4.533333 2.277778
3 4.777778 3.200000 2.111111
4 5.888889 4.416667 1.656085
5 5.111111 5.400000 2.915344
6 3.455556 3.933333 5.714286
7 2.866667 2.066667 5.500000
8 2.977778 3.977778 3.063492
9 3.555556 5.622222 1.907937
10 2.750000 4.200000 1.747619
11 1.638889 2.377778 3.616667
12 2.986111 2.005556 5.500000
13 5.333333 3.075000 4.750000
14 4.396825 5.000000 3.055556
15 2.174603 3.888889 2.148148
16 2.111111 2.527778 1.418519
17 2.507937 3.500000 3.311111
18 2.880952 3.000000 5.366667
19 2.722222 3.370370 5.750000
20 2.138889 5.129630 5.666667
After reordering the operations, your calculation can be simplified:
BASE = df2.sum(axis=0) /3
BASE_series = pd.Series({k: v for k, v in zip(df1.columns, BASE)})
result = df1.rdiv(BASE_series, axis=1).sum(axis=1)
print(np.around(result[4:], 3))
Outputs:
4 5.508
5 4.200
6 2.400
7 3.000
...
If you don't want to calculate anything before index 4, then change:
df1.iloc[4:].rdiv(...
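As to why the original attempt returned one value per column: rolling(...).apply() is column-wise, so the function never sees a 3-column window, only a 1-D Series per column per window position. A small probe (the names here are my own) makes this visible:

```python
import pandas as pd

df1 = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [10, 20, 30, 40]})

seen = []
def probe(window):
    # called once per column for every full window position,
    # receiving a 1-D Series of length `window`
    seen.append(window.shape)
    return window.sum()

out = df1.rolling(window=2).apply(probe, raw=False)
print(out.shape)  # (4, 2): one output column per input column
```

Because each call only ever returns one scalar per column, the result keeps the original column layout instead of collapsing to a single series.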

Multiple indexing with multiple idxmin() and idxmax() in one aggregate in pandas

In R data.table it is possible and easy to aggregate on multiple columns using argmin or argmax functions in one aggregate. For example for DT:
> DT = data.table(id=c(1,1,1,2,2,2,2,3,3,3), col1=c(1,3,5,2,5,3,6,3,67,7), col2=c(4,6,8,3,65,3,5,4,4,7), col3=c(34,64,53,5,6,2,4,6,4,67))
> DT
id col1 col2 col3
1: 1 1 4 34
2: 1 3 6 64
3: 1 5 8 53
4: 2 2 3 5
5: 2 5 65 6
6: 2 3 3 2
7: 2 6 5 4
8: 3 3 4 6
9: 3 67 4 4
10: 3 7 7 67
> DT_agg = DT[, .(agg1 = col1[which.max(col2)]
, agg2 = col2[which.min(col3)]
, agg3 = col1[which.max(col3)])
, by= id]
> DT_agg
id agg1 agg2 agg3
1: 1 5 4 3
2: 2 5 3 5
3: 3 7 4 7
agg1 is value of col1 where value of col2 is maximum, grouped by id.
agg2 is value of col2 where value of col3 is minimum, grouped by id.
agg3 is value of col1 where value of col3 is maximum, grouped by id.
How is this possible in pandas, doing all three aggregations in one aggregate operation using groupby and agg? I can't figure out how to incorporate three different indexings into one agg call in Python. Here's the dataframe in Python:
DF =pd.DataFrame({'id':[1,1,1,2,2,2,2,3,3,3], 'col1':[1,3,5,2,5,3,6,3,67,7], 'col2':[4,6,8,3,65,3,5,4,4,7], 'col3':[34,64,53,5,6,2,4,6,4,67]})
DF
Out[70]:
id col1 col2 col3
0 1 1 4 34
1 1 3 6 64
2 1 5 8 53
3 2 2 3 5
4 2 5 65 6
5 2 3 3 2
6 2 6 5 4
7 3 3 4 6
8 3 67 4 4
9 3 7 7 67
You can try this:
DF.groupby('id').agg(agg1=('col1', lambda x: x[DF.loc[x.index, 'col2'].idxmax()]),
                     agg2=('col2', lambda x: x[DF.loc[x.index, 'col3'].idxmin()]),
                     agg3=('col1', lambda x: x[DF.loc[x.index, 'col3'].idxmax()]))
agg1 agg2 agg3
id
1 5 4 3
2 5 3 5
3 7 4 7
How about a tidyverse way in python:
>>> from datar.all import f, tibble, group_by, which_max, which_min, summarise
>>>
>>> DF = tibble(
... id=[1,1,1,2,2,2,2,3,3,3],
... col1=[1,3,5,2,5,3,6,3,67,7],
... col2=[4,6,8,3,65,3,5,4,4,7],
... col3=[34,64,53,5,6,2,4,6,4,67]
... )
>>>
>>> DF >> group_by(f.id) >> summarise(
... agg1=f.col1[which_max(f.col2)],
... agg2=f.col2[which_min(f.col3)],
... agg3=f.col1[which_max(f.col3)]
... )
id agg1 agg2 agg3
<int64> <int64> <int64> <int64>
0 1 5 4 3
1 2 5 3 5
2 3 7 4 7
I am the author of the datar package. Feel free to submit issues if you have any questions.
Toyed with this question, primarily to see if I could get an improved speed on the original solution. This is faster than named aggregation.
grp = df.groupby("id")
pd.DataFrame({ "col1": df.col1[grp.col2.idxmax()].array,
"col2": df.col2[grp.col3.idxmin()].array,
"col3": df.col1[grp.col3.idxmax()].array},
index=grp.indices)
col1 col2 col3
1 5 4 3
2 5 3 5
3 7 4 7
Speedup ~3x.
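For reference, a self-contained version of that approach with the sample data (I pass an explicit list of group keys as the index, which I find slightly clearer than grp.indices):

```python
import pandas as pd

DF = pd.DataFrame({'id':   [1, 1, 1, 2, 2, 2, 2, 3, 3, 3],
                   'col1': [1, 3, 5, 2, 5, 3, 6, 3, 67, 7],
                   'col2': [4, 6, 8, 3, 65, 3, 5, 4, 4, 7],
                   'col3': [34, 64, 53, 5, 6, 2, 4, 6, 4, 67]})

grp = DF.groupby('id')
# idxmax/idxmin give the row label of each group's extremum;
# indexing the other column with those labels picks the paired value
res = pd.DataFrame({'agg1': DF.col1[grp.col2.idxmax()].array,
                    'agg2': DF.col2[grp.col3.idxmin()].array,
                    'agg3': DF.col1[grp.col3.idxmax()].array},
                   index=list(grp.indices))
print(res)
```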

Pandas Merge Columns with Priority

My input dataframe:
MinA MinB MaxA MaxB
0 1 2 5 7
1 1 0 8 6
2 2 15 15
3 3
4 10
I want to merge the "min" and "max" columns amongst themselves, with priority (the A columns have higher priority than the B columns).
If both columns are null, they should take default values: 0 for min and 100 for max.
The desired output is:
MinA MinB MaxA MaxB Min Max
0 1 2 5 7 1 5
1 1 0 8 6 1 8
2 2 15 15 2 15
3 3 3 100
4 10 0 10
Could you please help me with this?
This can be accomplished using mask. With your data that would look like the following:
df = pd.DataFrame({
    'MinA': [1, 1, 2, None, None],
    'MinB': [2, 0, None, 3, None],
    'MaxA': [5, 8, 15, None, None],
    'MaxB': [7, 6, 15, None, 10],
})

# Create the new column using A as the base; where it is NaN, use B.
# Then do the same again with the specified default value.
df['Min'] = df['MinA'].mask(pd.isna, df['MinB']).mask(pd.isna, 0)
df['Max'] = df['MaxA'].mask(pd.isna, df['MaxB']).mask(pd.isna, 100)
The above would result in the desired output:
MinA MinB MaxA MaxB Min Max
0 1 2 5 7 1 5
1 1 0 8 6 1 8
2 2 NaN 15 15 2 15
3 NaN 3 NaN NaN 3 100
4 NaN NaN NaN 10 0 10
Just using fillna() will be fine:
df['Min'] = df['MinA'].fillna(df['MinB']).fillna(0)
df['Max'] = df['MaxA'].fillna(df['MaxB']).fillna(100)
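A quick end-to-end check of the fillna chain against the sample frame built in the other answer:

```python
import pandas as pd

df = pd.DataFrame({'MinA': [1, 1, 2, None, None],
                   'MinB': [2, 0, None, 3, None],
                   'MaxA': [5, 8, 15, None, None],
                   'MaxB': [7, 6, 15, None, 10]})

# prefer A, fall back to B, then to the fixed default
df['Min'] = df['MinA'].fillna(df['MinB']).fillna(0)
df['Max'] = df['MaxA'].fillna(df['MaxB']).fillna(100)
print(df['Min'].tolist())  # [1.0, 1.0, 2.0, 3.0, 0.0]
print(df['Max'].tolist())  # [5.0, 8.0, 15.0, 100.0, 10.0]
```

Note row 1: MinB is 0 (a valid value), but MinA wins because the A column takes priority, matching the desired output.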

pandas version of SQL CROSS APPLY

Say we have a DataFrame df
df = pd.DataFrame({
    "Id": [1, 2],
    "Value": [2, 5]
})
df
Id Value
0 1 2
1 2 5
and some function f which takes an element of df and returns a DataFrame.
def f(value):
    return pd.DataFrame({"A": range(10, 10 + value), "B": range(20, 20 + value)})
f(2)
A B
0 10 20
1 11 21
We want to apply f to each element in df["Value"], and join the result to df, like so:
Id Value A B
0 1 2 10 20
1 1 2 11 21
2 2 5 10 20
2 2 5 11 21
2 2 5 12 22
2 2 5 13 23
2 2 5 14 24
In T-SQL, with a table df and table-valued function f, we would do this with a CROSS APPLY:
SELECT * FROM df
CROSS APPLY f(df.Value)
How can we do this in pandas?
You could apply the function to each element in Value in a list comprehension and use pd.concat to concatenate all the resulting dataframes. Also assign the corresponding Id so that it can later be used to merge both dataframes:
l = pd.concat([f(row.Value).assign(Id=row.Id) for _, row in df.iterrows()])
df.merge(l, on='Id')
Id Value A B
0 1 2 10 20
1 1 2 11 21
2 2 5 10 20
3 2 5 11 21
4 2 5 12 22
5 2 5 13 23
6 2 5 14 24
This is one of the few cases where I would use DataFrame.iterrows. We can iterate over each row, concatenate your function's output with that row of the original dataframe, and at the same time fill the NaNs with bfill and ffill:
df = pd.concat([pd.concat([f(r['Value']), pd.DataFrame(r).T], axis=1).bfill().ffill()
                for _, r in df.iterrows()],
               ignore_index=True)
Which yields:
print(df)
A B Id Value
0 10 20 1.0 2.0
1 11 21 1.0 2.0
2 10 20 2.0 5.0
3 11 21 2.0 5.0
4 12 22 2.0 5.0
5 13 23 2.0 5.0
6 14 24 2.0 5.0
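A variant of the same cross-apply idea that avoids iterrows entirely: concatenate f's outputs keyed by the originating row's index, then let an index join expand the rows (a sketch; the row multiplication relies on join's handling of a duplicated right index):

```python
import pandas as pd

df = pd.DataFrame({"Id": [1, 2], "Value": [2, 5]})

def f(value):
    return pd.DataFrame({"A": range(10, 10 + value), "B": range(20, 20 + value)})

# key each output frame by the originating row's index, then join:
# a duplicated right index makes join repeat the left row once per match
parts = pd.concat([f(v) for v in df["Value"]], keys=df.index).droplevel(1)
out = df.join(parts).reset_index(drop=True)
print(out.shape)  # (7, 4)
```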

Expanding mean grouped by multiple columns in pandas

I have a dataframe that I'd like to calculate expanding mean over one column (quiz_score), but need to group by two different columns (userid and week). The data looks like this:
data = {"userid": ['1', '1', '1', '1', '1', '1', '1', '1', '2', '2', '2', '2', '2', '2', '2', '2'],
        "week": [1, 1, 2, 2, 3, 3, 4, 4, 1, 2, 2, 3, 3, 4, 4, 5],
        "quiz_score": [12, 14, 14, 15, 9, 15, 11, 14, 15, 14, 15, 13, 15, 10, 14, 14]}
>>> df = pd.DataFrame(data, columns = ['userid', 'week', 'quiz_score'])
>>> df
userid week quiz_score
0 1 1 12
1 1 1 14
2 1 2 14
3 1 2 15
4 1 3 9
5 1 3 15
6 1 4 11
7 1 4 14
8 2 1 15
9 2 2 14
10 2 2 15
11 2 3 13
12 2 3 15
13 2 4 10
14 2 4 14
15 2 5 14
I need to calculate expanding means by userid over each week: for each user and each week, I need their average quiz score over the preceding weeks. I know that a solution will involve shift() and pd.expanding_mean() or .expanding().mean() in some form, but I've been unable to get the grouping and shifting correct. Even when I try without shifting, the results aren't grouped properly and seem to be just an expanding mean across all rows, as if there were no grouping at all:
df.groupby(['userid', 'week']).apply(pd.expanding_mean).reset_index()
To be clear, the correct result would look like this:
userid week expanding_mean_quiz_score
0 1 1 NA
1 1 2 13
2 1 3 13.75
3 1 4 13.166666
4 1 5 13
5 1 6 13
6 2 1 NA
7 2 2 15
8 2 3 14.666666
9 2 4 14.4
10 2 5 13.714
11 2 6 13.75
Note that the expanding_mean_quiz_score for each user/week is the mean of the scores for that user across all previous weeks.
Thanks for your help, I've never used expanding_mean() and am stumped here.
You can group by userid and week and keep track of the total score and count for those groupings. Then use a grouped cumsum to accumulate the scores and counts, shifted by one week so each row only sees the preceding weeks. Finally, get the desired column by dividing the two accumulations.
a = df.groupby(['userid', 'week'])['quiz_score'].agg(('sum', 'count'))
a = a.reindex(pd.MultiIndex.from_product([['1', '2'], range(1, 7)], names=['userid', 'week']))
b = a.groupby(level=0).cumsum().groupby(level=0).shift(1)
b['em_quiz_score'] = b['sum'] / b['count']
c = b.reset_index().drop(['count', 'sum'], axis=1)
d = c.groupby('userid').fillna(method='ffill')
d['userid'] = c['userid']
d = d[['userid', 'week', 'em_quiz_score']]
userid week em_quiz_score
0 1 1 NaN
1 1 2 13.000000
2 1 3 13.750000
3 1 4 13.166667
4 1 5 13.000000
5 1 6 13.000000
6 2 1 NaN
7 2 2 15.000000
8 2 3 14.666667
9 2 4 14.400000
10 2 5 13.714286
11 2 6 13.750000
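The same accumulate-and-shift idea can be written a bit more compactly: reindexing the weekly sums and counts with fill_value=0 lets the cumulative totals carry forward on their own, so no forward-fill pass is needed afterwards (a sketch against the sample data; the week range 1-6 matches the desired output):

```python
import pandas as pd

data = {"userid": ['1', '1', '1', '1', '1', '1', '1', '1', '2', '2', '2', '2', '2', '2', '2', '2'],
        "week": [1, 1, 2, 2, 3, 3, 4, 4, 1, 2, 2, 3, 3, 4, 4, 5],
        "quiz_score": [12, 14, 14, 15, 9, 15, 11, 14, 15, 14, 15, 13, 15, 10, 14, 14]}
df = pd.DataFrame(data)

# weekly sum/count per user over a full week grid, zero-filled for missing weeks
full = pd.MultiIndex.from_product([['1', '2'], range(1, 7)], names=['userid', 'week'])
agg = (df.groupby(['userid', 'week'])['quiz_score']
         .agg(['sum', 'count'])
         .reindex(full, fill_value=0))

# running totals up to (but excluding) each week, then divide
cum = agg.groupby(level='userid').cumsum().groupby(level='userid').shift(1)
res = (cum['sum'] / cum['count']).rename('expanding_mean_quiz_score').reset_index()
print(res['expanding_mean_quiz_score'].round(3).tolist())
```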
