In the following DataFrame I would like to add rows whenever a group in column A has fewer values than the largest group.
For example, in the following table group 60 in column A appears 12 times, whereas group 61 appears only 9 times. I would like to add rows after the last record of group 61 and copy the values in columns B, C and D from the corresponding rows of group 60. The same operation applies to group 62, and so on.
A B C D
0 60 0.235 4 7.86
1 60 1.235 5 8.86
2 60 2.235 6 9.86
3 60 3.235 7 10.86
4 60 4.235 8 11.86
5 60 5.235 9 12.86
6 60 6.235 10 13.86
7 60 7.235 11 14.86
8 60 8.235 12 15.86
9 60 9.235 13 16.86
10 60 10.235 14 17.86
11 60 11.235 15 18.86
12 61 12.235 16 19.86
13 61 13.235 17 20.86
14 61 14.235 18 21.86
15 61 15.235 19 22.86
16 61 16.235 20 23.86
17 61 17.235 21 24.86
18 61 18.235 22 25.86
19 61 19.235 23 26.86
20 61 20.235 24 27.86
21 62 20.235 24 28.86
22 62 20.235 24 29.86
23 62 20.235 24 30.86
24 62 20.235 24 31.86
25 62 20.235 24 32.86
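For reference, this example frame can be reconstructed from the table above:

```python
import pandas as pd

# values transcribed from the table above
df = pd.DataFrame({
    'A': [60] * 12 + [61] * 9 + [62] * 5,
    'B': [0.235 + i for i in range(12)]
         + [12.235 + i for i in range(9)]
         + [20.235] * 5,
    'C': list(range(4, 16)) + list(range(16, 25)) + [24] * 5,
    'D': [7.86 + i for i in range(12)]
         + [19.86 + i for i in range(9)]
         + [28.86 + i for i in range(5)],
})
```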
You can use:
#cumulative count per group
df['G'] = df.groupby('A').cumcount()
df = (df.groupby(['A','G'])
        .first()                          #aggregate first
        .unstack()                        #reshape DataFrame
        .ffill()                          #same as fillna(method='ffill')
        .stack()                          #get original shape
        .reset_index(drop=True, level=1)  #remove level G in index
        .reset_index())
print (df)
A B C D
0 60 0.235 4.0 7.86
1 60 1.235 5.0 8.86
2 60 2.235 6.0 9.86
3 60 3.235 7.0 10.86
4 60 4.235 8.0 11.86
5 60 5.235 9.0 12.86
6 60 6.235 10.0 13.86
7 60 7.235 11.0 14.86
8 60 8.235 12.0 15.86
9 60 9.235 13.0 16.86
10 60 10.235 14.0 17.86
11 60 11.235 15.0 18.86
12 61 12.235 16.0 19.86
13 61 13.235 17.0 20.86
14 61 14.235 18.0 21.86
15 61 15.235 19.0 22.86
16 61 16.235 20.0 23.86
17 61 17.235 21.0 24.86
18 61 18.235 22.0 25.86
19 61 19.235 23.0 26.86
20 61 20.235 24.0 27.86
21 61 9.235 13.0 16.86
22 61 10.235 14.0 17.86
23 61 11.235 15.0 18.86
24 62 20.235 24.0 28.86
25 62 20.235 24.0 29.86
26 62 20.235 24.0 30.86
27 62 20.235 24.0 31.86
28 62 20.235 24.0 32.86
29 62 17.235 21.0 24.86
30 62 18.235 22.0 25.86
31 62 19.235 23.0 26.86
32 62 20.235 24.0 27.86
33 62 9.235 13.0 16.86
34 62 10.235 14.0 17.86
35 62 11.235 15.0 18.86
Another solution with pivot_table:
df['G'] = df.groupby('A').cumcount()
df = (df.pivot_table(index='A', columns='G')
        .ffill()
        .stack()
        .reset_index(drop=True, level=1)
        .reset_index())
print (df)
A B C D
0 60 0.235 4.0 7.86
1 60 1.235 5.0 8.86
2 60 2.235 6.0 9.86
3 60 3.235 7.0 10.86
4 60 4.235 8.0 11.86
5 60 5.235 9.0 12.86
6 60 6.235 10.0 13.86
7 60 7.235 11.0 14.86
8 60 8.235 12.0 15.86
9 60 9.235 13.0 16.86
10 60 10.235 14.0 17.86
11 60 11.235 15.0 18.86
12 61 12.235 16.0 19.86
13 61 13.235 17.0 20.86
14 61 14.235 18.0 21.86
15 61 15.235 19.0 22.86
16 61 16.235 20.0 23.86
17 61 17.235 21.0 24.86
18 61 18.235 22.0 25.86
19 61 19.235 23.0 26.86
20 61 20.235 24.0 27.86
21 61 9.235 13.0 16.86
22 61 10.235 14.0 17.86
23 61 11.235 15.0 18.86
24 62 20.235 24.0 28.86
25 62 20.235 24.0 29.86
26 62 20.235 24.0 30.86
27 62 20.235 24.0 31.86
28 62 20.235 24.0 32.86
29 62 17.235 21.0 24.86
30 62 18.235 22.0 25.86
31 62 19.235 23.0 26.86
32 62 20.235 24.0 27.86
33 62 9.235 13.0 16.86
34 62 10.235 14.0 17.86
35 62 11.235 15.0 18.86
I have a dataframe sorted in descending order date that records the Rank of students in class and the predicted score.
Date Student_ID Rank Predicted_Score
4/7/2021 33 2 87
13/6/2021 33 4 88
31/3/2021 33 7 88
28/2/2021 33 2 86
14/2/2021 33 10 86
31/1/2021 33 8 86
23/12/2020 33 1 81
8/11/2020 33 3 80
21/10/2020 33 3 80
23/9/2020 33 4 80
20/5/2020 33 3 80
29/4/2020 33 4 80
15/4/2020 33 2 79
26/2/2020 33 3 79
12/2/2020 33 5 79
29/1/2020 33 1 70
I want to create a column called Recent_Predicted_Score that records the last Predicted_Score from a row where that student actually ranked in the top 3. So the desired outcome looks like:
Date Student_ID Rank Predicted_Score Recent_Predicted_Score
4/7/2021 33 2 87 86
13/6/2021 33 4 88 86
31/3/2021 33 7 88 86
28/2/2021 33 2 86 81
14/2/2021 33 10 86 81
31/1/2021 33 8 86 81
23/12/2020 33 1 81 80
8/11/2020 33 3 80 80
21/10/2020 33 3 80 80
23/9/2020 33 4 80 80
20/5/2020 33 3 80 79
29/4/2020 33 4 80 79
15/4/2020 33 2 79 79
26/2/2020 33 3 79 70
12/2/2020 33 5 79 70
29/1/2020 33 1 70
Here's what I have tried but it doesn't quite work, not sure if I am on the right track:
df.sort_values(by = ['Student_ID', 'Date'], ascending = [True, False], inplace = True)
lp1 = df['Predicted_Score'].where(df['Rank'].isin([1,2,3])).groupby(df['Student_ID']).bfill()
lp2 = df.groupby(['Student_ID', 'Rank'])['Predicted_Score'].shift(-1)
df = df.assign(Recent_Predicted_Score=lp1.mask(df['Rank'].isin([1,2,3]), lp2))
Thanks in advance.
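For reproducibility, the sample frame from the question's table can be built as follows (dates kept as day-first strings, as given):

```python
import pandas as pd

df = pd.DataFrame({
    'Date': ['4/7/2021', '13/6/2021', '31/3/2021', '28/2/2021', '14/2/2021',
             '31/1/2021', '23/12/2020', '8/11/2020', '21/10/2020', '23/9/2020',
             '20/5/2020', '29/4/2020', '15/4/2020', '26/2/2020', '12/2/2020',
             '29/1/2020'],
    'Student_ID': [33] * 16,
    'Rank': [2, 4, 7, 2, 10, 8, 1, 3, 3, 4, 3, 4, 2, 3, 5, 1],
    'Predicted_Score': [87, 88, 88, 86, 86, 86, 81, 80, 80, 80,
                        80, 80, 79, 79, 79, 70],
})
```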
Try:
df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)
df = df.sort_values(['Student_ID', 'Date'])
df['Recent_Predicted_Score'] = np.where(df['Rank'].isin([1, 2, 3]), df['Predicted_Score'], np.nan)
df['Recent_Predicted_Score'] = df.groupby('Student_ID', group_keys=False)['Recent_Predicted_Score'].apply(lambda x: x.ffill().shift().fillna(''))
df = df.sort_values(['Student_ID', 'Date'], ascending = [True, False])
print(df)
Prints:
Date Student_ID Rank Predicted_Score Recent_Predicted_Score
0 2021-07-04 33 2 87 86.0
1 2021-06-13 33 4 88 86.0
2 2021-03-31 33 7 88 86.0
3 2021-02-28 33 2 86 81.0
4 2021-02-14 33 10 86 81.0
5 2021-01-31 33 8 86 81.0
6 2020-12-23 33 1 81 80.0
7 2020-11-08 33 3 80 80.0
8 2020-10-21 33 3 80 80.0
9 2020-09-23 33 4 80 80.0
10 2020-05-20 33 3 80 79.0
11 2020-04-29 33 4 80 79.0
12 2020-04-15 33 2 79 79.0
13 2020-02-26 33 3 79 70.0
14 2020-02-12 33 5 79 70.0
15 2020-01-29 33 1 70
Mask the scores where the rank is greater than 3, then group the masked column by Student_ID and backward fill to propagate the last top-3 predicted score:
c = 'Recent_Predicted_Score'
df[c] = df['Predicted_Score'].mask(df['Rank'].gt(3))
df[c] = df.groupby('Student_ID')[c].apply(lambda s: s.shift(-1).bfill())
Result
Date Student_ID Rank Predicted_Score Recent_Predicted_Score
0 4/7/2021 33 2 87 86.0
1 13/6/2021 33 4 88 86.0
2 31/3/2021 33 7 88 86.0
3 28/2/2021 33 2 86 81.0
4 14/2/2021 33 10 86 81.0
5 31/1/2021 33 8 86 81.0
6 23/12/2020 33 1 81 80.0
7 8/11/2020 33 3 80 80.0
8 21/10/2020 33 3 80 80.0
9 23/9/2020 33 4 80 80.0
10 20/5/2020 33 3 80 79.0
11 29/4/2020 33 4 80 79.0
12 15/4/2020 33 2 79 79.0
13 26/2/2020 33 3 79 70.0
14 12/2/2020 33 5 79 70.0
15 29/1/2020 33 1 70 NaN
Note: Make sure your dataframe is sorted on Date in descending order.
Let's assume:
there may be more than one unique Student_ID
the rows are ordered by descending Date as indicated by OP, but may not be ordered by Student_ID
we want to preserve the index of the original dataframe
Subject to these assumptions, here's a way to do what your question asks:
df['Recent_Predicted_Score'] = df.loc[df.Rank <= 3, 'Predicted_Score']
df['Recent_Predicted_Score'] = ( df
.groupby('Student_ID', sort=False)
.apply(lambda group: group.shift(-1).bfill())
['Recent_Predicted_Score'] )
Explanation:
create a new column Recent_Predicted_Score containing Predicted_Score where Rank is in the top 3 and NaN otherwise
use groupby() on Student_ID with the sort argument set to False for better performance (note that groupby() preserves the order of rows within each group, specifically, not influencing the existing descending order by Date)
within each group, do shift(-1) and bfill() to get the desired result for Recent_Predicted_Score.
Sample input (with two distinct Student_ID values):
Date Student_ID Rank Predicted_Score
0 2021-07-04 33 2 87
1 2021-07-04 66 2 87
2 2021-06-13 33 4 88
3 2021-06-13 66 4 88
4 2021-03-31 33 7 88
5 2021-03-31 66 7 88
6 2021-02-28 33 2 86
7 2021-02-28 66 2 86
8 2021-02-14 33 10 86
9 2021-02-14 66 10 86
10 2021-01-31 33 8 86
11 2021-01-31 66 8 86
12 2020-12-23 33 1 81
13 2020-12-23 66 1 81
14 2020-11-08 33 3 80
15 2020-11-08 66 3 80
16 2020-10-21 33 3 80
17 2020-10-21 66 3 80
18 2020-09-23 33 4 80
19 2020-09-23 66 4 80
20 2020-05-20 33 3 80
21 2020-05-20 66 3 80
22 2020-04-29 33 4 80
23 2020-04-29 66 4 80
24 2020-04-15 33 2 79
25 2020-04-15 66 2 79
26 2020-02-26 33 3 79
27 2020-02-26 66 3 79
28 2020-02-12 33 5 79
29 2020-02-12 66 5 79
30 2020-01-29 33 1 70
31 2020-01-29 66 1 70
Output:
Date Student_ID Rank Predicted_Score Recent_Predicted_Score
0 2021-07-04 33 2 87 86.0
1 2021-07-04 66 2 87 86.0
2 2021-06-13 33 4 88 86.0
3 2021-06-13 66 4 88 86.0
4 2021-03-31 33 7 88 86.0
5 2021-03-31 66 7 88 86.0
6 2021-02-28 33 2 86 81.0
7 2021-02-28 66 2 86 81.0
8 2021-02-14 33 10 86 81.0
9 2021-02-14 66 10 86 81.0
10 2021-01-31 33 8 86 81.0
11 2021-01-31 66 8 86 81.0
12 2020-12-23 33 1 81 80.0
13 2020-12-23 66 1 81 80.0
14 2020-11-08 33 3 80 80.0
15 2020-11-08 66 3 80 80.0
16 2020-10-21 33 3 80 80.0
17 2020-10-21 66 3 80 80.0
18 2020-09-23 33 4 80 80.0
19 2020-09-23 66 4 80 80.0
20 2020-05-20 33 3 80 79.0
21 2020-05-20 66 3 80 79.0
22 2020-04-29 33 4 80 79.0
23 2020-04-29 66 4 80 79.0
24 2020-04-15 33 2 79 79.0
25 2020-04-15 66 2 79 79.0
26 2020-02-26 33 3 79 70.0
27 2020-02-26 66 3 79 70.0
28 2020-02-12 33 5 79 70.0
29 2020-02-12 66 5 79 70.0
30 2020-01-29 33 1 70 NaN
31 2020-01-29 66 1 70 NaN
Output sorted by Student_ID, Date for easier inspection:
Date Student_ID Rank Predicted_Score Recent_Predicted_Score
0 2021-07-04 33 2 87 86.0
2 2021-06-13 33 4 88 86.0
4 2021-03-31 33 7 88 86.0
6 2021-02-28 33 2 86 81.0
8 2021-02-14 33 10 86 81.0
10 2021-01-31 33 8 86 81.0
12 2020-12-23 33 1 81 80.0
14 2020-11-08 33 3 80 80.0
16 2020-10-21 33 3 80 80.0
18 2020-09-23 33 4 80 80.0
20 2020-05-20 33 3 80 79.0
22 2020-04-29 33 4 80 79.0
24 2020-04-15 33 2 79 79.0
26 2020-02-26 33 3 79 70.0
28 2020-02-12 33 5 79 70.0
30 2020-01-29 33 1 70 NaN
1 2021-07-04 66 2 87 86.0
3 2021-06-13 66 4 88 86.0
5 2021-03-31 66 7 88 86.0
7 2021-02-28 66 2 86 81.0
9 2021-02-14 66 10 86 81.0
11 2021-01-31 66 8 86 81.0
13 2020-12-23 66 1 81 80.0
15 2020-11-08 66 3 80 80.0
17 2020-10-21 66 3 80 80.0
19 2020-09-23 66 4 80 80.0
21 2020-05-20 66 3 80 79.0
23 2020-04-29 66 4 80 79.0
25 2020-04-15 66 2 79 79.0
27 2020-02-26 66 3 79 70.0
29 2020-02-12 66 5 79 70.0
31 2020-01-29 66 1 70 NaN
I have this dataframe in which I wish to replace every comma with a dot, so that, for example, the values become 50.5 and 81.5.
Unnamed: 0 NB Ppt Resale 5 yrs 10 yrs 15 yrs 20 yrs
1 VLCC 120 114 87 64 50,5 37
3 SUEZMAX 81,5 80 62 45 36 24
5 LR 2 69 72 57 42 32 20
7 AFRAMAX 66 68 55 40,5 30,5 19
9 LR 1 58 58 40 28 21 13,5
11 MR2 44 44,5 38 29 21 13
As dtypes for all the columns are object, I tried
df_useful[['NB', 'Ppt Resale ', '5 yrs', '10 yrs', '15 yrs',
'20 yrs']] = df_useful[['NB', 'Ppt Resale ', '5 yrs', '10 yrs', '15 yrs',
'20 yrs']].apply(pd.to_numeric, errors='coerce')
then the numbers with a comma become NaN.
A simple way:
out = df.replace(',', '.', regex=True)
Output:
Unnamed: 0 NB Ppt Resale 5 yrs 10 yrs 15 yrs 20 yrs
1 VLCC 120 114 87 64 50.5 37
3 SUEZMAX 81.5 80 62 45 36 24
5 LR 2 69 72 57 42 32 20
7 AFRAMAX 66 68 55 40.5 30.5 19
9 LR 1 58 58 40 28 21 13.5
11 MR2 44 44.5 38 29 21 13
If your goal is to convert to numeric automatically, you can use:
df2 = (df
.drop(columns='Unnamed: 0')
.select_dtypes(exclude='number')
.apply(lambda s: pd.to_numeric(s.str.replace(',', '.'),
errors='coerce'))
)
df[list(df2)] = df2
Output:
Unnamed: 0 NB Ppt Resale 5 yrs 10 yrs 15 yrs 20 yrs
1 VLCC 120.0 114.0 87 64.0 50.5 37.0
3 SUEZMAX 81.5 80.0 62 45.0 36.0 24.0
5 LR 2 69.0 72.0 57 42.0 32.0 20.0
7 AFRAMAX 66.0 68.0 55 40.5 30.5 19.0
9 LR 1 58.0 58.0 40 28.0 21.0 13.5
11 MR2 44.0 44.5 38 29.0 21.0 13.0
dtypes:
print(df.dtypes)
Unnamed: 0 object
NB float64
Ppt Resale float64
5 yrs int64
10 yrs float64
15 yrs float64
20 yrs float64
dtype: object
Another possible solution, based on the following idea:
Convert the dataframe to CSV format and then read the CSV string back, using the decimal separator parameter of pd.read_csv to have decimal dots instead of decimal commas.
from io import StringIO
pd.read_csv(StringIO(df.to_csv()), decimal=',', index_col=0)
Output:
Unnamed: 0 NB Ppt Resale 5 yrs 10 yrs 15 yrs 20 yrs
1 VLCC 120.0 114.0 87 64.0 50.5 37.0
3 SUEZMAX 81.5 80.0 62 45.0 36.0 24.0
5 LR 2 69.0 72.0 57 42.0 32.0 20.0
7 AFRAMAX 66.0 68.0 55 40.5 30.5 19.0
9 LR 1 58.0 58.0 40 28.0 21.0 13.5
11 MR2 44.0 44.5 38 29.0 21.0 13.0
I have a couple of data frames given this way :
38 47 7 20 35
45 76 63 96 24
98 53 2 87 80
83 86 92 48 1
73 60 26 94 6
80 50 29 53 92
66 90 79 98 46
40 21 58 38 60
35 13 72 28 6
48 76 51 96 12
79 80 24 37 51
86 70 1 22 71
52 69 10 83 13
12 40 3 0 30
46 50 48 76 5
Could you please tell me how it is possible to add them to a list of dataframes?
Thanks a lot!
First convert the values to one DataFrame with separator rows of missing values (converted from the blank lines):
df = pd.read_csv(file, header=None, skip_blank_lines=False)
print (df)
0 1 2 3 4
0 38.0 47.0 7.0 20.0 35.0
1 45.0 76.0 63.0 96.0 24.0
2 98.0 53.0 2.0 87.0 80.0
3 83.0 86.0 92.0 48.0 1.0
4 73.0 60.0 26.0 94.0 6.0
5 NaN NaN NaN NaN NaN
6 80.0 50.0 29.0 53.0 92.0
7 66.0 90.0 79.0 98.0 46.0
8 40.0 21.0 58.0 38.0 60.0
9 35.0 13.0 72.0 28.0 6.0
10 48.0 76.0 51.0 96.0 12.0
11 NaN NaN NaN NaN NaN
12 79.0 80.0 24.0 37.0 51.0
13 86.0 70.0 1.0 22.0 71.0
14 52.0 69.0 10.0 83.0 13.0
15 12.0 40.0 3.0 0.0 30.0
16 46.0 50.0 48.0 76.0 5.0
And then use a list comprehension to create the smaller DataFrames (dropping the NaN separator rows):
dfs = [g.dropna().astype(int).reset_index(drop=True)
       for _, g in df.groupby(df[0].isna().cumsum())]
print (dfs[1])
0 1 2 3 4
0 80 50 29 53 92
1 66 90 79 98 46
2 40 21 58 38 60
3 35 13 72 28 6
4 48 76 51 96 12
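If you prefer not to go through NaN separator rows at all, an alternative sketch (assuming the raw text is whitespace-separated, as shown in the question) splits on blank lines before parsing:

```python
from io import StringIO
import pandas as pd

raw = """38 47 7 20 35
45 76 63 96 24
98 53 2 87 80
83 86 92 48 1
73 60 26 94 6

80 50 29 53 92
66 90 79 98 46
40 21 58 38 60
35 13 72 28 6
48 76 51 96 12

79 80 24 37 51
86 70 1 22 71
52 69 10 83 13
12 40 3 0 30
46 50 48 76 5"""

# one DataFrame per blank-line-separated block
dfs = [pd.read_csv(StringIO(block), sep=r'\s+', header=None)
       for block in raw.split('\n\n')]
```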
I have a dataframe using
year_start = '2020-03-29'
year_end = '2021-04-10'
week_end_sat = pd.DataFrame(pd.date_range(year_start, year_end, freq=f'W-SAT'), columns=['a'])
How can I make another column specifying the week number, treating 2020-03-29 as the first day of the calendar? I am trying to make a 4-4-5 calendar, which always ends on a Saturday.
Final df that I want is,
a | count
2020-04-04 | 1
2020-04-11 | 2
.
.
.
2021-04-03 | 53 #since 2020 is a leap year there are 53 weeks otherwise it will be 52 weeks
2021-04-10 | 1
2021-04-17 | 2
.
2022-03-02 | 52
2022-04-09 | 1
I think you can create a baseline date range starting from the first day of your given year_start.
first_day_of_year = week_end_sat.iloc[0, 0].replace(day=1, month=1)
baseline = pd.Series(pd.date_range(first_day_of_year, periods=len(week_end_sat), freq='W-SAT'))
The baseline's week of year is what you want.
week_end_sat['count'] = baseline.dt.isocalendar().week
# print(week_end_sat)
a count
0 2020-04-04 1
1 2020-04-11 2
2 2020-04-18 3
3 2020-04-25 4
4 2020-05-02 5
5 2020-05-09 6
6 2020-05-16 7
7 2020-05-23 8
8 2020-05-30 9
9 2020-06-06 10
10 2020-06-13 11
11 2020-06-20 12
12 2020-06-27 13
13 2020-07-04 14
14 2020-07-11 15
15 2020-07-18 16
16 2020-07-25 17
17 2020-08-01 18
18 2020-08-08 19
19 2020-08-15 20
20 2020-08-22 21
21 2020-08-29 22
...
43 2021-01-30 44
44 2021-02-06 45
45 2021-02-13 46
46 2021-02-20 47
47 2021-02-27 48
48 2021-03-06 49
49 2021-03-13 50
50 2021-03-20 51
51 2021-03-27 52
52 2021-04-03 53
53 2021-04-10 1
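Putting the pieces together, a self-contained sketch of this approach (note that baseline is a plain Series, so its dates are accessed directly rather than through a column name):

```python
import pandas as pd

year_start = '2020-03-29'
year_end = '2021-04-10'
week_end_sat = pd.DataFrame(
    pd.date_range(year_start, year_end, freq='W-SAT'), columns=['a'])

# baseline range of equal length, starting from Jan 1 of the starting year
first_day_of_year = week_end_sat.iloc[0, 0].replace(day=1, month=1)
baseline = pd.Series(
    pd.date_range(first_day_of_year, periods=len(week_end_sat), freq='W-SAT'))

# the ISO week number of the baseline dates is the desired 1..52/53 counter
week_end_sat['count'] = baseline.dt.isocalendar().week
```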
I calculated the week number using a W-SAT frequency and the isocalendar API. I then created a baseline starting from the first day of the year and assigned its week number to baseline_week_number, so each week has an associated baseline week number.
year_start = '2020-03-29'
year_end = '2021-04-10'
df = pd.DataFrame(pd.date_range(year_start, year_end, freq='W-SAT'), columns=['week_date'])
df['week_number'] = df['week_date'].apply(lambda ts: ts.isocalendar()[1])
first_day_of_year = df.iloc[0, 0].replace(day=1, month=1)
baseline = pd.Series(pd.date_range(first_day_of_year, periods=len(df), freq='W-SAT'))
df['baseline_date'] = baseline
df['baseline_week_number'] = df['baseline_date'].apply(lambda ts: ts.isocalendar()[1])
print(df)
output:
week_date week_number baseline_date baseline_week_number
0 2020-04-04 14 2020-01-04 1
1 2020-04-11 15 2020-01-11 2
2 2020-04-18 16 2020-01-18 3
3 2020-04-25 17 2020-01-25 4
4 2020-05-02 18 2020-02-01 5
5 2020-05-09 19 2020-02-08 6
6 2020-05-16 20 2020-02-15 7
7 2020-05-23 21 2020-02-22 8
8 2020-05-30 22 2020-02-29 9
9 2020-06-06 23 2020-03-07 10
10 2020-06-13 24 2020-03-14 11
11 2020-06-20 25 2020-03-21 12
12 2020-06-27 26 2020-03-28 13
13 2020-07-04 27 2020-04-04 14
14 2020-07-11 28 2020-04-11 15
15 2020-07-18 29 2020-04-18 16
16 2020-07-25 30 2020-04-25 17
17 2020-08-01 31 2020-05-02 18
18 2020-08-08 32 2020-05-09 19
19 2020-08-15 33 2020-05-16 20
20 2020-08-22 34 2020-05-23 21
21 2020-08-29 35 2020-05-30 22
22 2020-09-05 36 2020-06-06 23
23 2020-09-12 37 2020-06-13 24
24 2020-09-19 38 2020-06-20 25
25 2020-09-26 39 2020-06-27 26
26 2020-10-03 40 2020-07-04 27
27 2020-10-10 41 2020-07-11 28
28 2020-10-17 42 2020-07-18 29
29 2020-10-24 43 2020-07-25 30
30 2020-10-31 44 2020-08-01 31
31 2020-11-07 45 2020-08-08 32
32 2020-11-14 46 2020-08-15 33
33 2020-11-21 47 2020-08-22 34
34 2020-11-28 48 2020-08-29 35
35 2020-12-05 49 2020-09-05 36
36 2020-12-12 50 2020-09-12 37
37 2020-12-19 51 2020-09-19 38
38 2020-12-26 52 2020-09-26 39
39 2021-01-02 53 2020-10-03 40
40 2021-01-09 1 2020-10-10 41
41 2021-01-16 2 2020-10-17 42
42 2021-01-23 3 2020-10-24 43
43 2021-01-30 4 2020-10-31 44
44 2021-02-06 5 2020-11-07 45
45 2021-02-13 6 2020-11-14 46
46 2021-02-20 7 2020-11-21 47
47 2021-02-27 8 2020-11-28 48
48 2021-03-06 9 2020-12-05 49
49 2021-03-13 10 2020-12-12 50
50 2021-03-20 11 2020-12-19 51
51 2021-03-27 12 2020-12-26 52
52 2021-04-03 13 2021-01-02 53
53 2021-04-10 14 2021-01-09 1
I want to calculate the mean of the previous 10 days' values for each day.
For example, in the result table, column A shows 44 for '1/11/2000', which is the average of the A values from '1/1/2000' to '1/10/2000'.
Raw Data:
A B C
1/1/2000 60 62 88
1/2/2000 46 99 28
1/3/2000 20 23 94
1/4/2000 28 19 79
1/5/2000 58 45 12
1/6/2000 50 46 62
1/7/2000 68 4 55
1/8/2000 54 64 79
1/9/2000 26 41 63
1/10/2000 33 10 18
1/11/2000 37 82 73
1/12/2000 67 33 29
1/13/2000 2 82 17
1/14/2000 82 74 51
1/15/2000 9 46 81
1/16/2000 72 84 70
1/17/2000 74 77 100
1/18/2000 19 88 37
Result:
A B C
1/1/2000
1/2/2000
1/3/2000
1/4/2000
1/5/2000
1/6/2000
1/7/2000
1/8/2000
1/9/2000
1/10/2000
1/11/2000 44 41 58
1/12/2000 42 43 56
1/13/2000 44 37 56
1/14/2000 42 43 49
1/15/2000 48 48 46
1/16/2000 43 48 53
1/17/2000 45 52 54
1/18/2000 46 59 58
You can use rolling.mean() with a shift:
df.rolling(window=10).mean().round().shift()
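A runnable sketch on the data above (values transcribed from the raw-data table; a daily DatetimeIndex stands in for the string dates):

```python
import pandas as pd

data = {
    'A': [60, 46, 20, 28, 58, 50, 68, 54, 26, 33,
          37, 67, 2, 82, 9, 72, 74, 19],
    'B': [62, 99, 23, 19, 45, 46, 4, 64, 41, 10,
          82, 33, 82, 74, 46, 84, 77, 88],
    'C': [88, 28, 94, 79, 12, 62, 55, 79, 63, 18,
          73, 29, 17, 51, 81, 70, 100, 37],
}
idx = pd.date_range('2000-01-01', periods=18, freq='D')
df = pd.DataFrame(data, index=idx)

# 10-day mean, shifted down one row so each day only sees *previous* days
out = df.rolling(window=10).mean().shift().round()
```

The first ten rows are NaN, matching the blank rows in the desired result.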
Warning/Caveat
numpy often (not always) provides more performant solutions. However, they are also less intuitive and less flexible. I'm offering this solution to provide useful information to the community. I wouldn't recommend it to someone just getting familiar with pandas and numpy. I suggest you read @Jeff's comments below as well.
numpy using as_strided
import pandas as pd
import numpy as np
from numpy.lib.stride_tricks import as_strided as stride
v = df.values
n, m = v.shape
s1, s2 = v.strides
# note that `np.nanmean` is used to address potential nan values
pd.DataFrame(
np.nanmean(stride(v, (n - 9, 10, m), (s1, s1, s2)), 1).round(),
df.index[9:], df.columns
)
A B C
1/10/2000 44.0 41.0 58.0
1/11/2000 42.0 43.0 56.0
1/12/2000 44.0 37.0 56.0
1/13/2000 42.0 43.0 49.0
1/14/2000 48.0 48.0 46.0
1/15/2000 43.0 48.0 53.0
1/16/2000 45.0 52.0 54.0
1/17/2000 46.0 59.0 58.0
1/18/2000 42.0 62.0 54.0