How to calculate previous n days mean using pandas?

I want to calculate the mean of the previous 10 days for each day.
For example, in the result table, column A for '1/11/2000' shows 44, which is the average of the A values from '1/1/2000' to '1/10/2000'.
Raw Data:
A B C
1/1/2000 60 62 88
1/2/2000 46 99 28
1/3/2000 20 23 94
1/4/2000 28 19 79
1/5/2000 58 45 12
1/6/2000 50 46 62
1/7/2000 68 4 55
1/8/2000 54 64 79
1/9/2000 26 41 63
1/10/2000 33 10 18
1/11/2000 37 82 73
1/12/2000 67 33 29
1/13/2000 2 82 17
1/14/2000 82 74 51
1/15/2000 9 46 81
1/16/2000 72 84 70
1/17/2000 74 77 100
1/18/2000 19 88 37
Result:
A B C
1/1/2000
1/2/2000
1/3/2000
1/4/2000
1/5/2000
1/6/2000
1/7/2000
1/8/2000
1/9/2000
1/10/2000
1/11/2000 44 41 58
1/12/2000 42 43 56
1/13/2000 44 37 56
1/14/2000 42 43 49
1/15/2000 48 48 46
1/16/2000 43 48 53
1/17/2000 45 52 54
1/18/2000 46 59 58

You can use rolling.mean() with a shift:
df.rolling(window=10).mean().applymap(round).shift()
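For a quick check, a minimal reproducible sketch could look like this (assumptions: the dates are used as a DatetimeIndex, only the first 12 rows of the sample are typed out, and .round() is used instead of applymap(round)):
import pandas as pd

df = pd.DataFrame(
    {'A': [60, 46, 20, 28, 58, 50, 68, 54, 26, 33, 37, 67],
     'B': [62, 99, 23, 19, 45, 46, 4, 64, 41, 10, 82, 33],
     'C': [88, 28, 94, 79, 12, 62, 55, 79, 63, 18, 73, 29]},
    index=pd.date_range('2000-01-01', periods=12, freq='D'))

# mean over a 10-row window, shifted so each day only sees the previous 10 days
out = df.rolling(window=10).mean().shift().round()
print(out.loc['2000-01-11':])   # 2000-01-11 -> A=44, B=41, C=58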

Warning/Caveat
numpy often (not always) provides more performant solutions. However, they are also less intuitive and less flexible. I'm offering this solution to provide useful information to the community. I wouldn't recommend it to someone just getting familiar with pandas and numpy. I suggest you read @Jeff's comments on the original answer as well.
numpy using as_strided
import pandas as pd
import numpy as np
from numpy.lib.stride_tricks import as_strided as stride
v = df.values
n, m = v.shape
s1, s2 = v.strides
# note that `np.nanmean` is used to address potential nan values
pd.DataFrame(
    np.nanmean(stride(v, (n - 9, 10, m), (s1, s1, s2)), 1).round(),
    df.index[9:], df.columns
)
A B C
1/10/2000 44.0 41.0 58.0
1/11/2000 42.0 43.0 56.0
1/12/2000 44.0 37.0 56.0
1/13/2000 42.0 43.0 49.0
1/14/2000 48.0 48.0 46.0
1/15/2000 43.0 48.0 53.0
1/16/2000 45.0 52.0 54.0
1/17/2000 46.0 59.0 58.0
1/18/2000 42.0 62.0 54.0
Time test (the original answer included a timing plot comparing the two approaches).
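A rough timing sketch of the two approaches could look like this (the benchmark setup below is an assumption; the original answer presented its results as a plot):
import numpy as np
import pandas as pd
from numpy.lib.stride_tricks import as_strided as stride
from timeit import timeit

df = pd.DataFrame(np.random.randint(0, 100, (10_000, 3)), columns=list('ABC'))

def with_rolling(df):
    # pandas rolling mean, shifted to exclude the current row
    return df.rolling(window=10).mean().shift()

def with_stride(df):
    # sliding windows built directly on the underlying array
    v = df.values
    n, m = v.shape
    s1, s2 = v.strides
    windows = stride(v, (n - 9, 10, m), (s1, s1, s2))
    return pd.DataFrame(np.nanmean(windows, 1), df.index[9:], df.columns)

print('rolling:', timeit(lambda: with_rolling(df), number=100))
print('stride: ', timeit(lambda: with_stride(df), number=100))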


New column based on last time row value equals some numbers in Pandas dataframe

I have a dataframe, sorted by date in descending order, that records the Rank of students in a class and their predicted score.
Date Student_ID Rank Predicted_Score
4/7/2021 33 2 87
13/6/2021 33 4 88
31/3/2021 33 7 88
28/2/2021 33 2 86
14/2/2021 33 10 86
31/1/2021 33 8 86
23/12/2020 33 1 81
8/11/2020 33 3 80
21/10/2020 33 3 80
23/9/2020 33 4 80
20/5/2020 33 3 80
29/4/2020 33 4 80
15/4/2020 33 2 79
26/2/2020 33 3 79
12/2/2020 33 5 79
29/1/2020 33 1 70
I want to create a column called Recent_Predicted_Score that records the last Predicted_Score where that student actually ranked in the top 3. The desired outcome looks like:
Date Student_ID Rank Predicted_Score Recent_Predicted_Score
4/7/2021 33 2 87 86
13/6/2021 33 4 88 86
31/3/2021 33 7 88 86
28/2/2021 33 2 86 81
14/2/2021 33 10 86 81
31/1/2021 33 8 86 81
23/12/2020 33 1 81 80
8/11/2020 33 3 80 80
21/10/2020 33 3 80 80
23/9/2020 33 4 80 80
20/5/2020 33 3 80 79
29/4/2020 33 4 80 79
15/4/2020 33 2 79 79
26/2/2020 33 3 79 70
12/2/2020 33 5 79 70
29/1/2020 33 1 70
Here's what I have tried, but it doesn't quite work; I'm not sure if I am on the right track:
df.sort_values(by = ['Student_ID', 'Date'], ascending = [True, False], inplace = True)
lp1 = df['Predicted_Score'].where(df['Rank'].isin([1,2,3])).groupby(df['Student_ID']).bfill()
lp2 = df.groupby(['Student_ID', 'Rank'])['Predicted_Score'].shift(-1)
df = df.assign(Recent_Predicted_Score=lp1.mask(df['Rank'].isin([1,2,3]), lp2))
Thanks in advance.
Try:
df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)
df = df.sort_values(['Student_ID', 'Date'])
df['Recent_Predicted_Score'] = np.where(df['Rank'].isin([1, 2, 3]), df['Predicted_Score'], np.nan)
df['Recent_Predicted_Score'] = df.groupby('Student_ID', group_keys=False)['Recent_Predicted_Score'].apply(lambda x: x.ffill().shift().fillna(''))
df = df.sort_values(['Student_ID', 'Date'], ascending = [True, False])
print(df)
Prints:
Date Student_ID Rank Predicted_Score Recent_Predicted_Score
0 2021-07-04 33 2 87 86.0
1 2021-06-13 33 4 88 86.0
2 2021-03-31 33 7 88 86.0
3 2021-02-28 33 2 86 81.0
4 2021-02-14 33 10 86 81.0
5 2021-01-31 33 8 86 81.0
6 2020-12-23 33 1 81 80.0
7 2020-11-08 33 3 80 80.0
8 2020-10-21 33 3 80 80.0
9 2020-09-23 33 4 80 80.0
10 2020-05-20 33 3 80 79.0
11 2020-04-29 33 4 80 79.0
12 2020-04-15 33 2 79 79.0
13 2020-02-26 33 3 79 70.0
14 2020-02-12 33 5 79 70.0
15 2020-01-29 33 1 70
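One note on that approach: the final .fillna('') reproduces the blank cell from the desired output, but it turns the column into object dtype, mixing floats and an empty string. If a numeric column is preferred, the same groupby line can be used without it, leaving a NaN in the last row instead (a sketch, as a drop-in replacement for that line):
df['Recent_Predicted_Score'] = (
    df.groupby('Student_ID', group_keys=False)['Recent_Predicted_Score']
      .apply(lambda x: x.ffill().shift())
)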
Mask the scores where Rank is greater than 3, then group the masked column by Student_ID, shift up by one row, and backward fill to propagate the most recent qualifying predicted score:
c = 'Recent_Predicted_Score'
df[c] = df['Predicted_Score'].mask(df['Rank'].gt(3))
df[c] = df.groupby('Student_ID')[c].apply(lambda s: s.shift(-1).bfill())
Result
Date Student_ID Rank Predicted_Score Recent_Predicted_Score
0 4/7/2021 33 2 87 86.0
1 13/6/2021 33 4 88 86.0
2 31/3/2021 33 7 88 86.0
3 28/2/2021 33 2 86 81.0
4 14/2/2021 33 10 86 81.0
5 31/1/2021 33 8 86 81.0
6 23/12/2020 33 1 81 80.0
7 8/11/2020 33 3 80 80.0
8 21/10/2020 33 3 80 80.0
9 23/9/2020 33 4 80 80.0
10 20/5/2020 33 3 80 79.0
11 29/4/2020 33 4 80 79.0
12 15/4/2020 33 2 79 79.0
13 26/2/2020 33 3 79 70.0
14 12/2/2020 33 5 79 70.0
15 29/1/2020 33 1 70 NaN
Note: Make sure your dataframe is sorted on Date in descending order.
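If the frame is not already in that order, it can be sorted first, for example (a sketch, assuming Date holds day-first strings such as '4/7/2021' and df is the dataframe from the question):
import pandas as pd

df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)
df = df.sort_values(['Student_ID', 'Date'], ascending=[True, False]).reset_index(drop=True)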
Let's assume:
there may be more than one unique Student_ID
the rows are ordered by descending Date as indicated by OP, but may not be ordered by Student_ID
we want to preserve the index of the original dataframe
Subject to these assumptions, here's a way to do what your question asks:
df['Recent_Predicted_Score'] = df.loc[df.Rank <= 3, 'Predicted_Score']
df['Recent_Predicted_Score'] = (
    df.groupby('Student_ID', sort=False)
      .apply(lambda group: group.shift(-1).bfill())
      ['Recent_Predicted_Score']
)
Explanation:
create a new column Recent_Predicted_Score containing the Predicted_Score where Rank is in the top 3 and NaN otherwise
use groupby() on Student_ID with the sort argument set to False for better performance (note that groupby() preserves the order of rows within each group, so the existing descending order by Date is not affected)
within each group, do shift(-1) and bfill() to get the desired result for Recent_Predicted_Score.
Sample input (with two distinct Student_ID values):
Date Student_ID Rank Predicted_Score
0 2021-07-04 33 2 87
1 2021-07-04 66 2 87
2 2021-06-13 33 4 88
3 2021-06-13 66 4 88
4 2021-03-31 33 7 88
5 2021-03-31 66 7 88
6 2021-02-28 33 2 86
7 2021-02-28 66 2 86
8 2021-02-14 33 10 86
9 2021-02-14 66 10 86
10 2021-01-31 33 8 86
11 2021-01-31 66 8 86
12 2020-12-23 33 1 81
13 2020-12-23 66 1 81
14 2020-11-08 33 3 80
15 2020-11-08 66 3 80
16 2020-10-21 33 3 80
17 2020-10-21 66 3 80
18 2020-09-23 33 4 80
19 2020-09-23 66 4 80
20 2020-05-20 33 3 80
21 2020-05-20 66 3 80
22 2020-04-29 33 4 80
23 2020-04-29 66 4 80
24 2020-04-15 33 2 79
25 2020-04-15 66 2 79
26 2020-02-26 33 3 79
27 2020-02-26 66 3 79
28 2020-02-12 33 5 79
29 2020-02-12 66 5 79
30 2020-01-29 33 1 70
31 2020-01-29 66 1 70
Output:
Date Student_ID Rank Predicted_Score Recent_Predicted_Score
0 2021-07-04 33 2 87 86.0
1 2021-07-04 66 2 87 86.0
2 2021-06-13 33 4 88 86.0
3 2021-06-13 66 4 88 86.0
4 2021-03-31 33 7 88 86.0
5 2021-03-31 66 7 88 86.0
6 2021-02-28 33 2 86 81.0
7 2021-02-28 66 2 86 81.0
8 2021-02-14 33 10 86 81.0
9 2021-02-14 66 10 86 81.0
10 2021-01-31 33 8 86 81.0
11 2021-01-31 66 8 86 81.0
12 2020-12-23 33 1 81 80.0
13 2020-12-23 66 1 81 80.0
14 2020-11-08 33 3 80 80.0
15 2020-11-08 66 3 80 80.0
16 2020-10-21 33 3 80 80.0
17 2020-10-21 66 3 80 80.0
18 2020-09-23 33 4 80 80.0
19 2020-09-23 66 4 80 80.0
20 2020-05-20 33 3 80 79.0
21 2020-05-20 66 3 80 79.0
22 2020-04-29 33 4 80 79.0
23 2020-04-29 66 4 80 79.0
24 2020-04-15 33 2 79 79.0
25 2020-04-15 66 2 79 79.0
26 2020-02-26 33 3 79 70.0
27 2020-02-26 66 3 79 70.0
28 2020-02-12 33 5 79 70.0
29 2020-02-12 66 5 79 70.0
30 2020-01-29 33 1 70 NaN
31 2020-01-29 66 1 70 NaN
Output sorted by Student_ID, Date for easier inspection:
Date Student_ID Rank Predicted_Score Recent_Predicted_Score
0 2021-07-04 33 2 87 86.0
2 2021-06-13 33 4 88 86.0
4 2021-03-31 33 7 88 86.0
6 2021-02-28 33 2 86 81.0
8 2021-02-14 33 10 86 81.0
10 2021-01-31 33 8 86 81.0
12 2020-12-23 33 1 81 80.0
14 2020-11-08 33 3 80 80.0
16 2020-10-21 33 3 80 80.0
18 2020-09-23 33 4 80 80.0
20 2020-05-20 33 3 80 79.0
22 2020-04-29 33 4 80 79.0
24 2020-04-15 33 2 79 79.0
26 2020-02-26 33 3 79 70.0
28 2020-02-12 33 5 79 70.0
30 2020-01-29 33 1 70 NaN
1 2021-07-04 66 2 87 86.0
3 2021-06-13 66 4 88 86.0
5 2021-03-31 66 7 88 86.0
7 2021-02-28 66 2 86 81.0
9 2021-02-14 66 10 86 81.0
11 2021-01-31 66 8 86 81.0
13 2020-12-23 66 1 81 80.0
15 2020-11-08 66 3 80 80.0
17 2020-10-21 66 3 80 80.0
19 2020-09-23 66 4 80 80.0
21 2020-05-20 66 3 80 79.0
23 2020-04-29 66 4 80 79.0
25 2020-04-15 66 2 79 79.0
27 2020-02-26 66 3 79 70.0
29 2020-02-12 66 5 79 70.0
31 2020-01-29 66 1 70 NaN

How to replace the comma in numbers in a dataframe with a dot?

I have this dataframe in which I wish to replace all the commas with dots; for example, 50,5 and 81,5 would become 50.5 and 81.5.
Unnamed: 0 NB Ppt Resale 5 yrs 10 yrs 15 yrs 20 yrs
1 VLCC 120 114 87 64 50,5 37
3 SUEZMAX 81,5 80 62 45 36 24
5 LR 2 69 72 57 42 32 20
7 AFRAMAX 66 68 55 40,5 30,5 19
9 LR 1 58 58 40 28 21 13,5
11 MR2 44 44,5 38 29 21 13
As the dtypes of all the columns are object, I tried:
cols = ['NB', 'Ppt Resale ', '5 yrs', '10 yrs', '15 yrs', '20 yrs']
df_useful[cols] = df_useful[cols].apply(pd.to_numeric, errors='coerce')
but then the numbers with a comma become NaN.
A simple way:
out = df.replace(',', '.', regex=True)
Output:
Unnamed: 0 NB Ppt Resale 5 yrs 10 yrs 15 yrs 20 yrs
1 VLCC 120 114 87 64 50.5 37
3 SUEZMAX 81.5 80 62 45 36 24
5 LR 2 69 72 57 42 32 20
7 AFRAMAX 66 68 55 40.5 30.5 19
9 LR 1 58 58 40 28 21 13.5
11 MR2 44 44.5 38 29 21 13
If your goal is to convert to numeric automatically, you can use:
df2 = (df
       .drop(columns='Unnamed: 0')
       .select_dtypes(exclude='number')
       .apply(lambda s: pd.to_numeric(s.str.replace(',', '.'),
                                      errors='coerce'))
      )
df[list(df2)] = df2
Output:
Unnamed: 0 NB Ppt Resale 5 yrs 10 yrs 15 yrs 20 yrs
1 VLCC 120.0 114.0 87 64.0 50.5 37.0
3 SUEZMAX 81.5 80.0 62 45.0 36.0 24.0
5 LR 2 69.0 72.0 57 42.0 32.0 20.0
7 AFRAMAX 66.0 68.0 55 40.5 30.5 19.0
9 LR 1 58.0 58.0 40 28.0 21.0 13.5
11 MR2 44.0 44.5 38 29.0 21.0 13.0
dtypes:
print(df.dtypes)
Unnamed: 0 object
NB float64
Ppt Resale float64
5 yrs int64
10 yrs float64
15 yrs float64
20 yrs float64
dtype: object
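If you prefer, the replacement and the numeric conversion can also be written in a single pass over the relevant columns (a sketch, assuming every column except Unnamed: 0 should become numeric):
cols = df.columns.drop('Unnamed: 0')
df[cols] = df[cols].replace(',', '.', regex=True).apply(pd.to_numeric, errors='coerce')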
Another possible solution, based on the following idea:
Convert the dataframe to CSV format and then read the CSV string back, using the decimal separator parameter of pd.read_csv to have decimal dots instead of decimal commas.
from io import StringIO
pd.read_csv(StringIO(df.to_csv()), decimal=',', index_col=0)
Output:
Unnamed: 0 NB Ppt Resale 5 yrs 10 yrs 15 yrs 20 yrs
1 VLCC 120.0 114.0 87 64.0 50.5 37.0
3 SUEZMAX 81.5 80.0 62 45.0 36.0 24.0
5 LR 2 69.0 72.0 57 42.0 32.0 20.0
7 AFRAMAX 66.0 68.0 55 40.5 30.5 19.0
9 LR 1 58.0 58.0 40 28.0 21.0 13.5
11 MR2 44.0 44.5 38 29.0 21.0 13.0
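If the data originally comes from a CSV or text file, the round trip is not needed at all: the same decimal parameter can be passed when the file is read in the first place (a sketch; 'data.csv' is a hypothetical file name):
import pandas as pd

df = pd.read_csv('data.csv', decimal=',')   # treat ',' as the decimal separator while parsing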

Reading multiple DataFrames from a given input

I have a couple of data frames given this way:
38 47 7 20 35
45 76 63 96 24
98 53 2 87 80
83 86 92 48 1
73 60 26 94 6

80 50 29 53 92
66 90 79 98 46
40 21 58 38 60
35 13 72 28 6
48 76 51 96 12

79 80 24 37 51
86 70 1 22 71
52 69 10 83 13
12 40 3 0 30
46 50 48 76 5
Could you please tell me how it is possible to add them to a list of dataframes?
Thanks a lot!
First read all the values into one DataFrame, with the blank lines becoming separator rows of missing values:
# sep=r'\s+' is assumed here, since the sample data is whitespace-separated
df = pd.read_csv(file, sep=r'\s+', header=None, skip_blank_lines=False)
print (df)
0 1 2 3 4
0 38.0 47.0 7.0 20.0 35.0
1 45.0 76.0 63.0 96.0 24.0
2 98.0 53.0 2.0 87.0 80.0
3 83.0 86.0 92.0 48.0 1.0
4 73.0 60.0 26.0 94.0 6.0
5 NaN NaN NaN NaN NaN
6 80.0 50.0 29.0 53.0 92.0
7 66.0 90.0 79.0 98.0 46.0
8 40.0 21.0 58.0 38.0 60.0
9 35.0 13.0 72.0 28.0 6.0
10 48.0 76.0 51.0 96.0 12.0
11 NaN NaN NaN NaN NaN
12 79.0 80.0 24.0 37.0 51.0
13 86.0 70.0 1.0 22.0 71.0
14 52.0 69.0 10.0 83.0 13.0
15 12.0 40.0 3.0 0.0 30.0
16 46.0 50.0 48.0 76.0 5.0
Then build the smaller DataFrames in a list comprehension:
dfs = [g.iloc[1:].astype(int).reset_index(drop=True)
for _, g in df.groupby(df[0].isna().cumsum())]
print (dfs[1])
0 1 2 3 4
0 80 50 29 53 92
1 66 90 79 98 46
2 40 21 58 38 60
3 35 13 72 28 6
4 48 76 51 96 12
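For a fully self-contained variant, the raw text can also be split on the blank lines and each block read separately (a sketch; StringIO and the inlined data stand in for the real file):
import pandas as pd
from io import StringIO

data = """38 47 7 20 35
45 76 63 96 24
98 53 2 87 80
83 86 92 48 1
73 60 26 94 6

80 50 29 53 92
66 90 79 98 46
40 21 58 38 60
35 13 72 28 6
48 76 51 96 12

79 80 24 37 51
86 70 1 22 71
52 69 10 83 13
12 40 3 0 30
46 50 48 76 5"""

# each blank-line-separated block becomes its own DataFrame
dfs = [pd.read_csv(StringIO(block), sep=r'\s+', header=None)
       for block in data.split('\n\n')]
print(len(dfs))    # 3
print(dfs[1])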

Change a column format while ignoring (or keeping) NaN

I want to change a column of a DataFrame containing values in the format hh:mm:ss into a column containing the number of minutes (while keeping the NaN values).
I can't change it directly in the Excel file, so I've tried to do it with pandas (I'm working on an ML model with a health database):
38 00:35:00
39 00:50:00
40 00:45:00
41 01:32:00
42 00:29:00
43 NaN
44 00:45:00
45 00:13:00
46 00:20:00
47 00:31:00
48 00:54:00
49 00:43:00
50 02:33:00
I tried to separate the values from the NaN values using a mask, then convert to minutes with str.split():
df1 = df['delay'][df['delay'].notnull()].astype(str).str.split(':').apply(lambda x: int(x[0]) * 60 + int(x[1]))
df2 = df['delai_ponc_recal_calc'][df['delai_ponc_recal_calc'].isnull()]
But then I cannot merge the two series without losing the order (I get the NaN values, with their correct indexes, at the end of the merged series):
39 50
40 45
41 92
42 29
44 45
45 13
46 20
47 31
48 54
49 43
50 153
43 NaN
I also tried to go from hh:mm:ss to minutes with datetime.time and timedelta using a loop (without using a mask), but I still can't get a column (Series or DataFrame) with all the values in minutes while keeping the NaN ...
You can use pd.to_timedelta to convert the delay column to a pandas timedelta series, then divide it by a Timedelta of 1 minute to get the total minutes:
pd.to_timedelta(df['delay'], errors='coerce') / pd.Timedelta(1, 'min')
39 50.0
40 45.0
41 92.0
42 29.0
43 NaN
44 45.0
45 13.0
46 20.0
47 31.0
48 54.0
49 43.0
50 153.0
Name: delay, dtype: float64
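If the result should be stored back on the frame while keeping the missing values, the float column can be assigned directly, or rounded and cast to pandas' nullable integer dtype (a sketch, assuming a reasonably recent pandas version):
import numpy as np
import pandas as pd

df = pd.DataFrame({'delay': ['00:35:00', '01:32:00', np.nan]})   # small stand-in sample

df['delay_min'] = pd.to_timedelta(df['delay'], errors='coerce') / pd.Timedelta(1, 'min')
df['delay_min'] = df['delay_min'].round().astype('Int64')        # whole minutes, NaN kept as <NA>
print(df)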
Here are some possible solutions:
Input:
delay
38 00:35:00
39 00:50:00
40 00:45:00
41 01:32:00
42 00:29:00
43 NaN
44 00:45:00
45 00:13:00
46 00:20:00
47 00:31:00
48 00:54:00
49 00:43:00
50 02:33:00
Method 1: pd.to_datetime + map
df['delay'] = pd.to_datetime(df['delay'])
#using lambda function
df['delay2'] = df['delay'].map(lambda x : x.hour*60 + x.minute)
print(df['delay2'])
#df.drop(['delay'],axis=1,inplace=True)
Method 2: pd.to_datetime + dt
#converts the time column to pandas datetime64[ns] format
df['delay'] = pd.to_datetime(df['delay'])
#using dt to extract hour and minute data
df['delay2'] = df['delay'].dt.hour*60 + df['delay'].dt.minute
print(df['delay2'])
Output:
39 50.0
40 45.0
41 92.0
42 29.0
43 NaN
44 45.0
45 13.0
46 20.0
47 31.0
48 54.0
49 43.0
50 153.0
Name: Time, dtype: float64
You can use errors='ignore' for more general cases, so the original data is returned unchanged if the conversion fails:
df['column_name'].astype(int, errors='ignore')

Appending or Adding Rows in Pandas Dataframe

In the following DataFrame I would like to add rows if the count of values in column A is less than 10.
For example, in the following table, group 60 in column A appears 12 times, whereas group 61 appears 9 times. I would like to add rows after the last record of group 61 and copy the values in columns B, C and D from the corresponding rows of group 60. Similar operation for group 62, and so on.
A B C D
0 60 0.235 4 7.86
1 60 1.235 5 8.86
2 60 2.235 6 9.86
3 60 3.235 7 10.86
4 60 4.235 8 11.86
5 60 5.235 9 12.86
6 60 6.235 10 13.86
7 60 7.235 11 14.86
8 60 8.235 12 15.86
9 60 9.235 13 16.86
10 60 10.235 14 17.86
11 60 11.235 15 18.86
12 61 12.235 16 19.86
13 61 13.235 17 20.86
14 61 14.235 18 21.86
15 61 15.235 19 22.86
16 61 16.235 20 23.86
17 61 17.235 21 24.86
18 61 18.235 22 25.86
19 61 19.235 23 26.86
20 61 20.235 24 27.86
21 62 20.235 24 28.86
22 62 20.235 24 29.86
23 62 20.235 24 30.86
24 62 20.235 24 31.86
25 62 20.235 24 32.86
You can use:
#cumulative count per group
df['G'] = df.groupby('A').cumcount()
df = (df.groupby(['A', 'G'])
        .first()                          #aggregate first
        .unstack()                        #reshape DataFrame
        .ffill()                          #same as fillna(method='ffill')
        .stack()                          #get original shape
        .reset_index(drop=True, level=1)  #remove level G in index
        .reset_index())
print (df)
A B C D
0 60 0.235 4.0 7.86
1 60 1.235 5.0 8.86
2 60 2.235 6.0 9.86
3 60 3.235 7.0 10.86
4 60 4.235 8.0 11.86
5 60 5.235 9.0 12.86
6 60 6.235 10.0 13.86
7 60 7.235 11.0 14.86
8 60 8.235 12.0 15.86
9 60 9.235 13.0 16.86
10 60 10.235 14.0 17.86
11 60 11.235 15.0 18.86
12 61 12.235 16.0 19.86
13 61 13.235 17.0 20.86
14 61 14.235 18.0 21.86
15 61 15.235 19.0 22.86
16 61 16.235 20.0 23.86
17 61 17.235 21.0 24.86
18 61 18.235 22.0 25.86
19 61 19.235 23.0 26.86
20 61 20.235 24.0 27.86
21 61 9.235 13.0 16.86
22 61 10.235 14.0 17.86
23 61 11.235 15.0 18.86
24 62 20.235 24.0 28.86
25 62 20.235 24.0 29.86
26 62 20.235 24.0 30.86
27 62 20.235 24.0 31.86
28 62 20.235 24.0 32.86
29 62 17.235 21.0 24.86
30 62 18.235 22.0 25.86
31 62 19.235 23.0 26.86
32 62 20.235 24.0 27.86
33 62 9.235 13.0 16.86
34 62 10.235 14.0 17.86
35 62 11.235 15.0 18.86
Another solution with pivot_table:
df['G'] = df.groupby('A').cumcount()
df = (df.pivot_table(index='A', columns='G')
        .ffill()
        .stack()
        .reset_index(drop=True, level=1)
        .reset_index())
print (df)
A B C D
0 60 0.235 4.0 7.86
1 60 1.235 5.0 8.86
2 60 2.235 6.0 9.86
3 60 3.235 7.0 10.86
4 60 4.235 8.0 11.86
5 60 5.235 9.0 12.86
6 60 6.235 10.0 13.86
7 60 7.235 11.0 14.86
8 60 8.235 12.0 15.86
9 60 9.235 13.0 16.86
10 60 10.235 14.0 17.86
11 60 11.235 15.0 18.86
12 61 12.235 16.0 19.86
13 61 13.235 17.0 20.86
14 61 14.235 18.0 21.86
15 61 15.235 19.0 22.86
16 61 16.235 20.0 23.86
17 61 17.235 21.0 24.86
18 61 18.235 22.0 25.86
19 61 19.235 23.0 26.86
20 61 20.235 24.0 27.86
21 61 9.235 13.0 16.86
22 61 10.235 14.0 17.86
23 61 11.235 15.0 18.86
24 62 20.235 24.0 28.86
25 62 20.235 24.0 29.86
26 62 20.235 24.0 30.86
27 62 20.235 24.0 31.86
28 62 20.235 24.0 32.86
29 62 17.235 21.0 24.86
30 62 18.235 22.0 25.86
31 62 19.235 23.0 26.86
32 62 20.235 24.0 27.86
33 62 9.235 13.0 16.86
34 62 10.235 14.0 17.86
35 62 11.235 15.0 18.86
