I'm new to pandas and I'm trying to read a strangely formatted file into a DataFrame.
The original file looks like this:
; No Time Date MoistAve MatTemp TDRConduct TDRAve DeltaCount tpAve Moist1 Moist2 Moist3 Moist4 TDR1 TDR2 TDR3 TDR4
1 11:38:17 11.07.2012 11.37 48.20 5.15 88.87 15 344.50 11.84 11.35 11.59 15.25 89.0 89.0 89.0 88.0
2 11:38:18 11.07.2012 11.44 48.20 5.13 88.88 2 346.22 12.08 11.83 -1.00 -1.00 89.0 89.0 -1.0 -1.0
3 11:38:19 11.07.2012 11.10 48.20 4.96 89.00 3 337.84 11.83 11.59 10.62 -1.00 89.0 89.0 89.0 -1.0
4 11:38:19 11.07.2012 11.82 48.20 5.54 88.60 3 355.92 11.10 13.54 12.32 -1.00 89.0 88.0 88.0 -1.0
I managed to get an equally structured DataFrame with:
In [42]: date_spec = {'FetchTime': [1, 2]}
In [43]: df = pd.read_csv('MeasureCK32450-20120711114050.mck', header=7, sep='\s\s+',
parse_dates=date_spec, na_values=['-1.0', '-1.00'])
In [44]: df
Out[52]:
FetchTime ; No MoistAve MatTemp TDRConduct TDRAve DeltaCount tpAve Moist1 Moist2 Moist3 Moist4 TDR1 TDR2 TDR3 TDR4
0 2012-11-07 11:38:17 1 11.37 48.2 5.15 88.87 15 344.50 11.84 11.35 11.59 15.25 89 89 89 88
1 2012-11-07 11:38:18 2 11.44 48.2 5.13 88.88 2 346.22 12.08 11.83 NaN NaN 89 89 NaN NaN
2 2012-11-07 11:38:19 3 11.10 48.2 4.96 89.00 3 337.84 11.83 11.59 10.62 NaN 89 89 89 NaN
3 2012-11-07 11:38:19 4 11.82 48.2 5.54 88.60 3 355.92 11.10 13.54 12.32 NaN 89 88 88 NaN
But now I have to expand each line of this DataFrame
.... Moist1 Moist2 Moist3 Moist4 TDR1 TDR2 TDR3 TDR4
1 .... 11.84 11.35 11.59 15.25 89 89 89 88
2 .... 12.08 11.83 NaN NaN 89 89 NaN NaN
into four lines (with three indexes No, FetchTime, and MeasureNo):
.... Moist TDR
No FetchTime MeasureNo
0 2012-11-07 11:38:17 1 .... 11.84 89 # from line 1, Moist1 and TDR1
1 2 .... 11.35 89 # from line 1, Moist2 and TDR2
2 3 .... 11.59 89 # from line 1, Moist3 and TDR3
3 4 .... 15.25 88 # from line 1, Moist4 and TDR4
4 2012-11-07 11:38:18 1 .... 12.08 89 # from line 2, Moist1 and TDR1
5 2 .... 11.83 89 # from line 2, Moist2 and TDR2
6 3 .... NaN NaN # from line 2, Moist3 and TDR3
7 4 .... NaN NaN # from line 2, Moist4 and TDR4
while preserving the other columns and, most importantly, preserving the order of the entries. I
know I can iterate over each row with for row in df.iterrows(): ..., but I have read that this is
not very fast. My first approach was this:
In [54]: data = []
In [55]: for d in range(1,5):
....: temp = df.ix[:, ['FetchTime', 'MoistAve', 'MatTemp', 'TDRConduct', 'TDRAve', 'DeltaCount', 'tpAve', 'Moist%d' % d, 'TDR%d' % d]]
....: temp.columns = ['FetchTime', 'MoistAve', 'MatTemp', 'TDRConduct', 'TDRAve', 'DeltaCount', 'tpAve', 'RawMoist', 'RawTDR']
....: temp['MeasureNo'] = d
....: data.append(temp)
....:
In [56]: test = pd.concat(data, ignore_index=True)
In [62]: test.head()
Out[62]:
FetchTime MoistAve MatTemp TDRConduct TDRAve DeltaCount tpAve RawMoist RawTDR MeasureNo
0 2012-11-07 11:38:17 11.37 48.2 5.15 88.87 15 344.50 11.84 89 1
1 2012-11-07 11:38:18 11.44 48.2 5.13 88.88 2 346.22 12.08 89 1
2 2012-11-07 11:38:19 11.10 48.2 4.96 89.00 3 337.84 11.83 89 1
3 2012-11-07 11:38:19 11.82 48.2 5.54 88.60 3 355.92 11.10 89 1
4 2012-11-07 11:38:20 12.61 48.2 5.87 88.38 3 375.72 12.80 89 1
But I don't see a way to influence the concatenation to get the order I need ...
Is there another way to get the resulting DataFrame I need?
Here is a solution based on numpy's repeat and array indexing to build de-stacked values, and pandas' merge to output the concatenated result.
First, load a sample of your data into a DataFrame (with slightly changed read_csv arguments).
import pandas as pd
import numpy as np
from cStringIO import StringIO  # on Python 3: from io import StringIO
data = """; No Time Date MoistAve MatTemp TDRConduct TDRAve DeltaCount tpAve Moist1 Moist2 Moist3 Moist4 TDR1 TDR2 TDR3 TDR4
1 11:38:17 11.07.2012 11.37 48.20 5.15 88.87 15 344.50 11.84 11.35 11.59 15.25 89.0 89.0 89.0 88.0
2 11:38:18 11.07.2012 11.44 48.20 5.13 88.88 2 346.22 12.08 11.83 -1.00 -1.00 89.0 89.0 -1.0 -1.0
3 11:38:19 11.07.2012 11.10 48.20 4.96 89.00 3 337.84 11.83 11.59 10.62 -1.00 89.0 89.0 89.0 -1.0
4 11:38:19 11.07.2012 11.82 48.20 5.54 88.60 3 355.92 11.10 13.54 12.32 -1.00 89.0 88.0 88.0 -1.0
"""
date_spec = {'FetchTime': [1, 2]}
df = pd.read_csv(StringIO(data), header=0, sep='\s\s+', parse_dates=date_spec, na_values=['-1.0', '-1.00'])
Then build a de-stacked vector of TDRs and merge it with the original data frame
stacked_col_names = ['TDR1', 'TDR2', 'TDR3', 'TDR4']
repeated_row_indexes = np.repeat(np.arange(df.shape[0]), 4)
repeated_col_indexes = np.tile([np.where(df.columns == c)[0][0] for c in stacked_col_names], df.shape[0])
destacked_tdrs = pd.DataFrame(data=df.values[repeated_row_indexes, repeated_col_indexes],
                              index=df.index[repeated_row_indexes], columns=['TDR'])
output = pd.merge(left=df, right=destacked_tdrs, left_index=True, right_index=True)
With the desired output:
output.ix[:,['TDR1','TDR2','TDR3','TDR4','TDR']]
TDR1 TDR2 TDR3 TDR4 TDR
0 89 89 89 88 89
0 89 89 89 88 89
0 89 89 89 88 89
0 89 89 89 88 88
1 89 89 NaN NaN 89
1 89 89 NaN NaN 89
1 89 89 NaN NaN NaN
1 89 89 NaN NaN NaN
2 89 89 89 NaN 89
2 89 89 89 NaN 89
2 89 89 89 NaN 89
2 89 89 89 NaN NaN
3 89 88 88 NaN 89
3 89 88 88 NaN 88
3 89 88 88 NaN 88
3 89 88 88 NaN NaN
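If you also need the Moist columns (and a MeasureNo counter) in the same long frame, the same repeat/tile trick extends naturally; a rough sketch under the same assumptions, reusing the index arrays built above (the 'Moist' and 'MeasureNo' names are just illustrative):
# Sketch: de-stack Moist1..Moist4 alongside the TDRs, then merge everything at once
moist_col_names = ['Moist1', 'Moist2', 'Moist3', 'Moist4']
moist_col_indexes = np.tile([np.where(df.columns == c)[0][0] for c in moist_col_names], df.shape[0])
destacked = pd.DataFrame(
    {'TDR': df.values[repeated_row_indexes, repeated_col_indexes],
     'Moist': df.values[repeated_row_indexes, moist_col_indexes],
     'MeasureNo': np.tile(np.arange(1, 5), df.shape[0])},
    index=df.index[repeated_row_indexes])
output = pd.merge(left=df, right=destacked, left_index=True, right_index=True)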
This gives every fourth row in test, starting at i:
test.ix[i::4]
Using the same basic loop as above, just append the set of every fourth row, starting at 0 through 3, after you run your code above.
data = []
for i in range(0, 4):
    temp = test.ix[i::4]
    data.append(temp)
test2 = pd.concat(data, ignore_index=True)
Update:
I realize now that what you'd want isn't every fourth row but every mth row, so this would just be the loop suggestion above. Sorry.
Update 2:
Maybe not. We can take advantage of the fact that, even though the concatenation doesn't return the order you want, what it does return has a fixed mapping to what you do want. Here d is the number of rows per timestamp and m is the number of timestamps.
You seem to want the rows from test as follows:
[0,m,2m,3m,1,m+1,2m+1,3m+1,2,m+2,2m+2,3m+2,...,m-1,2m-1,3m-1,4m-1]
I'm sure there are much nicer ways to generate that list of indices, but this worked for me:
d = 4
m = 10
small = (np.arange(0,m).reshape(m,1).repeat(d,1).T.reshape(-1,1))
shifter = (np.arange(0,d).repeat(m).reshape(-1,1).T * m)
NewIndex = (shifter.reshape(d,-1) + small.reshape(d,-1)).T.reshape(-1,1)
NewIndex = NewIndex.reshape(-1)
test = test.ix[NewIndex]
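For what it's worth, the same index vector can probably be built more compactly: lay out 0..d*m-1 as d blocks of m, transpose, and flatten. This is only an equivalent sketch of the indices above, assuming the same d and m:
NewIndex = np.arange(d * m).reshape(d, m).T.ravel()  # [0, m, 2m, 3m, 1, m+1, 2m+1, 3m+1, ...]
test = test.ix[NewIndex]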
Related
I have a dataframe, sorted by date in descending order, that records the Rank of students in a class and the predicted score.
Date Student_ID Rank Predicted_Score
4/7/2021 33 2 87
13/6/2021 33 4 88
31/3/2021 33 7 88
28/2/2021 33 2 86
14/2/2021 33 10 86
31/1/2021 33 8 86
23/12/2020 33 1 81
8/11/2020 33 3 80
21/10/2020 33 3 80
23/9/2020 33 4 80
20/5/2020 33 3 80
29/4/2020 33 4 80
15/4/2020 33 2 79
26/2/2020 33 3 79
12/2/2020 33 5 79
29/1/2020 33 1 70
I want to create a column called Recent_Predicted_Score that records the most recent previous Predicted_Score where that student actually ranked in the top 3. So the desired outcome looks like:
Date Student_ID Rank Predicted_Score Recent_Predicted_Score
4/7/2021 33 2 87 86
13/6/2021 33 4 88 86
31/3/2021 33 7 88 86
28/2/2021 33 2 86 81
14/2/2021 33 10 86 81
31/1/2021 33 8 86 81
23/12/2020 33 1 81 80
8/11/2020 33 3 80 80
21/10/2020 33 3 80 80
23/9/2020 33 4 80 80
20/5/2020 33 3 80 79
29/4/2020 33 4 80 79
15/4/2020 33 2 79 79
26/2/2020 33 3 79 70
12/2/2020 33 5 79 70
29/1/2020 33 1 70
Here's what I have tried, but it doesn't quite work; I'm not sure if I am on the right track:
df.sort_values(by = ['Student_ID', 'Date'], ascending = [True, False], inplace = True)
lp1 = df['Predicted_Score'].where(df['Rank'].isin([1,2,3])).groupby(df['Student_ID']).bfill()
lp2 = df.groupby(['Student_ID', 'Rank'])['Predicted_Score'].shift(-1)
df = df.assign(Recent_Predicted_Score=lp1.mask(df['Rank'].isin([1,2,3]), lp2))
Thanks in advance.
Try:
df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)
df = df.sort_values(['Student_ID', 'Date'])
df['Recent_Predicted_Score'] = np.where(df['Rank'].isin([1, 2, 3]), df['Predicted_Score'], np.nan)
df['Recent_Predicted_Score'] = df.groupby('Student_ID', group_keys=False)['Recent_Predicted_Score'].apply(lambda x: x.ffill().shift().fillna(''))
df = df.sort_values(['Student_ID', 'Date'], ascending = [True, False])
print(df)
Prints:
Date Student_ID Rank Predicted_Score Recent_Predicted_Score
0 2021-07-04 33 2 87 86.0
1 2021-06-13 33 4 88 86.0
2 2021-03-31 33 7 88 86.0
3 2021-02-28 33 2 86 81.0
4 2021-02-14 33 10 86 81.0
5 2021-01-31 33 8 86 81.0
6 2020-12-23 33 1 81 80.0
7 2020-11-08 33 3 80 80.0
8 2020-10-21 33 3 80 80.0
9 2020-09-23 33 4 80 80.0
10 2020-05-20 33 3 80 79.0
11 2020-04-29 33 4 80 79.0
12 2020-04-15 33 2 79 79.0
13 2020-02-26 33 3 79 70.0
14 2020-02-12 33 5 79 70.0
15 2020-01-29 33 1 70
Mask the scores where Rank is greater than 3, then group the masked column by Student_ID, shift it up by one and backward fill to propagate the last top-3 predicted score:
c = 'Recent_Predicted_Score'
df[c] = df['Predicted_Score'].mask(df['Rank'].gt(3))
df[c] = df.groupby('Student_ID')[c].apply(lambda s: s.shift(-1).bfill())
Result
Date Student_ID Rank Predicted_Score Recent_Predicted_Score
0 4/7/2021 33 2 87 86.0
1 13/6/2021 33 4 88 86.0
2 31/3/2021 33 7 88 86.0
3 28/2/2021 33 2 86 81.0
4 14/2/2021 33 10 86 81.0
5 31/1/2021 33 8 86 81.0
6 23/12/2020 33 1 81 80.0
7 8/11/2020 33 3 80 80.0
8 21/10/2020 33 3 80 80.0
9 23/9/2020 33 4 80 80.0
10 20/5/2020 33 3 80 79.0
11 29/4/2020 33 4 80 79.0
12 15/4/2020 33 2 79 79.0
13 26/2/2020 33 3 79 70.0
14 12/2/2020 33 5 79 70.0
15 29/1/2020 33 1 70 NaN
Note: Make sure your dataframe is sorted on Date in descending order.
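If it is not already sorted that way, one way to do that first (assuming Date is a day-first string as in the sample):
df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)   # parse day-first dates
df = df.sort_values('Date', ascending=False).reset_index(drop=True)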
Let's assume:
there may be more than one unique Student_ID
the rows are ordered by descending Date as indicated by OP, but may not be ordered by Student_ID
we want to preserve the index of the original dataframe
Subject to these assumptions, here's a way to do what your question asks:
df['Recent_Predicted_Score'] = df.loc[df.Rank <= 3, 'Predicted_Score']
df['Recent_Predicted_Score'] = (df
    .groupby('Student_ID', sort=False)
    .apply(lambda group: group.shift(-1).bfill())
    ['Recent_Predicted_Score'])
Explanation:
create a new column Recent_Predicted_Score containing the Predicted_Score where Rank is in the top 3 and NaN otherwise
use groupby() on Student_ID with the sort argument set to False for better performance (note that groupby() preserves the order of rows within each group, so it does not disturb the existing descending order by Date)
within each group, do shift(-1) and bfill() to get the desired result for Recent_Predicted_Score.
Sample input (with two distinct Student_ID values):
Date Student_ID Rank Predicted_Score
0 2021-07-04 33 2 87
1 2021-07-04 66 2 87
2 2021-06-13 33 4 88
3 2021-06-13 66 4 88
4 2021-03-31 33 7 88
5 2021-03-31 66 7 88
6 2021-02-28 33 2 86
7 2021-02-28 66 2 86
8 2021-02-14 33 10 86
9 2021-02-14 66 10 86
10 2021-01-31 33 8 86
11 2021-01-31 66 8 86
12 2020-12-23 33 1 81
13 2020-12-23 66 1 81
14 2020-11-08 33 3 80
15 2020-11-08 66 3 80
16 2020-10-21 33 3 80
17 2020-10-21 66 3 80
18 2020-09-23 33 4 80
19 2020-09-23 66 4 80
20 2020-05-20 33 3 80
21 2020-05-20 66 3 80
22 2020-04-29 33 4 80
23 2020-04-29 66 4 80
24 2020-04-15 33 2 79
25 2020-04-15 66 2 79
26 2020-02-26 33 3 79
27 2020-02-26 66 3 79
28 2020-02-12 33 5 79
29 2020-02-12 66 5 79
30 2020-01-29 33 1 70
31 2020-01-29 66 1 70
Output:
Date Student_ID Rank Predicted_Score Recent_Predicted_Score
0 2021-07-04 33 2 87 86.0
1 2021-07-04 66 2 87 86.0
2 2021-06-13 33 4 88 86.0
3 2021-06-13 66 4 88 86.0
4 2021-03-31 33 7 88 86.0
5 2021-03-31 66 7 88 86.0
6 2021-02-28 33 2 86 81.0
7 2021-02-28 66 2 86 81.0
8 2021-02-14 33 10 86 81.0
9 2021-02-14 66 10 86 81.0
10 2021-01-31 33 8 86 81.0
11 2021-01-31 66 8 86 81.0
12 2020-12-23 33 1 81 80.0
13 2020-12-23 66 1 81 80.0
14 2020-11-08 33 3 80 80.0
15 2020-11-08 66 3 80 80.0
16 2020-10-21 33 3 80 80.0
17 2020-10-21 66 3 80 80.0
18 2020-09-23 33 4 80 80.0
19 2020-09-23 66 4 80 80.0
20 2020-05-20 33 3 80 79.0
21 2020-05-20 66 3 80 79.0
22 2020-04-29 33 4 80 79.0
23 2020-04-29 66 4 80 79.0
24 2020-04-15 33 2 79 79.0
25 2020-04-15 66 2 79 79.0
26 2020-02-26 33 3 79 70.0
27 2020-02-26 66 3 79 70.0
28 2020-02-12 33 5 79 70.0
29 2020-02-12 66 5 79 70.0
30 2020-01-29 33 1 70 NaN
31 2020-01-29 66 1 70 NaN
Output sorted by Student_ID, Date for easier inspection:
Date Student_ID Rank Predicted_Score Recent_Predicted_Score
0 2021-07-04 33 2 87 86.0
2 2021-06-13 33 4 88 86.0
4 2021-03-31 33 7 88 86.0
6 2021-02-28 33 2 86 81.0
8 2021-02-14 33 10 86 81.0
10 2021-01-31 33 8 86 81.0
12 2020-12-23 33 1 81 80.0
14 2020-11-08 33 3 80 80.0
16 2020-10-21 33 3 80 80.0
18 2020-09-23 33 4 80 80.0
20 2020-05-20 33 3 80 79.0
22 2020-04-29 33 4 80 79.0
24 2020-04-15 33 2 79 79.0
26 2020-02-26 33 3 79 70.0
28 2020-02-12 33 5 79 70.0
30 2020-01-29 33 1 70 NaN
1 2021-07-04 66 2 87 86.0
3 2021-06-13 66 4 88 86.0
5 2021-03-31 66 7 88 86.0
7 2021-02-28 66 2 86 81.0
9 2021-02-14 66 10 86 81.0
11 2021-01-31 66 8 86 81.0
13 2020-12-23 66 1 81 80.0
15 2020-11-08 66 3 80 80.0
17 2020-10-21 66 3 80 80.0
19 2020-09-23 66 4 80 80.0
21 2020-05-20 66 3 80 79.0
23 2020-04-29 66 4 80 79.0
25 2020-04-15 66 2 79 79.0
27 2020-02-26 66 3 79 70.0
29 2020-02-12 66 5 79 70.0
31 2020-01-29 66 1 70 NaN
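For reference, that sorted view can be reproduced with a one-liner like the following (just a convenience for inspection):
df.sort_values(['Student_ID', 'Date'], ascending=[True, False])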
I have a couple of data frames in a file, separated by blank lines, like this:
38 47 7 20 35
45 76 63 96 24
98 53 2 87 80
83 86 92 48 1
73 60 26 94 6

80 50 29 53 92
66 90 79 98 46
40 21 58 38 60
35 13 72 28 6
48 76 51 96 12

79 80 24 37 51
86 70 1 22 71
52 69 10 83 13
12 40 3 0 30
46 50 48 76 5
Could you please tell me how to read them into a list of dataframes?
Thanks a lot!
First, read the values into one DataFrame, keeping the blank separator lines as missing-value rows:
df = pd.read_csv(file, header=None, skip_blank_lines=False, sep='\s+')
print (df)
0 1 2 3 4
0 38.0 47.0 7.0 20.0 35.0
1 45.0 76.0 63.0 96.0 24.0
2 98.0 53.0 2.0 87.0 80.0
3 83.0 86.0 92.0 48.0 1.0
4 73.0 60.0 26.0 94.0 6.0
5 NaN NaN NaN NaN NaN
6 80.0 50.0 29.0 53.0 92.0
7 66.0 90.0 79.0 98.0 46.0
8 40.0 21.0 58.0 38.0 60.0
9 35.0 13.0 72.0 28.0 6.0
10 48.0 76.0 51.0 96.0 12.0
11 NaN NaN NaN NaN NaN
12 79.0 80.0 24.0 37.0 51.0
13 86.0 70.0 1.0 22.0 71.0
14 52.0 69.0 10.0 83.0 13.0
15 12.0 40.0 3.0 0.0 30.0
16 46.0 50.0 48.0 76.0 5.0
And then use a list comprehension to create the smaller DataFrames in a list, dropping the separator rows (the first block has no leading separator row, so dropna() simply leaves it intact):
dfs = [g.dropna().astype(int).reset_index(drop=True)
       for _, g in df.groupby(df[0].isna().cumsum())]
print (dfs[1])
0 1 2 3 4
0 80 50 29 53 92
1 66 90 79 98 46
2 40 21 58 38 60
3 35 13 72 28 6
4 48 76 51 96 12
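A quick sanity check on the sample above (three blocks of five rows each), just to confirm the split:
print(len(dfs))      # 3 frames
print(dfs[0].shape)  # (5, 5)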
I want to ffill() the values in S0.0, S1.0, S2.0 within each 'ID' group:
ID Close S0.0 S1.0 S2.0
0 UNITY 11.66 NaN 54 NaN
1 UNITY 11.55 56 NaN NaN
2 UNITY 11.59 NaN NaN 78
3 TRINITY 11.69 47 NaN NaN
4 TRINITY 11.37 NaN 69 NaN
5 TRINITY 11.89 NaN NaN 70
intended result:
ID Close S0.0 S1.0 S2.0
0 UNITY 11.66 NaN 54 NaN
1 UNITY 11.55 56 54 NaN
2 UNITY 11.59 56 54 78
3 TRINITY 11.69 47 NaN NaN
4 TRINITY 11.37 47 69 NaN
5 TRINITY 11.89 47 69 70
Here are my attempts and their undesired outcomes:
Attempt 1:
df[df['S0.0']==""] = np.NaN
df[df['S1.0']==""] = np.NaN
df[df['S2.0']==""] = np.NaN
df['S0.0'].groupby('ID').fillna(method='ffill', inplace = True)
df['S1.0'].groupby('ID').fillna(method='ffill', inplace = True)
df['S2.0'].groupby('ID').fillna(method='ffill', inplace = True)
output:
raise KeyError(gpr)
KeyError: 'ID'
Attempt 2:
df.groupby('ID')[['S0.0', 'S1.0', 'S2.0']].ffill()
#this makes no difference to the data.
#but when I try this:
df = df.groupby('ID')[['S0.0', 'S1.0', 'S2.0']].ffill()
df
Output:
S0.0 S1.0 S2.0
NaN 54 NaN
56 54 NaN
56 54 78
47 NaN NaN
47 69 NaN
47 69 70
which again is not what I wanted. A little help would be appreciated. Thanks!
Update:
The second attempt is the right one! Just don't specify the Sx.0 columns:
id = df.ID
df = pd.concat([id,df.groupby('ID').ffill()],axis=1)
output:
ID Close S0.0 S1.0 S2.0
0 UNITY 11.66 NaN 54.0 NaN
1 UNITY 11.55 56.0 54.0 NaN
2 UNITY 11.59 56.0 54.0 78.0
3 TRINITY 11.69 47.0 NaN NaN
4 TRINITY 11.37 47.0 69.0 NaN
5 TRINITY 11.89 47.0 69.0 70.0
Just do:
df[['S0.0', 'S1.0', 'S2.0']] = df.groupby('ID')[['S0.0', 'S1.0', 'S2.0']].ffill()
print(df)
Output
Close S0.0 S1.0 S2.0
0 11.66 NaN 54.0 NaN
1 11.55 56.0 54.0 NaN
2 11.59 56.0 54.0 78.0
3 11.69 47.0 NaN NaN
4 11.37 47.0 69.0 NaN
5 11.89 47.0 69.0 70.0
We have a dataframe 'A' with 5 columns, and to add the rolling mean of each column we could do:
A = pd.DataFrame(np.random.randint(100, size=(5, 5)))
for i in range(0, 5):
    A[i + 6] = A[i].rolling(3).mean()
If however 'A' has columns named 'A', 'B', ..., 'E':
A = pd.DataFrame(np.random.randint(100, size=(5, 5)),
                 columns=['A', 'B', 'C', 'D', 'E'])
How could we neatly add 5 columns with the rolling means, named ['A_mean', 'B_mean', ..., 'E_mean']?
Try this:
for col in A:
    A[col + '_mean'] = A[col].rolling(3).mean()
Output with your way:
0 1 2 3 4 6 7 8 9 10
0 16 53 9 16 67 NaN NaN NaN NaN NaN
1 55 37 93 92 21 NaN NaN NaN NaN NaN
2 10 5 93 99 27 27.0 31.666667 65.000000 69.000000 38.333333
3 94 32 81 91 34 53.0 24.666667 89.000000 94.000000 27.333333
4 37 46 20 18 10 47.0 27.666667 64.666667 69.333333 23.666667
and output with mine:
A B C D E A_mean B_mean C_mean D_mean E_mean
0 16 53 9 16 67 NaN NaN NaN NaN NaN
1 55 37 93 92 21 NaN NaN NaN NaN NaN
2 10 5 93 99 27 27.0 31.666667 65.000000 69.000000 38.333333
3 94 32 81 91 34 53.0 24.666667 89.000000 94.000000 27.333333
4 37 46 20 18 10 47.0 27.666667 64.666667 69.333333 23.666667
Without loops:
pd.concat([A, A.apply(lambda x:x.rolling(3).mean()).rename(
columns={col: str(col) + '_mean' for col in A})], axis=1)
A B C D E A_mean B_mean C_mean D_mean E_mean
0 67 54 85 61 62 NaN NaN NaN NaN NaN
1 44 53 30 80 58 NaN NaN NaN NaN NaN
2 10 59 14 39 12 40.333333 55.333333 43.0 60.000000 44.000000
3 47 25 58 93 38 33.666667 45.666667 34.0 70.666667 36.000000
4 73 80 30 51 77 43.333333 54.666667 34.0 61.000000 42.333333
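As a side note on the rename step, add_suffix should express the same thing a bit more compactly; an untested sketch of the same idea:
pd.concat([A, A.rolling(3).mean().add_suffix('_mean')], axis=1)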
I've got some SQL data that I'm grouping and performing some aggregation on. It works nicely:
grouped = df.groupby(['a', 'b'])
agged = grouped.aggregate({
    'c': [numpy.sum, numpy.mean, numpy.size],
    'd': [numpy.sum, numpy.mean, numpy.size]
})
and I get:
c d
sum mean size sum mean size
a b
25 20 107.0 0.804511 133.0 5328000 40060.150376 133
21 110.0 0.774648 142.0 6031000 42471.830986 142
23 126.0 0.792453 159.0 8795000 55314.465409 159
24 72.0 0.947368 76.0 2920000 38421.052632 76
25 54.0 0.818182 66.0 2570000 38939.393939 66
26 23 126.0 0.792453 159.0 8795000 55314.465409 159
but I want a=26 to also contain all of the b rows that appear under a=25, filled with zeros where they are missing. In other words, something like:
c d
sum mean size sum mean size
a b
25 20 107.0 0.804511 133.0 5328000 40060.150376 133
21 110.0 0.774648 142.0 6031000 42471.830986 142
23 126.0 0.792453 159.0 8795000 55314.465409 159
24 72.0 0.947368 76.0 2920000 38421.052632 76
25 54.0 0.818182 66.0 2570000 38939.393939 66
26 20 0 0 0 0 0 0
21 0 0 0 0 0 0
23 126.0 0.792453 159.0 8795000 55314.465409 159
24 0 0 0 0 0 0
25 0 0 0 0 0 0
How can I do this?
Consider the dataframe df
df = pd.DataFrame(
np.random.randint(10, size=(6, 6)),
pd.MultiIndex.from_tuples(
[(25, 20), (25, 21), (25, 23), (25, 24), (25, 25), (26, 23)],
names=['a', 'b']
),
pd.MultiIndex.from_product(
[['c', 'd'], ['sum', 'mean', 'size']]
)
)
c d
sum mean size sum mean size
a b
25 20 8 3 5 5 0 2
21 3 7 8 9 2 7
23 2 1 3 2 5 4
24 9 0 1 7 1 6
25 1 9 3 5 8 8
26 23 8 8 4 8 0 5
You can quickly recover all missing rows from the cartesian product with unstack(fill_value=0) followed by stack:
df.unstack(fill_value=0).stack()
c d
mean size sum mean size sum
a b
25 20 3 5 8 0 2 5
21 7 8 3 2 7 9
23 1 3 2 5 4 2
24 0 1 9 1 6 7
25 9 3 1 8 8 5
26 20 0 0 0 0 0 0
21 0 0 0 0 0 0
23 8 4 8 0 5 8
24 0 0 0 0 0 0
25 0 0 0 0 0 0
Note: Using fill_value=0 preserves the dtype int. Without it, when unstacked, the gaps get filled with NaN and the dtypes get converted to float.
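A quick way to see that dtype effect on the random integer frame built above (columns that acquire a gap are cast to float unless fill_value is given):
print(df.unstack().dtypes.value_counts())               # mix of int64 and float64
print(df.unstack(fill_value=0).dtypes.value_counts())   # all int64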
print(df)
c d
sum mean size sum mean size
a b
25 20 107.0 0.804511 133.0 5328000 40060.150376 133
21 110.0 0.774648 142.0 6031000 42471.830986 142
23 126.0 0.792453 159.0 8795000 55314.465409 159
24 72.0 0.947368 76.0 2920000 38421.052632 76
25 54.0 0.818182 66.0 2570000 38939.393939 66
26 23 126.0 0.792453 159.0 8795000 55314.465409 159
I like:
df = df.unstack().replace(np.nan, 0).stack(-1)
print(df)
c d
mean size sum mean size sum
a b
25 20 0.804511 133.0 107.0 40060.150376 133.0 5328000.0
21 0.774648 142.0 110.0 42471.830986 142.0 6031000.0
23 0.792453 159.0 126.0 55314.465409 159.0 8795000.0
24 0.947368 76.0 72.0 38421.052632 76.0 2920000.0
25 0.818182 66.0 54.0 38939.393939 66.0 2570000.0
26 20 0.000000 0.0 0.0 0.000000 0.0 0.0
21 0.000000 0.0 0.0 0.000000 0.0 0.0
23 0.792453 159.0 126.0 55314.465409 159.0 8795000.0
24 0.000000 0.0 0.0 0.000000 0.0 0.0
25 0.000000 0.0 0.0 0.000000 0.0 0.0