I have a dataframe like this
time value
0 1 214
1 4 234
2 5 253
3 7 272
4 9 201
5 11 221
6 13 211
7 15 201
8 17 199
I want to split it into intervals and, for every interval, calculate the difference between each value and the value in the first row of that interval.
The result should look like this with an interval of 6, for example (the dashed lines are just for illustration):
time value diff_to_first
0 1 214 0
1 4 234 20
2 5 253 39
--------------------------------
3 7 272 0
4 9 201 -71
5 11 221 -51
--------------------------------
6 13 211 0
7 15 201 -10
8 17 199 -12
With the following code I get the wanted result, but I think the code is not very elegant. Are there any better solutions (for example, how can I integrate the subset term into the loc statement)?
import pandas as pd
interval = 6
low = 0
df = pd.DataFrame([[1, 214], [4, 234], [5, 253], [7, 272], [9, 201], [11, 221],
                   [13, 211], [15, 201], [17, 199]], columns=['time', 'value'])
df['diff_to_first'] = None
maxvalue = df['time'].max()
while low <= maxvalue:
    high = low + interval
    subset = df[(df['time'] >= low) & (df['time'] < high)]
    first = subset.iloc[0]['value']
    df.loc[(df['time'] >= low) & (df['time'] < high), 'diff_to_first'] = \
        df.loc[(df['time'] >= low) & (df['time'] < high), 'value'] - first
    low = high
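As a side note on the loc sub-question: the repeated boolean condition inside the while loop can be stored in a mask once per iteration. This is only a sketch of that refactoring, not one of the answers below:

mask = (df['time'] >= low) & (df['time'] < high)
first = df.loc[mask, 'value'].iloc[0]
df.loc[mask, 'diff_to_first'] = df.loc[mask, 'value'] - first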
You can make a new column "group", then use groupby and apply your defined function to attach the diff column per group. It will be more elegant. But I think my way of creating the "group" column could also be more elegant = )
import numpy as np

def diff(df):
    df['diff_to_first'] = df.value - df.value.values[0]
    return df

df['group'] = np.concatenate([[i] * 3 for i in range(len(df) // 3)])
df.groupby('group').apply(diff)
Output:
time value group diff_to_first
0 1 214 0 0
1 4 234 0 20
2 5 253 0 39
3 7 272 1 0
4 9 201 1 -71
5 11 221 1 -51
6 13 211 2 0
7 15 201 2 -10
8 17 199 2 -12
You can group the dataframe by the interval and difference the grouped data with a shift of 1 index (note that this gives the difference to the previous row within each group rather than to the first row, as the output below shows):
interval = 3
groups = np.repeat(np.arange(len(df) // interval), interval)[:len(df)]
df['diff_to_first'] = df.value.groupby(groups).apply(lambda x: x - x.shift()).fillna(0)
Out:
time value diff_to_first
0 1 214 0.0
1 4 234 20.0
2 5 253 19.0
3 7 272 0.0
4 9 201 -71.0
5 11 221 20.0
6 13 211 0.0
7 15 201 -10.0
8 17 199 -2.0
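For reference, the same "difference to the first row of each interval" result can also be produced without an explicit group column, by binning the time values and using transform('first'). This is only a sketch based on the question's data and an interval of 6; it is not part of the original answers:

import pandas as pd

df = pd.DataFrame([[1, 214], [4, 234], [5, 253], [7, 272], [9, 201], [11, 221],
                   [13, 211], [15, 201], [17, 199]], columns=['time', 'value'])
interval = 6

# bin each row by its time interval, then subtract the first value of its bin
bins = df['time'] // interval
df['diff_to_first'] = df['value'] - df.groupby(bins)['value'].transform('first')

This reproduces the 0/20/39, 0/-71/-51, 0/-10/-12 column from the question.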
I have a pandas data frame like this:
Subset Position Value
1 1 2
1 10 3
1 15 0.285714
1 43 1
1 48 0
1 89 2
1 132 2
1 152 0.285714
1 189 0.133333
1 200 0
2 1 0.133333
2 10 0
2 15 2
2 33 2
2 36 0.285714
2 72 2
2 132 0.133333
2 152 0.133333
2 220 3
2 250 8
2 350 6
2 750 0
I want to know how I can get the mean of the values for every "x" rows with a step size of "y" per subset in pandas.
For example, the mean of every 5 rows (step size = 2) for the value column in each subset, like this:
Subset Start_position End_position Mean
1 1 48 1.2571428
1 15 132 1.0571428
1 48 189 0.8838094
2 1 36 0.8838094
2 15 132 1.2838094
2 36 220 1.110476
2 132 350 3.4533332
Is this what you were looking for?
import pandas as pd

df = pd.DataFrame({'Subset': [1]*10 + [2]*12,
                   'Position': [1, 10, 15, 43, 48, 89, 132, 152, 189, 200,
                                1, 10, 15, 33, 36, 72, 132, 152, 220, 250, 350, 750],
                   'Value': [2, 3, .285714, 1, 0, 2, 2, .285714, .1333333, 0,
                             0.133333, 0, 2, 2, .285714, 2, .133333, .133333, 3, 8, 6, 0]})
averaged_df = pd.DataFrame(columns=['Subset', 'Start_position', 'End_position', 'Mean'])
window = 5
step_size = 2

for subset in df.Subset.unique():
    subset_df = df[df.Subset == subset].reset_index(drop=True)
    for i in range(0, len(subset_df), step_size):
        window_rows = subset_df.iloc[i:i + window]
        if len(window_rows) < window:
            continue
        window_average = {'Subset': window_rows.Subset.loc[i],
                          'Start_position': window_rows.Position.loc[i],
                          'End_position': window_rows.Position.iloc[-1],
                          'Mean': window_rows.Value.mean()}
        averaged_df = averaged_df.append(window_average, ignore_index=True)
Some notes about the code:
It assumes all subsets are in order in the original df (1,1,2,1,2,2 will behave as if it was 1,1,1,2,2,2)
If there is a group left that is smaller than a window, it will skip it (e.g. Subset 1, Positions 132 to 200 with mean 0.60476 is not included)
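Note that DataFrame.append, used above, was deprecated and later removed in pandas 2.0. A variant of the same loop that collects the row dictionaries in a list and builds the result in one go (a sketch, reusing df, window and step_size from above) could be:

rows = []
for subset in df.Subset.unique():
    subset_df = df[df.Subset == subset].reset_index(drop=True)
    for i in range(0, len(subset_df), step_size):
        window_rows = subset_df.iloc[i:i + window]
        if len(window_rows) < window:
            continue
        rows.append({'Subset': window_rows.Subset.iloc[0],
                     'Start_position': window_rows.Position.iloc[0],
                     'End_position': window_rows.Position.iloc[-1],
                     'Mean': window_rows.Value.mean()})

averaged_df = pd.DataFrame(rows, columns=['Subset', 'Start_position', 'End_position', 'Mean'])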
One version-specific answer would be to use pandas.api.indexers.FixedForwardWindowIndexer, introduced in pandas 1.1.0:
>>> window=5
>>> step=2
>>> indexer = pd.api.indexers.FixedForwardWindowIndexer(window_size=window)
>>> df2 = df.join(df.Position.shift(-(window-1)), lsuffix='_start', rsuffix='_end')
>>> df2 = df2.assign(Mean=df2.pop('Value').rolling(window=indexer).mean()).iloc[::step]
>>> df2 = df2[df2.Position_start.lt(df2.Position_end)].dropna()
>>> df2['Position_end'] = df2['Position_end'].astype(int)
>>> df2
Subset Position_start Position_end Mean
0 1 1 48 1.257143
2 1 15 132 1.057143
4 1 48 189 0.883809
10 2 1 36 0.883809
12 2 15 132 1.283809
14 2 36 220 1.110476
16 2 132 350 3.453333
I'm trying to create a new column, let's call it "HomeForm", that is the sum of the last 5 values of "FTHG" for each of the entries in the "HomeTeam" column.
Say for Team 0, the idea would be to populate the cell in the new column with the sum of the last 5 values of "FTHG" that correspond to Team 0. The table is ordered by date.
How can it be done in Python?
HomeTeam FTHG HomeForm
Date
136 0 4
135 2 0
135 4 2
135 5 0
135 6 1
135 13 0
135 17 3
135 18 1
134 11 4
134 12 0
128 1 0
128 3 0
128 8 2
128 9 1
128 13 3
128 14 1
128 15 0
127 7 1
127 16 1
126 10 1
Thanks.
You'll groupby on HomeTeam and perform a rolling sum here, summing for a minimum of 1 period, and maximum of 5.
First, define a function -
def f(x):
    return x.shift().rolling(window=5, min_periods=1).sum()
This function computes the rolling sum of the previous 5 games (hence the shift). Pass this function to GroupBy.transform -
df['HomeForm'] = df.groupby('HomeTeam', sort=False).FTHG.transform(f)
df
HomeTeam FTHG HomeForm
Date
136 0 4 NaN
135 2 0 NaN
135 4 2 NaN
135 5 0 NaN
135 6 1 NaN
135 13 0 NaN
135 17 3 NaN
135 18 1 NaN
134 11 4 NaN
134 12 0 NaN
128 1 0 NaN
128 3 0 NaN
128 8 2 NaN
128 9 1 NaN
128 13 3 0.0
128 14 1 NaN
128 15 0 NaN
127 7 1 NaN
127 16 1 NaN
126 10 1 NaN
If needed, fill the NaNs with zeros and convert to integer -
df['HomeForm'] = df['HomeForm'].fillna(0).astype(int)
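Equivalently, the helper can be folded into the transform call as a lambda; this is just a compact restatement of the same logic, not a different method:

df['HomeForm'] = (df.groupby('HomeTeam', sort=False)['FTHG']
                    .transform(lambda x: x.shift().rolling(window=5, min_periods=1).sum())
                    .fillna(0)
                    .astype(int))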
New to pandas, I'm trying to sum up all previous values of a column. In SQL I did this by joining the table to itself, so I've been taking the same approach in pandas, but having some issues.
Original Data Frame
TeamName PlayerCount Goals CalMonth
0 A 25 126 1
1 A 25 100 2
2 A 25 156 3
3 B 22 205 1
4 B 30 300 2
5 B 28 189 3
Code
import pandas as pd
import numpy as np

prev_month = np.where(df3['CalMonth'] == 12, df3['CalMonth'] - 11, df3['CalMonth'] + 1)
df4 = pd.merge(df3, df3, how='left',
               left_on=['TeamName', 'CalMonth'],
               right_on=['TeamName', prev_month])
print(df4.head(20))
Output
  TeamName  PlayerCount_x  Goals_x  CalMonth_x  PlayerCount_y  Goals_y  CalMonth_y
0        A             25      126           1            NaN      NaN         NaN
1        A             25      100           2             25      126           1
2        A             25      156           3             25      100           2
3        B             22      205           1             22      NaN         NaN
4        B             22      300           2             22      205           1
5        B             22      189           3             22      100           2
The output is what I had in mind, but what I want now is to create a Goals_YTD column that sums up all Goals from previous months. Here are my desired results (it can either include the current month or not; that can be handled in an additional step):
  TeamName  PlayerCount_x  Goals_x  CalMonth_x  PlayerCount_y  Goals_y  CalMonth_y  Goals_YTD
0        A             25      126           1            NaN      NaN         NaN        NaN
1        A             25      100           2             25      126           1        126
2        A             25      156           3             25      100           2        226
3        B             22      205           1             22      NaN         NaN        NaN
4        B             22      300           2             22      205           1        205
5        B             22      189           3             22      100           2        305
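One possible approach (a sketch, not taken from the original thread) is to skip the self-join entirely and compute the previous-months total with a grouped cumulative sum, assuming df3 is sorted by TeamName and CalMonth:

import pandas as pd

df3 = pd.DataFrame({'TeamName': ['A', 'A', 'A', 'B', 'B', 'B'],
                    'PlayerCount': [25, 25, 25, 22, 30, 28],
                    'Goals': [126, 100, 156, 205, 300, 189],
                    'CalMonth': [1, 2, 3, 1, 2, 3]})

# cumulative goals of all previous months per team (NaN for each team's first month)
df3['Goals_YTD'] = (df3.groupby('TeamName')['Goals']
                       .transform(lambda s: s.cumsum().shift()))

This sums the actual Goals from the original frame; dropping the shift() would include the current month as well.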
I have two dataframes: both have 5 columns, but the first one has 100 rows and the second one just one row. I need to multiply every row of the first dataframe by this single row of the second, then sum the values across the columns of each row and put that value into a 6th new column, "sum of multiplications". I've seen the "np.dot" operation, but I'm not sure that I can apply it to dataframes. I'm also looking for a pythonic/pandas operation or method, if it's possible to replace the somewhat heavy numpy code written from scratch. Thank you in advance for your advice.
I think you can convert the DataFrames to numpy arrays with values, multiply them and then sum:
import pandas as pd
import numpy as np
np.random.seed(1)
df1 = pd.DataFrame(np.random.randint(10, size=(1,5)))
df1.columns = list('ABCDE')
print(df1)
A B C D E
0 5 8 9 5 0
np.random.seed(0)
df2 = pd.DataFrame(np.random.randint(10,size=(10,5)))
df2.columns = list('ABCDE')
print(df2)
A B C D E
0 5 0 3 3 7
1 9 3 5 2 4
2 7 6 8 8 1
3 6 7 7 8 1
4 5 9 8 9 4
5 3 0 3 5 0
6 2 3 8 1 3
7 3 3 7 0 1
8 9 9 0 4 7
9 3 2 7 2 0
print(df2.values * df1.values)
[[25 0 27 15 0]
[45 24 45 10 0]
[35 48 72 40 0]
[30 56 63 40 0]
[25 72 72 45 0]
[15 0 27 25 0]
[10 24 72 5 0]
[15 24 63 0 0]
[45 72 0 20 0]
[15 16 63 10 0]]
df = pd.DataFrame(df2.values * df1.values)
df['sum'] = df.sum(axis=1)
print(df)
0 1 2 3 4 sum
0 25 0 27 15 0 67
1 45 24 45 10 0 124
2 35 48 72 40 0 195
3 30 56 63 40 0 189
4 25 72 72 45 0 214
5 15 0 27 25 0 67
6 10 24 72 5 0 111
7 15 24 63 0 0 102
8 45 72 0 20 0 137
9 15 16 63 10 0 104
Timing:
In [1185]: %timeit df2.mul(df1.ix[0], axis=1)
The slowest run took 5.07 times longer than the fastest. This could mean that an intermediate result is being cached
1000 loops, best of 3: 287 µs per loop
In [1186]: %timeit pd.DataFrame(df2.values * df1.values)
The slowest run took 6.31 times longer than the fastest. This could mean that an intermediate result is being cached
10000 loops, best of 3: 98 µs per loop
You are probably looking for something like this:
import pandas as pd
import numpy as np
df1 = pd.DataFrame({'A': [1.1, 2.7, 3.4],
                    'B': [-1., -2.5, -3.9]})
df1['sum of multiplications'] = df1.sum(axis=1)
df2 = pd.DataFrame({'A': [2.],
                    'B': [3.],
                    'sum of multiplications': [1.]})
print(df1)
print(df2)
row = df2.iloc[0]
df5 = df1.mul(row, axis=1)
df5.loc['Total'] = df5.sum()
print(df5)
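For completeness, the pandas-native variant from the timing above can be combined with the row sum in two lines (a sketch using the one-row df1 and ten-row df2 defined in the first answer, with .iloc in place of the old .ix):

products = df2.mul(df1.iloc[0], axis=1)
products['sum'] = products.sum(axis=1)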
I have two data frames. One represents when an order was placed and when it arrived, while the other represents the working days of the shop.
Days are given as days of the year, i.e. 32 = 1st February.
import pandas as pd

orders = pd.DataFrame({'placed': [100, 103, 104, 105, 108, 109],
                       'arrived': [103, 104, 105, 106, 111, 111]})
Out[25]:
arrived placed
0 103 100
1 104 103
2 105 104
3 106 105
4 111 108
5 111 109
calendar = pd.DataFrame({'day': ['100', '101', '102', '103', '104', '105', '106',
                                 '107', '108', '109', '110', '111', '112', '113',
                                 '114', '115', '116', '117', '118', '119', '120'],
                         'closed': [0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0,
                                    0, 0, 0, 0, 1, 1, 0, 0, 0, 0]})
Out[21]:
closed day
0 0 100
1 1 101
2 1 102
3 0 103
4 0 104
5 0 105
6 0 106
7 0 107
8 1 108
9 1 109
10 0 110
11 0 111
12 0 112
13 0 113
14 0 114
15 1 115
16 1 116
17 0 117
18 0 118
19 0 119
20 0 120
What I want to do is compute the difference between placed and arrived:
x = orders['arrived'] - orders['placed']
Out[24]:
0 3
1 1
2 1
3 1
4 3
5 2
dtype: int64
and subtract one for each day between placed and arrived (inclusive) on which the shop was closed.
For example, in the first row the order is placed on day 100 and arrives on day 103, so the days involved are 100, 101, 102 and 103. The difference between 103 and 100 is 3. However, since 101 and 102 are days on which the shop is closed, I want to subtract 1 for each, giving 3 - 1 - 1 = 1. Finally, I want to append this result to the orders df.
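A straightforward sketch of the described computation (using the orders and calendar frames above, converting the string day values to integers, and picking an arbitrary column name for the result) could be:

# days on which the shop is closed, as integers
closed_days = set(calendar.loc[calendar['closed'] == 1, 'day'].astype(int))

def open_day_diff(placed, arrived):
    # raw difference minus the number of closed days between placed and arrived (inclusive)
    n_closed = sum(day in closed_days for day in range(placed, arrived + 1))
    return (arrived - placed) - n_closed

orders['diff'] = [open_day_diff(p, a)
                  for p, a in zip(orders['placed'], orders['arrived'])]

For the example data this gives 1 for every order, matching the hand computation for the first row (3 - 1 - 1 = 1).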