How to vectorize an operation that uses previous values? - python

I want to do something like this:
df['indicator'] = df.at[x-1] + df.at[x-2]
or
df['indicator'] = df.at[x-1] > df.at[x-2]
I guess edge cases would be taken care of automatically, e.g. skipping the first few rows.

This line should give you what you need; shift(n) moves the column down by n rows, so the first two rows of your indicator column will automatically be filled with NaN.
df['indicator'] = df['count'].shift(1) + df['count'].shift(2)
For example, if we had the following dataframe:
df = pd.DataFrame({'date': ['2017-06-01', '2017-06-02', '2017-06-03',
                            '2017-06-04', '2017-06-05', '2017-06-06'],
                   'count': [10, 15, 17, 5, 3, 7]})
date count
0 2017-06-01 10
1 2017-06-02 15
2 2017-06-03 17
3 2017-06-04 5
4 2017-06-05 3
5 2017-06-06 7
Then running this line will give the below result:
df['indicator'] = df['count'].shift(1) + df['count'].shift(2)
date count indicator
0 2017-06-01 10 NaN
1 2017-06-02 15 NaN
2 2017-06-03 17 25.0
3 2017-06-04 5 32.0
4 2017-06-05 3 22.0
5 2017-06-06 7 8.0
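The comparison variant from the question works the same way; a minimal sketch on the same frame (note the first two rows compare against NaN and therefore come out False rather than NaN):
df['indicator'] = df['count'].shift(1) > df['count'].shift(2)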

Related

Calculate the mean value using two columns in pandas

I have a deals dataframe with three columns, sorted by type and date. It looks like:
type date price
A 2020-05-01 4
A 2020-06-04 6
A 2020-06-08 8
A 2020-07-03 5
B 2020-02-01 3
B 2020-04-02 4
There are many types (A, B, C, D, E, …), and I want to calculate the previous mean price within the same type of product. For example, the pre_mean_price value of the third A row is (4+6)/2 = 5. I want to get a dataframe like this:
type date price pre_mean_price
A 2020-05-01 4 .
A 2020-06-04 6 4
A 2020-06-08 8 5
A 2020-07-03 5 6
B 2020-02-01 3 .
B 2020-04-02 4 3
How can I calculate the pre_mean_price? Thanks a lot!
You can use expanding().mean() within each group after groupby, then shift the values:
df['pre_mean_price'] = df.groupby("type")['price'].apply(lambda x:
x.expanding().mean().shift())
print(df)
type date price pre_mean_price
0 A 2020-05-01 4 NaN
1 A 2020-06-04 6 4.0
2 A 2020-06-08 8 5.0
3 A 2020-07-03 5 6.0
4 B 2020-02-01 3 NaN
5 B 2020-04-02 4 3.0
Something like
df['pre_mean_price'] = df.groupby('type').expanding().mean().groupby('type').shift(1)['price'].values
which produces
type date price pre_mean_price
0 A 2020-05-01 4 NaN
1 A 2020-06-04 6 4.0
2 A 2020-06-08 8 5.0
3 A 2020-07-03 5 6.0
4 B 2020-02-01 3 NaN
5 B 2020-04-02 4 3.0
Short explanation
The idea is to:
First, group by "type" with .groupby(). This must be done since we want to calculate the (incremental) means within each "type" group.
Then, calculate the incremental mean with expanding().mean(). The output at this point is
           price
type
A    0      4.00
     1      5.00
     2      6.00
     3      5.75
B    4      3.00
     5      3.50
Then, groupby again by "type", and shift the elements inside the groups by one row with shift(1).
Then, just extract the values of the price column (the incremental means).
Note: This assumes your data is sorted by date. If it is not, call df.sort_values('date', inplace=True) first.
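As a hedged alternative sketch, the same logic can be written with transform, which keeps the result aligned to the original index without the extra .values step (same sorted-by-date assumption as above):
df['pre_mean_price'] = df.groupby('type')['price'].transform(
    lambda x: x.expanding().mean().shift())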

How to loc 5 rows before and 5 rows after value 1 in column

I have a dataframe, and I want to locate the 5 rows before and the 5 rows after each row where the flag value is 1.
df = pd.DataFrame({'A': [2, 1, 3, 4, 7, 8, 11, 1, 15, 20, 15, 16, 87],
                   'flag': [0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0]})
Expected output:
df1_before = pd.DataFrame({'A': [1, 3, 4, 7, 8],
                           'flag': [0, 0, 0, 0, 1]})
df1_after = pd.DataFrame({'A': [8, 11, 1, 15, 20],
                          'flag': [1, 1, 1, 0, 0]})
Do the same process for all three rows where flag is 1.
I think one easy way is to loop over the index where the flag is 1 and select the rows you want with loc:
l = len(df)
for idx in df[df.flag.astype(bool)].index:
    dfb = df.loc[max(idx-4, 0):idx]
    dfa = df.loc[idx:min(idx+4, l)]
    # do stuff
The min and max functions ensure the boundaries are not exceeded in case a flag=1 falls within the first or last 5 rows. Note also that loc slicing is end-inclusive, so if you want 5 rows you need to use +/-4 on idx to get the right segment.
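For the sample frame above, a quick sketch of the first iteration shows it reproduces the expected df1_before and df1_after (the flags sit at index 5, 6 and 7, so the first hit is idx=5):
for idx in df[df.flag.astype(bool)].index:
    print(df.loc[max(idx-4, 0):idx])        # rows 1-5: A = [1, 3, 4, 7, 8]
    print(df.loc[idx:min(idx+4, len(df))])  # rows 5-9: A = [8, 11, 1, 15, 20]
    break  # only show the first flagged row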
That said, depending on what your actual # do stuff is, you might want to change tactics. Say, for example, you want to calculate the difference between the sum of A over the 5 rows after and the 5 rows before; you could use rolling and shift:
df['roll'] = df.rolling(5)['A'].sum()
df.loc[df.flag.astype(bool), 'diff_roll'] = df['roll'].shift(-4) - df['roll']
print (df)
A flag roll diff_roll
0 2 0 NaN NaN
1 1 0 NaN NaN
2 3 0 NaN NaN
3 4 0 NaN NaN
4 7 0 17.0 NaN
5 8 1 23.0 32.0 # = 55 - 23; 55 is the sum of A over df1_after, 23 over df1_before
6 11 1 33.0 29.0
7 1 1 31.0 36.0
8 15 0 42.0 NaN
9 20 0 55.0 NaN
10 15 0 62.0 NaN
11 16 0 67.0 NaN
12 87 0 153.0 NaN

Number of active IDs in each period

I have a dataframe that looks like this
ID | START | END
1 |2016-12-31|2017-02-28
2 |2017-01-30|2017-10-30
3 |2016-12-21|2018-12-30
I want to know the number of active IDs on each possible day, so basically count the number of overlapping time periods.
What I did to calculate this was create a new dataframe c_df with the columns date and count. The first column was populated using a range:
all_dates = pd.date_range(start=min(df['START']), end=max(df['END']))
Then, for every line in my original dataframe, I calculated the date range between its start and end dates:
id_dates = pd.date_range(start=user['START'], end=user['END'])
I then used this range of dates to increment the corresponding count cells in c_df by one.
All these loops, though, are not very efficient for big data sets and look ugly. Is there a more efficient way of doing this?
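(For reference, a minimal sketch reconstructing the loop described above, not the original code; it assumes START and END have already been parsed with pd.to_datetime, and reuses all_dates from above:)
c_df = pd.DataFrame({'date': all_dates, 'count': 0})
for _, user in df.iterrows():
    id_dates = pd.date_range(start=user['START'], end=user['END'])
    c_df.loc[c_df['date'].isin(id_dates), 'count'] += 1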
If your dataframe is small enough that performance is not a concern, create a date range for each row, then explode them and count how many times each date appears in the exploded series.
Requires pandas >= 0.25:
df.apply(lambda row: pd.date_range(row['START'], row['END']), axis=1) \
  .explode() \
  .value_counts() \
  .sort_index()
If your dataframe is large, take advantage of numpy broadcasting to improve performance.
Works with any version of pandas:
dates = pd.date_range(df['START'].min(), df['END'].max()).values
start = df['START'].values[:, None]
end = df['END'].values[:, None]
mask = (start <= dates) & (dates <= end)
result = pd.DataFrame({
    'Date': dates,
    'Count': mask.sum(axis=0)
})
Create an IntervalIndex and use a generator expression or list comprehension with contains to check each date against each interval. (Note: I made a smaller sample to test this solution on.)
Sample `df`
Out[56]:
ID START END
0 1 2016-12-31 2017-01-20
1 2 2017-01-20 2017-01-30
2 3 2016-12-28 2017-02-03
3 4 2017-01-20 2017-01-25
iix = pd.IntervalIndex.from_arrays(df.START, df.END, closed='both')
all_dates = pd.date_range(start=min(df['START']), end=max(df['END']))
df_final = pd.DataFrame({'dates': all_dates,
                         'date_counts': (iix.contains(dt).sum() for dt in all_dates)})
In [58]: df_final
Out[58]:
dates date_counts
0 2016-12-28 1
1 2016-12-29 1
2 2016-12-30 1
3 2016-12-31 2
4 2017-01-01 2
5 2017-01-02 2
6 2017-01-03 2
7 2017-01-04 2
8 2017-01-05 2
9 2017-01-06 2
10 2017-01-07 2
11 2017-01-08 2
12 2017-01-09 2
13 2017-01-10 2
14 2017-01-11 2
15 2017-01-12 2
16 2017-01-13 2
17 2017-01-14 2
18 2017-01-15 2
19 2017-01-16 2
20 2017-01-17 2
21 2017-01-18 2
22 2017-01-19 2
23 2017-01-20 4
24 2017-01-21 3
25 2017-01-22 3
26 2017-01-23 3
27 2017-01-24 3
28 2017-01-25 3
29 2017-01-26 2
30 2017-01-27 2
31 2017-01-28 2
32 2017-01-29 2
33 2017-01-30 2
34 2017-01-31 1
35 2017-02-01 1
36 2017-02-02 1
37 2017-02-03 1

Pandas - Replace NaNs in a column with the mean of specific group

I am working with data like the following. The dataframe is sorted by the date:
category value Date
0 1 24/5/2019
1 NaN 24/5/2019
1 1 26/5/2019
2 2 1/6/2019
1 2 23/7/2019
2 NaN 18/8/2019
2 3 20/8/2019
7 3 1/9/2019
1 NaN 12/9/2019
2 NaN 13/9/2019
I would like to replace the "NaN" values with the previous mean for that specific category.
What is the best way to do this in pandas?
Some approaches I considered:
1) This little riff:
df['mean'] = df.groupby('category')['value'].apply(lambda x: x.shift().expanding().mean())
source
This gets me the correct means, but in another column, and it does not replace the NaNs.
2) This riff replaces the NaNs with the average of the columns:
df = df.groupby(df.columns, axis = 1).transform(lambda x: x.fillna(x.mean()))
Source 2
Neither of these gives exactly what I want. If someone could guide me on this, it would be much appreciated!
You can replace the values with a new Series built from shift + expanding + mean. The first value of each group is not replaced, because no previous values exist:
df['Date'] = pd.to_datetime(df['Date'])
s = df.groupby('category')['value'].apply(lambda x: x.shift().expanding().mean())
df['value'] = df['value'].fillna(s)
print (df)
category value Date
0 0 1.0 2019-05-24
1 1 NaN 2019-05-24
2 1 1.0 2019-05-26
3 2 2.0 2019-01-06
4 1 2.0 2019-07-23
5 2 2.0 2019-08-18
6 2 3.0 2019-08-20
7 7 3.0 2019-01-09
8 1 1.5 2019-12-09
9 2 2.5 2019-09-13
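One caveat worth noting: the Date strings in the sample are day-first, and a plain pd.to_datetime call parses them inconsistently (compare 1/6/2019 -> 2019-01-06 with 18/8/2019 -> 2019-08-18 in the output above). A safer sketch would pass the dayfirst flag:
df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)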
You can use pandas.Series.fillna to replace NaN values:
df['value'] = df['value'].fillna(df.groupby('category')['value']
                                   .transform(lambda x: x.shift().expanding().mean()))
print(df)
category value Date
0 0 1.0 24/5/2019
1 1 NaN 24/5/2019
2 1 1.0 26/5/2019
3 2 2.0 1/6/2019
4 1 2.0 23/7/2019
5 2 2.0 18/8/2019
6 2 3.0 20/8/2019
7 7 3.0 1/9/2019
8 1 1.5 12/9/2019
9 2 2.5 13/9/2019

How to fill the first N/A cells when applying a rolling mean to a column - python

I need to apply a rolling mean to a column, as shown in pic1 (s3). After I apply the rolling mean with window = 5, I get the correct answer, but the first 4 rows are left empty, as shown in pic2 (sa3).
I want to fill the first 4 empty cells in pic2 (sa3) with the mean of all data in pic1 (s3) up to the current row, as shown in pic3 (a3).
How can I do this with an easy function besides the rolling mean method?
I think you need the parameter min_periods=1 in rolling:
min_periods : int, default None
Minimum number of observations in window required to have a value (otherwise result is NA). For a window that is specified by an offset, this will default to 1.
df = df.rolling(5, min_periods=1).mean()
Sample:
np.random.seed(1256)
df = pd.DataFrame(np.random.randint(10, size=(10, 5)), columns=list('abcde'))
print (df)
a b c d e
0 1 5 8 8 9
1 3 6 3 0 6
2 7 0 1 5 1
3 6 6 5 0 4
4 4 9 4 6 1
5 7 7 5 8 3
6 0 7 2 8 2
7 4 8 3 5 5
8 8 2 0 9 2
9 4 7 1 5 1
df = df.rolling(5, min_periods=1).mean()
print (df)
a b c d e
0 1.000000 5.000000 8.00 8.000000 9.000000
1 2.000000 5.500000 5.50 4.000000 7.500000
2 3.666667 3.666667 4.00 4.333333 5.333333
3 4.250000 4.250000 4.25 3.250000 5.000000
4 4.200000 5.200000 4.20 3.800000 4.200000
5 5.400000 5.600000 3.60 3.800000 3.000000
6 4.800000 5.800000 3.40 5.400000 2.200000
7 4.200000 7.400000 3.80 5.400000 3.000000
8 4.600000 6.600000 2.80 7.200000 2.600000
9 4.600000 6.200000 2.20 7.000000 2.600000
So you want to add:
df['sa3'].fillna(df['s3'].mean(), inplace=True)
Hopefully I used the correct column names.
You can use pandas to find the rolling mean and then fill the NaNs with zero.
Use something like the following:
col = [1, 2, 3, 4, 5, 6, 7, 8, 9]
df = pd.DataFrame(col)
df['rm'] = df[0].rolling(5).mean().fillna(value=0)
print(df)
0 rm
0 1 0.0
1 2 0.0
2 3 0.0
3 4 0.0
4 5 3.0
5 6 4.0
6 7 5.0
7 8 6.0
8 9 7.0
I see that some of the answers deal with nulls by replacing them with the mean, and some create a rolling mean but do not replace the nulls with it. So I figured out the code myself and am posting it here.
df['Col'] = df['Col'].fillna(df['Col'].rolling(4, center=True, min_periods=1).mean())
4 is the length of the rolling window.
center=True indicates that the window is centered on each null, so the replacement considers values both above and below the null value.
