Finding consecutive counts of NaN in Python

I have a dataframe with missing values. I want to find the runs of consecutive missing values and count how many runs fall into each size range. Below are the sample data and the result I expect.
Sample data
Timestamp X
2018-01-02 00:00:00 6
2018-01-02 00:05:00 6
2018-01-02 00:10:00 4
2018-01-02 00:15:00 nan
2018-01-02 00:20:00 nan
2018-01-02 00:25:00 3
2018-01-02 00:30:00 4
2018-01-02 00:35:00 nan
2018-01-02 00:40:00 nan
2018-01-02 00:45:00 nan
2018-01-02 00:50:00 nan
2018-01-02 00:55:00 nan
2018-01-02 01:00:00 nan
2018-01-02 01:05:00 2
2018-01-02 01:10:00 4
2018-01-02 01:15:00 6
2018-01-02 01:20:00 6
2018-01-02 01:25:00 nan
2018-01-02 01:30:00 nan
2018-01-02 01:35:00 6
2018-01-02 01:40:00 nan
2018-01-02 01:45:00 nan
2018-01-02 01:50:00 6
2018-01-02 01:55:00 6
2018-01-02 02:00:00 nan
2018-01-02 02:05:00 nan
2018-01-02 02:10:00 nan
2018-01-02 02:15:00 3
2018-01-02 02:20:00 4
Expected result
Consecutive missing values range    Cases
0-2                                 3
3-5                                 1
6 and above                         1

First use the solution from Identifying consecutive NaN's with pandas, then filter out the 0 values, bin the run lengths with cut, and finally count values per bin with GroupBy.size:
import numpy as np
import pandas as pd

s = df.X.isna().groupby(df.X.notna().cumsum()).sum()   # length of each NaN run (0 where no gap follows)
s = s[s != 0]                                          # keep only the actual gaps
b = pd.cut(s, bins=[0, 2, 5, np.inf], labels=['0-2', '3-5', '6 and above'])
out = b.groupby(b).size().reset_index(name='Cases')
print(out)
X Cases
0 0-2 3
1 3-5 1
2 6 and above 1
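To see the intermediate steps, here is a minimal self-contained sketch on a toy series (the column name X is just reused from the sample):
import numpy as np
import pandas as pd

toy = pd.DataFrame({'X': [6, np.nan, np.nan, 3, np.nan, np.nan, np.nan, 4]})
s = toy.X.isna().groupby(toy.X.notna().cumsum()).sum()   # NaN-run length per group (0 where no gap follows)
s = s[s != 0]                                            # here: [2, 3]
b = pd.cut(s, bins=[0, 2, 5, np.inf], labels=['0-2', '3-5', '6 and above'])
print(b.groupby(b).size())                               # expected: 0-2 -> 1, 3-5 -> 1, 6 and above -> 0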

Related

How to extract the first and last value from a data sequence based on a column value?

I have a time series dataset that can be created with the following code.
idx = pd.date_range("2018-01-01", periods=100, freq="H")
ts = pd.Series(idx)
dft = pd.DataFrame(ts,columns=["date"])
dft["data"] = ""
dft["data"][0:5]= "a"
dft["data"][5:15]= "b"
dft["data"][15:20]= "c"
dft["data"][20:30]= "d"
dft["data"][30:40]= "a"
dft["data"][40:70]= "c"
dft["data"][70:85]= "b"
dft["data"][85:len(dft)]= "c"
In the data column the unique values are a, b, c, d, and they repeat in runs over different time windows. I want to capture the first and last timestamp of each such window. How can I do that?
Compute a grouper for your changing values using shift to compare consecutive rows, then use groupby+agg to get the min/max per group:
group = dft.data.ne(dft.data.shift()).cumsum()
dft.groupby(group)['date'].agg(['min', 'max'])
output:
min max
data
1 2018-01-01 00:00:00 2018-01-01 04:00:00
2 2018-01-01 05:00:00 2018-01-01 14:00:00
3 2018-01-01 15:00:00 2018-01-01 19:00:00
4 2018-01-01 20:00:00 2018-01-02 05:00:00
5 2018-01-02 06:00:00 2018-01-02 15:00:00
6 2018-01-02 16:00:00 2018-01-03 21:00:00
7 2018-01-03 22:00:00 2018-01-04 12:00:00
8 2018-01-04 13:00:00 2018-01-05 03:00:00
Edit: combining with the original data:
dft.groupby(group).agg({'data': 'first', 'date': ['min', 'max']})
output:
data date
first min max
data
1 a 2018-01-01 00:00:00 2018-01-01 04:00:00
2 b 2018-01-01 05:00:00 2018-01-01 14:00:00
3 c 2018-01-01 15:00:00 2018-01-01 19:00:00
4 d 2018-01-01 20:00:00 2018-01-02 05:00:00
5 a 2018-01-02 06:00:00 2018-01-02 15:00:00
6 c 2018-01-02 16:00:00 2018-01-03 21:00:00
7 b 2018-01-03 22:00:00 2018-01-04 12:00:00
8 c 2018-01-04 13:00:00 2018-01-05 03:00:00
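If flat column names are easier to work with downstream, named aggregation is one option (a sketch; the output names value/start/end are my own, not from the question):
out = (dft.groupby(group)
          .agg(value=('data', 'first'), start=('date', 'min'), end=('date', 'max'))
          .reset_index(drop=True))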

Pandas sum over a date range for each category separately

I have a dataframe with timeseries of sales transactions for different items:
import pandas as pd
from datetime import timedelta
df_1 = pd.DataFrame()
df_2 = pd.DataFrame()
df_3 = pd.DataFrame()
# Create datetimes and data
df_1['date'] = pd.date_range('1/1/2018', periods=5, freq='D')
df_1['item'] = 1
df_1['sales']= 2
df_2['date'] = pd.date_range('1/1/2018', periods=5, freq='D')
df_2['item'] = 2
df_2['sales']= 3
df_3['date'] = pd.date_range('1/1/2018', periods=5, freq='D')
df_3['item'] = 3
df_3['sales']= 4
df = pd.concat([df_1, df_2, df_3])
df = df.sort_values(['item'])
df
Resulting dataframe:
date item sales
0 2018-01-01 1 2
1 2018-01-02 1 2
2 2018-01-03 1 2
3 2018-01-04 1 2
4 2018-01-05 1 2
0 2018-01-01 2 3
1 2018-01-02 2 3
2 2018-01-03 2 3
3 2018-01-04 2 3
4 2018-01-05 2 3
0 2018-01-01 3 4
1 2018-01-02 3 4
2 2018-01-03 3 4
3 2018-01-04 3 4
4 2018-01-05 3 4
I want to compute a sum of "sales" for a given item over a given time window. I can't use a plain pandas rolling sum
because the timeseries is sparse (e.g. 2018-01-01 > 2018-01-04 > 2018-01-06 > etc.).
I've tried this solution (for time window = 2 days):
df['start_date'] = df['date'] - timedelta(3)
df['end_date'] = df['date'] - timedelta(1)
df['rolled_sales'] = df.apply(lambda x: df.loc[(df.date >= x.start_date) &
                                               (df.date <= x.end_date), 'sales'].sum(), axis=1)
but it results with sums of sales of all items for a given time window:
date item sales start_date end_date rolled_sales
0 2018-01-01 1 2 2017-12-29 2017-12-31 0
1 2018-01-02 1 2 2017-12-30 2018-01-01 9
2 2018-01-03 1 2 2017-12-31 2018-01-02 18
3 2018-01-04 1 2 2018-01-01 2018-01-03 27
4 2018-01-05 1 2 2018-01-02 2018-01-04 27
0 2018-01-01 2 3 2017-12-29 2017-12-31 0
1 2018-01-02 2 3 2017-12-30 2018-01-01 9
2 2018-01-03 2 3 2017-12-31 2018-01-02 18
3 2018-01-04 2 3 2018-01-01 2018-01-03 27
4 2018-01-05 2 3 2018-01-02 2018-01-04 27
0 2018-01-01 3 4 2017-12-29 2017-12-31 0
1 2018-01-02 3 4 2017-12-30 2018-01-01 9
2 2018-01-03 3 4 2017-12-31 2018-01-02 18
3 2018-01-04 3 4 2018-01-01 2018-01-03 27
4 2018-01-05 3 4 2018-01-02 2018-01-04 27
My goal is to have rolled_sales computed for each item separately, like this:
date item sales start_date end_date rolled_sales
0 2018-01-01 1 2 2017-12-29 2017-12-31 0
1 2018-01-02 1 2 2017-12-30 2018-01-01 2
2 2018-01-03 1 2 2017-12-31 2018-01-02 4
3 2018-01-04 1 2 2018-01-01 2018-01-03 6
4 2018-01-05 1 2 2018-01-02 2018-01-04 8
0 2018-01-01 2 3 2017-12-29 2017-12-31 0
1 2018-01-02 2 3 2017-12-30 2018-01-01 3
2 2018-01-03 2 3 2017-12-31 2018-01-02 6
3 2018-01-04 2 3 2018-01-01 2018-01-03 9
4 2018-01-05 2 3 2018-01-02 2018-01-04 12
0 2018-01-01 3 4 2017-12-29 2017-12-31 0
1 2018-01-02 3 4 2017-12-30 2018-01-01 4
2 2018-01-03 3 4 2017-12-31 2018-01-02 8
3 2018-01-04 3 4 2018-01-01 2018-01-03 12
4 2018-01-05 3 4 2018-01-02 2018-01-04 16
I tried to apply the solution suggested here: Pandas rolling sum for multiply values separately, but failed.
Any ideas?
Many Thanks in advance :)
Andy
Total sales with a 2-day rolling window per item:
z = df.sort_values('date').set_index('date').groupby('item').rolling('2d')['sales'].sum()
Output:
item date
1 2018-01-01 2.0
2018-01-02 4.0
2018-01-03 4.0
2018-01-04 4.0
2018-01-05 4.0
2 2018-01-01 3.0
2018-01-02 6.0
2018-01-03 6.0
2018-01-04 6.0
2018-01-05 6.0
3 2018-01-01 4.0
2018-01-02 8.0
2018-01-03 8.0
2018-01-04 8.0
2018-01-05 8.0
Name: sales, dtype: float64
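If you want that rolling total back as a column on df, one option (a sketch, assuming each (item, date) pair is unique) is to merge it back in:
rolled = z.rename('rolled_sales').reset_index()          # columns: item, date, rolled_sales
df = df.merge(rolled, on=['item', 'date'], how='left')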
Total sales from the last 2 days (here, the last 2 rows) per item:
df.groupby('item').tail(2).groupby('item')['sales'].sum()
Total sales between start_date and end_date per item:
start_date = pd.to_datetime('2017-12-2')
end_date = pd.to_datetime('2018-12-2')
df[df['date'].between(start_date, end_date)].groupby('item')['sales'].sum()
df['rolled_sum'] = (df.groupby('item')
                      .rolling('3D', on='date').sum()['sales']
                      .to_numpy())   # drop the (item, date) index so the values align positionally; df is already ordered by item and date
After some data wrangling (I removed some rows to simulate sparse dates, and added helper columns "start_date" and "end_date" for a 3-day distance from a given date), the final output looks like this:
date item sales start_date end_date rolled_sum
0 2018-01-01 1 2 2017-12-30 2018-01-01 2.0
3 2018-01-04 1 2 2018-01-02 2018-01-04 2.0
4 2018-01-05 1 2 2018-01-03 2018-01-05 4.0
7 2018-01-08 1 2 2018-01-06 2018-01-08 2.0
9 2018-01-10 1 2 2018-01-08 2018-01-10 4.0
12 2018-01-03 2 3 2018-01-01 2018-01-03 3.0
13 2018-01-04 2 3 2018-01-02 2018-01-04 6.0
15 2018-01-06 2 3 2018-01-04 2018-01-06 6.0
17 2018-01-08 2 3 2018-01-06 2018-01-08 6.0
18 2018-01-09 2 3 2018-01-07 2018-01-09 6.0
19 2018-01-10 2 3 2018-01-08 2018-01-10 9.0
21 2018-01-02 3 4 2017-12-31 2018-01-02 4.0
23 2018-01-04 3 4 2018-01-02 2018-01-04 8.0
25 2018-01-06 3 4 2018-01-04 2018-01-06 8.0
26 2018-01-07 3 4 2018-01-05 2018-01-07 8.0
27 2018-01-08 3 4 2018-01-06 2018-01-08 12.0
28 2018-01-09 3 4 2018-01-07 2018-01-09 12.0
29 2018-01-10 3 4 2018-01-08 2018-01-10 12.0
The magic was in the rolling window parameter: instead of "3", I should use "3D".
Many Thanks for Your help :)
Andy
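As a footnote on that parameter: an integer window such as rolling(3) counts rows, while an offset window such as rolling('3D', on='date') counts time, which is what makes it work with sparse dates. A minimal sketch with toy data (not the frame above):
import pandas as pd

toy = pd.DataFrame({'date': pd.to_datetime(['2018-01-01', '2018-01-04', '2018-01-05']),
                    'sales': [1, 1, 1]})
print(toy.rolling(3, on='date')['sales'].sum())     # row-based: NaN, NaN, 3.0 (needs 3 rows)
print(toy.rolling('3D', on='date')['sales'].sum())  # time-based: 1.0, 1.0, 2.0 (only rows within 3 days)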

Finding maximum null values in stretch and generating flag

I have a dataframe with a datetime index and two columns. For each particular date I have to find the maximum stretch of null values in column 'X' and replace that stretch with zero in both columns for that date. In addition, I have to create a third column named 'Flag' that carries 1 for every such zero imputation in the other two columns and 0 otherwise. In the example below, on January 1st the maximum stretch of null values is 3 in a row, so I have to replace those with zero. Similarly, I have to replicate the process for January 2nd.
Below is my sample data:
Datetime X Y
01-01-2018 00:00 1 1
01-01-2018 00:05 nan 2
01-01-2018 00:10 2 nan
01-01-2018 00:15 3 4
01-01-2018 00:20 2 2
01-01-2018 00:25 nan 1
01-01-2018 00:30 nan nan
01-01-2018 00:35 nan nan
01-01-2018 00:40 4 4
02-01-2018 00:00 nan nan
02-01-2018 00:05 2 3
02-01-2018 00:10 2 2
02-01-2018 00:15 2 5
02-01-2018 00:20 2 2
02-01-2018 00:25 nan nan
02-01-2018 00:30 nan 1
02-01-2018 00:35 3 nan
02-01-2018 00:40 nan nan
"Below is the result that I am expecting"
Datetime X Y Flag
01-01-2018 00:00 1 1 0
01-01-2018 00:05 nan 2 0
01-01-2018 00:10 2 nan 0
01-01-2018 00:15 3 4 0
01-01-2018 00:20 2 2 0
01-01-2018 00:25 0 0 1
01-01-2018 00:30 0 0 1
01-01-2018 00:35 0 0 1
01-01-2018 00:40 4 4 0
02-01-2018 00:00 nan nan 0
02-01-2018 00:05 2 3 0
02-01-2018 00:10 2 2 0
02-01-2018 00:15 2 5 0
02-01-2018 00:20 2 2 0
02-01-2018 00:25 nan nan 0
02-01-2018 00:30 nan 1 0
02-01-2018 00:35 3 nan 0
02-01-2018 00:40 nan nan 0
This question is an extension of a previous question; here is the link: Python - Find maximum null values in stretch and replacing with 0
First create consecutive groups for each column, filled with unique values:
df1 = df.isna()
df2 = df1.ne(df1.groupby(df1.index.date).shift()).cumsum().where(df1)
df2['Y'] *= len(df2)   # offset Y's group IDs so they stay distinct from X's
print (df2)
X Y
Datetime
2018-01-01 00:00:00 NaN NaN
2018-01-01 00:05:00 2.0 NaN
2018-01-01 00:10:00 NaN 36.0
2018-01-01 00:15:00 NaN NaN
2018-01-01 00:20:00 NaN NaN
2018-01-01 00:25:00 4.0 NaN
2018-01-01 00:30:00 4.0 72.0
2018-01-01 00:35:00 4.0 72.0
2018-01-01 00:40:00 NaN NaN
2018-02-01 00:00:00 6.0 108.0
2018-02-01 00:05:00 NaN NaN
2018-02-01 00:10:00 NaN NaN
2018-02-01 00:15:00 NaN NaN
2018-02-01 00:20:00 NaN NaN
2018-02-01 00:25:00 8.0 144.0
2018-02-01 00:30:00 8.0 NaN
2018-02-01 00:35:00 NaN 180.0
2018-02-01 00:40:00 10.0 180.0
Then get the group with the maximum count - here group 4:
a = df2.stack().value_counts().index[0]
print (a)
4.0
Get a mask of the matching rows, set them to 0, and for the Flag column cast the mask to integer (True/False to 1/0):
mask = df2.eq(a).any(axis=1)
df.loc[mask,:] = 0
df['Flag'] = mask.astype(int)
print (df)
X Y Flag
Datetime
2018-01-01 00:00:00 1.0 1.0 0
2018-01-01 00:05:00 NaN 2.0 0
2018-01-01 00:10:00 2.0 NaN 0
2018-01-01 00:15:00 3.0 4.0 0
2018-01-01 00:20:00 2.0 2.0 0
2018-01-01 00:25:00 0.0 0.0 1
2018-01-01 00:30:00 0.0 0.0 1
2018-01-01 00:35:00 0.0 0.0 1
2018-01-01 00:40:00 4.0 4.0 0
2018-02-01 00:00:00 NaN NaN 0
2018-02-01 00:05:00 2.0 3.0 0
2018-02-01 00:10:00 2.0 2.0 0
2018-02-01 00:15:00 2.0 5.0 0
2018-02-01 00:20:00 2.0 2.0 0
2018-02-01 00:25:00 NaN NaN 0
2018-02-01 00:30:00 NaN 1.0 0
2018-02-01 00:35:00 3.0 NaN 0
2018-02-01 00:40:00 NaN NaN 0
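One caveat with value_counts().index[0]: it picks a single stretch ID, so if two stretches tie for the maximum length only one of them gets zeroed. A sketch (my own variant of the same idea) that zeroes every stretch tied for the maximum:
counts = df2.stack().value_counts()
top_ids = counts[counts == counts.max()].index   # all stretch IDs sharing the maximal length
mask = df2.isin(top_ids).any(axis=1)
df.loc[mask, :] = 0
df['Flag'] = mask.astype(int)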
EDIT:
Added a new condition to match only dates from a list:
dates = df.index.floor('d')
filtered = ['2018-01-01','2019-01-01']
m = dates.isin(filtered)
df1 = df.isna() & m[:, None]
df2 = df1.ne(df1.groupby(dates).shift()).cumsum().where(df1)
df2['Y'] *= len(df2)
print (df2)
X Y
Datetime
2018-01-01 00:00:00 NaN NaN
2018-01-01 00:05:00 2.0 NaN
2018-01-01 00:10:00 NaN 36.0
2018-01-01 00:15:00 NaN NaN
2018-01-01 00:20:00 NaN NaN
2018-01-01 00:25:00 4.0 NaN
2018-01-01 00:30:00 4.0 72.0
2018-01-01 00:35:00 4.0 72.0
2018-01-01 00:40:00 NaN NaN
2018-02-01 00:00:00 NaN NaN
2018-02-01 00:05:00 NaN NaN
2018-02-01 00:10:00 NaN NaN
2018-02-01 00:15:00 NaN NaN
2018-02-01 00:20:00 NaN NaN
2018-02-01 00:25:00 NaN NaN
2018-02-01 00:30:00 NaN NaN
2018-02-01 00:35:00 NaN NaN
2018-02-01 00:40:00 NaN NaN
a = df2.stack().value_counts().index[0]
#solution working also if no NaNs per filtered rows (prevent IndexError: index 0 is out of bounds)
#a = next(iter(df2.stack().value_counts().index), -1)
mask = df2.eq(a).any(axis=1)
df.loc[mask,:] = 0
df['Flag'] = mask.astype(int)
print (df)
X Y Flag
Datetime
2018-01-01 00:00:00 1.0 1.0 0
2018-01-01 00:05:00 NaN 2.0 0
2018-01-01 00:10:00 2.0 NaN 0
2018-01-01 00:15:00 3.0 4.0 0
2018-01-01 00:20:00 2.0 2.0 0
2018-01-01 00:25:00 0.0 0.0 1
2018-01-01 00:30:00 0.0 0.0 1
2018-01-01 00:35:00 0.0 0.0 1
2018-01-01 00:40:00 4.0 4.0 0
2018-02-01 00:00:00 NaN NaN 0
2018-02-01 00:05:00 2.0 3.0 0
2018-02-01 00:10:00 2.0 2.0 0
2018-02-01 00:15:00 2.0 5.0 0
2018-02-01 00:20:00 2.0 2.0 0
2018-02-01 00:25:00 NaN NaN 0
2018-02-01 00:30:00 NaN 1.0 0
2018-02-01 00:35:00 3.0 NaN 0

Mapping datetime columns of two tables

I created a dataframe with only a datetime column at a 1-second interval for Jan 1, 2018, as shown in the code below.
i = pd.date_range(start='2018-01-01 00:00:00', end='2018-01-01 23:59:00', freq="1S")
ts = pd.DataFrame(index=i)
ts = ts.reset_index()
ts = ts.rename(columns={'index': 'datetime'})
df1:
datetime
0 2018-01-01 00:00:00
1 2018-01-01 00:00:01
2 2018-01-01 00:00:02
3 2018-01-01 00:00:03
4 2018-01-01 00:00:04
5 2018-01-01 00:00:05
6 2018-01-01 00:00:06
7 2018-01-01 00:00:07
8 2018-01-01 00:00:08
9 2018-01-01 00:00:09
10 2018-01-01 00:00:10
11 2018-01-01 00:00:11
12 2018-01-01 00:00:12
13 2018-01-01 00:00:13
14 2018-01-01 00:00:14
15 2018-01-01 00:00:15
16 2018-01-01 00:00:16
17 2018-01-01 00:00:17
18 2018-01-01 00:00:18
19 2018-01-01 00:00:19
20 2018-01-01 00:00:20
21 2018-01-01 00:00:21
22 2018-01-01 00:00:22
23 2018-01-01 00:00:23
24 2018-01-01 00:00:24
25 2018-01-01 00:00:25
26 2018-01-01 00:00:26
27 2018-01-01 00:00:27
28 2018-01-01 00:00:28
29 2018-01-01 00:00:29
I have another dataframe with a datetime column another columns
df2:
datetime a b c d e
0 2018-01-01 00:00:04 0.9
1 2018-01-01 00:00:06 0.6 0.7
2 2018-01-01 00:00:09 0.5 0.7 0.8
3 2018-01-01 00:00:16 2.3 3.6 4.9 5.0
4 2018-01-01 00:00:17 0.9 3.5 5.5
5 2018-01-01 00:00:23 0.1 0.6 0.0 1.7
6 2018-01-01 00:00:29 2.7 5.5 4.3
Now I am trying to map the datetime columns of df1 and df2 using pandas outer join and I would like my expected result to look like
datetime a b c d e
0 2018-01-01 00:00:00
1 2018-01-01 00:00:01
2 2018-01-01 00:00:02
3 2018-01-01 00:00:03
4 2018-01-01 00:00:04 0.9
5 2018-01-01 00:00:05
6 2018-01-01 00:00:06 0.6 0.7
7 2018-01-01 00:00:07
8 2018-01-01 00:00:08
9 2018-01-01 00:00:09 0.5 0.7 0.8
10 2018-01-01 00:00:10
11 2018-01-01 00:00:11
12 2018-01-01 00:00:12
13 2018-01-01 00:00:13
14 2018-01-01 00:00:14
15 2018-01-01 00:00:15
16 2018-01-01 00:00:16 2.3 3.6 4.9 5.0
17 2018-01-01 00:00:17 0.9 3.5 5.5
18 2018-01-01 00:00:18
19 2018-01-01 00:00:19
20 2018-01-01 00:00:20
21 2018-01-01 00:00:21
22 2018-01-01 00:00:22
23 2018-01-01 00:00:23 0.1 0.6 0.0 1.7
24 2018-01-01 00:00:24
25 2018-01-01 00:00:25
26 2018-01-01 00:00:26
27 2018-01-01 00:00:27
28 2018-01-01 00:00:28
29 2018-01-01 00:00:29 2.7 5.5 4.3
but my output looks like this
datetime a b c d e
0 2018-01-01 00:00:00
1 2018-01-01 00:00:01
2 2018-01-01 00:00:02
3 2018-01-01 00:00:03
4 2018-01-01 00:00:04
5 2018-01-01 00:00:05
6 2018-01-01 00:00:06
7 2018-01-01 00:00:07
8 2018-01-01 00:00:08
9 2018-01-01 00:00:09
10 2018-01-01 00:00:10
11 2018-01-01 00:00:11
12 2018-01-01 00:00:12
13 2018-01-01 00:00:13
14 2018-01-01 00:00:14
15 2018-01-01 00:00:15
16 2018-01-01 00:00:16
17 2018-01-01 00:00:17
18 2018-01-01 00:00:18
19 2018-01-01 00:00:19
20 2018-01-01 00:00:20
21 2018-01-01 00:00:21
22 2018-01-01 00:00:22
23 2018-01-01 00:00:23
24 2018-01-01 00:00:24
25 2018-01-01 00:00:25
26 2018-01-01 00:00:26
27 2018-01-01 00:00:27
28 2018-01-01 00:00:28
29 2018-01-01 00:00:29
30 2018-01-01 00:00:04 0.9
31 2018-01-01 00:00:06 0.6 0.7
32 2018-01-01 00:00:09 0.5 0.7 0.8
33 2018-01-01 00:00:16 2.3 3.6 4.9 5.0
34 2018-01-01 00:00:17 0.9 3.5 5.5
35 2018-01-01 00:00:23 0.1 0.6 0.0 1.7
36 2018-01-01 00:00:29 2.7 5.5 4.3
The code I am using to do that operation is:
test = pandas.merge(df1, df2, on=['datetime'], how='outer')
I am not quite sure how to approach this issue and I would appreciate it if I could get some help.
Keep ts with a datetime index and use reindex, as @Scott Boston mentioned in the comments:
i = pd.date_range(start='2018-01-01 00:00:00', end='2018-01-01 23:59:00', freq="1S")
ts = pd.DataFrame(index=i)
df2['datetime'] = pd.to_datetime(df2['datetime'])   # df2 is the second dataframe from the question
df2.set_index('datetime').reindex(ts.index)
a b c d e
2018-01-01 00:00:00 NaN NaN NaN NaN NaN
2018-01-01 00:00:01 NaN NaN NaN NaN NaN
2018-01-01 00:00:02 NaN NaN NaN NaN NaN
2018-01-01 00:00:03 NaN NaN NaN NaN NaN
2018-01-01 00:00:04 0.9
2018-01-01 00:00:05 NaN NaN NaN NaN NaN
2018-01-01 00:00:06 0.6 0.7
2018-01-01 00:00:07 NaN NaN NaN NaN NaN
2018-01-01 00:00:08 NaN NaN NaN NaN NaN
2018-01-01 00:00:09 0.5 0.7 0.8
2018-01-01 00:00:10 NaN NaN NaN NaN NaN
2018-01-01 00:00:11 NaN NaN NaN NaN NaN
2018-01-01 00:00:12 NaN NaN NaN NaN NaN
2018-01-01 00:00:13 NaN NaN NaN NaN NaN
2018-01-01 00:00:14 NaN NaN NaN NaN NaN
2018-01-01 00:00:15 NaN NaN NaN NaN NaN
2018-01-01 00:00:16 2.3 3.6 4.9 5.0
2018-01-01 00:00:17 0.9 3.5 5.5
Option 2: concat
pd.concat([ts, df2.set_index('datetime')], axis=1)
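If you would rather keep the original merge, the duplicated rows usually mean the two datetime columns do not compare equal (for example, df2's datetime is still stored as strings). A sketch of that fix, under that assumption:
df1['datetime'] = pd.to_datetime(df1['datetime'])
df2['datetime'] = pd.to_datetime(df2['datetime'])
test = pd.merge(df1, df2, on='datetime', how='left')   # one row per second from df1, with df2's values where the timestamps match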

Resampling and add threshold information in Pandas dataframe

I have a pandas dataframe at 1-minute frequency, and I want to resample it based on threshold data (there are multiple thresholds in a numpy array).
Here is an example of my dataset:
2018-01-01 00:01:00 0.867609
2018-01-01 00:02:00 0.544493
2018-01-01 00:03:00 0.958497
2018-01-01 00:04:00 0.371790
2018-01-01 00:05:00 0.470320
2018-01-01 00:06:00 0.757448
2018-01-01 00:07:00 0.198261
2018-01-01 00:08:00 0.666350
2018-01-01 00:09:00 0.392574
2018-01-01 00:10:00 0.627608
2018-01-01 00:11:00 0.414380
2018-01-01 00:12:00 0.120925
2018-01-01 00:13:00 0.559495
2018-01-01 00:14:00 0.260619
2018-01-01 00:15:00 0.982731
2018-01-01 00:16:00 0.996133
2018-01-01 00:17:00 0.410816
2018-01-01 00:18:00 0.366457
2018-01-01 00:19:00 0.927745
2018-01-01 00:20:00 0.626804
2018-01-01 00:21:00 0.223193
2018-01-01 00:22:00 0.007136
2018-01-01 00:23:00 0.245006
2018-01-01 00:24:00 0.491245
2018-01-01 00:25:00 0.215716
2018-01-01 00:26:00 0.932378
2018-01-01 00:27:00 0.366263
2018-01-01 00:28:00 0.522177
2018-01-01 00:29:00 0.614966
2018-01-01 00:30:00 0.670983
threshold=np.array([0.5,0.8,0.9])
What I want is to extract the data where it crosses the threshold values, and where it doesn't cross any threshold, just resample the data at 30 min.
Sample answer:
Threshold
2018-01-01 00:01:00 0.867609 0.8
2018-01-01 00:02:00 0.544493 0.5
2018-01-01 00:03:00 0.958497 0.9
2018-01-01 00:05:00 0.421055 NA
2018-01-01 00:06:00 0.757448 0.5
2018-01-01 00:07:00 0.198261 NA
2018-01-01 00:08:00 0.666350 0.5
2018-01-01 00:09:00 0.392574 NA
2018-01-01 00:10:00 0.627608 0.5
2018-01-01 00:12:00 0.414380 NA
2018-01-01 00:13:00 0.559495 0.5
2018-01-01 00:14:00 0.260619 NA
2018-01-01 00:15:00 0.982731 0.9
2018-01-01 00:16:00 0.996133 0.9
2018-01-01 00:18:00 0.388636 NA
2018-01-01 00:19:00 0.927745 0.9
2018-01-01 00:20:00 0.626804 0.5
2018-01-01 00:25:00 0.215716 NA
2018-01-01 00:26:00 0.932378 0.9
2018-01-01 00:27:00 0.366263 NA
2018-01-01 00:28:00 0.522177 0.5
2018-01-01 00:29:00 0.614966 0.5
2018-01-01 00:30:00 0.670983 0.5
I got the solution for resampling from @Scott Boston:
df = df.set_index(0)
g = df[1].lt(-22).mul(1).diff().bfill().ne(0).cumsum()
df.groupby(g).apply(lambda x: x.resample('1T', kind='period').mean().reset_index()
                    if (x.iloc[0] < -22).any()
                    else x.resample('30T', kind='period').mean().reset_index())\
  .reset_index(drop=True)
Use pd.cut:
threshold=np.array([0.5,0.8,0.9]).tolist()
pd.cut(df[1],bins=threshold+[np.inf],labels=threshold)
Output:
0 0.8
1 0.5
2 0.9
3 NaN
4 NaN
5 0.5
6 NaN
7 0.5
8 NaN
9 0.5
10 NaN
11 NaN
12 0.5
13 NaN
14 0.9
15 0.9
16 NaN
17 NaN
18 0.9
19 0.5
20 NaN
21 NaN
22 NaN
23 NaN
24 NaN
25 0.9
26 NaN
27 0.5
28 0.5
29 0.5
Name: 1, dtype: category
Categories (3, float64): [0.5 < 0.8 < 0.9]
Now, let's add this to the dataframe and filter out consecutive NaNs (keeping only the first of each run).
df['Threshold'] = pd.cut(df[1],bins=threshold+[np.inf],labels=threshold)
mask = ~(df.Threshold.isnull() & (df.Threshold.isnull() == df.Threshold.isnull().shift(1)))
df[mask]
Output:
0 1 Threshold
0 2018-01-01 00:01:00 0.867609 0.8
1 2018-01-01 00:02:00 0.544493 0.5
2 2018-01-01 00:03:00 0.958497 0.9
3 2018-01-01 00:04:00 0.371790 NaN
5 2018-01-01 00:06:00 0.757448 0.5
6 2018-01-01 00:07:00 0.198261 NaN
7 2018-01-01 00:08:00 0.666350 0.5
8 2018-01-01 00:09:00 0.392574 NaN
9 2018-01-01 00:10:00 0.627608 0.5
10 2018-01-01 00:11:00 0.414380 NaN
12 2018-01-01 00:13:00 0.559495 0.5
13 2018-01-01 00:14:00 0.260619 NaN
14 2018-01-01 00:15:00 0.982731 0.9
15 2018-01-01 00:16:00 0.996133 0.9
16 2018-01-01 00:17:00 0.410816 NaN
18 2018-01-01 00:19:00 0.927745 0.9
19 2018-01-01 00:20:00 0.626804 0.5
20 2018-01-01 00:21:00 0.223193 NaN
25 2018-01-01 00:26:00 0.932378 0.9
26 2018-01-01 00:27:00 0.366263 NaN
27 2018-01-01 00:28:00 0.522177 0.5
28 2018-01-01 00:29:00 0.614966 0.5
29 2018-01-01 00:30:00 0.670983 0.5
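For readability, the mask above can also be written with an explicit "first row of each NaN run" test; a sketch that should keep the same rows (every labelled row plus the first row of each unlabelled stretch):
miss = df['Threshold'].isnull()
first_of_run = miss & ~miss.shift(fill_value=False)   # NaN rows whose previous row was not NaN
df[~miss | first_of_run]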
