I have a dataframe as follows:
Datetime Value
--------------------------------------------
2000-01-01 15:00:00 10
2000-01-01 16:00:00 12
2000-01-01 17:00:00 14
2000-01-01 18:00:00 16
2000-01-02 15:00:00 13
2000-01-02 16:00:00 18
2000-01-02 17:00:00 16
2000-01-02 18:00:00 15
--------------------------------------------
I want a new column containing the difference between each value and the value recorded at a specific time on the same day (say 16:00:00), like this:
Datetime Value NewColumn
--------------------------------------------
2000-01-01 15:00:00 10 -
2000-01-01 16:00:00 12 0
2000-01-01 17:00:00 14 2
2000-01-01 18:00:00 16 4
2000-01-02 15:00:00 13 -
2000-01-02 16:00:00 18 0
2000-01-02 17:00:00 16 -2
2000-01-02 18:00:00 15 -3
--------------------------------------------
I have tried the following code:
df['NewColumn'] = df.groupby('Datetime')['Value'].apply(lambda x: x - df.loc[(df['Datetime'].dt.time == dt.time(hour=16)), 'Value'])
but it raises an error:
ValueError: Buffer dtype mismatch, expected 'Python object' but got 'long long'
How should I write my code instead?
IIUC, this is what you need.
df['Datetime'] = pd.to_datetime(df['Datetime'])
# subtract each day's 16:00 value from every value of that day
df['NewColumn'] = (df.groupby(pd.Grouper(freq='D', key='Datetime'))['Value']
                     .transform(lambda x: x - x[df.loc[x.index, 'Datetime'].dt.hour == 16].iloc[0]))
# rows before 16:00 have no reference value
df.loc[df['Datetime'].dt.hour < 16, 'NewColumn'] = '-'
print(df)
Output
Datetime Value NewColumn
0 2000-01-01 15:00:00 10 -
1 2000-01-01 16:00:00 12 0
2 2000-01-01 17:00:00 14 2
3 2000-01-01 18:00:00 16 4
4 2000-01-02 15:00:00 13 -
5 2000-01-02 16:00:00 18 0
6 2000-01-02 17:00:00 16 -2
7 2000-01-02 18:00:00 15 -3
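If the groupby lambda feels opaque, here is an alternative sketch: pull out each day's 16:00 row, index it by calendar day, and map it back onto every row. The sample frame is rebuilt from the question so the snippet is self-contained.

```python
import pandas as pd

# sample frame rebuilt from the question
df = pd.DataFrame({
    'Datetime': pd.to_datetime(
        ['2000-01-01 15:00', '2000-01-01 16:00', '2000-01-01 17:00', '2000-01-01 18:00',
         '2000-01-02 15:00', '2000-01-02 16:00', '2000-01-02 17:00', '2000-01-02 18:00']),
    'Value': [10, 12, 14, 16, 13, 18, 16, 15],
})

# one reference value per calendar day: the row recorded at 16:00
at16 = df[df['Datetime'].dt.hour == 16]
ref = at16.set_index(at16['Datetime'].dt.normalize())['Value']

# map each row's day to its reference value and subtract
df['NewColumn'] = df['Value'] - df['Datetime'].dt.normalize().map(ref)
df.loc[df['Datetime'].dt.hour < 16, 'NewColumn'] = '-'
```

This avoids a Python-level function call per group, at the cost of an extra intermediate Series.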
Related
I have a dataset with 15-minute observations for different stations over 20 years. I want to know the time range over which each station has data.
station_id   start_time            end_time              observation
2            2000-01-02 01:00:00   2000-01-02 01:15:00   50
2            2000-01-02 01:15:00   2000-01-02 01:30:00   15
2            2000-02-02 01:30:00   2000-01-02 01:45:00   3
3            2000-01-02 05:00:00   2000-01-02 05:15:00   10
3            2000-01-02 05:15:00   2000-01-02 05:30:00   2
3            2000-02-03 01:00:00   2000-01-02 01:15:00   15
3            2000-02-04 01:00:00   2000-01-02 01:15:00   20
An example of what I want to get:
|station_id | start               | end                 | years | days |
| 2         | 2000-01-02 01:00:00 | 2000-01-02 01:45:00 | 1     | 1    |
| 3         | 2000-01-02 05:00:00 | 2000-01-02 01:15:00 | 1     | 1    |
Try using groupby, diff, abs, agg and assign:
# parse both timestamp columns
df[['start_time', 'end_time']] = df[['start_time', 'end_time']].apply(pd.to_datetime)
# first start and last end per station
x = df.groupby('station_id').agg({'start_time': 'first', 'end_time': 'last'})
# row-wise difference gives one Timedelta per station
temp = x.diff(axis=1).abs()['end_time']
x = x.assign(years=temp.dt.days // 365, days=temp.dt.days % 365).reset_index()
print(x)
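Since first/last depend on row order, a variant sketch with min/max may be safer if the rows are not sorted within each station. The frame below is a small assumed subset of the question's data, included only to make the snippet runnable.

```python
import pandas as pd

# small assumed subset of the data; station 3's rows are deliberately unsorted
df = pd.DataFrame({
    'station_id': [2, 2, 3, 3],
    'start_time': pd.to_datetime(['2000-01-02 01:00:00', '2000-01-02 01:15:00',
                                  '2001-03-02 05:00:00', '2000-01-02 05:15:00']),
    'end_time': pd.to_datetime(['2000-01-02 01:15:00', '2000-01-02 01:45:00',
                                '2001-03-02 05:15:00', '2000-01-02 05:30:00']),
})

# earliest start and latest end per station, regardless of row order
x = df.groupby('station_id').agg(start=('start_time', 'min'), end=('end_time', 'max'))
span = x['end'] - x['start']
x = x.assign(years=span.dt.days // 365, days=span.dt.days % 365).reset_index()
```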
I have time-series data recorded at 10-minute frequency. I want to average the values at one-hour intervals, but taking the 3 values before the hour, the value at the hour, and the 2 values after it, and assigning that average to the exact hour timestamp.
for example, I have the series
index = pd.date_range('2000-01-01T00:30:00', periods=63, freq='10min')
series = pd.Series(range(63), index=index)
series
2000-01-01 00:30:00 0
2000-01-01 00:40:00 1
2000-01-01 00:50:00 2
2000-01-01 01:00:00 3
2000-01-01 01:10:00 4
2000-01-01 01:20:00 5
2000-01-01 01:30:00 6
2000-01-01 01:40:00 7
2000-01-01 01:50:00 8
2000-01-01 02:00:00 9
2000-01-01 02:10:00 10
..
2000-01-01 08:50:00 50
2000-01-01 09:00:00 51
2000-01-01 09:10:00 52
2000-01-01 09:20:00 53
2000-01-01 09:30:00 54
2000-01-01 09:40:00 55
2000-01-01 09:50:00 56
2000-01-01 10:00:00 57
2000-01-01 10:10:00 58
2000-01-01 10:20:00 59
2000-01-01 10:30:00 60
2000-01-01 10:40:00 61
2000-01-01 10:50:00 62
Freq: 10T, Length: 63, dtype: int64
So, if I do
series.resample('1H').mean()
2000-01-01 00:00:00 1.0
2000-01-01 01:00:00 5.5
2000-01-01 02:00:00 11.5
2000-01-01 03:00:00 17.5
2000-01-01 04:00:00 23.5
2000-01-01 05:00:00 29.5
2000-01-01 06:00:00 35.5
2000-01-01 07:00:00 41.5
2000-01-01 08:00:00 47.5
2000-01-01 09:00:00 53.5
2000-01-01 10:00:00 59.5
Freq: H, dtype: float64
the first value is the average of 0, 1 and 2, assigned to hour 0; the second is the average of the values from 1:00:00 to 1:50:00, assigned to 1:00:00; and so on.
What I would like to have is the first average centered at 1:00:00 calculated using values from 00:30:00 through 01:20:00, the second centered at 02:00:00 calculated from 01:30:00 to 02:20:00 and so on...
What will be the best way to do that?
Thanks!
You should be able to do that with:
# shift the timestamps back 30 minutes so each centered window falls into one hour bin
series.index = series.index - pd.Timedelta(30, unit='m')
series_grouped_mean = series.groupby(pd.Grouper(freq='60min')).mean()
# shift the bin labels forward so each mean sits on the hour it is centered on
series_grouped_mean.index = series_grouped_mean.index + pd.Timedelta(60, unit='m')
series_grouped_mean
I got:
2000-01-01 01:00:00 2.5
2000-01-01 02:00:00 8.5
2000-01-01 03:00:00 14.5
2000-01-01 04:00:00 20.5
2000-01-01 05:00:00 26.5
2000-01-01 06:00:00 32.5
2000-01-01 07:00:00 38.5
2000-01-01 08:00:00 44.5
2000-01-01 09:00:00 50.5
2000-01-01 10:00:00 56.5
2000-01-01 11:00:00 61.0
Freq: H, dtype: float64
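In recent pandas versions (1.1+), the same centered binning can be sketched with resample's offset parameter, which shifts the bin edges to :30 so that only the labels need moving afterwards:

```python
import pandas as pd

# the question's series
index = pd.date_range('2000-01-01T00:30:00', periods=63, freq='10min')
series = pd.Series(range(63), index=index)

# bins start at :30, so each bin [h:30, h+1:30) is centered on the hour h+1;
# the left bin label (h:30) then just needs a 30-minute shift to land on the hour
centered = series.resample('60min', offset='30min').mean()
centered.index = centered.index + pd.Timedelta('30min')
```

This keeps the original series' index untouched, which matters if you need it again afterwards.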
I have this data, where "opid" is categorical:
datetime id nut opid user amount
2018-01-01 07:01:00 1531 3hrnd 1 mherrera 1
2018-01-01 07:05:00 9510 sd45f 1 svasqu 1
2018-01-01 07:06:00 8125 5s8fr 15 urubi 1
2018-01-01 07:08:15 6324 sd5d6 1 jgonza 1
2018-01-01 07:12:01 0198 tgfg5 1 julmaf 1
2018-01-01 07:13:50 6589 mbkg4 15 jdjiep 1
2018-01-01 07:16:10 9501 wurf4 15 polga 1
The result I'm looking for is something like this:
datetime opid amount
2018-01-01 07:00:00 1 3
2018-01-01 07:00:00 15 1
2018-01-01 07:10:00 1 1
2018-01-01 07:10:00 15 2
So basically I need to know how many of each "opid" occur every 10 minutes.
P.S. "amount" is always 1, and "opid" ranges from 1 to 15.
Using grouper:
df.set_index('datetime').groupby(['opid', pd.Grouper(freq='10min')]).amount.sum()
opid datetime
1 2018-01-01 07:00:00 3
2018-01-01 07:10:00 1
15 2018-01-01 07:00:00 1
2018-01-01 07:10:00 2
Name: amount, dtype: int64
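To get the flat three-column layout shown in the question rather than a MultiIndexed Series, a reset_index at the end is enough. A self-contained sketch, with the sample reduced to the relevant columns:

```python
import pandas as pd

# assumed sample, reduced to the columns the aggregation needs
df = pd.DataFrame({
    'datetime': pd.to_datetime(['2018-01-01 07:01:00', '2018-01-01 07:05:00',
                                '2018-01-01 07:06:00', '2018-01-01 07:08:15',
                                '2018-01-01 07:12:01', '2018-01-01 07:13:50',
                                '2018-01-01 07:16:10']),
    'opid': [1, 1, 15, 1, 1, 15, 15],
    'amount': [1] * 7,
})

# count events per opid per 10-minute bin, then flatten back to columns
out = (df.set_index('datetime')
         .groupby(['opid', pd.Grouper(freq='10min')])
         .amount.sum()
         .reset_index())
```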
I was wondering if there's a better way of combining two DataFrames than what I did below.
import numpy as np
import pandas as pd

# create a random data set
N = 50
df = pd.DataFrame({'date': pd.date_range('2000-1-1', periods=N, freq='H'),
                   'value': np.random.random(N)})
index = pd.DatetimeIndex(df['date'])
peak_time = df.iloc[index.indexer_between_time('7:00','9:00')]
lunch_time = df.iloc[index.indexer_between_time('12:00','14:00')]
comb_data = pd.concat([peak_time, lunch_time], ignore_index=True)
Is there a way to combine the two time ranges of between_time with a logical operator?
I need this to create a new column in df called 'isPeak' that is 1 when the time falls in either 7:00 ~ 9:00 or 12:00 ~ 14:00, and 0 otherwise.
np.union1d works for me:
import numpy as np
idx = np.union1d(index.indexer_between_time('7:00','9:00'),
index.indexer_between_time('12:00','14:00'))
comb_data = df.iloc[idx]
print (comb_data)
date value
7 2000-01-01 07:00:00 0.760627
8 2000-01-01 08:00:00 0.236474
9 2000-01-01 09:00:00 0.626146
12 2000-01-01 12:00:00 0.625335
13 2000-01-01 13:00:00 0.793105
14 2000-01-01 14:00:00 0.706873
31 2000-01-02 07:00:00 0.113688
32 2000-01-02 08:00:00 0.035565
33 2000-01-02 09:00:00 0.230603
36 2000-01-02 12:00:00 0.423155
37 2000-01-02 13:00:00 0.947584
38 2000-01-02 14:00:00 0.226181
Alternative with numpy.r_:
idx = np.r_[index.indexer_between_time('7:00','9:00'),
index.indexer_between_time('12:00','14:00')]
comb_data = df.iloc[idx]
print (comb_data)
date value
7 2000-01-01 07:00:00 0.760627
8 2000-01-01 08:00:00 0.236474
9 2000-01-01 09:00:00 0.626146
31 2000-01-02 07:00:00 0.113688
32 2000-01-02 08:00:00 0.035565
33 2000-01-02 09:00:00 0.230603
12 2000-01-01 12:00:00 0.625335
13 2000-01-01 13:00:00 0.793105
14 2000-01-01 14:00:00 0.706873
36 2000-01-02 12:00:00 0.423155
37 2000-01-02 13:00:00 0.947584
38 2000-01-02 14:00:00 0.226181
Pure pandas solution with Index.union and convert array to index:
idx = (pd.Index(index.indexer_between_time('7:00','9:00'))
.union(pd.Index(index.indexer_between_time('12:00','14:00'))))
comb_data = df.iloc[idx]
print (comb_data)
date value
7 2000-01-01 07:00:00 0.760627
8 2000-01-01 08:00:00 0.236474
9 2000-01-01 09:00:00 0.626146
12 2000-01-01 12:00:00 0.625335
13 2000-01-01 13:00:00 0.793105
14 2000-01-01 14:00:00 0.706873
31 2000-01-02 07:00:00 0.113688
32 2000-01-02 08:00:00 0.035565
33 2000-01-02 09:00:00 0.230603
36 2000-01-02 12:00:00 0.423155
37 2000-01-02 13:00:00 0.947584
38 2000-01-02 14:00:00 0.226181
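For the 'isPeak' column itself, positional indexers aren't needed at all; a boolean mask over the times can be ORed directly. A sketch under the question's setup (a fixed seed is added here only for reproducibility):

```python
import datetime as dt
import numpy as np
import pandas as pd

# same setup as the question, with a seeded generator for reproducibility
N = 50
df = pd.DataFrame({'date': pd.date_range('2000-1-1', periods=N, freq='H'),
                   'value': np.random.default_rng(0).random(N)})

# combine the two windows with | and flag the rows
t = df['date'].dt.time
mask = t.between(dt.time(7), dt.time(9)) | t.between(dt.time(12), dt.time(14))
df['isPeak'] = mask.astype(int)
```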
I want to resample the following pandas Series:
import pandas as pd

index_1 = pd.date_range('1/1/2000', periods=4, freq='T')
index_2 = pd.date_range('1/2/2000', periods=3, freq='T')
series = pd.concat([pd.Series(range(4), index=index_1),
                    pd.Series(range(3), index=index_2)])
print(series)
>>>2000-01-01 00:00:00 0
2000-01-01 00:01:00 1
2000-01-01 00:02:00 2
2000-01-01 00:03:00 3
2000-01-02 00:00:00 0
2000-01-02 00:01:00 1
2000-01-02 00:02:00 2
such that the resulting Series only contains every second entry, i.e.
>>>2000-01-01 00:00:00 0
2000-01-01 00:02:00 2
2000-01-02 00:00:00 0
2000-01-02 00:02:00 2
Using the (poorly documented, in my view) resample method of pandas in the following way:
resampled_series = series.resample('2T', closed='right').mean()
print(resampled_series)
I get
>>>1999-12-31 23:58:00 0.0
2000-01-01 00:00:00 1.5
2000-01-01 00:02:00 3.0
2000-01-01 00:04:00 NaN
2000-01-01 00:56:00 NaN
...
2000-01-01 23:54:00 NaN
2000-01-01 23:56:00 NaN
2000-01-01 23:58:00 0.0
2000-01-02 00:00:00 1.5
2000-01-02 00:02:00 3.0
Why does it start 2 minutes earlier than the original series? Why does it contain all the time steps in between, which are not in the original series? How can I get my desired result?
resample() is not the right function for your purpose. Try this instead:
series[series.index.minute % 2 == 0]
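For completeness, a runnable sketch of that selection, with the question's series rebuilt via pd.concat (Series.append was removed in pandas 2.0):

```python
import pandas as pd

index_1 = pd.date_range('1/1/2000', periods=4, freq='T')
index_2 = pd.date_range('1/2/2000', periods=3, freq='T')
series = pd.concat([pd.Series(range(4), index=index_1),
                    pd.Series(range(3), index=index_2)])

# resample() always builds a complete, evenly spaced grid of bins over the whole
# time span and aggregates into it -- hence the NaN rows and the shifted first bin.
# Plain boolean selection keeps only the timestamps that actually exist.
result = series[series.index.minute % 2 == 0]
```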