How to calculate the mean of a dataframe column in Python

I have this dataframe and I want to calculate the mean temperature for each day:
Dates Temp
13 2019-08-02 24.5
20 2019-08-02 24.3
27 2019-08-03 24.1
34 2019-08-03 23.7
41 2019-08-04 23.6
I used this code, which seemed right to me:
df.groupby('Dates', as_index=False)['Temp'].mean()
But the final result is this, which is clearly not the output I want, since I would like the mean temperature for every day of the year:
Dates Temp
0 2019-08-02 24.4
1 2019-08-03 23.9
2 2019-08-04 23.6
Any idea?

If the data is all within the same year, use date_range with Series.reindex:
df['Dates'] = pd.to_datetime(df['Dates'])
y = df['Dates'].dt.year.min()
r = pd.date_range(f'{y}-01-01', f'{y}-12-31', name='Dates')
df1 = df.groupby('Dates')['Temp'].mean().reindex(r).reset_index()
print (df1)
Dates Temp
0 2019-01-01 NaN
1 2019-01-02 NaN
2 2019-01-03 NaN
3 2019-01-04 NaN
4 2019-01-05 NaN
.. ... ...
360 2019-12-27 NaN
361 2019-12-28 NaN
362 2019-12-29 NaN
363 2019-12-30 NaN
364 2019-12-31 NaN
[365 rows x 2 columns]
If there are multiple years:
y1, y2 = df['Dates'].dt.year.min(), df['Dates'].dt.year.max()
r = pd.date_range(f'{y1}-01-01', f'{y2}-12-31')
df.groupby('Dates')['Temp'].mean().reindex(r).reset_index()
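As an aside, if daily means over the observed date span are enough (rather than a full calendar year), resample produces the same per-day means, with NaN for any missing days in between; a minimal sketch (the name daily is just illustrative):
df['Dates'] = pd.to_datetime(df['Dates'])
# daily mean temperature between the first and last observed date
daily = df.set_index('Dates')['Temp'].resample('D').mean().reset_index()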


ValueError: Invalid fill method. Expecting pad (ffill) or backfill (bfill). Got nearest

I have this df:
Week U.S. 30 yr FRM U.S. 15 yr FRM
0 2014-12-31 3.87 3.15
1 2015-01-01 NaN NaN
2 2015-01-02 NaN NaN
3 2015-01-03 NaN NaN
4 2015-01-04 NaN NaN
... ... ... ...
2769 2022-07-31 NaN NaN
2770 2022-08-01 NaN NaN
2771 2022-08-02 NaN NaN
2772 2022-08-03 NaN NaN
2773 2022-08-04 4.99 4.26
And when I try to run this interpolation:
pmms_df.interpolate(method = 'nearest', inplace = True)
I get ValueError: Invalid fill method. Expecting pad (ffill) or backfill (bfill). Got nearest
I read in this post that pandas interpolate doesn't do well with the time columns, so I tried this:
pmms_df[['U.S. 30 yr FRM', 'U.S. 15 yr FRM']].interpolate(method = 'nearest', inplace = True)
but the output is exactly the same as before the interpolation.
It may not work well with date columns, but it works well with a datetime index, which is probably what you should be using here. (Note also that calling interpolate(..., inplace=True) on a column subset like pmms_df[['U.S. 30 yr FRM', 'U.S. 15 yr FRM']] operates on a temporary copy, which is why the original frame looked unchanged.)
df = df.set_index('Week')
df = df.interpolate(method='nearest')
print(df)
# Output:
U.S. 30 yr FRM U.S. 15 yr FRM
Week
2014-12-31 3.87 3.15
2015-01-01 3.87 3.15
2015-01-02 3.87 3.15
2015-01-03 3.87 3.15
2015-01-04 3.87 3.15
2022-07-31 4.99 4.26
2022-08-01 4.99 4.26
2022-08-02 4.99 4.26
2022-08-03 4.99 4.26
2022-08-04 4.99 4.26
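If you would rather keep Week as a regular column, a minimal alternative sketch (assuming scipy is installed, since method='nearest' relies on it) is to interpolate only the numeric columns and assign the result back, which sidesteps the copy issue noted above:
# cols is just an illustrative name for the two rate columns
cols = ['U.S. 30 yr FRM', 'U.S. 15 yr FRM']
pmms_df[cols] = pmms_df[cols].interpolate(method='nearest')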

Find a specific value that meets conditions in Python

I am trying to create a new column with values that meet specific conditions. Below I have set out code which goes some way towards explaining the logic, but it does not produce the correct output:
import pandas as pd
import numpy as np
df = pd.DataFrame({'date': ['2019-08-06 09:00:00', '2019-08-06 12:00:00', '2019-08-06 18:00:00', '2019-08-06 21:00:00', '2019-08-07 09:00:00', '2019-08-07 16:00:00', '2019-08-08 17:00:00', '2019-08-09 16:00:00'],
                   'type': [0, 1, np.nan, 1, np.nan, np.nan, 0, 0],
                   'colour': ['blue', 'red', np.nan, 'blue', np.nan, np.nan, 'blue', 'red'],
                   'maxPixel': [255, 7346, 32, 5184, 600, 322, 72, 6000],
                   'minPixel': [86, 96, 14, 3540, 528, 300, 12, 4009],
                   'colourDate': ['2019-08-06 12:00:00', '2019-08-08 16:00:00', '2019-08-06 23:00:00', '2019-08-06 22:00:00', '2019-08-08 09:00:00', '2019-08-09 16:00:00', '2019-08-08 23:00:00', '2019-08-11 16:00:00']})
max_conditions = [(df['type'] == 1) & (df['colour'] == 'blue'),
                  (df['type'] == 1) & (df['colour'] == 'red')]
max_choices = [np.where(df['date'] <= df['colourDate'], max(df['maxPixel']), np.nan),
               np.where(df['date'] <= df['colourDate'], min(df['minPixel']), np.nan)]
df['pixelLimit'] = np.select(max_conditions, max_choices, default=np.nan)
Incorrect output:
date type colour maxPixel minPixel colourDate pixelLimit
0 2019-08-06 09:00:00 0.0 blue 255 86 2019-08-06 12:00:00 NaN
1 2019-08-06 12:00:00 1.0 red 7346 96 2019-08-08 16:00:00 12.0
2 2019-08-06 18:00:00 NaN NaN 32 14 2019-08-06 23:00:00 NaN
3 2019-08-06 21:00:00 1.0 blue 5184 3540 2019-08-06 22:00:00 6000.0
4 2019-08-07 09:00:00 NaN NaN 600 528 2019-08-08 09:00:00 NaN
5 2019-08-07 16:00:00 NaN NaN 322 300 2019-08-09 16:00:00 NaN
6 2019-08-08 17:00:00 0.0 blue 72 12 2019-08-08 23:00:00 NaN
7 2019-08-09 16:00:00 0.0 red 6000 4009 2019-08-11 16:00:00 NaN
Explanation why output is incorrect:
Value 12.0 in index row 1 for column df['pixelLimit'] is incorrect because this value comes from df['minPixel'] index row 6, which has a df['date'] datetime of 2019-08-08 17:00:00, greater than the 2019-08-08 16:00:00 df['colourDate'] datetime contained in index row 1.
Value 6000.0 in index row 3 for column df['pixelLimit'] is incorrect because this value comes from df['maxPixel'] index row 7, which has a df['date'] datetime of 2019-08-09 16:00:00, greater than the 2019-08-06 22:00:00 df['colourDate'] datetime contained in index row 3.
Correct output:
date type colour maxPixel minPixel colourDate pixelLimit
0 2019-08-06 09:00:00 0.0 blue 255 86 2019-08-06 12:00:00 NaN
1 2019-08-06 12:00:00 1.0 red 7346 96 2019-08-08 16:00:00 14.0
2 2019-08-06 18:00:00 NaN NaN 32 14 2019-08-06 23:00:00 NaN
3 2019-08-06 21:00:00 1.0 blue 5184 3540 2019-08-06 22:00:00 5184.0
4 2019-08-07 09:00:00 NaN NaN 600 528 2019-08-08 09:00:00 NaN
5 2019-08-07 16:00:00 NaN NaN 322 300 2019-08-09 16:00:00 NaN
6 2019-08-08 17:00:00 0.0 blue 72 12 2019-08-08 23:00:00 NaN
7 2019-08-09 16:00:00 0.0 red 6000 4009 2019-08-11 16:00:00 NaN
Explanation why output is correct:
Value 14.0 in index row 1 for column df['pixelLimit'] is correct because we are looking for the minimum value in column df['minPixel'] among rows whose df['date'] datetime is less than the datetime in index row 1 for column df['colourDate'] and greater than or equal to the datetime in index row 1 for column df['date'].
Value 5184.0 in index row 3 for column df['pixelLimit'] is correct because we are looking for the maximum value in column df['maxPixel'] among rows whose df['date'] datetime is less than the datetime in index row 3 for column df['colourDate'] and greater than or equal to the datetime in index row 3 for column df['date'].
Considerations:
Maybe np.select is not best suited for this task and some sort of function might serve the task better?
Also, maybe I need to create some sort of dynamic len to use as a starting point for each row?
Request
Please can anyone out there help me amend my code to achieve the correct output?
For matching problems like this, one possibility is to do the complete cross merge, then subset (using a Boolean Series) to all rows that satisfy your condition for that row, and find the max or min among all the possible matches. Since this requires slightly different columns and different functions, I split the operations into two very similar pieces of code, one to deal with 1/blue and the other with 1/red.
First some housekeeping: make things datetime.
import pandas as pd
df['date'] = pd.to_datetime(df['date'])
df['colourDate'] = pd.to_datetime(df['colourDate'])
Calculate the min pixel for 1/red between the times for each row
# Subset of rows we need to do this for
dfmin = df[df.type.eq(1) & df.colour.eq('red')].reset_index()
# To each row merge all rows from the original DataFrame
dfmin = dfmin.merge(df[['date', 'minPixel']], how='cross')
# If pd.version < 1.2 instead use:
#dfmin = dfmin.assign(t=1).merge(df[['date', 'minPixel']].assign(t=1), on='t')
# Only keep rows between the dates, then among those find the min minPixel
smin = (dfmin[dfmin.date_y.between(dfmin.date_x, dfmin.colourDate)]
          .groupby('index')['minPixel_y'].min()
          .rename('pixel_limit'))
#index
#1 14
#Name: pixel_limit, dtype: int64
# Max is basically a mirror
dfmax = df[df.type.eq(1) & df.colour.eq('blue')].reset_index()
dfmax = dfmax.merge(df[['date', 'maxPixel']], how='cross')
#dfmax = dfmax.assign(t=1).merge(df[['date', 'maxPixel']].assign(t=1), on='t')
smax = (dfmax[dfmax.date_y.between(dfmax.date_x, dfmax.colourDate)]
          .groupby('index')['maxPixel_y'].max()
          .rename('pixel_limit'))
Finally, because the above groups over the original index (i.e. 'index'), we can simply assign back to align with the original DataFrame.
df['pixel_limit'] = pd.concat([smin, smax])
date type colour maxPixel minPixel colourDate pixel_limit
0 2019-08-06 09:00:00 0.0 blue 255 86 2019-08-06 12:00:00 NaN
1 2019-08-06 12:00:00 1.0 red 7346 96 2019-08-08 16:00:00 14.0
2 2019-08-06 18:00:00 NaN NaN 32 14 2019-08-06 23:00:00 NaN
3 2019-08-06 21:00:00 1.0 blue 5184 3540 2019-08-06 22:00:00 5184.0
4 2019-08-07 09:00:00 NaN NaN 600 528 2019-08-08 09:00:00 NaN
5 2019-08-07 16:00:00 NaN NaN 322 300 2019-08-09 16:00:00 NaN
6 2019-08-08 17:00:00 0.0 blue 72 12 2019-08-08 23:00:00 NaN
7 2019-08-09 16:00:00 0.0 red 6000 4009 2019-08-11 16:00:00 NaN
If you need to bring along a lot of different information for the row with the min/max pixel, then instead of groupby min/max we will sort_values and then groupby + head or tail to get the min or max pixel. For the min this would look like (slight renaming of suffixes):
# Subset of rows we need to do this for
dfmin = df[df.type.eq(1) & df.colour.eq('red')].reset_index()
# To each row merge all rows from the original DataFrame
dfmin = dfmin.merge(df[['date', 'minPixel']].reset_index(), how='cross',
                    suffixes=['', '_match'])
# For older pandas < 1.2
#dfmin = (dfmin.assign(t=1)
# .merge(df[['date', 'minPixel']].reset_index().assign(t=1),
# on='t', suffixes=['', '_match']))
# Only keep rows between the dates, then among those find the min minPixel row.
# A bunch of renaming.
smin = (dfmin[dfmin.date_match.between(dfmin.date, dfmin.colourDate)]
          .sort_values('minPixel_match', ascending=True)
          .groupby('index').head(1)
          .set_index('index')
          .filter(like='_match')
          .rename(columns={'minPixel_match': 'pixel_limit'}))
The max would then be similar, using .tail:
dfmax = df[df.type.eq(1) & df.colour.eq('blue')].reset_index()
dfmax = dfmax.merge(df[['date', 'maxPixel']].reset_index(), how='cross',
                    suffixes=['', '_match'])
smax = (dfmax[dfmax.date_match.between(dfmax.date, dfmax.colourDate)]
          .sort_values('maxPixel_match', ascending=True)
          .groupby('index').tail(1)
          .set_index('index')
          .filter(like='_match')
          .rename(columns={'maxPixel_match': 'pixel_limit'}))
And finally we concat along axis=1 now that we need to join multiple columns to the original:
result = pd.concat([df, pd.concat([smin, smax])], axis=1)
date type colour maxPixel minPixel colourDate index_match date_match pixel_limit
0 2019-08-06 09:00:00 0.0 blue 255 86 2019-08-06 12:00:00 NaN NaN NaN
1 2019-08-06 12:00:00 1.0 red 7346 96 2019-08-08 16:00:00 2.0 2019-08-06 18:00:00 14.0
2 2019-08-06 18:00:00 NaN NaN 32 14 2019-08-06 23:00:00 NaN NaN NaN
3 2019-08-06 21:00:00 1.0 blue 5184 3540 2019-08-06 22:00:00 3.0 2019-08-06 21:00:00 5184.0
4 2019-08-07 09:00:00 NaN NaN 600 528 2019-08-08 09:00:00 NaN NaN NaN
5 2019-08-07 16:00:00 NaN NaN 322 300 2019-08-09 16:00:00 NaN NaN NaN
6 2019-08-08 17:00:00 0.0 blue 72 12 2019-08-08 23:00:00 NaN NaN NaN
7 2019-08-09 16:00:00 0.0 red 6000 4009 2019-08-11 16:00:00 NaN NaN NaN
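For comparison, the same logic can also be written as a plain per-row function with apply; this sketch (the function name pixel_limit and column pixel_limit_check are just illustrative) is slower on large frames but is handy as a cross-check of the merge-based result, and it reuses df and np from the question:
def pixel_limit(row):
    # only type 1 rows with a known colour get a limit
    if row['type'] != 1:
        return np.nan
    # rows of the full frame whose date falls between this row's date and colourDate
    in_window = df['date'].between(row['date'], row['colourDate'])
    if row['colour'] == 'blue':
        return df.loc[in_window, 'maxPixel'].max()
    if row['colour'] == 'red':
        return df.loc[in_window, 'minPixel'].min()
    return np.nan

df['pixel_limit_check'] = df.apply(pixel_limit, axis=1)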

Getting a valueerror on merging two dataframes in Pandas

I tried to merge two dataframes using pandas, but this is the error that I get:
ValueError: You are trying to merge on datetime64[ns] and datetime64[ns, UTC] columns. If you wish to proceed you should use pd.concat
I have tried different solutions found online but nothing works!! The code has been provided to me and it seems to work on other PCs but not on my computer.
This is my code:
import sys
import os
from datetime import datetime
import numpy as np
import pandas as pd
# --------------------------------------------------------------------
# -- price, consumption and production --
# --------------------------------------------------------------------
fn = '../data/np_data.csv'
if os.path.isfile(fn):
    df_data = pd.read_csv(fn, header=[0], parse_dates=[0])
else:
    sys.exit('Could not open data file {}'.format(fn))
# --------------------------------------------------------------------
# -- temp. data --
# --------------------------------------------------------------------
fn = '../data/temp.csv'
if os.path.isfile(fn):
    dtemp = pd.read_csv(fn, header=[0], parse_dates=[0])
else:
    sys.exit('Could not open data file {}'.format(fn))
# --------------------------------------------------------------------
# -- price data --
# -- first date: 2014-01-13 --
# -- last date: 2020-02-01 --
# --------------------------------------------------------------------
fn = '../data/eprice.csv'
if os.path.isfile(fn):
    eprice = pd.read_csv(fn, header=[0])
else:
    sys.exit('Could not open data file {}'.format(fn))
# --------------------------------------------------------------------
# -- combine dataframes (and save as CSV file) --
# --------------------------------------------------------------------
#
df= df_data.merge(dtemp, on='time',how='left') ## This is where I get the error.
print(df.info())
print(eprice.info())
#
# add eprice
df = df.merge(eprice, on='date', how='left')
#
# eprice available only available on trading days
# fills in missing values, last observation is used
df = df.fillna(method='ffill')
#
# keep only the relevant time period
df = df[df.date > '2014-01-23']
df = df[df.date < '2020-02-01']
df.to_csv('../data/my_data.csv',index=False)
The imported datasets look normal, with the expected numbers of columns and observations. The pandas version I have is 1.0.3.
Edit:
This is the output (df) when I first merge df_data and dtemp.
time price_sys price_no1 ... temp_no3 temp_no4 temp_no5
0 2014-01-23 00:00:00+00:00 32.08 32.08 ... NaN NaN NaN
1 2014-01-24 00:00:00+00:00 31.56 31.60 ... -2.5 -8.7 2.5
2 2014-01-24 00:00:00+00:00 30.96 31.02 ... -2.5 -8.7 2.5
3 2014-01-24 00:00:00+00:00 30.84 30.79 ... -2.5 -8.7 2.5
4 2014-01-24 00:00:00+00:00 31.58 31.10 ... -2.5 -8.7 2.5
[5 rows x 25 columns]
This is the output for eprice before I merge:
<bound method NDFrame.head of date gas price oil price coal price carbon price
0 2014-01-24 00:00:00 66.00 107.88 79.42 6.89
1 2014-01-27 00:00:00 64.20 106.69 79.43 7.04
2 2014-01-28 00:00:00 63.75 107.41 79.29 7.20
3 2014-01-29 00:00:00 63.20 107.85 78.52 7.21
4 2014-01-30 00:00:00 62.60 107.95 78.18 7.46
... ... ... ... ...
1608 2020-03-25 00:00:00 22.30 27.39 67.81 17.51
1609 2020-03-26 00:00:00 21.55 26.34 70.35 17.35
1610 2020-03-27 00:00:00 18.90 24.93 72.46 16.39
1611 2020-03-30 00:00:00 19.20 22.76 71.63 17.06
1612 2020-03-31 00:00:00 18.00 22.74 71.13 17.68
[1613 rows x 5 columns]>
This is what happens when I merge df and eprice:
<bound method NDFrame.head of date gas price oil price coal price carbon price
0 2014-01-24 00:00:00 66.00 107.88 79.42 6.89
1 2014-01-27 00:00:00 64.20 106.69 79.43 7.04
2 2014-01-28 00:00:00 63.75 107.41 79.29 7.20
3 2014-01-29 00:00:00 63.20 107.85 78.52 7.21
4 2014-01-30 00:00:00 62.60 107.95 78.18 7.46
... ... ... ... ...
1608 2020-03-25 00:00:00 22.30 27.39 67.81 17.51
1609 2020-03-26 00:00:00 21.55 26.34 70.35 17.35
1610 2020-03-27 00:00:00 18.90 24.93 72.46 16.39
1611 2020-03-30 00:00:00 19.20 22.76 71.63 17.06
1612 2020-03-31 00:00:00 18.00 22.74 71.13 17.68
[1613 rows x 5 columns]>
time price_sys ... coal price carbon price
0 2014-01-23 00:00:00+00:00 32.08 ... NaN NaN
1 2014-01-24 00:00:00+00:00 31.56 ... NaN NaN
2 2014-01-24 00:00:00+00:00 30.96 ... NaN NaN
3 2014-01-24 00:00:00+00:00 30.84 ... NaN NaN
4 2014-01-24 00:00:00+00:00 31.58 ... NaN NaN
[5 rows x 29 columns]
Try doing df['time'] = pd.to_datetime(df['time'], utc=True) on both of the time columns before joining (or rather, only the one without UTC needs to go through this!).
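For example, a minimal sketch using the column names from the question (the first merge is on 'time'):
# make both 'time' columns timezone-aware (UTC) so their dtypes match
df_data['time'] = pd.to_datetime(df_data['time'], utc=True)
dtemp['time'] = pd.to_datetime(dtemp['time'], utc=True)
df = df_data.merge(dtemp, on='time', how='left')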

Date subtraction not working with date range function

I am trying to subtract two datetimes: whenever there is a valid value for both T1 and T2, I want the difference between them. The difference is calculated by counting only the weekdays between the dates, not counting Saturday and Sunday.
The code works for only some rows. How can this be fixed?
T1 T2 Diff
0 2017-12-04 05:48:15 2018-01-05 12:15:22 NaN
1 2017-07-10 08:23:11 2018-01-05 15:28:22 NaN
2 2017-09-11 05:10:37 2018-01-29 15:02:07 NaN
3 2017-12-21 04:51:12 2018-01-29 16:06:43 NaN
4 2017-10-13 10:11:00 2018-02-22 16:19:04 NaN
5 2017-09-28 21:44:31 2018-01-29 12:42:02 NaN
6 2018-01-23 20:00:58 2018-01-29 14:40:33 NaN
7 2017-11-28 15:39:38 2018-01-31 11:57:04 NaN
8 2017-12-21 12:44:00 2018-01-31 13:12:37 30.0
9 2017-11-09 05:52:29 2018-01-22 11:42:01 53.0
10 2018-02-12 04:21:08 NaT NaN
df[['T1','T2','diff']].dtypes
T1 datetime64[ns]
T2 datetime64[ns]
diff float64
df['T1'] = pd.to_datetime(df['T1'])
df['T2'] = pd.to_datetime(df['T2'])

def fun(row):
    if row.isnull().any():
        return np.nan
    ts = pd.DataFrame(pd.date_range(row["T1"], row["T2"]), columns=["date"])
    ts["dow"] = ts["date"].dt.weekday
    return (ts["dow"] < 5).sum()

df["diff"] = df.apply(lambda x: fun(x), axis=1)
Instead of trying to check for a null value in the row, use a try/except to capture the error when it does the calculation with a null value.
This worked for me in, I think, the manner you want.
import pandas as pd
import numpy as np
df = pd.read_csv("/home/rightmire/Downloads/test.csv", sep=",")
# df = df[["m1","m2"]]
print(df)
# print(df[['m1','m2']].dtypes)
df['m1'] = pd.to_datetime(df['m1'])
df['m2'] = pd.to_datetime(df['m2'])
print(df[['m1','m2']].dtypes)
#for index, row in df.iterrows():
def fun(row):
    try:
        ts = pd.DataFrame(pd.date_range(row["m1"], row["m2"]), columns=["date"])
        # print(ts)
        ts["dow"] = ts["date"].dt.weekday
        result = (ts["dow"] < 5).sum()
        # print("Result = ", result)
        return result
    except Exception as e:
        # print("ERROR:{}".format(str(e)))
        result = np.nan
        # print("Result = ", result)
        return result
df["diff"] = df.apply(lambda x: fun(x), axis=1)
print(df["diff"])
OUTPUT OF INTEREST:
dtype: object
0 275.0
1 147.0
2 58.0
3 28.0
4 95.0
5 87.0
6 4.0
7 46.0
8 30.0
9 96.0
10 NaN
11 27.0
12 170.0
13 158.0
14 79.0
Name: diff, dtype: float64
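As an alternative to building a per-row date_range, numpy's busday_count counts weekdays directly; a minimal sketch using the T1/T2 columns from the question (the mask name valid is illustrative, and note that busday_count excludes the end date, so counts can differ by one from the inclusive apply version above):
import numpy as np

valid = df['T1'].notna() & df['T2'].notna()
df.loc[valid, 'diff'] = np.busday_count(
    df.loc[valid, 'T1'].values.astype('datetime64[D]'),
    df.loc[valid, 'T2'].values.astype('datetime64[D]'),
)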

Pandas converting timestamp and monthly summary

I have several .csv files which I am importing via pandas, and then I work out a summary of the data (min, max, mean), ideally as weekly and monthly reports. I have the following code, but I just do not seem to get the monthly summary to work; I am sure the problem is with the timestamp conversion.
What am I doing wrong?
import pandas as pd
import numpy as np
#Format of the data that is been imported
#2017-05-11 18:29:14+00:00,264.0,987.99,26.5,23.70,512.0,11.763,52.31
df = pd.read_csv('data.csv')
df['timestamp'] = pd.to_datetime(df['time'], format='%Y-%m-%d %H:%M:%S')
print('month info')
print([g for n, g in df.groupby(pd.Grouper(key='timestamp', freq='M'))])
print(df.groupby('timestamp')['light'].mean())
IIUC, you almost have it, and your datetime conversion is fine. Here is an example:
Starting from a dataframe like this (which is your example row, duplicated with slight modifications):
>>> df
time x y z a b c d
0 2017-05-11 18:29:14+00:00 264.0 947.99 24.5 53.7 511.0 11.463 12.31
1 2017-05-15 18:29:14+00:00 265.0 957.99 25.5 43.7 512.0 11.563 22.31
2 2017-05-21 18:29:14+00:00 266.0 967.99 26.5 33.7 513.0 11.663 32.31
3 2017-06-11 18:29:14+00:00 267.0 977.99 26.5 23.7 514.0 11.763 42.31
4 2017-06-22 18:29:14+00:00 268.0 997.99 27.5 13.7 515.0 11.800 52.31
You can do what you did before with your datetime:
df['timestamp'] = pd.to_datetime(df['time'], format='%Y-%m-%d %H:%M:%S')
And then get your summaries either separately:
monthly_mean = df.groupby(pd.Grouper(key='timestamp',freq='M')).mean()
monthly_max = df.groupby(pd.Grouper(key='timestamp',freq='M')).max()
monthly_min = df.groupby(pd.Grouper(key='timestamp',freq='M')).min()
weekly_mean = df.groupby(pd.Grouper(key='timestamp',freq='W')).mean()
weekly_min = df.groupby(pd.Grouper(key='timestamp',freq='W')).min()
weekly_max = df.groupby(pd.Grouper(key='timestamp',freq='W')).max()
# Examples:
>>> monthly_mean
x y z a b c d
timestamp
2017-05-31 265.0 957.99 25.5 43.7 512.0 11.5630 22.31
2017-06-30 267.5 987.99 27.0 18.7 514.5 11.7815 47.31
>>> weekly_mean
x y z a b c d
timestamp
2017-05-14 264.0 947.99 24.5 53.7 511.0 11.463 12.31
2017-05-21 265.5 962.99 26.0 38.7 512.5 11.613 27.31
2017-05-28 NaN NaN NaN NaN NaN NaN NaN
2017-06-04 NaN NaN NaN NaN NaN NaN NaN
2017-06-11 267.0 977.99 26.5 23.7 514.0 11.763 42.31
2017-06-18 NaN NaN NaN NaN NaN NaN NaN
2017-06-25 268.0 997.99 27.5 13.7 515.0 11.800 52.31
Or aggregate them all together to get a multi-indexed dataframe with your summaries:
monthly_summary = df.groupby(pd.Grouper(key='timestamp',freq='M')).agg(['mean', 'min', 'max'])
weekly_summary = df.groupby(pd.Grouper(key='timestamp',freq='W')).agg(['mean', 'min', 'max'])
# Example of summary of row 'x':
>>> monthly_summary['x']
mean min max
timestamp
2017-05-31 265.0 264.0 266.0
2017-06-30 267.5 267.0 268.0
>>> weekly_summary['x']
mean min max
timestamp
2017-05-14 264.0 264.0 264.0
2017-05-21 265.5 265.0 266.0
2017-05-28 NaN NaN NaN
2017-06-04 NaN NaN NaN
2017-06-11 267.0 267.0 267.0
2017-06-18 NaN NaN NaN
2017-06-25 268.0 268.0 268.0
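The same summaries can also be produced with resample once 'timestamp' is the index; a minimal sketch (num is just an illustrative name, and the raw 'time' string column is dropped first so only numeric columns are aggregated):
num = df.drop(columns=['time']).set_index('timestamp')
monthly_summary = num.resample('M').agg(['mean', 'min', 'max'])
weekly_summary = num.resample('W').agg(['mean', 'min', 'max'])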
