Maximum monthly values whilst retaining the date on which those values occurred - python

I have daily rainfall data that looks like the following:
Date Rainfall (mm)
1922-01-01 0.0
1922-01-02 0.0
1922-01-03 0.0
1922-01-04 0.0
1922-01-05 31.5
1922-01-06 0.0
1922-01-07 0.0
1922-01-08 0.0
1922-01-09 0.0
1922-01-10 0.0
1922-01-11 0.0
1922-01-12 9.1
1922-01-13 6.4
I am trying to work out the maximum value for each month for each year, and also what date the maximum value occurred on. I have been using the code:
rain_data.groupby(pd.Grouper(freq = 'M'))['Rainfall (mm)'].max()
This is returning the correct maximum value but returns the end date of each month rather than the date that maximum event occurred on.
1974-11-30 0.0
I have also tried using .idxmax(), but this also just returns the end date of each month.
Any suggestions on how I could get the correct date?

pd.Grouper seems to change the order within groups for a DatetimeIndex, which breaks the usual .sort_values + .tail trick. Instead, group on the year and month:
df.sort_values('Rainfall (mm)').groupby([df.Date.dt.year, df.Date.dt.month]).tail(1)
Sample Data + Output
import pandas as pd
import numpy as np
np.random.seed(123)
df = pd.DataFrame({'Date': pd.date_range('1922-01-01', freq='D', periods=100),
                   'Rainfall (mm)': np.random.randint(1,100,100)})
df.sort_values('Rainfall (mm)').groupby([df.Date.dt.month, df.Date.dt.year]).tail(1)
# Date Rainfall (mm)
#82 1922-03-24 92
#35 1922-02-05 98
#2 1922-01-03 99
#90 1922-04-01 99
The problem with pd.Grouper is that it creates a DatetimeIndex with end-of-month frequency, which we don't really need since we're using .apply. This approach does give you a clean new index and is nicely sorted by date, though!
(df.groupby(pd.Grouper(key='Date', freq='1M'))
   .apply(lambda x: x.loc[x['Rainfall (mm)'].idxmax()])
   .reset_index(drop=True))
# Date Rainfall (mm)
#0 1922-01-03 99
#1 1922-02-05 98
#2 1922-03-24 92
#3 1922-04-01 99
You can also do it with .drop_duplicates, using the first 7 characters of the date string to get the year-month:
(df.assign(ym = df.Date.astype(str).str[0:7])
   .sort_values('Rainfall (mm)')
   .drop_duplicates('ym', keep='last')
   .drop(columns='ym'))
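Another minimal sketch (my addition, not part of the answers above), using the same sample df with a Date column: let idxmax return the row label of each month's maximum and look those rows up with .loc, which keeps the original dates without any re-sorting.
# row labels of each month's maximum, then pull those rows back out
idx = df.groupby([df.Date.dt.year, df.Date.dt.month])['Rainfall (mm)'].idxmax()
df.loc[idx]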

Related

Upsampling and dividing data in pandas

I am trying to upsample a pandas datetime-indexed dataframe, so that the resulting data is equally divided over the new entries.
For instance, let's say I have a dataframe which stores a cost each month, and I want to get a dataframe which summarizes the equivalent costs per day for each month:
df = (pd.DataFrame([[pd.to_datetime('2023-01-01'), 31],
                    [pd.to_datetime('2023-02-01'), 14]],
                   columns=['time', 'cost']
                   )
      .set_index("time")
      )
Daily costs are 1$ (or whatever currency you like) in January, and 0.5$ in February.
After a lot of struggle, I managed to obtain the next code snippet which seems to do what I want:
from datetime import datetime
from dateutil.relativedelta import relativedelta

# add a value to perform a correct resampling
df.loc[df.index.max() + relativedelta(months=1)] = 0
# forward-fill over the right scale,
# then divide each entry by the number of rows in the month
df = (df
      .resample('1d')
      .ffill()
      .iloc[:-1]
      .groupby(lambda x: datetime(x.year, x.month, 1))
      .transform(lambda x: x / x.count())
      )
However, this is not entirely ok:
using transform forces me to have dataframes with a single column;
I need to hardcode my original frequency several times in different formats (when adding the extra value at the end of the dataframe, and in the groupby), which makes it hard to design a function around this;
it only works with an evenly-spaced datetime index (even if that's ok in my case);
it remains complex.
Does anyone have a suggestion to improve that code snippet?
What if we took df's month indices and expanded them into a range of days, while dividing df's values by the number of those days and assigning the result to each day, all with list comprehensions (edit: for equally distributed values per day):
import pandas as pd
# initial DataFrame
df = (pd.DataFrame([[pd.to_datetime('2023-01-01'), 31],
                    [pd.to_datetime('2023-02-01'), 14]],
                   columns=['time', 'cost']
                   ).set_index("time"))
# reformat to months
df.index = df.index.strftime('%m-%Y')
df1 = pd.concat( # concatenate the resulted DataFrames into one
    [pd.DataFrame( # make a DataFrame from a row in df
        [v / pd.Period(i).days_in_month # each month's value divided by n of days in a month
         for d in range(pd.Period(i).days_in_month)], # repeated for as many times as there are days
        index=pd.date_range(start=i, periods=pd.Period(i).days_in_month, freq='D')) # days range
     for i, v in df.iterrows()]) # for each df's index and value
df1
Output:
cost
2023-01-01 1.0
2023-01-02 1.0
2023-01-03 1.0
2023-01-04 1.0
2023-01-05 1.0
2023-01-06 1.0
2023-01-07 1.0
2023-01-08 1.0
2023-01-09 1.0
2023-01-10 1.0
2023-01-11 1.0
... ...
2023-02-13 0.5
2023-02-14 0.5
2023-02-15 0.5
2023-02-16 0.5
2023-02-17 0.5
2023-02-18 0.5
2023-02-19 0.5
2023-02-20 0.5
2023-02-21 0.5
2023-02-22 0.5
2023-02-23 0.5
2023-02-24 0.5
2023-02-25 0.5
2023-02-26 0.5
2023-02-27 0.5
2023-02-28 0.5
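As a side note (my addition), if equal division per day is all that is needed, a more vectorized sketch is possible on the original datetime-indexed df from the question (i.e. before reformatting the index to month strings); it also generalizes directly to several cost columns:
# per-day value for each month, computed from the month-start index itself
per_day = df.div(df.index.days_in_month, axis=0)
# full daily index up to the end of the last month, then forward-fill each month's daily value
full_idx = pd.date_range(df.index.min(),
                         df.index.max() + pd.offsets.MonthEnd(0),
                         freq='D')
daily = per_day.reindex(full_idx, method='ffill')
This avoids hardcoding the monthly frequency in several places, which was one of the concerns in the question.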
What could be done to avoid a uniform distribution of daily costs, and for cases with multiple columns? Here's an extended df:
# additional columns and a row
df = (pd.DataFrame([[pd.to_datetime('2023-01-01'), 31, 62, 23],
                    [pd.to_datetime('2023-02-01'), 14, 28, 51],
                    [pd.to_datetime('2023-03-01'), 16, 33, 21]],
                   columns=['time', 'cost1', 'cost2', 'cost3']
                   ).set_index("time"))
# reformat to months
df.index = df.index.strftime('%m-%Y')
df
Output:
cost1 cost2 cost3
time
01-2023 31 62 23
02-2023 14 28 51
03-2023 16 33 21
Here's what I came up with for cases where monthly costs are to be upsampled into randomized daily costs, inspired by this question. This solution scales to any number of columns and rows:
import numpy as np

df1 = pd.concat( # concatenate the resulted DataFrames into one
    [pd.DataFrame( # make a DataFrame from a row in df
        # here we make a Series with random Dirichlet distributed numbers
        # with length of a month and a column's value as the sum
        [pd.Series((np.random.dirichlet(np.ones(pd.Period(i).days_in_month), size=1)*v
                    ).flatten()) # the product is an ndarray that needs flattening
         for v in row], # for every column value in a row
        # index named from the columns because of the created DataFrame's shape
        index=df.columns
        # transpose and set the proper index
        ).T.set_index(
            pd.date_range(start=i,
                          periods=pd.Period(i).days_in_month,
                          freq='D'))
     for i, row in df.iterrows()]) # iterate over every row
Output:
cost1 cost2 cost3
2023-01-01 1.703177 1.444117 0.160151
2023-01-02 0.920706 3.664460 0.823405
2023-01-03 1.210426 1.194963 0.294093
2023-01-04 0.214737 1.286273 0.923881
2023-01-05 1.264553 0.380062 0.062829
... ... ... ...
2023-03-27 0.124092 0.615885 0.251369
2023-03-28 0.520578 1.505830 1.632373
2023-03-29 0.245154 3.094078 0.308173
2023-03-30 0.530927 0.406665 1.149860
2023-03-31 0.276992 1.115308 0.432090
90 rows × 3 columns
To assert the monthly sum:
df1.groupby(pd.Grouper(freq='M')).agg('sum')
Output:
cost1 cost2 cost3
2023-01-31 31.0 62.0 23.0
2023-02-28 14.0 28.0 51.0
2023-03-31 16.0 33.0 21.0
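As a further note (my addition, not from the answer above): if the random splits need to be reproducible, NumPy's newer Generator API can be seeded once and used in place of np.random.dirichlet, for example:
rng = np.random.default_rng(0)  # seeded generator for reproducible splits
daily_split = rng.dirichlet(np.ones(31)) * 31  # e.g. January's cost of 31 split across 31 days
daily_split.sum()  # ~31.0 (up to floating-point error)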

Appending from one dataframe to another dataframe (with different sizes) when two values match

I have two pandas dataframes where some of the values overlap, and I'd like to append columns to the original dataframe where the time_hour and origin values are the same.
Here is my original dataframe called flightsDF which is very long, it has the format:
year month origin dep_time dep_delay arr_time time_hour
2001 01 EWR 15:00 15 17:00 2013-01-01T06:00:00Z
I have another dataframe weatherDF (much shorter than flightsDF) with some extra information for some of the values in the original dataframe:
origin temp dewp humid wind_dir wind_speed precip visib time_hour
0 EWR 39.02 26.06 59.37 270.0 10.35702 0.0 10.0 2013-01-01T06:00:00Z
1 EWR 39.02 26.96 61.63 250.0 8.05546 0.0 10.0 2013-01-01T07:00:00Z
2 LGH 39.02 28.04 64.43 240.0 11.50780 0.0 10.0 2013-01-01T08:00:00Z
I'd like to append the extra information (temp, dewp, humid, ...) from weatherDF to the original dataframe where both the time_hour and origin match those in the original dataframe flightsDF.
I have tried
for x in weatherDF:
    if x['time_hour'] == flightsDF['time_hour'] & flightsDF['origin']=='EWR':
        flights_df.append(x)
and some other similar ways but I can't seem to get it working, can anyone help?
I am planning to append all the corresponding values and then drop any rows from the combined dataframe that don't have those values.
You are probably looking for pd.merge:
out = flightsDF.merge(weatherDF, on=['origin', 'time_hour'], how='left')
print(out)
# Output
year month origin dep_time dep_delay arr_time time_hour temp dewp humid wind_dir wind_speed precip visib
0 2001 1 EWR 15:00 15 17:00 2013-01-01T06:00:00Z 39.02 26.06 59.37 270.0 10.35702 0.0 10.0
If I'm right, take the time to read Pandas Merging 101.
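Since you mention planning to drop rows that have no matching weather data afterwards, here are two equivalent sketches building on the merge above (checking temp for missing values is just one choice of weather column):
# option 1: keep only the rows that matched, in a single step
out = flightsDF.merge(weatherDF, on=['origin', 'time_hour'], how='inner')

# option 2: left-merge first, then drop rows whose weather columns came back empty
out = (flightsDF.merge(weatherDF, on=['origin', 'time_hour'], how='left')
                .dropna(subset=['temp']))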

Python: Compare data against the 95th percentile of a running window dataset

I have a large DataFrame of thousands of rows but only 2 columns. The 2 columns are of the below format:
Dt          Val
2020-01-01  10.5
2020-01-01  11.2
2020-01-01  10.9
2020-01-03  11.3
2020-01-05  12.0
The first column is date and the second column is a value. For each date, there may be zero, one or more values.
What I need to do is the following: compute the 95th percentile based on the 30 days that just passed and see if the current value is above or below that 95th percentile value. There must, however, be a minimum of 50 values available over the past 30 days.
For example, if a record has date "2020-12-01" and value "10.5", then I need to first see how many values are there available for the date range 2020-11-01 to 2020-11-30. If there are at least 50 values available over that date range, then I will want to compute the 95th percentile of those values and compare 10.5 against that. If 10.5 is greater than the 95th percentile value, then the result for that record is "Above Threshold". If 10.5 is less than the 95th percentile value, then the result for that record is "Below Threshold". If there are less than 50 values over the date range 2020-11-01 to 2020-11-30, then the result for that record is "Insufficient Data".
I would like to avoid running a loop if possible, as it may be expensive from a resource and time perspective to loop through thousands of records and process them one by one. I hope someone can advise on a simple(r) python / pandas solution here.
Use rolling on a DatetimeIndex to get the number of values available and the 95th percentile over the last 30 days. Here is an example with a 3-day rolling window:
import datetime
import pandas as pd
df = pd.DataFrame({'val': [1, 2, 3, 4, 5, 6]},
                  index=[datetime.date(2020, 10, 1), datetime.date(2020, 10, 1), datetime.date(2020, 10, 2),
                         datetime.date(2020, 10, 3), datetime.date(2020, 10, 3), datetime.date(2020, 10, 4)])
df.index = pd.DatetimeIndex(df.index)
df['number_of_values'] = df.rolling('3D')['val'].count()
df['rolling_percentile'] = df.rolling('3D')['val'].quantile(0.95, interpolation='nearest')
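For the actual problem, the same pattern would presumably use the question's parameters, something like this sketch:
# 30-day window and the 95th percentile, as described in the question
df['number_of_values'] = df.rolling('30D')['val'].count()
df['rolling_percentile'] = df.rolling('30D')['val'].quantile(0.95, interpolation='nearest')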
Then you can simply do your comparison:
# Above Threshold
(df['val'] > df['rolling_percentile']) & (df['number_of_values'] >= 50)
# Below Threshold
(df['val'] < df['rolling_percentile']) & (df['number_of_values'] >= 50)
# Insufficient Data
df['number_of_values'] < 50
To exclude the current date from the window, the closed argument would not work when there is more than one row on a day, so maybe use rolling with apply instead:
import numpy as np

def f(x, metric):
    # drop all rows that share the current (last) date, then aggregate the rest
    x = x[x.index != x.index[-1]]
    if metric == 'count':
        return len(x)
    elif metric == 'percentile':
        return x.quantile(0.95, interpolation='nearest')
    else:
        return np.nan

df = pd.DataFrame({'val': [1, 2, 3, 4, 5, 6]},
                  index=[datetime.date(2020, 10, 1), datetime.date(2020, 10, 1), datetime.date(2020, 10, 2),
                         datetime.date(2020, 10, 3), datetime.date(2020, 10, 3), datetime.date(2020, 10, 4)])
df.index = pd.DatetimeIndex(df.index)
df['count'] = df.rolling('3D')['val'].apply(f, args=('count',))
df['percentile'] = df.rolling('3D')['val'].apply(f, args=('percentile',))
val count percentile
2020-10-01 1 0.0 NaN
2020-10-01 2 0.0 NaN
2020-10-02 3 2.0 2.0
2020-10-03 4 3.0 3.0
2020-10-03 5 3.0 3.0
2020-10-04 6 3.0 5.0
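On the real data (30-day window, 50-value minimum), the two intermediate columns can be turned into the requested labels with np.select. This is my own sketch, and the 'Equal to Threshold' default is just a placeholder for the case the question leaves unspecified:
conditions = [
    df['count'] < 50,              # Insufficient Data takes precedence
    df['val'] > df['percentile'],  # Above Threshold
    df['val'] < df['percentile'],  # Below Threshold
]
choices = ['Insufficient Data', 'Above Threshold', 'Below Threshold']
df['result'] = np.select(conditions, choices, default='Equal to Threshold')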

Pandas: Calculate average of values for a time frame

I am working on a large dataset that looks like this:
Time, Value
01.01.2018 00:00:00.000, 5.1398
01.01.2018 00:01:00.000, 5.1298
01.01.2018 00:02:00.000, 5.1438
01.01.2018 00:03:00.000, 5.1228
01.01.2018 00:04:00.000, 5.1168
.... , ,,,,
31.12.2018 23:59:59.000, 6.3498
The data is minute data from the first day of the year to the last day of the year.
I want to use Pandas to find the average of every 5 days.
For example:
Average from 01.01.2018 00:00:00.000 to 05.01.2018 23:59:59.000 is average for 05.01.2018
The next average will be from 02.01.2018 00:00:00.000 to 6.01.2018 23:59:59.000 is average for 06.01.2018
The next average will be from 03.01.2018 00:00:00.000 to 7.01.2018 23:59:59.000 is average for 07.01.2018
and so on... We are incrementing the day by 1 but calculating an average over the past 5 days, including the current date.
For a given day, there are 24 hours * 60 minutes = 1440 data points. So I need the average of 1440 data points * 5 days = 7200 data points.
The final DataFrame will look like this, with the time format [DD.MM.YYYY] (without hh:mm:ss), and the Value being the 5-day average including the current date:
Time, Value
05.01.2018, 5.1398
06.01.2018, 5.1298
07.01.2018, 5.1438
.... , ,,,,
31.12.2018, 6.3498
The bottom line is to calculate the average of the data over the past 5 days including today, with the average value shown as above.
I tried iterating through it with a Python loop, but I wanted something better that we can do with Pandas.
Perhaps this will work?
import numpy as np
import pandas as pd

# Create one year of random data spaced evenly in 1 minute intervals.
np.random.seed(0)  # So that others can reproduce the same result given the random numbers.
time_idx = pd.date_range(start='2018-01-01', end='2018-12-31', freq='min')
df = pd.DataFrame({'Time': time_idx, 'Value': abs(np.random.randn(len(time_idx))) + 5})
>>> df.shape
(524161, 2)
Given the dataframe with 1 minute intervals, you can take a rolling average over the past five days (5 days * 24 hours/day * 60 minutes/hour = 7200 minutes) and assign the result to a new column named rolling_5d_avg. You can then group on the original timestamps using the dt accessor method to grab the date, and then take the last rolling_5d_avg value for each date.
df = (
    df
    .assign(rolling_5d_avg=df.rolling(window=5*24*60)['Value'].mean())
    .groupby(df['Time'].dt.date)['rolling_5d_avg']
    .last()
)
>>> df.head(10)
Time
2018-01-01 NaN
2018-01-02 NaN
2018-01-03 NaN
2018-01-04 NaN
2018-01-05 5.786603
2018-01-06 5.784011
2018-01-07 5.790133
2018-01-08 5.786967
2018-01-09 5.789944
2018-01-10 5.789299
Name: rolling_5d_avg, dtype: float64
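A rougher alternative sketch (my addition, not the answer above), starting again from the original df with Time and Value columns: average each calendar day first, then take a 5-day rolling mean of the daily means. Note this weights each day equally rather than each minute, so it matches the 7200-point average exactly only when every day has the same number of readings:
# average per calendar day, then a rolling mean over 5 daily averages
daily_mean = df.set_index('Time')['Value'].resample('D').mean()
five_day_avg = daily_mean.rolling(window=5, min_periods=5).mean()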

Pandas concatenate/join/group rows in a dataframe based on date

I have a pandas dataset like this:
Date WaterTemp Discharge AirTemp Precip
0 2012-10-05 00:00 10.9 414.0 39.2 0.0
1 2012-10-05 00:15 10.1 406.0 39.2 0.0
2 2012-10-05 00:45 10.4 406.0 37.4 0.0
...
63661 2016-10-12 14:30 10.5 329.0 15.8 0.0
63662 2016-10-12 14:45 10.6 323.0 19.4 0.0
63663 2016-10-12 15:15 10.8 329.0 23 0.0
I want to extend each row so that I get a dataset that looks like:
Date WaterTemp 00:00 WaterTemp 00:15 .... Discharge 00:00 ...
0 2012-10-05 10.9 10.1 414.0
There will be at most 72 readings for each date, so I should have 288 columns in addition to the date and index columns, and at most 1460 rows (4 years * 365 days in a year, minus possibly some missing dates). Eventually, I will use the 288-column dataset in a classification task (I'll be adding the label later), so I need to convert this dataframe to a 2d array (sans datetime) to feed into the classifier, which means I can't simply group by date and then access each group. I did try grouping based on date, but I was uncertain how to change each group into a single row. I also looked at joining. It looks like joining could suit my needs (for example a join based on (day, month, year)), but I was uncertain how to split things into different pandas dataframes so that the join would work. What is a way to do this?
PS. I already know how to change the datetimes in my Date column to dates without the time.
I figured it out. I group the readings by the time of day of the reading. Each group is a dataframe in and of itself, so I then just need to concatenate the dataframes based on date. My code for the whole function is as follows.
import pandas
def readInData(filename):
    # read in files and remove missing values
    ds = pandas.read_csv(filename)
    ds = ds[ds.AirTemp != 'M']
    # set index to date
    ds['Date'] = pandas.to_datetime(ds.Date, yearfirst=True, errors='coerce')
    ds.Date = pandas.DatetimeIndex(ds.Date)
    ds.index = ds.Date
    # group by time (so group readings by time of day of reading, i.e. all readings at midnight)
    dg = ds.groupby(ds.index.time)
    # initialize the final dataframe
    df = pandas.DataFrame()
    for name, group in dg:  # each group is a dataframe
        try:
            # set unique column names except for date
            group.columns = ['Date', 'WaterTemp'+str(name), 'Discharge'+str(name), 'AirTemp'+str(name), 'Precip'+str(name)]
            # ensure date is the index
            group.index = group.Date
            # remove time from index
            group.index = group.index.normalize()
            # join based on date
            df = pandas.concat([df, group], axis=1)
        except:  # if the try/except block isn't here, this throws errors (three for my dataset?)
            pass
    # remove duplicate date columns
    df = df.loc[:, ~df.columns.duplicated()]
    # since date is the index, drop the first date column
    df = df.drop(columns='Date')
    # return the dataset
    return df
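For reference, a shorter sketch of the same reshaping using pivot_table (my own addition, not the approach above). It assumes the measurement columns are numeric once the 'M' rows are removed, and it averages any duplicate readings that fall on the same date and time:
def readInDataPivot(filename):
    # same cleaning as above
    ds = pandas.read_csv(filename)
    ds = ds[ds.AirTemp != 'M']
    ds['AirTemp'] = ds['AirTemp'].astype(float)  # 'M' rows are gone, so this should convert
    ds['Date'] = pandas.to_datetime(ds.Date, yearfirst=True, errors='coerce')
    # one row per date, one column per (measurement, time-of-day) pair;
    # duplicate readings at the same date and time are averaged
    wide = ds.pivot_table(index=ds.Date.dt.date,
                          columns=ds.Date.dt.time,
                          values=['WaterTemp', 'Discharge', 'AirTemp', 'Precip'])
    # flatten the column MultiIndex into names like 'WaterTemp 00:00:00'
    wide.columns = [f'{measure} {time}' for measure, time in wide.columns]
    return wide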
