I have a pandas dataframe whose index is a datetime with hourly precision. I want to create a new column that compares the value of the "Sales" column at each hour with the value at the exact same time one week earlier.
I know it can be written using the shift function:
df['compare'] = df['Sales'] - df['Sales'].shift(7*24)
But I wonder how I can take advantage of the datetime format of the index. Is there an alternative to shift(7*24) when the index is a DatetimeIndex?
Try shifting by a frequency instead of by a row count:
df['Sales'].shift(7, freq='D')
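A minimal sketch of the difference (with made-up hourly data, since none was given): shifting by freq moves the timestamps themselves, so the subtraction aligns on the index and missing hours can't silently misalign the comparison.
import numpy as np
import pandas as pd

idx = pd.date_range('2023-01-01', periods=24 * 21, freq='h')
df = pd.DataFrame({'Sales': np.random.rand(len(idx))}, index=idx)

# Row-based shift: assumes no missing hours in the index.
df['compare_rows'] = df['Sales'] - df['Sales'].shift(7 * 24)

# Frequency-based shift: aligns on timestamps, not row positions.
df['compare_freq'] = df['Sales'] - df['Sales'].shift(7, freq='D')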
I have a dataframe indexed on 30-minute data from Monday to Friday. There may be some missing dates (possibly because of holidays), but I would like to find the highest value from the column high and the lowest value from the column low over the previous week. For example, if I am calculating today, I want the previous week's high and low (marked in yellow in the attached image).
I tried using rolling and resampling, but somehow it's not working. Can anyone help?
You really should add sample data to your question (by that I mean a piece of code/text that can easily be used to create a dataframe for illustrating how the proposed solution works).
Here's a suggestion, with df your dataframe and column datetime containing datetimes (not strings):
df["week"] = (
df["datetime"].dt.isocalendar().year.astype(str)
+ df["datetime"].dt.isocalendar().week.astype(str)
)
mask = df["high"] == df.groupby("week")["high"].transform("max")
df = df.merge(
df[mask].rename(columns={"low": "high_low"})
.groupby("week").agg({"high_low": "min"}).shift(),
on="week", how="left"
).drop(columns="week")
Add a week column to df (year + zero-padded week) for grouping along weeks.
Extract the rows with the weekly maximum highs via mask (there could be more than one per week).
Build a corresponding dataframe with the weekly minimum of the lows among those rows (column named high_low), shift it once to get the previous week's value, and merge it onto df.
If column datetime doesn't contain datetimes:
df["datetime"] = pd.to_datetime(df["datetime"])
If I have understood correctly, the solution should be:
get the week number from the date;
group by the week number and fetch the max and min values;
group by the week and fetch the max date to get the last date of each week;
now merge all the dataframes into one on the date key.
Once these steps are done, you could do any formatting as required; a rough sketch of the steps follows.
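A hedged sketch of those steps (column names datetime, high and low are assumed from the question; adapt the keys to your data):
# Sketch only, not a drop-in solution.
wk = df["datetime"].dt.isocalendar()
df["week"] = wk["year"].astype(str) + "-" + wk["week"].astype(str).str.zfill(2)

weekly = df.groupby("week").agg(
    week_high=("high", "max"),      # step 2: weekly max
    week_low=("low", "min"),        # step 2: weekly min
    last_date=("datetime", "max"),  # step 3: last date of each week
)

# Step 4: shift one row so each week sees the previous week's extremes,
# then merge back onto the original rows.
df = df.merge(weekly[["week_high", "week_low"]].shift(), on="week", how="left")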
I have a dataframe, called PORResult, of daily temperatures where rows are years and each column is a day (121 rows x 365 columns). I also have an array, called Percentile_90, of a threshold temperature for each day (length = 365). For every day of every year in the PORResult dataframe, I want to find out whether the value for that day is higher than the value for that day in the Percentile_90 array, and store the results in a new dataframe, called Count (121 rows x 365 columns). To start, the Count dataframe is full of zeros; if the daily value in PORResult is greater than the daily value in Percentile_90, I want to change the daily value in Count to 1.
This is what I'm starting with:
for i in range(len(PORResult)):
    if PORResult.loc[i] > Percentile_90[i]:
        CountResult[i] += 1
But when I try this I get KeyError: 0. What else can I try?
Depending on your data structure, I think
CountResult = PORResult.gt(Percentile_90, axis=1).astype(int)
(Since Percentile_90 holds one threshold per day and the days are the columns, the comparison should broadcast along axis=1.)
should do the trick. Generally, the toolset provided in pandas is rich enough that for-looping over a dataframe is rarely necessary (as well as remarkably inefficient).
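A small self-contained check (toy shapes, 3 years x 5 days, standing in for 121 x 365):
import numpy as np
import pandas as pd

PORResult = pd.DataFrame(np.random.rand(3, 5))  # rows: years, columns: days
Percentile_90 = np.random.rand(5)               # one threshold per day

CountResult = PORResult.gt(Percentile_90, axis=1).astype(int)
print(CountResult)  # 3 x 5 frame of 0s and 1s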
I'm scratching my head over a pandas slicing problem.
I have a dataframe with a datetime column and a datetime index (every 15 minutes). I want to create a new column with the same frequency (15 min) that contains the max value of another column over the previous day (the same value repeated for each row within the same day).
My dataframe is called klines. I know that I can get the date of each row with
klines['date']=klines['timedate'].dt.date
I know I can create a timedelta with the timedelta function, but I can't figure out how to use it with the datetime objects.
I was hoping that something like
klines['max_prev_day']=klines[klines['Close time'].dt.date+dt.timedelta(days = -1):klines['Close time'].dt.date]['value_to_look'].max()
But I'm getting an indexing error:
raise InvalidIndexError(key)
Any clever input is welcome!
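One possible approach, sketched with the question's column names 'Close time' and 'value_to_look' (an assumption on my part, not a tested answer): compute each calendar day's max once, shift it by one day, and map it back onto the 15-minute rows.
# Sketch only; adapt column names as needed.
day = klines['Close time'].dt.normalize()          # midnight timestamp of each row's day
daily_max = klines.groupby(day)['value_to_look'].max()
klines['max_prev_day'] = day.map(daily_max.shift(1, freq='D'))
Here shift(1, freq='D') looks up the previous calendar day (NaN if it's absent); a plain shift(1) would instead give the previous day present in the data.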
This question already has answers here:
Subtract a year from a datetime column in pandas
(4 answers)
Closed 4 years ago.
I have a pandas Time Series (called df) that has one column (with name data) that contains data with a daily frequency over a time period of 5 years. The following code produces some random data:
import pandas as pd
import numpy as np
df_index = pd.date_range('01-01-2012', periods=5 * 365 + 2, freq='D')
df = pd.DataFrame({'data': np.random.rand(len(df_index))}, index=df_index)
I want to perform a simple yearly trend decomposition, where for each day I subtract its value one year ago. Additionally, I want to account for leap years in the subtraction. Is there an elegant way to do that? My way is to compute differences with 365 and 366 days and assign them to new columns.
df['diff_365'] = df['data'].diff(365)
df['diff_366'] = df['data'].diff(366)
Afterwards, I apply a function to each row that selects the right value based on whether the same date from last year is 365 or 366 days ago.
def decide(row):
    # 59 days = 1st January to 28th February; check whether that date falls in a leap year
    if (row.name - pd.Timedelta(days=59)).is_leap_year:
        return row['diff_366']
    else:
        return row['diff_365']
df['yearly_diff'] = df[['diff_365', 'diff_366']].apply(decide, axis=1)
Explanation: the function decide takes as argument a row of the DataFrame consisting of the columns diff_365 and diff_366 (along with the DatetimeIndex). The expression row.name returns the date of the row, from which 59 days are subtracted: the number of days from 1st January to 28th February. Based on whether the resulting date falls in a leap year, the value from the diff_366 column is returned, otherwise the value from the diff_365 column.
This took 8 lines, and it feels like the subtraction could be performed in one or two. I tried to apply a similar function directly to the data column (via apply with the default axis=0), but in that case I cannot take my DatetimeIndex into account. Is there a better way to perform the subtraction?
You may not need to worry about dealing with leap years explicitly. When you construct a DatetimeIndex, you can specify start and end parameters. As per the docs:
Of the four parameters start, end, periods, and freq, exactly three
must be specified.
Here's an example of how you can restructure your logic:
df_index = pd.date_range(start='01-01-2012', end='12-31-2016', freq='D')
df = pd.DataFrame({'data': np.random.rand(len(df_index))}, index=df_index)
df['yearly_diff'] = df['data'] - (df_index - pd.DateOffset(years=1)).map(df['data'].get)
Explanation
We construct a DatetimeIndex object by supplying start, end and freq arguments.
Subtract 1 year from your index by subtracting pd.DateOffset(years=1).
Use pd.Series.map to map these 1yr behind dates to data.
Subtract the resulting series from the original data series.
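As a quick sanity check on the leap-year handling (this is standard pd.DateOffset behaviour, not specific to this answer): subtracting a year from 29 February rolls back to 28 February, so every date still finds a partner.
import pandas as pd

print(pd.Timestamp('2016-02-29') - pd.DateOffset(years=1))  # 2015-02-28 00:00:00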
Currently I'm generating a DatetimeIndex using a certain function, zipline.utils.tradingcalendar.get_trading_days. The time series is roughly daily but with some gaps.
My goal is to get the last date in the DatetimeIndex for each month.
.to_period('M') and .to_timestamp('M') don't work, since they give the last day of the month rather than the last date present in the index for each month.
As an example, if this is my time series I would want to select '2015-05-29' while the last day of the month is '2015-05-31'.
['2015-05-18', '2015-05-19', '2015-05-20', '2015-05-21',
'2015-05-22', '2015-05-26', '2015-05-27', '2015-05-28',
'2015-05-29', '2015-06-01']
Condla's answer came closest to what I needed, except that since my time index stretched over more than a year, I needed to group by both month and year and then select the maximum date. Below is the code I ended up with.
# tempTradeDays is the initial DatetimeIndex
dateRange = []
tempYear = None
dictYears = tempTradeDays.groupby(tempTradeDays.year)
for yr in dictYears.keys():
    tempYear = pd.DatetimeIndex(dictYears[yr]).groupby(pd.DatetimeIndex(dictYears[yr]).month)
    for m in tempYear.keys():
        dateRange.append(max(tempYear[m]))
dateRange = pd.DatetimeIndex(dateRange).sort_values()  # .order() was removed in newer pandas
Suppose your data frame has a DatetimeIndex. Then the following code will give you the last day of each month:
df_monthly = df.reset_index().groupby([df.index.year, df.index.month], as_index=False).last().set_index('index')
This one-line solution does its job :)
My strategy would be to group by month and then select the "maximum" of each group:
If "dt" is your DatetimeIndex object:
last_dates_of_the_month = []
dt_month_group_dict = dt.groupby(dt.month)
for month in dt_month_group_dict:
    last_date = max(dt_month_group_dict[month])
    last_dates_of_the_month.append(last_date)
The list "last_date_of_the_month" contains all occuring last dates of each month in your dataset. You can use this list to create a DatetimeIndex in pandas again (or whatever you want to do with it).
This is an old question, but none of the existing answers is perfect. This is the solution I came up with (assuming the date index is sorted), which could even be written in one line, but I split it for readability:
month1 = pd.Series(apple.index.month)
month2 = pd.Series(apple.index.month).shift(-1)
mask = (month1 != month2)
apple[mask.values].head(10)
A few notes here:
Shifting requires a pd.Series instance, so the index's month values are wrapped in pd.Series before shifting.
Boolean-mask indexing needs .values here, because the mask carries a plain integer index that doesn't align with apple's DatetimeIndex.
By the way, when the dates are business days, it'd be easier to use resampling: apple.resample('BM').last()
Maybe the answer is not needed anymore, but while searching for an answer to the same question, I found what may be a simpler solution:
import pandas as pd
sample_dates = pd.date_range(start='2010-01-01', periods=100, freq='B')
month_end_dates = sample_dates[sample_dates.is_month_end]
Try this to create a new diff column, where the value 1 marks the change from one month to the next:
import numpy as np

df['diff'] = np.where(df['Date'].dt.month.diff() != 0, 1, 0)
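To tie this back to the original question (a hypothetical follow-up, not part of the answer above): the last row of each month is the one just before a change, so select rows whose next flag is 1, counting the final row as a month end.
# Hypothetical: rows where the *next* row starts a new month,
# plus the very last row (via fill_value=1).
last_of_month = df[df['diff'].shift(-1, fill_value=1) == 1]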