I have a pandas dataframe:
Date Party Status
-------------------------------------------
0 01-01-2018 John Sent
1 13-01-2018 Lisa Received
2 15-01-2018 Will Received
3 19-01-2018 Mark Sent
4 02-02-2018 Will Sent
5 28-02-2018 John Received
I would like to add new columns that work like a .cumsum(), but conditional on the dates: each row should count only the rows from the past 30 days. It would look like this:
Num of Sent Num of Received
Date Party Status in Past 30 Days in Past 30 Days
-----------------------------------------------------------------------------------
0 01-01-2018 John Sent 1 0
1 13-01-2018 Lisa Received 1 1
2 15-01-2018 Will Received 1 2
3 19-01-2018 Mark Sent 2 2
4 02-02-2018 Will Sent 2 2
5 28-02-2018 John Received 1 1
I managed to implement what I need by writing the following code:
def inner_func(date_var, status_var, date_array, status_array):
    sent_increment = 0
    received_increment = 0
    for k in range(0, len(date_array)):
        if (date_var - date_array[k]).days <= 30:
            if status_array[k] == "Sent":
                sent_increment += 1
            elif status_array[k] == "Received":
                received_increment += 1
    return sent_increment, received_increment
import pandas as pd
import time
df = pd.DataFrame({"Date": pd.to_datetime(["01-01-2018", "13-01-2018", "15-01-2018", "19-01-2018", "02-02-2018", "28-02-2018"], dayfirst=True),
                   "Party": ["John", "Lisa", "Will", "Mark", "Will", "John"],
                   "Status": ["Sent", "Received", "Received", "Sent", "Sent", "Received"]})
df = df.sort_values("Date")
date_array = []
status_array = []
for i in range(0, len(df)):
    date_var = df.loc[i, "Date"]
    date_array.append(date_var)
    status_var = df.loc[i, "Status"]
    status_array.append(status_var)
    sent_count, received_count = inner_func(date_var, status_var, date_array, status_array)
    df.loc[i, "Num of Sent in Past 30 days"] = sent_count
    df.loc[i, "Num of Received in Past 30 days"] = received_count
However, the process is computationally expensive and painfully slow when df is large, since for every row the inner function scans all the rows seen so far (roughly O(n²)). Is there a more pythonic way to implement what I am trying to achieve without iterating through the dataframe in this way?
Update 2
Michael has provided the solution to what I am looking for: here. Let's assume that I want to apply the solution to groupby objects, for example using the rolling solution to compute the cumulative sums for each party:
Sent past 30 Received past 30
Date Party Status days by party days by party
-----------------------------------------------------------------------------------
0 01-01-2018 John Sent 1 0
1 13-01-2018 Lisa Received 0 1
2 15-01-2018 Will Received 0 1
3 19-01-2018 Mark Sent 1 0
4 02-02-2018 Will Sent 1 1
5 28-02-2018 John Received 0 1
I have attempted to reproduce the solution using the groupby method below:
l = []
grp_obj = df.groupby("Party")
grp_obj.rolling('30D', min_periods=1)["dummy"].apply(lambda x: l.append(x.value_counts()) or 0)
df.reset_index(inplace=True)
But I ended up with incorrect values. I know that this happens because the concat method combines the dataframes without considering their indices, since groupby orders the data differently. Is there a way I can modify the list appending to include the original index, such that I can merge/join the value_counts dataframe to the original one?
If you set Date as the index and convert Status temporarily to a categorical, you can use rolling with a little trick.
df = df.set_index('Date')
df['dummy'] = df['Status'].astype('category', copy=False).cat.codes  # Received -> 0, Sent -> 1 (alphabetical category order)
l = []
df.rolling('30D', min_periods=1)['dummy'].apply(lambda x: l.append(x.value_counts()) or 0)
df.reset_index(inplace=True)
pd.concat(
    [df,
     (pd.DataFrame(l)
      .rename(columns={1.0: "Sent past 30 Days", 0.0: "Received past 30 Days"})
      .fillna(0)
      .astype('int'))
    ], axis=1).drop(columns='dummy')
Out:
Date Party Status Received past 30 Days Sent past 30 Days
0 2018-01-01 John Sent 0 1
1 2018-01-13 Lisa Received 1 1
2 2018-01-15 Will Received 2 1
3 2018-01-19 Mark Sent 2 2
4 2018-02-02 Will Sent 2 2
5 2018-02-28 John Received 1 1
Maintaining an original index to allow subsequent merging
Slightly adjust the data so that the Date order differs from the index order:
df = pd.DataFrame({"Date": pd.to_datetime(["01-01-2018", "13-01-2018", "03-01-2018", "19-01-2018", "08-02-2018", "22-02-2018"]),
                   "Party": ["John", "Lisa", "Will", "Mark", "Will", "John"],
                   "Status": ["Sent", "Received", "Received", "Sent", "Sent", "Received"]})
df
Out:
Date Party Status
0 2018-01-01 John Sent
1 2018-01-13 Lisa Received
2 2018-03-01 Will Received
3 2018-01-19 Mark Sent
4 2018-08-02 Will Sent
5 2018-02-22 John Received
Store the original index after sorting by Date, and restore it after operating on the Date-sorted dataframe:
df = df.sort_values('Date')
df = df.reset_index()
df = df.set_index('Date')
df['dummy'] = df['Status'].astype('category', copy=False).cat.codes
l = []
df.rolling('30D', min_periods=1)['dummy'].apply(lambda x: l.append(x.value_counts()) or 0)
df.reset_index(inplace=True)
df = pd.concat(
    [df,
     (pd.DataFrame(l)
      .rename(columns={1.0: "Sent past 30 Days", 0.0: "Received past 30 Days"})
      .fillna(0)
      .astype('int'))
    ], axis=1).drop(columns='dummy')
df.set_index('index')
Out:
Date Party Status Received past 30 Days Sent past 30 Days
index
0 2018-01-01 John Sent 0 1
1 2018-01-13 Lisa Received 1 1
3 2018-01-19 Mark Sent 1 2
5 2018-02-22 John Received 1 0
2 2018-03-01 Will Received 2 0
4 2018-08-02 Will Sent 0 1
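With the original labels restored, any additional per-row data can be joined back with a plain index join. A minimal sketch, where extra is a hypothetical frame indexed by the original row labels:

extra = pd.DataFrame({"Region": ["EU", "US", "EU", "US", "EU", "US"]}, index=range(6))  # hypothetical
merged = df.set_index('index').join(extra)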
Counting values in groups
Sort by Party and Date first to get the right order to append the grouped counts
df = pd.DataFrame({"Date": pd.to_datetime(["01-01-2018", "13-01-2018", "15-01-2018", "19-01-2018", "02-02-2018", "28-02-2018"], dayfirst=True),
                   "Party": ["John", "Lisa", "Will", "Mark", "Will", "John"],
                   "Status": ["Sent", "Received", "Received", "Sent", "Sent", "Received"]})
df = df.sort_values(['Party','Date'])
After that, reset the index before the concat so that the counts are appended to the right rows.
df = df.set_index('Date')
df['dummy'] = df['Status'].astype('category', copy=False).cat.codes
l = []
df.groupby('Party').rolling('30D', min_periods=1)['dummy'].apply(lambda x: l.append(x.value_counts()) or 0)
df.reset_index(inplace=True)
pd.concat(
    [df,
     (pd.DataFrame(l)
      .rename(columns={1.0: "Sent past 30 Days", 0.0: "Received past 30 Days"})
      .fillna(0)
      .astype('int'))
    ], axis=1).drop(columns='dummy').sort_values('Date')
Out:
Date Party Status Received past 30 Days Sent past 30 Days
0 2018-01-01 John Sent 0 1
2 2018-01-13 Lisa Received 1 0
4 2018-01-15 Will Received 1 0
3 2018-01-19 Mark Sent 0 1
5 2018-02-02 Will Sent 1 1
1 2018-02-28 John Received 1 0
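To address the merging concern from Update 2 directly, the two ideas above can be combined: keep the pre-sort index before sorting by Party and Date, and restore it at the end so the per-party counts line up with the original rows. A minimal sketch, assuming the original df from the question:

df = df.sort_values(['Party', 'Date']).reset_index()   # 'index' keeps the original row labels
df = df.set_index('Date')
df['dummy'] = df['Status'].astype('category', copy=False).cat.codes
l = []
df.groupby('Party').rolling('30D', min_periods=1)['dummy'].apply(lambda x: l.append(x.value_counts()) or 0)
df.reset_index(inplace=True)
counts = (pd.DataFrame(l)
          .rename(columns={1.0: "Sent past 30 Days", 0.0: "Received past 30 Days"})
          .fillna(0)
          .astype('int'))
result = (pd.concat([df, counts], axis=1)
          .drop(columns='dummy')
          .set_index('index')   # back to the original labels
          .sort_index())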
Micro-Benchmark
Since this solution also iterates over the dataset, I compared the running times of both approaches. Only very small datasets were used because the original solution's runtime grows quickly.
Results
(perfplot chart not reproduced here; it shows the for-loop solution's runtime growing much faster than the rolling solution's as len(df) increases.)
Code to reproduce the benchmark
import pandas as pd
import perfplot
def makedata(n=1):
    df = pd.DataFrame({"Date": pd.to_datetime(["01-01-2018", "13-01-2018", "15-01-2018", "19-01-2018", "02-02-2018", "28-02-2018"]*n, dayfirst=True),
                       "Party": ["John", "Lisa", "Will", "Mark", "Will", "John"]*n,
                       "Status": ["Sent", "Received", "Received", "Sent", "Sent", "Received"]*n})
    return df.sort_values("Date")
def rolling(df):
    df = df.set_index('Date')
    df['dummy'] = df['Status'].astype('category', copy=False).cat.codes
    l = []
    df.rolling('30D', min_periods=1)['dummy'].apply(lambda x: l.append(x.value_counts()) or 0)
    df.reset_index(inplace=True)
    return pd.concat(
        [df,
         (pd.DataFrame(l)
          .rename(columns={1.0: "Sent past 30 Days", 0.0: "Received past 30 Days"})
          .fillna(0)
          .astype('int'))
        ], axis=1).drop(columns='dummy')
def forloop(df):
    date_array = []
    status_array = []

    def inner_func(date_var, status_var, date_array, status_array):
        sent_increment = 0
        received_increment = 0
        for k in range(0, len(date_array)):
            if (date_var - date_array[k]).days <= 30:
                if status_array[k] == "Sent":
                    sent_increment += 1
                elif status_array[k] == "Received":
                    received_increment += 1
        return sent_increment, received_increment

    for i in range(0, len(df)):
        date_var = df.loc[i, "Date"]
        date_array.append(date_var)
        status_var = df.loc[i, "Status"]
        status_array.append(status_var)
        sent_count, received_count = inner_func(date_var, status_var, date_array, status_array)
        df.loc[i, "Num of Sent in Past 30 days"] = sent_count
        df.loc[i, "Num of Received in Past 30 days"] = received_count
    return df
perfplot.show(
    setup=makedata,
    kernels=[forloop, rolling],
    n_range=[x for x in range(5, 105, 5)],
    equality_check=None,
    xlabel='len(df)'
)
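For a quick one-off check without perfplot, the two kernels can also be timed directly with timeit; a small sketch (the size and repeat count are arbitrary choices):

import timeit
df_test = makedata(50)  # 300 rows
print("forloop:", timeit.timeit(lambda: forloop(df_test.copy()), number=3))
print("rolling:", timeit.timeit(lambda: rolling(df_test.copy()), number=3))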
Related
This question already has answers here: How to join two dataframes for which column values are within a certain range?
Existing dataframe :
df_1
Id dates time(sec)_1 time(sec)_2
1 02/02/2022 15 20
1 04/02/2022 20 30
1 03/02/2022 30 40
1 06/02/2022 50 40
2 10/02/2022 10 10
2 11/02/2022 15 20
df_2
Id min_date action_date
1 02/02/2022 04/02/2022
2 06/02/2022 10/02/2022
Expected Dataframe :
df_2
Id min_date action_date count_of_dates avg_time_1 avg_time_2
1 02/02/2022 04/02/2022 3 21.67 30
2 06/02/2022 10/02/2022 1 10 10
count_of_dates, avg_time_1 and avg_time_2 are to be created from df_1.
count_of_dates is calculated considering min_date and action_date, i.e. the number of dates from df_1 falling between min_date and action_date.
avg_time_1 and avg_time_2 are calculated over those same rows.
I am stuck applying the condition on the dates :-( any leads?
If the data is small, it is possible to filter per row with a custom function:
df_1['dates'] = df_1['dates'].apply(pd.to_datetime)
df_2[['min_date','action_date']] = df_2[['min_date','action_date']].apply(pd.to_datetime)
def f(x):
    m = df_1['Id'].eq(x['Id']) & df_1['dates'].between(x['min_date'], x['action_date'])
    s = df_1.loc[m, ['time(sec)_1','time(sec)_2']].mean()
    return pd.Series([m.sum()] + s.to_list(), index=['count_of_dates'] + s.index.tolist())
df = df_2.join(df_2.apply(f, axis=1))
print (df)
Id min_date action_date count_of_dates time(sec)_1 time(sec)_2
0 1 2022-02-02 2022-04-02 3.0 21.666667 30.0
1 2 2022-06-02 2022-10-02 1.0 10.000000 10.0
If Id in df_2 is unique, it is possible to improve performance by merging df_1 and aggregating with size and mean:
df = df_2.merge(df_1, on='Id')
d = {'count_of_dates': ('Id','size'),
     'time(sec)_1': ('time(sec)_1','mean'),
     'time(sec)_2': ('time(sec)_2','mean')}
df = df_2.join(df[df['dates'].between(df['min_date'], df['action_date'])]
               .groupby('Id').agg(**d), on='Id')
print (df)
Id min_date action_date count_of_dates time(sec)_1 time(sec)_2
0 1 2022-02-02 2022-04-02 3 21.666667 30
1 2 2022-06-02 2022-10-02 1 10.000000 10
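For completeness, a minimal sketch of how the example frames might be constructed (dates kept as strings, exactly as in the question, so the conversion lines at the top of the answer still apply):

import pandas as pd
df_1 = pd.DataFrame({
    "Id": [1, 1, 1, 1, 2, 2],
    "dates": ["02/02/2022", "04/02/2022", "03/02/2022", "06/02/2022", "10/02/2022", "11/02/2022"],
    "time(sec)_1": [15, 20, 30, 50, 10, 15],
    "time(sec)_2": [20, 30, 40, 40, 10, 20],
})
df_2 = pd.DataFrame({
    "Id": [1, 2],
    "min_date": ["02/02/2022", "06/02/2022"],
    "action_date": ["04/02/2022", "10/02/2022"],
})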
I have the example dataframe below. I created a function that does what I want: it computes a Sales rolling average (7- and 14-day windows) for each Store up to the previous day and shifts it to the current date. How can I compute this only for a specific date, for example 2022-12-31? I have a lot of rows and I don't want to recalculate everything each time I add a date.
import numpy as np
import pandas as pd
ex = pd.DataFrame({'Date': pd.date_range('2022-10-01', '2022-12-31'),
                   'Store': np.random.choice(2, len(pd.date_range('2022-10-01', '2022-12-31'))),
                   'Sales': np.random.choice(10000, len(pd.date_range('2022-10-01', '2022-12-31')))})
ex.sort_values(['Store','Date'], ascending=False, inplace=True)
for days in [7, 14]:
    ex['Sales_mean_' + str(days) + '_days'] = ex.groupby('Store')[['Sales']].apply(lambda x: x.shift(-1).rolling(days).mean().shift(-days+1))
I redefined a similar dataframe, because using a random generator makes debugging difficult: the dataframe changes at each test run.
In addition, to keep it simple, I will use moving-average periods of 2 and 3.
Starting dataframe
Date Store Sales
9 2022-10-10 1 5347
8 2022-10-09 1 1561
7 2022-10-08 1 5648
6 2022-10-07 1 8123
5 2022-10-06 1 1401
4 2022-10-05 0 2745
3 2022-10-04 0 7848
2 2022-10-03 0 3151
1 2022-10-02 0 4296
0 2022-10-01 0 9028
It gives :
ex = pd.DataFrame({
    "Date": pd.date_range('2022-10-01', '2022-10-10'),
    "Store": [0]*5 + [1]*5,
    "Sales": [9028, 4296, 3151, 7848, 2745, 1401, 8123, 5648, 1561, 5347],
})
ex.sort_values(['Store','Date'], ascending=False, inplace=True)
Proposed code
import pandas as pd
import numpy as np
ex = pd.DataFrame({
    "Date": pd.date_range('2022-10-01', '2022-10-10'),
    "Store": [0]*5 + [1]*5,
    "Sales": [9028, 4296, 3151, 7848, 2745, 1401, 8123, 5648, 1561, 5347],
})
ex.sort_values(['Store','Date'], ascending=False, inplace=True)
periods = (2, 3)
### STEP 1 -- Initialization : exhaustive Mean() Calculation
for per in periods:
    ex["Sales_mean_{0}_days".format(per)] = (
        ex.groupby(['Store'])['Sales']
          .apply(lambda g: g.shift(-1)
                            .rolling(per)
                            .mean()
                            .shift(-per+1))
    )
### STEP 2 -- New Row Insertion
def fmt_newRow(g, newRow, periods):
    return {
        "Date": pd.Timestamp(newRow[0]),
        "Store": newRow[1],
        "Sales": newRow[2],
        "Sales_mean_{0}_days".format(periods[0]): g['Sales'].iloc[0:periods[0]].mean(),
        "Sales_mean_{0}_days".format(periods[1]): g['Sales'].iloc[0:periods[1]].mean(),
    }
def add2DF(ex, newRow):
    # g : sub-Store group
    g = (
        ex.loc[ex.Store == newRow[1]]
          .sort_values(['Store','Date'], ascending=False)
    )
    # Append newRow as a dictionary and sort by ['Store','Date']
    ex = (
        ex.append(fmt_newRow(g, newRow, periods), ignore_index=True)
          .sort_values(['Store','Date'], ascending=False)
          .reset_index(drop=True)
    )
    return ex
newRow = ['2022-10-11', 1, 2803] # [Date, Store, Sales]
ex = add2DF(ex, newRow)
print(ex)
Result
Date Store Sales Sales_mean_2_days Sales_mean_3_days
0 2022-10-11 1 2803 3454.0 4185.333333
1 2022-10-10 1 5347 3604.5 5110.666667
2 2022-10-09 1 1561 6885.5 5057.333333
3 2022-10-08 1 5648 4762.0 NaN
4 2022-10-07 1 8123 NaN NaN
5 2022-10-06 1 1401 NaN NaN
6 2022-10-05 0 2745 5499.5 5098.333333
7 2022-10-04 0 7848 3723.5 5491.666667
8 2022-10-03 0 3151 6662.0 NaN
9 2022-10-02 0 4296 NaN NaN
10 2022-10-01 0 9028 NaN NaN
Comments
A new row is a list like this one : [Date, Store, Sales]
Each time you need to save a new row to the dataframe, you pass it to the fmt_newRow function together with the corresponding subgroup g
fmt_newRow returns the new row in the form of a dictionary, which is integrated into the dataframe with the pandas append function (for recent pandas versions, see the note after this list)
There is no need to recalculate all the averages, because only the last per values of g are used to compute the new row's averages
Moving averages for periods 2 and 3 were checked and are correct.
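One caveat that is not part of the original answer: DataFrame.append was removed in pandas 2.0. A minimal sketch of the same insertion inside add2DF using pd.concat instead, everything else unchanged:

# replacement for the ex.append(...) call inside add2DF (pandas >= 2.0)
new_row = pd.DataFrame([fmt_newRow(g, newRow, periods)])
ex = (
    pd.concat([ex, new_row], ignore_index=True)
      .sort_values(['Store', 'Date'], ascending=False)
      .reset_index(drop=True)
)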
I would like to calculate how many customers were active in each month. My dataframe contains a customer ID, a start date (when the customer became a customer) and an end date (when they stopped being a customer):
Customer_ID StartDate EndDate
1 01/01/2019 NAT
2 25/10/2017 01/06/2020
2 13/06/2012 15/07/2015
2 20/12/2015 03/01/2016
2 25/03/2016 14/06/2017
3 05/06/2018 05/06/2019
3 12/12/2019 NAT
The result I would like is a count of the number of customers that were "active" per month-year combination:
MONTH YEAR NUMB_CUSTOMERS
01 2013 1
02 2013 1
03 2013 1
04 2013 1
...
01 2019 2
...
09 2020 2
I would like to avoid for-loops, as that takes too long (I have a table of over 100 000 rows).
Does anyone have an idea how to do this neatly and quickly?
Thanks!
First, read the data and make it digestible for the program:
import pandas as pd
import datetime
df = pd.read_csv("table.csv")
func = lambda x: x.split('/', maxsplit=1)[1]
df["StartDate"] = df["StartDate"].apply(func)
mask = df["EndDate"] != "NAT"
df.loc[mask, "EndDate"] = df.loc[mask, "EndDate"].apply(func)
Then, count the changes in the number of clients (you basically get a derivative of your data):
customers_gained = df[["Customer_ID", "StartDate"]].groupby("StartDate").agg("count")
customers_lost = df[["Customer_ID", "EndDate"]].groupby("EndDate").agg("count")
customers_lost.drop("NAT",inplace=True)
Then make a monthly time table covering all changes in the number of clients:
def make_time_table(start, end):
    start_date = datetime.datetime.strptime(start, "%d/%m/%Y")
    end_date = datetime.datetime.strptime(end, "%d/%m/%Y")
    data_range = pd.date_range(start_date, end_date, freq="M")
    string_range = [el.strftime("%m/%Y") for el in data_range]
    ser = pd.Series([0]*data_range.size, index=string_range)
    return ser
Next, introduce the changes into time_table and "integrate" by accumulation:
time_table = make_time_table("01/01/2012", "01/12/2020")
time_table[customers_gained.index] = customers_gained["Customer_ID"]
time_table[customers_lost.index] -= customers_lost["Customer_ID"]
result = time_table.cumsum()
print(result)
Outputs:
01/2012 0
02/2012 0
03/2012 0
04/2012 0
05/2012 0
06/2012 1
07/2012 1
...
10/2019 2
11/2019 2
12/2019 3
01/2020 3
02/2020 3
03/2020 3
04/2020 3
05/2020 3
06/2020 2
07/2020 2
08/2020 2
09/2020 2
10/2020 2
11/2020 2
dtype: int64
table.csv
Customer_ID,StartDate,EndDate
1,01/01/2019,NAT
2,25/10/2017,01/06/2020
2,13/06/2012,15/07/2015
2,20/12/2015,03/01/2016
2,25/03/2016,14/06/2017
3,25/03/2016,05/06/2019
3,12/12/2019,NAT
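If the MONTH / YEAR / NUMB_CUSTOMERS layout from the question is needed, the cumulative Series can be reshaped afterwards; a short sketch:

out = result.rename("NUMB_CUSTOMERS").rename_axis("month_year").reset_index()
out[["MONTH", "YEAR"]] = out["month_year"].str.split("/", expand=True)
print(out[["MONTH", "YEAR", "NUMB_CUSTOMERS"]])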
Please suggest a more suitable title for this question.
I have a two-level indexed DF (created via groupby):
clicks yield
country report_date
AD 2016-08-06 1 31
2016-12-01 1 0
AE 2016-10-11 1 0
2016-10-13 2 0
I need to:
take the data country by country, process it, and put it back:
for country in set(DF.get_level_values(0)):
    DF_country = process(DF.loc[country])
    DF[country] = DF_country
where process adds new rows to DF_country.
The problem is in the last line:
ValueError: Wrong number of items passed 2, placement implies 1
I just modified your code, changing process to add. Based on my understanding, process is a self-defined function, right?
for country in set(DF.index.get_level_values(0)):  # change here
    DF_country = DF.loc[country].add(1)
    DF.loc[country] = DF_country.values  # and here
DF
Out[886]:
clicks yield
country report_date
AD 2016-08-06 2 32
2016-12-01 2 1
AE 2016-10-11 2 1
2016-10-13 3 1
EDIT :
l = []
for country in set(DF.index.get_level_values(0)):
    DF1 = DF.loc[country]
    DF1.loc['2016-01-01'] = [1, 2]  # adding row here
    l.append(DF1)
pd.concat(l, axis=0, keys=set(DF.index.get_level_values(0)))
Out[923]:
clicks yield
report_date
AE 2016-10-11 1 0
2016-10-13 2 0
2016-01-01 1 2
AD 2016-08-06 1 31
2016-12-01 1 0
2016-01-01 1 2
I could use some more help with a project. I am trying to analyze 4.5 million rows of data. I have read the data into a dataframe, have organized the data and now have 3 columns: 1) date as datetime 2) unique identifier 3) price
I need to calculate the year over year change in prices per item but the dates are not uniform and not consistent per item. For example:
date item price
12/31/15 A 110
12/31/15 B 120
12/31/14 A 100
6/24/13 B 100
What I would like to find as a result is:
date item price previousdate % change
12/31/15 A 110 12/31/14 10%
12/31/15 B 120 6/24/13 20%
12/31/14 A 100
6/24/13 B 100
EDIT - Better example of data
date item price
6/1/2016 A 276.3457646
6/1/2016 B 5.044165645
4/27/2016 B 4.91300186
4/27/2016 A 276.4329163
4/20/2016 A 276.9991265
4/20/2016 B 4.801263717
4/13/2016 A 276.1950213
4/13/2016 B 5.582923328
4/6/2016 B 5.017863509
4/6/2016 A 276.218649
3/30/2016 B 4.64274783
3/30/2016 A 276.554653
3/23/2016 B 5.576438253
3/23/2016 A 276.3135836
3/16/2016 B 5.394435443
3/16/2016 A 276.4222986
3/9/2016 A 276.8929462
3/9/2016 B 4.999951262
3/2/2016 B 4.731349423
3/2/2016 A 276.3972068
1/27/2016 A 276.8458971
1/27/2016 B 4.993033132
1/20/2016 B 5.250379701
1/20/2016 A 276.2899864
1/13/2016 B 5.146639666
1/13/2016 A 276.7041978
1/6/2016 B 5.328296958
1/6/2016 A 276.9465891
12/30/2015 B 5.312301356
12/30/2015 A 256.259668
12/23/2015 B 5.279105491
12/23/2015 A 255.8411198
12/16/2015 B 5.150798234
12/16/2015 A 255.8360529
12/9/2015 A 255.4915183
12/9/2015 B 4.722876886
12/2/2015 A 256.267146
12/2/2015 B 5.083626167
10/28/2015 B 4.876177757
10/28/2015 A 255.6464653
10/21/2015 B 4.551439655
10/21/2015 A 256.1735769
10/14/2015 A 255.9752668
10/14/2015 B 4.693967392
10/7/2015 B 4.911797443
10/7/2015 A 256.2556707
9/30/2015 B 4.262994526
9/30/2015 A 255.8068691
7/1/2015 A 255.7312385
4/22/2015 A 234.6210132
4/15/2015 A 235.3902076
4/15/2015 B 4.154926102
4/1/2015 A 234.4713827
2/25/2015 A 235.1391496
2/18/2015 A 235.1223471
What I have done (with some help from other users) hasn't worked, but it is below. Thanks for any help you can provide or for pointing me in the right direction!
import pandas as pd
import datetime as dt
import numpy as np
df = pd.read_csv('...python test file5.csv',parse_dates =['As of Date'])
df = df[['item','price','As of Date']]
def get_prev_year_price(x, df):
    try:
        return df.loc[x['prev_year_date'], 'price']
        #return np.abs(df.time - x)
    except Exception as e:
        return x['price']

#Function to determine the closest date from given date and list of all dates
def nearest(items, pivot):
    return min(items, key=lambda x: abs(x - pivot))
df['As of Date'] = pd.to_datetime(df['As of Date'],format='%m/%d/%Y')
df = df.rename(columns = {df.columns[2]:'date'})
# list of dates
dtlst = [item for item in df['date']]
data = []
data2 = []
for item in df['item'].unique():
    item_df = df[df['item'] == item]  # select based on items
    select_dates = item_df['date'].unique()
    item_df.set_index('date', inplace=True)  # set date as key index
    item_df = item_df.resample('D').mean().reset_index()  # fill in missing dates
    item_df['price'] = item_df['price'].interpolate('nearest')  # fill in price with nearest price available
    # use max(item_df['date'] where item_df['date'] < item_df['date'] - pd.DateOffset(years=1, days=1))
    #possible_date = item_df['date'] - pd.DateOffset(years=1)
    #item_df['prev_year_date'] = max(df[df['date'] <= possible_date])
    item_df['prev_year_date'] = item_df['date'] - pd.DateOffset(years=1)  # calculate the date 1 year ago
    date_df = item_df[item_df.date.isin(select_dates)]  # select dates with useful data
    item_df.set_index('date', inplace=True)
    date_df['prev_year_price'] = date_df.apply(lambda x: get_prev_year_price(x, item_df), axis=1)
    #date_df['prev_year_price'] = date_df.apply(lambda x: nearest(dtlst, x), axis=1)
    date_df['change'] = date_df['price'] / date_df['prev_year_price'] - 1
    date_df['item'] = item
    data.append(date_df)
    data2.append(item_df)
summary = pd.concat(data).sort_values('date', ascending=False)
#print (summary)
#saving the output of the CSV file to see how data looks after being handled
filename = '...python_test_file_save4.csv'
summary.to_csv(filename, index=True, encoding='utf-8')
With the current assumptions about the use case, this works for this specific example.
In [2459]: def change(grp):
      ...:     grp['% change'] = grp.price.pct_change() * 100   # percent change vs. the item's previous row
      ...:     grp['previousdate'] = grp.date.shift(1)
      ...:     return grp
Sort on date then groupby and apply the change function, then sort the index back.
In [2460]: df.sort_values('date').groupby('item').apply(change).sort_index()
Out[2460]:
date item price % change previousdate
0 2015-12-31 A 110 10.0 2014-12-31
1 2015-12-31 B 120 20.0 2013-06-24
2 2014-12-31 A 100 NaN NaT
3 2013-06-24 B 100 NaN NaT
This is a good situation for merge_asof, which merges two dataframes by finding, for each row of the left dataframe, the last row of the right dataframe whose key is less than or equal to the left key. We need to add a year to the right dataframe's dates first, since the requirement is a difference of one year or more between dates.
Here is some sample data that you brought up in your comment.
date item price
12/31/15 A 110
12/31/15 B 120
12/31/14 A 100
6/24/13 B 100
12/31/15 C 100
1/31/15 C 80
11/14/14 C 130
11/19/13 C 110
11/14/13 C 200
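For reference, a minimal sketch (the exact construction is an assumption) that builds this sample with parsed dates, so the steps below run as shown:

import pandas as pd
df = pd.DataFrame({
    "date": pd.to_datetime(["12/31/15", "12/31/15", "12/31/14", "6/24/13", "12/31/15",
                            "1/31/15", "11/14/14", "11/19/13", "11/14/13"]),
    "item": ["A", "B", "A", "B", "C", "C", "C", "C", "C"],
    "price": [110, 120, 100, 100, 100, 80, 130, 110, 200],
})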
The dates need to be sorted for merge_asof to work. merge_asof also keeps only the left frame's key column, so we put a copy of the date into a separate column of the right dataframe.
Setup dataframes
df = df.sort_values('date')
df_copy = df.copy()
df_copy['previousdate'] = df_copy['date']
df_copy['date'] += pd.DateOffset(years=1)
Use merge_asof
df_final = pd.merge_asof(df, df_copy,
                         on='date',
                         by='item',
                         suffixes=['current', 'previous'])
df_final['% change'] = (df_final['pricecurrent'] - df_final['priceprevious']) / df_final['priceprevious']
df_final
date item pricecurrent priceprevious previousdate % change
0 2013-06-24 B 100 NaN NaT NaN
1 2013-11-14 C 200 NaN NaT NaN
2 2013-11-19 C 110 NaN NaT NaN
3 2014-11-14 C 130 200.0 2013-11-14 -0.350000
4 2014-12-31 A 100 NaN NaT NaN
5 2015-01-31 C 80 110.0 2013-11-19 -0.272727
6 2015-12-31 A 110 100.0 2014-12-31 0.100000
7 2015-12-31 B 120 100.0 2013-06-24 0.200000
8 2015-12-31 C 100 130.0 2014-11-14 -0.230769