I need to convert the following logic to Python and SQL (the SQL query is more important):
I have a table with ID and Date columns. I need to add a column called "Week_Num" such that:
Every time it sees a new ID, Week_Num becomes 1
Seven dates correspond to one week, so if the first week begins on 29th Oct 2019 then the 2nd week will begin on 5th Nov 2019. This continues for as long as the ID stays the same. For example, in the table below, week 1 for ID=24 runs from 29th Oct 2019 to 4th Nov 2019, while week 1 for ID=25 runs from 25th Oct 2020 to 31st Oct 2020.
ID    Date          Week_Num
24    2019-10-29    1
24    2019-10-30    1
24    2019-10-31    1
24    2019-11-01    1
24    2019-11-02    1
24    2019-11-03    1
24    2019-11-04    1
24    2019-11-05    2
24    ..........    .
24    2020-03-14    .
25    2020-10-25    1
25    2020-10-26    1
25    2020-10-27    1
25    2020-10-28    1
25    2020-10-29    1
25    2020-10-30    1
25    2020-10-31    1
How about just using the date difference from the minimum date per ID:
select t.*,
       floor(datediff(day,
                      min(date) over (partition by id order by date),
                      date
                     ) / 7.0
            ) + 1 as week_num
from t;
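For the Python side of the question, a minimal pandas sketch of the same idea (assuming a DataFrame df with ID and Date columns, with Date parseable as a datetime):

import pandas as pd

# days elapsed since the first date seen for each ID, integer-divided into weeks
df['Date'] = pd.to_datetime(df['Date'])
first_date = df.groupby('ID')['Date'].transform('min')
df['Week_Num'] = (df['Date'] - first_date).dt.days // 7 + 1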
I want to compare two columns (date1 and date2) for the same ID and set the value of column NewColumn to 'YES' if date1 matches the previous row's date2.
INPUT:
ID    Date1          date2          NewColumn
1     31 Jan 2022    1 Feb 2022
1     1 Feb 2022     2 Feb 2022
1     7 Feb 2022     8 Feb 2022
2     2 Feb 2022     2 Feb 2022
3     2 Feb 2022     3 Feb 2022
Input in CSV format:
ID,date1,date2,NewColumn
1,31/01/2022,01/02/2022,
1,01/02/2022,02/02/2022,
1,07/02/2022,08/02/2022,
2,02/02/2022,02/02/2022,
3,02/02/2022,03/02/2022,
Output:
ID    date1          date2          NewColumn
1     31 Jan 2022    1 Feb 2022
1     1 Feb 2022     2 Feb 2022     YES
1     7 Feb 2022     8 Feb 2022
2     2 Feb 2022     2 Feb 2022
3     2 Feb 2022     3 Feb 2022
In CSV format:
ID,date1,date2,NewColumn
1,31/01/2022,01/02/2022,
1,01/02/2022,02/02/2022,YES
1,07/02/2022,08/02/2022,
2,02/02/2022,02/02/2022,
3,02/02/2022,03/02/2022,
You can use groupby and apply to apply a custom function to each group. The function then needs to compare date1 with the previous row's date2, which can be done using shift. This gives a boolean value (True or False); to get a string value you can use np.where. For example:
import numpy as np

def func(x):
    # compare date1 with the previous row's date2 within the group
    return x['date1'] == x['date2'].shift(1)

df['NewColumn'] = np.where(df.groupby('ID').apply(func), 'YES', '')
Result:
ID date1 date2 NewColumn
0 1 31/01/2022 01/02/2022
1 1 01/02/2022 02/02/2022 YES
2 1 07/02/2022 08/02/2022
3 2 02/02/2022 02/02/2022
4 3 02/02/2022 03/02/2022
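As a side note, the same comparison can be written without apply by shifting date2 within each group directly; a rough equivalent, assuming the same df:

import numpy as np

# previous row's date2 within each ID; the first row of each ID gets NaN and compares as False
prev_date2 = df.groupby('ID')['date2'].shift(1)
df['NewColumn'] = np.where(df['date1'] == prev_date2, 'YES', '')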
I have a dataframe that has data for 4 years, which looks like this:
Year    Week    Value
2018    1       25
2018    2       28
2018    3       26
2019    1       24
2019    2       34
2019    3       30
2020    1       27
2020    2       33
2020    3       32
2021    1       39
2021    2       43
2021    3       41
What I want to do is replace the values in 2021 with a weighting of the previous 3 years' values in the same time frame. So in this example, replace only the values from weeks 1 to 3 in 2021 (there could be other weeks to be left alone) with, say: 45%*2020 + 30%*2019 + 25%*2018
Which would give us the following for 2021:
Year    Week    Value
2021    1       20.65
2021    2       32.05
2021    3       29.9
And we got the values for 2021 week 3 by doing:
0.45*32 + 0.3*30 + 0.25*26 = 14.4 + 9 + 6.5 = 29.9
Also, I want to be able to skip years if I want to, 2021 can be based off of 2020, 2019, and 2016 for example.
You can create a custom function as it sounds like you need customizable parameters. There isn't a specific pandas method that can do this:
def f(df=df, years=[], weeks=[], weights=[], current_year=2021):
    # keep only the weeks of interest (copy to avoid SettingWithCopyWarning)
    df = df[df['Week'].isin(weeks)].copy()
    # map each year to its weight; years not listed become NaN and drop out of the sum
    series_weights = df['Year'].map({year: weight for year, weight in zip(years, weights)})
    df['Value'] = df['Value'] * series_weights
    # relabel everything as the current year and sum the weighted values per week
    df = df.assign(Year=current_year).groupby(['Year', 'Week'], as_index=False)['Value'].sum()
    return df

f(years=[2018, 2019, 2020], weeks=[1, 2, 3], weights=[0.25, 0.3, 0.45])
Out[1]:
Year Week Value
0 2021 1 25.60
1 2021 2 32.05
2 2021 3 29.90
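Since the years and weights are ordinary parameters, the "skip years" case from the question is just a different argument list; for example, basing 2021 on 2016, 2019 and 2020 (same weights, purely for illustration):

f(years=[2016, 2019, 2020], weeks=[1, 2, 3], weights=[0.25, 0.3, 0.45])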
I am parsing bank statement PDFs that have the full start and end dates in question both in the filename and in the document, but the actual entries corresponding to the transactions only contain the day and month ('%d %b'). Here is what the series looks like for the "Date" column:
1
2 24 Dec
3 27 Dec
4
5
6 30 Dec
7
8 31 Dec
9
10
11 2 Jan
12
13 3 Jan
14 6 Jan
15 14 Jan
16 15 Jan
I have the start and end dates as 2013-12-23 and 2014-01-23. What is an efficient way to populate this series/column with the correct full date given the start and end range? I would like any existing date to forward fill the same date to the next date so:
1
2 24 Dec 2013
3 27 Dec 2013
4 27 Dec 2013
5 27 Dec 2013
6 30 Dec 2013
7 30 Dec 2013
8 31 Dec 2013
9 31 Dec 2013
10 31 Dec 2013
11 2 Jan 2014
12 2 Jan 2014
13 3 Jan 2014
14 6 Jan 2014
15 14 Jan 2014
16 15 Jan 2014
The date format is irrelevant as long as it is a datetime format. I was hoping to use something internal to pandas, but I can't figure out what to use. Right now, checking each date against the start and end dates and filling in the year based on where the date fits into the range is the best I've come up with, but it is inefficient to have to run this on the whole column. Any help/advice/tips would be appreciated; thanks in advance.
EDIT: just wanted to add that I am hoping for a general procedural solution that can apply to any start/end date and set of transactions, not just this particular series, although I think this is a good test case as it has an end-of-year overlap.
EDIT2: what I have so far after posting this question is the following, which doesn't seem terribly efficient, but seems to work:
import numpy as np
from datetime import datetime

def add_year(date, start, end):
    # empty entries (rows without a printed date) stay missing for now
    if not date:
        return np.nan
    # try the start year first; if the result falls outside the statement
    # period, the entry must belong to the end year instead
    test_date = datetime.strptime("{} {}".format(date, start.year), '%d %b %Y').date()
    if start <= test_date <= end:
        return test_date
    return datetime.strptime("{} {}".format(date, end.year), '%d %b %Y').date()

df['Date'] = df.Date.map(lambda date: add_year(date, start_date, end_date))
df.Date.ffill(inplace=True)
Try:
# convert string 'nan'/'NaN' values to actual NaNs
df['Date'] = df['Date'].replace('nan|NaN', float('NaN'), regex=True)
# forward fill the NaNs
df['Date'] = df['Date'].ffill()
# check which rows of the 'Date' column contain 'Dec'
c = df['Date'].str.contains('Dec') & df['Date'].notna()
# get the index of the last 'Date' where condition c is satisfied
idx = df[c].index[-1]
# add ' 2013' to 'Date' up to the last index where c holds
df.loc[:idx, 'Date'] = df.loc[:idx, 'Date'] + ' 2013'
# add ' 2014' to 'Date' from that index + 1 to the end
df.loc[idx+1:, 'Date'] = df.loc[idx+1:, 'Date'] + ' 2014'
# finally convert these values to datetime
df['Date'] = pd.to_datetime(df['Date'])
output of df:
Date
0 NaT
1 2013-12-24
2 2013-12-27
3 2013-12-27
4 2013-12-27
5 2013-12-30
6 2013-12-30
7 2013-12-31
8 2013-12-31
9 2013-12-31
10 2014-01-02
11 2014-01-02
12 2014-01-03
13 2014-01-06
14 2014-01-14
15 2014-01-15
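For the general start/end case the EDIT asks about (rather than hard-coding 2013/2014), one possible vectorised sketch, assuming start_date and end_date are pandas Timestamps and the raw Date column holds '%d %b' strings or blanks:

import pandas as pd

start_date, end_date = pd.Timestamp('2013-12-23'), pd.Timestamp('2014-01-23')

# parse every entry with the start year; blanks and failures become NaT
parsed = pd.to_datetime(df['Date'].fillna('') + ' ' + str(start_date.year),
                        format='%d %b %Y', errors='coerce')

# anything that lands before the statement start really belongs to the end year
parsed = parsed.mask(parsed < start_date,
                     parsed + pd.DateOffset(years=end_date.year - start_date.year))

# forward fill so rows without a printed date inherit the previous transaction's date
df['Date'] = parsed.ffill()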
I have a dataframe which consists of departments, year, the month of invoice, the invoice date and the value.
I have offset the Invoice dates by business days, and what I am now trying to achieve is to combine all the months that have the same number of working days (i.e. the 'count' of each month by year) and average the value for each day.
The data I have is as follows:
Department Year Month Invoice Date Value
0 Sales 2019 March 2019-03-25 1000.00
1 Sales 2019 March 2019-03-26 2000.00
2 Sales 2019 March 2019-03-27 3000.00
3 Sales 2019 March 2019-03-28 4000.00
4 Sales 2019 March 2019-03-29 5000.00
... ... ... ... ... ...
2435 Specialist 2020 August 2020-08-27 6000.00
2436 Specialist 2020 August 2020-08-28 7000.00
2437 Specialist 2020 September 2020-09-01 8000.00
2438 Specialist 2020 September 2020-09-02 9000.00
2439 Specialist 2020 September 2020-09-07 1000.00
The count of each month is as follows:
Year  Month
2019  April        21
      August       21
      December     20
      July         23
      June         20
      March         5
      May          21
      November     21
      October      23
      September    21
2020  April        21
      August       20
      February     20
      January      22
      July         23
      June         22
      March        22
      May          19
      September     5
My hope is that, using this count, I could aggregate the data from the original df and average, for example, April, August, May, November and September (2019) along with April (2020), as they all have 21 working days in the month.
This would produce one dataframe where each day of the month is an average over the combined months for each number of working days.
I hope that makes sense.
Note: please ignore the months with a length of 5 days; that's just incomplete data for those months...
Thank you
EDIT: I just realised that the days won't line up for each month, so my plan is to aggregate based on whether it's the first business day of the month, then the second, the third, etc., regardless of the actual date.
ALSO (SORRY): I was hoping it could be by department!
Department Month Length Day Number Average Value
0 Sales 21 1 20000
1 Sales 21 2 5541
2 Sales 21 3 87485
3 Sales 21 4 1863
4 Sales 21 5 48687
5 Sales 21 6 486996
6 Sales 21 7 892
7 Sales 21 8 985
8 Sales 21 9 14169
9 Sales 21 10 20000
10 Sales 21 11 5541
11 Sales 21 12 87485
12 Sales 21 13 1863
13 Sales 21 14 48687
14 Sales 21 15 486996
15 Sales 21 16 892
16 Sales 21 17 985
17 Sales 21 18 14169
......
So to explain it a bit better: let's take Sales and all the months which have 21 working days in them; for each day in those 21-day months I am hoping to get the average of the value, ending up with a table like the one above.
So 'day 1' is an average of all 'day 1s' in the 21-day months (as seen in the count df)! This is to allow me to then plot a line chart profile showing the average revenue value on each given day of a 21-day month. I hope this is a bit of a better explanation, apologies.
I am not really sure whether I understand your question. Maybe you could add an expected df to your question?
In the meantime, would this point you in the direction you are looking for:
import pandas as pd
from random import randint
from calendar import month_name

# build a random example dataframe with years, month names, days and revenue
df = pd.DataFrame({'years': [randint(1998, 2020) for x in range(10000)],
                   'months': [month_name[randint(1, 12)] for x in range(10000)],
                   'days': [randint(1, 30) for x in range(10000)],
                   'revenue': [randint(0, 1000) for x in range(10000)]})

# average revenue per (month, day) combination
print(df.groupby(['months', 'days'])['revenue'].mean())
Output is:
months     days
April      1       475.529412
           2       542.870968
           3       296.045455
           4       392.416667
           5       475.571429
                      ...
September  26      516.888889
           27      539.583333
           28      513.500000
           29      480.724138
           30      456.500000
Name: revenue, Length: 360, dtype: float64
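Closer to the layout described in the question's EDIT, a rough sketch using the original dataframe (column names taken from the sample data; this assumes at most one invoice row per department per business day):

import pandas as pd

keys = ['Department', 'Year', 'Month']
df = df.sort_values('Invoice Date')

# position of each invoice within its month: 1 = first business day, 2 = second, ...
df['Day Number'] = df.groupby(keys).cumcount() + 1
# number of working days in that month (the 'count' shown in the question)
df['Month Length'] = df.groupby(keys)['Invoice Date'].transform('count')

# average value per department, month length and business-day number
result = (df.groupby(['Department', 'Month Length', 'Day Number'], as_index=False)['Value']
            .mean()
            .rename(columns={'Value': 'Average Value'}))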
I have a daily time series of closing prices of a financial instrument going back to 1990.
I am trying to compare the daily percentage change for each trading day of the previous years to its respective trading day in 2019. I have 41 trading days of data for 2019 at this time.
I get so far as filtering down and creating a new DataFrame with only the first 41 dates, closing prices, daily percentage changes, and the "trading day of year" ("tdoy") classifier for each day in the set, but am not having luck from there.
I've found other Stack Overflow questions that help people compare datetime days, weeks, years, etc. but I am not able to recreate this because of the arbitrary value each "tdoy" represents.
I won't bother creating a sample DataFrame because of the number of rows so I've linked the CSV I've come up with to this point: Sample CSV.
I think the easiest approach would just be to create a new column that returns what the 2019 percentage change is for each corresponding "tdoy" (Trading Day of Year) using df.loc, and if I could figure this much out I could then create yet another column to take the simple difference between that year/day's percentage change and 2019's respective value. Below is what I try to use (and I've tried other variations) to no avail.
df['2019'] = df['perc'].loc[((df.year == 2019) & (df.tdoy == df.tdoy))]
I've tried to search Stack and Google in probably 20 different variations of my problem and can't seem to find an answer that fits my issue of arbitrary "Trading Day of Year" classification.
I'm sure the answer is right in front of my face somewhere but I am still new to data wrangling.
The first step is to import the CSV properly. I'm not sure if you made the adjustment, but your data's date column is a string object.
# import the csv and assign to df. parse dates to datetime
df = pd.read_csv('TimeSeriesEx.csv', parse_dates=['Dates'])
# filter the dataframe so that you only have 2019 and 2018 data
df=df[df['year'] >= 2018]
df.tail()
Unnamed: 0 Dates last perc year tdoy
1225 7601 2019-02-20 29.96 0.007397 2019 37
1226 7602 2019-02-21 30.49 0.017690 2019 38
1227 7603 2019-02-22 30.51 0.000656 2019 39
1228 7604 2019-02-25 30.36 -0.004916 2019 40
1229 7605 2019-02-26 30.03 -0.010870 2019 41
Put the tdoy and year into a multiindex.
# create a multiindex
df.set_index(['tdoy','year'], inplace=True)
df.tail()
           Unnamed: 0      Dates   last      perc
tdoy year
37   2019        7601 2019-02-20  29.96  0.007397
38   2019        7602 2019-02-21  30.49  0.017690
39   2019        7603 2019-02-22  30.51  0.000656
40   2019        7604 2019-02-25  30.36 -0.004916
41   2019        7605 2019-02-26  30.03 -0.010870
Make pivot table
# make a pivot table and assign it to a variable
df1 = df.pivot_table(values='last', index='tdoy', columns='year')
df1.head()
year 2018 2019
tdoy
1 33.08 27.55
2 33.38 27.90
3 33.76 28.18
4 33.74 28.41
5 33.65 28.26
Create calculated column
# create the new column
df1['pct_change'] = (df1[2019]-df1[2018])/df1[2018]
df1
year 2018 2019 pct_change
tdoy
1 33.08 27.55 -0.167170
2 33.38 27.90 -0.164170
3 33.76 28.18 -0.165284
4 33.74 28.41 -0.157973
5 33.65 28.26 -0.160178
6 33.43 28.18 -0.157045
7 33.55 28.32 -0.155887
8 33.29 27.94 -0.160709
9 32.97 28.17 -0.145587
10 32.93 28.11 -0.146371
11 32.93 28.24 -0.142423
12 32.79 28.23 -0.139067
13 32.51 28.77 -0.115042
14 32.23 29.01 -0.099907
15 32.28 29.01 -0.101301
16 32.16 29.06 -0.096393
17 32.52 29.38 -0.096556
18 32.68 29.51 -0.097001
19 32.50 30.03 -0.076000
20 32.79 30.30 -0.075938
21 32.87 30.11 -0.083967
22 33.08 30.42 -0.080411
23 33.07 30.17 -0.087693
24 32.90 29.89 -0.091489
25 32.51 30.13 -0.073208
26 32.50 30.38 -0.065231
27 33.16 30.90 -0.068154
28 32.56 30.81 -0.053747
29 32.21 30.87 -0.041602
30 31.96 30.24 -0.053817
31 31.85 30.33 -0.047724
32 31.57 29.99 -0.050048
33 31.80 29.89 -0.060063
34 31.70 29.95 -0.055205
35 31.54 29.95 -0.050412
36 31.54 29.74 -0.057070
37 31.86 29.96 -0.059636
38 32.07 30.49 -0.049267
39 32.04 30.51 -0.047753
40 32.36 30.36 -0.061805
41 32.62 30.03 -0.079399
Altogether, without comments and data, the code looks like:
df = pd.read_csv('TimeSeriesEx.csv', parse_dates=['Dates'])
df=df[df['year'] >= 2018]
df.set_index(['tdoy','year'], inplace=True)
df1 = df.pivot_table(values='last', index='tdoy', columns='year')
df1['pct_change'] = (df1[2019]-df1[2018])/df1[2018]
[EDIT] The poster asked for all years compared to 2019.
df = pd.read_csv('TimeSeriesEx.csv', parse_dates=['Dates'])
df.set_index(['tdoy','year'], inplace=True)
Ignore the year filter above and create the pivot table:
df1 = df.pivot_table(values='last', index='tdoy', columns='year')
Create a loop going through the years/columns and create a new field for each year comparing to 2019.
for y in df1.columns:
    df1[str(y) + '_pct_change'] = (df1[2019] - df1[y]) / df1[y]
To view some data...
df1.loc[1:4, "1990_pct_change":"1994_pct_change"]
year 1990_pct_change 1991_pct_change 1992_pct_change 1993_pct_change 1994_pct_change
tdoy
1 0.494845 0.328351 0.489189 0.345872 -0.069257
2 0.496781 0.364971 0.516304 0.361640 -0.045828
3 0.523243 0.382050 0.527371 0.369956 -0.035262
4 0.524960 0.400888 0.531536 0.367838 -0.034659
Final code for all years:
df = pd.read_csv('TimeSeriesEx.csv', parse_dates=['Dates'])
df.set_index(['tdoy','year'], inplace=True)
df1 = df.pivot_table(values='last', index='tdoy', columns='year')
for y in df1.columns:
    df1[str(y) + '_pct_change'] = (df1[2019] - df1[y]) / df1[y]
df1
I also came up with my own answer, more along the lines of what I was originally trying to accomplish. Here is the DataFrame I'll work with for the example, df:
Dates last perc year tdoy
0 2016-01-04 29.93 -0.020295 2016 2
1 2016-01-05 29.63 -0.010023 2016 3
2 2016-01-06 29.59 -0.001350 2016 4
3 2016-01-07 29.44 -0.005069 2016 5
4 2017-01-03 34.57 0.004358 2017 2
5 2017-01-04 34.98 0.011860 2017 3
6 2017-01-05 35.00 0.000572 2017 4
7 2017-01-06 34.77 -0.006571 2017 5
8 2018-01-02 33.38 0.009069 2018 2
9 2018-01-03 33.76 0.011384 2018 3
10 2018-01-04 33.74 -0.000592 2018 4
11 2018-01-05 33.65 -0.002667 2018 5
12 2019-01-02 27.90 0.012704 2019 2
13 2019-01-03 28.18 0.010036 2019 3
14 2019-01-04 28.41 0.008162 2019 4
15 2019-01-07 28.26 -0.005280 2019 5
I created a DataFrame with only the 2019 values for tdoy and perc
df19 = df[['tdoy','perc']].loc[df['year'] == 2019]
and then zipped a dictionary for those values
perc19 = dict(zip(df19.tdoy,df19.perc))
to end up with
perc19=
{2: 0.012704174228675058,
3: 0.010035842293906852,
4: 0.008161816891412365,
5: -0.005279831045406497}
Then map these keys with the tdoy column in the original DataFrame to create a column titled 2019 that has the corresponding 2019 percentage change value for that trading day
df['2019'] = df['tdoy'].map(perc19)
and then create a vs2019 column where I find the difference between the 2019 column and perc and square it, yielding:
Dates last perc year tdoy 2019 vs2019
0 2016-01-04 29.93 -0.020295 2016 2 0.012704 6.746876
1 2016-01-05 29.63 -0.010023 2016 3 0.010036 3.995038
2 2016-01-06 29.59 -0.001350 2016 4 0.008162 1.358162
3 2016-01-07 29.44 -0.005069 2016 5 -0.005280 0.001590
4 2017-01-03 34.57 0.004358 2017 2 0.012704 0.431608
5 2017-01-04 34.98 0.011860 2017 3 0.010036 0.033038
6 2017-01-05 35.00 0.000572 2017 4 0.008162 0.864802
7 2017-01-06 34.77 -0.006571 2017 5 -0.005280 0.059843
8 2018-01-02 33.38 0.009069 2018 2 0.012704 0.081880
9 2018-01-03 33.76 0.011384 2018 3 0.010036 0.018047
10 2018-01-04 33.74 -0.000592 2018 4 0.008162 1.150436
From here I can groupby in various ways and further calculate to find most similar trending percentage changes vs. the year I am comparing against (2019).
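The exact expression used for vs2019 isn't shown above; a sketch that reproduces the values in the table is the squared relative difference against the 2019 column (an inference from the output, not something stated in the original answer):

# squared relative difference of each day's percentage change vs. the matching 2019 value
df['vs2019'] = ((df['2019'] - df['perc']) / df['2019']) ** 2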