Comparing one date column to another in a different row - Python

I have a dataframe df_corp:
ID arrival_date leaving_date
1  01/02/20     05/02/20
2  01/03/20     07/03/20
1  12/02/20     20/02/20
1  07/03/20     10/03/20
2  10/03/20     15/03/20
I would like to find the difference between the leaving_date of a row and the arrival_date of the next entry for the same ID. Basically, I want to know how long it is before they book again.
So it'll look something like this.
ID arrival_date leaving_date time_between
1  01/02/20     05/02/20     NaN
2  01/03/20     07/03/20     NaN
1  12/02/20     20/02/20     7
1  07/03/20     10/03/20     15
2  10/03/20     15/03/20     3
I've tried grouping by ID to do the sum, but I'm seriously lost on how to get the value from the next row and a different column in one step.

You need to convert the columns with to_datetime and perform a GroupBy.shift to get the previous departure date within each ID:
# arrival
a = pd.to_datetime(df_corp['arrival_date'], dayfirst=True)
# previous departure per ID
l = pd.to_datetime(df_corp['leaving_date'], dayfirst=True).groupby(df_corp['ID']).shift()
# difference in days
df_corp['time_between'] = (a-l).dt.days
output:
   ID arrival_date leaving_date  time_between
0   1     01/02/20     05/02/20           NaN
1   2     01/03/20     07/03/20           NaN
2   1     12/02/20     20/02/20           7.0
3   1     07/03/20     10/03/20          16.0
4   2     10/03/20     15/03/20           3.0
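Note that GroupBy.shift takes the previous row in the frame's current order, so the rows need to be chronological within each ID for the result to mean "time since the last stay". If they might not be, here is a minimal sketch of a preliminary sort (it parses the dates in place, unlike the answer above, which keeps the original string columns):
# assumption: bookings may arrive out of order, so sort per ID by arrival date first
df_corp['arrival_date'] = pd.to_datetime(df_corp['arrival_date'], dayfirst=True)
df_corp['leaving_date'] = pd.to_datetime(df_corp['leaving_date'], dayfirst=True)
df_corp = df_corp.sort_values(['ID', 'arrival_date'])
df_corp['time_between'] = (df_corp['arrival_date']
                           - df_corp.groupby('ID')['leaving_date'].shift()).dt.days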

Related

Output raw value difference from one period to the next using Python

I have a dataset, df, where I have a new value for each day. I would like to output the percent difference of these values from row to row as well as the raw value difference:
Date Value
10/01/2020 1
10/02/2020 2
10/03/2020 5
10/04/2020 8
Desired output:
Date       Value PercentDifference ValueDifference
10/01/2020 1
10/02/2020 2     100               2
10/03/2020 5     150               3
10/04/2020 8     60                3
This is what I am doing:
import pandas as pd

df = pd.read_csv('df.csv')
df = (df.merge(df.assign(Date=df['Date'] - pd.to_timedelta('1D')),
               on='Date')
        .assign(Value=lambda x: x['Value_y'] - x['Value_x'])
        [['Date', 'Value']]
      )
df['PercentDifference'] = [f'{x:.2%}' for x in
                           (df['Value'].div(df['Value'].shift(1)) - 1).fillna(0)]
A member has helped me with the code above; I am also trying to incorporate the value difference as shown in my desired output.
Note - Is there a way to incorporate a 'period' - say, checking the percent difference and value difference over a 7-day period, a 30-day period, and so on?
Any suggestion is appreciated.
Use Series.pct_change and Series.diff:
df['PercentageDiff'] = df['Value'].pct_change().mul(100)
df['ValueDiff'] = df['Value'].diff()
         Date  Value  PercentageDiff  ValueDiff
0  10/01/2020      1             NaN        NaN
1  10/02/2020      2           100.0        1.0
2  10/03/2020      5           150.0        3.0
3  10/04/2020      8            60.0        3.0
Or use df.assign:
df.assign(
    percentageDiff=df["Value"].pct_change().mul(100),
    ValueDiff=df["Value"].diff()
)
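For the note about a 7- or 30-day period: both pct_change and diff accept a periods argument, which compares each row to the row that many positions back. A sketch, assuming the data has exactly one row per day (otherwise resample or reindex to a daily frequency first; the new column names are just illustrative):
# change versus 7 rows (here: 7 days) earlier; the first 7 rows become NaN
df['PctDiff_7d'] = df['Value'].pct_change(periods=7).mul(100)
df['ValueDiff_7d'] = df['Value'].diff(periods=7)
# likewise over 30 days
df['PctDiff_30d'] = df['Value'].pct_change(periods=30).mul(100)
df['ValueDiff_30d'] = df['Value'].diff(periods=30)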

Python select data for top 3 values per group in dataframe

From the given dataframe sorted by ID and Date:
ID Date Value
1 12/10/1998 0
1 04/21/2002 21030
1 08/16/2013 56792
1 09/18/2014 56792
1 09/14/2016 66354
2 06/16/2015 46645
2 12/08/2015 47641
2 12/11/2015 47641
2 04/13/2017 47641
3 07/29/2009 28616
3 03/31/2011 42127
3 03/17/2013 56000
I would like to get the values for the top 3 Dates, grouped by ID:
56792
56792
66354
47641
47641
47641
28616
42127
56000
I need the values only.
You could sort_values both by ID and Date, and use GroupBy.tail to take the values for the top 3 dates:
df.Date = pd.to_datetime(df.Date)
df.sort_values(['ID','Date']).groupby('ID').Value.tail(3).to_numpy()
# array([56792, 56792, 66354, 47641, 47641, 47641, 28616, 42127, 56000])
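If the IDs are wanted next to the values rather than a bare array, the same idea works without the final to_numpy(); a small sketch (the out name is just illustrative):
# keep the ID labels alongside the values for the 3 most recent dates per group
out = df.sort_values(['ID', 'Date']).groupby('ID').tail(3)[['ID', 'Value']]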

How to check date and time of max values in large data set Python

I have data sets that are roughly 30-60 million lines each. Each Name has one or more unique IDs associated with it for every day in the data set. For some OP_DATE and OP_HOUR values, the unique IDs can have 0 or blank values for Load1, Load2, and Load3.
I'm looking for a way to calculate the maximum values of these columns over all OP_DATE values, for data that looks like this:
Name ID OP_DATE OP_HOUR OP_TIME Load1 Load2 Load3
OMI 1 2001-01-01 1 1 11 10 12
OMI 1 2001-01-01 2 0.2 1 12 10
.
.
OMI 2A 2001-01-01 1 0.4 5
.
.
OMI 2A 2001-01-01 24 0.6 2 7 12
.
.
Kain 2 01 2002-01-01 1 0.1 6 12
Kain 2 01 2002-01-01 2 0.98 3 14 7
.
.
OMI 1 2018-01-01 1 0.89 12 10 20
.
.
I want to find the maximum values of Load1, Load2, Load3, and find what OP_DATE, OP_TIME and OP_HOUR they occurred on.
The output I want is:
Name ID max OP_DATE max OP_HOUR max OP_TIME max Load1 max Load2 max Load3
OMI 1 2011-06-11 22 ..... max values on dates
OMI 2A 2012-02-01 12 ..... max values on dates
Kain 2 01 2006-01-01 1..... max values on dates
Is there a way I can do this easily?
I've tried:
unique_MAX = df.groupby(['Name','ID'])['Load1', 'Load2', 'Load3'].max().reset_index()
But this only gives me the overall maximums - I'd like the associated dates, hours, and times as well.
To get the full row of information for the max of any given field:
Get the index locations for the max of each group you desire.
Use those indexes to return the full row at each location.
An example for finding the max Load1 for each Name & ID pair:
idx = df.groupby(['Name','ID'])['Load1'].transform(max) == df['Load1']
df[idx]
Out[14]:
name ID dt x y
1 Fred 050 1/2/2018 2 4
4 Dave 001 1/3/2018 6 1
5 Carly 002 1/3/2018 5 7
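A possible extension for the columns in the question (Load1/Load2/Load3 and the OP_DATE/OP_HOUR/OP_TIME fields are taken from the asker's frame, not from the sample output above); this sketch assumes every Name & ID group has at least one non-missing value in each load column:
# for each load column, pull the row where that group's maximum occurs,
# keeping the date, hour and time it happened on
for col in ['Load1', 'Load2', 'Load3']:
    idx = df.groupby(['Name', 'ID'])[col].idxmax()
    print(df.loc[idx, ['Name', 'ID', 'OP_DATE', 'OP_HOUR', 'OP_TIME', col]])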

Pandas Map creating NaNs

My intention is to replace labels. I found out about using a dictionary and map it to the dataframe. To that end, I first extracted the necessary fields and created a dictionary which I then fed to the map function.
My program is as follows:
factor_name = 'Help in household'
df = pd.read_csv('dat.csv')
labels = pd.read_csv('labels.csv')
fact_df = labels.loc[labels['Column'] == factor_name]
fact_dict = dict(zip(fact_df['Level'], fact_df['Rename']))
print(df.index.to_series().map(fact_dict))
My labels.csv is as follows:
Column,Name,Level,Rename
Help in household,Every day,4,Every day
Help in household,Never,1,Never
Help in household,Once a month,2,Once a month
Help in household,Once a week,3,Once a week
State,AN,AN,Andaman & Nicobar
State,AP,AP,Andhra Pradesh
State,AR,AR,Arunachal Pradesh
State,BR,BR,Bihar
State,CG,CG,Chattisgarh
State,CH,CH,Chandigarh
State,DD,DD,Daman & Diu
State,DL,DL,Delhi
State,DN,DN,Dadra & Nagar Haveli
State,GA,GA,Goa
State,GJ,GJ,Gujarat
State,HP,HP,Himachal Pradesh
State,HR,HR,Haryana
State,JH,JH,Jharkhand
State,JK,JK,Jammu & Kashmir
State,KA,KA,Karnataka
State,KL,KL,Kerala
State,MG,MG,Meghalaya
State,MH,MH,Maharashtra
State,MN,MN,Manipur
State,MP,MP,Madhya Pradesh
State,MZ,MZ,Mizoram
State,NG,NG,Nagaland
State,OR,OR,Orissa
State,PB,PB,Punjab
State,PY,PY,Pondicherry
State,RJ,RJ,Rajasthan
State,SK,SK,Sikkim
State,TN,TN,Tamil Nadu
State,TR,TR,Tripura
State,UK,UK,Uttarakhand
State,UP,UP,Uttar Pradesh
State,WB,WB,West Bengal
My dat.csv is as follows:
Id,Help in household,Maths,Reading,Science,Social
11011001001,4,20.37,,27.78,
11011001002,3,12.96,,38.18,
11011001003,4,27.78,70,,
11011001004,4,,56.67,,36
11011001005,1,,,14.55,8.33
11011001006,4,,23.33,,30
11011001007,4,40.74,70,,
11011001008,3,,26.67,,22.92
Intended result is as follows:
4 Every day
1 Never
2 Once a month
3 Once a week
The mapping fails: the result is full of NaNs, which I do not want. Can anyone tell me why?
Two things go wrong: you map the DataFrame's index (0, 1, 2, ...) instead of the 'Help in household' column, and the Level values read from labels.csv are strings (that column also holds state codes), while the column in dat.csv holds integers, so no keys match. Map the column and align the types. Try this:
In [140]: df['Help in household'] \
              .astype(str) \
              .map(labels.loc[labels['Column']=='Help in household', ['Level','Rename']]
                         .set_index('Level')['Rename'])
Out[140]:
0 Every day
1 Once a week
2 Every day
3 Every day
4 Never
5 Every day
6 Every day
7 Once a week
Name: Help in household, dtype: object
You may also consider using merge:
In [147]: df.assign(Level=df['Help in household'].astype(str)) \
              .merge(labels.loc[labels['Column']=='Help in household', ['Level','Rename']],
                     on='Level')
Out[147]:
Id Help in household Maths Reading Science Social Level Rename
0 11011001001 4 20.37 NaN 27.78 NaN 4 Every day
1 11011001003 4 27.78 70.00 NaN NaN 4 Every day
2 11011001004 4 NaN 56.67 NaN 36.00 4 Every day
3 11011001006 4 NaN 23.33 NaN 30.00 4 Every day
4 11011001007 4 40.74 70.00 NaN NaN 4 Every day
5 11011001002 3 12.96 NaN 38.18 NaN 3 Once a week
6 11011001008 3 NaN 26.67 NaN 22.92 3 Once a week
7 11011001005 1 NaN NaN 14.55 8.33 1 Never
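The original dict-based approach also works once the column (rather than the index) is mapped and the key types match; a sketch reusing the asker's fact_df:
# cast Level to str so the keys match the column values after astype(str)
fact_dict = dict(zip(fact_df['Level'].astype(str), fact_df['Rename']))
print(df['Help in household'].astype(str).map(fact_dict))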

Applying function to Pandas Groupby

I'm currently working with panel data in Python and I'm trying to compute the rolling average for each time series observation within a given group (ID).
Given the size of my data set (thousands of groups with multiple time periods), the .groupby and .apply() functions are taking way too long to compute (has been running over an hour and still nothing -- entire data set only contains around 300k observations).
I'm ultimately wanting to iterate over multiple columns, doing the following:
Compute a rolling average for each time step in a given column, per group ID
Create a new column containing the difference between the original value and the moving average [x_t - (x_{t-1} + x_t)/2]
Store the column in a new DataFrame, which would be identical to the original data set, except that it has the residual from #2 instead of the original value.
Repeat and append new residuals to df_resid (as seen below)
df_resid
date id rev_resid exp_resid
2005-09-01 1 NaN NaN
2005-12-01 1 -10000 -5500
2006-03-01 1 -352584 -262058.5
2006-06-01 1 240000 190049.5
2006-09-01 1 82648.75 37724.25
2005-09-01 2 NaN NaN
2005-12-01 2 4206.5 24353
2006-03-01 2 -302574 -331951
2006-06-01 2 103179 117405.5
2006-09-01 2 -52650 -72296.5
Here's a small sample of the original data.
df
date id rev exp
2005-09-01 1 745168.0 545168.0
2005-12-01 1 725168.0 534168.0
2006-03-01 1 20000.0 10051.0
2006-06-01 1 500000.0 390150.0
2006-09-01 1 665297.5 465598.5
2005-09-01 2 956884.0 736987.0
2005-12-01 2 965297.0 785693.0
2006-03-01 2 360149.0 121791.0
2006-06-01 2 566507.0 356602.0
2006-09-01 2 461207.0 212009.0
And the (very slow) code:
df['rev_resid'] = df.groupby('id')['rev'].apply(lambda x:x.rolling(center=False,window=2).mean())
I'm hoping there is a much more computationally efficient way to do this (primarily with respect to #1), one that could be extended to multiple columns.
Any help would be truly appreciated.
To speed up the calculation: if the dataframe is already sorted on 'id', you don't have to do the rolling within a groupby (if it isn't sorted, do so first). Since your window is only length 2, we can mask the result by checking where id == id.shift(); this works because the frame is sorted.
d1 = df[['rev', 'exp']]
df.join(
    d1.rolling(2).mean().rsub(d1).add_suffix('_resid')[df.id.eq(df.id.shift())]
)
date id rev exp rev_resid exp_resid
0 2005-09-01 1 745168.0 545168.0 NaN NaN
1 2005-12-01 1 725168.0 534168.0 -10000.00 -5500.00
2 2006-03-01 1 20000.0 10051.0 -352584.00 -262058.50
3 2006-06-01 1 500000.0 390150.0 240000.00 190049.50
4 2006-09-01 1 665297.5 465598.5 82648.75 37724.25
5 2005-09-01 2 956884.0 736987.0 NaN NaN
6 2005-12-01 2 965297.0 785693.0 4206.50 24353.00
7 2006-03-01 2 360149.0 121791.0 -302574.00 -331951.00
8 2006-06-01 2 566507.0 356602.0 103179.00 117405.50
9 2006-09-01 2 461207.0 212009.0 -52650.00 -72296.50
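The shift-mask shortcut above relies on the window being 2; for longer windows or more columns, a groupby rolling sketch along the same lines, assuming the frame is already sorted by id and date (the column list and result names are illustrative):
cols = ['rev', 'exp']
# rolling mean within each id; the result keeps the original row index as the inner level
roll = (df.groupby('id')[cols]
          .rolling(window=2)
          .mean()
          .reset_index(level=0, drop=True))
# residual = original value minus the rolling mean, as in the desired df_resid
resid = df[cols].sub(roll).add_suffix('_resid')
df_resid = df[['date', 'id']].join(resid)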
