How can I calculate the elapsed months using pandas? I have written the following, but this code is not elegant. Could you tell me a better way?
import pandas as pd

df = pd.DataFrame([pd.Timestamp('20161011'),
                   pd.Timestamp('20161101')], columns=['date'])
df['today'] = pd.Timestamp('20161202')
df = df.assign(
    elapsed_months=(12 * (df["today"].map(lambda x: x.year) -
                          df["date"].map(lambda x: x.year)) +
                    (df["today"].map(lambda x: x.month) -
                     df["date"].map(lambda x: x.month))))
# Out[34]:
# date today elapsed_months
# 0 2016-10-11 2016-12-02 2
# 1 2016-11-01 2016-12-02 1
Update for pandas 0.24.0:
Since 0.24.0 changed the API so that period subtraction returns a MonthEnd offset object, you can do a manual calculation as follows to get the whole-month difference:
12 * (df.today.dt.year - df.date.dt.year) + (df.today.dt.month - df.date.dt.month)
# 0 2
# 1 1
# dtype: int64
Wrap in a function:
def month_diff(a, b):
    return 12 * (a.dt.year - b.dt.year) + (a.dt.month - b.dt.month)
month_diff(df.today, df.date)
# 0 2
# 1 1
# dtype: int64
Prior to pandas 0.24.0, you can truncate the dates to months with to_period() and then subtract the results:
df['elapsed_months'] = df.today.dt.to_period('M') - df.date.dt.to_period('M')
df
#         date      today  elapsed_months
# 0 2016-10-11 2016-12-02               2
# 1 2016-11-01 2016-12-02               1
You could also try the following, which gives fractional months:
import numpy as np

df['months'] = (df['today'] - df['date']) / np.timedelta64(1, 'M')
df
# date today months
#0 2016-10-11 2016-12-02 1.708454
#1 2016-11-01 2016-12-02 1.018501
Update for pandas 1.3:
If you want integers instead of MonthEnd objects:
df['elapsed_months'] = df.today.dt.to_period('M').view(dtype='int64') - df.date.dt.to_period('M').view(dtype='int64')
df
# Out[11]:
# date today elapsed_months
# 0 2016-10-11 2016-12-02 2
# 1 2016-11-01 2016-12-02 1
This works with pandas 1.1.1:
df['elapsed_months'] = df.today.dt.to_period('M').astype(int) - df.date.dt.to_period('M').astype(int)
df
# Out[11]:
# date today elapsed_months
# 0 2016-10-11 2016-12-02 2
# 1 2016-11-01 2016-12-02 1
More simply, it can also be calculated with the to_period function in pandas.
pd.to_datetime('today').to_period('M') - pd.to_datetime('2020-01-01').to_period('M')
# [Out]:
# <7 * MonthEnds>
If you just want the integer value, use .n on the result:
(pd.to_datetime('today').to_period('M') - pd.to_datetime('2020-01-01').to_period('M')).n
On a dataframe, you can use it with .apply:
df["n_months"] = (df["date1"].dt.to_period("M") - df["date2"].dt.to_period("M")).apply(lambda x: x.n)
This also sidesteps the pandas 1.3.2 int-conversion issue and any rounding issues from the int conversions shown earlier.
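Applied to the df from the original question (a minimal sketch; date1/date2 above stand in for whatever datetime columns you have, here today and date, with pandas >= 0.24):
import pandas as pd

df = pd.DataFrame({'date': [pd.Timestamp('20161011'), pd.Timestamp('20161101')]})
df['today'] = pd.Timestamp('20161202')
# Period subtraction yields month-offset objects; .n extracts the integer count.
df['n_months'] = (df['today'].dt.to_period('M') - df['date'].dt.to_period('M')).apply(lambda x: x.n)
df['n_months']
# 0    2
# 1    1
# Name: n_months, dtype: int64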
The following will also accomplish this, approximating a month as 30 days:
df["elapsed_months"] = ((df["today"] - df["date"])
                        .map(lambda x: round(x.days / 30)))
# Out[34]:
# date today elapsed_months
# 0 2016-10-11 2016-12-02 2
# 1 2016-11-01 2016-12-02 1
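Note that dividing by 30 is only an approximation and can disagree with the calendar-month difference near month boundaries. A quick sketch of where the two diverge (pandas >= 0.24 for the .n attribute):
import pandas as pd

a, b = pd.Timestamp('2016-03-01'), pd.Timestamp('2016-01-31')
round((a - b).days / 30)                   # 1
(a.to_period('M') - b.to_period('M')).n    # 2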
If you don't mind ignoring the days, you can use numpy functionality, truncating both dates to month precision before subtracting:
import numpy as np

df['elapsed month'] = ((df.today.values.astype('datetime64[M]') -
                        df.date.values.astype('datetime64[M]'))
                       / np.timedelta64(1, 'M'))
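Because the [M] cast drops the day component, the division yields whole numbers; if you prefer an integer dtype you can cast the result (a small follow-up sketch on the df above):
df['elapsed month'] = df['elapsed month'].astype(int)
df['elapsed month']
# 0    2
# 1    1
# Name: elapsed month, dtype: int64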
Related
My dataset has dates in the European format, and I'm struggling to convert them into the correct format before I pass them through pd.to_datetime: for every day < 12, the month and day are swapped.
Is there an easy solution to this?
import pandas as pd
import datetime as dt
df = pd.read_csv(loc, dayfirst=True)
df['Date'] = pd.to_datetime(df['Date'])
Is there a way to force datetime to acknowledge that the input is formatted at dd/mm/yy?
Thanks for the help!
Edit, a sample from my dates:
renewal["Date"].head()
Out[235]:
0 31/03/2018
2 30/04/2018
3 28/02/2018
4 30/04/2018
5 31/03/2018
Name: Earliest renewal date, dtype: object
After running the following:
renewal['Date']=pd.to_datetime(renewal['Date'],dayfirst=True)
I get:
Out[241]:
0 2018-03-31 #Correct
2 2018-04-01 #<-- this number is wrong and should be 01-04 instead
3 2018-02-28 #Correct
Add the format argument:
df['Date'] = pd.to_datetime(df['Date'], format='%d/%m/%Y')
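For instance, on the sample dates from the question (a quick sketch; the values are copied from the head() output above):
import pandas as pd

s = pd.Series(['31/03/2018', '30/04/2018', '28/02/2018'])
pd.to_datetime(s, format='%d/%m/%Y')
# 0   2018-03-31
# 1   2018-04-30
# 2   2018-02-28
# dtype: datetime64[ns]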
You can control the date construction directly if you define separate columns for 'year', 'month' and 'day', like this:
import pandas as pd
df = pd.DataFrame(
    {'Date': ['01/03/2018', '06/08/2018', '31/03/2018', '30/04/2018']}
)
date_parts = df['Date'].apply(lambda d: pd.Series(int(n) for n in d.split('/')))
date_parts.columns = ['day', 'month', 'year']
df['Date'] = pd.to_datetime(date_parts)
date_parts
# day month year
# 0 1 3 2018
# 1 6 8 2018
# 2 31 3 2018
# 3 30 4 2018
df
# Date
# 0 2018-03-01
# 1 2018-08-06
# 2 2018-03-31
# 3 2018-04-30
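A more vectorized variant of the same idea uses str.split with expand=True instead of the row-wise apply (a sketch on the same input):
import pandas as pd

df = pd.DataFrame(
    {'Date': ['01/03/2018', '06/08/2018', '31/03/2018', '30/04/2018']}
)
# Split the string column into integer day/month/year parts in one vectorized call.
date_parts = df['Date'].str.split('/', expand=True).astype(int)
date_parts.columns = ['day', 'month', 'year']
df['Date'] = pd.to_datetime(date_parts)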
I have a dataframe in pandas called 'munged_data' with two columns, 'entry_date' and 'dob', which I have converted to Timestamps using pd.to_datetime. I am trying to calculate people's ages based on the time difference between 'entry_date' and 'dob'; to do this I need the difference in days between the two columns, so that I can then do something like round(days/365.25). I do not seem to be able to find a way to do this using a vectorized operation. When I do munged_data.entry_date - munged_data.dob I get the following:
internal_quote_id
2 15685977 days, 23:54:30.457856
3 11651985 days, 23:49:15.359744
4 9491988 days, 23:39:55.621376
7 11907004 days, 0:10:30.196224
9 15282164 days, 23:30:30.196224
15 15282227 days, 23:50:40.261632
However, I do not seem to be able to extract the days as an integer so that I can continue with my calculation.
Any help appreciated.
Using the pandas Timedelta type, available since v0.15.0, you can also do:
In[1]: import pandas as pd
In[2]: df = pd.DataFrame([pd.Timestamp('20150111'),
                          pd.Timestamp('20150301')], columns=['date'])
In[3]: df['today'] = pd.Timestamp('20150315')
In[4]: df
Out[4]:
date today
0 2015-01-11 2015-03-15
1 2015-03-01 2015-03-15
In[5]: (df['today'] - df['date']).dt.days
Out[5]:
0 63
1 14
dtype: int64
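To finish the age calculation described in the question, round(days / 365.25) then works as an ordinary vectorized expression (a minimal sketch with made-up entry_date/dob values):
import pandas as pd

munged_data = pd.DataFrame({
    'entry_date': pd.to_datetime(['2015-03-15', '2015-03-15']),
    'dob':        pd.to_datetime(['1980-01-11', '1990-03-01']),
})
# .dt.days gives plain integers, so the arithmetic applies elementwise.
munged_data['age'] = ((munged_data['entry_date'] - munged_data['dob']).dt.days / 365.25).round()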
You need pandas 0.11 for this (0.11rc1 is out; the final release is probably next week):
In [9]: df = DataFrame([ Timestamp('20010101'), Timestamp('20040601') ])
In [10]: df
Out[10]:
0
0 2001-01-01 00:00:00
1 2004-06-01 00:00:00
In [11]: df = DataFrame([ Timestamp('20010101'),
                          Timestamp('20040601') ], columns=['age'])
In [12]: df
Out[12]:
age
0 2001-01-01 00:00:00
1 2004-06-01 00:00:00
In [13]: df['today'] = Timestamp('20130419')
In [14]: df['diff'] = df['today']-df['age']
In [16]: df['years'] = df['diff'].apply(lambda x: float(x.item().days)/365)
In [17]: df
Out[17]:
age today diff years
0 2001-01-01 00:00:00 2013-04-19 00:00:00 4491 days, 00:00:00 12.304110
1 2004-06-01 00:00:00 2013-04-19 00:00:00 3244 days, 00:00:00 8.887671
You need this odd apply at the end because there is not yet full support for timedelta64[ns] scalars (analogous to how we use Timestamps now for datetime64[ns]); that is coming in 0.12.
Not sure if you still need it, but in pandas 0.14 I usually use the .astype('timedelta64[X]') method:
http://pandas.pydata.org/pandas-docs/stable/timeseries.html (frequency conversion)
df = pd.DataFrame([ pd.Timestamp('20010101'), pd.Timestamp('20040605') ])
df.ix[0]-df.ix[1]
Returns:
0 -1251 days
dtype: timedelta64[ns]
(df.ix[0]-df.ix[1]).astype('timedelta64[Y]')
Returns:
0 -4
dtype: float64
Hope that will help
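Note that .ix was removed in later pandas versions; with .iloc and .dt.days the same idea looks roughly like this (a sketch, not the answer's original astype route):
import pandas as pd

df = pd.DataFrame([pd.Timestamp('20010101'), pd.Timestamp('20040605')])
# Positional selection with .iloc replaces the removed .ix.
diff = df.iloc[0] - df.iloc[1]
diff.dt.days
# 0   -1251
# dtype: int64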
Say you have a pandas Series named time_difference with dtype timedelta64[ns].
One way of extracting just the day (or whatever desired attribute) is the following:
just_day = time_difference.apply(lambda x: pd.tslib.Timedelta(x).days)
This function is used because the numpy.timedelta64 object does not have a 'days' attribute.
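In more recent pandas you can also read the days straight off the Series via the .dt accessor, which avoids the per-element apply (a sketch; the sample values are made up):
import pandas as pd

# A stand-in for the time_difference Series described above.
time_difference = pd.Series(pd.to_timedelta(['5 days 02:00:00', '12 days 23:30:00']))
just_day = time_difference.dt.days
just_day
# 0     5
# 1    12
# dtype: int64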
To convert a duration to days, just use pd.Timedelta(...).days:
pd.Timedelta(1985, unit='Y').days
84494
I have a df like the following:
import datetime as dt
import pandas as pd
import pytz
cols = ['utc_datetimes', 'zone_name']
data = [
    ['2019-11-13 14:41:26,2019-12-18 23:04:12', 'Europe/Stockholm'],
    ['2019-12-06 21:49:04,2019-12-11 22:52:57,2019-12-18 20:30:58,2019-12-23 18:49:53,2019-12-27 18:34:23,2020-01-07 21:20:51,2020-01-11 17:36:56,2020-01-20 21:45:47,2020-01-30 20:48:49,2020-02-03 21:04:52,2020-02-07 20:05:02,2020-02-10 21:07:21', 'Europe/London']
]
df = pd.DataFrame(data, columns=cols)
print(df)
# utc_datetimes zone_name
# 0 2019-11-13 14:41:26,2019-12-18 23:04:12 Europe/Stockholm
# 1 2019-12-06 21:49:04,2019-12-11 22:52:57,2019-1... Europe/London
I would like to count how many of the dates in each row fall at night and on a Wednesday, in the row's local time. This is the desired output:
utc_datetimes zone_name nights wednesdays
0 2019-11-13 14:41:26,2019-12-18 23:04:12 Europe/Stockholm 0 1
1 2019-12-06 21:49:04,2019-12-11 22:52:57,2019-1... Europe/London 11 2
I've come up with the following double for loop, but it is not as efficient as I'd like for a sizable df:
# New columns.
df['nights'] = 0
df['wednesdays'] = 0

for row in range(df.shape[0]):
    date_list = df['utc_datetimes'].iloc[row].split(',')
    user_time_zone = df['zone_name'].iloc[row]
    for date in date_list:
        datetime_obj = dt.datetime.strptime(
            date, '%Y-%m-%d %H:%M:%S'
        ).replace(tzinfo=pytz.utc)
        local_datetime = datetime_obj.astimezone(pytz.timezone(user_time_zone))
        # Get day of the week count:
        if local_datetime.weekday() == 2:
            df['wednesdays'].iloc[row] += 1
        # Get time of the day count:
        if (local_datetime.hour > 17) & (local_datetime.hour <= 23):
            df['nights'].iloc[row] += 1
Any suggestions will be appreciated :)
P.S. Disregard the definition of 'night'; it's just an example.
One way is to first create a helper df by exploding your utc_datetimes column and then getting the UTC offset for each zone:
df = pd.DataFrame(data, columns=cols)
s = (df.assign(utc_datetimes=df["utc_datetimes"].str.split(","))
       .explode("utc_datetimes"))
s["diff"] = [pd.Timestamp(a, tz=b).utcoffset() for a,b in zip(s["utc_datetimes"],s["zone_name"])]
With this helper df you can calculate the number of wednesdays and nights:
df["wednesdays"] = (pd.to_datetime(s["utc_datetimes"])+s["diff"]).dt.day_name().eq("Wednesday").groupby(level=0).sum()
df["nights"] = ((pd.to_datetime(s["utc_datetimes"])+s["diff"]).dt.hour>17).groupby(level=0).sum()
print (df)
#                               utc_datetimes         zone_name  wednesdays  nights
# 0    2019-11-13 14:41:26,2019-12-18 23:04:12  Europe/Stockholm         1.0     0.0
# 1  2019-12-06 21:49:04,2019-12-11 22:52:57,...     Europe/London         2.0    11.0
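If you prefer integer counts rather than the floats produced by the grouped sums (a small follow-up on the df above):
# Cast the grouped sums to a plain integer dtype.
df[["wednesdays", "nights"]] = df[["wednesdays", "nights"]].astype(int)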
I have a DataFrame that looks like this:
raw_data = {'Series_Date':['2017-03-10','2017-03-13','2017-03-14','2017-03-15'],'SeriesDate':['2017-03-10','2017-03-13','2017-03-14','2017-03-15']}
import pandas as pd
df = pd.DataFrame(raw_data,columns=['Series_Date','SeriesDate'])
print df
To this DF, I would like to append four columns at the end:
1) Start_Date = SeriesDate - 10 Business Days
2) End_Date = SeriesDate - 3 Business Days
3) Date_Difference = (End_Date - Start_Date)/2. However, if the date difference is 4.5 days the value should be 5 and not 4 i.e. it should round up.
4) Roll_Date = End_Date - 'Date_Difference' Business Days. i.e. if Date_Difference is 5 then the Roll_Date = End_Date - 5 Business Days
I am able to append the first two columns as follows:
from pandas.tseries.offsets import BDay
df['Start_Date'] = df['SeriesDate'] - BDay(10)
df['End_Date'] = df['SeriesDate'] - BDay(3)
However, I am struggling with the last 2 columns. Could anyone provide some help?
Once you have this df:
Series_Date Start_Date End_Date
0 2017-03-10 2017-02-24 2017-03-07
1 2017-03-13 2017-02-27 2017-03-08
2 2017-03-14 2017-02-28 2017-03-09
3 2017-03-15 2017-03-01 2017-03-10
You can then add the two remaining columns:
df['Date_Difference'] = ((df.End_Date - df.Start_Date) / 2).dt.ceil('D')
df['Roll_Date'] = df.End_Date - pd.Series(BDay(dd.days) for dd in df.Date_Difference)
Explanation:
(df.End_Date - df.Start_Date) / 2 gives a Series of timedeltas. .dt.ceil('D') rounds this Series up to the day.
pd.Series(BDay(dd.days) for dd in df.Date_Difference) creates a Series of BusinessDays based on the number of days in Date_Difference. (There is very likely a better way to do it, but I'm a newbie with pandas).
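One possible alternative for that last step (just a sketch, reusing the same BDay import) builds the offsets with apply instead of a generator expression:
from pandas.tseries.offsets import BDay

# .dt.days turns the ceiled timedeltas into integers, which apply maps to BDay offsets.
df['Roll_Date'] = df['End_Date'] - df['Date_Difference'].dt.days.apply(BDay)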
Side question: why do you have two columns, Series_Date and SeriesDate, with the same content?
I am trying to find the time difference between two columns of the following frame:
Test Date | Test Type | First Use Date
I used the following function definition to get the difference:
def days_between(d1, d2):
    d1 = datetime.strptime(d1, "%Y-%m-%d")
    d2 = datetime.strptime(d2, "%Y-%m-%d")
    return abs((d2 - d1).days)
And it works fine, however it does not take a series as an input. So I had to construct a for loop that loops over indices:
age_veh = []
for i in range(0, len(data_manufacturer)-1):
    age_veh[i].append(days_between(data_manufacturer.iloc[i, 0], data_manufacturer.iloc[i, 4]))
However, it does return an error:
IndexError: list index out of range
I don't know whether this is the right approach or what I am doing wrong; an alternative solution would be much appreciated. Please also bear in mind that I have around 2 million rows.
Convert the columns using to_datetime; you can then subtract them to produce a timedelta, take the absolute value, and call dt.days to get the total number of days. Example:
In [119]:
import io
import pandas as pd
t="""Test Date,Test Type,First Use Date
2011-02-05,A,2010-01-05
2012-02-05,A,2010-03-05
2013-02-05,A,2010-06-05
2014-02-05,A,2010-08-05"""
df = pd.read_csv(io.StringIO(t))
df
Out[119]:
Test Date Test Type First Use Date
0 2011-02-05 A 2010-01-05
1 2012-02-05 A 2010-03-05
2 2013-02-05 A 2010-06-05
3 2014-02-05 A 2010-08-05
In [121]:
df['Test Date'] = pd.to_datetime(df['Test Date'])
df['First Use Date'] = pd.to_datetime(df['First Use Date'])
df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 4 entries, 0 to 3
Data columns (total 3 columns):
Test Date 4 non-null datetime64[ns]
Test Type 4 non-null object
First Use Date 4 non-null datetime64[ns]
dtypes: datetime64[ns](2), object(1)
memory usage: 128.0+ bytes
In [122]:
df['days'] = (df['Test Date'] - df['First Use Date']).abs().dt.days
df
Out[122]:
Test Date Test Type First Use Date days
0 2011-02-05 A 2010-01-05 396
1 2012-02-05 A 2010-03-05 702
2 2013-02-05 A 2010-06-05 976
3 2014-02-05 A 2010-08-05 1280
IIUC, you can first convert the columns with to_datetime, use abs, and then convert the timedelta to days:
print df
id value date1 date2 sum
0 A 150 2014-04-08 2014-03-08 NaN
1 B 100 2014-05-08 2014-02-08 NaN
2 B 200 2014-01-08 2014-07-08 100
3 A 200 2014-04-08 2014-03-08 NaN
4 A 300 2014-06-08 2014-04-08 350
df['date1'] = pd.to_datetime(df['date1'])
df['date2'] = pd.to_datetime(df['date2'])
df['diff'] = (df['date1'] - df['date2']).abs() / np.timedelta64(1, 'D')
print df
id value date1 date2 sum diff
0 A 150 2014-04-08 2014-03-08 NaN 31
1 B 100 2014-05-08 2014-02-08 NaN 89
2 B 200 2014-01-08 2014-07-08 100 181
3 A 200 2014-04-08 2014-03-08 NaN 31
4 A 300 2014-06-08 2014-04-08 350 61
EDIT:
I think it is better to use np.timedelta64(1, 'D') for converting to days in larger DataFrames, because it is faster.
I use EdChum's sample, but with len(df) = 4k:
import io
import pandas as pd
import numpy as np
t=u"""Test Date,Test Type,First Use Date
2011-02-05,A,2010-01-05
2012-02-05,A,2010-03-05
2013-02-05,A,2010-06-05
2014-02-05,A,2010-08-05"""
df = pd.read_csv(io.StringIO(t))
df = pd.concat([df]*1000).reset_index(drop=True)
df['Test Date'] = pd.to_datetime(df['Test Date'])
df['First Use Date'] = pd.to_datetime(df['First Use Date'])
print (df['Test Date'] - df['First Use Date']).abs().dt.days
print (df['Test Date'] - df['First Use Date']).abs() / np.timedelta64(1, 'D')
Timings:
In [174]: %timeit (df['Test Date'] - df['First Use Date']).abs().dt.days
10 loops, best of 3: 38.8 ms per loop
In [175]: %timeit (df['Test Date'] - df['First Use Date']).abs() / np.timedelta64(1, 'D')
1000 loops, best of 3: 1.62 ms per loop