I have a CSV with a date column whose dates are listed as MM/DD/YY, but I want to change the years from 00, 02, 03 to 1900, 1902, 1903 so that they are instead listed as MM/DD/YYYY.
This is what works for me:
df2['Date'] = df2['Date'].str.replace(r'00', '1900')
but I'd have to do this for every year up until 68 (aka repeat this 68 times). I'm not sure how to create a loop to do the code above for every year in that range. I tried this:
ogyear=00
newyear=1900
while ogyear <= 68:
    df2['date']=df2['Date'].str.replace(r'ogyear','newyear')
    ogyear += 1
    newyear += 1
but this returns an empty data set. Is there another way to do this?
I can't use datetime because it assumes that 02 refers to 2002 instead of 1902, and when I try to edit that as a date I get an error message from Python saying that dates are immutable and must be changed in the original data set. For this reason I need to keep the dates as strings. I also attached the csv here in case that's helpful.
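As a side note, str.replace(r'ogyear', 'newyear') searches for the literal text 'ogyear' rather than the value of the variable, so nothing in the column ever matches. A minimal sketch of a working loop (using a small hypothetical frame in place of the attached CSV, and anchoring the match so only the trailing year is replaced) could look like this:

import pandas as pd

# hypothetical sample data standing in for the attached CSV
df2 = pd.DataFrame({'Date': ['01/15/00', '03/22/02', '07/04/68']})

for ogyear in range(0, 69):
    old = f'{ogyear:02d}'      # '00', '01', ..., '68'
    new = f'{1900 + ogyear}'   # '1900', '1901', ..., '1968'
    # anchor the pattern to the end of the string so month/day digits are untouched
    df2['Date'] = df2['Date'].str.replace(rf'/{old}$', f'/{new}', regex=True)

print(df2)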
I would do it like this:
import pandas as pd

# create a data frame
d = pd.DataFrame({'date': ['20/01/00','20/01/20','20/01/50']})
# create year column from the two-digit year
d['year'] = d['date'].str.split('/').str[2].astype(int) + 1900
# add new year into old date by replacing old year (regex=True so the pattern is treated as a regex)
d['new_data'] = d['date'].str.replace(r'[0-9]*.$', '', regex=True) + d['year'].astype(str)
date year new_data
0 20/01/00 1900 20/01/1900
1 20/01/20 1920 20/01/1920
2 20/01/50 1950 20/01/1950
I'd do it the following way:
import pandas as pd
from datetime import datetime

# create a data frame with dates in format month/day/shortened year
d = pd.DataFrame({'dates': ['2/01/10','5/01/20','6/01/30']})

# loop through the dates in the dates column and add them
# to a list in the desired form using the datetime library,
# then substitute the dataframe dates column with the new ordered list
new_dates = []
for date in list(d['dates']):
    dat = datetime.date(datetime.strptime(date, '%m/%d/%y'))
    dat = dat.strftime("%m/%d/%Y")
    new_dates.append(dat)

d['dates'] = pd.Series(new_dates)
d
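One caveat with %y: Python maps two-digit years 00-68 to 2000-2068 (and 69-99 to 1969-1999), which is exactly the behaviour the question wants to avoid. A hedged workaround, sketched below, is to shift any parsed year at or above 2000 back a century before formatting:

from datetime import datetime

def to_1900s(date_str):
    # parse MM/DD/YY; %y maps 00-68 to 2000-2068, so push those years back a century
    d = datetime.strptime(date_str, '%m/%d/%y')
    if d.year >= 2000:
        d = d.replace(year=d.year - 100)
    return d.strftime('%m/%d/%Y')

print(to_1900s('02/14/02'))  # 02/14/1902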
I have a time column with the format XXXHHMMSS where XXX is the Day of Year. I also have a year column. I want to merge both these columns into one date time object.
Earlier I had split XXX off into a new column, but that was making things more complicated.
I've converted the two columns to strings
points['UTC_TIME'] = points['UTC_TIME'].astype(str)
points['YEAR_'] = points['YEAR_'].astype(str)
Then I have the following line:
points['Time'] = pd.to_datetime(points['YEAR_'] * 1000 + points['UTC_TIME'], format='%Y%j%H%M%S')
I'm getting the following error: ValueError: time data '137084552' does not match format '%Y%j%H%M%S' (match)
Here is a photo of my columns and a link to the data
It works fine for me if you combine both columns as strings, e.g.:
import pandas as pd

df = pd.DataFrame({'YEAR_': [2002, 2002, 2002],
                   'UTC_TIME': [99082552, 135082552, 146221012]})

pd.to_datetime(df['YEAR_'].astype(str) + df['UTC_TIME'].astype(str).str.zfill(9),
               format="%Y%j%H%M%S")
# 0 2002-04-09 08:25:52
# 1 2002-05-15 08:25:52
# 2 2002-05-26 22:10:12
# dtype: datetime64[ns]
Note: since %j expects a zero-padded day of year, you might need to zero-fill; see the first row in the example above.
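Applied to the columns from the question, that would be something along these lines (a sketch, assuming YEAR_ and UTC_TIME have already been cast to strings as described):

# assumes points['YEAR_'] and points['UTC_TIME'] are already string columns
points['Time'] = pd.to_datetime(points['YEAR_'] + points['UTC_TIME'].str.zfill(9),
                                format='%Y%j%H%M%S')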
I have a dataframe with name values and a date range (start/end). I need to expand/replace the dates with the ones generated from the from/to offsets. How can I do this?
Name             date_range
NameOne_%Y%m-%d  [-2,1]
NameTwo_%y%m%d   [-3,1]
Desired result (assuming that today's date is 2021-03-09, i.e. 9 March 2021):
Name
NameOne_202103-10
NameOne_202103-09
NameOne_202103-08
NameOne_202103-07
NameTwo_210310
NameTwo_210309
NameTwo_210308
NameTwo_210307
NameTwo_210306
I've been trying to iterate over the dataframe and then generate the dates, but I still can't make it work:
for index, row in self.config_df.iterrows():
    print(row['source'], row['date_range'])
    days_sub=int(str(self.config_df["date_range"][0]).strip("[").strip("]").split(",")[0].strip())
    days_add=int(str(self.config_df["date_range"][0]).strip("[").strip("]").split(",")[1].strip())
    start_date = date.today() + timedelta(days=days_sub)
    end_date = date.today() + timedelta(days=days_add)
    date_range_df=pd.date_range(start=start_date, end=end_date)
    date_range_df["source"]=row['source']
Any help is appreciated. Thanks!
Convert your date_range from str to list with the ast module:
import ast
import pandas as pd

df = df.assign(date_range=df["date_range"].apply(ast.literal_eval))
Use date_range to create the list of dates and explode to flatten the lists:
today = pd.Timestamp.today().normalize()
offset = pd.tseries.offsets.Day  # shortcut

names = pd.Series([pd.date_range(today + offset(end),
                                 today + offset(start),
                                 freq="-1D").strftime(name)
                   for name, (start, end) in df.values]).explode(ignore_index=True)
>>> names
0 NameOne_202103-10
1 NameOne_202103-09
2 NameOne_202103-08
3 NameOne_202103-07
4 NameTwo_210310
5 NameTwo_210309
6 NameTwo_210308
7 NameTwo_210307
8 NameTwo_210306
dtype: object
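If you want that back as a one-column frame matching the desired result, a small follow-up (assuming the column should be called Name):

result = names.to_frame(name='Name')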
Alright. From your question I understand you have a starting data frame like so:
config_df = pd.DataFrame({
    'name': ['NameOne_%Y-%m-%d', 'NameTwo_%y%m%d'],
    'str_date_range': ['[-2,1]', '[-3,1]']})
Resulting in this:
name str_date_range
0 NameOne_%Y-%m-%d [-2,1]
1 NameTwo_%y%m%d [-3,1]
To achieve your goal and avoid iterating over rows - which should be avoided when using pandas - you can use groupby().apply() like so:
def expand(row):
    # Get the start_date and end_date from the row, by splitting
    # the string and taking the first and last value respectively.
    # .min() is required because row is technically a pd.Series
    start_date = row.str_date_range.str.strip('[]').str.split(',').str[0].astype(int).min()
    end_date = row.str_date_range.str.strip('[]').str.split(',').str[1].astype(int).min()
    # Create a range from start_date to end_date.
    # Note that range() does not include the end_date, therefore add 1
    day_range = range(start_date, end_date + 1)
    # Create a Timedelta series from the day_range
    days_diff = pd.to_timedelta(pd.Series(day_range), unit='days')
    # Create an equally sized Series of today Timestamps
    todays = pd.Series(pd.Timestamp.today()).repeat(len(day_range)).reset_index(drop=True)
    df = todays.to_frame(name='date')
    # Add days_diff to the date column
    df['date'] = df.date + days_diff
    df['name'] = row.name
    # Extract the date format from the name
    date_format = row.name.split('_')[1]
    # Add a column with the formatted date using the date_format string
    df['date_str'] = df.date.dt.strftime(date_format=date_format)
    df['name'] = df.name.str.split('_').str[0] + '_' + df.date_str
    # Optional: drop columns
    return df.drop(columns=['date'])

config_df.groupby('name').apply(expand).reset_index(drop=True)
returning:
name date_str
0 NameOne_2021-03-07 2021-03-07
1 NameOne_2021-03-08 2021-03-08
2 NameOne_2021-03-09 2021-03-09
3 NameOne_2021-03-10 2021-03-10
4 NameTwo_210306 210306
5 NameTwo_210307 210307
6 NameTwo_210308 210308
7 NameTwo_210309 210309
8 NameTwo_210310 210310
I have a pandas dataframe and in one of the columns it has the date, for example, 1/7/13. I want to extract the year out of this. How would I do it?
I've tried
years_2 = df3.pivot_table(index=['ACCIDENT_DATE'], aggfunc ='size')
print(years_2)
but that gives me the count for each individual date, whereas I want to count just the number of times each year occurs. Something like this:
Year
2013 1000
2014 59882
2015 23232
datetime.strptime will convert a string to a datetime object based on the format you specify. Then you can get the year attribute from this object, like below:
from datetime import datetime
datetime.strptime('1/7/13', '%d/%m/%y').year
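To apply that across the whole column and count the years (a sketch, reusing the import above and assuming ACCIDENT_DATE holds strings in the day/month/two-digit-year form used in this example):

# parse each string and keep only the year, then count occurrences per year
df3['YEAR'] = df3['ACCIDENT_DATE'].apply(lambda s: datetime.strptime(s, '%d/%m/%y').year)
counts = df3['YEAR'].value_counts()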
If df3.ACCIDENT_DATE is of dtype datetime, then you can get the components of the date with .dt accessors
d = df3.ACCIDENT_DATE
# return series of dtype int
year = d.dt.year
month = d.dt.month
day = d.dt.day
If it has a time component
# return series of dtype datetime
date_ = d.dt.date
time_ = d.dt.time
# return series of dtype int
h = d.dt.hour
m = d.dt.minute
And many more in the docs
You can use the value_counts function to get the number of times each year occurs:
df3["ACCIDENT_DATE"] = pd.to_datetime(df3["ACCIDENT_DATE"])
counts = df3["ACCIDENT_DATE"].dt.year.value_counts()
To get the year as a separate column:
df3["YEAR"] = df3["ACCIDENT_DATE"].dt.year
I am using python to read an excel file.
The excel file contains a 'Date' column.
My question is how to check in python if the date in the excel sheet is in the last 3 months of this year.
Excel sheet:
Date
2018-01-20
2018-10-01
2018-10-01
2018-11-01
2018-11-17
and my code is something like:
for i in content:
    valuerContent = content[(content['valuer'] == 'ahmad') & (content['Date'] == ??)]
What should I write instead of '??' to get the needed results?
Assuming content['Date'] is a string column, you can pull the month out of it directly:
valuerContent = content[(content['valuer'] == 'ahmad') & (content['Date'].str.split('-').str[1].astype(int) >= 10)]
Only been coding in python for 4 months, but from the looks of it you could use the datetime module. Here are some examples
import datetime as dt

check_date = (2018, 11, 1)
current_date = (2018, 12, 31)  # assuming you want to check the year 2018

if (current_date[1] - check_date[1]) < 3:
    pass  # insert code here
Here the dates are stored as plain (year, month, day) tuples, so you can access the parts by index: [0] is the year, [1] is the month, [2] is the day, and so on.
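If the Date column is parsed by pandas instead, a hedged alternative is to compare the month directly (assuming "the last 3 months of this year" means October through December):

import pandas as pd

content['Date'] = pd.to_datetime(content['Date'])
valuerContent = content[(content['valuer'] == 'ahmad') & (content['Date'].dt.month >= 10)]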
I've been looking through every thread that I can find, and the only one that is relevant to this type of formatting issue is here, but it's for java...
How parse 2013-03-13T20:59:31+0000 date string to Date
I've got a column with values like 201604 and 201605 that I need to turn into date values like 2016-04-01 and 2016-05-01. To accomplish this, I've done what is below.
#Create Number to build full date
df['DAY_NBR'] = '01'
#Convert Max and Min date to string to do date transformation
df['MAXDT'] = df['MAXDT'].astype(str)
df['MINDT'] = df['MINDT'].astype(str)
#Add the day number to the max date month and year
df['MAXDT'] = df['MAXDT'] + df['DAY_NBR']
#Add the day number to the min date month and year
df['MINDT'] = df['MINDT'] + df['DAY_NBR']
#Convert Max and Min date to integer values
df['MAXDT'] = df['MAXDT'].astype(int)
df['MINDT'] = df['MINDT'].astype(int)
#Convert Max date to datetime
df['MAXDT'] = pd.to_datetime(df['MAXDT'], format='%Y%m%d')
#Convert Min date to datetime
df['MINDT'] = pd.to_datetime(df['MINDT'], format='%Y%m%d')
To be honest, I can work with this output, but it's a little messy because the unique values for the two columns are...
MAXDT Values
['2016-07-01T00:00:00.000000000' '2017-09-01T00:00:00.000000000'
'2018-06-01T00:00:00.000000000' '2017-07-01T00:00:00.000000000'
'2017-03-01T00:00:00.000000000' '2018-12-01T00:00:00.000000000'
'2017-12-01T00:00:00.000000000' '2019-01-01T00:00:00.000000000'
'2018-09-01T00:00:00.000000000' '2018-10-01T00:00:00.000000000'
'2016-04-01T00:00:00.000000000' '2018-03-01T00:00:00.000000000'
'2017-05-01T00:00:00.000000000' '2018-08-01T00:00:00.000000000'
'2017-02-01T00:00:00.000000000' '2016-12-01T00:00:00.000000000'
'2018-01-01T00:00:00.000000000' '2018-02-01T00:00:00.000000000'
'2017-06-01T00:00:00.000000000' '2018-11-01T00:00:00.000000000'
'2018-05-01T00:00:00.000000000' '2019-11-01T00:00:00.000000000'
'2016-06-01T00:00:00.000000000' '2017-10-01T00:00:00.000000000'
'2016-08-01T00:00:00.000000000' '2018-04-01T00:00:00.000000000'
'2016-03-01T00:00:00.000000000' '2016-10-01T00:00:00.000000000'
'2016-11-01T00:00:00.000000000' '2019-12-01T00:00:00.000000000'
'2016-09-01T00:00:00.000000000' '2017-08-01T00:00:00.000000000'
'2016-05-01T00:00:00.000000000' '2017-01-01T00:00:00.000000000'
'2017-11-01T00:00:00.000000000' '2018-07-01T00:00:00.000000000'
'2017-04-01T00:00:00.000000000' '2016-01-01T00:00:00.000000000'
'2016-02-01T00:00:00.000000000' '2019-02-01T00:00:00.000000000'
'2019-07-01T00:00:00.000000000' '2019-10-01T00:00:00.000000000'
'2019-09-01T00:00:00.000000000' '2019-03-01T00:00:00.000000000'
'2019-05-01T00:00:00.000000000' '2019-04-01T00:00:00.000000000'
'2019-08-01T00:00:00.000000000' '2019-06-01T00:00:00.000000000'
'2020-02-01T00:00:00.000000000' '2020-01-01T00:00:00.000000000']
MINDT Values
['2016-04-01T00:00:00.000000000' '2017-07-01T00:00:00.000000000'
'2016-02-01T00:00:00.000000000' '2017-01-01T00:00:00.000000000'
'2017-02-01T00:00:00.000000000' '2018-12-01T00:00:00.000000000'
'2017-08-01T00:00:00.000000000' '2018-04-01T00:00:00.000000000'
'2017-10-01T00:00:00.000000000' '2019-01-01T00:00:00.000000000'
'2018-05-01T00:00:00.000000000' '2018-09-01T00:00:00.000000000'
'2018-10-01T00:00:00.000000000' '2016-01-01T00:00:00.000000000'
'2016-03-01T00:00:00.000000000' '2017-11-01T00:00:00.000000000'
'2017-05-01T00:00:00.000000000' '2018-07-01T00:00:00.000000000'
'2018-06-01T00:00:00.000000000' '2017-12-01T00:00:00.000000000'
'2016-10-01T00:00:00.000000000' '2018-02-01T00:00:00.000000000'
'2017-06-01T00:00:00.000000000' '2018-08-01T00:00:00.000000000'
'2018-03-01T00:00:00.000000000' '2018-11-01T00:00:00.000000000'
'2016-08-01T00:00:00.000000000' '2016-06-01T00:00:00.000000000'
'2018-01-01T00:00:00.000000000' '2016-07-01T00:00:00.000000000'
'2016-11-01T00:00:00.000000000' '2016-09-01T00:00:00.000000000'
'2017-04-01T00:00:00.000000000' '2016-05-01T00:00:00.000000000'
'2017-09-01T00:00:00.000000000' '2016-12-01T00:00:00.000000000'
'2017-03-01T00:00:00.000000000']
I'm trying to build a loop that runs through these dates, and it works, but I don't want to have an index with all of these irrelevant zeros and a T in it. How can I convert these timestamps with empty time components to just the date in yyyy-mm-dd format?
Thank you!
Unfortunately, I believe Pandas always stores datetime objects as datetime64[ns], meaning the precision has to be like that. Even if you attempt to save as datetime64[D], it will be cast to datetime64[ns].
It's possible to just store these datetime objects as strings instead, but the simplest solution is likely to strip the extra zeros when you're looping through them (e.g., using df['MAXDT'].to_numpy().astype('datetime64[D]') and looping through the converted numpy array), or just reformatting using datetime.
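For example, a minimal sketch of both routes (assuming MAXDT is already datetime64[ns] as above):

# keep the datetime column for computations, but build a plain yyyy-mm-dd string for display
df['MAXDT_STR'] = df['MAXDT'].dt.strftime('%Y-%m-%d')

# or, as mentioned above, loop over a day-precision numpy array
for d in df['MAXDT'].to_numpy().astype('datetime64[D]'):
    print(d)  # prints e.g. 2016-07-01 without the time component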