Pandas convert mongodb bson.tz_util.FixedOffset to standard date - python

I am trying to calculate an offset from epoch seconds with a Pandas DataFrame created from data in a MongoDB document.
An example of the data layout and operation is as below (this code works OK):
import numpy as np
import pandas as pd

data = {'ds': ['2018-01-13 18:47:05.069722+00:00',
               '2018-01-14 18:47:05.119994+00:00',
               '2018-01-15 18:47:05.178768+00:00'],
        'y': [38, 20, 26]}
df = pd.DataFrame(data, columns=['ds', 'y'])
df['ds'] = pd.to_datetime(df['ds'])
t = np.array(
    (df['ds'] - pd.datetime(1970, 1, 1))
    .dt.total_seconds()
    .astype(np.float)
) / (3600 * 24.)
Note, however, that the dtype of the ds column in the code above is
datetime64[ns]
and the dtype of the date column from the actual pandas data in question is
datetime64[ns, <bson.tz_util.FixedOffset objec...
The only way I have found to get around this is to convert to string and then back to date:
dates = df['ds'].apply(lambda x: x.strftime('%Y-%m-%d %H:%M:%S'))
df['ds'] = pd.to_datetime(dates)
Is there a better way to convert a
datetime64[ns, <bson.tz_util.FixedOffset object
into a datetime64[ns] column type in Pandas?

This worked for me.
import pandas as pd
# have to convert the series to date index
dsIndex = pd.Index(df['ds'])
# strip off the TZ info (assume it was <bson.tz_util.FixedOffset objec...)
dsIndex = dsIndex.tz_localize(tz=None)
# Reassign it to UTC
dsUTC = dsIndex.tz_localize(tz='UTC')

import pytz
df[column] = df[column].apply(lambda x: x.astimezone(pytz.utc))
does the trick
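For reference, here is a minimal self-contained sketch of doing the whole conversion inside pandas (the bson FixedOffset is simulated here by an ordinary UTC offset in the strings; the two-step tz_convert/tz_localize is the same either way):
import pandas as pd
# simulate a tz-aware column like the one coming back from MongoDB
s = pd.Series(pd.to_datetime(['2018-01-13 18:47:05.069722+00:00',
                              '2018-01-14 18:47:05.119994+00:00']))
# normalise to UTC, then strip the tz info -> plain datetime64[ns]
naive = s.dt.tz_convert('UTC').dt.tz_localize(None)
print(naive.dtype)  # datetime64[ns]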

python pandas converting UTC integer to datetime

I am calling some financial data from an API which is storing the time values as (I think) UTC (example below):
[screenshot: a column of epoch-millisecond integers such as 1645804609719]
I cannot seem to convert the entire column into a usable date. I can do it for a single value using the following code, so I know this works, but I have thousands of rows with this problem and thought pandas would offer an easier way to update all the values.
from datetime import datetime
tx = int('1645804609719')/1000
print(datetime.utcfromtimestamp(tx).strftime('%Y-%m-%d %H:%M:%S'))
Any help would be greatly appreciated.
Simply use pandas.DataFrame.apply:
df['date'] = df.date.apply(lambda x: datetime.utcfromtimestamp(int(x)/1000).strftime('%Y-%m-%d %H:%M:%S'))
Another way to do it is by using pd.to_datetime as recommended by Panagiotos in the comments:
df['date'] = pd.to_datetime(df['date'],unit='ms')
You can use "to_numeric" to convert the column in integers, "div" to divide it by 1000 and finally a loop to iterate the dataframe column with datetime to get the format you want.
import pandas as pd
from datetime import datetime

df = pd.DataFrame({'date': ['1584199972000', '1645804609719'], 'values': [30, 40]})
df['date'] = pd.to_numeric(df['date']).div(1000)
for i in range(len(df)):
    df.iloc[i, 0] = datetime.utcfromtimestamp(df.iloc[i, 0]).strftime('%Y-%m-%d %H:%M:%S')
print(df)
Output:
                  date  values
0  2020-03-14 15:32:52      30
1  2022-02-25 15:56:49      40
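For reference, a vectorized sketch of the same transformation (my own variant, combining to_numeric with to_datetime(unit='ms') to avoid the Python-level loop):
import pandas as pd
df = pd.DataFrame({'date': ['1584199972000', '1645804609719'], 'values': [30, 40]})
# parse the strings as integers, interpret them as epoch milliseconds, then format
df['date'] = pd.to_datetime(pd.to_numeric(df['date']),
                            unit='ms').dt.strftime('%Y-%m-%d %H:%M:%S')
print(df)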

Pandas converting columns of several dataframes to datetime doesn't work in a loop

For some reason I cannot convert columns of different dataframes:
cols_to_change = [df1['date1'], df2['date2'], df3['date3']]
for i in cols_to_change:
    i = pd.to_datetime(i, format='%Y-%m-%d')
even though the str values can be converted to datetime.
But it works ok if I write
df1['date1'] = pd.to_datetime(df1['date1'], format = '%Y-%m-%d')
df2['date2'] = pd.to_datetime(df2['date2'], format = '%Y-%m-%d')
And if I call
pd.to_datetime(df1['date1'].loc[0], format = '%Y-%m-%d')
it returns a Timestamp. Is this the point?
pd.to_datetime returns a new Series and you assigned that series to i, which you never used after the conversion.
You need to refer to the original dataframe and column:
cols_to_change = [
    (df1, 'date1'),
    (df2, 'date2'),
    (df3, 'date3'),
]
for df, col in cols_to_change:
    df[col] = pd.to_datetime(df[col], format='%Y-%m-%d')
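To see why the original loop had no effect, here is a minimal sketch of the rebinding behaviour (the single-column df1 is hypothetical):
import pandas as pd
df1 = pd.DataFrame({'date1': ['2021-01-01', '2021-01-02']})
i = df1['date1']           # i refers to the column
i = pd.to_datetime(i)      # rebinds the name i to a new Series; df1 is untouched
print(df1['date1'].dtype)  # still object
df1['date1'] = pd.to_datetime(df1['date1'])  # assigning back is what mutates df1
print(df1['date1'].dtype)  # datetime64[ns]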

How to get the time only from timestamps?

I have a column (created AT) in my DataFrame which has timestamps like those shown below:
Created AT
1) 2021-04-19T09:14:10.526Z
2) 2021-04-19T09:13:06.809Z
3) 2021-04-19T09:13:06.821Z
I want to extract only the time from the column above. It should show something like:
9:14:8
How do I extract this?
If your date column is a string, you need to convert it to datetime and then take a substring of the time:
df = pd.DataFrame(data = {"Created At":["2021-04-19T09:14:10.526Z","2021-04-19T09:14:10.526Z"]})
df['Created At'] = pd.to_datetime(df['Created At'])
df['Created At'] = df['Created At'].dt.time.astype(str).str[:8]
df['time'] = pd.to_datetime(df['Created AT'])
print(df['time'].dt.time)
On the first line, we convert the column to datetime objects and write them to a new column.
On the second, we extract the time from the datetime objects.
There can be multiple solutions to your question; here is one using string splitting and datetime. You can get the time string using:
import datetime

s = '2021-04-19T09:14:10.526Z'
t = s.split('T')[1].split('.')[0]
print(t)  # 09:14:10
and to get a timestamp from it, add one more line:
print(datetime.datetime.strptime(t,"%H:%M:%S"))
Convert to datetime and use strftime to format exactly as you like it.
data = ['2021-04-19T09:14:10.526Z',
        '2021-04-19T09:13:06.809Z',
        '2021-04-19T09:13:06.821Z']
df = pd.DataFrame(data=data, columns=['Created AT'])
df['Created AT'] = pd.to_datetime(df['Created AT']).dt.strftime('%H:%M:%S')
print(df)
Created AT
0 09:14:10
1 09:13:06
2 09:13:06
First convert the column to datetime format if not already in that format:
df['Created AT'] = pd.to_datetime(df['Created AT'])
Then, add the new column time, formatted by .dt.strftime() as follows (if you don't want the fractional-second part):
df['time'] = df['Created AT'].dt.strftime('%H:%M:%S')
print(df)
Created AT time
0 2021-04-19 09:14:10.526000+00:00 09:14:10
1 2021-04-19 09:13:06.809000+00:00 09:13:06
2 2021-04-19 09:13:06.821000+00:00 09:13:06

Create new column based on multiple conditions of existing column while manipulating existing column

I am new to Python/pandas, coming from an R background. I am having trouble understanding how I can manipulate an existing column to create a new column based on multiple conditions of the existing column. There are 10 different conditions that need to be met, but for simplicity I will use a two-case scenario.
In R:
install.packages("lubridate")
library(lubridate)
df <- data.frame("Date" = c("2020-07-01", "2020-07-15"))
df$Date <- as.Date(df$Date, format = "%Y-%m-%d")
df$Fiscal <- ifelse(day(df$Date) > 14,
                    paste0(year(df$Date), "-", month(df$Date) + 1, "-01"),
                    paste0(year(df$Date), "-", month(df$Date), "-01"))
df$Fiscal <- as.Date(df$Fiscal, format = "%Y-%m-%d")
In Python I have:
import pandas as pd
import datetime as dt
df = {'Date': ['2020-07-01', '2020-07-15']}
df = pd.DataFrame(df)
df['Date'] = pd.to_datetime(df['Date'], yearfirst = True, format = "%Y-%m-%d")
df.loc[df['Date'].dt.day > 14, 'Fiscal'] = "-".join(
    [dt.datetime.strftime(df['Date'].dt.year),
     dt.datetime.strftime(df['Date'].dt.month + 1), "01"])
df.loc[df['Date'].dt.day <= 14, 'Fiscal'] = "-".join(
    [dt.datetime.strftime(df['Date'].dt.year),
     dt.datetime.strftime(df['Date'].dt.month), "01"])
If I don't convert the 'Date' field, it says that it expects a string; however, if I do convert the date field, I still get an error, as it seems strftime is being applied to a 'Series' object:
TypeError: descriptor 'strftime' for 'datetime.date' objects doesn't apply to a 'Series' object
I understand I may have some terminology or concepts wrong, and I apologize; however, the answers I have seen about creating a new column based on multiple conditions do not seem to manipulate the existing column they check the condition on, but simply assign a fixed value. I can only imagine there is a more efficient way of doing this that is less 'R-ey', but I am not sure where to start.
This isn't intended as a full answer, just as an illustration of how strftime works: strftime is a method of a date(time) object that takes a format string as its argument:
import pandas as pd
import datetime as dt
df = {'Date': ['2020-07-01', '2020-07-15']}
df = pd.DataFrame(df)
df['Date'] = pd.to_datetime(df['Date'], yearfirst = True, format = "%Y-%m-%d")
s = [dt.date(df['Date'][i].year, df['Date'][i].month + 1, 1).strftime('%Y-%m-%d')
     for i in df['Date'].index]
print(s)
Result:
['2020-08-01', '2020-08-01']
Again: No full answer, just a hint.
EDIT: You can vectorise this, for example by:
import pandas as pd
import datetime as dt
df = {'Date': ['2020-07-01', '2020-07-15']}
df = pd.DataFrame(df)
df['Date'] = pd.to_datetime(df['Date'], yearfirst=True, format='%Y-%m-%d')
# note: d.month + 1 raises ValueError for December dates
df['Fiscal'] = df['Date'].apply(lambda d: dt.date(d.year, d.month, 1)
                                if d.day < 15
                                else dt.date(d.year, d.month + 1, 1))
print(df)
Result:
Date Fiscal
0 2020-07-01 2020-07-01
1 2020-07-15 2020-08-01
Here I'm using an on-the-fly lambda function. You could also do it with an externally defined function:
def to_fiscal(date):
    if date.day < 15:
        return dt.date(date.year, date.month, 1)
    # roll to the first of the next month; handle the December -> January wrap
    if date.month == 12:
        return dt.date(date.year + 1, 1, 1)
    return dt.date(date.year, date.month + 1, 1)
df['Fiscal'] = df['Date'].apply(to_fiscal)
In general, vectorisation is better than looping over rows, because the looping is done at a lower level, which is much more efficient.
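For illustration, here is a fully vectorised sketch of the same fiscal-month logic (my own variant, using pandas offsets instead of apply; note it also survives the December-to-January wrap, which d.month + 1 does not):
import pandas as pd
df = pd.DataFrame({'Date': pd.to_datetime(['2020-07-01', '2020-07-15', '2020-12-20'])})
# first day of each date's own month
month_start = df['Date'].dt.to_period('M').dt.to_timestamp()
# dates after the 14th roll forward to the first of the next month
df['Fiscal'] = month_start.where(df['Date'].dt.day <= 14,
                                 month_start + pd.offsets.MonthBegin(1))
print(df)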
Until someone tells me otherwise, I will do it this way. If there's a way to do it vectorized (or just a better way in general), I would greatly appreciate it.
import pandas as pd
import datetime as dt
df = {'Date': ['2020-07-01', '2020-07-15']}
df = pd.DataFrame(df)
df['Date'] = pd.to_datetime(df['Date'], yearfirst=True, format='%Y-%m-%d')
test_list = list()
for i in df['Date'].index:
    mth = df['Date'][i].month
    yr = df['Date'][i].year
    dy = df['Date'][i].day
    if dy > 14:
        new_date = dt.date(yr, mth + 1, 1)
    else:
        new_date = dt.date(yr, mth, 1)
    test_list.append(new_date)
df['New_Date'] = test_list

Can pandas automatically read dates from a CSV file?

Today I was positively surprised by the fact that, while reading data from a data file, pandas is able to recognize the types of values, for example:
df = pandas.read_csv('test.dat', delimiter=r"\s+", names=['col1','col2','col3'])
For example it can be checked in this way:
for i, r in df.iterrows():
    print(type(r['col1']), type(r['col2']), type(r['col3']))
In particular, integers, floats and strings were recognized correctly. However, I have a column with dates in the following format: 2013-6-4. These dates were recognized as strings (not as Python date objects). Is there a way to teach pandas to recognize dates?
You should add parse_dates=True or parse_dates=['column name'] when reading; that's usually enough to magically parse it. But there are always weird formats which need to be defined manually. In such cases you can also add a date-parser function, which is the most flexible way possible.
Suppose you have a column 'datetime' with your string, then:
from datetime import datetime
dateparse = lambda x: datetime.strptime(x, '%Y-%m-%d %H:%M:%S')
df = pd.read_csv(infile, parse_dates=['datetime'], date_parser=dateparse)
This way you can even combine multiple columns into a single datetime column, this merges a 'date' and a 'time' column into a single 'datetime' column:
dateparse = lambda x: datetime.strptime(x, '%Y-%m-%d %H:%M:%S')
df = pd.read_csv(infile, parse_dates={'datetime': ['date', 'time']}, date_parser=dateparse)
You can find the directives (i.e. the letters to be used for different formats) for strptime and strftime on this page.
Perhaps the pandas interface has changed since @Rutger answered, but in the version I'm using (0.15.2), the date_parser function receives a list of dates instead of a single value. In this case, his code should be updated like so:
from datetime import datetime
import pandas as pd
dateparse = lambda dates: [datetime.strptime(d, '%Y-%m-%d %H:%M:%S') for d in dates]
df = pd.read_csv('test.dat', parse_dates=['datetime'], date_parser=dateparse)
Since the original question asker said he wants dates and the dates are in 2013-6-4 format, the dateparse function should really be:
dateparse = lambda dates: [datetime.strptime(d, '%Y-%m-%d').date() for d in dates]
You could use pandas.to_datetime() as recommended in the documentation for pandas.read_csv():
If a column or index contains an unparseable date, the entire column
or index will be returned unaltered as an object data type. For
non-standard datetime parsing, use pd.to_datetime after pd.read_csv.
Demo:
>>> D = {'date': '2013-6-4'}
>>> df = pd.DataFrame(D, index=[0])
>>> df
date
0 2013-6-4
>>> df.dtypes
date object
dtype: object
>>> df['date'] = pd.to_datetime(df.date, format='%Y-%m-%d')
>>> df
date
0 2013-06-04
>>> df.dtypes
date datetime64[ns]
dtype: object
When merging two columns into a single datetime column, the accepted answer generates an error (pandas version 0.20.3), since the columns are sent to the date_parser function separately.
The following works:
def dateparse(d, t):
    dt = d + " " + t
    return pd.datetime.strptime(dt, '%d/%m/%Y %H:%M:%S')

df = pd.read_csv(infile, parse_dates={'datetime': ['date', 'time']}, date_parser=dateparse)
pandas read_csv method is great for parsing dates. Complete documentation at http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.parsers.read_csv.html
you can even have the different date parts in different columns and pass the parameter:
parse_dates : boolean, list of ints or names, list of lists, or dict
If True -> try parsing the index. If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a
separate date column. If [[1, 3]] -> combine columns 1 and 3 and parse as a single date
column. {‘foo’ : [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’
The default sensing of dates works great, but it seems to be biased towards North American date formats. If you live elsewhere, you might occasionally be caught out by the results. As far as I can remember, 1/6/2000 means 6 January in the USA, as opposed to 1 June where I live. It is smart enough to swing them around if dates like 23/6/2000 are used. It's probably safer to stay with YYYYMMDD variations of dates, though. Apologies to the pandas developers, but I have not tested it with local dates recently.
You can use the date_parser parameter to pass a function for converting your format.
date_parser : function
Function to use for converting a sequence of string columns to an array of datetime
instances. The default uses dateutil.parser.parser to do the conversion.
Yes - according to the pandas.read_csv documentation:
Note: A fast-path exists for iso8601-formatted dates.
So if your CSV has a column named datetime and the dates look like 2013-01-01T01:01, for example, running this will make pandas (I'm on v0.19.2) pick up the date and time automatically:
df = pd.read_csv('test.csv', parse_dates=['datetime'])
Note that you need to pass parse_dates explicitly; it doesn't work without it.
Verify with:
df.dtypes
You should see the datatype of the column is datetime64[ns]
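As a self-contained check (a sketch using an in-memory CSV so no file is needed):
import io
import pandas as pd
csv = io.StringIO("datetime,value\n2013-01-01T01:01,1\n2013-06-04T12:30,2\n")
df = pd.read_csv(csv, parse_dates=['datetime'])
print(df.dtypes)  # datetime -> datetime64[ns], value -> int64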
When loading a CSV file that contains a date column, we have two approaches for making pandas recognize the date column:
Explicitly pass the format via the arg date_parser=mydateparser
Let pandas infer the format via the arg infer_datetime_format=True
Given date column data such as
01/01/18
01/02/18
we can't tell whether the first field is the month or the day. So in this case we have to use
Method 1: explicitly pass the format
mydateparser = lambda x: pd.datetime.strptime(x, "%m/%d/%y")
df = pd.read_csv(file_name, parse_dates=['date_col_name'],
                 date_parser=mydateparser)
Method 2: implicitly (automatically) recognize the format
df = pd.read_csv(file_name, parse_dates=[date_col_name], infer_datetime_format=True)
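Here is a runnable sketch of Method 1 with in-memory data (the column name date_col is mine; note that date_parser is deprecated in pandas 2.0+ in favour of the date_format argument):
import io
from datetime import datetime
import pandas as pd
data = io.StringIO("date_col,value\n01/01/18,1\n01/02/18,2\n")
mydateparser = lambda x: datetime.strptime(x, "%m/%d/%y")
df = pd.read_csv(data, parse_dates=['date_col'], date_parser=mydateparser)
# on pandas >= 2.0, prefer: pd.read_csv(..., date_format='%m/%d/%y')
print(df.dtypes)  # date_col -> datetime64[ns]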
In addition to what the other replies said: if you have to parse very large files with hundreds of thousands of timestamps, date_parser can prove to be a huge performance bottleneck, as it's a Python function called once per row. You can get a sizeable performance improvement by instead keeping the dates as text while parsing the CSV file and then converting the entire column into dates in one go:
# For a data column
df = pd.read_csv(infile, parse_dates={'mydatetime': ['date', 'time']})
df['mydatetime'] = pd.to_datetime(df['mydatetime'], exact=True, cache=True, format='%Y-%m-%d %H:%M:%S')
# For a DateTimeIndex
df = pd.read_csv(infile, parse_dates={'mydatetime': ['date', 'time']}, index_col='mydatetime')
df.index = pd.to_datetime(df.index, exact=True, cache=True, format='%Y-%m-%d %H:%M:%S')
# For a MultiIndex
df = pd.read_csv(infile, parse_dates={'mydatetime': ['date', 'time']}, index_col=['mydatetime', 'num'])
idx_mydatetime = df.index.get_level_values(0)
idx_num = df.index.get_level_values(1)
idx_mydatetime = pd.to_datetime(idx_mydatetime, exact=True, cache=True, format='%Y-%m-%d %H:%M:%S')
df.index = pd.MultiIndex.from_arrays([idx_mydatetime, idx_num])
For my use case on a file with 200k rows (one timestamp per row), that cut down processing time from about a minute to less than a second.
Read the existing string columns as date and time respectively:
pd.read_csv('CGMData.csv', parse_dates=['Date', 'Time'])
To concatenate the date and time string columns into a single new column of datetime type, removing the original columns:
if you want to rename the new column, pass a dictionary as shown in the example below, and the new column name will be the key name;
if you pass a list of columns, the new column name will be the concatenation of the column names passed in the list, separated by _, e.g. Date_Time.
# parse_dates={'given_name': ['Date', 'Time']}
pd.read_csv("InsulinData.csv", low_memory=False,
            parse_dates=[['Date', 'Time']])
pd.read_csv("InsulinData.csv", low_memory=False,
            parse_dates={'date_time': ['Date', 'Time']})
To concatenate the date and time string columns into a new column of datetime type while keeping the original columns:
pd.read_csv("InsulinData.csv", low_memory=False,
            parse_dates=[['Date', 'Time']], keep_date_col=True)
To control the format of the date and time when reading from CSV:
parser = lambda x: pd.to_datetime(x, format='%Y-%m-%d %H:%M:%S')
pd.read_csv('path', date_parser=parser, parse_dates=['date', 'time'])
If performance matters to you, make sure you time it:
import sys
import timeit
import pandas as pd
print('Python %s on %s' % (sys.version, sys.platform))
print('Pandas version %s' % pd.__version__)
repeat = 3
numbers = 100
def time(statement, _setup=None):
    print(min(
        timeit.Timer(statement, setup=_setup or setup).repeat(
            repeat, numbers)))
print("Format %m/%d/%y")
setup = """import pandas as pd
import io
data = io.StringIO('''\
ProductCode,Date
''' + '''\
x1,07/29/15
x2,07/29/15
x3,07/29/15
x4,07/30/15
x5,07/29/15
x6,07/29/15
x7,07/29/15
y7,08/05/15
x8,08/05/15
z3,08/05/15
''' * 100)"""
time('pd.read_csv(data); data.seek(0)')
time('pd.read_csv(data, parse_dates=["Date"]); data.seek(0)')
time('pd.read_csv(data, parse_dates=["Date"],'
'infer_datetime_format=True); data.seek(0)')
time('pd.read_csv(data, parse_dates=["Date"],'
'date_parser=lambda x: pd.datetime.strptime(x, "%m/%d/%y")); data.seek(0)')
print("Format %Y-%m-%d %H:%M:%S")
setup = """import pandas as pd
import io
data = io.StringIO('''\
ProductCode,Date
''' + '''\
x1,2016-10-15 00:00:43
x2,2016-10-15 00:00:56
x3,2016-10-15 00:00:56
x4,2016-10-15 00:00:12
x5,2016-10-15 00:00:34
x6,2016-10-15 00:00:55
x7,2016-10-15 00:00:06
y7,2016-10-15 00:00:01
x8,2016-10-15 00:00:00
z3,2016-10-15 00:00:02
''' * 1000)"""
time('pd.read_csv(data); data.seek(0)')
time('pd.read_csv(data, parse_dates=["Date"]); data.seek(0)')
time('pd.read_csv(data, parse_dates=["Date"],'
'infer_datetime_format=True); data.seek(0)')
time('pd.read_csv(data, parse_dates=["Date"],'
'date_parser=lambda x: pd.datetime.strptime(x, "%Y-%m-%d %H:%M:%S")); data.seek(0)')
prints:
Python 3.7.1 (v3.7.1:260ec2c36a, Oct 20 2018, 03:13:28)
[Clang 6.0 (clang-600.0.57)] on darwin
Pandas version 0.23.4
Format %m/%d/%y
0.19123052499999993
8.20691274
8.143124389
1.2384357139999977
Format %Y-%m-%d %H:%M:%S
0.5238807110000039
0.9202787830000005
0.9832778819999959
12.002349824999996
So with an ISO 8601-formatted date (%Y-%m-%d %H:%M:%S is apparently an ISO 8601 format; I guess the T can be dropped and replaced by a space), you should not specify infer_datetime_format (which apparently makes no difference with more common formats either), and passing your own parser just cripples performance. On the other hand, date_parser does make a difference with less standard date formats. Be sure to time before you optimize, as usual.
You can use the parameter date_parser with a function for converting a sequence of string columns to an array of datetime instances:
parser = lambda x: pd.to_datetime(x, format='%Y-%m-%d %H:%M:%S')
pd.read_csv('path', date_parser=parser, parse_dates=['date_col1', 'date_col2'])
Yes, this code works like a breeze. Here index 0 refers to the index of the date column.
df = pd.read_csv(filepath, parse_dates=[0], infer_datetime_format = True)
No, there is no way in pandas to automatically recognize date columns.
Pandas does a poor job at type inference. It basically puts most columns as the generic object type, unless you manually work around it, e.g. using the above-mentioned parse_dates parameter.
If you want to automatically detect column types, you'd have to use a separate data-profiling tool, e.g. visions, and then cast or feed the inferred types back into your DataFrame constructor (e.g. for dates and read_csv, using the parse_dates parameter).
