Python weekday drop from DataFrame

I'm trying to drop the weekend rows from a DataFrame (a financial time series), but I keep getting the following error:
"AttributeError: 'Series' object has no attribute 'weekday'"
Here is my code:
df = df[df.date.weekday() < 5]
df = df.drop(df.date.weekday() < 5)
I tried a few others but nothing seemed to work.
I looked at dtypes and this is what I get:
Unnamed: 0 int64
close float32
date object
high float64
low float64
open float64
quoteVolume float64
volume float64
weightedAverage float64
dtype: object
So date is an object, but I can't convert it to datetime. I tried these:
df['date'] = df.date.astype('date')
df['date'] = df.date.astype('datetime')
both gave me the error:
TypeError: data type "date" not understood
The time format of the Series is: 2016-09-23 17:00:00 so yyyy-MM-dd hh:mm:ss.

Use pd.to_datetime:
import pandas as pd
df = df[pd.to_datetime(df.date).dt.weekday < 5]
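A minimal, runnable sketch of the fix (the sample dates are illustrative, in the same yyyy-MM-dd hh:mm:ss string format as the question): parse the string column with pd.to_datetime and keep only rows whose weekday is Mon-Fri (0-4).

```python
import pandas as pd

df = pd.DataFrame({
    "date": ["2016-09-23 17:00:00",   # Friday
             "2016-09-24 17:00:00",   # Saturday
             "2016-09-26 17:00:00"],  # Monday
    "close": [1.0, 2.0, 3.0],
})

# Parse the strings, then keep rows with weekday 0-4 (Mon-Fri)
df = df[pd.to_datetime(df.date).dt.weekday < 5]
print(len(df))  # 2 -- the Saturday row is dropped
```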

Related

How to transform a Dataframe into a Series with Darts including the DatetimeIndex?

My DataFrame, temperature measurements over time:
df.info()
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 17545 entries, 2020-01-01 00:00:00+00:00 to 2022-01-01 00:00:00+00:00
Data columns (total 1 columns):
 #   Column    Non-Null Count  Dtype
---  ------    --------------  -----
 0   T (degC)  17545 non-null  float64
dtypes: float64(1)
memory usage: 274.1 KB
After transforming the dataframe into a Time Series with
df_series = TimeSeries.from_dataframe(df)
df_series
the result (screenshot omitted) shows the values as dtype object. For this reason, I can't plot the Series:
TypeError: Plotting requires coordinates to be numeric, boolean, or dates of type numpy.datetime64, datetime.datetime, cftime.datetime or pandas.Interval. Received data of type object instead.
I expected something like this from the darts doc (https://unit8co.github.io/darts/):
df
  The DataFrame.
time_col
  The time column name. If set, the column will be cast to a pandas DatetimeIndex.
  If not set, the DataFrame index will be used. In this case the DataFrame must contain an index that is
  either a pandas DatetimeIndex or a pandas RangeIndex. If a DatetimeIndex is
  used, it is better if it has no holes; alternatively, setting fill_missing_dates can in some cases solve
  these issues (filling holes with NaN, or with the provided fillna_value numeric value, if any).
Going by the method description above, I don't understand why it changed my DatetimeIndex to object.
Any suggestions on that?
Thanks.
I had the same issue. Darts doesn't work with datetime64[ns, UTC], but works with datetime64[ns]; it doesn't recognise datetime64[ns, UTC] as a datetime type.
The following fixes it by converting datetime64[ns, UTC] -> datetime64[ns]:
def set_index(df):
    df['open_time'] = pd.to_datetime(df['open_time'], infer_datetime_format=True).dt.tz_localize(None)
    df.set_index(keys='open_time', inplace=True, drop=True)
    return df
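The underlying conversion can be sketched on its own (the data here is illustrative): tz_localize(None) drops the UTC offset, turning a tz-aware column into the plain datetime64[ns] dtype Darts accepts.

```python
import pandas as pd

# A tz-aware series, as produced by parsing timestamps with a +00:00 offset
s = pd.Series(pd.to_datetime(["2020-01-01 00:00:00+00:00",
                              "2020-01-01 01:00:00+00:00"]))
print(s.dtype)      # datetime64[ns, UTC]

# Drop the timezone information to get a tz-naive series
naive = s.dt.tz_localize(None)
print(naive.dtype)  # datetime64[ns]
```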

Pandas - Problem reading a datetime64 object from a json file

I'm trying to save a Pandas data frame as a JSON file. One of the columns has the type datetime64[ns]. When I print the column the correct date is displayed. However, when I save it as a JSON file and read it back later the date value changes even after I change the column type back to datetime64[ns]. Here is a sample app which shows the issue I am running into:
#!/usr/bin/python3
import json
import pandas as pd
s = pd.Series(['1/1/2000'])
in_df = pd.DataFrame(s)
in_df[0] = pd.to_datetime(in_df[0], format='%m/%d/%Y')
print("in_df\n")
print(in_df)
print("\nin_df dtypes\n")
print(in_df.dtypes)
in_df.to_json("test.json")
out_df = pd.read_json("test.json")
out_df[0] = out_df[0].astype('datetime64[ns]')
print("\nout_df\n")
print(out_df)
print("\nout_df dtypes\n")
print(out_df.dtypes)
Here is the output:
in_df
0
0 2000-01-01
in_df dtypes
0 datetime64[ns]
dtype: object
out_df
0
0 1970-01-01 00:15:46.684800 <--- Why don't I get 2000-1-1 here?
out_df dtypes
0 datetime64[ns]
dtype: object
I'm expecting the get the original date displayed (2000-1-1) when I read back the JSON file. What am I doing wrong with my conversion? Thanks!
df = pd.read_json("test.json")
df[0] = pd.to_datetime(df[0], unit='ms')
print("\ndf\n")
print(df)
print("\ndf dtypes\n")
print(df.dtypes)
will give you
df
0
0 2000-01-01
df dtypes
0 datetime64[ns]
dtype: object
This should work for any JSON column that stores dates as epoch milliseconds.
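Alternatively, writing the dates as ISO-8601 strings avoids the epoch-unit guesswork entirely. A sketch (using an in-memory buffer instead of a file, and a hypothetical column name "when"):

```python
import io
import pandas as pd

df = pd.DataFrame({"when": pd.to_datetime(["2000-01-01"])})

# date_format="iso" serialises datetimes as ISO-8601 strings rather than
# epoch milliseconds, so they round-trip unambiguously.
buf = io.StringIO()
df.to_json(buf, date_format="iso")

buf.seek(0)
out = pd.read_json(buf, convert_dates=["when"])
print(out["when"].iloc[0])
```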

Pandas convert partial column Index to Datetime

DataFrame below contains housing price dataset from 1996 to 2016.
Other than the first 6 columns, other columns need to be converted to Datetime type.
I tried to run the following code:
HousingPrice.columns[6:] = pd.to_datetime(HousingPrice.columns[6:])
but I got the error:
TypeError: Index does not support mutable operations
I wish to convert some columns in the columns Index to Datetime type, but not all columns.
The pandas index is immutable, so you can't do that.
However, you can access and modify the underlying values of the column index through its array attribute (see the pandas documentation).
HousingPrice.columns.array[6:] = pd.to_datetime(HousingPrice.columns[6:])
should work.
Note that this would change the column index only. In order to convert the columns values, you can do this :
date_cols = HousingPrice.columns[6:]
HousingPrice[date_cols] = HousingPrice[date_cols].apply(pd.to_datetime, errors='coerce', axis=1)
EDIT
Illustrated example:
data = {'0ther_col': [1,2,3], '1996-04': ['1996-04','1996-05','1996-06'], '1995-05':['1996-02','1996-08','1996-10']}
print('ORIGINAL DATAFRAME')
df = pd.DataFrame.from_records(data)
print(df)
print("\nDATE COLUMNS")
date_cols = df.columns[-2:]
print(df.dtypes)
print('\nCASTING DATE COLUMNS TO DATETIME')
df[date_cols] = df[date_cols].apply(pd.to_datetime, errors='coerce', axis=1)
print(df.dtypes)
print('\nCASTING DATE COLUMN INDEXES TO DATETIME')
print("OLD INDEX -", df.columns)
df.columns.array[-2:] = pd.to_datetime(df[date_cols].columns)
print("NEW INDEX -",df.columns)
print('\nFINAL DATAFRAME')
print(df)
yields:
ORIGINAL DATAFRAME
0ther_col 1995-05 1996-04
0 1 1996-02 1996-04
1 2 1996-08 1996-05
2 3 1996-10 1996-06
DATE COLUMNS
0ther_col int64
1995-05 object
1996-04 object
dtype: object
CASTING DATE COLUMNS TO DATETIME
0ther_col int64
1995-05 datetime64[ns]
1996-04 datetime64[ns]
dtype: object
CASTING DATE COLUMN INDEXES TO DATETIME
OLD INDEX - Index(['0ther_col', '1995-05', '1996-04'], dtype='object')
NEW INDEX - Index(['0ther_col', 1995-05-01 00:00:00, 1996-04-01 00:00:00], dtype='object')
FINAL DATAFRAME
0ther_col 1995-05-01 00:00:00 1996-04-01 00:00:00
0 1 1996-02-01 1996-04-01
1 2 1996-08-01 1996-05-01
2 3 1996-10-01 1996-06-01
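If mutating the index in place through .array feels fragile, an alternative sketch (the column layout here is hypothetical) is to rebuild the columns index in a single assignment:

```python
import pandas as pd

df = pd.DataFrame([[1, "1996-02", "1996-08"]],
                  columns=["0ther_col", "1996-04", "1996-05"])

# Rebuild the whole column index: keep the leading label(s) as-is and
# parse the remaining labels to Timestamps.
df.columns = list(df.columns[:1]) + list(pd.to_datetime(df.columns[1:]))
print(df.columns[1])  # 1996-04-01 00:00:00
```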

Create an object of type datetime64[ns] in python

I have a pandas series like the one below:
import pandas as pd
import numpy as np
s = pd.Series(np.array([20201018, 20201019, 20201020]), index = [0, 1, 2])
s = pd.to_datetime(s, format='%Y%m%d')
print(s)
0 2020-10-18
1 2020-10-19
2 2020-10-20
dtype: datetime64[ns]
I want to check if say the date 2020-10-18 is present in the series. If I do the below I get false.
date = pd.to_datetime(20201018, format='%Y%m%d')
print(date in s)
I guess this is due to the series containing the date as type datetime64[ns] while the object I created is of type pandas._libs.tslibs.timestamps.Timestamp. How can I go about checking whether a date is present in such a series?
Actually, date in s will check whether date is in s.index. For example:
0 in s
returns True since s.index is [0,1,2].
For this case, use comparison:
s.eq(date).any()
or, for several dates, use isin:
s.isin([date1, date2]).any()
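Put together as a runnable sketch, reusing the series from the question:

```python
import numpy as np
import pandas as pd

s = pd.Series(np.array([20201018, 20201019, 20201020]), index=[0, 1, 2])
s = pd.to_datetime(s, format='%Y%m%d')
date = pd.to_datetime(20201018, format='%Y%m%d')

print(date in s)             # False: `in` tests the index [0, 1, 2]
print(s.eq(date).any())      # True: compares against the values
print(s.isin([date]).any())  # True: same idea, for several candidate dates
```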

Reading CSV file in Pandas with historical dates

I'm trying to read in a file with dates in the (UK) format 13/01/1800; however, some of the dates are before 1677, which cannot be represented by a nanosecond timestamp (see http://pandas.pydata.org/pandas-docs/stable/gotchas.html#gotchas-timestamp-limits). I understand from that page that I need to create my own PeriodIndex to cover the range I need (see http://pandas.pydata.org/pandas-docs/stable/timeseries.html#timeseries-oob), but I can't understand how to convert the string in the csv reader to a date in this PeriodIndex.
So far I have:
span = pd.period_range('1000-01-01', '2100-01-01', freq='D')
df_earliest= pd.read_csv("objects.csv", index_col=0, names=['Object Id', 'Earliest Date'], parse_dates=[1], infer_datetime_format=True, dayfirst=True)
How do I apply the span to the date reader/converter so I can create a PeriodIndex / DateTimeIndex column in the dataframe ?
you can try to do it this way:
fn = r'D:\temp\.data\36987699.csv'
def dt_parse(s):
    d, m, y = s.split('/')
    return pd.Period(year=int(y), month=int(m), day=int(d), freq='D')
df = pd.read_csv(fn, parse_dates=[0], date_parser=dt_parse)
Input file:
Date,col1
13/01/1800,aaa
25/12/1001,bbb
01/03/1267,ccc
Test:
In [16]: df
Out[16]:
Date col1
0 1800-01-13 aaa
1 1001-12-25 bbb
2 1267-03-01 ccc
In [17]: df.dtypes
Out[17]:
Date object
col1 object
dtype: object
In [18]: df['Date'].dt.year
Out[18]:
0 1800
1 1001
2 1267
Name: Date, dtype: int64
PS: you may want to add a try/except block in dt_parse() to catch the ValueError exceptions that int() can raise on malformed input.
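In recent pandas versions the date_parser argument is deprecated, so a hedged alternative (not from the original answer, data inline for illustration) is to read the column as plain strings and convert it afterwards; pd.Period is not limited to the nanosecond-Timestamp range.

```python
import pandas as pd

def dt_parse(s):
    # Parse a UK-format d/m/y string into a daily Period; Periods handle
    # years far outside the ~1677-2262 range of nanosecond Timestamps.
    try:
        d, m, y = s.split('/')
        return pd.Period(year=int(y), month=int(m), day=int(d), freq='D')
    except ValueError:
        return pd.NaT  # malformed row

df = pd.DataFrame({"Date": ["13/01/1800", "25/12/1001", "01/03/1267"],
                   "col1": ["aaa", "bbb", "ccc"]})
df["Date"] = df["Date"].apply(dt_parse)
print(df["Date"].iloc[1])  # 1001-12-25
```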
