.json extension file + timestamp + Pandas + Python

I have a .json file extension (logs.json) that was sent to me with the following data in it (I am showing only some of it as there are over 2,000 entries):
["2012-03-01T00:05:55+00:00", "2012-03-01T00:06:23+00:00", "2012-03-01T00:06:52+00:00", "2012-03-01T00:11:23+00:00", "2012-03-01T00:12:47+00:00", "2012-03-01T00:12:54+00:00", "2012-03-01T00:16:14+00:00", "2012-03-01T00:17:31+00:00", "2012-03-01T00:21:23+00:00", "2012-03-01T00:21:26+00:00", "2012-03-01T00:22:25+00:00", "2012-03-01T00:28:24+00:00", "2012-03-01T00:31:21+00:00", "2012-03-01T00:32:20+00:00", "2012-03-01T00:33:32+00:00", "2012-03-01T00:35:21+00:00", "2012-03-01T00:38:14+00:00", "2012-03-01T00:39:24+00:00", "2012-03-01T00:43:12+00:00", "2012-03-01T00:46:13+00:00", "2012-03-01T00:46:31+00:00", "2012-03-01T00:48:03+00:00", "2012-03-01T00:49:34+00:00", "2012-03-01T00:49:54+00:00", "2012-03-01T00:55:19+00:00", "2012-03-01T00:56:27+00:00", "2012-03-01T00:56:32+00:00"]
Using Pandas, I did:
import pandas as pd
logs = pd.read_json('logs.json')
logs.head()
And I get the following:
0
0 2012-03-01T00:05:55+00:00
1 2012-03-01T00:06:23+00:00
2 2012-03-01T00:06:52+00:00
3 2012-03-01T00:11:23+00:00
4 2012-03-01T00:12:47+00:00
[5 rows x 1 columns]
Then, in order to assign the proper data type including the UTC zone, I do:
logs = pd.to_datetime(logs[0], utc=True)
logs.head()
And get:
0 2012-03-01 00:05:55
1 2012-03-01 00:06:23
2 2012-03-01 00:06:52
3 2012-03-01 00:11:23
4 2012-03-01 00:12:47
Name: 0, dtype: datetime64[ns]
Here are my questions:
Is the above code correct to get my data in the right format?
Where did my UTC zone go? And what if I want to create a column with the corresponding PST time and add it to this dataset as a DataFrame column?
I seem to recall that to obtain counts per day, week, or year, I need to add .day, .week, or .year somewhere (logs.day?), but I cannot figure it out, and I am guessing that is because of the current shape of my data. How do I get counts by day, week, or year so that I can plot the data? And how would I go about plotting it?
Such simple questions that seem so hard for someone who is transitioning from R to using Python for Data Analysis! I hope you guys can help!

I think there may be a bug in the tz handling here; it's certainly possible that this should be converted by default (I was surprised that it wasn't; I suspect it's because the input is just a list).
In [21]: s = pd.read_json(js, convert_dates=[0], typ='series')  # more honestly this is a Series
In [22]: s.head()
Out[22]:
0 2012-03-01 00:05:55
1 2012-03-01 00:06:23
2 2012-03-01 00:06:52
3 2012-03-01 00:11:23
4 2012-03-01 00:12:47
dtype: datetime64[ns]
To get counts by year, month, etc., I would probably use a DatetimeIndex (when this was written, date-like columns didn't have year/month etc. methods; modern pandas exposes them through the .dt accessor):
In [23]: dti = pd.DatetimeIndex(s)
In [24]: s.groupby(dti.year).size()
Out[24]:
2012 27
dtype: int64
In [25]: s.groupby(dti.month).size()
Out[25]:
3 27
dtype: int64
Perhaps it makes more sense to view the data as a TimeSeries:
In [31]: ts = pd.Series(1, dti)
In [32]: ts.head()
Out[32]:
2012-03-01 00:05:55 1
2012-03-01 00:06:23 1
2012-03-01 00:06:52 1
2012-03-01 00:11:23 1
2012-03-01 00:12:47 1
dtype: int64
This way you can use resample:
In [33]: ts.resample('M').sum()  # in older pandas: ts.resample('M', how='sum')
Out[33]:
2012-03-31 27
Freq: M, dtype: int64
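The timezone and plotting parts of the question aren't addressed above, so here is a minimal sketch, assuming the series s from In [21] and a reasonably recent pandas ('US/Pacific' is the tz database name covering PST/PDT):
s_utc = s.dt.tz_localize('UTC')  # re-attach the UTC zone that the display dropped
pst = s_utc.dt.tz_convert('US/Pacific')  # the corresponding Pacific times
logs_df = pd.DataFrame({'utc': s_utc, 'pst': pst})  # both as DataFrame columns
daily = pd.Series(1, index=pd.DatetimeIndex(s_utc)).resample('D').sum()  # counts per day
daily.plot()  # plots the daily counts; requires matplotlib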

Related

Extract Day, Month and Hour from Timestamp string in Python [duplicate]

I have a Dataframe, df, with the following column:
df['ArrivalDate'] =
...
936 2012-12-31
938 2012-12-29
965 2012-12-31
966 2012-12-31
967 2012-12-31
968 2012-12-31
969 2012-12-31
970 2012-12-29
971 2012-12-31
972 2012-12-29
973 2012-12-29
...
The elements of the column are pandas.tslib.Timestamp.
I want to just include the year and month. I thought there would be a simple way to do it, but I can't figure it out.
Here's what I've tried:
df['ArrivalDate'].resample('M', how = 'mean')
I got the following error:
Only valid with DatetimeIndex or PeriodIndex
Then I tried:
df['ArrivalDate'].apply(lambda(x):x[:-2])
I got the following error:
'Timestamp' object has no attribute '__getitem__'
Any suggestions?
Edit: I sort of figured it out.
df.index = df['ArrivalDate']
Then, I can resample another column using the index.
But I'd still like a method for reconfiguring the entire column. Any ideas?
If you want new columns showing year and month separately you can do this:
df['year'] = pd.DatetimeIndex(df['ArrivalDate']).year
df['month'] = pd.DatetimeIndex(df['ArrivalDate']).month
or...
df['year'] = df['ArrivalDate'].dt.year
df['month'] = df['ArrivalDate'].dt.month
Then you can combine them or work with them just as they are.
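For instance, one way to combine them is a count per year-month pair, using the two columns just created (a minimal sketch, not from the original answer):
counts = df.groupby(['year', 'month']).size()  # number of rows per (year, month)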
The df['date_column'] has to be in datetime format first.
df['month_year'] = df['date_column'].dt.to_period('M')
You could also use 'D' for day or '2M' for two months as different sampling intervals; if you have time-series data with timestamps, you can go for granular sampling intervals such as '45Min' for 45 minutes or '15Min' for 15 minutes.
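A small sketch of the same call at other granularities, assuming df['date_column'] is already datetime:
df['day'] = df['date_column'].dt.to_period('D')  # daily periods
df['quarter'] = df['date_column'].dt.to_period('Q')  # quarterly periods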
You can directly access the year and month attributes, or request a datetime.datetime:
In [15]: t = pandas.tslib.Timestamp.now()
In [16]: t
Out[16]: Timestamp('2014-08-05 14:49:39.643701', tz=None)
In [17]: t.to_pydatetime() #datetime method is deprecated
Out[17]: datetime.datetime(2014, 8, 5, 14, 49, 39, 643701)
In [18]: t.day
Out[18]: 5
In [19]: t.month
Out[19]: 8
In [20]: t.year
Out[20]: 2014
One way to combine year and month is to make an integer encoding them, such as: 201408 for August, 2014. Along a whole column, you could do this as:
df['YearMonth'] = df['ArrivalDate'].map(lambda x: 100*x.year + x.month)
or many variants thereof.
I'm not a big fan of doing this, though, since it makes date alignment and arithmetic painful later and especially painful for others who come upon your code or data without this same convention. A better way is to choose a day-of-month convention, such as final non-US-holiday weekday, or first day, etc., and leave the data in a date/time format with the chosen date convention.
The calendar module is useful for obtaining the number value of certain days such as the final weekday. Then you could do something like:
import calendar
import datetime
df['AdjustedDateToEndOfMonth'] = df['ArrivalDate'].map(
    lambda x: datetime.datetime(
        x.year,
        x.month,
        max(calendar.monthcalendar(x.year, x.month)[-1][:5])
    )
)
If you are looking to solve the simpler problem of just formatting the datetime column into some stringified representation, you can make use of the strftime function from the datetime.datetime class, like this:
In [5]: df
Out[5]:
date_time
0 2014-10-17 22:00:03
In [6]: df.date_time
Out[6]:
0 2014-10-17 22:00:03
Name: date_time, dtype: datetime64[ns]
In [7]: df.date_time.map(lambda x: x.strftime('%Y-%m-%d'))
Out[7]:
0 2014-10-17
Name: date_time, dtype: object
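On recent pandas the same formatting is also available without map, through the .dt accessor (a minor variation, not part of the original answer):
df.date_time.dt.strftime('%Y-%m-%d')  # same result, vectorized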
If you want the unique month-year pair, using apply is pretty sleek.
df['mnth_yr'] = df['date_column'].apply(lambda x: x.strftime('%B-%Y'))
This outputs the month-year in one column.
Don't forget to convert the column to datetime first; I generally forget.
df['date_column'] = pd.to_datetime(df['date_column'])
SINGLE LINE: adding a column with 'year-month' pairs:
(pd.to_datetime first changes the column dtype to datetime before the operation)
df['yyyy-mm'] = pd.to_datetime(df['ArrivalDate']).dt.strftime('%Y-%m')
Accordingly for an extra 'year' or 'month' column:
df['yyyy'] = pd.to_datetime(df['ArrivalDate']).dt.strftime('%Y')
df['mm'] = pd.to_datetime(df['ArrivalDate']).dt.strftime('%m')
Extracting the year, say from ['2018-03-04']:
df['Year'] = pd.DatetimeIndex(df['date']).year
df['Year'] creates a new column. If you want to extract the month instead, just use .month.
You can first convert your date strings with pandas.to_datetime, which gives you access to all of the numpy datetime and timedelta facilities. For example:
df['ArrivalDate'] = pandas.to_datetime(df['ArrivalDate'])
df['Month'] = df['ArrivalDate'].values.astype('datetime64[M]')
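For what it's worth, that cast truncates each timestamp to month precision; a quick check under the same assumptions:
pandas.to_datetime(pandas.Series(['2012-12-31', '2012-12-29'])).values.astype('datetime64[M]')
# array(['2012-12', '2012-12'], dtype='datetime64[M]')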
@KieranPC's solution is the correct approach for Pandas, but is not easily extensible for arbitrary attributes. For this, you can use getattr within a generator expression and combine using pd.concat:
# input data
list_of_dates = ['2012-12-31', '2012-12-29', '2012-12-30']
df = pd.DataFrame({'ArrivalDate': pd.to_datetime(list_of_dates)})
# define list of attributes required
L = ['year', 'month', 'day', 'dayofweek', 'dayofyear', 'weekofyear', 'quarter']
# define generator expression of series, one for each attribute
date_gen = (getattr(df['ArrivalDate'].dt, i).rename(i) for i in L)
# concatenate results and join to original dataframe
df = df.join(pd.concat(date_gen, axis=1))
print(df)
ArrivalDate year month day dayofweek dayofyear weekofyear quarter
0 2012-12-31 2012 12 31 0 366 1 4
1 2012-12-29 2012 12 29 5 364 52 4
2 2012-12-30 2012 12 30 6 365 52 4
Thanks to jaknap32, I wanted to aggregate the results according to Year and Month, so this worked:
df_join['YearMonth'] = df_join['timestamp'].apply(lambda x:x.strftime('%Y%m'))
Output was neat:
0 201108
1 201108
2 201108
There are two steps to extract the year for the whole dataframe without using apply.
Step 1
Convert the column to datetime:
df['ArrivalDate']=pd.to_datetime(df['ArrivalDate'], format='%Y-%m-%d')
Step 2
Extract the year or the month using DatetimeIndex():
pd.DatetimeIndex(df['ArrivalDate']).year
df['Month_Year'] = df['Date'].dt.to_period('M')
Result :
Date Month_Year
0 2020-01-01 2020-01
1 2020-01-02 2020-01
2 2020-01-03 2020-01
3 2020-01-04 2020-01
4 2020-01-05 2020-01
df['year_month']=df.datetime_column.apply(lambda x: str(x)[:7])
This worked fine for me. I didn't think pandas would interpret the resultant string as a date, but when I did the plot, it knew very well my agenda and the year_month strings were ordered properly... gotta love pandas!
Then I tried:
df['ArrivalDate'].apply(lambda(x):x[:-2])
I think the proper input here should be a string, and the slice needs to drop three characters to remove the '-DD' part:
df['ArrivalDate'].astype(str).apply(lambda x: x[:-3])

Increment attributes of a datetime Series in pandas

I have a Series containing datetime64[ns] elements called series, and would like to increment the months. I thought the following would work fine, but it doesn't:
series.dt.month += 1
The error is
ValueError: modifications to a property of a datetimelike object are not supported. Change values on the original.
Is there a simple way to achieve this without needing to redefine things?
First, I created a time-series date example:
import datetime
t = [datetime.datetime(2015,4,18,23,33,58),datetime.datetime(2015,4,19,14,32,8),datetime.datetime(2015,4,20,18,42,44),datetime.datetime(2015,4,20,21,41,19)]
import pandas as pd
df = pd.DataFrame(t,columns=['Date'])
Timeseries:
df
Out[]:
Date
0 2015-04-18 23:33:58
1 2015-04-19 14:32:08
2 2015-04-20 18:42:44
3 2015-04-20 21:41:19
Now, for the increment part, you can use the offset option.
df['Date']+pd.DateOffset(days=30)
Output:
df['Date']+pd.DateOffset(days=30)
Out[66]:
0 2015-05-18 23:33:58
1 2015-05-19 14:32:08
2 2015-05-20 18:42:44
3 2015-05-20 21:41:19
Name: Date, dtype: datetime64[ns]
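Since the question asked about incrementing the month specifically (adding 30 days is only an approximation), the same mechanism accepts a months argument; a minimal sketch on the frame above:
df['Date'] + pd.DateOffset(months=1)  # calendar-aware; clips the day at month end (e.g. Jan 31 -> Feb 28)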

Convert content of Object datatype to Date datatype in Python

I am using Jupyter Notebook, Pandas framework and Python as the programming language.
I have a dataframe which is of the following shape (10500, 4). So it has 4 columns and 10500 records.
Initial_Date is one of the 4 columns and is of Object datatype. This is the kind of information it contains:
Initial_Date
1971
11969
102006
03051992
00131954
27001973
45061987
1996
It is easy to make out the format of the column as DDMMYYYY (03051992 is 3rd May 1992)
Note: As you can see there are invalid MM (00 and 13) and invalid DD (00 and 45).
I would like to use regex to extract whatever is available in the field. I don't know how to read YYYY separately from MM or DD, so please enlighten me here. After the extraction, I would like to test whether the YYYY, MM and DD are valid. If any of them is not valid, assign NaT; otherwise DD-MM-YYYY or DD/MM/YYYY (I'm not fussy about the end format).
For example: 051992 is considered invalid since this becomes DD/05/1992.
A field that has the full 8 digits, for example 10081996, is considered valid: 10/08/1996.
PS. I am starting out with Pandas and Jupyter notebook and slowly reviving my Python skills. FYI, if you guys think there is a better way to convert each field to a valid Date datatype then please do enlighten me.
You can do it this way:
result = pd.to_datetime(d.Initial_Date.astype(str), dayfirst=True, errors='coerce')
result.loc[result.isnull()] = pd.to_datetime(d.Initial_Date.astype(str), format='%d%m%Y', errors='coerce')
# the second pass uses an explicit %d%m%Y format; .loc replaces the deprecated .ix
result:
In [88]: result
Out[88]:
0 1971-01-01
1 NaT
2 2006-10-20
3 1992-03-05
4 1954-01-03
5 NaT
6 NaT
7 1996-01-01
Name: Initial_Date, dtype: datetime64[ns]
original DF
In [89]: d
Out[89]:
Initial_Date
0 1971
1 11969
2 102006
3 3051992
4 131954
5 27001973
6 45061987
7 1996
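An alternative sketch for the validity test the question asked about: zero-pad every value to eight digits and let a strict format reject impossible days and months. Note the assumption here that bare years like 1971 become NaT rather than 1 January, which differs from the accepted result above:
s = d.Initial_Date.astype(str).str.zfill(8)  # '3051992' -> '03051992'
result = pd.to_datetime(s, format='%d%m%Y', errors='coerce')  # NaT for month 00/13, day 45, etc.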

Fill missing days in timeseries (with duplicate keys)

Here is what I'm trying to do in Pandas:
load CSV file containing information about stocks for certain days
find the earliest and latest dates in the column date
create a new dataframe where all the days between the earliest and latest are filled (NaN or something like "missing" for all columns would be fine)
Currently it looks like this:
import pandas as pd
import dateutil
df = pd.read_csv("https://dl.dropboxusercontent.com/u/84641/temp/berkshire_new.csv")
df['date'] = df['date'].apply(dateutil.parser.parse)
new_date_range = pd.date_range(df['date'].min(), df['date'].max())
df = df.set_index('date')
df.reindex(new_date_range)
Unfortunately this throws the following error which I don't quite understand:
ValueError: Shape of passed values is (3, 4825), indices imply (3, 4384)
I've tried a dozen variations of this - without any luck. Any help would be much appreciated.
Edit:
After investigating this further, it looks like the problem is caused by duplicate indexes. The CSV does contain several entries for each date, which is probably causing the errors.
The question is still relevant though: How can I fill the gaps in between, although there are duplicate entries for each date?
So you have duplicates when considering symbol, date and action.
In [99]: df.head(10)
Out[99]:
symbol date change action
0 FDC 2001-08-15 00:00:00 15.069360 new
1 GPS 2001-08-15 00:00:00 19.653780 new
2 HON 2001-08-15 00:00:00 8.604316 new
3 LIZ 2001-08-15 00:00:00 6.711568 new
4 NKE 2001-08-15 00:00:00 22.686257 new
5 ODP 2001-08-15 00:00:00 5.686902 new
6 OSI 2001-08-15 00:00:00 5.893340 new
7 USB 2001-08-15 00:00:00 15.694478 new
8 NEE 2001-11-15 00:00:00 100.000000 new
9 GPS 2001-11-15 00:00:00 142.522231 increase
Create the new date index
In [102]: idx = pd.date_range(df.date.min(),df.date.max())
In [103]: idx
Out[103]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2001-08-15 00:00:00, ..., 2013-08-15 00:00:00]
Length: 4384, Freq: D, Timezone: None
This will group by symbol and action, then reindex that set to the full dates (idx), and select out the only remaining column (change), since the index is now symbol/date:
In [100]: df.groupby(['symbol','action']).apply(
lambda x: x.set_index('date').reindex(idx)
)['change'].reset_index(level=1).head()
Out[100]:
action change
symbol
ADM 2001-08-15 decrease NaN
2001-08-16 decrease NaN
2001-08-17 decrease NaN
2001-08-18 decrease NaN
2001-08-19 decrease NaN
In [101]: df.groupby(['symbol','action']).apply(lambda x: x.set_index('date').reindex(idx))['change'].reset_index(level=1)
Out[101]:
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 977632 entries, (ADM, 2001-08-15 00:00:00) to (svm, 2013-08-15 00:00:00)
Data columns (total 2 columns):
action 977632 non-null values
change 490 non-null values
dtypes: float64(1), object(1)
You can then fill forward or whatever you need. FYI, not sure what you are going to do with this, but this is not a very common type of operation as you have mostly empty data.
I'm having a similar problem at the moment. I think you shouldn't use reindex but something like asfreq or resample; with them you don't need to create an index, they will create it for you.
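A hedged sketch of that suggestion on the stock data above: duplicates within a symbol/day must be aggregated first (asfreq cannot reindex duplicate timestamps), and this fills each symbol's own date span rather than the global range:
filled = (df.set_index('date')
            .groupby('symbol')['change']
            .resample('D')
            .mean())  # one row per symbol/day, NaN where no data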

Data Cleaning Consequence on a Pre-Built Index

Objective:
To create an Index that accommodates a pre-existing set of price data from a csv file. I can build an index using list comprehensions; done that way, the construction gives me a filtered list of length 86,772 when run over 1/3/2007-8/30/2012 at 42 points per day (i.e. 10-minute intervals). However, my price data coming from the csv has length 62,034. The difference in length is due to data cleaning issues.
That said, I am not sure how to overcome the apparent mismatch between the real data and this pre-built (list comp) dataframe.
Attempt:
Am I using the first two lines incorrectly?
data=pd.read_csv('___.csv', parse_dates={'datetime':[0,1]}).set_index('datetime')
dt_index = pd.DatetimeIndex([datetime.combine(i.date,i.time) for i in data.index])
ts = pd.Series(data.prices.values, dt_index)
Questions:
As I understand it, I should use 'combine' since I want the index construction to be completely informed by my csv file. And, 'combine' returns a new datetime object whose date components are equal to the given date object’s, and whose time components are equal to the given time object’s.
When I parse_dates, is it lumping the time and date together and considering it to be a 'date'?
Is there a better way to achieve the stated objective?
Traceback Error:
AttributeError: 'unicode' object has no attribute 'date'
You can write this neatly as follows:
ts = df.prices
Here's an example:
In [1]: df = pd.read_csv('prices.csv',
parse_dates={'datetime': [0,1]}).set_index('datetime')
In [2]: df # dataframe
Out[2]:
prices duty
datetime
2012-11-12 10:00:00 1 0
2012-12-12 10:00:00 2 0
2012-12-12 11:00:00 3 1
In [3]: df.prices # timeseries
Out[3]:
datetime
2012-11-12 10:00:00 1
2012-12-12 10:00:00 2
2012-12-12 11:00:00 3
Name: prices
In [4]: ts = df.prices
You can groupby date like so (similar to this example from the docs):
In [5]: key = lambda x: x.date()
In [6]: df.groupby(key).sum()
Out[6]:
prices duty
2012-11-12 1 0
2012-12-12 5 1
In [7]: ts.groupby(key).sum()
Out[7]:
2012-11-12 1
2012-12-12 5
Where prices.csv contains:
date,time,prices,duty
11/12/2012,10:00,1,0
12/12/2012,10:00,2,0
12/12/2012,11:00,3,1
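On recent pandas the lambda key can also be written directly against the index, which may read more clearly (a minor variation on the answer):
df.groupby(df.index.date).sum()  # same daily totals as the lambda version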
