I created a holiday calendar for Germany (not all days included) as follows:
from pandas.tseries.holiday import Holiday, AbstractHolidayCalendar

class GermanHolidays(AbstractHolidayCalendar):
    rules = [Holiday('New Years Day', month=1, day=1),
             Holiday('First of May', month=5, day=1),
             Holiday('German Unity Day', month=10, day=3),
             ...]

cal = GermanHolidays()
Now I want a column that indicates whether or not each timestamp falls on a holiday ("1" or "0"). So I did the following:
holidays = cal.holidays(start=X['Time (CET)'].min(), end = X['Time (CET)'].max())
X['Holidays'] = X['Time (CET)'].isin(holidays)
X['Holidays'] = X['Holidays'].astype(float)
X is a dataframe where Time (CET) is a column in the format %d.%m.%Y %H:%M:%S. Unfortunately this is not working: no error is raised, but every row is marked with "0". So no matching is happening, and I really don't know why.
I thought it might be because the frequency of holidays is daily, while the column Time (CET) is hourly.
It would be great if you could help me. Thank you!
There might be a few reasons for that.
One of them, as mentioned by @unutbu, is a wrong (string) dtype. Make sure your X['Time (CET)'] column is of datetime dtype. This can be done as follows:
X['Time (CET)'] = pd.to_datetime(X['Time (CET)'], dayfirst=True, errors='coerce')
Another reason as you said is the time part.
Here is a demo:
In [28]: df = pd.DataFrame({'Date':pd.date_range('2017-01-01 01:01:01',
freq='9H', periods=1000)})
yields:
In [30]: df
Out[30]:
Date
0 2017-01-01 01:01:01
1 2017-01-01 10:01:01
2 2017-01-01 19:01:01
3 2017-01-02 04:01:01
4 2017-01-02 13:01:01
5 2017-01-02 22:01:01
6 2017-01-03 07:01:01
7 2017-01-03 16:01:01
8 2017-01-04 01:01:01
9 2017-01-04 10:01:01
.. ...
990 2018-01-07 07:01:01
991 2018-01-07 16:01:01
992 2018-01-08 01:01:01
993 2018-01-08 10:01:01
994 2018-01-08 19:01:01
995 2018-01-09 04:01:01
996 2018-01-09 13:01:01
997 2018-01-09 22:01:01
998 2018-01-10 07:01:01
999 2018-01-10 16:01:01
[1000 rows x 1 columns]
Filtering by holidays doesn't work because the time parts don't match:
In [29]: df.loc[df.Date.isin(holidays)]
Out[29]:
Empty DataFrame
Columns: [Date]
Index: []
We can make it work by normalizing our datetime column (truncating the time part, i.e. setting the time to 00:00:00):
In [31]: df.loc[df.Date.dt.normalize().isin(holidays)]
Out[31]:
Date
0 2017-01-01 01:01:01
1 2017-01-01 10:01:01
2 2017-01-01 19:01:01
320 2017-05-01 01:01:01
321 2017-05-01 10:01:01
322 2017-05-01 19:01:01
734 2017-10-03 07:01:01
735 2017-10-03 16:01:01
This is basically what you already have. Given that this works and yours doesn't, the likely cause is that your values are text instead of timestamps, as already noted by @unutbu and @MaxU.
Also, your post states:
displays when a holiday appears or not with ("1" or "0")
Did you really want a text value? You tried to convert to floats, but you probably just want integers.
X = pd.DataFrame({'Time (CET)': pd.date_range(start='2017-01-01', end='2017-12-31', freq='12H')})
X = X.assign(Holidays=X['Time (CET)'].isin(cal.holidays()).astype(int))
>>> X
Time (CET) Holidays
0 2017-01-01 00:00:00 1
1 2017-01-01 12:00:00 0
2 2017-01-02 00:00:00 0
...
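Putting both fixes together for your frame, here is a minimal sketch, assuming Time (CET) still holds strings in the format you stated:
import pandas as pd

# parse the text column (day-first German format, e.g. 03.10.2017 14:00:00)
X['Time (CET)'] = pd.to_datetime(X['Time (CET)'], format='%d.%m.%Y %H:%M:%S')
holidays = cal.holidays(start=X['Time (CET)'].min(), end=X['Time (CET)'].max())
# normalize() drops the time of day, so hourly stamps can match the daily holiday dates
X['Holidays'] = X['Time (CET)'].dt.normalize().isin(holidays).astype(int)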
Related
I have a dataframe with 3 columns:
import glob
import pandas as pd

file = glob.glob('InputFile.csv')
for i in file:
    df = pd.read_csv(i)
    df['Date'] = pd.to_datetime(df['Date'])
print(df)
Date X Y
0 2020-02-13 00:11:59 -91.3900 -31.7914
1 2020-02-13 01:11:59 -87.1513 -34.6838
2 2020-02-13 02:11:59 -82.9126 -37.5762
3 2020-02-13 03:11:59 -79.3558 -40.2573
4 2020-02-13 04:11:59 -73.2293 -44.2463
... ... ... ...
2034 2020-05-04 18:00:00 -36.4645 -18.3421
2035 2020-05-04 19:00:00 -36.5767 -16.8311
2036 2020-05-04 20:00:00 -36.0170 -14.9356
2037 2020-05-04 21:00:00 -36.4354 -11.0533
2038 2020-05-04 22:00:00 -40.3424 -11.4000
[2039 rows x 3 columns]
print(df.dtypes)
Date datetime64[ns]
X float64
Y float64
dtype: object
I would like the output to be:
Date X Y X_Diff Y_Diff
0 2020-02-16 00:11:59 -38.46270 -70.8352 -38.46270 -70.8352
1 2020-02-23 00:11:59 -80.70250 -7.1893 -42.23980 63.6459
2 2020-03-01 00:11:59 -47.38980 -39.2652 33.31270 -32.0759
3 2020-03-08 00:00:00 -35.65350 -64.5058 11.73630 -25.2406
4 2020-03-15 00:00:00 -43.03290 -15.8425 -7.37940 48.6633
5 2020-03-22 00:00:00 -19.77130 -25.5298 23.26160 -9.6873
6 2020-03-29 00:00:00 -13.18940 12.4093 6.58190 37.9391
7 2020-04-05 00:00:00 -8.49098 27.8407 4.69842 15.4314
8 2020-04-12 00:00:00 -19.05360 20.0445 -10.56262 -7.7962
9 2020-04-26 00:00:00 -25.61330 31.6306 -6.55970 11.5861
10 2020-05-03 00:00:00 -46.09250 -30.3557 -20.47920 -61.9863
I would like to search InputFile.csv for all dates that fall on a Sunday and extract the first occurrence of each Sunday (that is, the first entry on that day, not the later times), along with the X and Y values corresponding to that row. Then I want to save this to a new dataframe where I can do the subtraction in X and Y: the very first X and Y are copied into the columns X_Diff and Y_Diff, respectively, and then for each following row, X_Diff is the next X minus the previous X. The same goes for Y, until the end of the file.
Here is my solution.
1. Preparation: I will generate some random data to work on.
import pandas as pd
import numpy as np
df = pd.date_range('2020-02-13', '2020-05-04', freq='1H').to_frame(name='Date').reset_index(drop=True)
df['X'] = np.random.randn(df.shape[0]) * 100
df['Y'] = np.random.randn(df.shape[0]) * 100
The data is like this:
Date X Y
0 2020-02-13 00:00:00 -12.044751 165.962038
1 2020-02-13 01:00:00 63.537406 65.137176
2 2020-02-13 02:00:00 67.555256 114.186898
... ... ... ..
2. Filter the dataframe to get Sundays only (in pandas, dayofweek runs from Monday=0 to Sunday=6). Then, generate another column with the date only, for grouping purposes.
df = df[df.Date.dt.dayofweek == 6]
df['date_only'] = df.Date.dt.date
Then, it looks like this.
Date X Y date_only
72 2020-02-16 00:00:00 26.632391 120.311315 2020-02-16
73 2020-02-16 01:00:00 -14.111209 21.543440 2020-02-16
74 2020-02-16 02:00:00 -11.941086 -51.303122 2020-02-16
75 2020-02-16 03:00:00 -48.612563 137.023917 2020-02-16
76 2020-02-16 04:00:00 133.843010 -47.168805 2020-02-16
... ... ... ... ...
1939 2020-05-03 19:00:00 -158.310600 30.149292 2020-05-03
1940 2020-05-03 20:00:00 170.212825 181.626611 2020-05-03
1941 2020-05-03 21:00:00 59.773796 11.262186 2020-05-03
1942 2020-05-03 22:00:00 -99.757428 83.529157 2020-05-03
1943 2020-05-03 23:00:00 -168.435315 245.884281 2020-05-03
3. Next step, sort the data frame by "Date". Then, group the dataframe by "date_only". After that, take the first row of each group.
df = df.sort_values(by=['Date'])
df = df.groupby('date_only').apply(lambda g: g.head(1)).reset_index(drop=True).drop(columns=['date_only'])
Results:
Date X Y
0 2020-02-16 4.196690 -205.843619
1 2020-02-23 -189.811351 -5.294274
2 2020-03-01 -231.596763 -46.989246
3 2020-03-08 76.561269 -40.188202
4 2020-03-15 -18.653363 52.376442
5 2020-03-22 106.758484 22.969963
6 2020-03-29 -133.601545 185.561830
7 2020-04-05 -57.748555 -187.878427
8 2020-04-12 57.648834 10.365917
9 2020-04-19 -47.959093 177.455676
10 2020-04-26 -30.527067 -37.046330
11 2020-05-03 -52.854252 -136.069205
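As an aside, step 3 can also be spelled with the built-in first aggregation instead of the apply above; a sketch under the same assumptions:
# earliest row per Sunday: sort by time, then take the first row of each group
df = (df.sort_values('Date')
        .groupby('date_only', as_index=False)
        .first()
        .drop(columns=['date_only']))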
4. Last step, compute the difference between each X/Y value and its previous value.
df['X_Diff'] = df.X.diff()
df['Y_Diff'] = df.Y.diff()
Results:
Date X Y X_Diff Y_Diff
0 2020-02-16 4.196690 -205.843619 NaN NaN
1 2020-02-23 -189.811351 -5.294274 -194.008042 200.549345
2 2020-03-01 -231.596763 -46.989246 -41.785412 -41.694972
3 2020-03-08 76.561269 -40.188202 308.158031 6.801044
4 2020-03-15 -18.653363 52.376442 -95.214632 92.564644
5 2020-03-22 106.758484 22.969963 125.411847 -29.406479
6 2020-03-29 -133.601545 185.561830 -240.360029 162.591867
7 2020-04-05 -57.748555 -187.878427 75.852990 -373.440257
8 2020-04-12 57.648834 10.365917 115.397389 198.244344
9 2020-04-19 -47.959093 177.455676 -105.607927 167.089758
10 2020-04-26 -30.527067 -37.046330 17.432026 -214.502006
11 2020-05-03 -52.854252 -136.069205 -22.327185 -99.022874
5. If you are not happy with the "NaN" for the first row, then just fill it with the X/Y columns' original values.
df['X_Diff'] = df['X_Diff'].fillna(df.X)
df['Y_Diff'] = df['Y_Diff'].fillna(df.Y)
Final results:
Date X Y X_Diff Y_Diff
0 2020-02-16 4.196690 -205.843619 4.196690 -205.843619
1 2020-02-23 -189.811351 -5.294274 -194.008042 200.549345
2 2020-03-01 -231.596763 -46.989246 -41.785412 -41.694972
3 2020-03-08 76.561269 -40.188202 308.158031 6.801044
4 2020-03-15 -18.653363 52.376442 -95.214632 92.564644
5 2020-03-22 106.758484 22.969963 125.411847 -29.406479
6 2020-03-29 -133.601545 185.561830 -240.360029 162.591867
7 2020-04-05 -57.748555 -187.878427 75.852990 -373.440257
8 2020-04-12 57.648834 10.365917 115.397389 198.244344
9 2020-04-19 -47.959093 177.455676 -105.607927 167.089758
10 2020-04-26 -30.527067 -37.046330 17.432026 -214.502006
11 2020-05-03 -52.854252 -136.069205 -22.327185 -99.022874
Note: No time is displayed in the "Date" field of the final result. This is because the data I generated is hourly, so the first row of each Sunday is XXXX-XX-XX 00:00:00, and pandas does not display the 00:00:00 time part when every value in the column is exactly midnight, although it is still stored.
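A quick check makes this visible, as a tiny sketch:
import pandas as pd

s = pd.Series(pd.to_datetime(['2020-02-16', '2020-02-23']))
print(s)          # displayed as bare dates, because every time is midnight
print(s.dt.hour)  # the time component is still stored: 0 and 0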
Here is the Colab link; you can find all my code in a notebook there:
https://colab.research.google.com/drive/1ecSSvJW0waCU19KPoj5uiiYmHp9SSQOf?usp=sharing
I will create a dataframe as Christopher did:
import pandas as pd
import numpy as np
df = pd.date_range('2020-02-13', '2020-05-04', freq='1H').to_frame(name='Date').reset_index(drop=True)
df['X'] = np.random.randn(df.shape[0]) * 100
df['Y'] = np.random.randn(df.shape[0]) * 100
First, set the datetime column as the index:
df = df.set_index('Date')
Second, get only the rows for Sundays:
sunday_df= df[df.index.dayofweek == 6]
Third, resample to daily frequency, take the first value of each day (the question asks for the first entry of each Sunday), and drop the empty non-Sunday days:
sunday_df = sunday_df.resample('D').first().dropna()
Lastly, do the subtraction:
sunday_df['X_Diff'] = sunday_df.X.diff()
sunday_df['Y_Diff'] = sunday_df.Y.diff()
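As in the first answer, if you would rather have the first row carry its own values instead of NaN, fill the gaps from the original columns:
sunday_df['X_Diff'] = sunday_df['X_Diff'].fillna(sunday_df.X)
sunday_df['Y_Diff'] = sunday_df['Y_Diff'].fillna(sunday_df.Y)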
Rookie here so please excuse my question format:
I got an event time series dataset for two months (columns for "date/time" and "# of events", each row representing an hour).
I would like to highlight the 10 hours with the lowest numbers of events for each week. Is there a specific Pandas function for that? Thanks!
Let's say you have a dataframe df with column col as well as a datetime column.
You can simply sort by the column and take the first ten rows:
import pandas as pd
df = pd.DataFrame({'col' : [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15],
'datetime' : ['2019-01-01 00:00:00','2015-02-01 00:00:00','2015-03-01 00:00:00','2015-04-01 00:00:00',
'2018-05-01 00:00:00','2016-06-01 00:00:00','2017-07-01 00:00:00','2013-08-01 00:00:00',
'2015-09-01 00:00:00','2015-10-01 00:00:00','2015-11-01 00:00:00','2015-12-01 00:00:00',
'2014-01-01 00:00:00','2020-01-01 00:00:00','2014-01-01 00:00:00']})
df = df.sort_values('col')
df = df.iloc[0:10,:]
df
Output:
col datetime
0 1 2019-01-01 00:00:00
1 2 2015-02-01 00:00:00
2 3 2015-03-01 00:00:00
3 4 2015-04-01 00:00:00
4 5 2018-05-01 00:00:00
5 6 2016-06-01 00:00:00
6 7 2017-07-01 00:00:00
7 8 2013-08-01 00:00:00
8 9 2015-09-01 00:00:00
9 10 2015-10-01 00:00:00
I know there's a function called nlargest, and it has an nsmallest counterpart: pandas.DataFrame.nsmallest
df.nsmallest(n=10, columns=['col'])
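Since you want the 10 lowest per week rather than overall, nsmallest combines with a weekly grouper. A sketch, assuming your frame has a DatetimeIndex and a hypothetical n_events column:
import pandas as pd

# for each calendar week, pick the 10 hours with the fewest events
weekly_lowest = df.groupby(pd.Grouper(freq='W'))['n_events'].nsmallest(10)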
My bad, so your DatetimeIndex is an hourly sampling, and you need the hour(s) with the fewest events per week.
...
Date n_events
2020-06-06 08:00:00 3
2020-06-06 09:00:00 3
2020-06-06 10:00:00 2
...
Well I'd start by converting each hour into columns.
1. Create an hour column that holds the hour of the day.
df['hour'] = df['date'].dt.hour
2. Pivot the hour values into columns, with n_events as the values (see pandas.DataFrame.pivot_table).
You'll then have one datetime index and 24 hour columns, with the values denoting the number of events.
...
Date hour0 ... hour8 hour9 hour10 ... hour23
2020-06-06 0 3 3 2 0
...
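A sketch of that pivot step, with hypothetical date and n_events column names:
# one row per hour in, one row per day out, with 24 hour columns
df['day'] = df['date'].dt.normalize()
wide = df.pivot_table(index='day', columns='hour', values='n_events', aggfunc='sum')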
3. Then you can resample it to a weekly level, aggregating with sum.
df.resample('w').sum()
The last part is a bit tricky to do on the dataframe, but fairly simple if you just need the output: after sorting each weekly row, the 10 lowest hourly counts are its first 10 values.
for row in df.itertuples():
    print(sorted(row[1:])[:10])
I have a csv file like this, and this is the code I wrote to filter the dates:
example['date_1'] = pd.to_datetime(example['date_1'])
example['date_2'] = pd.to_datetime(example['date_2'])
example
date_1 ID date_2
2015-01-12 111 2016-01-20 08:34:00
2016-01-11 222 2016-12-15 08:34:00
2016-01-11 7770 2016-12-15 08:34:00
2016-01-10 7881 2016-11-17 08:32:00
2016-01-03 90243 2016-04-14 08:35:00
2016-01-03 90354 2016-04-14 08:35:00
2015-01-11 1140303 2015-12-15 08:43:00
2015-01-11 1140414 2015-12-15 08:43:00
example[(example['date_1'] <= '2016-11-01')
& (example['date_1'] >= '2015-11-01')
& (example['date_2'] <= '2016-12-16')
& (example['date_2'] >= '2015-12-15')]
Output:
2016-01-11 222 2016-12-15 08:34:00
2016-01-11 7770 2016-12-15 08:34:00
2016-01-10 7881 2016-11-17 08:32:00
2016-01-03 90243 2016-04-14 08:35:00
2016-01-03 90354 2016-04-14 08:35:00
I don't understand why it changes the format of the dates, and it seems to mix up the month and day. Also, with the conditional filter the expected result should be the same as the original dataset, but several rows were erased. Can someone help me with this? Many thanks.
Some locales format the date as dd/mm/YYYY, while others use mm/dd/YYYY. By default pandas uses the American format of mm/dd/YYYY, unless it can infer the alternate format from the values (when a day number is greater than 12, for example).
So if you know that your input date format is dd/mm/YYYY, you must tell pandas:
example['date_1'] = pd.to_datetime(example['date_1'], dayfirst=True)
example['date_2'] = pd.to_datetime(example['date_2'], dayfirst=True)
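To see the effect, note that the same ambiguous string parses to two different dates:
import pandas as pd

pd.to_datetime('03/01/2016')                 # Timestamp('2016-03-01'), month first
pd.to_datetime('03/01/2016', dayfirst=True)  # Timestamp('2016-01-03'), day first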
Once pandas has a Timestamp column, it internally stores the number of nanoseconds since 1970-01-01 00:00, and by default displays it according to ISO 8601, stripping the parts that are zero for the whole column: the time of day, fractions of seconds, or nanoseconds.
This does not matter if you just want to process the Timestamps. If at the end you want to force a format, explicitly convert the column to its string representation:
df['date_1'] = df['date_1'].dt.strftime('%d/%m/%Y %H:%M')
I have a dataset of user ids and all the times they use a particular pass. I need to find out how many days it has been since each of them first used the pass. I was thinking of running through the dataset, storing the first use in a dictionary, and subtracting it from today's date. I can't seem to get it to work.
Userid Start use Day
1712 2019-01-04 Friday
1712 2019-01-05 Saturday
9050 2019-01-04 Friday
9050 2019-01-04 Friday
9050 2019-01-06 Sunday
9409 2019-01-05 Saturday
9683 2019-05-20 Monday
8800 2019-05-17 Friday
8800 2019-05-17 Friday
This is part of the dataset. The date format is Y-m-d.
usedict = {}
keys = df.user_id
values = df.start_date
for i in keys:
    if (usedict[i] == keys):
        continue
    else:
        usedict[i] = values[i]
prints(usedict)
user_id use_count days_used Ave Daily Trips register_date days_since_reg
12 42 23 1.826087 NaT NaT
17 28 13 2.153846 NaT NaT
114 54 24 2.250000 2019-02-04 107 days
169 31 17 1.823529 NaT NaT
1414 49 20 2.450000 NaT NaT
1712 76 34 2.235294 NaT NaT
2388 24 12 2.000000 NaT NaT
6150 10 5 2.000000 2019-02-05 106 days
You can achieve what you want with the following. I have used only 2 user ids from the example given by you, but the same will apply to all.
import pandas as pd
import datetime
df = pd.DataFrame([{'Userid':'1712','use_date':'2019-01-04'},
{'Userid':'1712','use_date':'2019-01-05'},
{'Userid':'9050','use_date':'2019-01-04'},
{'Userid':'9050','use_date':'2019-01-04'},
{'Userid':'9050','use_date':'2019-01-06'}])
df.use_date = pd.to_datetime(df.use_date).dt.date
group_df = (df.sort_values(by='use_date')
              .groupby('Userid', as_index=False)
              .agg({'use_date': 'first'})
              .rename(columns={'use_date': 'first_use_date'}))
group_df['diff_from_today'] = datetime.datetime.today().date() - group_df.first_use_date
The output is:
print(group_df)
Userid first_use_date diff_from_today
0 1712 2019-01-04 139 days
1 9050 2019-01-04 139 days
Check sort_values and groupby for more details.
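For reference, the same result without the pre-sort, since the minimum does not depend on row order; a sketch using the same column names:
group_df = (df.groupby('Userid', as_index=False)['use_date'].min()
              .rename(columns={'use_date': 'first_use_date'}))
group_df['diff_from_today'] = datetime.datetime.today().date() - group_df['first_use_date']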
I am only looking at two columns, but you could find the min for each id with groupby and then use apply to get the difference (I have computed the difference in days):
import pandas as pd
import datetime
user_id = [1712, 1712, 9050, 9050, 9050, 9409, 9683, 8800, 8800]
start = ['2019-01-04', '2019-01-05', '2019-01-04', '2019-01-04', '2019-01-06', '2019-01-05', '2019-05-20', '2019-05-17', '2019-05-17']
df = pd.DataFrame(list(zip(user_id, start)), columns = ['UserId', 'Start'])
df['Start']= pd.to_datetime(df['Start'])
df = df.groupby('UserId')['Start'].agg(['min'])
now = datetime.datetime.now()
df['days'] = df['min'].apply(lambda x: (now - x).days)
a_dict = pd.Series(df.days.values,index = df.index).to_dict()
print(a_dict)
References:
to_dict() method taken from @jeff
Output: a dict mapping each UserId to the number of days since its first use.
I have an Excel file with a column named StartTime holding hh:mm:ss XX data, where the cells use the h:mm:ss AM/PM custom format. For example,
ID StartTime
1 12:00:00 PM
2 1:00:00 PM
3 2:00:00 PM
I used the following code to read the file
df = pd.read_excel('./mydata.xls',
sheet_name='Sheet1',
converters={'StartTime' : str},
)
df shows
ID StartTime
1 12:00:00
2 1:00:00
3 2:00:00
Is this a bug, or how do you overcome it? Thanks.
[Update: 7-Dec-2018]
I guess I may have made changes to the Excel file that made it behave oddly. I created another Excel file and present it here (I could not attach an Excel file, and it would not be safe anyway).
I created the following code to test:
import pandas as pd
df = pd.read_excel('./Book1.xlsx',
sheet_name='Sheet1',
converters={'StartTime': str,
'EndTime': str
}
)
df['Hours1'] = pd.NaT
df['Hours2'] = pd.NaT
print(df,'\n')
df.loc[~df.StartTime.isnull() & ~df.EndTime.isnull(),
'Hours1'] = pd.to_datetime(df.EndTime) - pd.to_datetime(df.StartTime)
df['Hours2'] = pd.to_datetime(df.EndTime) - pd.to_datetime(df.StartTime)
print(df)
The outputs are
ID StartTime EndTime Hours1 Hours2
0 0 11:00:00 12:00:00 NaT NaT
1 1 12:00:00 13:00:00 NaT NaT
2 2 13:00:00 14:00:00 NaT NaT
3 3 NaN NaN NaT NaT
4 4 14:00:00 NaN NaT NaT
ID StartTime EndTime Hours1 Hours2
0 0 11:00:00 12:00:00 3600000000000 01:00:00
1 1 12:00:00 13:00:00 3600000000000 01:00:00
2 2 13:00:00 14:00:00 3600000000000 01:00:00
3 3 NaN NaN NaT NaT
4 4 14:00:00 NaN NaT NaT
Now the question has become: "Using pandas to perform time delta from 2 "hh:mm:ss XX" columns in Microsoft Excel". I have changed the title of the question too. Thank you to those who replied and tried it out.
The question is now:
How do I represent the time delta in hours instead of nanoseconds?
It seems that the StartTime column is formatted as text in your file.
Have you tried reading it with parse_dates along with a parser function specified via the date_parser parameter? It should work similarly to read_csv(), although the docs don't list these options explicitly despite them being available.
Like so:
import datetime

pd.read_excel(r'./mydata.xls',
              parse_dates=['StartTime'],
              date_parser=lambda x: datetime.datetime.strptime(x, '%I:%M:%S %p').time())
Given the update:
pd.read_excel(r'./mydata.xls', parse_dates=['StartTime', 'EndTime'])
(df['EndTime'] - df['StartTime']).dt.seconds//3600
alternatively
# '//' is available since pandas v0.23.4, otherwise use '/' and round
(df['EndTime'] - df['StartTime'])//pd.Timedelta(1, 'h')
both resulting in the same
0 1
1 1
2 1
dtype: int64
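One caveat: .dt.seconds returns only the seconds component of each timedelta (it wraps at 24 hours), so for spans that might exceed a day, total_seconds() is the safer building block:
# full duration in hours, robust for multi-day spans
hours = (df['EndTime'] - df['StartTime']).dt.total_seconds() / 3600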