I want to add a column to my data frame prod_data based on a range of dates. This is an example of the data in the ['Mount Time'] column that the new column will be derived from:
0 2022-08-17 06:07:00
1 2022-08-17 06:12:00
2 2022-08-17 06:40:00
3 2022-08-17 06:45:00
4 2022-08-17 06:47:00
The new column is named ['Week'] and I want it to run Monday through Sunday, with week 1 starting on 9/5/22 and running through 9/11/22, then week 2 the next Monday through Sunday, and so on until the last week, which would be 53. I would also like weeks prior to 9/5 to have negative week numbers, so 8/29/22 would be the start of week -1 and so on.
The only thing I could think of was to create 2 massive lists and use np.select to define the parameters of the column, but there has to be a cleaner way of doing this, right?
You can use pandas datetime objects to figure out how many days away a date is from your start date, 9/5/2022, and then use floor division to convert that to week numbers. I made the "mount_time" column just to emphasize that the original column should be a datetime object.
prod_data["mount_time"] = pd.to_datetime( prod_data[ "Mount Time" ] )
start_date = pd.to_datetime( "9/5/2022" )
days_away = prod_data.mount_time - start_date
prod_data["Week"] = ( days_away.dt.days // 7 ) + 1
As intended, 9/5/2022 through 9/11/2022 will have a value of 1. 8/29/2022 would start week 0 (not -1 as you wrote), unless you want 9/5/2022 to start week 0 (in which case just delete the + 1 from the code). Some more examples, from a small test DataFrame:
>>> test[ ["date", "Week" ] ]
date Week
0 2022-08-05 -4
1 2022-08-14 -3
2 2022-08-28 -1
3 2022-08-29 0
4 2022-08-30 0
5 2022-09-05 1
6 2022-09-11 1
7 2022-09-12 2
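If you really do want the week starting 8/29 to be -1 with no week 0, as in your description, a small numpy adjustment works. A minimal sketch, building on days_away from above:

import numpy as np

# Skip week 0: weeks on or after the start date shift up by one,
# so 8/29-9/4 becomes week -1 and 9/5-9/11 stays week 1
weeks = days_away.dt.days // 7
prod_data["Week"] = np.where(weeks >= 0, weeks + 1, weeks)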
I'm trying to subtract two columns in a dataset that hold times as strings, in order to get a duration for statistical analysis.
Basically, TOC is start time and IA is end time.
Something is slightly wrong:
dfc = pd.DataFrame(zip(*[TOC,IA]),columns=['TOC','IA'])
print (dfc)
dfc.['TOC']= dfc.['TOC'].astype(dt.datetime)
dfc['TOC'] = pd.to_datetime(dfc['TOC'])
dfc['TOC'] = [time.time() for time in dfc['TOC']]
Convert the columns to datetime before subtracting:
>>> pd.to_datetime(dfc["IA"], format="%H:%M:%S")-pd.to_datetime(dfc["TOC"], format="%H:%M:%S")
0 0 days 00:08:07
1 0 days 00:15:29
2 0 days 00:11:14
3 0 days 00:27:50
dtype: timedelta64[ns]
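To keep the result for statistical analysis, you can assign it to a column and convert it to a numeric unit. A quick sketch, assuming both columns hold "%H:%M:%S" strings as above (the duration and duration_min column names are just illustrative):

dfc["duration"] = (pd.to_datetime(dfc["IA"], format="%H:%M:%S")
                   - pd.to_datetime(dfc["TOC"], format="%H:%M:%S"))
# Timedelta columns expose .dt.total_seconds() for numeric conversion
dfc["duration_min"] = dfc["duration"].dt.total_seconds() / 60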
I need to import a .xlsx sheet into pandas which has a column for the processing time of an associated activity. All entries in this column look somewhat like this:
01:20:34
12:22:30
25:01:02
155:20:56
These say how many hours, minutes, and seconds were needed. When I use pd.read_excel, pandas correctly interprets each of the timestamps with less than 24 hours and reads them as above (the first two cases). The timestamps with more than 24 hours (the last two), on the other hand, are converted into datetime objects, which look like 1900-01-02T14:58:03 instead of 62:58:03.
Is there a simple solution?
I think part of the problem is not in Python/pandas but in Excel. The date '1900-01-01' is the base date used by Excel, represented by the number '1'. You can check this: if you write '0' in a cell and then format that cell as a date, you get '1900-01-00', and for '1' you get '1900-01-01'.
So, try exporting your Excel file to a CSV file before importing into pandas, then import it this way:
import pandas as pd
df1 = pd.read_csv('sample_data.csv')
In this case, you get this DataFrame with the duration column as strings (I added an id column for reference).
duration id
0 01:20:34 1
1 12:22:30 2
2 25:01:02 3
3 155:20:56 4
Then, for your purpose, I suggest you do not try to convert those values to a datetime but to a timedelta. One strategy is to split the strings on colons and then build a timedelta instance from the three fields: hours, minutes, and seconds.
import datetime as dt

def converter1(x):
    # Split "HH:MM:SS" on colons and convert each field to an integer
    vals = x.split(':')
    vals = [int(val) for val in vals]
    # Hours beyond 24 roll over into days automatically
    out = dt.timedelta(hours=vals[0], minutes=vals[1], seconds=vals[2])
    return out

df1['deltat'] = df1['duration'].apply(converter1)
duration id deltat
0 01:20:34 1 0 days 01:20:34
1 12:22:30 2 0 days 12:22:30
2 25:01:02 3 1 days 01:01:02
3 155:20:56 4 6 days 11:20:56
If you need to convert those values to decimal hours or other derived fields, use the total_seconds() method of timedelta:
df1['deltat_hr'] = df1['deltat'].apply(lambda x: x.total_seconds()/3600)
duration id deltat deltat_hr
0 01:20:34 1 0 days 01:20:34 1.342778
1 12:22:30 2 0 days 12:22:30 12.375000
2 25:01:02 3 1 days 01:01:02 25.017222
3 155:20:56 4 6 days 11:20:56 155.348889
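As an aside, pd.to_timedelta should be able to parse these strings directly, even with hours beyond 24, which would replace the converter with one line (worth verifying on your data):

# Parses strings like "155:20:56" straight into timedeltas
df1['deltat'] = pd.to_timedelta(df1['duration'])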
I am trying to calculate the number of days until the next holiday and since the last holiday. My method of calculation is as below:
holidays = pd.Series(pd.to_datetime(["01.01.2013", "06.01.2013", "14.02.2013","29.03.2013",
"31.03.2013", "01.04.2013", "01.05.2013", "03.05.2013",
"19.05.2013", "26.05.2013", "30.05.2013", "23.06.2013",
"15.07.2013", "27.10.2013", "01.11.2013", "11.11.2013",
"24.12.2013", "25.12.2013", "26.12.2013", "31.12.2013",
"01.01.2014", "06.01.2014", "14.02.2014", "30.03.2014",
"18.04.2014", "20.04.2014", "21.04.2014", "01.05.2014",
"03.05.2014", "03.05.2014", "26.05.2014", "08.06.2014",
"19.06.2014", "23.06.2014", "15.08.2014", "26.10.2014",
"01.11.2014", "11.11.2014", "24.12.2014", "25.12.2014",
"26.12.2014", "31.12.2014",
"01.01.2015", "06.01.2015", "14.02.2015", "29.03.2015",
"03.04.2015", "05.04.2015", "06.04.2015", "01.05.2015",
"03.05.2015", "24.05.2015", "26.05.2015", "04.06.2015",
"23.06.2015", "15.08.2015", "25.10.2015", "01.11.2015",
"11.11.2015", "24.12.2015", "25.12.2015", "26.12.2015",
"31.12.2015"], dayfirst=True))
#Number of days until next holiday
d_until_next_holiday = []
#Number of days since last holiday
d_since_last_holiday = []
for row in data.itertuples():
    next_special_date = holidays[holidays >= row["Date"]].iloc[0]
    d_until_next_holiday.append((next_special_date - row["Date"]) / pd.Timedelta('1D'))
    previous_special_date = holidays[holidays <= row.index].iloc[-1]
    d_since_last_holiday.append((row["Date"] - previous_special_date) / pd.Timedelta('1D'))
#Add new cols to DF
sto2STG14["d_until_next_holiday"] = d_until_next_holiday
sto2STG14["d_since_last_holiday"] = d_since_last_holiday
Nevertheless, I get an error like the one below:
TypeError: tuple indices must be integers or slices, not str
Why do I get this error? I know that row is a tuple, but I use .iloc[0] and .iloc[-1] in my code. What can I do?
The error itself comes from row["Date"]: itertuples yields namedtuples, which support attribute access (row.Date) or integer indexing, not string keys. But with pandas, you rarely need to loop at all. In this case, the .shift method allows you to compute everything in one go:
import pandas
holidays = pandas.Series(pandas.to_datetime([
"01.01.2013", "06.01.2013", "14.02.2013","29.03.2013",
"31.03.2013", "01.04.2013", "01.05.2013", "03.05.2013",
"19.05.2013", "26.05.2013", "30.05.2013", "23.06.2013",
"15.07.2013", "27.10.2013", "01.11.2013", "11.11.2013",
"24.12.2013", "25.12.2013", "26.12.2013", "31.12.2013",
"01.01.2014", "06.01.2014", "14.02.2014", "30.03.2014",
"18.04.2014", "20.04.2014", "21.04.2014", "01.05.2014",
"03.05.2014", "03.05.2014", "26.05.2014", "08.06.2014",
"19.06.2014", "23.06.2014", "15.08.2014", "26.10.2014",
"01.11.2014", "11.11.2014", "24.12.2014", "25.12.2014",
"26.12.2014", "31.12.2014",
"01.01.2015", "06.01.2015", "14.02.2015", "29.03.2015",
"03.04.2015", "05.04.2015", "06.04.2015", "01.05.2015",
"03.05.2015", "24.05.2015", "26.05.2015", "04.06.2015",
"23.06.2015", "15.08.2015", "25.10.2015", "01.11.2015",
"11.11.2015", "24.12.2015", "25.12.2015", "26.12.2015",
"31.12.2015"
], dayfirst=True)
)
results = (
holidays
.sort_values()
.to_frame('holiday')
.assign(
days_since_prev=lambda df: df['holiday'] - df['holiday'].shift(1),
days_until_next=lambda df: df['holiday'].shift(-1) - df['holiday'],
)
)
results.head(10)
And I get:
holiday days_since_prev days_until_next
0 2013-01-01 NaT 5 days
1 2013-01-06 5 days 39 days
2 2013-02-14 39 days 43 days
3 2013-03-29 43 days 2 days
4 2013-03-31 2 days 1 days
5 2013-04-01 1 days 30 days
6 2013-05-01 30 days 2 days
7 2013-05-03 2 days 16 days
8 2013-05-19 16 days 7 days
9 2013-05-26 7 days 4 days
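If you also need these distances for each row of your own DataFrame, as in your original loop, numpy's searchsorted does it without looping. A sketch, assuming data["Date"] is already datetime64 and every date falls between the first and last holiday (otherwise the indices go out of bounds):

import numpy as np

hol = holidays.sort_values().to_numpy()
dates = data["Date"].to_numpy()

# First holiday >= each date, and last holiday <= each date
nxt = np.searchsorted(hol, dates, side="left")
prv = np.searchsorted(hol, dates, side="right") - 1

data["d_until_next_holiday"] = (hol[nxt] - dates) / np.timedelta64(1, "D")
data["d_since_last_holiday"] = (dates - hol[prv]) / np.timedelta64(1, "D")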
With Pandas I have created a DataFrame from an imported .csv file (this file is generated through simulation). The DataFrame consists of half-hourly energy consumption data for a single year. I have already created a DatetimeIndex for the dates.
I would like to be able to reformat this data into average hourly weekday and weekend profiles, with the weekday profile excluding holidays.
DataFrame:
Date_Time Equipment:Electricity:LGF Equipment:Electricity:GF
01/01/2000 00:30 0.583979872 0.490327348
01/01/2000 01:00 0.583979872 0.490327348
01/01/2000 01:30 0.583979872 0.490327348
01/01/2000 02:00 0.583979872 0.490327348
I found an example (Getting the average of a certain hour on weekdays over several years in a pandas dataframe) that explains doing this for several years, but not explicitly for a week (without holidays) and weekend.
I realised that there are no resampling techniques in Pandas that do this directly; I used several offset aliases (http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases) for creating monthly and daily profiles.
I was thinking of using the business day frequency to create a new date index of working days and to compare that to my DataFrame's DatetimeIndex for every half hour, then returning values for working days and weekend days when true or false respectively to create a new dataset, but I am not sure how to do this.
PS: I am just getting into Python and Pandas.
Dummy data (for future reference, you're more likely to get an answer if you post some in a copy-paste-able form):
df = pd.DataFrame(data={'a': np.random.randn(1000)},
                  index=pd.date_range(start='2000-01-01', periods=1000, freq='30T'))
Here's an approach. First define a US (or modify as appropriate) business day offset with holidays, and generate a range covering your dates.
from pandas.tseries.holiday import USFederalHolidayCalendar
from pandas.tseries.offsets import CustomBusinessDay
bday_us = CustomBusinessDay(calendar=USFederalHolidayCalendar())
bday_over_df = pd.date_range(start=df.index.min().date(),
end=df.index.max().date(), freq=bday_us)
Then, develop your two grouping columns. An hour column is easy.
df['hour'] = df.index.hour
For weekday/weekend/holiday, define a function to group the data.
def group_day(date):
    # Saturday and Sunday are weekday numbers 5 and 6
    if date.weekday() in [5, 6]:
        return 'weekend'
    # Business days (holidays excluded) appear in the custom range
    elif date.date() in bday_over_df:
        return 'weekday'
    else:
        return 'holiday'
df['day_group'] = df.index.map(group_day)
Then, just group by the two columns as you wish (swap .sum() for .mean() to get average rather than total profiles).
In [140]: df.groupby(['day_group', 'hour']).sum()
Out[140]:
a
day_group hour
holiday 0 1.890621
1 -0.029606
2 0.255001
3 2.837000
4 -1.787479
5 0.644113
6 0.407966
7 -1.798526
8 -0.620614
9 -0.567195
10 -0.822207
11 -2.675911
12 0.940091
13 -1.601885
14 1.575595
15 1.500558
16 -2.512962
17 -1.677603
18 0.072809
19 -1.406939
20 2.474293
21 -1.142061
22 -0.059231
23 -0.040455
weekday 0 9.192131
1 2.759302
2 8.379552
3 -1.189508
4 3.796635
5 3.471802
... ...
18 -5.217554
19 3.294072
20 -7.461023
21 8.793223
22 4.096128
23 -0.198943
weekend 0 -2.774550
1 0.461285
2 1.522363
3 4.312562
4 0.793290
5 2.078327
6 -4.523184
7 -0.051341
8 0.887956
9 2.112092
10 -2.727364
11 2.006966
12 7.401570
13 -1.958666
14 1.139436
15 -1.418326
16 -2.353082
17 -1.381131
18 -0.568536
19 -5.198472
20 -3.405137
21 -0.596813
22 1.747980
23 -6.341053
[72 rows x 1 columns]
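To view the average profiles side by side, one option is to take the mean and unstack the day group. A small follow-up on the same dummy df, giving one column per day type with hours as rows:

profiles = df.groupby(['day_group', 'hour'])['a'].mean().unstack('day_group')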