How to use Prophet's make_future_dataframe with multiple regressors? - python

make_future_dataframe seems to produce a dataframe containing only date (ds) values, which results in ValueError: Regressor 'var' missing from dataframe when attempting to generate forecasts with the code below.
m = Prophet()
m.add_country_holidays(country_name='US')
m.add_regressor('var')
m.fit(df)
forecasts = m.predict(m.make_future_dataframe(periods=7))
Looking through the Python docs, there doesn't seem to be any mention of how to handle this in Prophet. Is my only option to write additional code that lags all regressors by the forecast horizon (e.g. use var at t-7 to produce a 7-day daily forecast)?

The issue here is that m.make_future_dataframe creates a future dataframe whose only column is the ds date column. To predict with a model that uses regressors, the future dataframe also needs a column for each regressor.
Using my original training data, which I called regression_data, I solved this by predicting the values of the regressor variables and filling them into a future_w_regressors dataframe, which is a merge of future and regression_data.
Assume you have a trained model, model, ready.
# List of regressors
regressors = ['Total Minutes','Sent Emails','Banner Active']
# My data is weekly so I project out 1 year (52 weeks), this is what I want to forecast
future = model.make_future_dataframe(52, freq='W')
At this point, if you run model.predict(future), you will get the error you've been seeing. We need to incorporate the regressors: I merge regression_data with future so that the past observations are filled in. As you can see, the future observations are still empty (towards the end of the table):
# regression_data is the dataframe I used to train the model (include all covariates)
# merge the data you used to train the model
future_w_regressors = regression_data[regressors+['ds']].merge(future, how='outer', on='ds')
future_w_regressors
Total Minutes Sent Emails Banner Active ds
0 7.129552 9.241493e-03 0.0 2018-01-07
1 7.157242 8.629305e-14 0.0 2018-01-14
2 7.155367 8.629305e-14 0.0 2018-01-21
3 7.164352 8.629305e-14 0.0 2018-01-28
4 7.165526 8.629305e-14 0.0 2018-02-04
... ... ... ... ...
283 NaN NaN NaN 2023-06-11
284 NaN NaN NaN 2023-06-18
285 NaN NaN NaN 2023-06-25
286 NaN NaN NaN 2023-07-02
287 NaN NaN NaN 2023-07-09
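Not from the original answer, but the merge step above can be sanity-checked with pandas' indicator=True flag; a minimal sketch with small hypothetical stand-in frames:

```python
import pandas as pd

# Hypothetical stand-ins for regression_data (3 past weeks) and future (5 weeks)
regression_data = pd.DataFrame({
    'ds': pd.date_range('2018-01-07', periods=3, freq='W'),
    'Total Minutes': [7.1, 7.2, 7.3],
})
future = pd.DataFrame({'ds': pd.date_range('2018-01-07', periods=5, freq='W')})

merged = regression_data.merge(future, how='outer', on='ds', indicator=True)
# rows marked 'right_only' are the future dates that still need regressor values
print(merged['_merge'].value_counts())
```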
Solution 1: Predict Regressors
For the next step I create a dataset containing only the rows with empty regressor values, loop through each regressor, train a naive Prophet model on it, predict its values for the future dates, fill those predictions into the empty-regressors dataset, and place the values back into future_w_regressors.
# Get the segment for which we have no regressor values
empty_future = future_w_regressors[future_w_regressors[regressors[0]].isnull()]
only_future = empty_future[['ds']]
# Forecast each regressor with its own (naive) Prophet model
for regressor in regressors:
    # Prep a new training dataset
    train = regression_data[['ds', regressor]]
    train.columns = ['ds', 'y']  # rename the columns so they can be submitted to Prophet
    # Train a model for this regressor
    rmodel = Prophet()
    rmodel.weekly_seasonality = False  # this is specific to my case
    rmodel.fit(train)
    regressor_predictions = rmodel.predict(only_future)
    # Replace the empty values with the predicted values from the regressor model
    empty_future[regressor] = regressor_predictions['yhat'].values
# Fill in the values for all regressors in the future_w_regressors dataset
future_w_regressors.loc[future_w_regressors[regressors[0]].isnull(), regressors] = empty_future[regressors].values
Now the future_w_regressors table no longer has missing values
future_w_regressors
Total Minutes Sent Emails Banner Active ds
0 7.129552 9.241493e-03 0.000000 2018-01-07
1 7.157242 8.629305e-14 0.000000 2018-01-14
2 7.155367 8.629305e-14 0.000000 2018-01-21
3 7.164352 8.629305e-14 0.000000 2018-01-28
4 7.165526 8.629305e-14 0.000000 2018-02-04
... ... ... ... ...
283 7.161023 -1.114906e-02 0.548577 2023-06-11
284 7.156832 -1.138025e-02 0.404318 2023-06-18
285 7.150829 -5.642398e-03 0.465311 2023-06-25
286 7.146200 -2.989316e-04 0.699624 2023-07-02
287 7.145258 1.568782e-03 0.962070 2023-07-09
And I can run the predict command to get my forecasts which now extend into 2023 (original data ended in 2022):
model.predict(future_w_regressors)
ds trend yhat_lower yhat_upper trend_lower trend_upper Banner Active Banner Active_lower Banner Active_upper Sent Emails Sent Emails_lower Sent Emails_upper Total Minutes Total Minutes_lower Total Minutes_upper additive_terms additive_terms_lower additive_terms_upper extra_regressors_additive extra_regressors_additive_lower extra_regressors_additive_upper yearly yearly_lower yearly_upper multiplicative_terms multiplicative_terms_lower multiplicative_terms_upper yhat
0 2018-01-07 2.118724 2.159304 2.373065 2.118724 2.118724 0.000000 0.000000 0.000000 3.681765e-04 3.681765e-04 3.681765e-04 0.076736 0.076736 0.076736 0.152302 0.152302 0.152302 0.077104 0.077104 0.077104 0.075198 0.075198 0.075198 0.0 0.0 0.0 2.271026
1 2018-01-14 2.119545 2.109899 2.327498 2.119545 2.119545 0.000000 0.000000 0.000000 3.437872e-15 3.437872e-15 3.437872e-15 0.077034 0.077034 0.077034 0.098945 0.098945 0.098945 0.077034 0.077034 0.077034 0.021911 0.021911 0.021911 0.0 0.0 0.0 2.218490
2 2018-01-21 2.120366 2.074524 2.293829 2.120366 2.120366 0.000000 0.000000 0.000000 3.437872e-15 3.437872e-15 3.437872e-15 0.077014 0.077014 0.077014 0.064139 0.064139 0.064139 0.077014 0.077014 0.077014 -0.012874 -0.012874 -0.012874 0.0 0.0 0.0 2.184506
3 2018-01-28 2.121187 2.069461 2.279815 2.121187 2.121187 0.000000 0.000000 0.000000 3.437872e-15 3.437872e-15 3.437872e-15 0.077110 0.077110 0.077110 0.050180 0.050180 0.050180 0.077110 0.077110 0.077110 -0.026931 -0.026931 -0.026931 0.0 0.0 0.0 2.171367
4 2018-02-04 2.122009 2.063122 2.271638 2.122009 2.122009 0.000000 0.000000 0.000000 3.437872e-15 3.437872e-15 3.437872e-15 0.077123 0.077123 0.077123 0.046624 0.046624 0.046624 0.077123 0.077123 0.077123 -0.030498 -0.030498 -0.030498 0.0 0.0 0.0 2.168633
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
283 2023-06-11 2.062645 2.022276 2.238241 2.045284 2.078576 0.025237 0.025237 0.025237 -4.441732e-04 -4.441732e-04 -4.441732e-04 0.077074 0.077074 0.077074 0.070976 0.070976 0.070976 0.101867 0.101867 0.101867 -0.030891 -0.030891 -0.030891 0.0 0.0 0.0 2.133621
284 2023-06-18 2.061211 1.975744 2.199376 2.043279 2.077973 0.018600 0.018600 0.018600 -4.533835e-04 -4.533835e-04 -4.533835e-04 0.077029 0.077029 0.077029 0.025293 0.025293 0.025293 0.095176 0.095176 0.095176 -0.069883 -0.069883 -0.069883 0.0 0.0 0.0 2.086504
285 2023-06-25 2.059778 1.951075 2.162531 2.041192 2.077091 0.021406 0.021406 0.021406 -2.247903e-04 -2.247903e-04 -2.247903e-04 0.076965 0.076965 0.076965 0.002630 0.002630 0.002630 0.098146 0.098146 0.098146 -0.095516 -0.095516 -0.095516 0.0 0.0 0.0 2.062408
286 2023-07-02 2.058344 1.953027 2.177666 2.039228 2.076373 0.032185 0.032185 0.032185 -1.190929e-05 -1.190929e-05 -1.190929e-05 0.076915 0.076915 0.076915 0.006746 0.006746 0.006746 0.109088 0.109088 0.109088 -0.102342 -0.102342 -0.102342 0.0 0.0 0.0 2.065090
287 2023-07-09 2.056911 1.987989 2.206830 2.037272 2.075110 0.044259 0.044259 0.044259 6.249949e-05 6.249949e-05 6.249949e-05 0.076905 0.076905 0.076905 0.039813 0.039813 0.039813 0.121226 0.121226 0.121226 -0.081414 -0.081414 -0.081414 0.0 0.0 0.0 2.096724
288 rows × 28 columns
Note that I trained the model for each regressor naively. However, you could optimize prediction for those independent variables if you wanted to.
Solution 2: Use last year's regressor values
Alternatively, you may not want to compound the uncertainty of the regressor forecasts onto your main forecast, and just want an idea of how the forecast might change for different regressor values. In that case you can simply copy the regressor values from the last year into the missing portion of future_w_regressors. This has the added benefit of making it easy to simulate drops or increases relative to current regressor levels:
from datetime import timedelta
last_date = regression_data.iloc[-1]['ds']
one_year_ago = last_date - timedelta(days=365) # works with data at any scale
last_year_of_regressors = regression_data.loc[regression_data['ds']>one_year_ago, regressors]
# If you want to simulate a 10% drop in levels compared to this year
last_year_of_regressors = last_year_of_regressors * 0.9
missing = future_w_regressors[regressors[0]].isnull()
future_w_regressors.loc[missing, regressors] = last_year_of_regressors.iloc[:missing.sum()].values
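A self-contained sketch (toy data, not the original frames) of the same copy-forward idea, ending with a sanity check worth running before model.predict: no NaNs should remain in the regressor columns.

```python
import numpy as np
import pandas as pd

regressors = ['Total Minutes']
# 4 observed weeks followed by 4 future weeks with missing regressor values
future_w_regressors = pd.DataFrame({
    'ds': pd.date_range('2022-01-02', periods=8, freq='W'),
    'Total Minutes': [7.1, 7.2, 7.3, 7.4] + [np.nan] * 4,
})

missing = future_w_regressors[regressors[0]].isnull()
# copy the most recent observed values forward (stand-in for "last year")
recent = future_w_regressors.loc[~missing, regressors].tail(missing.sum())
future_w_regressors.loc[missing, regressors] = recent.values

assert not future_w_regressors[regressors].isnull().any().any()
```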

Related

Algorithm for predict(start=start_date, end=end_date) on unique same named weather station

Story that pertains to a new design solution
The goal is to use weather data to run an ARIMA model fit on each group of like-named 'stations' with their associated precipitation data, then execute a 30-day forward forecast. I am looking to process one set of same-named stations, then the next unique set of same-named stations, and so on.
The algorithm to add
How do I write an algorithm that runs an ARIMA model for each UNIQUE 'station' (perhaps grouping same-named stations into unique groups and running the ARIMA model on each group), and then fits a 30-day forward forecast? The ARIMA(2,1,1) order terms are working values from auto.arima().
How do I write a grouping algorithm for same-named 'stations' before running the ARIMA model, fit, and forecast? Or what other approach would let me process one set of like-named stations and then move on to the next unique set?
Working code executes but needs a broader algorithm
The code was working, but on the last run predict(start=start_date, end=end_date) raised a KeyError. I removed NAs, so that may fix predict(start, end):
wd.weather_data = wd.weather_data[wd.weather_data['date'].notna()]
forecast_models = [50000]
n = 1
df_all_stations = data_prcp.drop(['level_0', 'index', 'prcp'], axis=1)
wd.weather_data.sort_values("date", axis=0, ascending=True, inplace=True)
for station_name in wd.weather_data['station']:
    start_date = pd.to_datetime(wd.weather_data['date'])
    number_of_days = 31
    end_date = pd.to_datetime(start_date) + pd.DateOffset(days=30)
    model = statsmodels.tsa.arima_model.ARIMA(wd.weather_data['prcp'], order=(2,1,1))
    model_fit = model.fit()
    forecast = model_fit.predict(start=start_date, end=end_date)
    forecast_models.append(forecast)
Data Source
station date tavg tmin tmax prcp snow
0 Anchorage, AK 2018-01-01 -4.166667 -8.033333 -0.30 0.3 80.0
35328 Grand Forks, ND 2018-01-01 -14.900000 -23.300000 -6.70 0.0 0.0
86016 Key West, FL 2018-01-01 20.700000 16.100000 25.60 0.0 0.0
59904 Wilmington, NC 2018-01-01 -2.500000 -7.100000 0.00 0.0 0.0
66048 State College, PA 2018-01-01 -13.500000 -17.000000 -10.00 4.5 0.0
... ... ... ... ... ... ... ...
151850 Kansas City, MO 2022-03-30 9.550000 3.700000 16.55 21.1 0.0
151889 Springfield, MO 2022-03-30 12.400000 4.500000 17.10 48.9 0.0
151890 St. Louis, MO 2022-03-30 14.800000 8.000000 17.60 24.9 0.0
151891 State College, PA 2022-03-30 0.400000 -5.200000 6.20 0.2 0.0
151899 Wilmington, NC 2022-03-30 14.400000 6.200000 20.20 0.0 0.0
wdir wspd pres
0 143.0 5.766667 995.133333
35328 172.0 33.800000 1019.200000
86016 4.0 13.000000 1019.900000
59904 200.0 21.600000 1017.000000
66048 243.0 12.700000 1015.200000
... ... ... ...
151850 294.5 24.400000 998.000000
151889 227.0 19.700000 997.000000
151890 204.0 20.300000 996.400000
151891 129.0 10.800000 1020.400000
151899 154.0 16.400000 1021.900000
Error
KeyError: 'The `start` argument could not be matched to a location related to the index of the data.'
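The grouping part of the question can be sketched independently of ARIMA: group the rows by station, sort each group by date, fit one model per group, and forecast 30 days from that group's last date. In this hypothetical sketch a naive last-value forecast stands in for the ARIMA(2,1,1) fit; you would replace the marked line with a statsmodels ARIMA fit/predict on grp['prcp']:

```python
import pandas as pd

# Toy stand-in for wd.weather_data: two stations with daily precipitation
weather_data = pd.DataFrame({
    'station': ['Anchorage, AK'] * 5 + ['Key West, FL'] * 5,
    'date': list(pd.date_range('2022-03-26', periods=5, freq='D')) * 2,
    'prcp': [0.3, 0.0, 1.2, 0.0, 0.5, 0.0, 0.0, 2.1, 0.0, 0.0],
})

forecasts = {}
for station, grp in weather_data.groupby('station'):
    grp = grp.sort_values('date').set_index('date')
    # --- replace this naive forecast with your ARIMA(2,1,1) fit/predict ---
    future_index = pd.date_range(grp.index[-1] + pd.DateOffset(days=1), periods=30)
    forecasts[station] = pd.Series(grp['prcp'].iloc[-1], index=future_index)

print(sorted(forecasts))  # one 30-day forecast per unique station
```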

Exponential Smoothing with alpha and beta greater than one

I have the following time series
year value
2001-01-01 433.0
2002-01-01 445.0
2003-01-01 406.0
2004-01-01 416.0
2005-01-01 432.0
2006-01-01 458.0
2007-01-01 418.0
2008-01-01 392.0
2009-01-01 464.0
2010-01-01 434.0
2012-01-01 435.0
2013-01-01 437.0
2014-01-01 465.0
2015-01-01 442.0
2016-01-01 456.0
2017-01-01 448.0
2018-01-01 433.0
2019-01-01 399.0
that I want to fit with an Exponential Smoothing model. I define my model the following way:
model = ExponentialSmoothing(dataframe, missing='drop', trend='mul', seasonal_periods=5,
seasonal='add',initialization_method="heuristic")
model = model.fit(optimized=True, method="basinhopping")
where I let the algorithm optimize the values of smoothing_level=$\alpha$, smoothing_trend=$\beta$, smoothing_seasonal=$\gamma$ and damping_trend=$\phi$.
However, when I print the results for this specific case, I get: $\alpha=1.49$, $\beta=1.41$, $\gamma=0.0$ and $\phi=0.0$.
Could someone explain to me what's happening here?
Are these values of $\alpha$ and $\beta$ greater than 1 acceptable?
I think you're misinterpreting the results. We can run your model as follows:
data = [
433.0, 445.0, 406.0, 416.0, 432.0, 458.0,
418.0, 392.0, 464.0, 434.0, 435.0, 437.0,
465.0, 442.0, 456.0, 448.0, 433.0, 399.0]
model = sm.tsa.ExponentialSmoothing(data, missing='drop', trend='mul', seasonal_periods=5,
seasonal='add',initialization_method="heuristic")
res = model.fit(optimized=True, method="basinhopping")
print(res.params['smoothing_level'])
print(res.params['smoothing_trend'])
which gives me:
1.4901161193847656e-08
1.4873988732462211e-08
Notice the e-08 part - the first parameter isn't equal to 1.49, it's equal to 0.0000000149.
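The misreading is easy to reproduce: Python prints small floats in scientific notation, and fixed-point formatting shows the fitted parameter is effectively zero (and comfortably inside the valid [0, 1] range):

```python
alpha = 1.4901161193847656e-08  # the value reported by res.params['smoothing_level']

print(alpha)            # 1.4901161193847656e-08
print(f"{alpha:.10f}")  # 0.0000000149
print(0 <= alpha <= 1)  # True
```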

The sum() function does not work well when adding zeros in pandas with python

I try to sum all rows of the dataframe with the following code:
df = pd.read_csv('dataset.csv', parse_dates =["date"], index_col ="date")
df.fillna(0, inplace=True)
df.head(10)
Result
date CALIFORNIA-CENTRAL CALIFORNIA-SOUTH Ciudad Guzman La Barca Zamora Ensenada
2014-01-01 0.0 34.778435 7.157517 20.216307 5.405574 10.788031
2014-02-01 0.0 43.540938 9.275431 26.198327 7.005088 13.980217
2014-03-01 0.0 36.032682 10.508932 29.682333 7.936665 15.839388
2014-04-01 0.0 51.948925 7.894117 22.296823 5.961877 11.898257
2014-05-01 0.0 24.522135 12.399953 35.023496 9.364822 18.689594
2014-06-01 0.0 66.884840 5.440356 15.366209 4.108722 8.199873
2014-07-01 0.0 43.974770 9.204158 25.997018 6.951260 13.872793
2014-08-01 0.0 51.296811 8.001250 22.599420 6.042787 12.059732
2014-09-01 0.0 30.358093 11.441187 32.315475 8.640733 17.244512
2014-10-01 0.0 45.377776 8.973664 25.345991 6.777184 13.525385
Then I make a resample weekly beginning by Monday and apply sum() function:
weekly = df.resample('W-MON').sum()
weekly.head(10)
Result:
date CALIFORNIA-CENTRAL CALIFORNIA-SOUTH Ciudad Guzman La Barca Zamora Ensenada
2014-01-05 0.161442 164.688463 35.074387 99.067123 26.489245 52.865205
2014-01-12 360.218088 198.894117 23.145886 65.375235 17.480477 34.886197
2014-01-19 11.395859 330.487743 58.833493 166.174392 44.432845 88.675667
2014-01-26 0.000000 295.610943 66.435441 187.645989 50.174067 100.133559
2014-02-02 0.000000 315.052247 63.241508 178.624769 47.761912 95.319564
2014-02-09 214.260106 227.217402 42.471614 119.960488 32.075855 64.014534
2014-02-16 141.873044 255.491621 49.718734 140.429880 37.549101 74.937619
2014-02-23 0.000000 330.671191 60.675535 171.377213 45.824011 91.452049
2014-03-02 0.000000 269.364174 70.747417 199.825104 53.430602 106.632702
2014-03-09 207.773431 238.926975 41.613565 117.536941 31.427831 62.721257
As you can see, the sum in the 'CALIFORNIA-CENTRAL' column is wrong; it should be 0 in the first rows.
My question is: what is the problem? Why can't the sum() function add zeros?
I checked my dataset.csv file very carefully and it is correct; I rebooted the kernel, cleared my cache, and checked the column type (float).
What am I doing wrong?
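Not part of the original question, but a self-contained experiment (with made-up data) illustrates the most likely explanation: resample('W-MON').sum() aggregates every source row that falls into each weekly bin, including rows that never appear in df.head(10), so zeros in the preview do not mean the column is zero everywhere:

```python
import pandas as pd

idx = pd.date_range('2014-01-01', periods=30, freq='D')
df = pd.DataFrame({'CALIFORNIA-CENTRAL': 0.0}, index=idx)
df.loc['2014-01-08', 'CALIFORNIA-CENTRAL'] = 360.218088  # nonzero row beyond the preview

print(df.head(5))                     # the preview shows only zeros
weekly = df.resample('W-MON').sum()
print(weekly[weekly.iloc[:, 0] > 0])  # yet one weekly total is nonzero
```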

How to find the k value for K-Means clustering using scikit in python

I have a Pandas DataFrame which looks like this:
1 2 3 4 5 6 7 8 9 10 11 ... 467 468 469 470 471 472 473 474 475 476 477
1 1.000000 0.014085 0.134615 0.053030 0.109756 0.092105 0.095238 0.058824 0.104167 0.043478 0.135135 ... 0.045752 0.084112 0.098039 0.060870 0.000000 0.127273 0.043716 0.084615 0.068323 0.122449 0.172414
2 0.014085 1.000000 0.026667 0.039735 0.038095 0.074468 0.134021 0.084337 0.092593 0.184211 0.030303 ... 0.092025 0.107438 0.120690 0.021898 0.098361 0.176471 0.105820 0.127660 0.085714 0.132743 0.100000
3 0.134615 0.026667 1.000000 0.058824 0.054945 0.011494 0.089888 0.040541 0.078947 0.040541 0.141026 ... 0.050955 0.052174 0.063636 0.016000 0.000000 0.098361 0.048128 0.057971 0.072727 0.074766 0.068182
4 0.053030 0.039735 0.058824 1.000000 0.113924 0.056604 0.059880 0.039735 0.094170 0.039735 0.076433 ... 0.113636 0.104396 0.070652 0.072539 0.015152 0.042553 0.108434 0.081340 0.070833 0.059783 0.083333
5 0.109756 0.038095 0.054945 0.113924 1.000000 0.237113 0.102564 0.048077 0.120000 0.101010 0.090090 ... 0.064865 0.077465 0.111940 0.152174 0.011765 0.076087 0.070423 0.126582 0.082902 0.139535 0.145695
.. ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
473 0.043716 0.105820 0.048128 0.108434 0.070423 0.062802 0.150754 0.123656 0.333333 0.100000 0.110553 ... 0.178571 0.258706 0.257576 0.074689 0.033333 0.137143 1.000000 0.235556 0.202335 0.187500 0.147059
474 0.084615 0.127660 0.057971 0.081340 0.126582 0.118421 0.209459 0.074324 0.294737 0.135714 0.285714 ... 0.165094 0.215569 0.326667 0.071795 0.054264 0.221311 0.235556 1.000000 0.269608 0.287582 0.225275
475 0.068323 0.085714 0.072727 0.070833 0.082902 0.069149 0.337580 0.117647 0.259091 0.172840 0.286624 ... 0.139344 0.187817 0.204188 0.048035 0.050314 0.111111 0.202335 0.269608 1.000000 0.280899 0.175926
476 0.122449 0.132743 0.074766 0.059783 0.139535 0.130081 0.345455 0.132743 0.343750 0.361702 0.166667 ... 0.173913 0.312977 0.302326 0.059524 0.071429 0.229167 0.187500 0.287582 0.280899 1.000000 0.246753
477 0.172414 0.100000 0.068182 0.083333 0.145695 0.122449 0.200000 0.076923 0.248705 0.157895 0.144828 ... 0.157895 0.222222 0.220126 0.133333 0.065041 0.142857 0.147059 0.225275 0.175926 0.246753 1.000000
This is an example, but the number of row and columns may vary for other DataFrames.
I need to cluster the values using K-Means in scikit-learn, but I have no idea how to find the correct number of clusters for my DataFrame.
Any suggestions? Also, as I am new to Python and this is the first time I use scikit-learn, any easy explanation of how to perform the K-Means clustering would be much appreciated.
A very common approach is the Elbow method (https://www.geeksforgeeks.org/elbow-method-for-optimal-value-of-k-in-kmeans/), where you fit the KMeans model to your data for a range of cluster counts and use the .inertia_ attribute to plot inertia against the number of clusters.
# Creating models for k from 2 to 14
inertia = []
for k in range(2, 15):
    model = KMeans(n_clusters=k, random_state=12).fit(df)
    inertia.append(model.inertia_)

# Plotting the inertia of the models
k_values = range(2, 15)
plt.plot(k_values, inertia, 'o-')
plt.xlabel('Number of Clusters')
plt.ylabel('Inertia')
You should see something similar to this, indicating that, for this specific data frame I have created for this exercise, the ideal K value would be 3, as this is where you see the "elbow" - and the following k values do not bring much change in inertia:
We usually use the Elbow Method to find the value of "K" in K-means.
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

K = range(1, 11)  # candidate cluster counts
inertias = []
for k in K:
    clf = KMeans(n_clusters=k)
    clf.fit(X)  # X is your data
    inertias.append(clf.inertia_)
plt.plot(inertias)
Now from the plot, you have to find the breakpoint. For the provided image, from points 1-3 the inertia changes drastically, and the rate of change slows from point 4 on. That means 4 is the elbow point, i.e., k=4.
For a detailed explanation, you may visit,
Elbow Method for K Means
GFG
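Beyond eyeballing the plot, the elbow can also be picked programmatically, for example as the k with the largest second difference of inertia (where the drop slows down the most). A hedged sketch on synthetic blobs; real data is rarely this clean:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# synthetic data with a known cluster structure
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.6, random_state=42)

ks = list(range(1, 9))
inertias = [KMeans(n_clusters=k, n_init=10, random_state=42).fit(X).inertia_
            for k in ks]

# curvature at each interior k; the maximum marks the elbow
second_diff = np.diff(inertias, 2)
elbow = ks[int(np.argmax(second_diff)) + 1]
print(elbow)
```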

pd.DataFrame.ewm().cov() just get the last cov

mat is a pd.DataFrame of daily returns of several stocks.
pd.DataFrame.ewm().cov() computes the covariance of these stocks for each day. When the shape of mat is 250 × 2972 it takes too much time and returns 250 covariance matrices, but I just want the last one. How can I do this more easily and save some time?
mat.head()
SecuCode 000001 000002 000004 000005
TradingDay
2016-08-31 0.00211 0.09547 0.0 0.00802
2016-09-01 -0.00422 -0.06163 0.0 -0.01746
2016-09-02 0.00000 0.01398 0.0 -0.00680
2016-09-05 -0.00318 -0.01398 0.0 0.00408
2016-09-06 -0.00106 -0.00513 0.0 0.01081
5 rows × 2972 columns
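One plausible approach (a sketch, not pandas' own API) is to evaluate the exponential weights only at the final date and form a single weighted covariance. Assuming adjust=True and pandas' default bias correction (bias=False), this should match the last block of mat.ewm(halflife=...).cov():

```python
import numpy as np
import pandas as pd

def last_ewm_cov(mat: pd.DataFrame, halflife: float) -> pd.DataFrame:
    """EW covariance at the final row only, skipping the per-day
    matrices that DataFrame.ewm().cov() builds along the way."""
    alpha = 1.0 - 0.5 ** (1.0 / halflife)
    n = len(mat)
    w = (1.0 - alpha) ** np.arange(n - 1, -1, -1)  # newest row weighted highest
    w /= w.sum()
    x = mat.to_numpy(dtype=float)
    xc = x - w @ x                  # demean with the weighted mean
    cov = (xc * w[:, None]).T @ xc  # weighted outer products
    cov /= 1.0 - (w ** 2).sum()     # bias correction (pandas bias=False default)
    return pd.DataFrame(cov, index=mat.columns, columns=mat.columns)

# small demo standing in for the 250 x 2972 returns matrix
rng = np.random.default_rng(0)
mat = pd.DataFrame(rng.standard_normal((250, 4)), columns=list('ABCD'))
print(last_ewm_cov(mat, halflife=60.0).shape)  # (4, 4)
```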
