Multi-station time series flood forecasting with an LSTM - python

I need to make flood forecasts for 11 different stations. Each station requires a different forecast, i.e. the flood level (water inflow in cusecs) differs from station to station. Can a single LSTM model handle all stations?
The parameters are the same for all stations: upstream discharge, rain, slope.
Moreover, the flood period is only one month, or 20 days, of a year. If I want to use 10 years of data, is there a way to use only 2 months of data each year: 1 month with flood and 1 month with no flood?
But to my knowledge, there should not be any break in the time steps for an LSTM.
Please guide me as best you can; I am new to machine learning.
I would be highly thankful.
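For reference, one possible structure (a minimal, hedged sketch assuming the Keras API; the window length, feature count, and layer sizes are illustrative assumptions) is to pool all stations into one model and treat each station's flood season as its own contiguous sequence, with a station identifier as an extra input feature. Because every training window is internally contiguous, the gaps between years never appear inside a sample:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

TIMESTEPS, N_FEATURES = 24, 4     # e.g. discharge, rain, slope, station ID
model = Sequential([
    LSTM(64, input_shape=(TIMESTEPS, N_FEATURES)),
    Dense(1),                     # next-step inflow (cusecs)
])
model.compile(optimizer='adam', loss='mse')
# X has shape (n_windows, TIMESTEPS, N_FEATURES); no window crosses a gap
# model.fit(X, y, epochs=50, batch_size=32)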

Related

Train a dataset related to the oil & gas industry using LSTM to find the optimum number of tests that should be conducted in a month for each oil well

I have a simulated dataset from the oil and gas industry. It contains data for 365 days for 36 wells; the flow rate of each well is recorded for all 365 days. My aim is to find the optimum number of tests that need to be carried out on each well in a month, with a minimum number of errors, possibly by using an LSTM.
[Screenshots of the dataset omitted]

Calculating a weighted daily average for each DOY in xarray across a decade

I have a few years of sea level height data with variables of both absolute height and sea level anomaly. I want to calculate an improved anomaly dataset that takes into account seasonal changes in absolute height. Towards that goal I'm trying to calculate the mean height at every point on the grid for each day of the year. Ideally I'd like to take into account the previous two weeks and following two weeks with the closer days carrying more weight in the final mean. I think a normal distribution of weights would be ideal. There is a nice example in the xarray documentation of how to calculate seasonal averages, but I've yet to find a suitable approach for this weighted mean of each day.
My initial ds looks like this: [dataset repr omitted]
I'm able to calculate this daily average via:
ds_daily_avg = ds.groupby('time.dayofyear').mean(dim='time')
The output of ds_daily_avg: [output omitted]
but there is too much variation in the daily averages because I only have a decade of data. I've thought of just doing a rolling average of ~14 days, and while good enough, this doesn't properly do the weighting I'm hoping to implement:
ds_daily_avg.sla.rolling(dayofyear=14).mean()
Any advice for properly doing this weighted mean through time?
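For illustration, a minimal sketch of one way to do this with xarray's rolling construct (an assumption: a sigma of 5 days over a centered 29-day window; ds_daily_avg is the groupby result from above):
import numpy as np
import xarray as xr

# Gaussian weights over a centered 29-day window (±14 days, sigma = 5 days)
offsets = np.arange(-14, 15)
weights = xr.DataArray(np.exp(-0.5 * (offsets / 5.0) ** 2), dims='window')
weights = weights / weights.sum()

# construct() exposes each rolling window as a new 'window' dimension,
# which is then collapsed with a weighted sum
windowed = ds_daily_avg.sla.rolling(dayofyear=29, center=True).construct('window')
smoothed = (windowed * weights).sum('window')
# caveat: the window does not wrap around the year boundary, so the first
# and last ~14 days of the year are smoothed with partial windows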

Python rolling forecast update lags

I would like to implement an OLS regression with sklearn.linear_model.LinearRegression.
I have a time series with 100 data points and the corresponding features.
My overall goal is to forecast the next 6 weeks. My x_train contains multiple explanatory variables for my output. These include, for example, the date, the month and, among others, my lag variables. I checked the significance of the lags with the autocorrelation plot, and it told me lags 1-12 are significant, so 12 of the explanatory variables are these 12 lags.
I have created 12 lag variables:
lag_variables = pd.concat([data.shift(i) for i in range(1, 13)], axis=1)
These lag variables are 12 columns in my x_train dataframe, from which I choose my features.
However, if I want to forecast 6 data points, I need to update my lags.
Because, for example, for week 5 I don't have lags 1-4; I don't know them at the time of my forecast. For week 1 I know lags 1-12, for week 2 I know lags 2-12, and so on.
Python now gives me an error, since my x_train contains NaN values where I don't know all the lags.
So my idea was to forecast each week individually: first I forecast week 1. Then, after I have forecast week 1, I have the real value for week 1, and I can update my lags for week 2 with it. I am basically doing a rolling forecast with lags.
I have tried the following:
hist_x_train = [x for x in x_train]
hist_x_test = [x for x in x_test]
hist_y_train = [y for y in y_train]
hist_y_test = [y for y in y_test]
predictions = []
for t in range(len(x_test)):
    model = best_params  # grid search before the loop chooses the best parameters
    model.fit(hist_x_train, hist_y_train)
    y_pred = model.predict([hist_x_test[t]])[0]  # predict expects a 2-D array
    predictions.append(y_pred)
    hist_x_train.append(hist_x_test[t])
    hist_y_train.append(y_pred)
    print(y_test[t])  # this is my actual real value for that period; I don't
                      # know how to update this in x_train, so it shows up as a lag
Technically, the rolling forecast works like this if I don't have any lags in my x_train, but I don't know how to deal with the lags.
Can anyone please help me?
Thanks a lot!
Updating the training data with the predictions results in cascading errors, which makes the later predictions almost constant or zero after some time; this should be avoided. Instead, forecast the next 6 weeks in one shot, or use multiple models (6 models here) to predict the next 6 weeks: arrange your y labels (y_1, y_2, ..., y_6) such that y_1 is lagged by 1 week, y_2 by 2 weeks, and so on, and train them accordingly.
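For illustration, a minimal sketch of the multi-model (direct) variant, assuming x_train is the feature DataFrame and y_train the aligned target Series from the question; the model choice is illustrative:
from sklearn.linear_model import LinearRegression

models = {}
for h in range(1, 7):                        # horizons: 1..6 weeks ahead
    y_h = y_train.shift(-h)                  # target h weeks into the future
    valid = y_h.notna()                      # drop rows whose future target is unknown
    models[h] = LinearRegression().fit(x_train.loc[valid], y_h[valid])

last_row = x_train.iloc[[-1]]                # last row whose lags are all known
forecast = [models[h].predict(last_row)[0] for h in range(1, 7)]
No prediction is ever fed back as a lag, so the errors cannot cascade.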

fbprophet yearly seasonality values too high

I have recently started using the fbprophet package in Python. I have monthly data for the last 2 years and am forecasting the next 9 months.
Since I have monthly data, I have only included yearly seasonality (Prophet(yearly_seasonality = True)).
When I plot the components, the trend values seem fine; however, the yearly seasonality values are too high, and I don't understand why.
The seasonality shows a 300 increase or a -200 decrease; however, in the actual graph this is not happening in any month in the past. What can I do to correct it?
The code used is as follows:
from fbprophet import Prophet

m = Prophet(yearly_seasonality = True)
m.fit(df_bu_country1)
future = m.make_future_dataframe(periods=9, freq='M')
forecast = m.predict(future)
m.plot(forecast)
m.plot_components(forecast)
There is no seasonality at all in your data. For there to be yearly seasonality, you should have a pattern that repeats year after year, but the shape of your time series from 10/2015 to 10/2016 is completely different from the shape between 10/2016 and 10/2017. So forcing a yearly seasonality is going to give you strange results; you should switch it off (i.e. just use Prophet's default settings).
There is an inconsistency in the seasonality of your data; there seems to be a little yearly seasonality between 2017-04 and 2018-10. The first answer is absolutely right, but in case you feel there is some seasonality, you can reduce its impact by altering the Fourier order.
https://facebook.github.io/prophet/docs/seasonality,_holiday_effects,_and_regressors.html#fourier-order-for-seasonalities
This page shows how to do so; the default Fourier order is 10, and reducing the value changes its effect.
Try this, hope it helps you.
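For example, Prophet accepts an integer number of Fourier terms in place of True:
m = Prophet(yearly_seasonality=3)  # lower than the default order of 10, damping the seasonal component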

Holt-Winters for multi-seasonal forecasting in Python

My data: I have two seasonal patterns in my hourly data: daily and weekly. For example, each day in my dataset has roughly the same shape based on the hour of the day. However, certain days like Saturday and Sunday exhibit increases in my data, and also slightly different hourly shapes.
(Using Holt-Winters, as I discovered here: https://gist.github.com/andrequeiroz/5888967)
I ran the algorithm using 24 as my periods per season and forecasting out 7 seasons (1 week). I noticed that it over-forecasts the weekdays and under-forecasts the weekend, since it estimates Saturday's curve from Friday's curve rather than a combination of Friday's curve and Saturday(t-1)'s curve. What would be a good way to include a secondary period in my data, i.e. both 24 and 7? Is there a different algorithm I should be using?
One obvious way to account for the different shapes would be to use just one sort of period, but give it a periodicity of 7*24 = 168 hours, so you would be forecasting the entire week as a single shape.
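For reference, a minimal sketch of that single-long-period approach using statsmodels rather than the linked gist (assumptions: series is an hourly pandas Series, and additive trend and seasonal components fit the data):
from statsmodels.tsa.holtwinters import ExponentialSmoothing

fit = ExponentialSmoothing(series, trend='add', seasonal='add',
                           seasonal_periods=7 * 24).fit()
forecast = fit.forecast(7 * 24)   # one week (168 hours) ahead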
Have you tried linear regression, in which the predicted value is a linear trend plus a contribution from dummy variables? The simplest example to explain would be a trend plus only a daily contribution. Then you would have
Y = X*t + c + A*D1 + B*D2 + ... + F*D6 (+ noise)
Here you use linear regression to find the best-fitting values of X, c, and A...F. t is the time, counting up 0, 1, 2, 3, ... indefinitely, so the fitted value of X gives you a trend. c is a constant value, so it moves all the predicted Ys up or down. D1 is set to 1 on Tuesdays and 0 otherwise, D2 is set to 1 on Wednesdays and 0 otherwise, ... D6 is set to 1 on Sundays and 0 otherwise, so the A...F terms give contributions for days other than Monday. We don't fit a term for Mondays because if we did we could not distinguish it from the c term: if you added 1 to c and subtracted 1 from each of A...F, the predictions would be unchanged.
Hopefully you can now see that we could add 23 terms to account for a shape for the 24 hours of each day, and a total of 46 terms to account for one shape for the 24 hours of each weekday and a different one for the 24 hours of each weekend day.
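For concreteness, a minimal sketch of the simpler trend-plus-dummies version (assumptions: series is an hourly pandas Series with a DatetimeIndex, and sklearn stands in for a statistical package):
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

t = np.arange(len(series))                                                   # trend term
hours = pd.get_dummies(series.index.hour, prefix='h', drop_first=True)       # 23 hour terms
days = pd.get_dummies(series.index.dayofweek, prefix='d', drop_first=True)   # 6 day terms
X = pd.concat([pd.Series(t, name='t'), hours, days], axis=1)

# the fitted coefficients correspond to X (trend), c (intercept), and A...F
model = LinearRegression().fit(X, series.values)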
You would be best to look for a statistical package to handle this for you, such as the free R package (http://www.r-project.org/). It does have a bit of a learning curve, but you can probably find books or articles that take you through using it for just this sort of prediction.
Whatever you do, I would keep on checking forecasting methods against your historical data - people have found that the most accurate forecasting methods in practice are often surprisingly simple.
