Predict / Forecast exchange rate based on financial indicators - Python

This is the dataset. I want to create a time series to forecast the last row (EURUSD).
Is it possible to forecast the last variable based on the other financial indicators present in the dataset?

You can use multiple linear regression for the prediction.
With your independent variables (interest rate, etc.) you can model the dependent variable (EURUSD in your case).
For further instructions on how to write it, you can visit these links:
Basic One
More intuitive
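
As a minimal sketch of that idea with scikit-learn, assuming the data sits in a CSV with one column per indicator plus an EURUSD column (the file and column names here are placeholders):

import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Hypothetical file and column names; replace them with the ones in your dataset
df = pd.read_csv("financial_indicators.csv")
X = df.drop(columns=["EURUSD"])   # interest rate and the other indicators
y = df["EURUSD"]                  # the exchange rate to predict

# shuffle=False keeps the rows in time order, so the test set is the most recent period
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False)

model = LinearRegression().fit(X_train, y_train)
print(model.score(X_test, y_test))   # R^2 on the held-out period
eurusd_forecast = model.predict(X_test)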

Related

Backcast time series using Python

I have time-series data from 2016 to 2021. How could I backcast to get the data from 2010 to 2015 using ARIMA in Python?
Could you give me some sample Python code?
Thank you very much
The only possibility I see here is to simply reverse your time series. That means the last observation becomes the first, the second-to-last becomes the second, and so on. You then have a series running from 2021 to 2016.
You can do that by:
df = df.reindex(index=df.index[::-1])
You can then train an ARIMA model on this data and predict the "next" six years from 2015 back to 2010. Remember that the first prediction will be for 2015-12-31, so you need to reverse the predictions again to get the series running from 2010 to 2015 (see the sketch below).
Keep in mind that the ARIMA predictions will be very, very bad, since your forecasts will be based on forecasts and so on. ARIMA is not made for predictions over such long time frames, so the results will probably be useless anyway. It is very likely that the predictions will become a straight line after 30 or 40 steps. And you can only use the autoregressive part in such a case, since the order of the moving-average model limits the number of steps you can forecast into the future.
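A minimal sketch of that reverse-fit-reverse procedure with statsmodels, using made-up yearly values and an arbitrary AR(1) order:

import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Made-up yearly values for 2016..2021; replace with your own series
y = pd.Series([10.0, 12.5, 11.8, 13.2, 14.0, 15.1], index=range(2016, 2022))

# Reverse the series so 2021 comes first and 2016 last
y_reversed = pd.Series(y.values[::-1])

# Purely autoregressive model; the (1, 0, 0) order is an arbitrary placeholder
model = ARIMA(y_reversed, order=(1, 0, 0)).fit()

# Six "future" steps on the reversed series correspond to 2015 back to 2010
backcast = model.forecast(steps=6)

# Flip the forecasts back so they run chronologically from 2010 to 2015
backcast = pd.Series(backcast.values[::-1], index=range(2010, 2016))
print(backcast)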
Forecasting from a reversed time series would be the solution if you had more data.
However, only having 6 observations is problematic. Creating a forecasting (or backcasting) model requires using some of the observations to train the model and others to validate it. If you train with 4 observations then you only have 2 observations for validation. Is a model good if it forecasts those two well or did you just get lucky? Is it bad if it forecasts one observation well and the other poorly? If you increase the validation set to 3 observations, you get more confidence on whether the model is good or bad but creating the model (with only 3 observations) gets even harder than before.
Like others have stated, regardless of what machine learning model you choose, the results are likely to be poor with so little data. If you had monthly data it might be more fruitful.
If you can't get monthly data, then since you are backcasting into the past, it might be better to estimate the values manually based on some related variables that you have data for (if any). E.g. if your time series is about a company's sales, then maybe you could estimate based on the company's annual revenue (or company size, or something else) if you can get the historical data for that variable. This is not precise, but it can still be more precise than what ARIMA or similar methods would give with the data you have.

Time series forecasting for several events

I am new to time series and I have a problem. I have a dataset of 3 columns: time, category, and the frequency of that category. The time runs from 2016 to the end of 2017. I need to forecast the frequency of each category during 2018. Dataset:
I need to use NeuralProphet to forecast these frequencies, and I only know how to forecast a single ds, y series. So please advise on how to do this task using NeuralProphet.
Thanks
As mentioned in the NeuralProphet docs, here https://neuralprophet.com/model-overview/:
"If you have many series that you expect to produce forecasts for, you need to do this one at a time."
In your case you have multiple time series, i.e. a separate time series corresponding to each primary_industry. So if you are going to apply NeuralProphet to this dataset, you will have to fit the model on each industry separately.
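A minimal sketch of that per-category loop, assuming the data sits in a CSV with the time, category and frequency columns from the question (the file name, daily frequency and 365-step horizon are placeholders):

import pandas as pd
from neuralprophet import NeuralProphet

# Hypothetical file: one row per (time, category, frequency) observation
df = pd.read_csv("events.csv")
df["time"] = pd.to_datetime(df["time"])

forecasts = {}
for cat, group in df.groupby("category"):
    # NeuralProphet expects exactly two columns named ds (date) and y (value)
    data = group.rename(columns={"time": "ds", "frequency": "y"})[["ds", "y"]]

    m = NeuralProphet()
    m.fit(data, freq="D")  # assumes daily data; change freq to match your granularity

    # Forecast all of 2018 (365 daily steps in this sketch)
    future = m.make_future_dataframe(data, periods=365)
    forecasts[cat] = m.predict(future)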

What should I choose as a target variable for price prediction?

I have this dataset for agriculture raw materials from 1990 to 2017, and I am trying to make some price predictions for the sake of learning:
Here are all the columns:
Now I want to split the dataset into training and test sets so I can apply some machine learning models for prediction. However, it is not clear to me what my target variable y should be, considering that each of the columns has its own price and they are all independent of each other. How should I split this dataset if I want to make price predictions?
As I can see from your data, there are a couple of raw material prices available for prediction. Considering that these raw material prices are independent of each other, you can create a dataset with just one dependent variable (for example Copra_Price) and the rest of the independent variables, removing the other price-related variables from the data. Once you have this dataset, you can easily split it into train and test sets with Copra_Price as the target. This can be repeated for each of the price variables.
One more consideration: if none of the price variables has anomalies in it, then you could use any one of them to split the data, since a random split on one of them would most probably amount to a random split across the group.
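A minimal sketch of that setup with pandas and scikit-learn, assuming the price columns share a _Price suffix and Copra_Price is the chosen target (file and column names are placeholders):

import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical file and column names; replace with the ones in your dataset
df = pd.read_csv("agriculture_raw_materials.csv")

target = "Copra_Price"                                    # the price you want to predict
other_prices = [c for c in df.columns
                if c.endswith("_Price") and c != target]  # drop the competing targets

X = df.drop(columns=[target] + other_prices)
y = df[target]

# shuffle=False keeps the 1990-2017 rows in chronological order
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False)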

Predict number of rows using Machine Learning

I have around 4 years of bank data from different branches. I am trying to predict the number of rows at the daily and hourly level. I have issue_datetime (year, month, day, hour) as important features. I applied different regression techniques (linear, decision trees, random forest, XGBoost) using GraphLab but could not get good accuracy.
I was also thinking of setting a threshold based on past data, e.g. taking the mean of the daily or monthly counts after removing outliers and using that as the threshold.
What is the best approach?
Since you have 1-D time series data, it should be relatively easy to graph your data and look for interesting patterns.
Once you establish that there are some non-stationary aspects to your data, the class of models you probably want to check out first is auto-regressive models, possibly with seasonal additions. ARIMA models are pretty standard for time-series data. http://www.seanabu.com/2016/03/22/time-series-seasonal-ARIMA-model-in-python/
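A minimal sketch of that workflow with pandas and statsmodels, assuming the raw records sit in a CSV with an issue_datetime column (the file name and the ARIMA orders are placeholders to tune):

import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical file: one row per bank record, with an issue_datetime column
df = pd.read_csv("bank_records.csv", parse_dates=["issue_datetime"])

# Aggregate to a daily count of rows; this count is the series to forecast
daily = df.set_index("issue_datetime").resample("D").size()

# Quick visual check for trend and seasonality before choosing a model
daily.plot(title="Daily row count")
plt.show()

# Seasonal ARIMA with a weekly cycle; the orders are placeholders to tune
model = SARIMAX(daily, order=(1, 1, 1), seasonal_order=(1, 0, 1, 7)).fit(disp=False)
forecast = model.forecast(steps=30)  # next 30 days
print(forecast)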

How to analyse and predict (machine learning) a time series dataset using scikit-learn for Python

I have a dataset like this.
I need to analyse and predict the status column. These are just 2 entries from the training dataset. In this dataset there is a heart-rate pattern (collected at 1-second intervals, 10 numbers altogether); it's a time series array (correct me if I'm wrong). I just need to know the best way to analyse this data and get a prediction from it. I'm using scikit-learn for my data mining and machine learning.
What I want to know is: what is the best way to analyse this time series data? Should I use a vector-based approach or something else? If you could give me example code, that would help me understand it.
Feed in each point of the heart rate time series as a separate column, along with a separate column (feature) for each of the other data points. Do feature normalization (subtract the mean, divide by the standard deviation) for each column over the entire dataset, and feed the result into a classifier.
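A minimal sketch of that setup with scikit-learn, using made-up rows and logistic regression as the classifier (the extra temperature feature is only an illustration of "other data points"):

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Made-up training data: each row is one record, with the 10 one-per-second
# heart-rate readings flattened into 10 columns plus any other numeric features
X = np.array([
    [72, 75, 74, 76, 78, 77, 79, 80, 78, 77, 36.6],     # hr_1..hr_10, temperature
    [88, 90, 92, 95, 97, 99, 101, 103, 104, 106, 37.9],
])
y = np.array([0, 1])  # the status column to predict

# StandardScaler subtracts each column's mean and divides by its standard deviation,
# then the classifier is trained on the normalised features
clf = make_pipeline(StandardScaler(), LogisticRegression())
clf.fit(X, y)
print(clf.predict(X))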
