Does the model get retrained entirely using .fit() in sklearn and TensorFlow - python

I am trying to use machine learning in Python. Right now I am using sklearn and TensorFlow. I was wondering what to do if I have a model that needs updating when new data arrives. For example, I have financial data. I built an LSTM model with TensorFlow and trained it. But new data comes in every day, and I don't want to retrain the model every day. Is there a way to just update the model rather than retrain it from scratch?
In sklearn, the documentation for the .fit() method (using DecisionTreeClassifier as an example) says that it
Build a decision tree classifier from the training set (X, y).
So it seems like it will retrain the entire model from scratch.
In TensorFlow, the .fit() method (using Sequential as an example) says:
Trains the model for a fixed number of epochs (iterations on a dataset).
So it seems like it does update the model instead of retraining. But I am not sure if my understanding is correct. I would be grateful for some clarification. And if sklearn indeed retrains the entire model using .fit(), is there a function that would just update the model instead of retraining from scratch?

When you say update and not retrain, do you mean just updating the weights using the new data?
If so, you can adopt two approaches based on transfer learning.
Fine-tune: initialize a model with the weights from the old model and retrain it on the new data.
Add a new layer: add a new layer and update the weights in this layer only, while freezing the remaining weights in the network.
For more details, read the TensorFlow guide on transfer learning.
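A minimal Keras sketch of both approaches, assuming the old model was saved to a hypothetical old_model.h5 file and that randomly generated data stands in for the new data:
import tensorflow as tf

# hypothetical new data: adjust shapes to your own problem
X_new = tf.random.normal(shape=[64, 100])
y_new = tf.random.normal(shape=[64, 1])

# assume old_model.h5 holds the previously trained model
old_model = tf.keras.models.load_model('old_model.h5')

# Approach 1 (fine-tune): keep all weights trainable and continue
# training on the new data, usually with a smaller learning rate
old_model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss='mse')
old_model.fit(X_new, y_new, epochs=5)

# Approach 2 (new layer): freeze the old weights and train only a
# freshly added layer stacked on top
old_model.trainable = False
new_model = tf.keras.Sequential([old_model, tf.keras.layers.Dense(1)])
new_model.compile(optimizer='adam', loss='mse')
new_model.fit(X_new, y_new, epochs=5)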

In TensorFlow, there is a method called train_on_batch() that you can call on your model.
Say you defined your model as a Sequential model and initially trained it on the existing initial_dataset using the fit method.
Now you have new data in hand; call it X_new_train, y_new_train.
You can update the existing model using train_on_batch().
An example would be:
import tensorflow as tf

no_of_samples_in_one_batch = 32  # however many new samples you have

# generate some X_new_train (one batch)
X_new_train = tf.random.normal(shape=[no_of_samples_in_one_batch, 100])
# generate corresponding y_new_train
y_new_train = tf.constant([[1.0]] * no_of_samples_in_one_batch)
# performs a single gradient update on this one batch
model.train_on_batch(X_new_train, y_new_train)
Note that the exact value of no_of_samples_in_one_batch (the batch size) is not so important here: whatever number of samples you pass in will be treated as one batch.
Now, coming to sklearn, I am not sure whether all machine learning models can learn incrementally (i.e., update their weights from new examples), but there is a list of models that do support incremental learning:
https://scikit-learn.org/0.15/modules/scaling_strategies.html#incremental-learning

In sklearn, the .fit() method retrains from scratch on the dataset, i.e., when you call .fit() on any dataset, any information pertaining to previous training is discarded. So, assuming you have new data coming in every day, you will have to retrain each time for most sklearn algorithms.
However, if you would like to update a sklearn model instead of training it from scratch, some sklearn algorithms (like SGDClassifier) provide a method called partial_fit(), which can be used to incrementally update the weights of an existing model.
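A minimal sketch with SGDClassifier on randomly generated stand-in data; note that the first partial_fit() call must be given every class the model will ever see:
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier()

# initial training data (hypothetical shapes)
X_old = np.random.randn(1000, 20)
y_old = np.random.randint(0, 2, size=1000)
# the first call must declare all classes the model will ever see
clf.partial_fit(X_old, y_old, classes=np.array([0, 1]))

# later, when a new day's data arrives, update the model in place
X_new = np.random.randn(50, 20)
y_new = np.random.randint(0, 2, size=50)
clf.partial_fit(X_new, y_new)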
As for TensorFlow, the .fit() method trains the model without discarding any information from previous training runs: it continues from the model's current weights. Hence each call to .fit() updates the existing model rather than retraining it from scratch.
Tip: you can use TF's SavedModel format to save the best model, then reload and continue training it as more data keeps flowing in.
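For example, a rough sketch of that save/reload/retrain loop, assuming model is the trained Keras model and X_new_day, y_new_day are hypothetical names for the day's fresh data:
import tensorflow as tf

# after the initial training run, save the model to disk
model.save('best_model.h5')

# the next day, when fresh data arrives, reload and continue training
model = tf.keras.models.load_model('best_model.h5')
model.fit(X_new_day, y_new_day, epochs=1)  # picks up from the saved weights
model.save('best_model.h5')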

Related

Shall I update my training data in real-time?

I tried image classification using a trained model and it works well, but some images are not recognized correctly; in those cases I have to get the image and its label from users. So my doubt is: is it possible to add new data to an already trained model?
No, during inference time you use the weights of the trained model for predictions, which basically means that at the time your model is deployed, the capabilities of your image classifier are fixed by the weights. If you wish to improve your model, you have to retrain it with the new data. However, there is another paradigm of learning called "online learning", where the model continuously learns and modifies its weights. In this case your weights are not fixed and your model continuously updates them with each training input. However, as far as I know this is not usually recommended for CNNs, because the backward pass of gradients is computationally intensive and your inference will be slow because of it.
No model can predict with 100% accuracy; if it did, it would be an ideal model. If you want to add more data to your trained model, you have to retrain the model with the new data. Having more data is always a good idea: it allows the “data to speak for itself,” instead of relying on assumptions and weak correlations, and more data generally results in better, more accurate models. So if you want better accuracy, you have to train your model with more data. Without retraining, you can't add data to your trained model.

Using sklearn models as input to deep learning models

Keras gives me a way to use my deep learning models with sklearn (the Keras wrapper for sklearn), but I need the same thing the other way around.
I want to create an ensemble of several already trained sklearn models by feeding their output to the input layer of a deep learning classifier (to be trained).
Can I achieve that?
You should probably explore stacking: http://blog.kaggle.com/2016/12/27/a-kagglers-guide-to-model-stacking-in-practice/
What happens is that when we are doing cross-validation, we can combine the out-of-fold predictions to regenerate the training data.
For example, if you have 1000 data points and you use 5 folds to evaluate, you will have 5 different validation sets of length 200. Combining the predictions obtained on these sets essentially gives you a new feature of length 1000.
Similarly, by training more models, you can get 3-4 features corresponding to the predictions of 3-4 models.
Finally, you can stack these features with any model of your choice; you can even use a deep neural network.
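A minimal sketch of generating those out-of-fold features with scikit-learn's cross_val_predict (the base models and the randomly generated data are placeholders):
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# hypothetical training data: 1000 points, binary labels
X = np.random.randn(1000, 10)
y = np.random.randint(0, 2, size=1000)

base_models = [RandomForestClassifier(), LogisticRegression(max_iter=1000)]

# each column holds one base model's out-of-fold predictions for all 1000 rows
stacked_features = np.column_stack([
    cross_val_predict(m, X, y, cv=5, method='predict_proba')[:, 1]
    for m in base_models
])

# stacked_features has shape (1000, 2) and can now be fed to the input
# layer of a second-level model, e.g. a Keras neural network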

Saving then reusing CNN models - preserving initializations

I wish to repeat a series of image classification experiments by reusing the same CNN with identical hyperparameters, especially the initializations. So if I save a model after I have instantiated it, but before I train it, does that also save the initializations? Can I then reload it later, train it with a different dataset and labels, and have this new model start with the same hyperparameters and initializations as the first model? I am currently using fastai, which is a library/set of APIs built on PyTorch, but I think everyone would be helped by a more general explanation that covers all CNNs using any library.
I expect an answer that says, "After this point in the workflow of creating a CNN, the model is initialized; if you save it at this point, you can reload it later and use the same hyperparameters and initializations in your next model."
You can save the learner as soon as it is created.
Example:
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.save('init')
later on:
learn.load('init')
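For a more library-agnostic illustration, here is a minimal Keras sketch of the same idea (the architecture and file name are hypothetical); in PyTorch the equivalent would be torch.save(model.state_dict(), path) right after construction. Note that this saves only the weights; hyperparameters live in your model-building code.
import tensorflow as tf

# build the architecture (a toy example)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(2, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# save the freshly initialized, untrained weights
model.save_weights('init_weights.h5')

# later: reload the same initial weights before each new experiment,
# so every run starts from an identical initialization
model.load_weights('init_weights.h5')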

Are there some pre-trained LSTM, RNN or ANN models for time-series prediction?

I am trying to solve a time series prediction problem. I tried with ANN and LSTM, played around a lot with the various parameters, but all I could get was 8% better than the persistence prediction.
So I was wondering: since you can save models in Keras, are there any pre-trained models (LSTM, RNN, or any other ANN) for time series prediction? If so, how do I get them? Are there any in Keras?
I mean, it would be super useful if there were a website containing pre-trained models, so that people wouldn't have to spend so much time training them.
Similarly, another question:
Is it possible to do the following?
1. Suppose I have a dataset now and I use it to train my model. Suppose that in a month I will have access to another dataset (corresponding to the same or similar data, possibly from the future, but not exclusively). Will it be possible to continue training the model then? It is not the same thing as training it in batches. When you do it in batches you have all the data in one moment.
Is it possible? And how?
I'll answer your last questions first.
Will it be possible to continue training the model then? It is not the same thing as training it in batches. When you do it in batches you have all the data in one moment. Is it possible? And how?
Yes, it is possible. In general, it's called transfer learning. But keep in mind that if two datasets represent very different populations, the network will soon "forget" what it learned on the first run and will optimize to the second one. To do this, you simply start training from a loaded state instead of random initialization and save the model afterwards. It is also recommended to use a smaller learning rate on the second run in order to adapt it gradually to the new data.
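As a rough Keras sketch of that recipe (the file names, the learning rate, and X_month2, y_month2 are illustrative assumptions):
from tensorflow import keras

# load the model trained on the first dataset
model = keras.models.load_model('model_month1.h5')

# recompile with a smaller learning rate to adapt gradually
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4), loss='mse')

# continue training from the loaded state on the new data
# (X_month2, y_month2 are hypothetical names for the new dataset)
model.fit(X_month2, y_month2, epochs=10)
model.save('model_month2.h5')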
are there any pre-trained models (LSTM, RNN, or any other ANN) for time series prediction? If so, how do I get them? Are there any in Keras?
I haven't found exactly a pre-trained model, but a quick search gave me several active GitHub projects that you can just run and get a result for yourself: Time Series Prediction with Machine Learning (LSTM, GRU implementation in tensorflow), LSTM Neural Network for Time Series Prediction (keras and tensorflow), Time series predictions with Keras (keras and theano), Neural-Network-with-Financial-Time-Series-Data (keras and tensorflow). See also this post.
Now you can use BERT or related variants and here you can find all the pre-trained models: https://huggingface.co/transformers/pretrained_models.html
And it is possible to pre-train and fine-tune RNN, and you can refer to this paper: TimeNet: Pre-trained deep recurrent neural network for time series classification.

Test neural network using Keras Python

I have trained and tested a feed-forward neural network using Keras in Python on a dataset. But each time, in order to recognize a new test set with external data (external since the data are not included in the dataset), I have to retrain the feed-forward neural network. For instance, each time I have to do:
model.fit(data, output_data)
prediction = model.predict_classes(new_test)
print("Prediction:", prediction)
Obtaining correct output:
Prediction: [1 2 3 4 5 1 2 3 1 2 3]
Acc: 100%
Now I would like to test a new test set, namely "new_test2.csv", without retraining, just using what the network has learned. I am also thinking about a sort of real-time recognition.
How should I do that?
Thanks in advance
With a well-trained model you can make predictions on any new data. You don't have to retrain anything because (hopefully) your model can generalize its learning to unseen data and will achieve comparable accuracy.
Just feed in the data from "new_test2.csv" to your predict function:
prediction=model.predict_classes(content_of_new_test2)
Obviously, you need data of the same type and classes. In addition, you need to apply to the new data the same transformations you applied to the data you trained your model on.
If you want real-time predictions, you could set up an API with Flask:
http://flask.pocoo.org/
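For instance, a bare-bones sketch of such an endpoint (not production-ready; the route, file name, and JSON format are assumptions):
import numpy as np
from flask import Flask, request, jsonify
from keras.models import load_model

app = Flask(__name__)
model = load_model('my_model.h5')  # load the trained model once at startup

@app.route('/predict', methods=['POST'])
def predict():
    # expects JSON like {"data": [[0.1, 0.2, ...], ...]}
    X = np.array(request.get_json()['data'])
    prediction = model.predict(X)
    return jsonify(prediction.tolist())

if __name__ == '__main__':
    app.run()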
Regarding terminology and the correct method of training:
You train on a training set (e.g. 70% of all the data you have).
You validate your training with a validation set (e.g. 15% of your data). You use the accuracy and loss values from your training to tune your hyperparameters.
You then evaluate your model's final performance by predicting on data from your test set (again, 15% of your data). That has to be data your network hasn't seen before at all and that you haven't used to optimize training parameters.
After that you can predict on production data.
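A minimal sketch of that 70/15/15 split with scikit-learn (the data here is randomly generated purely for illustration):
import numpy as np
from sklearn.model_selection import train_test_split

# hypothetical data
X = np.random.randn(1000, 10)
y = np.random.randint(0, 3, size=1000)

# first split off 70% for training
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.3)
# then split the remaining 30% evenly into validation (15%) and test (15%)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5)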
If you want to save your trained model use this (taken from Keras documentation):
from keras.models import load_model
model.save('my_model.h5') # creates a HDF5 file 'my_model.h5'
del model # deletes the existing model
# returns a compiled model
# identical to the previous one
model = load_model('my_model.h5')
https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model
In your training file, you can save the model using
model.save('my_model.h5')
Later, whenever you want to test, you can load it with
from keras.models import load_model
model = load_model('my_model.h5')
Then you can call model.predict and whatnot.
