I tried image classification using a trained model and it works well, but some images are not classified correctly; in those cases I have to get the image and its label from the users. So my question is: is it possible to add new data to an already trained model?
No, during inference you use the weights of the trained model for predictions, which basically means that at the time your model is deployed, the capabilities of your image classifier are fixed by those weights. If you wish to improve your model, you have to retrain it with the new data. However, there is another paradigm of learning called "online learning", where the model continuously updates its weights with each new training input. In that case the weights are not fixed. However, as far as I know this is not usually recommended for CNNs, because the backward pass of gradients is computationally intensive, and running it in deployment will slow your system down.
No model can predict with 100% accuracy; one that did would be an ideal model. And if you want to add more data to your trained model, you have to retrain the model with the new data. Having more data is always a good idea: it lets the data "speak for itself" instead of relying on assumptions and weak correlations, and more data generally yields more accurate models. So if you want better accuracy, you have to train your model with more data. Without retraining, you can't add data to your trained model.
I am trying to use machine learning in Python. Right now I am using sklearn and TensorFlow. I was wondering what to do if I have a model that needs updating when new data comes. For example, I have financial data. I built an LSTM model with TensorFlow and trained it. But new data comes in every day, and I don't want to retrain the model every day. Is there a way just to update the model and not retrain it from scratch?
In sklearn, the documentation for the .fit() method (using DecisionTreeClassifier as an example) says that it will
Build a decision tree classifier from the training set (X, y).
So it seems like it will retrain the entire model from scratch.
In TensorFlow, the .fit() method (using Sequential as an example) says that it
Trains the model for a fixed number of epochs (iterations on a dataset).
So it seems like it does update the model instead of retraining. But I am not sure if my understanding is correct. I would be grateful for some clarification. And if sklearn indeed retrains the entire model using .fit(), is there a function that would just update the model instead of retraining from scratch?
When you say update and not train, do you just mean updating the weights using the new data?
If so, you can adopt two approaches with transfer learning.
Finetune: Initialise a model with the weights from the old model and retrain it on the new data.
Add a new layer: Add a new layer and update the weights in this layer only while freezing the remaining weights in the network.
For more details, read the TensorFlow guide on transfer learning.
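A minimal sketch of the second approach in Keras; the path "old_model.h5", the new Dense head, and its output size are placeholders for your actual setup, not anything from the original question:

import tensorflow as tf

# Load the previously trained model (hypothetical path).
old_model = tf.keras.models.load_model("old_model.h5")

# Freeze all existing weights so only the new layer is trained.
old_model.trainable = False

# Stack a fresh, trainable layer on top (placeholder output size).
new_model = tf.keras.Sequential([
    old_model,
    tf.keras.layers.Dense(10, activation="softmax"),
])
new_model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy")
# new_model.fit(X_new, y_new, epochs=5)  # trains only the new head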
In TensorFlow, there is a method called train_on_batch() that you can call on your model.
Say you defined your model as Sequential, and you initially trained it on the existing initial_dataset using the fit method.
Now you have new data in hand; call it X_new_train, y_new_train.
You can update the existing model using train_on_batch().
An example would be:
import tensorflow as tf

no_of_samples_in_one_batch = 32  # however many new samples you have
#generate some X_new_train (one batch of 100-dimensional samples)
X_new_train = tf.random.normal(shape=[no_of_samples_in_one_batch, 100])
#generate corresponding y_new_train
y_new_train = tf.constant([[1.0]] * no_of_samples_in_one_batch)
#take a single gradient step on this batch, updating the existing weights
model.train_on_batch(X_new_train, y_new_train)
Note that no_of_samples_in_one_batch (also called batch size) is not so important here: whatever number of samples you have in your new data will be treated as one batch!
Now, coming to sklearn, I am not sure whether all machine learning models can incrementally learn (update weights from new examples). There is a list of models that support incremental learning:
https://scikit-learn.org/0.15/modules/scaling_strategies.html#incremental-learning
In sklearn, the .fit() method retrains on the dataset, i.e., when you call .fit() on any dataset, any information pertaining to previous training is discarded. So assuming you have new data coming in every day, you will have to retrain from scratch each time for most sklearn algorithms.
However, if you would like to update sklearn models instead of training from scratch, some sklearn algorithms (like SGDClassifier) provide a method called partial_fit(), which can be used to update the weights of an existing model.
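A minimal sketch of the partial_fit() call pattern with SGDClassifier; the random data here is just a stand-in. Note that the first partial_fit() call must declare every class the model will ever see:

import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier()

# Day 1: first call, so the full list of classes must be supplied.
X_day1, y_day1 = np.random.randn(100, 10), np.random.randint(0, 2, 100)
clf.partial_fit(X_day1, y_day1, classes=[0, 1])

# Day 2: new data arrives; the same weights are updated in place.
X_day2, y_day2 = np.random.randn(50, 10), np.random.randint(0, 2, 50)
clf.partial_fit(X_day2, y_day2)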
In TensorFlow, by contrast, the .fit() method does not discard information from previous training: each call to .fit() continues training from the current weights rather than reinitializing the model.
Tip: you can use TF's SavedModel format to save the best model, then reload and re-train it as more data keeps flowing in.
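A rough sketch of that save/reload cycle, with a toy model and dummy data standing in for yours (the path is illustrative):

import numpy as np
import tensorflow as tf

# A stand-in for your already-trained model.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.randn(100, 10), np.random.randn(100, 1), verbose=0)

# Persist the trained state (SavedModel directory; illustrative path).
model.save("my_model")

# Later, when more data has flowed in: reload and continue from the saved weights.
model = tf.keras.models.load_model("my_model")
model.fit(np.random.randn(20, 10), np.random.randn(20, 1), verbose=0)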
I'm a TensorFlow newbie and I'm trying to train a one-class model for object detection. In particular, I'm trying to recognize an arrow like the following:
I need very fast recognition, so I started wondering whether a pre-trained model might already contain such a shape.
Unfortunately I didn't find anything similar, and therefore I started training my own arrow detector using faster_rcnn_inception_v2_coco_2018_01_28 as the model.
I'm using its pipeline config and its fine_tune_checkpoint as well; is this right considering that I have to train a completely different object?
The result is a model with very good accuracy but very low speed. I need to increase the framerate, and I don't yet understand whether a lower training loss means a higher object recognition speed or not.
Any suggestions on how I could speed up the detection?
I'm using its pipeline config and its fine_tune_checkpoint as well; is this right considering that I have to train a completely different object?
Yes! Every time you want to change the output of a deep NN, you should start from a pretrained model. Training a model from scratch can take several weeks, and you will never be able to generate enough data on your own. Taking a pretrained model and fine-tuning it is the way to go.
I don't yet understand whether a lower training loss means a higher object recognition speed or not.
No. Training loss just tells you how well your model performs with respect to the training set.
The issue you are having is a classic speed vs. accuracy trade-off. I encourage you to take a look at this table and find a model which is fast enough for you (i.e. has the lowest run-time) but has decent accuracy. I would first check SSD here.
The result is a model with very good accuracy but very low speed.
How many FPS does your algorithm achieve?
Since you already have a prepared dataset, I would suggest using Tiny-Yolo, which achieves 244 FPS on the COCO dataset: https://pjreddie.com/darknet/yolo/
Preparing a training dataset for Tiny-Yolo is very easy if you use this repository.
And
I don't yet understand whether a lower training loss means a higher object recognition speed
Training loss has nothing to do with speed.
I'm new to keras seq2seq LSTM models. I have a working machine translation model and English-to-Arabic training data. I just trained the model on Google Colab and made some predictions. As you can see in the image, when I test the model on a text from the training data, it predicts well, but when I change ONE word, the prediction goes completely wrong!
I want my model to UNDERSTAND the full meaning of the text even when adding/deleting one word. How can I solve this problem?
LSTM wrong predictions when adding/deleting one word
In the image, the first test of each section is the text from the training data, which predicts well. The second test is the same text with one word added or deleted.
UPDATE: Whenever I add a validation split, the val_loss is always increasing and the model isn't learning much! What's going wrong?
This is the classic overfitting problem. Your model has only learned to translate your training data by memorizing each sample instead of understanding the concept behind it.
For this reason, always split your data into a training set and a validation set. The validation data must not appear in the training set! This way you can check whether your model is actually learning something.
There are two main solutions for this:
Like m33n said, more training data (there is no data like more data)
Implement more regularization techniques, like Dropout (see the sketch below)
Also, the project seems very ambitious. Translating sentences is not an easy task at all, and companies like Google or DeepL have built very complex models trained with lots and lots of data over many years. Are you sure you have the necessary resources to accomplish this?
I am trying to solve a time series prediction problem. I tried with ANN and LSTM, played around a lot with the various parameters, but all I could get was 8% better than the persistence prediction.
So I was wondering: since you can save models in Keras, are there any pre-trained models (LSTM, RNN, or any other ANN) for time series prediction? If so, how do I get them? Are there any in Keras?
I mean, it would be super useful if there were a website containing pre-trained models, so that people wouldn't have to spend too much time training them.
Similarly, another question:
Is it possible to do the following?
Suppose I have a dataset now and I use it to train my model. Suppose that in a month I will have access to another dataset (corresponding to the same or similar data, possibly but not exclusively from the future). Will it be possible to continue training the model then? It is not the same thing as training in batches; when you train in batches you have all the data at one moment.
Is it possible? And how?
I'll answer your last questions first.
Will it be possible to continue training the model then? It is not the same thing as training in batches; when you train in batches you have all the data at one moment. Is it possible? And how?
Yes, it is possible. In general, it's called transfer learning. But keep in mind that if two datasets represent very different populations, the network will soon "forget" what it learned on the first run and will optimize to the second one. To do this, you simply start training from a loaded state instead of random initialization and save the model afterwards. It is also recommended to use a smaller learning rate on the second run in order to adapt it gradually to the new data.
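A rough sketch of that workflow in Keras, assuming the first model was saved earlier; the paths and the dummy data are illustrative, not from the original question:

import numpy as np
import tensorflow as tf

# Next month's data (dummy placeholders here).
X_new = np.random.randn(200, 30, 1)   # e.g. 200 windows of 30 timesteps
y_new = np.random.randn(200, 1)

# Start from the saved state instead of random initialization.
model = tf.keras.models.load_model("model_run1.h5")  # illustrative path

# Re-compile with a smaller learning rate so old knowledge is adapted
# gradually rather than overwritten.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss="mse")
model.fit(X_new, y_new, epochs=10)

model.save("model_run2.h5")  # checkpoint for the next round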
are there any pre-trained models (LSTM, RNN, or any other ANN) for time series prediction? If so, how do I get them? Are there any in Keras?
I haven't found exactly a pre-trained model, but a quick search gave me several active GitHub projects that you can just run and get a result for yourself: Time Series Prediction with Machine Learning (LSTM, GRU implementation in tensorflow), LSTM Neural Network for Time Series Prediction (keras and tensorflow), Time series predictions with Keras (keras and theano), Neural-Network-with-Financial-Time-Series-Data (keras and tensorflow). See also this post.
Now you can use BERT or related variants and here you can find all the pre-trained models: https://huggingface.co/transformers/pretrained_models.html
And it is possible to pre-train and fine-tune RNNs; you can refer to this paper: TimeNet: Pre-trained deep recurrent neural network for time series classification.
Let me make my question clearer:
I am using Python's gensim.models.Word2Vec to train a word embedding model. Based on my understanding, the model training is in essence a machine learning problem: training a neural network via a prediction task. For example, if I select parameters to train a skip-gram model, then the model is trained by predicting context words from the target word. Once the model is well-trained, word vectors are simply read off the model.
If my understanding is correct, then since this is in fact a machine learning process whose training goal is to perform well on the prediction task, there should be a loss function during training, and the model should make that loss as low as possible. So, how can I find out the model's loss value for a given set of parameters? Or are there any other metrics I can use to understand the model itself?
Hope I have made my question clear. In short, I don't want to evaluate the model by its outputs as in the Google test set http://word2vec.googlecode.com/svn/trunk/questions-words.txt, but I want to understand the model itself as a simple machine learning problem during its training process. Would this be possible?
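For what it's worth, gensim can report the loss you are asking about if you tell it to track it during training; a minimal sketch (gensim 4.x parameter names, toy corpus):

from gensim.models import Word2Vec

sentences = [["this", "is", "a", "sentence"],
             ["another", "tiny", "example", "sentence"]]  # toy corpus

# compute_loss=True makes gensim accumulate the training loss internally;
# sg=1 selects the skip-gram architecture.
model = Word2Vec(sentences, vector_size=50, sg=1, min_count=1,
                 epochs=5, compute_loss=True)

# Cumulative training loss for this run / this parameter setting.
print(model.get_latest_training_loss())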