Using sklearn models as input to deep learning models - python

Keras gives me a way to use my deep learning models with sklearn (the Keras wrapper for sklearn), but I need the same thing the other way around.
I want to create an ensemble of several already-trained sklearn models by feeding their outputs to the input layer of a deep learning classifier (to be trained).
Can I achieve that?

You should probably explore stacking: http://blog.kaggle.com/2016/12/27/a-kagglers-guide-to-model-stacking-in-practice/
The idea is that when we are doing cross-validation, we can combine the out-of-fold predictions to regenerate the training data.
For example, if you have 1000 data points and you use 5 folds to evaluate, you will have 5 different validation sets of length 200. Combining the predictions obtained on these sets essentially gives you a vector of length 1000, i.e., a new feature.
Similarly, by training more models you can get 3-4 such features, one per model's predictions.
Finally, you can stack these features with any model of your choice; you can even use a deep neural network.
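To make this concrete, here is a minimal sketch of feeding out-of-fold sklearn predictions into a small Keras classifier. The data and the choice of base models are made up for illustration; they are not from the original question.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC
from tensorflow import keras

# stand-in data: 1000 points, as in the example above
X, y = make_classification(n_samples=1000, random_state=0)

base_models = [LogisticRegression(max_iter=1000),
               RandomForestClassifier(),
               SVC(probability=True)]

# each column is one model's out-of-fold predicted probability of class 1,
# i.e. one new feature of length 1000
meta_features = np.column_stack([
    cross_val_predict(m, X, y, cv=5, method='predict_proba')[:, 1]
    for m in base_models
])

# refit the base models on all the data so they can featurize new samples later
for m in base_models:
    m.fit(X, y)

# a small Keras network as the second-level (stacking) classifier
stacker = keras.Sequential([
    keras.layers.Dense(16, activation='relu', input_shape=(len(base_models),)),
    keras.layers.Dense(1, activation='sigmoid'),
])
stacker.compile(optimizer='adam', loss='binary_crossentropy',
                metrics=['accuracy'])
stacker.fit(meta_features, y, epochs=20, batch_size=32)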

Related

Fit neural network for 10 different small datasets and ensemble the results

I have 10 different small datasets. What I want is to train a regression neural network on each dataset separately, save those models, estimate MSE, MAE, and predicted values from each model separately, and in the end ensemble them to get one output result.
I can train the datasets separately and save the models one at a time, but that takes a lot of time with epochs=1000. I want to load my 10 datasets and create 10 models with them in a single run. Honestly, I am fairly new to this. How do I achieve this?
Thanks in advance.
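One possible approach, as a hedged sketch: loop over the datasets in a single script, training and saving one model per dataset, then average the predictions. The build_model helper, the datasets list, and X_new are hypothetical placeholders, not names from the question.

import numpy as np
from tensorflow import keras

def build_model(n_features):
    # hypothetical small regression network; adjust to your architecture
    model = keras.Sequential([
        keras.layers.Dense(32, activation='relu', input_shape=(n_features,)),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer='adam', loss='mse', metrics=['mae'])
    return model

models = []
for i, (X, y) in enumerate(datasets):  # datasets: your 10 (X, y) pairs
    model = build_model(X.shape[1])
    model.fit(X, y, epochs=1000, verbose=0)
    model.save('model_{}.h5'.format(i))  # one saved model per dataset
    models.append(model)

# simple ensemble: average the 10 models' predictions on new data X_new
ensemble_pred = np.mean([m.predict(X_new) for m in models], axis=0)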

Does a model get retrained entirely using .fit() in sklearn and TensorFlow?

I am trying to use machine learning in Python. Right now I am using sklearn and TensorFlow. I was wondering what to do if I have a model that needs updating when new data comes. For example, I have financial data. I built an LSTM model with TensorFlow and trained it. But new data comes in every day, and I don't want to retrain the model every day. Is there a way just to update the model and not retrain it from scratch?
In sklearn, the documentation for the .fit() method (using DecisionTreeClassifier as an example) says that it
Build a decision tree classifier from the training set (X, y).
So it seems like it will retrain the entire model from scratch.
In TensorFlow, the documentation for the .fit() method (using Sequential as an example) says
Trains the model for a fixed number of epochs (iterations on a dataset).
So it seems like it does update the model instead of retraining. But I am not sure if my understanding is correct. I would be grateful for some clarification. And if sklearn indeed retrains the entire model using .fit(), is there a function that would just update the model instead of retraining from scratch?
When you say update and not retrain, do you mean just updating the weights using the new data?
If so, you can adopt two approaches based on transfer learning.
Fine-tune: initialise a model with the weights from the old model and retrain it on the new data.
Add a new layer: add a new layer and update the weights in this layer only, while freezing the remaining weights in the network (see the sketch below).
For more details, read the TensorFlow guide on transfer learning.
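As a minimal sketch of the second approach in Keras, assuming an already-trained model old_model whose output can serve as features (all names here are illustrative placeholders):

from tensorflow import keras

old_model.trainable = False  # freeze all existing weights

# stack one fresh trainable layer on top of the frozen network
new_model = keras.Sequential([
    old_model,
    keras.layers.Dense(1, activation='sigmoid'),
])
# a small learning rate so only the new layer adapts to the new data
new_model.compile(optimizer=keras.optimizers.Adam(1e-4),
                  loss='binary_crossentropy')
new_model.fit(X_new, y_new, epochs=10)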
In TensorFlow, there is a method called train_on_batch() that you can call on your model.
Say you defined your model as Sequential and initially trained it on the existing initial_dataset using the fit method.
Now you have new data in hand; call it X_new_train, y_new_train.
You can update the existing model using train_on_batch().
An example would be:
import tensorflow as tf

no_of_samples_in_one_batch = 32  # however many new samples you have

# generate some X_new_train (one batch of 100-dimensional feature vectors)
X_new_train = tf.random.normal(shape=[no_of_samples_in_one_batch, 100])
# generate the corresponding y_new_train labels
y_new_train = tf.constant([[1.0]] * no_of_samples_in_one_batch)
# a single gradient update on this batch; previous training is kept
model.train_on_batch(X_new_train, y_new_train)
Note that the exact value of no_of_samples_in_one_batch (also called the batch size) is not so important here: whatever number of samples you have in your new data will be treated as one batch.
Now, coming to sklearn, I am not sure whether all machine learning models can incrementally learn (update weights from new examples). There is a list of models that support incremental learning:
https://scikit-learn.org/0.15/modules/scaling_strategies.html#incremental-learning
In sklearn, the .fit() method retrains on the dataset from scratch, i.e., when you call .fit(), any information pertaining to previous training is discarded. So, assuming you have new data coming in every day, you will have to retrain each time for most sklearn algorithms.
However, if you want to update sklearn models instead of training from scratch, some sklearn algorithms (like SGDClassifier) provide a method called partial_fit(). This can be used to update the weights of an existing model.
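As a minimal sketch (X_initial/y_initial and X_new_train/y_new_train are placeholders for your existing and newly arrived data):

import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier()

# the first call must declare all classes, since later batches
# may not contain every class
clf.partial_fit(X_initial, y_initial, classes=np.array([0, 1]))

# later, when new data arrives, update the existing weights
# instead of refitting from scratch
clf.partial_fit(X_new_train, y_new_train)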
In TensorFlow, by contrast, the .fit() method trains the model without discarding any information from previous training. Hence each call to .fit() continues training the existing model rather than starting over.
Tip: you can use SavedModel in TF to save the best model and reload and re-train it as and when more data keeps flowing in.

Tensorflow: Combining Loss Functions in LSTM Model for Domain Adaptation

Can anyone please help me out?
I am working on my thesis about predicting Parkinson's disease. I want to build an LSTM model that adapts independently of individual patients. Currently I have implemented it in TensorFlow with my own loss function.
I am planning to include both labeled and unlabeled training data in every batch. I want to apply my own loss function to both the labeled and unlabeled data, and additionally apply a cross-entropy loss only to the labeled data. Can I do this in TensorFlow?
So my question is: can I have a combination of loss functions in a single model, each applied to a different subset of the training data?
From an implementation perspective, the short answer is yes. However, your question could be more specific; perhaps what you mean is whether you could do it with tf.estimator?
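As a hedged sketch of one way to do it, assuming each batch carries a boolean mask marking which samples are labeled (all names here are illustrative, not from the original post):

import tensorflow as tf

def combined_loss(y_true, y_pred, labeled_mask, my_custom_loss):
    # custom loss applied to every sample, labeled and unlabeled alike
    loss_all = my_custom_loss(y_true, y_pred)
    # per-sample cross-entropy, zeroed out for the unlabeled samples
    ce = tf.keras.losses.binary_crossentropy(y_true, y_pred)
    ce_labeled = ce * tf.cast(labeled_mask, ce.dtype)
    return tf.reduce_mean(loss_all) + tf.reduce_mean(ce_labeled)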

Are there some pre-trained LSTM, RNN or ANN models for time-series prediction?

I am trying to solve a time series prediction problem. I tried with ANN and LSTM, played around a lot with the various parameters, but all I could get was 8% better than the persistence prediction.
So I was wondering: since you can save models in Keras, are there any pre-trained models (LSTM, RNN, or any other ANN) for time series prediction? If so, how do I get them? Are there any in Keras?
I mean, it would be super useful if there were a website containing pre-trained models, so that people wouldn't have to spend too much time training them.
Similarly, another question:
Is it possible to do the following?
Suppose I have a dataset now and I use it to train my model. Suppose that in a month I will have access to another dataset (corresponding to the same data or similar data, possibly in the future, but not exclusively). Will it be possible to continue training the model then? It is not the same thing as training it in batches. When you do it in batches you have all the data in one moment.
Is it possible? And how?
I'll answer your last questions first.
Will it be possible to continue training the model then? It is not the same thing as training it in batches. When you do it in batches you have all the data in one moment. Is it possible? And how?
Yes, it is possible. In general, it's called transfer learning. But keep in mind that if two datasets represent very different populations, the network will soon "forget" what it learned on the first run and will optimize to the second one. To do this, you simply start training from a loaded state instead of random initialization and save the model afterwards. It is also recommended to use a smaller learning rate on the second run in order to adapt it gradually to the new data.
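A minimal sketch of that workflow in Keras ('model.h5' and the second dataset's arrays are placeholders):

from tensorflow import keras

# start from the learned weights instead of random initialization
model = keras.models.load_model('model.h5')
# recompile with a smaller learning rate to adapt gradually to the new data
model.compile(optimizer=keras.optimizers.Adam(1e-4), loss='mse')
model.fit(X_second_dataset, y_second_dataset, epochs=50)
model.save('model_updated.h5')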
are there any pre-trained models (LSTM, RNN, or any other ANN) for time series prediction? If so, how do I get them? Are there any in Keras?
I haven't found exactly a pre-trained model, but a quick search gave me several active GitHub projects that you can just run and get a result for yourself: Time Series Prediction with Machine Learning (LSTM, GRU implementation in tensorflow), LSTM Neural Network for Time Series Prediction (keras and tensorflow), Time series predictions with Keras (keras and theano), Neural-Network-with-Financial-Time-Series-Data (keras and tensorflow). See also this post.
Now you can use BERT or related variants and here you can find all the pre-trained models: https://huggingface.co/transformers/pretrained_models.html
And it is possible to pre-train and fine-tune RNNs; you can refer to this paper: TimeNet: Pre-trained deep recurrent neural network for time series classification.

Multi Column Deep Neural Network with TFLearn and Tensorflow

I am trying to build a multi column deep neural network (MDNN) with tflearn and tensorflow. The MDNN is explained in this paper. The part I am struggling with is how I can add two or more inputs together to be fed to tensorflow.
For a single column I have:
network = tflearn.input_data(shape=[None, image_shape, image_shape, 3])
and
model.fit(X_input, y_train, n_epoch=50, shuffle=True,
          validation_set=(X_test_norm, y_test),
          show_metric=True, batch_size=240, run_id='traffic_cnn2')
where X_input has shape (31367, 32, 32, 3). I am pretty new to numpy, TensorFlow and tflearn. The difficulty for now really lies in how to specify multiple inputs to tflearn.
Any help is greatly appreciated.
The MDNN explained in the paper trains several models individually, using random (but bounded) distortions of the data. Once all models are trained, predictions are produced by an ensemble classifier that averages the outputs of all the models on the different versions of the data.
As far as I understand, the columns are trained independently, not jointly. So you must create different models and call fit on each of them. I recommend you start by training a single model and, once you have a training setup that gets good results, replicate it. To generate predictions, compute the average of the predicted probabilities from the predict function and take the most probable class (see the sketch below).
One way to generate data from your inputs is to use data augmentation. However, instead of adding new data, you must replace the originals with the modified versions.
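As a short sketch of that ensemble step (columns is a hypothetical list of the independently trained models, and X_test a placeholder for the evaluation data):

import numpy as np

# average the predicted class probabilities across the columns,
# then take the most probable class for each sample
probs = np.mean([m.predict(X_test) for m in columns], axis=0)
ensemble_classes = np.argmax(probs, axis=1)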
