Make predictions with a final Keras model - Python

Context:
Let's say I have a sequence of length 100: 70 points for training and 30 for testing, with no shuffling. I used the training set for creating/fitting the model as follows:
model = fit_lstm(train_scaled, batch_size,nb_epochs, nb_neurons)
And for testing:
forecast = model.predict(test_reshaped, batch_size=batch_size)
Issue:
Now that I have my final model saved and loaded elsewhere, I would like to predict what comes after (future predictions) using:
prediction = model.predict(X, batch_size=batch_size)
However, I don't know what to feed the model as X. Should X be similar to the training set or the test set in length, transformations (scaling, differencing), etc.? How far into the future can the model go?
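For illustration (this is not from the original post, just a common pattern): one way to produce future predictions is recursive one-step forecasting. Apply exactly the same transformations used at training time (scaling, and inverting any differencing) to the most recent observed values, predict one step ahead, append that prediction to the history, and repeat. Because each step feeds on the previous prediction, the error compounds, which is what limits how far ahead the forecast stays useful in practice. A minimal sketch, assuming a one-lag scaled input as in the typical one-step LSTM setup, with scaler being the transformer fitted on the training data:

import numpy as np

def forecast_future(model, scaler, last_observations, n_steps, batch_size=1):
    # last_observations: the most recent raw values of the series (e.g. the end of the test set)
    history = list(last_observations)
    predictions = []
    for _ in range(n_steps):
        x = scaler.transform(np.array([[history[-1]]]))  # same scaling as during training
        x = x.reshape(1, 1, 1)                           # [samples, timesteps, features]
        yhat_scaled = model.predict(x, batch_size=batch_size)
        yhat = scaler.inverse_transform(yhat_scaled.reshape(1, -1))[0, 0]
        predictions.append(yhat)
        history.append(yhat)                             # feed the prediction back as the next input
    return predictions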

Related

How to correctly plot loss curves for training and validation sets?

I want to plot loss curves for my training and validation sets the same way Keras does, but using scikit-learn. I have chosen the Concrete Compressive Strength dataset, which is a regression problem; the dataset is available at:
http://archive.ics.uci.edu/ml/machine-learning-databases/concrete/compressive/
So, I have converted the data to CSV and the first version of my program is the following:
Model 1
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPRegressor

df=pd.read_csv("Concrete_Data.csv")
train,validate,test=np.split(df.sample(frac=1),[int(.8*len(df)),int(.90*len(df))])
Xtrain=train.drop(["ConcreteCompStrength"],axis="columns")
ytrain=train["ConcreteCompStrength"]
Xval=validate.drop(["ConcreteCompStrength"],axis="columns")
yval=validate["ConcreteCompStrength"]
mlp=MLPRegressor(activation="relu",max_iter=5000,solver="adam",random_state=2)
mlp.fit(Xtrain,ytrain)
plt.plot(mlp.loss_curve_,label="train")
mlp.fit(Xval,yval) #doubt
plt.plot(mlp.loss_curve_,label="validation") #doubt
plt.legend()
The resulting graph is the following:
In this model, I doubt whether the marked part is correct because, as far as I know, one should leave the validation/test set aside, so maybe the fit call there is not correct. The score that I got is 0.95.
Model 2
In this model I try to use the validation score as follows:
df=pd.read_csv("Concrete_Data.csv")
train,validate,test=np.split(df.sample(frac=1),[int(.8*len(df)),int(.90*len(df))])
Xtrain=train.drop(["ConcreteCompStrength"],axis="columns")
ytrain=train["ConcreteCompStrength"]
Xval=validate.drop(["ConcreteCompStrength"],axis="columns")
yval=validate["ConcreteCompStrength"]
mlp=MLPRegressor(activation="relu",max_iter=5000,solver="adam",random_state=2,early_stopping=True)
mlp.fit(Xtrain,ytrain)
plt.plot(mlp.loss_curve_,label="train")
plt.plot(mlp.validation_scores_,label="validation") #line changed
plt.legend()
For this model, I had to set early_stopping to True and plot validation_scores_, but the resulting graph is a little bit weird:
The score I get is 0.82, but I read that this occurs when the model finds it easier to predict the data in the validation set than in the train set. I believe that is because I am using validation_scores_, but I was not able to find much online documentation about this particular attribute.
What is the correct way to plot these loss curves for tuning my hyperparameters in scikit-learn?
Update
I have programmed the model as advised, like this:
mlp=MLPRegressor(activation="relu",max_iter=1,solver="adam",random_state=2,early_stopping=True)
training_mse = []
validation_mse = []
epochs = 5000
for epoch in range(1,epochs):
    mlp.fit(X_train, Y_train)
    Y_pred = mlp.predict(X_train)
    curr_train_score = mean_squared_error(Y_train, Y_pred)   # training performances
    Y_pred = mlp.predict(X_valid)
    curr_valid_score = mean_squared_error(Y_valid, Y_pred)   # validation performances
    training_mse.append(curr_train_score)    # list of training perf to plot
    validation_mse.append(curr_valid_score)  # list of valid perf to plot
plt.plot(training_mse,label="train")
plt.plot(validation_mse,label="validation")
plt.legend()
but the plot obtained shows two flat lines:
It seems I am missing something here.
You shouldn't fit your model on the validation set. The validation set is usually used to decide what hyperparameters to use, not the parameters' values.
The standard way to do training is to divide your dataset into three parts
training
validation
test
For example, with a split of 80/10/10 %.
Usually, you would select a neural network (how many layers, how many nodes, which activation functions), then train only on the training set, check the result on the validation set, and finally on the test set.
I'll show a pseudo algorithm to make it clear:
for model in my_networks:          # hyperparameters selection
    model.fit(X_train, Y_train)    # parameters fitting
    model.predict(X_valid)         # no training, only check the performance
Save the model performances on the validation set and pick the best model (the one with the best validation score), then check the results on the test set:
model.predict(X_test) # this will be the estimated performance of your model
If your dataset is big enough, you could also use something like cross-validation.
Anyway, remember:
the parameters are the network weights
you fit the parameters with the training set
the hyperparameters are the ones that define the net architecture (layers, nodes, activation functions)
you select the best hyperparameters checking the result of your model on the validation set
after this selection (best parameters, best hyperparameters) you estimate the model's performance by testing it on the test set; a concrete sketch follows below
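A minimal sketch of that selection loop with scikit-learn's MLPRegressor (the candidate hidden-layer sizes are only illustrative choices, and Xtest/ytest would be built from the test split the same way as Xval/yval):

from sklearn.neural_network import MLPRegressor

candidates = [(64,), (128,), (64, 64)]               # hyperparameters: hidden layer sizes to try
best_model, best_val_score = None, float("-inf")
for hidden in candidates:                            # hyperparameter selection
    model = MLPRegressor(hidden_layer_sizes=hidden, activation="relu",
                         solver="adam", max_iter=5000, random_state=2)
    model.fit(Xtrain, ytrain)                        # fit the parameters on the training set only
    val_score = model.score(Xval, yval)              # R^2 on the validation set, no training here
    if val_score > best_val_score:
        best_model, best_val_score = model, val_score
test_score = best_model.score(Xtest, ytest)          # estimated performance of the chosen model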
To obtain the same result as Keras, you should understand that when you call fit() on the model, training stops after a fixed number of epochs (200 by default), after your defined number of epochs (5000 in your case), or when early stopping kicks in if you enabled it. From the scikit-learn documentation:
max_iter : int, default=200
    Maximum number of iterations. The solver iterates until convergence (determined by ‘tol’) or this number of iterations. For stochastic solvers (‘sgd’, ‘adam’), note that this determines the number of epochs (how many times each data point will be used), not the number of gradient steps.
Check your model definition and arguments on the scikit-learn documentation page.
To obtain the same result as Keras, you could fix the number of training iterations per fit() call (e.g. one epoch per call), check the result on the validation set, and then train again until you reach the desired number of epochs.
For example, something like this (if your model uses MSE):
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

epochs = 5000
mlp = MLPRegressor(activation="relu",
                   max_iter=1,
                   solver="adam",
                   random_state=2,
                   early_stopping=True)
training_mse = []
validation_mse = []
for epoch in range(epochs):
    mlp.fit(X_train, Y_train)
    Y_pred = mlp.predict(X_train)
    curr_train_score = mean_squared_error(Y_train, Y_pred)   # training performance
    Y_pred = mlp.predict(X_valid)
    curr_valid_score = mean_squared_error(Y_valid, Y_pred)   # validation performance
    training_mse.append(curr_train_score)    # list of training perf to plot
    validation_mse.append(curr_valid_score)  # list of valid perf to plot
I had the same problem: I obtained two flat lines when using the code as advised. I solved the problem by adding warm_start=True to the MLPRegressor parameters, as explained in the scikit-learn docs, MLPRegressor - 1.17.9. More control with warm_start:
mlp=MLPRegressor(activation="relu",max_iter=1,solver="adam",random_state=2,early_stopping=True, warm_start=True)
The plots obtained are now correct:
Train and validation loss curves

Tensorflow BERT for token-classification - exclude pad-tokens from accuracy while training and testing

I'm doing token-based classification using the pre-trained BERT model for TensorFlow to automatically label causes and effects in sentences.
To access BERT, I'm using the TFBertForTokenClassification-Interface from huggingface: https://huggingface.co/transformers/model_doc/bert.html#tfbertfortokenclassification
The sentences I use for training are all converted to tokens (basically a mapping of words to numbers) according to the BERT tokenizer and then padded to a certain length before training. So when one sentence has only 50 tokens and another has only 30, the first one is filled up with 50 pad tokens and the second one with 70 of them, to get a universal input sentence length of 100.
I then train my model to predict on every token which label this token belongs to; whether it is part of the cause, the effect or none of them.
However, during training and evaluation, my model makes predictions on the PAD tokens as well, and they are also included in the model's accuracy. As PAD tokens are very easy for the model to predict (they are always the same token and all have the "none" label, meaning they belong to neither the cause nor the effect of the sentence), they really distort my model's accuracy.
For example, if you have a sentence which has 30 words -> 30 tokens and you pad all sentences to a length of 100, then this sentence would get a score of 70% even if the model predicted none of the "real" tokens correctly.
This way I'm getting training and validation accuracies of 90+% really quickly, although the model performs poorly on the real (non-pad) tokens.
I thought the attention mask was there to solve this problem, but that doesn't seem to be the case.
The input-datasets are created as follows:
def example_to_features(input_ids, attention_masks, token_type_ids, label_ids):
    return {"input_ids": input_ids,
            "attention_mask": attention_masks}, label_ids

train_ds = tf.data.Dataset.from_tensor_slices((input_ids_train,attention_masks_train,token_ids_train,label_ids_train)).map(example_to_features).shuffle(buffer_size=1000).batch(32)
Model creation:
from transformers import TFBertForTokenClassification
num_epochs = 30
model = TFBertForTokenClassification.from_pretrained('bert-base-uncased', num_labels=3)
model.layers[-1].activation = tf.keras.activations.softmax
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-6)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
model.summary()
And then I train it like this:
history = model.fit(train_ds, epochs=num_epochs, validation_data=validate_ds)
Has anyone encountered this problem so far or does know how to exclude the predictions on pad-tokens from the model's accuracy during training and evaluation?
Yes, this is normal.
The output of BERT [batch_size, max_seq_len = 100, hidden_size] will include values or embeddings for [PAD] tokens as well. However, you also provide attention_masks to the BERT model so that it does not take into consideration these [PAD] tokens.
Similarly, you need to mask these [PAD] tokens before passing the BERT outputs to the final fully connected layer, mask them when you are calculating the loss, and also when calculating metrics like precision and recall.
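A minimal sketch of one way to do that in Keras (my assumption, not part of the original answer): reuse the attention mask as a per-token sample weight, so that [PAD] positions contribute neither to the loss nor to the reported accuracy. A tf.data dataset may yield (features, labels, sample_weights) tuples, and metrics listed under weighted_metrics have those weights applied to them:

import tensorflow as tf

def example_to_features(input_ids, attention_masks, token_type_ids, label_ids):
    features = {"input_ids": input_ids, "attention_mask": attention_masks}
    sample_weights = tf.cast(attention_masks, tf.float32)  # 1.0 for real tokens, 0.0 for [PAD]
    return features, label_ids, sample_weights

train_ds = (tf.data.Dataset
            .from_tensor_slices((input_ids_train, attention_masks_train,
                                 token_ids_train, label_ids_train))
            .map(example_to_features)
            .shuffle(buffer_size=1000)
            .batch(32))

model.compile(optimizer=optimizer,
              loss=loss,                    # SparseCategoricalCrossentropy(from_logits=True)
              weighted_metrics=[metric])    # accuracy now ignores the zero-weighted [PAD] tokens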

Tensorflow-IO Dataset input pipeline with very large HDF5 files

I have very large training files (30 GB).
Since all the data does not fit in my available RAM, I want to read the data by batch.
I saw that there is a tensorflow-io package which implements a way to read HDF5 into TensorFlow thanks to the function tfio.IODataset.from_hdf5().
Then, since tf.keras.Model.fit() takes a tf.data.Dataset as input containing both samples and targets, I need to zip my X and Y together and then use .batch and .prefetch to load into memory just the necessary data. For testing, I tried to apply this method to smaller samples: training (9 GB), validation (2.5 GB) and testing (1.2 GB), which I know work well because they fit into memory, and I get good results (70% accuracy and loss < 1).
The training files are stored in HDF5 files split into samples (X) and labels (Y) files like so:
X_learn.hdf5
X_val.hdf5
X_test.hdf5
Y_test.hdf5
Y_learn.hdf5
Y_val.hdf5
Here is my code:
BATCH_SIZE = 2048
EPOCHS = 100
# Create an IODataset from a hdf5 file's dataset object
x_val = tfio.IODataset.from_hdf5(path_hdf5_x_val, dataset='/X_val')
y_val = tfio.IODataset.from_hdf5(path_hdf5_y_val, dataset='/Y_val')
x_test = tfio.IODataset.from_hdf5(path_hdf5_x_test, dataset='/X_test')
y_test = tfio.IODataset.from_hdf5(path_hdf5_y_test, dataset='/Y_test')
x_train = tfio.IODataset.from_hdf5(path_hdf5_x_train, dataset='/X_learn')
y_train = tfio.IODataset.from_hdf5(path_hdf5_y_train, dataset='/Y_learn')
# Zip together samples and corresponding labels
train = tf.data.Dataset.zip((x_train,y_train)).batch(BATCH_SIZE, drop_remainder=True).prefetch(tf.data.experimental.AUTOTUNE)
test = tf.data.Dataset.zip((x_test,y_test)).batch(BATCH_SIZE, drop_remainder=True).prefetch(tf.data.experimental.AUTOTUNE)
val = tf.data.Dataset.zip((x_val,y_val)).batch(BATCH_SIZE, drop_remainder=True).prefetch(tf.data.experimental.AUTOTUNE)
# Build the model
model = build_model()
# Compile the model with a custom learning rate schedule for the Adam optimizer
model.compile(loss='categorical_crossentropy',
              optimizer=Adam(lr=lr_schedule(0)),
              metrics=['accuracy'])
# Fit the model with the class weights calculated before
model.fit(train,
          epochs=EPOCHS,
          class_weight=class_weights_train,
          validation_data=val,
          shuffle=True,
          callbacks=callbacks)
This code runs, but the loss goes very high (300+) and the accuracy drops to 0 (from 0.30 to 4e-5) right from the beginning... I don't understand what I am doing wrong. Am I missing something?
(Answer summarized from the comments for the benefit of the community.)
There was no issue with the code; the problem was with the data, which was not preprocessed properly, so the model was not able to learn well, which led to the strange loss and accuracy.

Pytorch simulation fails to converge on convex loss function when not initialized with 0

My code works when the weights are initialized with 0. When I initialize them according to some seed, they fail to converge. This should be a bug, since the loss function is convex.
I filtered two labels from MNIST (0 and 1) and then trained a logistic regression model using PyTorch. Since I use only 200 training samples (and 784 parameters), the model should quickly converge to 100% accuracy on the training set. This is not the case when the weights are initialized by some seed.
I had some problems sharing my code on Stack Overflow, so here is a link to the code: https://drive.google.com/file/d/1ELe8TIWrXMiXgsB63B0Ss43GPr719rGc/view?usp=sharing
Your data are not rescaled and normalized. If you look at the images variable in your training loop, its values are between 0 and 255, which is in all likelihood hurting your training process.
There are cleaner ways to subsample the dataset as you want, but without modifying too much of your code, you can use this data-loading definition:
import torch
import torchvision.datasets as dsets
import torchvision.transforms as transforms

# Load the dataset with rescaling and normalization
preprocessing = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize((0.1307,), (0.3081,))])
train_dataset = dsets.MNIST(root='./data', train=True, transform=preprocessing, download=True)

# Filter samples by label (to get binary classification) and by number of training samples
Binary_filter = torch.add(train_dataset.targets == 1, train_dataset.targets == 0)
train_dataset.data, train_dataset.targets = train_dataset.data[Binary_filter], train_dataset.targets[Binary_filter]

TrainSet_filter = torch.cat((torch.ones(num_of_training_samples),
                             torch.zeros(len(train_dataset.targets) - num_of_training_samples)), 0).bool()
train_dataset.data, train_dataset.targets = train_dataset.data[TrainSet_filter], train_dataset.targets[TrainSet_filter]

# Make the dataset iterable
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
I have ~100% accuracy in about 5-10 epochs.
Your loss function (BCE) is convex only with respect to the outputs of the deep network, not with respect to the weights.
You definitely can't assume that any local minimum is also a global minimum.

Keras Validation Accuracy is Zero but other metrics are normal

I am working on a computer vision problem in Keras and I have run into an interesting problem. My val_acc is 0.0000e+00. This is especially interesting as my other metrics, such as loss, acc, and val_loss, are all acting normally.
This started happening when I switched from the Sequence data generator to a custom one that I'm pretty sure is working as intended. My issue is very similar to this one: "Validation accuracy is 0 with Keras fit_generator", but no answer was reached in that thread.
I have checked to make sure my activations and loss metrics are appropriate for my particular problem. I am using loss='categorical_crossentropy' and metrics=['accuracy'], and am attempting to predict the month that a certain spectrogram comes from. The validation data is being loaded in exactly the same way as the training data, so I really can't figure out what's happening. Also, even random guessing should give a 1/12 val_acc, right? It can't be zero.
Here is my model architecture:
x = (Convolution2D(32,5,5,activation='relu',input_shape=(501,501,1)))(input_img)
x = (MaxPooling2D(pool_size=(2,2)))(x)
x = (Convolution2D(32,5,5,activation='relu'))(x)
x = (MaxPooling2D(pool_size=(2,2)))(x)
x = (Dropout(0.25))(x)
x = (Flatten())(x)
x = (Dense(128,activation='relu'))(x)
x = (Dropout(0.5))(x)
classify = (Dense(12,activation='softmax', kernel_regularizer=regularizers.l1_l2(l1 = 0.001,l2 = 0.001)))(x)
model = Model(input_img,classify)
model.compile(loss='categorical_crossentropy',optimizer='nadam',metrics=['accuracy'])
and here is my call to fit_generator:
model.fit_generator(generator=pd.data_generator(folder,'train'),
                    validation_data=pd.data_generator(folder,'test'),
                    steps_per_epoch=120,
                    validation_steps=24,
                    nb_epoch=20,
                    verbose=1,
                    shuffle=True,
                    callbacks=[tensorboard_callback, early_stop_callback])
and finally here is the important part of my data generator:
if mode == 'test':
    print('test')
    while True:
        for things in up.unpickle_batch(folder,50,6000,7200):  # the last 1200 things in batches of 50
            random.shuffle(things)
            test_spect = []
            test_months = []
            for thing in things:
                test_spect.append(thing.spect)      # GET BATCH DATA
                test_months.append(thing.month-1)   # months go from 1-12 but must be 0-11 for to_categorical
            x_test = np.asarray(test_spect)         # PREPARE BATCH DATA
            x_test = x_test.astype('float32')
            x_test /= np.amax(x_test) #- 0.5
            X_test = np.reshape(x_test, (-1,501,501,1))
            Y_test = np_utils.to_categorical(test_months,12)
            yield X_test, Y_test                    # RETURN BATCH DATA
Check for bad data.
Make sure your data is what you think it is -- shuffled, distributed the same as your validation and/or test set, free of misleading/erroneous/contradictory samples. You can probably generate a failproof dataset (e.g. distinguish dark images from light ones, or sharp versus blurry) and prove that everything but the data is OK. If you can't, then look more closely at your code. This, however, sounds like a data problem.
I just fixed a similar problem in a simple 3-layer MLP network for which training loss & accuracy were heading in reasonable directions, validation loss was following training loss (but lagging) yet validation accuracy hovered at zero. There was an off-by-one error in my training dataset generation (a sampling script from a larger set) that meant that 1 sample in the entire block of samples for one type had the label for the next block for a different type. 499 correct samples out of 500 was insufficient to keep the training on track.
