tf.keras predictions are bad while evaluation is good - python

I'm programming a model in tf.keras, and running model.evaluate() on the training set usually yields ~96% accuracy. My evaluation on the test set is usually close, about 93%. However, when I predict manually, the model is usually inaccurate. This is my code:
import tensorflow as tf
from tensorflow.keras import layers
import numpy as np
import pandas as pd
!git clone https://github.com/DanorRon/data
%cd data
!ls
batch_size = 100
epochs = 15
alpha = 0.001
lambda_ = 0.001
h1 = 50
train = pd.read_csv('/content/data/mnist_train.csv.zip')
test = pd.read_csv('/content/data/mnist_test.csv.zip')
train = train.loc['1':'5000', :]
test = test.loc['1':'2000', :]
train = train.sample(frac=1).reset_index(drop=True)
test = test.sample(frac=1).reset_index(drop=True)
x_train = train.loc[:, '1x1':'28x28']
y_train = train.loc[:, 'label']
x_test = test.loc[:, '1x1':'28x28']
y_test = test.loc[:, 'label']
x_train = x_train.values
y_train = y_train.values
x_test = x_test.values
y_test = y_test.values
nb_classes = 10
targets = y_train.reshape(-1)
y_train_onehot = np.eye(nb_classes)[targets]
nb_classes = 10
targets = y_test.reshape(-1)
y_test_onehot = np.eye(nb_classes)[targets]
model = tf.keras.Sequential()
model.add(layers.Dense(784, input_shape=(784,), kernel_initializer='random_uniform', bias_initializer='zeros'))
model.add(layers.Dense(h1, activation='relu', kernel_regularizer=tf.keras.regularizers.l2(lambda_), kernel_initializer='random_uniform', bias_initializer='zeros'))
model.add(layers.Dense(10, activation='softmax', kernel_regularizer=tf.keras.regularizers.l2(lambda_), kernel_initializer='random_uniform', bias_initializer='zeros'))
model.compile(optimizer='SGD',
              loss='mse',
              metrics=['categorical_accuracy'])
model.fit(x_train, y_train_onehot, epochs=epochs, batch_size=batch_size)
model.evaluate(x_test, y_test_onehot, batch_size=batch_size)
prediction = model.predict_classes(x_test)
print(prediction)
print(y_test[1:])
I've heard that a lot of the time when people have this problem, it's just a problem with data input. But I can't see any problem with that here since it almost always predicts wrongly (about as much as you would expect if it was random). How do I fix this problem?
Edit: Here are the specific results:
Last training step:
Epoch 15/15
49999/49999 [==============================] - 3s 70us/sample - loss: 0.0309 - categorical_accuracy: 0.9615
Evaluation output:
2000/2000 [==============================] - 0s 54us/sample - loss: 0.0352 - categorical_accuracy: 0.9310
[0.03524150168523192, 0.931]
Output from model.predict_classes:
[9 9 0 ... 5 0 5]
Output from print(y_test):
[9 0 0 7 6 8 5 1 3 2 4 1 4 5 8 4 9 2 4]

First thing is, your loss function is wrong: you are in a multi-class classification setting, and you are using a loss function suitable for regression and not classification (MSE).
Change your model compilation to:
model.compile(loss='categorical_crossentropy',
              optimizer='SGD',
              metrics=['accuracy'])
See the Keras MNIST MLP example for corroboration, and my own answer in What function defines accuracy in Keras when the loss is mean squared error (MSE)? for more details (although here you actually have the inverse problem, i.e. a regression loss in a classification setting).
Moreover, it is not clear if the MNIST variant you are using is already normalized; if not, you should normalize the data yourself:
x_train = x_train.values/255
x_test = x_test.values/255
It is also not clear why you ask for a 784-unit layer, since this is actually the second layer of your NN (the first is implicitly set by the input_shape argument - see Keras Sequential model input layer), and it certainly does not need to contain one unit for each one of your 784 input features.
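For illustration only, here is a minimal sketch of an equivalent starting point (reusing the h1 variable from the question): the input layer is implied by input_shape, so the first hidden layer can be any width you like rather than 784.
model = tf.keras.Sequential()
model.add(layers.Dense(h1, activation='relu', input_shape=(784,)))  # input layer is implicit
model.add(layers.Dense(10, activation='softmax'))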
UPDATE (after comments):
But why is MSE meaningless for classification?
This is a theoretical issue, not exactly appropriate for SO; roughly speaking, it is for the same reason we don't use linear regression for classification - we use logistic regression, the actual difference between the two approaches being exactly the loss function. Andrew Ng, in his popular Machine Learning course at Coursera, explains this nicely - see his Lecture 6.1 - Logistic Regression | Classification at Youtube (explanation starts at ~ 3:00), as well as section 4.2 Why Not Linear Regression [for classification]? of the (highly recommended and freely available) textbook An Introduction to Statistical Learning by Hastie, Tibshirani and coworkers.
And MSE does give a high accuracy, so why doesn't that matter?
Nowadays, almost anything you throw at MNIST will "work", which of course neither makes it correct nor a good approach for more demanding datasets...
UPDATE 2:
whenever I run with crossentropy, the accuracy just flutters around at ~10%
Sorry, cannot reproduce the behavior... Taking the Keras MNIST MLP example with a simplified version of your model, i.e.:
model = Sequential()
model.add(Dense(784, activation='linear', input_shape=(784,)))
model.add(Dense(50, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer=SGD(),
              metrics=['accuracy'])
we easily end up with a ~ 92% validation accuracy after only 5 epochs:
history = model.fit(x_train, y_train,
                    batch_size=128,
                    epochs=5,
                    verbose=1,
                    validation_data=(x_test, y_test))
Train on 60000 samples, validate on 10000 samples
Epoch 1/10
60000/60000 [==============================] - 4s - loss: 0.8974 - acc: 0.7801 - val_loss: 0.4650 - val_acc: 0.8823
Epoch 2/10
60000/60000 [==============================] - 4s - loss: 0.4236 - acc: 0.8868 - val_loss: 0.3582 - val_acc: 0.9034
Epoch 3/10
60000/60000 [==============================] - 4s - loss: 0.3572 - acc: 0.9009 - val_loss: 0.3228 - val_acc: 0.9099
Epoch 4/10
60000/60000 [==============================] - 4s - loss: 0.3263 - acc: 0.9082 - val_loss: 0.3024 - val_acc: 0.9156
Epoch 5/10
60000/60000 [==============================] - 4s - loss: 0.3061 - acc: 0.9132 - val_loss: 0.2845 - val_acc: 0.9196
Notice the activation='linear' of the first Dense layer, which is equivalent to not specifying anything, as in your case (as I said, practically everything you throw at MNIST will "work")...
Final advice: Try modifying your model as:
model = tf.keras.Sequential()
model.add(layers.Dense(784, activation='relu', input_shape=(784,)))
model.add(layers.Dense(h1, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
in order to use the better (and default) 'glorot_uniform' initializer, and remove the kernel_regularizer arguments (they may be the cause of the issue - always start simple!)...
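For completeness, a hedged sketch of how the rest of the pipeline could then look, reusing the variables defined in the question together with the loss suggested above (a sketch, not a definitive recipe):
model.compile(optimizer='SGD',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train_onehot, epochs=epochs, batch_size=batch_size)
model.evaluate(x_test, y_test_onehot, batch_size=batch_size)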

Related

How can I increase the accuracy of my LSTM model (regression) [duplicate]

I am doing a time series analysis using TensorFlow/Keras in Python.
The overall LSTM model looks like this:
model = keras.models.Sequential()
model.add(keras.layers.LSTM(25, input_shape = (1,1), activation = 'relu', dropout = 0.2, return_sequences = False))
model.add(keras.layers.Dense(1))
model.compile(optimizer = 'adam', loss = 'mean_squared_error', metrics=['acc'])
tensorboard = keras.callbacks.TensorBoard(log_dir="logs/{}".format(time()))
es = keras.callbacks.EarlyStopping(monitor='val_acc', mode='max', verbose=1, patience=50)
mc = keras.callbacks.ModelCheckpoint('/home/sukriti/best_model.h5', monitor='val_loss', mode='min', save_best_only=True)
history = model.fit(trainX_3d, trainY_1d, epochs=50, batch_size=10, verbose=2, validation_data = (testX_3d, testY_1d), callbacks=[mc, es, tensorboard])
I am getting the following outcome:
Train on 14015 samples, validate on 3503 samples
Epoch 1/50
- 3s - loss: 0.0222 - acc: 7.1352e-05 - val_loss: 0.0064 - val_acc: 0.0000e+00
Epoch 2/50
- 2s - loss: 0.0120 - acc: 7.1352e-05 - val_loss: 0.0054 - val_acc: 0.0000e+00
Epoch 3/50
- 2s - loss: 0.0108 - acc: 7.1352e-05 - val_loss: 0.0047 - val_acc: 0.0000e+00
Now the val_acc remains unchanged. Is this normal?
What does it signify?
As signified by loss = 'mean_squared_error', you are in a regression setting, where accuracy is meaningless (it is meaningful only in classification problems).
Unfortunately, Keras will not "protect" you in such a case, insisting on computing and reporting back an "accuracy", despite the fact that it is meaningless and inappropriate for your problem - see my answer in What function defines accuracy in Keras when the loss is mean squared error (MSE)?
You should simply remove metrics=['acc'] from your model compilation and not bother - in regression settings, MSE itself can (and usually does) also serve as the performance metric.
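For illustration, a minimal sketch of the suggested compilation (same model and optimizer as in the question); if a separate metric is still wanted alongside the loss, a regression metric such as MAE is the usual choice:
model.compile(optimizer='adam', loss='mean_squared_error')
# or, with an extra regression metric reported per epoch:
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mae'])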
In my case I had validation accuracy of 0.0000e+00 throughout training (using Keras and CNTK-GPU backend) when my batch size was 64 but there were only 120 samples in my validation set (divided into three classes). After I changed the batch size to 60, I got normal accuracy values.
It will not improve by changing the batch size or the metrics. I had the same problem, but when I shuffled my training and validation data sets, the 0.0000e+00 went away.
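A minimal sketch of that shuffling step with scikit-learn, reusing the array names from the question (an illustration of the comment above, not the commenter's actual code):
from sklearn.utils import shuffle

# shuffle features and labels together, keeping their pairing intact
trainX_3d, trainY_1d = shuffle(trainX_3d, trainY_1d, random_state=42)
testX_3d, testY_1d = shuffle(testX_3d, testY_1d, random_state=42)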

Big difference between val-acc and prediction accuracy in Keras Neural Network

I have a dataset that I used for making an NN model in Keras. I took 2000 rows from that dataset to use as validation data; those 2000 rows are what gets passed to the .predict function.
I wrote code for a Keras NN and for now it works well, but I noticed something that is very strange to me. It gives me very good accuracy of more than 83% and a loss around 0.12, but when I make a prediction with unseen data (those 2000 rows), it only predicts correctly about 65% of the time on average.
When I add a Dropout layer, it only decreases accuracy.
Then I added EarlyStopping, and it gave me accuracy around 86% and loss around 0.10, but still, when I make predictions with unseen data, I get a final prediction accuracy of 67%.
Does this mean that the model makes a correct prediction in 87% of cases? Going by that logic, if I pass 100 samples to my .predict function, the program should make a good prediction for 87/100 samples, or somewhere in that range (let's say more than 80)? I have tried passing 100, 500, 1000, 1500 and 2000 samples to my .predict function, and it always makes a correct prediction for 65-68% of the samples.
Why is that, am I doing something wrong?
I have tried playing with the number of layers, the number of nodes, different activation functions and different optimizers, but it only changes the results by 1-2%.
My dataset looks like this:
DataFrame shape (59249, 33)
x_train shape (47399, 32)
y_train shape (47399,)
x_test shape (11850, 32)
y_test shape (11850,)
testing_features shape (1000, 32)
This is my NN model:
model = Sequential()
model.add(Dense(64, input_dim = x_train.shape[1], activation = 'relu')) # input layer requires input_dim param
model.add(Dropout(0.2))
model.add(Dense(32, activation = 'relu'))
model.add(Dropout(0.2))
model.add(Dense(16, activation = 'relu'))
model.add(Dense(1, activation='sigmoid')) # sigmoid instead of relu for final probability between 0 and 1
# compile the model, adam gradient descent (optimized)
model.compile(loss="binary_crossentropy", optimizer= "adam", metrics=['accuracy'])
# call the function to fit to the data training the network)
es = EarlyStopping(monitor='val_loss', min_delta=0.0, patience=1, verbose=0, mode='auto')
model.fit(x_train, y_train, epochs = 15, shuffle = True, batch_size=32, validation_data=(x_test, y_test), verbose=2, callbacks=[es])
scores = model.evaluate(x_test, y_test)
print(model.metrics_names[0], round(scores[0]*100,2), model.metrics_names[1], round(scores[1]*100,2))
These are the results:
Train on 47399 samples, validate on 11850 samples
Epoch 1/15
- 25s - loss: 0.3648 - acc: 0.8451 - val_loss: 0.2825 - val_acc: 0.8756
Epoch 2/15
- 9s - loss: 0.2949 - acc: 0.8689 - val_loss: 0.2566 - val_acc: 0.8797
Epoch 3/15
- 9s - loss: 0.2741 - acc: 0.8773 - val_loss: 0.2468 - val_acc: 0.8849
Epoch 4/15
- 9s - loss: 0.2626 - acc: 0.8816 - val_loss: 0.2416 - val_acc: 0.8845
Epoch 5/15
- 10s - loss: 0.2566 - acc: 0.8827 - val_loss: 0.2401 - val_acc: 0.8867
Epoch 6/15
- 8s - loss: 0.2503 - acc: 0.8858 - val_loss: 0.2364 - val_acc: 0.8893
Epoch 7/15
- 9s - loss: 0.2480 - acc: 0.8873 - val_loss: 0.2321 - val_acc: 0.8895
Epoch 8/15
- 9s - loss: 0.2450 - acc: 0.8886 - val_loss: 0.2357 - val_acc: 0.8888
11850/11850 [==============================] - 2s 173us/step
loss 23.57 acc 88.88
And this is for prediction:
# testing_features are 2000 rows that I extracted from the dataset (these samples are not used in training; this is a separate dataset that's imported)
prediction = model.predict(testing_features , batch_size=32)
res = []
for p in prediction:
    res.append(p[0].round(0))
# Accuracy with sklearn - also much lower
acc_score = accuracy_score(testing_results, res)
print("Sklearn acc", acc_score)
result_df = pd.DataFrame({"label": testing_results,
                          "prediction": res})
result_df["prediction"] = result_df["prediction"].astype(int)
s = 0
for x, y in zip(result_df["label"], result_df["prediction"]):
    if x == y:
        s += 1
print(s,"/",len(result_df))
acc = s*100/len(result_df)
print('TOTAL ACC:', round(acc,2))
The problem is... now I get an accuracy with sklearn of 52% and my_acc of 52%.
Why do I get such low accuracy on prediction, when the validation accuracy says it should be much higher?
The training data you posted gives high validation accuracy, so I'm a bit confused as to where you get that 65% from, but in general, when your model performs much better on training data than on unseen data, it means you're overfitting. This is a big and recurring problem in machine learning, and there is no method guaranteed to prevent it, but there are a couple of things you can try (a combined sketch follows this list):
regularizing the weights of your network, e.g. using l2 regularization
using stochastic regularization techniques such as drop-out during training
early stopping
reducing model complexity (but you say you've already tried this)
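Here is a minimal, hypothetical Keras sketch combining these ideas (not the asker's actual model; the layer sizes, regularization strength, dropout rate and patience are placeholders):
import tensorflow as tf
from tensorflow.keras import layers, regularizers, callbacks

model = tf.keras.Sequential([
    layers.Dense(64, activation='relu', input_shape=(x_train.shape[1],),
                 kernel_regularizer=regularizers.l2(1e-4)),   # l2 weight regularization
    layers.Dropout(0.3),                                       # stochastic regularization
    layers.Dense(1, activation='sigmoid'),
])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# early stopping: halt training once the validation loss stops improving
es = callbacks.EarlyStopping(monitor='val_loss', patience=3)
model.fit(x_train, y_train, validation_data=(x_test, y_test),
          epochs=50, batch_size=32, callbacks=[es])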
I will list the problems/recommendations that I see with your model.
What are you trying to predict? You are using a sigmoid activation function in the last layer, which suggests binary classification, but in your loss function you used mse, which seems strange. You can try binary_crossentropy instead of the mse loss function for your model.
Your model seems to suffer from overfitting, so you can increase the Dropout probability and also add new Dropout layers between the other hidden layers, or you can remove one of the hidden layers, because it seems your model is too complex.
You can make your layers narrower, e.g. 64 -> 32 -> 16 -> 1, or try different NN architectures.
Try the adam optimizer instead of sgd.
If you have 57849 samples, you can use 47000 samples for training+validation and the rest will be your test set.
Don't use the same set for your evaluation and validation. First split your data into train and test sets. Then, when you are fitting your model, pass a validation_split ratio and it will automatically carve a validation set out of your training set (a sketch of this workflow follows the list).
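A hedged sketch of that workflow, reusing names from the question (the split fraction and hyperparameters are placeholders, not the asker's actual settings):
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# validation data is carved automatically out of the training set
model.fit(x_train, y_train, epochs=15, batch_size=32,
          validation_split=0.2, shuffle=True, verbose=2)
# x_test / y_test are then kept aside purely for the final, unseen-data evaluation
model.evaluate(x_test, y_test)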

Metrics not displaying when running model.fit

I am working my way through an ML example in Google Colab. The documentation says that when I run model.fit, the loss and accuracy metrics are displayed. I am not seeing any loss or accuracy metric.
I have added accuracy as a metric in model.compile
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
Here is a screenshot of what I am seeing.
How do I get the loss and accuracy metrics to be displayed when I am fitting the model?
You can use the verbose flag and set it to 2 to display one line per epoch, or to 1 for a progress bar.
import keras
import numpy as np
model = keras.Sequential()
model.add(keras.layers.Dense(10, input_shape=(5, 6)))
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy')
x_data = np.random.random((32, 5, 6))
y_data = np.random.randint(0, 9, size=(32,5,1))
model.fit(x=x_data, y=y_data, batch_size=16, epochs=3)
Epoch 1/3
32/32 [==============================] - 1s 20ms/step - loss: 9.9664
Epoch 2/3
32/32 [==============================] - 0s 293us/step - loss: 9.9537
Epoch 3/3
32/32 [==============================] - 0s 164us/step - loss: 9.9425
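For the question as asked, a hedged variant of the same toy example with an accuracy metric added back and verbose set explicitly, so both loss and accuracy are printed per epoch:
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x=x_data, y=y_data, batch_size=16, epochs=3, verbose=2)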
I hope it solves your problem.

tf.keras loss becomes NaN

I'm programming a neural network in tf.keras, with 3 layers. My dataset is the MNIST dataset. I decreased the number of examples in the dataset, so the runtime is lower. This is my code:
import tensorflow as tf
from tensorflow.keras import layers
import numpy as np
import pandas as pd
!git clone https://github.com/DanorRon/data
%cd data
!ls
batch_size = 32
epochs = 10
alpha = 0.0001
lambda_ = 0
h1 = 50
train = pd.read_csv('/content/first-repository/mnist_train.csv.zip')
test = pd.read_csv('/content/first-repository/mnist_test.csv.zip')
train = train.loc['1':'5000', :]
test = test.loc['1':'2000', :]
train = train.sample(frac=1).reset_index(drop=True)
test = test.sample(frac=1).reset_index(drop=True)
x_train = train.loc[:, '1x1':'28x28']
y_train = train.loc[:, 'label']
x_test = test.loc[:, '1x1':'28x28']
y_test = test.loc[:, 'label']
x_train = x_train.values
y_train = y_train.values
x_test = x_test.values
y_test = y_test.values
nb_classes = 10
targets = y_train.reshape(-1)
y_train_onehot = np.eye(nb_classes)[targets]
nb_classes = 10
targets = y_test.reshape(-1)
y_test_onehot = np.eye(nb_classes)[targets]
model = tf.keras.Sequential()
model.add(layers.Dense(784, input_shape=(784,)))
model.add(layers.Dense(h1, activation='relu', kernel_regularizer=tf.keras.regularizers.l2(lambda_)))
model.add(layers.Dense(10, activation='sigmoid', kernel_regularizer=tf.keras.regularizers.l2(lambda_)))
model.compile(optimizer=tf.train.GradientDescentOptimizer(alpha),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train_onehot, epochs=epochs, batch_size=batch_size)
Whenever I run it, one of 3 things happens:
The loss decreases and the accuracy increases for a few epochs, until the loss becomes NaN for no apparent reason and the accuracy plummets.
The loss and accuracy stay the same for each epoch. Usually the loss is 2.3025 and the accuracy is 0.0986.
The loss starts at NaN (and stays that way), while the accuracy stays low.
Most of the time, the model does one of these things, but sometimes it does something random. It seems like the type of erratic behavior that occurs is completely random. I have no idea what the problem is. How do I fix this problem?
Edit: Sometimes, the loss decreases, but the accuracy stays the same. Also, sometimes the loss decreases and the accuracy increases, then after a while the accuracy decreases while the loss still decreases. Or, the loss decreases and the accuracy increases, then it switches and the loss goes up fast while the accuracy plummets, eventually ending with loss: 2.3025 acc: 0.0986.
Edit 2: This is an example of something that sometimes happens:
Epoch 1/100
49999/49999 [==============================] - 5s 92us/sample - loss: 1.8548 - acc: 0.2390
Epoch 2/100
49999/49999 [==============================] - 5s 104us/sample - loss: 0.6894 - acc: 0.8050
Epoch 3/100
49999/49999 [==============================] - 4s 90us/sample - loss: 0.4317 - acc: 0.8821
Epoch 4/100
49999/49999 [==============================] - 5s 104us/sample - loss: 2.2178 - acc: 0.1345
Epoch 5/100
49999/49999 [==============================] - 5s 90us/sample - loss: 2.3025 - acc: 0.0986
Epoch 6/100
49999/49999 [==============================] - 4s 90us/sample - loss: 2.3025 - acc: 0.0986
Epoch 7/100
49999/49999 [==============================] - 4s 89us/sample - loss: 2.3025 - acc: 0.0986
Edit 3: I changed the loss to mean squared error and the network works well now. Is there a way to keep it in cross entropy without it converging to a local minimum?
I changed the loss to mean squared error and the network works well now
MSE is not the appropriate loss function for such classification problems; you should certainly stick to loss = 'categorical_crossentropy'.
Most probably, the issue is due to your MNIST data not being normalized; you should normalize the final variables as:
x_train = x_train.values/255
x_test = x_test.values/255
Not normalizing input data is a known cause of exploding gradient problems, which is probably what is happening here.
Other advice: set activation='relu' for your first dense layer, and get rid of both the regularizer & initializer arguments from all layers (the default glorot_uniform is actually a better initializer, while regularization here may actually be harmful to performance).
As a general advice, try not to reinvent the wheel - start with a Keras example using the built-in MNIST data...
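Putting those suggestions together, a hedged sketch of a simplified version of the question's model (assuming x_train and x_test have already been scaled to [0, 1] as above; the softmax output is used here as the usual pairing with categorical cross-entropy):
model = tf.keras.Sequential()
model.add(layers.Dense(784, activation='relu', input_shape=(784,)))
model.add(layers.Dense(h1, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
model.compile(optimizer=tf.train.GradientDescentOptimizer(alpha),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train_onehot, epochs=epochs, batch_size=batch_size)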
The frustration you're feeling towards the seemingly random output of your code is understandable and correctly identified. Every time the model begins training, it randomly initializes the weights. Depending on this initialization, you see one of your three output scenarios.
The issue is most likely due to vanishing gradients. It's a phenomenon that occurs when backpropagation causes very small weights to be multiplied by a small number, creating an almost infinitesimally small value. The solution is to add a small jitter (1e-10) to each of your gradients (from within the cost function) so that they never reach zero.
There are tons of more detailed blogs about vanishing gradients online, and for an implementation example check out line 217 of this TensorFlow network.
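One way to read that suggestion in Keras terms (a sketch under that interpretation, not the answer's exact code): clip the network's outputs away from 0 inside a custom cross-entropy so the loss never evaluates log(0).
from tensorflow.keras import backend as K

def stable_categorical_crossentropy(y_true, y_pred, eps=1e-10):
    # keep predictions strictly inside (0, 1) so log() never sees 0
    y_pred = K.clip(y_pred, eps, 1.0 - eps)
    return -K.sum(y_true * K.log(y_pred), axis=-1)

model.compile(optimizer=tf.train.GradientDescentOptimizer(alpha),
              loss=stable_categorical_crossentropy,
              metrics=['accuracy'])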

Tensorflow Keras LSTM not training - affected by number_of_epochs, optimizer adam

I have 2 code snippets. One of them trains a model while the other one doesn't. I don't want to raise an issue on GitHub without getting to the bottom of this, and it already wasted a day of mine waiting for the incorrect model to train.
This is the model, which is correct. I'm running TensorFlow 1.10.1.
model = Sequential()
# I truncate the string at 20 characters; alphabet_listset is a sorted list of the set [A-Za-z0-9-_], which has len = 64
model.add(LSTM(512, return_sequences=True, input_shape=(20, len(alphabet_listset)), dropout=0.2, stateful=False))
model.add(LSTM(512, return_sequences=False, dropout=0.2, stateful=False))
model.add(Dense(2, activation="softmax"))
model.compile(optimizer=adam, loss='categorical_crossentropy',
              metrics=['accuracy'])  # adam here is at learning rate 1e-3
model.summary()
To create X_train and Y_train I use train_test_split.
The way I convert a string to a one-hot vector (even though there is a function for one-hot vectors for LSTMs now; if you add that, it would really help) is:
def string_vectorizer(strng, alphabet, max_str_len=20):
    vector = [[0 if char != letter else 1 for char in alphabet] for letter in strng[0:max_str_len]]
    while len(vector) != max_str_len:
        vector = [*vector, [0 for char in alphabet]]
    return np.array(vector)
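A quick illustration of the encoder on a toy alphabet (hypothetical, much smaller than the real 64-character one):
alphabet_listset = sorted(set("abc-_"))                # 5 characters: '-', '_', 'a', 'b', 'c'
vec = string_vectorizer("ab-", alphabet_listset, max_str_len=5)
print(vec.shape)                                       # (5, 5): max_str_len rows, one column per character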
The parts I mention as correct are indeed correct as this is not the first time I am training this model and have validated it. I need to update my models every month and when I was testing my architecture by running multiple models I came across this anomaly.
Here is the incorrect code
model.fit(X_train, to_categorical(Y_train, 2), epochs=1000,
          validation_data=(X_test, to_categorical(Y_test, 2)),
          verbose=2, shuffle=True)
loss, accuracy = model.evaluate(X_test, to_categorical(Y_test, 2))
The output of this incorrect snippet is the same as the correct snippet's log, except that the accuracy remains at 0.5454 for 12 epochs and the loss does not decrease. My sample data is split 50k correct to 60k incorrect labels. So if the model just predicts 1 for all the 60k incorrect labels, the accuracy would be 60k / (60k + 50k) => 0.54.
Here is the correct code, the only difference is the value of epochs.
expected_acc_eth, expected_loss_eth = 0.83, 0.40
while(True):
    model.fit(X_train, to_categorical(Y_train, 2), epochs=1,
              validation_data=(X_test, to_categorical(Y_test, 2)),
              verbose=2, shuffle=True)
    loss, accuracy = model.evaluate(X_test, to_categorical(Y_test, 2))
    if((accuracy > expected_acc_eth) & (loss < expected_loss_eth)):
        break
Output of this correct code
Train on 99000 samples, validate on 11000 samples
Epoch 1/1
- 1414s - loss: 0.6847 - acc: 0.5578 - val_loss: 0.6698 - val_acc: 0.5961
11000/11000 [==============================] - 36s 3ms/step
Train on 99000 samples, validate on 11000 samples
Epoch 1/1
- 1450s - loss: 0.6777 - acc: 0.5764 - val_loss: 0.6707 - val_acc: 0.5886
11000/11000 [==============================] - 36s 3ms/step
Train on 99000 samples, validate on 11000 samples
Epoch 1/1
- 1425s - loss: 0.6729 - acc: 0.5862 - val_loss: 0.6643 - val_acc: 0.6030
11000/11000 [==============================] - 37s 3ms/step
Train on 99000 samples, validate on 11000 samples
Epoch 1/1
- 1403s - loss: 0.6681 - acc: 0.5948 - val_loss: 0.6633 - val_acc: 0.6092
11000/11000 [==============================] - 35s 3ms/step
Train on 99000 samples, validate on 11000 samples
Epoch 1/1
I have seen this Stack Overflow post, which states that early stopping affects the way models learn, but it goes off topic with the steps-per-epoch theory. I tried setting batch_size, but that doesn't help, or I couldn't do it correctly since it depends inversely on the learning rate of adam and my scale must have been off. I understand deep nets and machine learning to some extent, but this is too much of a difference between the outputs.
I hope it saves others who face similar bugs from wasting too much time like me!
Can someone please elaborate on this. Any help is much appreciated!
From our discussion in the comments, it sounds like the issue arises in the implementation of the Adam optimizer failing to update anything when model.fit() is called with epochs > 1.
I would be interested in seeing why this is, but a (slower) working solution for now is to use optimizer=rmsprop instead of optimizer=adam in your call to model.compile().
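A hedged sketch of the suggested change, keeping everything else as in the question (the quoted optimizer string is the standard Keras alias for RMSprop):
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X_train, to_categorical(Y_train, 2), epochs=1000,
          validation_data=(X_test, to_categorical(Y_test, 2)),
          verbose=2, shuffle=True)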
