Training works but prediction produces constant values (CNN with PyTorch) - python

I have a model that tries to predict the class of an image: cat or dog. I get 95% accuracy in training. However, when I try to predict a single image, I am stuck with an almost constant output every time I run the model. There are some non-constant values, but they mostly look like catastrophic failure.
I have read similar topics on forums, but that hasn't helped, as it appears there is no single established solution for this problem...
I have tried the following:
Changing the number of epochs from 5 to 15, 20, 30...
Changing lr = 0.001 to 0.01, 0.0001...
I implemented the model with both dropout regularization and batch normalization...
I changed the testing pictures...
Changing the last activation layer from softmax to torch.sigmoid...
Reducing the batch size from 100 to 30, 75...
Trying with a single batch, which results in normal accuracy, loss and predictions.
My dataset is scaled, which is mentioned on forums as a solution.
My optimizer is Adam, which is mentioned on forums as a solution.
Loading the dataset with torch.utils.data.DataLoader...
Sampling randomly...
I saved and loaded the model, in case there were problems with that. However, I already checked that the state_dicts are different...
I re-prepared the data, which somehow caused the constant prediction to flip (dog to cat)? I don't know if that's a coincidence, though.
Infos:
Dataset :
https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip
Here is all my code, with predictions, in a Jupyter Notebook; feel free to investigate. I am really tired of this problem. Any help is highly appreciated!
https://github.com/yusuftengriverdi/neural_networks/blob/master/CNN_Last.ipynb
Similar topics around the web:
https://discuss.pytorch.org/t/rnn-predicting-a-constant-output/40397/5
https://discuss.pytorch.org/t/cnn-does-not-predict-properly-does-not-converge-as-expected/43567
https://discuss.pytorch.org/t/making-a-prediction-with-a-trained-model/2193
https://datascience.stackexchange.com/questions/46779/predict-gives-the-same-output-value-for-every-image-keras
https://github.com/keras-team/keras/issues/6447
PyTorch model prediction fail for single item
Having trouble with CNN prediction

If something works in training but fails during prediction, the most likely cause is that you're not preprocessing the data the same way.
I had a look at the notebook (a huge amount of code; in future, please condense it to just the relevant parts here). At a glance, this is your prediction code, which doesn't work as expected:
img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
plt.imshow(img, cmap='gray')
x = torch.Tensor([i for i in img]).view(-1, 50, 50)
y = torch.Tensor([0, 1]).to(device)
test_x = x.view(-1, 1, 50, 50)
test_x = test_x.to(device)
net.eval()
# with torch.no_grad():
yhat.append(net(test_x))
But during training you're using a dataloader:
testloader = DataLoader(v_dataset, batch_size=BATCH_SIZE, sampler=test_sampler)
...
test_dt = next(iter(testloader))
X, y = test_dt[0].view(-1, 1, 50, 50), test_dt[1]
val_acc, val_loss = fwd_pass(X.view(-1, 1, 50, 50).to(device), y.to(device))
which works (since your test/validation accuracy goes up to a good level).
Figure out what the dataloader code path does that the other code path doesn't, and you'll have the solution. E.g., load the same image both ways and compare: same dimensions? Same data mean and standard deviation? Etc.
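A quick diagnostic sketch along those lines (assuming both paths yield image tensors; the variable names are illustrative):
loader_img = next(iter(testloader))[0][0]                         # first image via the dataloader path
manual_img = torch.Tensor([i for i in img]).view(-1, 1, 50, 50)[0]  # the manual path from above

# Same shape? Same value range and statistics? Any mismatch is your bug.
for name, t in [("loader", loader_img), ("manual", manual_img)]:
    print(name, tuple(t.shape), t.float().mean().item(), t.float().std().item())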
For a shortcut - just use a dataloader to make predictions as well. P.S. Yes, it is okay to create a dataloader for just one image.
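A minimal sketch of that shortcut (IMG_SIZE, net, and device come from the question; the /255.0 scaling is an assumption, so mirror whatever your training preprocessing actually does):
from torch.utils.data import DataLoader, TensorDataset

img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
x = torch.Tensor(img).view(-1, 1, IMG_SIZE, IMG_SIZE) / 255.0  # scaling is an assumption: match your training pipeline

single_loader = DataLoader(TensorDataset(x), batch_size=1)  # yes, a dataloader for one image

net.eval()
with torch.no_grad():
    for (batch,) in single_loader:
        yhat = net(batch.to(device))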

Related

A Classifier Network Seems to be "Forgetting" older samples

This is a strange problem: imagine a neural network classifier. It is a simple linear layer followed by a sigmoid activation, with an input size of 64 and an output size of 112. There are also 112 training samples, where I expect the output to be a one-hot vector. So the basic structure of the training loop is as follows, where samples is a list of (input_state, index) pairs with integer indices:
import torch
import torch.nn as nn
import torch.optim as optim

# nn.Sigmoid() here, per the description above (the original snippet had a stray nn.Sequential())
model = nn.Sequential(nn.Linear(64, 112), nn.Sigmoid())
loss_fn = nn.BCELoss()
optimizer = optim.AdamW(model.parameters(), lr=3e-4)

for epoch in range(500):
    for input_state, index in samples:
        one_hot = torch.zeros(112).float()
        one_hot[index] = 1.0
        optimizer.zero_grad()
        prediction = model(input_state)
        loss = loss_fn(prediction, one_hot)
        loss.backward()
        optimizer.step()
This model does not perform well, but I don't think it's a problem with the model itself, rather with how it's trained. I think this is happening because, since the one_hot tensor is mostly zeros, the model just tends to gravitate toward all of the outputs being zeros, which is what's happening. The question becomes: how does this get solved? I tried using the average loss over all the samples, to no avail. So what do I do?
So this is very embarrassing, but the answer actually lies in how I process my data. This is a text-input project, so I used basic Python lists to create blocks of messages, but when I did this, I ended up making it so that all of the inputs the net got were the same, while the output was different every time. I solved this problem with the copy method.
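The pitfall is easy to reproduce; a minimal sketch (all names illustrative):
blocks = []
current = []
for word in ["a", "b", "c"]:
    current.append(word)
    blocks.append(current)         # every entry aliases the SAME list object
print(blocks)                      # [['a', 'b', 'c'], ['a', 'b', 'c'], ['a', 'b', 'c']]

fixed = []
current = []
for word in ["a", "b", "c"]:
    current.append(word)
    fixed.append(current.copy())   # snapshot the list at this point instead
print(fixed)                       # [['a'], ['a', 'b'], ['a', 'b', 'c']]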

tensorflow 2.0, model.fit() : Your input ran out of data

I am absolutely new to TensorFlow and Keras, and I am trying to make my way around trying out some code that I am finding online.
In particular I am using the Fashion-MNIST dataset, consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each of them is a 28x28 grayscale image.
I am following this tutorial "https://towardsdatascience.com/building-your-first-neural-network-in-tensorflow-2-tensorflow-for-hackers-part-i-e1e2f1dfe7a0", and I have no problem until the definition of
history = model.fit(
    train_dataset.repeat(),
    epochs=10,
    steps_per_epoch=500,
    validation_data=val_dataset.repeat(),
    validation_steps=2)
As far as I understood, I need to use train_dataset.repeat() as the input dataset because otherwise I won't have enough training examples for those hyperparameter values (epochs, steps_per_epoch).
My question is: how can I avoid having to use .repeat()?
How do I need to change the hyperparameters?
I am copying the code here, for simplicity:
def preprocess(x, y):
    x = tf.cast(x, tf.float32) / 255.0
    y = tf.cast(y, tf.float32)
    return x, y

def create_dataset(xs, ys, n_classes=10):
    ys = tf.one_hot(ys, depth=n_classes)
    return tf.data.Dataset.from_tensor_slices((xs, ys)).map(preprocess).shuffle(len(ys)).batch(128)

model.compile(optimizer='adam', loss=tf.losses.CategoricalCrossentropy(from_logits=True), metrics=['accuracy'])

history1 = model.fit(train_dataset.repeat(),
                     epochs=10,
                     steps_per_epoch=500,
                     validation_data=val_dataset.repeat(),
                     validation_steps=2)
Thanks!
If you don't want to use .repeat(), you need your model to pass through your entire dataset exactly once per epoch.
To do that, you need to calculate how many steps it takes for your model to go through the entire dataset; the calculation is easy:
steps_per_epoch = len(train_dataset) // batch_size
So with a train_dataset of 60,000 samples and a batch_size of 128, you need 468 steps per epoch.
By setting this parameter like that, you make sure that you do not exceed the size of your dataset.
I encountered the same problem, and here is what I found.
The documentation of tf.keras.Model.fit says: "If x is a tf.data dataset, and 'steps_per_epoch' is None, the epoch will run until the input dataset is exhausted."
In other words, we don't need to specify steps_per_epoch if we use a tf.data.Dataset as the training data; TF will figure out how many steps there are. Meanwhile, TF will automatically repeat the dataset when the next epoch begins, so you can specify any number of epochs.
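Put together, the simplest fix is a sketch like this (assuming train_dataset and val_dataset are the finite, batched datasets built above):
history = model.fit(train_dataset,                # no .repeat()
                    epochs=10,                    # no steps_per_epoch needed
                    validation_data=val_dataset)  # no validation_steps needed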
When passing an infinitely repeating dataset (e.g. dataset.repeat()), you must specify the steps_per_epoch argument.

PyTorch simulation fails to converge on convex loss function when not initialized with 0

My code works when the weights are initialized to 0. When I initialize them according to some seed, they fail to converge. This should be a bug, since the loss function is convex.
I filtered two labels from MNIST (0 and 1) and then trained a logistic regression model using PyTorch. Since I use only 200 training samples (and 784 parameters), the model should quickly converge to 100% accuracy on the training set. This is not the case when the weights are initialized from some seed.
I had some trouble sharing my code on Stack Overflow, so here is a link to it: https://drive.google.com/file/d/1ELe8TIWrXMiXgsB63B0Ss43GPr719rGc/view?usp=sharing
Your data are not rescaled and normalized. If you look at the images variable in your training loop, its values are between 0 and 255, which is in all likelihood hurting your training process.
There are cleaner ways to subsample the dataset as you want, but without modifying too much of your code, use this data-loading definition:
import torch
import torchvision.datasets as dsets
import torchvision.transforms as transforms

# Load dataset
preprocessing = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize((0.1307,), (0.3081,))])
train_dataset = dsets.MNIST(root='./data', train=True, transform=preprocessing, download=True)

# Filter samples by label (to get binary classification) and by number of training samples
Binary_filter = torch.add(train_dataset.targets == 1, train_dataset.targets == 0)
train_dataset.data, train_dataset.targets = train_dataset.data[Binary_filter], train_dataset.targets[Binary_filter]

TrainSet_filter = torch.cat((torch.ones(num_of_training_samples),
                             torch.zeros(len(train_dataset.targets) - num_of_training_samples)), 0).bool()
train_dataset.data, train_dataset.targets = train_dataset.data[TrainSet_filter], train_dataset.targets[TrainSet_filter]

# Make dataset iterable (num_of_training_samples and batch_size come from your code)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
I have ~100% accuracy in about 5-10 epochs.
Your loss function (BCE) is convex only with respect to the outputs of the deep network, not with respect to the weights.
You definitely can't assume that any local minimum is also a global minimum.

Keras Validation Accuracy is Zero but other metrics are normal

I am working on a computer vision problem in Keras and I have run into an interesting problem. My val_acc is 0.0000e+00. This is especially interesting as my other metrics, such as loss, acc, and val_loss, are all acting normally.
This started happening when I switched from the Sequence data generator to a custom one that I'm pretty sure is working as intended. My issue is very similar to this one ("validation accuracy is 0 with Keras fit_generator"), but no answer was reached in that thread.
I have checked to make sure my activations and loss metrics are appropriate for my particular problem. I am using loss='categorical_crossentropy' and metrics=['accuracy'], and I am attempting to predict the month that a certain spectrogram comes from. The validation data is being loaded in exactly the same way as the training data, so I really can't figure out what's happening. Also, even random guessing should give a 1/12 val_acc, right? It can't be zero.
Here is my model architecture:
x = (Convolution2D(32, 5, 5, activation='relu', input_shape=(501, 501, 1)))(input_img)
x = (MaxPooling2D(pool_size=(2, 2)))(x)
x = (Convolution2D(32, 5, 5, activation='relu'))(x)
x = (MaxPooling2D(pool_size=(2, 2)))(x)
x = (Dropout(0.25))(x)
x = (Flatten())(x)
x = (Dense(128, activation='relu'))(x)
x = (Dropout(0.5))(x)
classify = (Dense(12, activation='softmax', kernel_regularizer=regularizers.l1_l2(l1=0.001, l2=0.001)))(x)
model = Model(input_img, classify)
model.compile(loss='categorical_crossentropy', optimizer='nadam', metrics=['accuracy'])
and here is my call to fit_generator:
model.fit_generator(generator=pd.data_generator(folder, 'train'),
                    validation_data=pd.data_generator(folder, 'test'),
                    steps_per_epoch=120,
                    validation_steps=24,
                    nb_epoch=20,
                    verbose=1,
                    shuffle=True,
                    callbacks=[tensorboard_callback, early_stop_callback])
and finally here is the important part of my data generator:
if mode == 'test':
    print('test')
    while True:
        for things in up.unpickle_batch(folder, 50, 6000, 7200):  # the last 1200 things, in batches of 50
            random.shuffle(things)
            test_spect = []
            test_months = []
            for thing in things:
                test_spect.append(thing.spect)       # get batch data
                test_months.append(thing.month - 1)  # months go from 1-12 but should go from 0-11 for to_categorical
            x_test = np.asarray(test_spect)          # prepare batch data
            x_test = x_test.astype('float32')
            x_test /= np.amax(x_test)  # - 0.5
            X_test = np.reshape(x_test, (-1, 501, 501, 1))
            Y_test = np_utils.to_categorical(test_months, 12)
            yield X_test, Y_test                     # return batch data
Check for bad data.
Make sure your data is what you think it is: shuffled, distributed the same as your validation and/or test set, and free of misleading/erroneous/contradictory samples. You can probably generate a failproof dataset (e.g. distinguish dark images from light ones, or sharp versus blurry; see the sketch below) and prove that everything but the data is OK. If you can't, then look more closely at your code. This, however, sounds like a data problem.
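A minimal sketch of that failproof-dataset idea for the dark-versus-light case (sizes and names are illustrative, deliberately smaller than the question's 501x501 inputs):
import numpy as np
from tensorflow.keras.utils import to_categorical

n = 200
dark = np.random.uniform(0.0, 0.2, size=(n, 64, 64, 1)).astype('float32')
light = np.random.uniform(0.8, 1.0, size=(n, 64, 64, 1)).astype('float32')
X = np.concatenate([dark, light])
y = to_categorical([0] * n + [1] * n, 2)
# Train and validate on (X, y): if val_acc is still zero on data this easy,
# the bug is in the code/pipeline, not in the real data.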
I just fixed a similar problem in a simple 3-layer MLP network, for which training loss and accuracy were heading in reasonable directions and validation loss was following training loss (but lagging), yet validation accuracy hovered at zero. There was an off-by-one error in my training dataset generation (a sampling script from a larger set) that meant that one sample in the entire block of samples for one type had the label for the next block, for a different type. 499 correct samples out of 500 were insufficient to keep the training on track.

Why is TensorFlow's `tf.data` package slowing down my code?

I'm just learning to use TensorFlow's tf.data API, and I've found that it slows my code down a lot, measured in time per epoch. This is the opposite of what it's supposed to do, I thought. I wrote a simple linear regression program to test it out.
Tl;dr: with 100,000 training samples, tf.data slows time per epoch down by about a factor of ten if you're using full-batch training, and worse if you use smaller batches. The opposite is true with 500 training samples.
My question: What is going on? Is my implementation flawed? Other sources I've read have tf.data improving speeds by about 30%.
import tensorflow as tf
import numpy as np
import timeit
import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
tf.logging.set_verbosity(tf.logging.ERROR)

n_epochs = 10
input_dimensions_list = [10]

def function_to_approximate(x):
    return np.dot(x, random_covector).astype(np.float32) + np.float32(.01) * np.random.randn(1, 1).astype(np.float32)

def regress_without_tfData(n_epochs, input_dimension, training_inputs, training_labels):
    tf.reset_default_graph()
    weights = tf.get_variable("weights", initializer=np.random.randn(input_dimension, 1).astype(np.float32))
    X = tf.placeholder(tf.float32, shape=(None, input_dimension), name='X')
    Y = tf.placeholder(tf.float32, shape=(None, 1), name='Y')
    prediction = tf.matmul(X, weights)
    loss = tf.reduce_mean(tf.square(tf.subtract(prediction, Y)))
    loss_op = tf.train.AdamOptimizer(.01).minimize(loss)
    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)
        for _ in range(n_epochs):
            sess.run(loss_op, feed_dict={X: training_inputs, Y: training_labels})

def regress_with_tfData(n_epochs, input_dimension, training_inputs, training_labels, batch_size):
    tf.reset_default_graph()
    weights = tf.get_variable("weights", initializer=np.random.randn(input_dimension, 1).astype(np.float32))
    X, Y = data_set.make_one_shot_iterator().get_next()
    prediction = tf.matmul(X, weights)
    loss = tf.reduce_mean(tf.square(tf.subtract(prediction, Y)))
    loss_op = tf.train.AdamOptimizer(.01).minimize(loss)
    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)
        while True:
            try:
                sess.run(loss_op)
            except tf.errors.OutOfRangeError:
                break

for input_dimension in input_dimensions_list:
    for data_size in [500, 100000]:
        training_inputs = np.random.randn(data_size, input_dimension).astype(np.float32)
        random_covector = np.random.randint(-5, 5, size=(input_dimension, 1))
        training_labels = function_to_approximate(training_inputs)
        print("Not using tf.data, with data size "
              "{}, input dimension {} and training with "
              "a full batch, it took an average of "
              "{} seconds to run {} epochs.\n".
              format(
                  data_size,
                  input_dimension,
                  # note: total over 3 timeit runs, not divided by 3 like the tf.data timing below
                  timeit.timeit(
                      lambda: regress_without_tfData(
                          n_epochs, input_dimension,
                          training_inputs, training_labels
                      ),
                      number=3
                  ),
                  n_epochs))

for input_dimension in input_dimensions_list:
    for data_size, batch_size in [(500, 50), (500, 500), (100000, 50), (100000, 100000)]:
        training_inputs = np.random.randn(data_size, input_dimension).astype(np.float32)
        random_covector = np.random.randint(-5, 5, size=(input_dimension, 1))
        training_labels = function_to_approximate(training_inputs)
        data_set = tf.data.Dataset.from_tensor_slices((training_inputs, training_labels))
        data_set = data_set.repeat(n_epochs)
        data_set = data_set.batch(batch_size)
        print("Using tf.data, with data size "
              "{}, and input dimension {}, and training with "
              "batch size {}, it took an average of {} seconds "
              "to run {} epochs.\n".
              format(
                  data_size,
                  input_dimension,
                  batch_size,
                  timeit.timeit(
                      lambda: regress_with_tfData(
                          n_epochs, input_dimension,
                          training_inputs, training_labels,
                          batch_size
                      ),
                      number=3
                  ) / 3,
                  n_epochs
              ))
This outputs for me:
Not using tf.data, with data size 500, input dimension 10 and training
with a full batch, it took an average of 0.20243382899980134 seconds
to run 10 epochs.
Not using tf.data, with data size 100000, input dimension 10 and
training with a full batch, it took an average of 0.2431719040000644
seconds to run 10 epochs.
Using tf.data, with data size 500, and input dimension 10, and
training with batch size 50, it took an average of 0.09512088866661846
seconds to run 10 epochs.
Using tf.data, with data size 500, and input dimension 10, and
training with batch size 500, it took an average of
0.07286913600000844 seconds to run 10 epochs.
Using tf.data, with data size 100000, and input dimension 10, and
training with batch size 50, it took an average of 4.421892363666605
seconds to run 10 epochs.
Using tf.data, with data size 100000, and input dimension 10, and
training with batch size 100000, it took an average of
2.2555197536667038 seconds to run 10 epochs.
Edit: Fixed an important issue that Fred Guth pointed out. It didn't much affect the results, though.
I wanted to test the Dataset API, which seems to be really convenient for processing data. I did a lot of time testing of this API on CPU, GPU, and multi-GPU setups, for small and large networks with different types of data.
First, it seems to me that your code is fine. But I need to point out that your network is just one simple layer.
Now, the Dataset API is not suited to your type of network, but to networks with a lot more complexity. Why? For several reasons that I explain below (found in my quest to understand the Dataset API).
Firstly, on one hand the Dataset API processes data batch by batch, whereas on the other hand the placeholder data is preprocessed up front. Therefore, if the data fits in your RAM, you can save time by preprocessing it. Here your data is just too "simple". If you want to test what I am saying, try to find a really, really big dataset to process. Nevertheless, the Dataset API can be tuned with prefetching. You can take a look at this tutorial, which explains really well why it is good to process data with prefetch.
Secondly, in my quest to use the Dataset API for multi-GPU training, I discovered that, as far as I know, the old preprocessing way is faster than the Dataset API for small neural networks. You can verify that by creating a simple stackable RNN which takes a sequence as input. You can try different stack sizes (I have tested 1, 2, 10 and 20). You will see that, using the Dataset API, on 1 GPU or on 4 GPUs, the time hardly differs for small RNN stacks (1, 2 and 5).
To summarize, the Dataset API is suited to neural networks whose data cannot be preprocessed up front. Depending on your task, it may be more convenient to preprocess the data, for example if you want to tweak your network in order to improve it. I agree that the Dataset API is really cool for batching, padding, and shuffling large amounts of data, but it is also not suited to multi-GPU training.
First:
You are recreating the dataset unnecessarily:
data_set = tf.data.Dataset.from_tensor_slices((training_inputs, training_labels))
Create the dataset prior to the loop, and change the regress_with_tfData input signature to take the dataset instead of training_inputs and training_labels.
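A sketch of that refactor, reusing the function body from the question (only the signature and dataset handling change):
def regress_with_tfData(n_epochs, input_dimension, data_set, batch_size):
    tf.reset_default_graph()
    weights = tf.get_variable("weights",
                              initializer=np.random.randn(input_dimension, 1).astype(np.float32))
    X, Y = data_set.make_one_shot_iterator().get_next()  # dataset is passed in, not rebuilt here
    prediction = tf.matmul(X, weights)
    loss = tf.reduce_mean(tf.square(tf.subtract(prediction, Y)))
    loss_op = tf.train.AdamOptimizer(.01).minimize(loss)
    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)
        while True:
            try:
                sess.run(loss_op)
            except tf.errors.OutOfRangeError:
                break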
Second:
The problem here is that minibatches of size 50 or even 500 are too small to compensate for the cost of tf.data's pipeline-building latency. You should increase the minibatch size. Interestingly, you did so with a minibatch of size 100,000, but then maybe it is too big (I am not certain of this; I think it would need more tests).
There are a couple of things you could try:
1) Increase the minibatch size to something like 10,000 and see if you get an improvement.
2) Change your pipeline to use an iterator, for example:
data_set = tf.data.Dataset.from_tensor_slices((training_inputs, training_labels))
data_set = data_set.repeat(n_epochs)
data_set = data_set.batch(batch_size)
iterator = data_set.make_one_shot_iterator()
....
next_element = iterator.get_next()
That is because you are comparing apples with bananas.
On one hand, when using placeholders, you are providing a monolithic tensor as-is. On the other hand, when using Dataset, you are slicing the tensor into individual samples. This is very different.
The equivalent of providing a monolithic placeholder tensor with the Dataset pipeline is tf.data.Dataset.from_tensors. When I use from_tensors in your example, I get similar (actually smaller) computation times than with placeholders.
If you want to compare a more sophisticated pipeline using from_tensor_slices, you should make it a fair comparison with placeholders: for example, shuffle your data, or add some preprocessing on your slices. I have no doubt you will then observe the performance gain that makes people switch to this pipeline.
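For reference, a minimal sketch of the from_tensors variant described above (the rest of the training loop stays the same):
data_set = tf.data.Dataset.from_tensors((training_inputs, training_labels))
data_set = data_set.repeat(n_epochs)  # no .batch(): each element is already the full batch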
One possible thing you are missing is a prefetch. Add a prefetch of 1 at the end of your data pipeline like so:
data_set = tf.data.Dataset.from_tensor_slices((training_inputs, training_labels))
data_set = data_set.repeat(n_epochs)
data_set = data_set.batch(batch_size).prefetch(1)
Adding a prefetch of 1 at the end of your dataset pipeline means you try to fetch one batch of data while training is happening. This way you won't be waiting around while the batch is prepared; it should be ready to go as soon as each train iteration is done.
The accepted answer is no longer valid, as the TF behavior has changed. Per the documentation:
"from_tensors produces a dataset containing only a single element. To slice the input tensor into multiple elements, use from_tensor_slices instead."
This means you cannot batch it:
import numpy as np
import tensorflow as tf

X = np.arange(10)
data = tf.data.Dataset.from_tensors(X)
data = data.batch(2)
for t in data.as_numpy_iterator():
    print(t)
# only one row, whereas 5 were expected!
The documentation recommends from_tensor_slices, which does slice into batchable elements, as the sketch below shows. But it has quite some overhead compared to NumPy slicing; slow slicing is an open issue: https://github.com/tensorflow/tensorflow/issues/39750
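For contrast, a minimal sketch of the from_tensor_slices version (same X as above):
data = tf.data.Dataset.from_tensor_slices(X)  # ten scalar elements
data = data.batch(2)
for t in data.as_numpy_iterator():
    print(t)
# five rows of two elements each, as expected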
Essentially, slicing in TF is slow and impacts input-bound or light models such as small networks (regression, word2vec).
