Batch Input Keras Shared Parameters - python

I am building a network to rank a set of N inputs. Ideally they should all be fed in at the same time and share parameters, and their target should be an N-hot vector matching the inputs.
This means my input shape should be (batch_size, N, sequence_length, feature_length).
But Keras will throw an error for any input larger than 3 dimensions, as shown here:
ValueError: Input 0 is incompatible with layer lstm_2: expected
ndim=3, found ndim=4
My current Keras setup is:
x = Input(shape=(72,300))
aux_input = Input(shape=(72, 4))
probs = Input(shape=(1,))
#dim_red_1 = Dense(100)(x)
dim_red_2 = Dense(20, activation='tanh')(x)
cat = concatenate([dim_red_2, aux_input])
encoded = LSTM(64)(cat)
cat2 = concatenate([encoded, probs])
output = Dense(1, activation='sigmoid')(cat2)
lstm_model = Model(inputs=[x, aux_input, probs], outputs=output)
lstm_model.compile(optimizer='ADAM', loss='binary_crossentropy', metrics=['accuracy'])
Is there a way to achieve this with Keras?
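A common way to get parameter sharing across the N inputs is TimeDistributed, which applies one wrapped layer (a single set of weights) to every slice along the N axis. A minimal sketch, assuming N=5 items and the 72x300 sequences from the code above (the aux/probs inputs are omitted for brevity):
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense, LSTM, TimeDistributed

N = 5                                                          # number of items to rank (assumption)
x = Input(shape=(N, 72, 300))                                  # (batch, N, sequence_length, feature_length)
h = TimeDistributed(Dense(20, activation='tanh'))(x)           # shared projection per item
h = TimeDistributed(LSTM(64))(h)                               # shared LSTM -> (batch, N, 64)
scores = TimeDistributed(Dense(1, activation='sigmoid'))(h)    # one score per item -> (batch, N, 1)
model = Model(x, scores)
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()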

Although your code seems fine, make sure to import the right packages:
import numpy as np
from tensorflow.python.keras import Input
from tensorflow.python.keras.engine.training import Model
from tensorflow.python.keras.layers import Dense, LSTM, Concatenate
a = np.zeros(shape=[1000, 72, 300])
b = np.zeros(shape=[1000, 72, 4])
c = np.zeros(shape=[1000, 1])
d = np.zeros(shape=[1000, 1])
x = Input(shape=(72, 300))
aux_input = Input(shape=(72, 4))
probs = Input(shape=(1,))
dim_red_2 = Dense(20, activation='tanh')(x)
cat = Concatenate()([dim_red_2, aux_input])
encoded = LSTM(64)(cat)
cat2 = Concatenate()([encoded, probs])
output = Dense(1, activation='sigmoid')(cat2)
lstm_model = Model(inputs=[x, aux_input, probs], outputs=output)
lstm_model.compile(optimizer='ADAM', loss='binary_crossentropy', metrics=['accuracy'])
lstm_model.summary()
lstm_model.fit([a, b, c], d, batch_size=256)
output:
256/1000 [======>.......................] - ETA: 2s - loss: 0.6931 - acc: 1.0000
512/1000 [==============>...............] - ETA: 1s - loss: 0.6910 - acc: 1.0000
768/1000 [======================>.......] - ETA: 0s - loss: 0.6885 - acc: 1.0000
1000/1000 [==============================] - 1s 1ms/step - loss: 0.6859 - acc: 1.00

Related

Why I am getting a negative loss and negative validation loss in my model

I am training a variational autoencoder on the USPS dataset of shape (7291, 16, 16). Below is my code snippet. I also tried the same snippet on the MNIST dataset of shape (60000, 28, 28), and everything worked fine. Both are grayscale images. I can't figure out why I am getting a negative value for the training loss and validation loss on the USPS dataset. The execution is quite straightforward; the only change from the MNIST model is mnist.load_data() to usps.load_data().
I have also tried reducing the number of layers in both the encoder and decoder networks, but the result for the USPS model stays the same. I can't figure out what exactly I am getting wrong; please, I need your assistance to understand the reason for the negative values.
!pip install extra_keras_datasets
#######################################
from extra_keras_datasets import usps
import keras
from keras.layers import Conv2D, Conv2DTranspose, Input, Flatten, Dense, Lambda, Reshape
#from keras.layers import BatchNormalization
from keras.models import Model
from keras.datasets import mnist
import tensorflow.compat.v1.keras.backend as K
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
import numpy as np
import matplotlib.pyplot as plt
# Load MNIST
# (x_train, y_train), (x_test, y_test) = mnist.load_data()
(x_train, y_train), (x_test, y_test) = usps.load_data()
#Normalize and reshape ============
#Norm.
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train = x_train / 255
x_test = x_test / 255
# Reshape
img_width = x_train.shape[1]
img_height = x_train.shape[2]
num_channels = 1 #MNIST --> grey scale so 1 channel
x_train = x_train.reshape(x_train.shape[0], img_height, img_width, num_channels)
x_test = x_test.reshape(x_test.shape[0], img_height, img_width, num_channels)
input_shape = (img_height, img_width, num_channels)
# ========================
# BUILD THE MODEL
# # ================= #############
# # Encoder
#Let us define 4 conv2D, flatten and then dense
# # ================= ############
latent_dim = 2 # Number of latent dim parameters
#Create the model
input_img = Input(shape=input_shape, name='encoder_input')
x = Conv2D(32, 3, padding='same', activation='relu')(input_img)
x = Conv2D(64, 3, padding='same', activation='relu',strides=(2, 2))(x)
x = Conv2D(64, 3, padding='same', activation='relu')(x)
x = Conv2D(64, 3, padding='same', activation='relu')(x)
conv_shape = K.int_shape(x) #Shape of conv to be provided to decoder
print(conv_shape)
#Flatten
x = Flatten()(x)
x = Dense(32, activation='relu')(x)
# Two outputs, for latent mean and log variance (std. dev.)
#Use these to sample random variables in latent space to which inputs are mapped.
z_mu = Dense(latent_dim, name='latent_mu')(x) #Mean values of encoded input
z_sigma = Dense(latent_dim, name='latent_sigma')(x) #Std dev. (variance) of encoded input
#REPARAMETERIZATION TRICK
# Define a sampling function to sample from the latent distribution.
# Reparameterize the sample as z = mu + sigma * eps (with z_sigma
# interpreted as log-variance, so sigma = exp(z_sigma / 2)).
# This keeps the random sampling outside the differentiable path,
# so gradients can be estimated accurately through mu and sigma.
def sample_z(args):
    z_mu, z_sigma = args
    eps = K.random_normal(shape=(K.shape(z_mu)[0], K.int_shape(z_mu)[1]))
    return z_mu + K.exp(z_sigma / 2) * eps
# Sample a vector from the latent distribution.
# z is the Lambda custom layer we are adding for gradient descent calculations
# using mu and variance (sigma).
z = Lambda(sample_z, output_shape=(latent_dim, ), name='z')([z_mu, z_sigma])
#Z (lambda layer) will be the last layer in the encoder.
# Define and summarize encoder model.
encoder = Model(input_img, [z_mu, z_sigma, z], name='encoder')
print(encoder.summary())
Decoder
decoder_input = Input(shape=(latent_dim, ), name='decoder_input')   # the sampled latent vector is the decoder's input
x = Dense(conv_shape[1]*conv_shape[2]*conv_shape[3], activation='relu')(decoder_input)
# reshape to the shape of last conv. layer in the encoder, so we can
x = Reshape((conv_shape[1], conv_shape[2], conv_shape[3]))(x)
# upscale (conv2D transpose) back to original shape
# use Conv2DTranspose to reverse the conv layers defined in the encoder
x = Conv2DTranspose(32, 3, padding='same', activation='relu',strides=(2, 2))(x)
#Can add more conv2DTranspose layers, if desired.
#Using sigmoid activation
x = Conv2DTranspose(num_channels, 3, padding='same', activation='sigmoid', name='decoder_output')(x)
# Define and summarize decoder model
decoder = Model(decoder_input, x, name='decoder')
# apply the decoder to the latent sample
z_decoded = decoder(z)
decoder.summary()
custom loss and model fitting
#VAE is trained using two loss functions reconstruction loss and KL divergence
#Let us add a class to define a custom layer with loss
class CustomLayer(keras.layers.Layer):

    def vae_loss(self, x, z_decoded):
        x = K.flatten(x)
        z_decoded = K.flatten(z_decoded)
        # Reconstruction loss (as we used sigmoid activation we can use binary crossentropy)
        recon_loss = keras.metrics.binary_crossentropy(x, z_decoded)
        # KL divergence
        kl_loss = -5e-4 * K.mean(1 + z_sigma - K.square(z_mu) - K.exp(z_sigma), axis=-1)
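        # In symbols: for q = N(mu, sigma^2) against the prior p = N(0, I),
        #   KL(q || p) = -0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2),
        # with z_sigma playing the role of log(sigma^2); the -5e-4 factor is this
        # code's down-weighting of the KL term against the reconstruction loss.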
        return K.mean(recon_loss + kl_loss)
    # add custom loss to the class
    def call(self, inputs):
        x = inputs[0]
        z_decoded = inputs[1]
        loss = self.vae_loss(x, z_decoded)
        self.add_loss(loss, inputs=inputs)
        return x
# apply the custom loss to the input images and the decoded latent distribution sample
y = CustomLayer()([input_img, z_decoded])
# y is basically the original image after encoding input img to mu, sigma, z
# and decoding sampled z values.
#This will be used as output for vae
# =================
# VAE
# =================
vae = Model(input_img, y, name='vae')
# Compile VAE
vae.compile(optimizer='adam', loss=None)
vae.summary()
# Train autoencoder
vae.fit(x_train, None, epochs = 10, batch_size = 32, validation_split = 0.2)
Here is my training history.
Epoch 1/10
5832/5832 [==============================] - 5s 928us/sample - loss: 0.0345 - val_loss: -0.0278
Epoch 2/10
5832/5832 [==============================] - 4s 740us/sample - loss: -0.0301 - val_loss: -0.0292
Epoch 3/10
5832/5832 [==============================] - 4s 746us/sample - loss: -0.0307 - val_loss: -0.0293
Epoch 4/10
5832/5832 [==============================] - 4s 751us/sample - loss: -0.0307 - val_loss: -0.0294
Epoch 5/10
5832/5832 [==============================] - 4s 753us/sample - loss: -0.0307 - val_loss: -0.0294
Epoch 6/10
5832/5832 [==============================] - 4s 746us/sample - loss: -0.0307 - val_loss: -0.0294
Epoch 7/10
5832/5832 [==============================] - 4s 750us/sample - loss: -0.0307 - val_loss: -0.0294
Epoch 8/10
5832/5832 [==============================] - 4s 742us/sample - loss: -0.0307 - val_loss: -0.0294
Epoch 9/10
5832/5832 [==============================] - 4s 751us/sample - loss: -0.0307 - val_loss: -0.0294
Epoch 10/10
5832/5832 [==============================] - 4s 748us/sample - loss: -0.0307 - val_loss: -0.0294

Convert BasicLSTMCell to bidirectional LSTM

Recently I tried to use the BasicLSTMCell API from TensorFlow to generate video captions. I am working with code that builds BasicLSTMCell in the following way:
self.lstm1 = tf.compat.v1.nn.rnn_cell.BasicLSTMCell(dim_hidden, state_is_tuple=False)
self.lstm2 = tf.compat.v1.nn.rnn_cell.BasicLSTMCell(dim_hidden, state_is_tuple=False)
Then uses it later as follows:
with tf.compat.v1.variable_scope("Encoding") as scope:
    for i in range(0, self.n_video_lstm_step):
        if i > 0:
            scope.reuse_variables()
        with tf.compat.v1.variable_scope("LSTM1"):
            output1, state1 = self.lstm1(image_emb[:, i, :], state1)
        with tf.compat.v1.variable_scope("LSTM2"):
            output2, state2 = self.lstm2(tf.concat([padding, output1], 1), state2)
        out_list.append(tf.concat([output1, output2], 1))
I want these LSTM cells to be bidirectional for my requirement. I have tried using
keras.layers.Bidirectional(keras.layers.LSTM(dim_hidden, unit_forget_bias=True, unroll=True))
But it didn't work. Can anyone let me know how to make it work with a bidirectional LSTM?
Based on the question you ask (Convert BasicLSTMCell to bidirectional LSTM), you can use the Bidirectional RNN wrapper directly, as shown in the code below. Do clarify how you are modifying the LSTM layer class that is causing the error you are facing; I'll update my answer accordingly.
import numpy as np
from tensorflow.keras import layers, Model, utils
X = np.random.random((100,10,3))
y = np.random.random((100,))
inp = layers.Input((10,3))
x = layers.Bidirectional(layers.LSTM(8, return_sequences=True))(inp)
x = layers.Bidirectional(layers.LSTM(8))(x)
out = layers.Dense(1, activation='softmax')(x)  # note: softmax over a single unit always outputs 1; 'sigmoid' is the right choice for a one-unit binary output
model = Model(inp, out)
utils.plot_model(model, show_layer_names=False, show_shapes=True)
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(X, y, epochs=3)
Epoch 1/3
4/4 [==============================] - 5s 10ms/step - loss: 0.6963
Epoch 2/3
4/4 [==============================] - 0s 22ms/step - loss: 0.6965
Epoch 3/3
4/4 [==============================] - 0s 11ms/step - loss: 0.6976
<tensorflow.python.keras.callbacks.History at 0x7f91066bf4c0>
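If you want to stay inside the TF1-style graph code from the question instead of switching to Keras layers, a hedged sketch (shapes and variable names are assumptions, not taken from the question) is to replace the manual per-timestep loop with tf.compat.v1.nn.bidirectional_dynamic_rnn, which runs one cell forward and one backward over the whole sequence:
import tensorflow as tf
tf.compat.v1.disable_eager_execution()

dim_hidden = 64
# (batch, n_video_lstm_step, features) input; 80 steps and 512 features are assumptions
image_emb = tf.compat.v1.placeholder(tf.float32, [None, 80, 512])

cell_fw = tf.compat.v1.nn.rnn_cell.BasicLSTMCell(dim_hidden)
cell_bw = tf.compat.v1.nn.rnn_cell.BasicLSTMCell(dim_hidden)
(out_fw, out_bw), _ = tf.compat.v1.nn.bidirectional_dynamic_rnn(
    cell_fw, cell_bw, image_emb, dtype=tf.float32, scope="Encoding")
outputs = tf.concat([out_fw, out_bw], axis=-1)   # (batch, steps, 2 * dim_hidden)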

WARNING:tensorflow:Model was constructed with shape (20, 37, 42) for input Tensor("input_5:0", shape=(20, 37, 42), dtype=float32), but it was called on an input with incompatible shape (None, 37)

WARNING:tensorflow:Model was constructed with shape (20, 37, 42) for input Tensor("input_5:0", shape=(20, 37, 42), dtype=float32), but it was called on an input with incompatible shape (None, 37).
Hello! Deep learning noob here... I'm having trouble using LSTM layers.
The input is a length-37 float array containing 2 floats and a length-35 one-hot array converted to float. The output is a length-19 array of 0s and 1s. As the title suggests, I'm having trouble reshaping my input data to fit the model, and I'm not even sure what input dimensions would be considered 'compatible'.
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import random
inputs, outputs = [], []
for x in range(10000):
    tempi, tempo = [], []
    tempi.append(random.random() - 0.5)
    tempi.append(random.random() - 0.5)
    for x2 in range(35):
        if random.random() > 0.5:
            tempi.append(1.)
        else:
            tempi.append(0.)
    for x2 in range(19):
        if random.random() > 0.5:
            tempo.append(1.)
        else:
            tempo.append(0.)
    inputs.append(tempi)
    outputs.append(tempo)
batch = 20
timesteps = 42
training_units = 0.85
cutting_point_i = int(len(inputs)*training_units)
cutting_point_o = int(len(outputs)*training_units)
x_train, x_test = np.asarray(inputs[:cutting_point_i]), np.asarray(inputs[cutting_point_i:])
y_train, y_test = np.asarray(outputs[:cutting_point_o]), np.asarray(outputs[cutting_point_o:])
input_layer = keras.Input(shape=(37,timesteps),batch_size=batch)
dense = layers.LSTM(150, activation="sigmoid", return_sequences=True)
x = dense(input_layer)
hidden_layer_2 = layers.LSTM(150, activation="sigmoid", return_sequences=True)(x)
output_layer = layers.Dense(10, activation="softmax")(hidden_layer_2)
model = keras.Model(inputs=input_layer, outputs=output_layer, name="my_model")
Several problems here.
- Your input didn't have time steps; you need input shape (n, time steps, features).
- In input_shape, the time steps dimension comes first, not last.
- Your last LSTM layer returned sequences, so you can't compare its output with a vector of 0s and 1s.
What I did:
- I added time steps to your data (7).
- I permuted the dimensions in input_shape.
- I set the final return_sequences=False.
Completely fixed example with generated data:
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
batch = 20
n_samples = 1000
timesteps = 7
features = 10
x_train = np.random.rand(n_samples, timesteps, features)
y_train = keras.utils.to_categorical(np.random.randint(0, 10, n_samples))
input_layer = keras.Input(shape=(timesteps, features),batch_size=batch)
dense = layers.LSTM(16, activation="sigmoid", return_sequences=True)(input_layer)
hidden_layer_2 = layers.LSTM(16, activation="sigmoid", return_sequences=False)(dense)
output_layer = layers.Dense(10, activation="softmax")(hidden_layer_2)
model = keras.Model(inputs=input_layer, outputs=output_layer, name="my_model")
model.compile(loss='categorical_crossentropy', optimizer='adam')
history = model.fit(x_train, y_train)
Train on 1000 samples
20/1000 [..............................] - ETA: 2:50 - loss: 2.5145
200/1000 [=====>........................] - ETA: 14s - loss: 2.3934
380/1000 [==========>...................] - ETA: 5s - loss: 2.3647
560/1000 [===============>..............] - ETA: 2s - loss: 2.3549
740/1000 [=====================>........] - ETA: 1s - loss: 2.3395
900/1000 [==========================>...] - ETA: 0s - loss: 2.3363
1000/1000 [==============================] - 4s 4ms/sample - loss: 2.3353
The correct input shape for your model is (20, 37, 42).
Note: Here 20 is the batch_size you have explicitly specified.
Code:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
batch = 20
timesteps = 42
training_units = 0.85
x1 = tf.constant(np.random.randint(50, size =(1000,37, 42)), dtype = tf.float32)
y1 = tf.constant(np.random.randint(10, size =(1000,)), dtype = tf.int32)
input_layer = keras.Input(shape=(37,timesteps),batch_size=batch)
dense = layers.LSTM(150, activation="sigmoid", return_sequences=True)
x = dense(input_layer)
hidden_layer_2 = layers.LSTM(150, activation="sigmoid", return_sequences=True)(x)
hidden_layer_3 = layers.Flatten()(hidden_layer_2)
output_layer = layers.Dense(10, activation="softmax")(hidden_layer_3)
model = keras.Model(inputs=input_layer, outputs=output_layer, name="my_model")
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
tf.keras.utils.plot_model(model, 'my_first_model.png', show_shapes=True)
Model Architecture:
You can clearly see the Input Size.
Code to Run:
model.fit(x = x1, y = y1, batch_size = batch, epochs = 10)
Note: whatever batch_size you specified in the Input layer, you have to pass the same batch_size to the model.fit() call.
Output:
Epoch 1/10
50/50 [==============================] - 4s 89ms/step - loss: 2.3288 - accuracy: 0.0920
Epoch 2/10
50/50 [==============================] - 5s 91ms/step - loss: 2.3154 - accuracy: 0.1050
Epoch 3/10
50/50 [==============================] - 5s 101ms/step - loss: 2.3114 - accuracy: 0.0900
Epoch 4/10
50/50 [==============================] - 5s 101ms/step - loss: 2.3036 - accuracy: 0.1060
Epoch 5/10
50/50 [==============================] - 5s 99ms/step - loss: 2.2998 - accuracy: 0.1000
Epoch 6/10
50/50 [==============================] - 4s 89ms/step - loss: 2.2986 - accuracy: 0.1170
Epoch 7/10
50/50 [==============================] - 4s 84ms/step - loss: 2.2981 - accuracy: 0.1300
Epoch 8/10
50/50 [==============================] - 5s 103ms/step - loss: 2.2950 - accuracy: 0.1290
Epoch 9/10
50/50 [==============================] - 5s 106ms/step - loss: 2.2960 - accuracy: 0.1210
Epoch 10/10
50/50 [==============================] - 5s 97ms/step - loss: 2.2874 - accuracy: 0.1210

MobileNet transfer learning in Keras for object localization extraction - loss computed as NaN

I'm trying to use Keras and its MobileNet implementation to do object localization (outputting the x/y coordinates of a few features, instead of classes), and I'm running into a likely very basic issue that I can't figure out.
My code looks like this:
# =============================
# Load MobileNet and change the top layers.
model = applications.MobileNet(weights="imagenet",
                               include_top=False,
                               input_shape=(224, 224, 3))
# Freeze all the layers except the very last 5.
for layer in model.layers[:-5]:
    layer.trainable = False
# Adding custom Layers at the end, after the last Conv2D layer.
x = model.output
x = GlobalAveragePooling2D()(x)
x = Reshape((1, 1, 1024))(x)
x = Dropout(0.5)(x)
x = Conv2D(1024, (1, 1), activation='relu', padding='same', name='conv_preds')(x)
x = Dense(1024, activation="relu")(x)
# I'd like this to output 4 variables, two pairs of x/y coordinates
x = Dense(PREDICT_SIZE, activation="sigmoid")(x)
predictions = Reshape((PREDICT_SIZE,))(x)
# =============================
# Create the new final model.
model_final = Model(input = model.input, output = predictions)
def custom_loss(y_true, y_pred):
    '''Trying to compute the Euclidean distance as a loss function'''
    return K.sqrt(K.sum(K.square(y_true - y_pred), axis=-1))

model_final.compile(loss = custom_loss,
                    optimizer = optimizers.adam(lr=0.0001),
                    metrics = ["accuracy"])
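As a hedged aside (an observation, not the asker's eventual fix): the gradient of sqrt(x) diverges as x approaches 0, so a near-perfect prediction can itself produce NaNs with this loss; a common guard is a small epsilon inside the root:
from keras import backend as K

def custom_loss_safe(y_true, y_pred):
    # Euclidean distance with an epsilon guard: d/dx sqrt(x) -> inf as x -> 0,
    # which can surface as NaN losses once predictions get close to the targets.
    return K.sqrt(K.sum(K.square(y_true - y_pred), axis=-1) + K.epsilon())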
With this model, then I load the data and try to train it.
x_train, y_train, x_val, y_val = load_data(DATASET_DIR)
# This load_data is my own implementation. It returns the images
# as tensors.
# ==> x_train[0].shape= (224, 224, 3)
#
# y_train and y_val look like this:
# ==> y_train[0]= [ 0.182 -0.0933 0.072 -0.0453]
#
# holding values in the [0, 1] interval for where the pixel
# is relative to the width/height of the image.
#
model_final.fit(x_train, y_train,
                batch_size=batch_size, epochs=5, shuffle=False,
                validation_data=(x_val, y_val))
Unfortunately, when I run this model to train, I get something like this:
Train on 45 samples, validate on 5 samples
Epoch 1/5
16/45 [=========>....................] - ETA: 2s - loss: nan - acc: 0.0625
32/45 [====================>.........] - ETA: 1s - loss: nan - acc: 0.0312
45/45 [==============================] - 4s - loss: nan - acc: 0.0222 - val_loss: nan - val_acc: 0.0000e+00
Epoch 2/5
16/45 [=========>....................] - ETA: 2s - loss: nan - acc: 0.0625
32/45 [====================>.........] - ETA: 1s - loss: nan - acc: 0.0312
45/45 [==============================] - 4s - loss: nan - acc: 0.0222 - val_loss: nan - val_acc: 0.0000e+00
Epoch 3/5
I'm at a loss about why my loss value is "nan". I must be doing something wrong, and I've tried to change everything - the loss function, the shape of the output... but I can't figure out what I'm doing wrong.
Any help would be appreciated!
UPDATE: it seems like the issue is in the way I load_data.
If I create the image data like this it fails and results in loss:nan
i = pil_image.open(img_filename)
img = image.load_img(img_filename, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = keras.applications.mobilenet.preprocess_input(x)
x_train = np.append(x_train, x, axis=0)
but if I do something trivial like this, 'fit' works just fine and computes real values for loss:
x_train = np.random.random((100, 224, 224, 3))
sigh I wonder what's happening...
UPDATE #2: I figured out what the issue was
Documenting this here in case it helps anybody.
The way to properly generate the input tensors for MobileNet is this one:
test_img = []
for i in range(len(test)):
    temp_img = image.load_img(test_path + test['filename'][i], target_size=(224, 224))
    temp_img = image.img_to_array(temp_img)
    test_img.append(temp_img)
test_img = np.array(test_img)
test_img = preprocess_input(test_img)
Notice how making it into a numpy.array and running preprocess_input happens on the whole batch of images. Doing it image by image seems to not have worked (what I was doing before).
Hope this helps somebody someday.

Keras accuracy does not change

I have a few thousand audio files and I want to classify them using Keras and Theano. So far, I generated 28x28 spectrograms (bigger is probably better, but I am just trying to get the algorithm to work at this point) of each audio file and read the images into a matrix. So in the end I get this big image matrix to feed into the network for image classification.
In a tutorial I found this mnist classification code:
import numpy as np
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense
from keras.utils import np_utils
batch_size = 128
nb_classes = 10
nb_epochs = 2
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype("float32")
X_test = X_test.astype("float32")
X_train /= 255
X_test /= 255
print(X_train.shape[0], "train samples")
print(X_test.shape[0], "test samples")
y_train = np_utils.to_categorical(y_train, nb_classes)
y_test = np_utils.to_categorical(y_test, nb_classes)
model = Sequential()
model.add(Dense(output_dim = 100, input_dim = 784, activation= "relu"))
model.add(Dense(output_dim = 200, activation = "relu"))
model.add(Dense(output_dim = 200, activation = "relu"))
model.add(Dense(output_dim = nb_classes, activation = "softmax"))
model.compile(optimizer = "adam", loss = "categorical_crossentropy")
model.fit(X_train, y_train, batch_size = batch_size, nb_epoch = nb_epochs, show_accuracy = True, verbose = 2, validation_data = (X_test, y_test))
score = model.evaluate(X_test, y_test, show_accuracy = True, verbose = 0)
print("Test score: ", score[0])
print("Test accuracy: ", score[1])
This code runs, and I get the result as expected:
(60000L, 'train samples')
(10000L, 'test samples')
Train on 60000 samples, validate on 10000 samples
Epoch 1/2
2s - loss: 0.2988 - acc: 0.9131 - val_loss: 0.1314 - val_acc: 0.9607
Epoch 2/2
2s - loss: 0.1144 - acc: 0.9651 - val_loss: 0.0995 - val_acc: 0.9673
('Test score: ', 0.099454972004890438)
('Test accuracy: ', 0.96730000000000005)
Up to this point everything runs perfectly, however when I apply the above algorithm to my dataset, accuracy gets stuck.
My code is as follows:
import os
import pandas as pd
from sklearn.cross_validation import train_test_split
from keras.models import Sequential
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.layers.core import Dense, Activation, Dropout, Flatten
from keras.utils import np_utils
import AudioProcessing as ap
import ImageTools as it
batch_size = 128
nb_classes = 2
nb_epoch = 10
for i in range(20):
    print "\n"
# Generate spectrograms if necessary
if(len(os.listdir("./AudioNormalPathalogicClassification/Image")) > 0):
    print "Audio files are already processed. Skipping..."
else:
    print "Generating spectrograms for the audio files..."
    ap.audio_2_image("./AudioNormalPathalogicClassification/Audio/", "./AudioNormalPathalogicClassification/Image/", ".wav", ".png", (28,28))
# Read the result csv
df = pd.read_csv('./AudioNormalPathalogicClassification/Result/result.csv', header = None)
df.columns = ["RegionName","IsNormal"]
bool_mapping = {True : 1, False : 0}
nb_classes = 2
for col in df:
    if(col == "RegionName"):
        a = 3
    else:
        df[col] = df[col].map(bool_mapping)
y = df.iloc[:,1:].values
y = np_utils.to_categorical(y, nb_classes)
# Load images into memory
print "Loading images into memory..."
X = it.load_images("./AudioNormalPathalogicClassification/Image/",".png")
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 0)
X_train = X_train.reshape(X_train.shape[0], 784)
X_test = X_test.reshape(X_test.shape[0], 784)
X_train = X_train.astype("float32")
X_test = X_test.astype("float32")
X_train /= 255
X_test /= 255
print("X_train shape: " + str(X_train.shape))
print(str(X_train.shape[0]) + " train samples")
print(str(X_test.shape[0]) + " test samples")
model = Sequential()
model.add(Dense(output_dim = 100, input_dim = 784, activation= "relu"))
model.add(Dense(output_dim = 200, activation = "relu"))
model.add(Dense(output_dim = 200, activation = "relu"))
model.add(Dense(output_dim = nb_classes, activation = "softmax"))
model.compile(loss = "categorical_crossentropy", optimizer = "adam")
print model.summary()
model.fit(X_train, y_train, batch_size = batch_size, nb_epoch = nb_epoch, show_accuracy = True, verbose = 1, validation_data = (X_test, y_test))
score = model.evaluate(X_test, y_test, show_accuracy = True, verbose = 1)
print("Test score: ", score[0])
print("Test accuracy: ", score[1])
AudioProcessing.py
import os
import scipy as sp
import scipy.io.wavfile as wav
import matplotlib.pylab as pylab
import Image
def save_spectrogram_scipy(source_filename, destination_filename, size):
    dt = 0.0005
    NFFT = 1024
    Fs = int(1.0/dt)
    fs, audio = wav.read(source_filename)
    if(len(audio.shape) >= 2):
        audio = sp.mean(audio, axis = 1)
    fig = pylab.figure()
    ax = pylab.Axes(fig, [0,0,1,1])
    ax.set_axis_off()
    fig.add_axes(ax)
    pylab.specgram(audio, NFFT = NFFT, Fs = Fs, noverlap = 900, cmap="gray")
    pylab.savefig(destination_filename)
    img = Image.open(destination_filename).convert("L")
    img = img.resize(size)
    img.save(destination_filename)
    pylab.clf()
    del img
def audio_2_image(source_directory, destination_directory, audio_extension, image_extension, size):
    nb_files = len(os.listdir(source_directory))
    count = 0
    for file in os.listdir(source_directory):
        if file.endswith(audio_extension):
            destinationName = file[:-4]
            save_spectrogram_scipy(source_directory + file, destination_directory + destinationName + image_extension, size)
            count += 1
            print ("Generating spectrogram for files " + str(count) + " / " + str(nb_files) + ".")
ImageTools.py
import os
import numpy as np
import matplotlib.image as mpimg
def load_images(source_directory, image_extension):
    image_matrix = []
    nb_files = len(os.listdir(source_directory))
    count = 0
    for file in os.listdir(source_directory):
        if file.endswith(image_extension):
            with open(source_directory + file, "r+b") as f:
                img = mpimg.imread(f)
                img = img.flatten()
                image_matrix.append(img)
                del img
                count += 1
                #print ("File " + str(count) + " / " + str(nb_files) + " loaded.")
    return np.asarray(image_matrix)
So I run the above code and receive:
Audio files are already processed. Skipping...
Loading images into memory...
X_train shape: (2394L, 784L)
2394 train samples
1027 test samples
--------------------------------------------------------------------------------
Initial input shape: (None, 784)
--------------------------------------------------------------------------------
Layer (name) Output Shape Param #
--------------------------------------------------------------------------------
Dense (dense) (None, 100) 78500
Dense (dense) (None, 200) 20200
Dense (dense) (None, 200) 40200
Dense (dense) (None, 2) 402
--------------------------------------------------------------------------------
Total params: 139302
--------------------------------------------------------------------------------
None
Train on 2394 samples, validate on 1027 samples
Epoch 1/10
2394/2394 [==============================] - 0s - loss: 0.6898 - acc: 0.5455 - val_loss: 0.6835 - val_acc: 0.5716
Epoch 2/10
2394/2394 [==============================] - 0s - loss: 0.6879 - acc: 0.5522 - val_loss: 0.6901 - val_acc: 0.5716
Epoch 3/10
2394/2394 [==============================] - 0s - loss: 0.6880 - acc: 0.5522 - val_loss: 0.6842 - val_acc: 0.5716
Epoch 4/10
2394/2394 [==============================] - 0s - loss: 0.6883 - acc: 0.5522 - val_loss: 0.6829 - val_acc: 0.5716
Epoch 5/10
2394/2394 [==============================] - 0s - loss: 0.6885 - acc: 0.5522 - val_loss: 0.6836 - val_acc: 0.5716
Epoch 6/10
2394/2394 [==============================] - 0s - loss: 0.6887 - acc: 0.5522 - val_loss: 0.6832 - val_acc: 0.5716
Epoch 7/10
2394/2394 [==============================] - 0s - loss: 0.6882 - acc: 0.5522 - val_loss: 0.6859 - val_acc: 0.5716
Epoch 8/10
2394/2394 [==============================] - 0s - loss: 0.6882 - acc: 0.5522 - val_loss: 0.6849 - val_acc: 0.5716
Epoch 9/10
2394/2394 [==============================] - 0s - loss: 0.6885 - acc: 0.5522 - val_loss: 0.6836 - val_acc: 0.5716
Epoch 10/10
2394/2394 [==============================] - 0s - loss: 0.6877 - acc: 0.5522 - val_loss: 0.6849 - val_acc: 0.5716
1027/1027 [==============================] - 0s
('Test score: ', 0.68490593621422047)
('Test accuracy: ', 0.57156767283349563)
I tried changing the network, adding more epochs, but I always get the same result no matter what. I don't understand why I am getting the same result.
Any help would be appreciated. Thank you.
Edit:
I found a mistake where pixel values were not read correctly. I fixed ImageTools.py as below:
import os
import numpy as np
from scipy.misc import imread
def load_images(source_directory, image_extension):
    image_matrix = []
    nb_files = len(os.listdir(source_directory))
    count = 0
    for file in os.listdir(source_directory):
        if file.endswith(image_extension):
            with open(source_directory + file, "r+b") as f:
                img = imread(f)
                img = img.flatten()
                image_matrix.append(img)
                del img
                count += 1
                #print ("File " + str(count) + " / " + str(nb_files) + " loaded.")
    return np.asarray(image_matrix)
Now I actually get grayscale pixel values from 0 to 255, so dividing by 255 makes sense. However, I still get the same result.
The most likely reason is that the optimizer is not suited to your dataset. Here is a list of Keras optimizers from the documentation.
I recommend you first try SGD with default parameter values. If it still doesn't work, divide the learning rate by 10. Do that a few times if necessary. If your learning rate reaches 1e-6 and it still doesn't work, then you have another problem.
In summary, replace this line:
model.compile(loss = "categorical_crossentropy", optimizer = "adam")
with this:
from keras.optimizers import SGD
opt = SGD(lr=0.01)
model.compile(loss = "categorical_crossentropy", optimizer = opt)
and change the learning rate a few times if it doesn't work.
If it was the problem, you should see the loss getting lower after just a few epochs.
Another solution that I do not see mentioned here, but which caused a similar problem for me, was the activation function of the last neuron, especially if it is relu rather than something like sigmoid.
In other words, it might help to use a sigmoid activation function in the last layer.
Last layer:
model.add(keras.layers.Dense(1, activation='relu'))
Output:
Epoch 1/30
7996/7996 [==============================] - 1s 76us/sample - loss: 6.3474 - accuracy: 0.5860
Epoch 2/30
7996/7996 [==============================] - 0s 58us/sample - loss: 6.3473 - accuracy: 0.5860
Epoch 3/30
7996/7996 [==============================] - 0s 58us/sample - loss: 6.3473 - accuracy: 0.5860
Epoch 4/30
7996/7996 [==============================] - 0s 57us/sample - loss: 6.3473 - accuracy: 0.5860
Epoch 5/30
7996/7996 [==============================] - 0s 58us/sample - loss: 6.3473 - accuracy: 0.5860
Epoch 6/30
7996/7996 [==============================] - 0s 60us/sample - loss: 6.3473 - accuracy: 0.5860
Epoch 7/30
7996/7996 [==============================] - 0s 57us/sample - loss: 6.3473 - accuracy: 0.5860
Epoch 8/30
7996/7996 [==============================] - 0s 57us/sample - loss: 6.3473 - accuracy: 0.5860
Now with a sigmoid activation function in the last layer:
model.add(keras.layers.Dense(1, activation='sigmoid'))
Output:
Epoch 1/30
7996/7996 [==============================] - 1s 74us/sample - loss: 0.7663 - accuracy: 0.5899
Epoch 2/30
7996/7996 [==============================] - 0s 59us/sample - loss: 0.6243 - accuracy: 0.5860
Epoch 3/30
7996/7996 [==============================] - 0s 56us/sample - loss: 0.5399 - accuracy: 0.7580
Epoch 4/30
7996/7996 [==============================] - 0s 56us/sample - loss: 0.4694 - accuracy: 0.7905
Epoch 5/30
7996/7996 [==============================] - 0s 57us/sample - loss: 0.4363 - accuracy: 0.8040
Epoch 6/30
7996/7996 [==============================] - 0s 60us/sample - loss: 0.4139 - accuracy: 0.8099
Epoch 7/30
7996/7996 [==============================] - 0s 58us/sample - loss: 0.3967 - accuracy: 0.8228
Epoch 8/30
7996/7996 [==============================] - 0s 61us/sample - loss: 0.3826 - accuracy: 0.8260
This is not directly a solution to the original question, but as this page is #1 on Google when searching for this problem, it might benefit someone.
If the accuracy is not changing, it means the optimizer has found a local minimum for the loss. This may be an undesirable minimum. One common local minimum is to always predict the class with the most data points. You should use weighting on the classes to avoid this minimum.
from sklearn.utils import compute_class_weight
# note: newer scikit-learn versions require keyword arguments:
# compute_class_weight(class_weight='balanced', classes=..., y=...)
classWeight = compute_class_weight('balanced', outputLabels, outputs)
classWeight = dict(enumerate(classWeight))
model.fit(X_train, y_train, batch_size = batch_size, nb_epoch = nb_epochs, show_accuracy = True, verbose = 2, validation_data = (X_test, y_test), class_weight=classWeight)
After some examination, I found that the issue was the data itself: it was very dirty, in that the same input had 2 different outputs, creating confusion. After cleaning up the data, my accuracy goes up to 69%. Still not good enough, but at least I can now work my way up from here now that the data is clean.
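As an aside, a quick hedged sketch for spotting that kind of conflict (the CSV layout is assumed to match the script below, with the label in column 0):
import pandas as pd

df = pd.read_csv('NormalVsPathalogic.csv', header=None)
# Group identical feature rows and count how many distinct labels each one has.
conflicts = df.groupby(list(df.columns[1:]))[0].nunique()
print((conflicts > 1).sum(), "inputs appear with more than one label")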
I used the below code to test:
import os
import sys
import pandas as pd
import numpy as np
from keras.models import Sequential
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.layers.core import Dense, Activation, Dropout, Flatten
from keras.utils import np_utils
sys.path.append("./")
import AudioProcessing as ap
import ImageTools as it
# input image dimensions
img_rows, img_cols = 28, 28
dim = 1
# number of convolutional filters to use
nb_filters = 32
# size of pooling area for max pooling
nb_pool = 2
# convolution kernel size
nb_conv = 3
batch_size = 128
nb_classes = 2
nb_epoch = 200
for i in range(20):
    print "\n"
## Generate spectrograms if necessary
if(len(os.listdir("./AudioNormalPathalogicClassification/Image")) > 0):
    print "Audio files are already processed. Skipping..."
else:
    # Read the result csv
    df = pd.read_csv('./AudioNormalPathalogicClassification/Result/AudioNormalPathalogicClassification_result.csv', header = None, encoding = "utf-8")
    df.columns = ["RegionName","Filepath","IsNormal"]
    bool_mapping = {True : 1, False : 0}
    for col in df:
        if(col == "RegionName" or col == "Filepath"):
            a = 3
        else:
            df[col] = df[col].map(bool_mapping)
    region_names = df.iloc[:,0].values
    filepaths = df.iloc[:,1].values
    y = df.iloc[:,2].values
    # Generate spectrograms and make a new CSV file
    print "Generating spectrograms for the audio files..."
    result = ap.audio_2_image(filepaths, region_names, y, "./AudioNormalPathalogicClassification/Image/", ".png", (img_rows,img_cols))
    df = pd.DataFrame(data = result)
    df.to_csv("NormalVsPathalogic.csv", header = False, index = False, encoding = "utf-8")
# Load images into memory
print "Loading images into memory..."
df = pd.read_csv('NormalVsPathalogic.csv', header = None, encoding = "utf-8")
y = df.iloc[:,0].values
y = np_utils.to_categorical(y, nb_classes)
y = np.asarray(y)
X = df.iloc[:,1:].values
X = np.asarray(X)
X = X.reshape(X.shape[0], dim, img_rows, img_cols)
X = X.astype("float32")
X /= 255
print X.shape
model = Sequential()
model.add(Convolution2D(64, nb_conv, nb_conv,
                        border_mode='valid',
                        input_shape=(1, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(Convolution2D(32, nb_conv, nb_conv))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adadelta')
print model.summary()
model.fit(X, y, batch_size = batch_size, nb_epoch = nb_epoch, show_accuracy = True, verbose = 1)
Check out this one
sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss = "categorical_crossentropy",
              optimizer = sgd,
              metrics = ['accuracy'])
Check out the documentation
I had better results with MNIST
By mistake I had added a softmax at the end instead of sigmoid. Try the latter. It worked as expected when I did this. For a one-unit output layer, softmax always gives a value of 1, and this is what had happened.
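A quick numeric check of that claim (a sketch, not part of the original answer):
import numpy as np

# Softmax over a single logit: the normalizer equals the only term,
# so the output is always exactly 1, whatever the logit value.
z = np.array([[-3.2], [0.0], [5.1]])
softmax = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
print(softmax.ravel())   # [1. 1. 1.]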
I faced a similar issue. One-hot encoding the target variable using np_utils in Keras solved the issue of accuracy and validation loss being stuck. Using weights to balance the target classes further improved performance.
Solution:
from keras.utils.np_utils import to_categorical

y_train = to_categorical(y_train)
y_val = to_categorical(y_val)
I had the same problem as you. My solution was a loop instead of epochs:
for i in range(10):
    history = model.fit_generator(generator=training_generator,
                                  validation_data=validation_generator,
                                  use_multiprocessing=True,
                                  workers=6,
                                  epochs=1)
You can also save the model each epoch, so you can pause the training after any epoch you want:
for i in range(10):
    history = model.fit_generator(generator=training_generator,
                                  validation_data=validation_generator,
                                  use_multiprocessing=True,
                                  workers=6,
                                  epochs=1)
    # save model
    model.save('drive/My Drive/vggnet10epochs.h5')
    model = load_model('drive/My Drive/vggnet10epochs.h5')
I got a 13% accuracy increase using 'sigmoid' activation:
model = Sequential()
model.add(Dense(3072, input_shape=(3072,), activation="sigmoid"))
model.add(Dense(512, activation="sigmoid"))
model.add(Dense(1, activation="sigmoid"))
Or you can also test the following, with 'relu' in the first and hidden layers:
model = Sequential()
model.add(Dense(3072, input_shape=(3072,), activation="relu"))
model.add(Dense(512, activation="sigmoid"))
model.add(Dense(1, activation="sigmoid"))
As mentioned above, the problem mainly arises from the type of optimizer chosen. However, it can also come from stacking 2 Dense layers with the same activation function (softmax, for example).
In this case, the NN finds a local minimum and is not able to descend any further from that point, circling around the same acc (val_acc) values.
Hope it helps.
I had a similar problem. I had binary classes labeled 1 and 2. After testing different kinds of optimizers and activation functions, I found that the root of the problem was my labeling of the classes. In other words, once I changed the labels to 0 and 1 instead of 1 and 2, the problem was solved!
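A minimal sketch of that relabeling (array values are made up):
import numpy as np

y = np.array([1, 2, 2, 1])   # original labels in {1, 2}
y = y - 1                    # shifted to {0, 1}, matching sigmoid/binary_crossentropy targets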
I faced the same problem with multi-class classification. Try changing the optimizer: by default it is Adam; change it to SGD.
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
You can also try different activation functions, e.g. relu, sigmoid, softmax, softplus, etc.
Some important links:
Optimizers
Activations
As pointed out by others, the optimizer probably doesn't suit your data/model and gets stuck in a local minimum. A neural network should at least be able to overfit the data (training_acc close to 1).
I once had a similar problem. I solved it by trying different optimizers (in my case, switching from SGD to RMSprop).
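In compile terms the swap is a one-liner (a sketch; the rest of the model stays unchanged):
# Same model, different optimizer: 'rmsprop' instead of 'sgd'.
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])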
In my case, my problem was binary and I was using the 'softmax' activation function, which doesn't work there. I changed to 'sigmoid' and it works properly for me.
I had exactly the same problem: validation loss and accuracy remaining the same through the epochs. I increased the batch size 10x, reduced the learning rate by 100x, etc. It did not work.
My last try, inspired by monolingual's and Ranjab's answers, worked.
My solution was to add BatchNormalization AND arrange the order as below:
Conv - DropOut - BatchNorm - Activation - Pool,
as recommended in "Ordering of batch normalization and dropout?".
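A minimal sketch of that ordering (layer sizes are assumptions):
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, 3, padding='same', input_shape=(28, 28, 1)),   # Conv
    layers.Dropout(0.25),                                            # DropOut
    layers.BatchNormalization(),                                     # BatchNorm
    layers.Activation('relu'),                                       # Activation
    layers.MaxPooling2D(),                                           # Pool
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),
])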
I know this is an old question, but as of today (14/06/2021) the comment from @theTechGuy works well on tf 2.3. The code is:
from tensorflow.keras.optimizers import SGD
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss = "categorical_crossentropy",
              optimizer = sgd,
              metrics = ['accuracy'])
I tried playing a lot with the optimizers and activation functions, but the only thing that worked was BatchNormalization. And I guess it is good practice too.
You can import it as:
from tensorflow.keras.layers import BatchNormalization
and simply add it before each hidden layer:
model.add(BatchNormalization())
I had the same problem, but in my case it was caused by a non-normalized column in my data. That column had huge values, and fixing it solved the problem for me.
So, I just converted it to values between 0 and 1.
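A sketch of that rescaling (column values are made up):
import numpy as np

col = np.array([3.0, 87000.0, 250000.0])                    # a column with a huge range
col_scaled = (col - col.min()) / (col.max() - col.min())    # now in [0, 1]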
I had the same problem. My solution was to change the last layer's activation function from 'softmax' to 'sigmoid', since I was dealing with a binary classification problem.
model.add(layers.Dense(1, activation="sigmoid"))
