Keras GRU/LSTM layer input dimension error - python

I am fairly new to deep learning, and I have been trying to create a simple sentiment analyzer using deep learning methods for natural language processing, working with the Reuters dataset. Here is my code:
import numpy as np
from keras.datasets import reuters
from keras.preprocessing.text import Tokenizer
from keras.models import Sequential
from keras.layers import Dense, Dropout, GRU
from keras.utils import np_utils
max_length=3000
vocab_size=100000
epochs=10
batch_size=32
validation_split=0.2
(x_train, y_train), (x_test, y_test) = reuters.load_data(path="reuters.npz",
                                                          num_words=vocab_size,
                                                          skip_top=5,
                                                          maxlen=None,
                                                          test_split=0.2,
                                                          seed=113,
                                                          start_char=1,
                                                          oov_char=2,
                                                          index_from=3)
tokenizer = Tokenizer(num_words=max_length)
x_train = tokenizer.sequences_to_matrix(x_train, mode='binary')
x_test = tokenizer.sequences_to_matrix(x_test, mode='binary')
y_train = np_utils.to_categorical(y_train, 50)
y_test = np_utils.to_categorical(y_test, 50)
model = Sequential()
model.add(GRU(50, input_shape = (49,1), return_sequences = True))
model.add(Dropout(0.2))
model.add(Dense(256, input_shape=(max_length,), activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(50, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
model.summary()
history = model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size, validation_split=validation_split)
score = model.evaluate(x_test, y_test)
print('Test Accuracy:', round(score[1]*100,2))
What I do not understand is why, every time I try to use a GRU or LSTM cell instead of a Dense one, I get this error:
ValueError: Error when checking input: expected gru_1_input to have 3
dimensions, but got array with shape (8982, 3000)
I have seen online that adding return_sequences = True could solve the issue, but as you can see, the issue remains in my case.
What should I do in this case?

The problem is that the shape of x_train is (8982, 3000), which means (given the preprocessing stage) there are 8982 sentences encoded as binary bag-of-words vectors over a vocabulary of 3000 words. A GRU (or LSTM) layer, on the other hand, accepts a sequence as input, so its input shape should be (batch_size, num_timesteps or sequence_length, feature_size). Currently, the only feature you have is the presence (1) or absence (0) of a particular word in a sentence. So, to make it work with a GRU, you need to add a third dimension to x_train and x_test:
x_train = np.expand_dims(x_train, axis=-1)
x_test = np.expand_dims(x_test, axis=-1)
Then remove return_sequences=True and change the input shape of the GRU to input_shape=(3000, 1). This way you are telling the GRU layer that you are processing sequences of length 3000 where each element consists of a single feature. (As a side note, I think you should pass vocab_size to the num_words argument of Tokenizer, since that indicates the size of the vocabulary, and instead pass max_length to the maxlen argument of load_data, which limits the length of a sentence.)
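Putting those changes together, a minimal sketch of what the adjusted model could look like (this assumes x_train and x_test have already been expanded to shape (num_samples, 3000, 1) as shown above; the layer sizes are the ones from the question):
model = Sequential()
model.add(GRU(50, input_shape=(3000, 1)))  # no return_sequences: the GRU outputs a single vector
model.add(Dropout(0.2))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(50, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])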
However, I think you may get better results if you use an Embedding layer as the first layer, before the GRU layer. That's because the way you currently encode sentences does not take the order of words into account (it only records their presence). Feeding GRU or LSTM layers, which rely on the order of elements in a sequence, with this representation therefore does not make much sense.
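For illustration, a hedged sketch of that Embedding-based approach (this assumes you keep the raw integer word-index sequences returned by reuters.load_data, called x_train_seq_raw/x_test_seq_raw here for illustration, instead of converting them with sequences_to_matrix, and pad them to max_length; the embedding dimension of 128 is an arbitrary choice):
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Embedding

x_train_seq = pad_sequences(x_train_seq_raw, maxlen=max_length)  # integer sequences, padded
x_test_seq = pad_sequences(x_test_seq_raw, maxlen=max_length)

model = Sequential()
model.add(Embedding(vocab_size, 128, input_length=max_length))  # learns a dense vector per word
model.add(GRU(50))
model.add(Dropout(0.2))
model.add(Dense(50, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])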

Related

How do I code input layer in Deep Learning using Keras (Basic)

Okay, so I'm pretty new to deep learning and have a very basic question. I have input data as an array containing 255 samples (array shape (255,)) in epochs_data and their corresponding labels in new_labels (array shape (255,)).
I split the data using the following code:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(epochs_data, new_labels, test_size = 0.2, random_state=30)
I'm using a sequential model:
from keras.models import Sequential
from keras import layers
from keras.layers import Dense, Activation, Flatten
model = Sequential()
I know how to code for the hidden layers and output layer:
model.add(Dense(500, activation='relu')) #Hidden Layer
model.add(Dense(2, activation='softmax')) #Output Layer
But I don't know how to code the input layer with the input_shape specified. X_train is the input; it's an array of shape (180,). Also, please tell me how to code model.fit() for the same. Any help is appreciated.
You have to add this line before the hidden layers; you can use whatever activation function you want. As you can see, this line represents both the input layer and the first hidden layer (you have to choose the number of neurons; I put 100):
model.add(Dense(100, input_shape=(X_train.shape[1],)))
EDIT:
Before fitting your model you have to configure your model with this line:
model.compile(loss = 'mse', optimizer = 'Adam', metrics = ['mse'])
So you have to choose a loss and a metric, which in this case is mean squared error, and an optimizer such as Adam, Adamax, etc.
Then you can fit your model, choosing the data (X, y), the number of epochs, the validation split and the batch size.
history = model.fit(X_train, y_train, epochs=200,
                    validation_split=0.1, batch_size=250)
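Putting the pieces together, a minimal end-to-end sketch (this assumes X_train is a 2-D array of shape (num_samples, num_features) and y_train is one-hot encoded for the 2 classes; categorical_crossentropy is used here instead of the mse above because the output layer is a 2-class softmax):
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(100, activation='relu', input_shape=(X_train.shape[1],)))  # input + first hidden layer
model.add(Dense(500, activation='relu'))   # hidden layer
model.add(Dense(2, activation='softmax'))  # output layer
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

history = model.fit(X_train, y_train, epochs=200, validation_split=0.1, batch_size=250)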

Keras CNN Incompatible with Convolution2D

I am getting into Convolutional Neural Networks and want to create one for MNIST data. Whenever I add a convolutional Layer to my CNN, I get an error:
Input 0 is incompatible with layer conv2d_4: expected ndim=4, found ndim=5
I have attempted to reshape the X_train data set but was not successful.
I tried to add a Flatten layer first, but that returns this error:
Input 0 is incompatible with layer conv2d_5: expected ndim=4, found ndim=2
import keras
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Convolution2D
from keras.layers import Flatten, Dense, Dropout
img_width, img_height = 28, 28
mnist = keras.datasets.mnist
(X_train, Y_train), (X_test, Y_test) = mnist.load_data()
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = keras.utils.normalize(X_train, axis=1) #Normalizes from 0-1 (originally each pixel is valued 0-255)
X_test = keras.utils.normalize(X_test, axis=1) #Normalizes from 0-1 (originally each pixel is valued 0-255)
Y_train = keras.utils.to_categorical(Y_train) #Reshapes to allow ytrain to work with x train
Y_test = keras.utils.to_categorical(Y_test)
from sklearn import preprocessing
lb = preprocessing.LabelBinarizer()
Y_train = lb.fit_transform(Y_train)
Y_test = lb.fit_transform(Y_test)
#Model
model = Sequential()
model.add(Flatten())
model.add(Convolution2D(16, 5, 5, activation='relu', input_shape=(1,img_width, img_height, 1)))
model.add(Dense(128, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dropout(.2))
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X_train, Y_train, epochs=3, verbose=2)
val_loss, val_acc = model.evaluate(X_test, Y_test) #Check to see if model fits test
print(val_loss, val_acc)
If I comment out the convolutional layer, it works very well (accuracy > 95%), but I am planning on making a more complex neural network that requires convolution in the future, and this is my starting point.
Keras is looking for a tensor with 4 dimensions (ndim=4), but it is getting one with only 2.
First, make sure the kernel size in the Conv2D layer is passed as a tuple in parentheses:
model.add(Convolution2D(32, (3, 3), activation='relu', input_shape=(img_height, img_height, 1)))
Second, you need to reshape the X_train and X_test variables, since the Conv2D layer expects a 4-D tensor input:
X_train = X_train.reshape(-1,28, 28, 1) #Reshape for CNN - should work!!
X_test = X_test.reshape(-1,28, 28, 1)
model.fit(X_train, Y_train, epochs=3, verbose=2)
For more information about Conv2D you can look at the Keras documentation.
Hope this helps.
There are two issues in your code:
1. You are encoding your labels twice, once using to_categorical and again using LabelBinarizer. The latter is not needed here, so just encode your labels into categorical form once, using to_categorical.
2. Your input shape is incorrect; it should be (28, 28, 1).
Also, you should add a Flatten layer after the convolutional layers so the Dense layer works properly.
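For reference, a minimal sketch of the corrected model combining the points from both answers (kernel size as a tuple, inputs reshaped to (28, 28, 1), labels encoded once with to_categorical, and a Flatten layer between the convolution and the Dense layers; it reuses the imports from the question):
X_train = X_train.reshape(-1, 28, 28, 1)  # add the channel dimension Conv2D expects
X_test = X_test.reshape(-1, 28, 28, 1)

model = Sequential()
model.add(Convolution2D(16, (5, 5), activation='relu', input_shape=(28, 28, 1)))
model.add(Flatten())  # flatten the feature maps before the fully connected layers
model.add(Dense(128, activation='relu'))
model.add(Dropout(.2))
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, Y_train, epochs=3, verbose=2)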

Keras Extraction of Informative Features in Text Using Weights

I am working on a text classification project, and I would like to use keras to rank the importance of each word (token). My intuition is that I should be able to sort weights from the Keras model to rank the words.
Possibly I am having a simple issue using argsort or tf.math.top_k.
The complete code is from Packt
I start by using sklearn to compute TF-IDF using the 10,000 most frequent words.
vectorizer = TfidfVectorizer(min_df=2, ngram_range=(1, 2), stop_words='english',
                             max_features=10000, strip_accents='unicode', norm='l2')
x_train_2 = vectorizer.fit_transform(x_train_preprocessed).todense()
x_test_2 = vectorizer.transform(x_test_preprocessed).todense()
I can view the list of words like this:
print(vectorizer.get_feature_names()[:10])
I then build and fit a model using Keras. Keras is using the tensorflow backend.
# Deep Learning modules
import numpy as np
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import Adadelta, Adam, RMSprop
from keras.utils import np_utils
# Defining hyperparameters
np.random.seed(1337)
nb_classes = 20
batch_size = 64
nb_epochs = 20
Y_train = np_utils.to_categorical(y_train, nb_classes)
model = Sequential()
model.add(Dense(1000, input_shape=(10000,)))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(500))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(50))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
print(model.summary())
# Model Training
model.fit(x_train_2, Y_train, batch_size=batch_size, epochs=nb_epochs, verbose=1)
I can then get weights like this:
weight = model.weights[0]
# Returns <tf.Variable 'dense_1/kernel:0' shape=(10000, 1000) dtype=float32_ref>
Since the number of rows (10,000) is equal to the number of features, I think I am on the right track. I need to get a list of indices I can use to get feature names: informative_features = vectorizer.get_feature_names()[sorted_indices].
I have tried to build a list using two different techniques:
tf.nn.top_k
sorted_indices = tf.nn.top_k(weight)
# Returns TopKV2(values=<tf.Tensor 'TopKV2_2:0' shape=(10000, 1) dtype=float32>, indices=<tf.Tensor 'TopKV2_2:1' shape=(10000, 1) dtype=int32>)
I have not determined how to get a list from this result.
argsort
sorted_indices = model.get_weights()[0].argsort(axis=0)
print(sorted_indices.shape)
# Returns (10000, 1000)
Function argsort returns a matrix, but what I need is a one-dimensional list.
How can I use weights to rank text features?
I think this is not possible with your current architecture. The first layer outputs 1000 values, each of which is connected to every feature by some weight, and the same pattern continues through the rest of the network, so no single weight corresponds to a single word. However, if the input is connected directly to the classification layer and that model is trained, you can read feature importance off the weights:
from keras.models import Model
from keras.layers import Input, Dense

tfidf = Input(shape=(10000,))
output = Dense(nb_classes, activation='softmax')(tfidf)
model = Model(tfidf, output)
model.summary()
# train model ...
last_layer = model.layers[-1]
weights = last_layer.get_weights()[0]
for i in range(nb_classes):
    print('class : ', i, ' -> Feature : ', np.argmax(weights[:, i]))
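As a follow-up, a small sketch of how the indices could be mapped back to feature names with argsort (this assumes the single-layer model above and the vectorizer from the question are in scope; top_n = 10 is an arbitrary choice):
feature_names = np.array(vectorizer.get_feature_names())
weights = model.layers[-1].get_weights()[0]  # shape (10000, nb_classes)

top_n = 10
for i in range(nb_classes):
    top_indices = np.argsort(weights[:, i])[::-1][:top_n]  # indices of the largest weights first
    print('class', i, '->', list(feature_names[top_indices]))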

How to convert 1D flattened MNIST Keras to LSTM model without unflattening?

I want to change my model architecture a bit, so the LSTM accepts the exact same flattened inputs that the fully connected approach does.
Working DNN model from the Keras examples:
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.utils import to_categorical
# import the data
from keras.datasets import mnist
# read the data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
num_pixels = x_train.shape[1] * x_train.shape[2] # find size of one-dimensional vector
x_train = x_train.reshape(x_train.shape[0], num_pixels).astype('float32') # flatten training images
x_test = x_test.reshape(x_test.shape[0], num_pixels).astype('float32') # flatten test images
# normalize inputs from 0-255 to 0-1
x_train = x_train / 255
x_test = x_test / 255
# one hot encode outputs
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
num_classes = y_test.shape[1]
print(num_classes)
# define classification model
def classification_model():
    # create model
    model = Sequential()
    model.add(Dense(num_pixels, activation='relu', input_shape=(num_pixels,)))
    model.add(Dense(100, activation='relu'))
    model.add(Dense(num_classes, activation='softmax'))
    # compile model
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return model
# build the model
model = classification_model()
# fit the model
model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=10, verbose=2)
# evaluate the model
scores = model.evaluate(x_test, y_test, verbose=0)
The same task, but trying an LSTM (still getting an error):
def kaggle_LSTM_model():
    model = Sequential()
    model.add(LSTM(128, input_shape=(x_train.shape[1:]), activation='relu', return_sequences=True))
    # What does return_sequences=True do?
    model.add(Dropout(0.2))
    model.add(Dense(32, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(10, activation='softmax'))
    opt = tf.keras.optimizers.Adam(lr=1e-3, decay=1e-5)
    model.compile(loss='sparse_categorical_crossentropy', optimizer=opt,
                  metrics=['accuracy'])
    return model
model_kaggle_LSTM = kaggle_LSTM_model()
# fit the model
model_kaggle_LSTM.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=10, verbose=2)
# evaluate the model
scores = model_kaggle_LSTM.evaluate(x_test, y_test, verbose=0)
Problem is here:
model.add(LSTM(128, input_shape=(x_train.shape[1:]), activation='relu', return_sequences=True))
ValueError: Input 0 is incompatible with layer lstm_17: expected
ndim=3, found ndim=2
If I go back and don't flatten x_train and y_train, it works. However, I'd like this to be "just another model choice" that feeds off the same pre-processed input. I thought passing shape[1:] would work, as that is the real flattened input_shape. I'm sure it's something easy I'm missing about the dimensionality, but I couldn't get it after an hour of twiddling and debugging. I did figure out that not flattening the 28x28 to 784 works, but I don't understand why. Thanks a lot!
For bonus points, an example of how to do either the DNN or the LSTM with either the 1D (784,) or the 2D (28, 28) input would be best.
RNN layers such as LSTM are meant for sequence processing (i.e. a series of vectors whose order of appearance matters). You can look at an image from top to bottom and consider each row of pixels as a vector, so the image becomes a sequence of vectors that can be fed to the RNN layer. According to this description, you should expect the RNN layer to take an input of shape (sequence_length, number_of_features). That's why when you feed the images to the LSTM network in their original shape, i.e. (28, 28), it works.
Now if you insist on feeding the LSTM model the flattened image, i.e. with shape (784,), you have at least two options: you can either consider this as a sequence of length one, i.e. (1, 784), which does not make much sense; or you can add a Reshape layer to your model to reshape the input back to its original shape, which suits the expected input shape of an LSTM layer, like this:
from keras.layers import Reshape

def kaggle_LSTM_model():
    model = Sequential()
    model.add(Reshape((28, 28), input_shape=x_train.shape[1:]))
    # the rest is the same...
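For completeness, a minimal sketch of what the full flattened-input LSTM could look like with the Reshape layer in place (this reuses x_train, y_train, num_pixels and num_classes from the flattened preprocessing above; return_sequences=True is dropped so the LSTM outputs a single vector per image, and categorical_crossentropy is used because y_train was one-hot encoded; the function name is just illustrative):
from keras.layers import LSTM, Reshape

def flattened_LSTM_model():
    model = Sequential()
    model.add(Reshape((28, 28), input_shape=(num_pixels,)))  # back to (rows, columns) = (timesteps, features)
    model.add(LSTM(128))
    model.add(Dropout(0.2))
    model.add(Dense(32, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(num_classes, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

model = flattened_LSTM_model()
model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=10, verbose=2)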

Keras LSTM training. How to shape my input data?

I have a dataset of 3000 observations. Each observation consists of 3 time series of 200 samples each. As the output I have 5 class labels.
So I build train and test sets as follows:
test_split = round(num_samples * 3 / 4)
X_train = X_all[:test_split, :, :] # Start upto just before test_split
y_train = y_all[:test_split]
X_test = X_all[test_split:, :, :] # From test_split to end
y_test = y_all[test_split:]
# Print shapes and class labels
print(X_train.shape)
print(y_train.shape)
> (2250, 200, 3)
> (22250, 5)
I build my network using Keras functional API:
from keras.models import Model
from keras.layers import Dense, Activation, Input, Dropout, concatenate
from keras.layers.recurrent import LSTM
from keras.constraints import maxnorm
from keras.optimizers import SGD
from keras.callbacks import EarlyStopping
series_len = 200
num_RNN_neurons = 64
ch1 = Input(shape=(series_len, 1), name='ch1')
ch2 = Input(shape=(series_len, 1), name='ch2')
ch3 = Input(shape=(series_len, 1), name='ch3')
ch1_layer = LSTM(num_RNN_neurons, return_sequences=False)(ch1)
ch2_layer = LSTM(num_RNN_neurons, return_sequences=False)(ch2)
ch3_layer = LSTM(num_RNN_neurons, return_sequences=False)(ch3)
visible = concatenate([
    ch1_layer,
    ch2_layer,
    ch3_layer])
hidden1 = Dense(30, activation='linear', name='weighted_average_channels')(visible)
output = Dense(num_classes, activation='softmax')(hidden1)
model = Model(inputs= [ch1, ch2, ch3], outputs=output)
# Compile model
model.compile(loss='categorical_crossentropy', optimizer=SGD(), metrics=['accuracy'])
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-4, patience=5, verbose=1, mode='auto')
Then, I try to fit the model:
# Fit the model
model.fit(X_train, y_train,
          epochs=epochs,
          batch_size=batch_size,
          validation_data=(X_test, y_test),
          callbacks=[monitor],
          verbose=1)
and I get the following error:
ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 3 array(s), but instead got the following list of 1 arrays...
How should I reshape my data to solve the issue?
You are implicitly assuming that a single input array X_train containing the 3 time series will be split into separate channels and assigned to the different inputs. That doesn't happen, and that is what the error is complaining about. Use a single input instead:
ch123_in = Input(shape=(series_len, 3), name='ch123')
latent = LSTM(num_RNN_neurons)(ch123_in)
hidden1 = Dense(30, activation='linear', name='weighted_average_channels')(latent)
By merging the series into a single LSTM, the model might also pick up relations across the time series. Note that your target shape then has to be y_train.shape == (2250, 5); the first dimension must match X_train.shape[0].
Another point: you have a Dense layer with a linear activation, which is almost useless since it doesn't provide any non-linearity. You might want to use a non-linear activation function such as relu.
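A minimal sketch of the single-input version, for reference (this reuses series_len, num_RNN_neurons, num_classes, the SGD/EarlyStopping setup and the training variables from the question; relu replaces the linear activation in the hidden Dense layer as suggested):
ch123_in = Input(shape=(series_len, 3), name='ch123')
latent = LSTM(num_RNN_neurons)(ch123_in)
hidden1 = Dense(30, activation='relu', name='weighted_average_channels')(latent)
output = Dense(num_classes, activation='softmax')(hidden1)

model = Model(inputs=ch123_in, outputs=output)
model.compile(loss='categorical_crossentropy', optimizer=SGD(), metrics=['accuracy'])
model.fit(X_train, y_train,
          epochs=epochs,
          batch_size=batch_size,
          validation_data=(X_test, y_test),
          callbacks=[monitor],
          verbose=1)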
