RNN LSTM ValueError while training - python

Hi there, recently I've been working on an RNN LSTM project and I have a 2D data set like
x = [[x1,x2,x3...,x18],[x1,x2,x3...,x18],...]
y = [[y1,y2,y3],[y1,y2,y3],...]
X.shape => (295,5,18)
Y.shape => (295,3)
and I convert it to a 3D data set with the code below:
X_train = []
Y_train = []
for i in range(5, 300):
    X_train.append(training_set_scaled[i-5:i, 0:18])
    Y_train.append(training_set_scaled[i, 18:22])
X_train, Y_train = np.array(X_train), np.array(Y_train)
and then use Keras for the LSTM:
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
regressor = Sequential()
regressor.add(LSTM(units=50, return_sequences=True,input_shape=(X_train.shape[0],X_train.shape[1],X_train.shape[2])))
regressor.add(Dropout(0.1))
regressor.add(LSTM(units=50, return_sequences=True))
regressor.add(Dropout(0.1))
regressor.add(LSTM(units=50, return_sequences=True))
regressor.add(Dropout(0.1))
regressor.add(LSTM(units=50, return_sequences=True))
regressor.add(Dropout(0.1))
regressor.add(LSTM(units=50))
regressor.add(Dropout(0.1))
regressor.add(Dense(units= 1))
regressor.compile(optimizer= 'adam', loss='binary_crossentropy')
regressor.fit(X_train,Y_train, epochs = 100, batch_size = 32)
and when I run this script I got the error below:
ValueError: Input 0 is incompatible with layer lstm_104: expected ndim=3, found ndim=4
I have no idea what the problem is. Can anybody help me?

Change:
input_shape=(X_train.shape[0],X_train.shape[1],X_train.shape[2])
to
input_shape=(X_train.shape[1],X_train.shape[2])
Basically, Keras is designed to take any number of examples in a single batch, so it automatically puts None as the first dimension. When you specify only the remaining two dimensions, the layer sees a 3-dimensional input in total; but if you also specify the first dimension yourself, the number of dimensions becomes 4, i.e. (None, X_train.shape[0], X_train.shape[1], X_train.shape[2]).
But if you really want to hard-code the batch size, you can still do it. For this, use batch_input_shape instead of input_shape, as follows:
regressor.add(LSTM(units=50, return_sequences=True, batch_input_shape=(X_train.shape[0], X_train.shape[1], X_train.shape[2])))
This gives you control over the exact batch size used by the network. (Your program has another flaw in this case: you are setting the batch size to X_train.shape[0], which is 295, but you are passing 32 to fit(); they should be equal. Also, the batch size is generally chosen to be smaller than the data set size.)
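For completeness, here is a minimal sketch of a corrected model. It keeps the layer sizes from the question; as assumptions on my part, the output layer is widened to 3 units to match the three columns of Y_train, and the loss is switched to mean squared error, since binary cross-entropy with a single output unit would not match a three-column target.
from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout

regressor = Sequential()
# input_shape = (timesteps, features); the batch dimension is left out
regressor.add(LSTM(units=50, return_sequences=True,
                   input_shape=(X_train.shape[1], X_train.shape[2])))
regressor.add(Dropout(0.1))
regressor.add(LSTM(units=50))              # last LSTM returns a 2D tensor
regressor.add(Dropout(0.1))
regressor.add(Dense(units=3))              # one unit per target column (assumption)
regressor.compile(optimizer='adam', loss='mean_squared_error')
regressor.fit(X_train, Y_train, epochs=100, batch_size=32)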

Related

How do I code input layer in Deep Learning using Keras (Basic)

Okay, so I'm pretty new to deep learning and have a very basic question. I have input data in an array containing 255 samples (array shape (255,)) in epochs_data and their corresponding labels in new_labels (array shape (255,)).
I split the data using the following code:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(epochs_data, new_labels, test_size = 0.2, random_state=30)
I'm using a sequential model:
from keras.models import Sequential
from keras import layers
from keras.layers import Dense, Activation, Flatten
model = Sequential()
I know how to code for the hidden layers and output layer:
model.add(Dense(500, activation='relu')) #Hidden Layer
model.add(Dense(2, activation='softmax')) #Output Layer
But I don't know how to code the input layer with input_shape specified. X_train is the input; it's an array of shape (180,). Also, please tell me how to code model.fit() for the same. Any help is appreciated.
You have to add this line before the hidden layer. You can use whatever activation function you want. As you can see, this line represents both the input layer and the first hidden layer (you have to choose the number of neurons; I put 100):
model.add(Dense(100, input_shape=(X_train.shape[1],)))
EDIT:
Before fitting your model, you have to configure it with this line:
model.compile(loss = 'mse', optimizer = 'Adam', metrics = ['mse'])
So you have to choose a loss (in this case mean squared error, also tracked as a metric) and an optimizer like Adam, Adamax, etc.
Then you can fit your model, choosing the data (X, Y), the number of epochs, the validation split and the batch size:
history = model.fit(X_train, y_train, epochs=200,
                    validation_split=0.1, batch_size=250)
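Putting those pieces together, a minimal end-to-end sketch might look like the following. The feature count n_features, the placeholder data and the cross-entropy loss are assumptions here (the answer above uses mean squared error); the key point is that the data passed to fit() must be 2D, i.e. (samples, features), and input_shape lists only the feature dimension.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

n_features = 1                                     # hypothetical: one value per sample
X_train = np.random.random((204, n_features))      # placeholder data
y_train = np.random.randint(0, 2, size=(204,))     # placeholder integer labels

model = Sequential()
model.add(Dense(100, activation='relu', input_shape=(n_features,)))   # input + first hidden layer
model.add(Dense(500, activation='relu'))                              # hidden layer from the question
model.add(Dense(2, activation='softmax'))                             # output layer from the question
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=200, validation_split=0.1, batch_size=32)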

TF 2.0 sequential CNN into LSTM for regression "Negative dimension size" error

I'm trying to build a model that predicts the price of a certain commodity based on current market conditions; my data are shaped similar to:
num_samples = 100
sample_dimension = 10
XXX = np.random.random((num_samples,sample_dimension)).reshape(-1,1,sample_dimension)
YYY = np.random.random(num_samples).reshape(-1,1)
so I've got 100 ordered samples of X data, each consisting of 10 variables. My model looks like the following
model = keras.Sequential()
model.add(tf.keras.layers.Conv1D(4,
                                 kernel_size=(2),
                                 activation='sigmoid',
                                 input_shape=(None, sample_dimension),
                                 batch_input_shape=[1, 1, sample_dimension]))
model.add(tf.keras.layers.AveragePooling1D(pool_size=2))
model.add(tf.keras.layers.Reshape((1, sample_dimension)))
model.add(tf.keras.layers.LSTM(100,
                               stateful=True,
                               return_sequences=False,
                               activation='sigmoid'))
model.add(keras.layers.Dense(1))
model.compile(optimizer='adam',
              loss='mean_squared_error',
              metrics=['accuracy'])
so it's a 1D convolution, a pooling, a reshape (so it plays nicely with the LSTM) and then a cast down to a single prediction,
but when I try to run it, I get the following error:
Negative dimension size caused by subtracting 2 from 1 for 'conv1d/conv1d' (op: 'Conv2D') with input shapes: [1,1,1,10], [1,2,10,4].
I've tried a few different values for the kernel size, pool size, and batch_input_shape (I have to batch my inputs because my actual data are spread across several large files, so I want to read one file at a time and feed it into training the model), but nothing seems to work.
What am I doing wrong? How can I track/predict the shape of my data as it goes through this model? What are the data/variables supposed to look like?
I ended up looking through tutorials for Conv2D and then converting the result to Conv1D (please edit as you feel appropriate).
Conv2D solution:
model = keras.Sequential()
model.add(tf.keras.layers.Conv2D(4,
                                 kernel_size=(1, 2),
                                 activation='sigmoid',
                                 input_shape=(1, sample_dimension, 1),
                                 batch_input_shape=[None, 1, sample_dimension, 1]))
model.add(tf.keras.layers.AveragePooling2D(pool_size=(1, 2)))
#model.add(tf.keras.layers.Reshape((1, sample_dimension)))
model.add(tf.keras.layers.Flatten())
model.add(keras.layers.Dense(1))
Then I converted it to Conv1D by removing the extra size-1 dimension from each of the relevant arguments (kernel_size, input_shape and batch_input_shape):
model = keras.Sequential()
model.add(tf.keras.layers.Conv1D(4,
                                 kernel_size=2,
                                 activation='sigmoid',
                                 input_shape=(sample_dimension, 1),
                                 batch_input_shape=[None, sample_dimension, 1]))
model.add(tf.keras.layers.AveragePooling1D(pool_size=2))
#model.add(tf.keras.layers.Reshape((1, sample_dimension)))
model.add(tf.keras.layers.Flatten())
model.add(keras.layers.Dense(1))
I guess the key takeaway is that TensorFlow isn't designed to deal with bare vectors or even matrices here, so the last dimension has to be the feature (channel) dimension of the tensor; in this case each of the sample_dimension steps holds a 1D feature (just a single number).
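As for tracking how the shape changes through the model: model.summary() prints the output shape of every layer, which makes these negative-dimension errors much easier to locate. A small sketch based on the Conv1D version above (sample_dimension = 10 as in the question):
import tensorflow as tf
from tensorflow import keras

sample_dimension = 10

model = keras.Sequential()
model.add(tf.keras.layers.Conv1D(4, kernel_size=2, activation='sigmoid',
                                 input_shape=(sample_dimension, 1)))
model.add(tf.keras.layers.AveragePooling1D(pool_size=2))
model.add(tf.keras.layers.Flatten())
model.add(keras.layers.Dense(1))
model.summary()
# Conv1D:           (None, 9, 4)    10 steps shrink by kernel_size - 1
# AveragePooling1D: (None, 4, 4)    floor(9 / 2)
# Flatten:          (None, 16)
# Dense:            (None, 1)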

Keras LSTM different input output shape

In my binary multilabel sequence classification problem, I have 22 timesteps in each input sentence. Now that I have added a 200-dimensional word embedding to each timestep, my current input shape is (number of input sentences, 22, 200). My output shape would be (number of input sentences, 4), e.g. [1,0,0,1].
My first question is how to build a Keras LSTM model that accepts 3D input and outputs 2D results. The following code produces this error:
ValueError: Error when checking target: expected dense_41 to have 3 dimensions, but got array with shape (7339, 4)
My second question is, when I add a TimeDistributed layer, should I set the number of Dense units to the number of features in the input, in my case 200?
X_train, X_test, y_train, y_test = train_test_split(padded_docs2, new_y, test_size=0.33, random_state=42)
start = datetime.datetime.now()
print(start)
# define the model
model = Sequential()
e = Embedding(input_dim=vocab_size2, input_length=22, output_dim=200, weights=[embedding_matrix2], trainable=False)
model.add(e)
model.add(LSTM(128, input_shape=(X_train.shape[1],200),dropout=0.2, recurrent_dropout=0.1, return_sequences=True))
model.add(TimeDistributed(Dense(200)))
model.add(Dense(y_train.shape[1],activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
# summarize the model
print(model.summary())
# fit the model
model.fit(X_train, y_train, epochs=300, verbose=0)
end = datetime.datetime.now()
print(end)
print('Time taken to build the model: ', end-start)
Please let me know if I have missed out any information, thanks.
Your model's LSTM layer gets a 3D sequence and produces a 3D output. The same goes for the TimeDistributed layer. If you want the LSTM to return a 2D tensor, the argument return_sequences should be False, and then you don't need the TimeDistributed wrapper. With this setup your model would be:
model = Sequential()
e = Embedding(input_dim=vocab_size2, input_length=22, output_dim=200, weights=[embedding_matrix2], trainable=False)
model.add(e)
model.add(LSTM(128, input_shape=(X_train.shape[1],200),dropout=0.2, recurrent_dropout=0.1, return_sequences=False))
model.add(Dense(200))
model.add(Dense(y_train.shape[1],activation='sigmoid'))
Edit:
TimeDistributed applies a given layer to each temporal slice of the inputs. In your case, for example, the temporal dimension is X_train.shape[1]. Let's assume X_train.shape[1] == 10 and consider the following line:
model.add(TimeDistributed(Dense(200)))
Here the TimeDistributed wrapper applies the same dense layer (Dense(200)) to each of the 10 temporal slices. So for each temporal slice you get an output with shape (batch_size, 200), and the final output tensor has shape (batch_size, 10, 200). But you said you want 2D output, so TimeDistributed won't get you from 3D inputs to a 2D output.
The other case is if you remove the TimeDistributed wrapper and use only a dense layer, like this:
model.add(Dense(200))
Then the dense layer first flattens the input to shape (batch_size * 10, 200) and computes the dot product of the fully connected layer. After the dot product, the dense layer reshapes the output back to the same leading shape as the input, in your case (batch_size, 10, 200), so it is still a 3D tensor.
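Both of those points can be checked directly from the output shapes; a small sketch (the (10, 200) input shape just follows the example above):
from keras.models import Sequential
from keras.layers import Dense, TimeDistributed

m1 = Sequential()
m1.add(TimeDistributed(Dense(200), input_shape=(10, 200)))
print(m1.output_shape)    # (None, 10, 200) -- 3D, one slice per timestep

m2 = Sequential()
m2.add(Dense(200, input_shape=(10, 200)))
print(m2.output_shape)    # (None, 10, 200) -- also 3D: Dense acts on the last axis only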
But if you don't want to change the first LSTM layer, you can replace the TimeDistributed layer with another LSTM layer that has return_sequences set to False. Now your model would look like this:
model = Sequential()
e = Embedding(input_dim=vocab_size2, input_length=22, output_dim=200, weights=[embedding_matrix2], trainable=False)
model.add(e)
model.add(LSTM(128, input_shape=(X_train.shape[1],200),dropout=0.2, recurrent_dropout=0.1, return_sequences=True))
model.add(LSTM(200, input_shape=(X_train.shape[1],200),dropout=0.2, recurrent_dropout=0.1, return_sequences=False))
model.add(Dense(y_train.shape[1],activation='sigmoid'))

Keras GRU/LSTM layer input dimension error

I am kind of new to deep learning and I have been trying to create a simple sentiment analyzer using deep learning methods for natural language processing, based on the Reuters dataset. Here is my code:
import numpy as np
from keras.datasets import reuters
from keras.preprocessing.text import Tokenizer
from keras.models import Sequential
from keras.layers import Dense, Dropout, GRU
from keras.utils import np_utils
max_length=3000
vocab_size=100000
epochs=10
batch_size=32
validation_split=0.2
(x_train, y_train), (x_test, y_test) = reuters.load_data(path="reuters.npz",
                                                         num_words=vocab_size,
                                                         skip_top=5,
                                                         maxlen=None,
                                                         test_split=0.2,
                                                         seed=113,
                                                         start_char=1,
                                                         oov_char=2,
                                                         index_from=3)
tokenizer = Tokenizer(num_words=max_length)
x_train = tokenizer.sequences_to_matrix(x_train, mode='binary')
x_test = tokenizer.sequences_to_matrix(x_test, mode='binary')
y_train = np_utils.to_categorical(y_train, 50)
y_test = np_utils.to_categorical(y_test, 50)
model = Sequential()
model.add(GRU(50, input_shape = (49,1), return_sequences = True))
model.add(Dropout(0.2))
model.add(Dense(256, input_shape=(max_length,), activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(50, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
model.summary()
history = model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size, validation_split=validation_split)
score = model.evaluate(x_test, y_test)
print('Test Accuracy:', round(score[1]*100,2))
What I do not understand is why, every time I try to use a GRU or LSTM cell instead of a Dense one, I get this error:
ValueError: Error when checking input: expected gru_1_input to have 3
dimensions, but got array with shape (8982, 3000)
I have seen online that adding return_sequences = True could solve the issue, but as you can see, the issue remains in my case.
What should I do in this case?
The problem is that the shape of x_train is (8982, 3000), meaning (after the preprocessing stage) there are 8982 sentences encoded as one-hot vectors with a vocab size of 3000. On the other hand, a GRU (or LSTM) layer accepts a sequence as input, so its input shape should be (batch_size, num_timesteps or sequence_length, feature_size). Currently the only feature you have is the presence (1) or absence (0) of a particular word in a sentence. So to make it work with the GRU, you need to add a third dimension to x_train and x_test:
x_train = np.expand_dims(x_train, axis=-1)
x_test = np.expand_dims(x_test, axis=-1)
and then remove that return_sequences=True and change the input shape of the GRU to input_shape=(3000, 1). This way you are telling the GRU layer that you are processing sequences of length 3000 where each element consists of one single feature. (As a side note, I think you should pass vocab_size to the num_words argument of Tokenizer, which indicates the number of words in the vocabulary, and instead pass max_length to the maxlen argument of load_data, which limits the length of a sentence.)
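In code, the adjusted first layer described above would look roughly like this (a sketch of the described change; the rest of the model stays as it is):
model.add(GRU(50, input_shape=(3000, 1)))   # 3000 timesteps, 1 feature each; return_sequences defaults to False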
However, I think you may get better results if you use an Embedding layer as the first layer, before the GRU layer. That's because currently the way you encode sentences does not take into account the order of words in a sentence (it just cares about their presence). Therefore, feeding GRU or LSTM layers, which rely on the order of elements in a sequence, with this representation does not make much sense.
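A sketch of that Embedding-based alternative, under the assumption that x_train and x_test are kept as the raw lists of word indices from reuters.load_data (i.e. the sequences_to_matrix step is skipped) and padded to a fixed length; the padded length of 500 and the embedding size of 64 are arbitrary choices:
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Embedding, GRU, Dropout, Dense

maxlen = 500                                        # hypothetical padded sequence length
x_train = pad_sequences(x_train, maxlen=maxlen)     # integer word indices, not one-hot vectors
x_test = pad_sequences(x_test, maxlen=maxlen)

model = Sequential()
model.add(Embedding(input_dim=vocab_size, output_dim=64, input_length=maxlen))
model.add(GRU(50))                                  # returns a 2D tensor (batch_size, 50)
model.add(Dropout(0.2))
model.add(Dense(50, activation='softmax'))          # 50 Reuters topic classes
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])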

Specifying Dense using keras library

I'm slightly confused about how to create a simple Sequential model for my data.
The data has the following dimensions:
X_train.shape
(2369, 12)
y_train.shape
(2369,)
X_test.shape
(592, 12)
y_test.shape
(592,)
This is how I create the model:
batch_size = 128
nb_epoch = 20
in_out_neurons = X_train.shape[1]
dimof_middle = 100
model = Sequential()
model.add(Dense(batch_size, batch_input_shape=(None, in_out_neurons)))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(batch_size))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(in_out_neurons))
model.add(Activation('linear'))
# I am solving the regression problem, not the classification one
model.compile(loss="mean_squared_error", optimizer="rmsprop")
history = model.fit(X_train, y_train,
                    batch_size=batch_size, nb_epoch=nb_epoch,
                    verbose=1, validation_data=(X_test, y_test))
The error message:
Exception: Error when checking model input: expected dense_input_14 to
have shape (None, 1) but got array with shape (2369, 12)
The error is:
Error when checking model target: expected activation_42 to have shape
(None, 12) but got array with shape (2369, 1)
This error occurs at line:
model.add(Dense(in_out_neurons))
How to change Dense to make it work?
Another question: how do I add a simple autoencoder in order to initialize the weights of the ANN?
One of your problems is that you seem to misunderstand what a batch is.
A batch is the number of training samples processed at a time, so instead of computing one training sample from X_train at a time you use, for example, 100 at a time. The important bit here is that the batch size has nothing to do with your model's architecture.
So when you write
model.add(Dense(batch_size, batch_input_shape=(None, in_out_neurons)))
then you create a fully connected layer with an output size equal to the batch size (128 units). That does not make a lot of sense.
Another problem is that your model's output is 12 neurons while your Y is only one value/neuron. Your model looks like this:
|
v
[128]
[128]
[ 12]
|
v
Then what fit() does is feed a matrix of shape (128, 12), i.e. (batch size, X_train.shape[1]), into the model and attempt to compare the output of shape (128, 12) from the last layer to the corresponding Y values of the batch (shape (128, 1)).
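For reference, a minimal corrected sketch following those two points (this assumes the goal is single-value regression, matching y_train's shape (2369,)): the hidden widths are decoupled from the batch size and the final layer outputs one value.
from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(100, activation='relu', input_shape=(in_out_neurons,)))   # 12 input features
model.add(Dropout(0.2))
model.add(Dense(100, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1))                       # single output to match y_train
model.compile(loss='mean_squared_error', optimizer='rmsprop')
history = model.fit(X_train, y_train, batch_size=batch_size, nb_epoch=nb_epoch,
                    verbose=1, validation_data=(X_test, y_test))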
