How to create a customized LSTM layer? - python

I have a model like this:
model = Sequential()
model.add(Embedding(100, 5, input_length=X.shape[1]))
model.add(LSTM(100))
model.add(Dense(2, activation='softmax'))
print(model.summary())
I have been searching the internet for how to build a stack of layers like the one below:
Input layer - 7 features
Recurrent layer - 8 hidden units
Fully connected layer - 4 neurons
Output layer - 2 neurons
Here's the code I've tried:
model = tf.keras.Sequential([
    # First LSTM must return the full sequence so the next LSTM receives 3-D input
    tf.keras.layers.LSTM(feature, input_shape=input_shape, return_sequences=True),
    tf.keras.layers.LSTM(8),
    tf.keras.layers.Dense(4, activation=tf.nn.relu),
    tf.keras.layers.Dense(2, activation=tf.nn.softmax)
])
But I couldn't find anything useful. How can I build a stack of layers like this?
Thanks in advance
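For reference, a minimal sketch of the architecture listed above (7 input features, a recurrent layer with 8 hidden units, a fully connected layer with 4 neurons, and 2 output neurons) could look like the following; the sequence length timesteps is an assumed placeholder, since the question does not specify it:
import tensorflow as tf

timesteps = 10   # assumed placeholder; not given in the question
n_features = 7   # input layer: 7 features per timestep

model = tf.keras.Sequential([
    # Recurrent layer with 8 hidden units
    tf.keras.layers.LSTM(8, input_shape=(timesteps, n_features)),
    # Fully connected layer with 4 neurons
    tf.keras.layers.Dense(4, activation='relu'),
    # Output layer with 2 neurons
    tf.keras.layers.Dense(2, activation='softmax')
])
model.summary()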

Related

Getting incompatible shape error in TensorFlow functional model

I am trying to implement a deep model using tensorflow.keras that contains an embedding layer + Conv1D + 2 BiLSTM layers. This is the implementation in sequential mode:
model = models.Sequential()
model.add(layers.Embedding(vocab_size, embedding_dim, weights=[embedding_matrix], input_length=limit_on_length, trainable=False))
model.add(layers.Conv1D(50, 4, padding='same', activation='relu'))
model.add(layers.Bidirectional(layers.LSTM(units=200, return_sequences=True, dropout=0.2, recurrent_dropout=0.2)))
model.add(layers.Bidirectional(layers.LSTM(units=200, return_sequences=False,dropout=0.2, recurrent_dropout=0.2)))
model.add(layers.Dense(len(set(tr_intents)), activation='softmax'))
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
And I fit the model like this:
model.fit(train_padded, to_categorical(tr_intents), epochs=15, batch_size=32, validation_split=0.1)
Everything works well in this sequential mode, but when I implement the model in functional mode, I get this error:
ValueError: Shapes (32, 22) and (32, 11, 22) are incompatible
And here is my implementation in functional structure:
input_layer = layers.Input(shape=(None,))
x = layers.Embedding(vocab_size, embedding_dim, weights=[embedding_matrix], input_length=limit_on_length, trainable=False)(input_layer)
x = layers.Conv1D(50, 4, padding='same', activation='relu')(x)
x = layers.Bidirectional(layers.LSTM(units=200, return_sequences=True, dropout=0.2, recurrent_dropout=0.2))(x)
x = layers.Bidirectional(layers.LSTM(units=200, return_sequences=True, dropout=0.2, recurrent_dropout=0.2))(x)
intents_out = layers.Dense(n_intents, activation='softmax')(x)
model = models.Model(input_layer, intents_out)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
Can anybody help me with this error? I need to implement the model in functional mode because I have to add one more output layer to it.
The number of intents (or labels) is 22, and each sentence has a length of 11.
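Note that the sequential version uses return_sequences=False on the second BiLSTM, while the functional version uses return_sequences=True, so the softmax Dense layer is applied to all 11 timesteps and produces (32, 11, 22) instead of the (32, 22) shape of the one-hot labels. A likely fix, reusing the layers, x and n_intents names from the functional code above, is to return only the final state from the second BiLSTM:
x = layers.Bidirectional(layers.LSTM(units=200, return_sequences=True, dropout=0.2, recurrent_dropout=0.2))(x)
# Return only the last output, as in the sequential model, so Dense sees (batch, 400)
x = layers.Bidirectional(layers.LSTM(units=200, return_sequences=False, dropout=0.2, recurrent_dropout=0.2))(x)
intents_out = layers.Dense(n_intents, activation='softmax')(x)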

TensorBoard displaying only 8 nodes for Embedding layer

I tried to visualize the graph of a network whose input Embedding layer has 100 output dims, using GloVe 100d. However, TensorBoard shows only 8 nodes in the Embedding layer.
def lstm_model():
    model = tf.keras.models.Sequential([
        tf.keras.layers.Embedding(input_dim=10000, output_dim=100,
            embeddings_initializer=tf.keras.initializers.Constant(value=embeddings_matrix)),
        tf.keras.layers.LSTM(units=16, return_sequences=True),
        tf.keras.layers.LSTM(units=8, return_sequences=True),
        tf.keras.layers.LSTM(units=1),
        tf.keras.layers.Dense(units=1, activation='sigmoid')])
    return model

What is the correct method to construct a many-to-many LSTM model using Keras in Python?

I am trying to make a 3-sequence many-to-many LSTM model, but I am confused about its implementation in Keras. I searched the internet for examples of many-to-many models, but each website gives a different method, which has confused me even more. Which of them is correct? I want a model like this:
Some of the methods I found were:
Method 1: using an encoder and a decoder with RepeatVector
from keras.layers import RepeatVector
from keras.layers import TimeDistributed
model = Sequential()
# encoder layer
model.add(LSTM(100, activation='relu', input_shape=(3, 1)))
# repeat vector
model.add(RepeatVector(3))
# decoder layer
model.add(LSTM(100, activation='relu', return_sequences=True))
model.add(TimeDistributed(Dense(1)))
model.compile(optimizer='adam', loss='mse')
Method 2: another encoder-decoder approach, feeding the encoder states to the decoder
from keras.models import Model
from keras.layers import Input, LSTM, Dense
encoder_inputs = Input(shape=(None, 1))
encoder = LSTM(100, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
encoder_states = [state_h, state_c]
decoder_inputs = Input(shape=(None, 1))
decoder_lstm = LSTM(100, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
                                     initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
Method 3: return_sequences with a TimeDistributed Dense layer
model = Sequential()
model.add(LSTM(100,input_shape=(3,1),return_sequences=True))
model.add(TimeDistributed(Dense(2)))
model.compile(optimizer='adam', loss='mse')
Method 4: return_sequences only, with no Dense layer
model = Sequential()
model.add(LSTM(100,input_shape=(3,1),return_sequences=True))
model.compile(optimizer='adam', loss='mse')
Which one of these is the correct method? Which one will give me the model I want?
You have to state your problem first.
Methods 1 and 2 are best for neural machine translation problems, and 2 is superior because it passes the LSTM's return states from the encoder to the decoder. Method 3 is also a good architecture when the logic from input to output is simple. Method 4 is a very basic architecture, because the nth output in the output sequence only has knowledge of inputs 0 to n-1 (not later ones), and there is no fully connected (Dense) layer, so even moderately complex logic cannot be learned here.
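As a quick sanity check of those shapes, building methods 3 and 4 and printing their output shapes shows what each one actually emits per timestep; this sketch only assumes the standalone keras imports used in the question:
from keras.models import Sequential
from keras.layers import LSTM, Dense, TimeDistributed

# Method 3: TimeDistributed Dense head gives one 2-value output per timestep
m3 = Sequential()
m3.add(LSTM(100, input_shape=(3, 1), return_sequences=True))
m3.add(TimeDistributed(Dense(2)))
print(m3.output_shape)  # (None, 3, 2)

# Method 4: no Dense head, so the raw 100-unit hidden states come out
m4 = Sequential()
m4.add(LSTM(100, input_shape=(3, 1), return_sequences=True))
print(m4.output_shape)  # (None, 3, 100)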

Doubts about TensorFlow Keras

I was reading this tutorial when a doubt appeared. In the tutorial, the dataset has 10 attributes (after the one-hot conversion), but when the model is created, the first layer has more neurons (64) than inputs (10).
Here's the code:
def build_model():
    model = keras.Sequential([
        layers.Dense(64, activation='relu', input_shape=[len(train_dataset.keys())]),
        layers.Dense(64, activation='relu'),
        layers.Dense(1)
    ])
    optimizer = tf.keras.optimizers.RMSprop(0.001)
    model.compile(loss='mse',
                  optimizer=optimizer,
                  metrics=['mae', 'mse'])
    return model
I thought that the number of neurons should be equal to the number of inputs. Can anyone explain this?
Thanks for your attention
The 64 in your Dense layers specifies the number of neurons in the first and second hidden layers. The input_shape parameter tells Keras the size of the input, and it handles the input layer automatically, so 10 neurons in your case.
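A minimal sketch that makes this visible, assuming the 10 features from the tutorial, is to build the model and check the parameter count of the first Dense layer: (10 inputs + 1 bias) * 64 neurons = 704 parameters.
import tensorflow as tf
from tensorflow.keras import layers

# Sketch assuming 10 input features, as in the tutorial after one-hot encoding
model = tf.keras.Sequential([
    layers.Dense(64, activation='relu', input_shape=[10]),  # 10 inputs feed 64 hidden neurons
    layers.Dense(64, activation='relu'),
    layers.Dense(1)
])
model.summary()  # first Dense layer: (10 + 1) * 64 = 704 parameters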

How can I make the output of a convnet an image with keras?

I'm using 3D images as my input and my output...
model = Sequential()
#add model layers
model.add(Convolution3D(64, kernel_size=3, activation="relu", input_shape=(240, 240, 155, 1)))
model.add(Convolution3D(32, kernel_size=3, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
But instead of making the final layer a Dense with softmax, I want the output to be a 3D image of the same dimensions as the input.
What would I have to do to upsample?
There are many ways of upsampling. You can look on GitHub for simple implementations of SegNet or U-Net (or Seg-U-Net).
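As one illustration, a minimal fully convolutional sketch (not a tested segmentation network; the layer sizes and the sigmoid output are assumptions) keeps the output at the same (240, 240, 155, 1) size as the input by using 'same'-padded Conv3D layers with a single output channel. If you add pooling for an encoder-decoder, you would also need matching UpSampling3D or Conv3DTranspose layers plus some cropping or padding, since 155 does not divide evenly.
import tensorflow as tf
from tensorflow.keras.layers import Conv3D
from tensorflow.keras.models import Sequential

model = Sequential()
# 'same' padding keeps the spatial dimensions, so the output stays (240, 240, 155, ...)
model.add(Conv3D(64, kernel_size=3, padding='same', activation='relu', input_shape=(240, 240, 155, 1)))
model.add(Conv3D(32, kernel_size=3, padding='same', activation='relu'))
# One output channel per voxel, i.e. a 3D image the same size as the input
model.add(Conv3D(1, kernel_size=3, padding='same', activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')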
