I wanted to add a tanh layer after the embedding layer with the Keras functional API:
x = layers.Embedding(vocab_size, 8, input_length=max_length)(input)
output = keras.activations.tanh(x)
model = Model(inputs=input, outputs=output)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(data, labels)
but the system told me I must use Keras layers, not tensors. I searched a lot of Keras tutorials and found only one way to solve this problem:
model.add(Activation('tanh'))
but that is for the Sequential model, which I don't want to use. Is there a way to solve this with the functional API?
With the functional API it's almost the same as with the Sequential model:
output = Activation('tanh')(x)
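For completeness, a minimal sketch of the whole model with the Activation layer in place (it assumes vocab_size, max_length, data, and labels are already defined, as in your snippet):
from keras.models import Model
from keras.layers import Input, Embedding, Activation

inputs = Input(shape=(max_length,))
x = Embedding(vocab_size, 8, input_length=max_length)(inputs)
output = Activation('tanh')(x)        # the activation wrapped in a layer, not a raw tensor op
model = Model(inputs=inputs, outputs=output)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(data, labels)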
We learned about using Keras to build an LSTM model in class; however, I'm still confused about how you should set up the layers for the model. What are the rules, and what does each step mean?
For instance, for the code below:
from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Dropout
from keras.layers import Dense
numUnits = 50
model = Sequential()
model.add( LSTM(units=numUnits, return_sequences=True,
                input_shape=(X_train.shape[1], 1)) )
model.add( Dropout(0.2) )
model.add( LSTM(units=numUnits) )
model.add( Dropout(0.2) )
model.add( Dense(units=1) )
model.compile( loss='mean_squared_error' )
What does each of these steps mean? Do we need to use Dropout after each layer? Does the model always have to end with a Dense layer?
The first layer added to the model is an LSTM (Long Short-Term Memory) layer, a type of recurrent neural network layer that is well suited to processing sequential data. return_sequences=True makes it output the hidden state at every timestep, which is needed so the second LSTM can be stacked on top of it; the second LSTM omits this and returns only the final state.
The Dropout layer randomly drops a fraction of the units during training, which helps prevent overfitting. The fraction is set by its argument, so Dropout(0.2) drops 20% of the units at each training step.
The Dense layer is the most common type of layer in a neural network, and it is typically used to transform the output of the previous layer into a format suitable for the task at hand. For example, in a classification task a Dense layer may turn the previous layer's output into a probability distribution over the classes; here it maps the 50 LSTM units to a single regression output.
To answer your questions: Dropout layers are not required after every layer, but they are often placed after recurrent layers such as LSTM or GRU to reduce overfitting. A Dense layer is usually the last layer in a model, but it is not strictly required; the final layer just has to produce output in the shape your task needs.
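For intuition, here is the same stack with the output shape of each step annotated. This is only a sketch: it assumes X_train has shape (samples, timesteps, 1), and the optimizer is an arbitrary choice since your snippet omitted one.
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense

numUnits = 50
model = Sequential()
# returns the full sequence of hidden states: (batch, timesteps, 50)
model.add(LSTM(units=numUnits, return_sequences=True,
               input_shape=(X_train.shape[1], 1)))
model.add(Dropout(0.2))            # randomly zeroes 20% of units while training
# returns only the last hidden state: (batch, 50)
model.add(LSTM(units=numUnits))
model.add(Dropout(0.2))
# maps the 50 units to a single regression value: (batch, 1)
model.add(Dense(units=1))
model.compile(optimizer='adam', loss='mean_squared_error')
model.summary()                    # prints the layer-by-layer output shapes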
I'm trying to use TensorFlow's MobileNetV2.
I don't understand why, but it seems that the last fully connected layer with the output categories (dimensionality 1000) is missing, and I'm left with what seem to be just the embeddings after the final convolutional layer.
Any idea why this is happening? How can I add, or where can I find, the pre-trained fully connected block?
Here is the code:
import numpy as np
import PIL.Image
import tensorflow as tf

image = np.array(PIL.Image.open("amsterdam.jpg"))
image = np.expand_dims(image, 0)
IMG_SIZE = image.shape[1:3]
IMG_SHAPE = IMG_SIZE + (3,)
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')
tf.keras.utils.plot_model(base_model, to_file='model.png', show_shapes=True)
Here you can see the structure of the neural network as I plotted it with tf.keras.utils.plot_model:
Any idea on how to fix this?
include_top=False: returns the model without the dense classification layers, so you can add your own.
include_top=True: returns the entire model, including the 1000-way classification head.
If you also want the dense layers for classification, use include_top=True, which is the default. When you set include_top=False, the dense layers are omitted on purpose, so that you can add your own dense layers and build a classifier that suits your needs.
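For example, a minimal sketch of both options (the 224x224 input size, the 10-class head, and the GlobalAveragePooling2D layer are illustrative assumptions, not taken from your code):
import tensorflow as tf

# Option 1: full pre-trained classifier, including the 1000-way dense layer.
# With include_top=True the ImageNet weights expect a fixed 224x224x3 input.
full_model = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                               include_top=True,
                                               weights='imagenet')

# Option 2: convolutional base only, with a custom classification head on top.
base_model = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                               include_top=False,
                                               weights='imagenet')
x = tf.keras.layers.GlobalAveragePooling2D()(base_model.output)
outputs = tf.keras.layers.Dense(10, activation='softmax')(x)   # 10 classes as an example
model = tf.keras.Model(base_model.input, outputs)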
I am trying to use the pre-trained InceptionV3 model. However, I want to remove the initial five layers and add my own custom layers. How can I do that? I tried model.layers.pop(0), but that alone will not solve the problem.
Edit:
tf.keras does not help either, as mentioned in the first answer:
model.layers.pop() doesn't work the same way in tf.keras as it does in standalone Keras. In tf.keras, model.layers is a view of the model, so you can't remove layers from it; what you can do instead is pick the layer whose output you want and build a new model on top of it. For example,
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model

base_model = InceptionV3(input_shape=shape, weights="imagenet", include_top=True)
# you don't want the last five layers:
base_model_output = base_model.layers[-6].output
# new layers
outputs = Dense(...)(base_model_output)
model = Model(base_model.input, outputs)
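For instance, the last two lines above could concretely look like this for a hypothetical 10-class problem (the class count and compile settings are illustrative assumptions, not from the question):
outputs = Dense(10, activation='softmax')(base_model_output)
model = Model(base_model.input, outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.summary()   # the model now ends at base_model.layers[-6] plus the new Dense head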
Since the first few layers starting from the input are changed, the pretrained weights for those layers cannot be reused. So the architecture can be taken directly from here and modified accordingly, instead of attempting complex model surgery:
https://github.com/keras-team/keras-applications/blob/master/keras_applications/inception_v3.py
I'm messing around with the Keras API in TensorFlow, attempting to implement an autoencoder. The Sequential model works, but I want to be able to use the encoder (first two layers) and the decoder (last two layers) separately, while reusing the weights of my already-trained model. Is there a way to do this? Do I have to make a custom model?
model = keras.Sequential()
model.add(encoder_1)
model.add(leaky_relu)
model.add(encoder_2)
model.add(leaky_relu2)
model.add(decoder_1)
model.add(leaky_relu3)
model.add(decoder_2)
encoder_model = keras.Sequential()
encoder_model.add(encoder_1)
encoder_model.add(leaky_relu)
encoder_model.add(encoder_2)
encoder_model.add(leaky_relu2)
decoder_model = keras.Sequential()
decoder_model.add(decoder_1)
decoder_model.add(leaky_relu3)
decoder_model.add(decoder_2)
I define my models like this, but trying to run predict on either the encoder or the decoder raises:
'Sequential' object has no attribute '_feed_input_names'
Yes, you should wrap the encoding and decoding layers in separate Model instances that you call separately. The Keras blog post on autoencoders should contain everything you need to know: https://blog.keras.io/building-autoencoders-in-keras.html
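For reference, a minimal functional-API sketch of that idea. The 784/64/32 sizes are placeholder assumptions, not your actual layers; because the same layer objects are reused in each Model, the encoder and decoder share the trained weights with the full autoencoder:
from tensorflow import keras
from tensorflow.keras import layers

# layer objects carry their weights, so reusing them in several Models shares weights
encoder_1 = layers.Dense(64)
encoder_2 = layers.Dense(32)
decoder_1 = layers.Dense(64)
decoder_2 = layers.Dense(784)

# full autoencoder
ae_input = keras.Input(shape=(784,))
h = layers.LeakyReLU()(encoder_1(ae_input))
code = layers.LeakyReLU()(encoder_2(h))
h = layers.LeakyReLU()(decoder_1(code))
reconstruction = decoder_2(h)
autoencoder = keras.Model(ae_input, reconstruction)

# standalone encoder: same layers and weights, different output
encoder = keras.Model(ae_input, code)

# standalone decoder: a fresh input routed through the decoding layers
latent = keras.Input(shape=(32,))
d = layers.LeakyReLU()(decoder_1(latent))
decoder = keras.Model(latent, decoder_2(d))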
I am using a convolutional neural net (CNN) called make_unet from here. It works, and the code runs with this CNN. But I know that in deep learning you have to initialize the weights for the optimization of the neural network.
The documentation in Keras clearly indicates the use of a kernel_initializer for weight initialization. However, I do not see any kernel_initializer in the make_unet function I am using.
Anyone who can provide some insight would be appreciated.
In Keras, initialisers are passed on a per-layer basis via the kernel_initializer and bias_initializer arguments, e.g.
Dense(64, kernel_initializer='random_uniform', bias_initializer='zeros')
All built-in layers come with a sensible default initialiser. For example, all convolutional layers use kernel_initializer='glorot_uniform', bias_initializer='zeros'. Keras gives you many alternative options, and you can also create your own custom initialisers.
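As a small illustrative sketch (the layer sizes here are arbitrary, not taken from make_unet):
import tensorflow as tf
from tensorflow.keras import layers, initializers

# built-in initialisers can be passed by name...
conv = layers.Conv2D(32, 3, kernel_initializer='he_normal', bias_initializer='zeros')

# ...or as initializer objects with configurable parameters
dense = layers.Dense(64,
                     kernel_initializer=initializers.RandomUniform(minval=-0.05, maxval=0.05),
                     bias_initializer=initializers.Zeros())

# a custom initialiser is just a callable taking (shape, dtype)
def my_init(shape, dtype=None):
    return tf.random.normal(shape, stddev=0.01, dtype=dtype)

custom_dense = layers.Dense(64, kernel_initializer=my_init)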