I am looking for something like this:
inputs = tf.keras.Input(shape=input_shape)
# network structure
x = layers.Dense(4, activation='relu')(inputs)
x = layers.Dense(4, activation='relu')(x)
#output layer
outputs = layers.Dense(output_size, activation='linear')(x)
#scaling layer??
outputs = layers.Scale(output_size)(outputs)
#build model
model = tf.keras.models.Model(inputs=inputs, outputs=outputs, name='mymodel')
I want the layer to scale my outputs by a scalar, and I don't want to specify this scalar myself; rather, the model should learn it on its own.
Is there such a layer?
Or can I achieve this with a Multiply layer in combination with something like sympy?
I need this for a quantum-computing model (made with tfq) which can only give outputs between 0 and 1. I can't use a dense layer, because that would bring in classical machine learning, which I don't want to use.
A scale layer is usually unnecessary because the desired information is in the relationship between the outputs.
If you want specific values, you probably need to change the loss function.
However, this guide shows how to write a custom layer: https://keras.io/guides/making_new_layers_and_models_via_subclassing/
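For reference, here is a minimal sketch of such a layer with a single trainable scalar (the class name LearnableScale and the initializer are my own choices, not from the guide):
import tensorflow as tf

class LearnableScale(tf.keras.layers.Layer):
    """Multiplies its input by a single learned scalar."""
    def build(self, input_shape):
        # one trainable weight, initialized to 1 so the layer starts as an identity
        self.scale = self.add_weight(name='scale', shape=(),
                                     initializer='ones', trainable=True)
    def call(self, inputs):
        return inputs * self.scale

outputs = LearnableScale()(outputs)  # in place of the hypothetical layers.Scale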
I am trying to merge output from two models and give them as input to the third model using keras sequential model.
Model1 :
inputs1 = Input(shape=(750,))
x = Dense(500, activation='relu')(inputs1)
x = Dense(100, activation='relu')(x)
Model2 :
inputs2 = Input(shape=(750,))
y = Dense(500, activation='relu')(inputs2)
y = Dense(100, activation='relu')(y)
Model3 :
merged = Concatenate([x, y])
final_model = Sequential()
final_model.add(merged)
final_model.add(Dense(100, activation='relu'))
final_model.add(Dense(3, activation='softmax'))
Up to this point, my understanding is that the outputs of the two models, x and y, are merged and given as input to the third model. But when I fit it all like,
final_model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
final_model.fit([in1, in2], np_res_array)
in1 and in2 are two numpy ndarrays of dimension 10000*750 which contain my training data, and np_res_array is the corresponding target. This gives me the error 'list' object has no attribute 'shape'. As far as I know, this is how we give multiple inputs to a model, so what is this error? How do I resolve it?
You can't do this using the Sequential API, for two reasons:
Sequential models, as their name suggests, are a sequence of layers where each layer is connected directly to its previous layer and therefore they cannot have branches (e.g. merge layers, multiple input/output layers, skip connections, etc.).
The add() method of Sequential API accepts a Layer instance as its argument and not a Tensor instance. In your example merged is a Tensor (i.e. concatenation layer's output).
Further, the correct way of using Concatenate layer is like this:
merged = Concatenate()([x, y])
However, you can also use concatenate (note the lowercase "c"), its equivalent functional interface, like this:
merged = concatenate([x, y])
Finally, to be able to construct that third model you also need to use the functional API.
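Putting those pieces together, a minimal functional-API sketch of the full merged model could look like this (layer sizes and the in1/in2/np_res_array arrays are taken from the question; the variable names z and outputs are mine):
from keras.layers import Input, Dense, Concatenate
from keras.models import Model

# branch 1
inputs1 = Input(shape=(750,))
x = Dense(500, activation='relu')(inputs1)
x = Dense(100, activation='relu')(x)
# branch 2
inputs2 = Input(shape=(750,))
y = Dense(500, activation='relu')(inputs2)
y = Dense(100, activation='relu')(y)
# merge the branches and stack the "third model" layers on top
merged = Concatenate()([x, y])
z = Dense(100, activation='relu')(merged)
outputs = Dense(3, activation='softmax')(z)
final_model = Model(inputs=[inputs1, inputs2], outputs=outputs)
final_model.compile(optimizer='rmsprop', loss='categorical_crossentropy',
                    metrics=['accuracy'])
final_model.fit([in1, in2], np_res_array)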
Normally, there's no need to produce a one-hot vector output in a neural network; however, I am trying to train a GAN, so the output of one network needs to match the input of the other. Currently the last layer in my generator is a dense softmax, so I have a probability distribution over the outputs, but I need to convert that vector to a one-hot so it matches the input the discriminator expects. There doesn't seem to be any built-in layer to do this with Keras. I'm trying to write a lambda expression, but can't seem to get it to work.
Here is the code right now:
s1 = Input(shape=(self.sentence_length,))
embed = Embedding(output_dim=self.embedding_vector_length,
input_dim=self.vocabulary_size,
input_length=self.sentence_length)(s1)
x = concatenate([embed,embed],axis=1)
x = LSTM(self.latent_dimension,return_sequences=True)(x)
x = LSTM(self.embedding_vector_length,return_sequences=True)(x)
x = Lambda(lambda s: s[:,15:,:])(x)
x = Dense(self.vocabulary_size,activation='softmax')(x)
# x = Lambda(???)
model = Model(s1,x)
model.summary()
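For what it's worth, one way to fill in that Lambda is a hard argmax followed by tf.one_hot; this is only a sketch, and note that tf.argmax has no gradient, so backpropagation will not reach the generator through this layer:
import tensorflow as tf
# convert each softmax distribution to a hard one-hot vector
# caution: argmax is non-differentiable, which matters for GAN training
x = Lambda(lambda s: tf.one_hot(tf.argmax(s, axis=-1),
                                depth=self.vocabulary_size))(x)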
I want to interpret an RNN by looking at the sequence-by-sequence values. It is possible to output these values with return_sequences. However, those values are then used as inputs into the next layer (e.g., a dense activation layer). I would like to output only the last value but record all values over the full sequence for interpretation. What's the easiest way to do this?
Create two models that share the same layers, but in one of them feed the Dense layer only the last step of the RNN:
inputs = Input(inputShape)
outs = RNN(..., return_sequences=True)(inputs)
modelSequence = Model(inputs,outs)
#take only the last step
outs = Lambda(lambda x: x[:,-1])(outs)
outs = Dense(...)(outs)
modelSingle = Model(inputs,outs)
Use modelSingle.fit(x_data, y_data) to train as you normally would.
Use modelSequence.predict(x_data) to see the results of the RNN without training.
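In other words (the epochs value here is arbitrary):
# train via the single-output model; the layers are shared
modelSingle.fit(x_data, y_data, epochs=10)
# inspect the per-step RNN outputs without further training
sequence_outputs = modelSequence.predict(x_data)
print(sequence_outputs.shape)  # (samples, timesteps, units)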
I'm trying to combine two outputs that are produced by the same network, which makes predictions on a 4-class task and a 10-class task. I then combine these outputs into a length-14 array, which I use as my final target.
While this seems to work, in practice the predictions are always for a single class, so the model produces a probability distribution that selects only 1 of the 14 options instead of 2. What I actually need is 2 predictions, one for each task, and I want this all to be produced by the same model.
input = Input(shape=(100, 100), name='input')
lstm = LSTM(128)(input)
output1 = Dense(4, activation='softmax', name='output1')(lstm)
output2 = Dense(10, activation='softmax', name='output2')(lstm)
output3 = concatenate([output1, output2])
model = Model(inputs=[input], outputs=[output3])
My issue here is determining an appropriate loss function and method of prediction. For prediction I can simply grab the output of each softmax layer, but I'm unsure how to set up the loss function so that each of these outputs can be trained.
Any ideas?
Thanks a lot
You don't need to concatenate the outputs, your model can have two outputs:
input = Input(shape=(100, 100), name='input')
lstm = LSTM(128)(input)
output1 = Dense(4, activation='softmax', name='output1')(lstm)
output2 = Dense(10, activation='softmax', name='output2')(lstm)
model = Model(inputs=[input], outputs=[output1, output2])
Then to train this model, you typically use two losses that are weighted to produce a single loss:
model.compile(optimizer='sgd',
              loss=['categorical_crossentropy', 'categorical_crossentropy'],
              loss_weights=[0.2, 0.8])
Just make sure to format your data right, as now each input sample corresponds to two output labeled samples. For more information check the Functional API Guide.
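Concretely, the training call would look something like this, where x, y1, and y2 are my placeholder names for the inputs and the one-hot label arrays of shape (num_samples, 4) and (num_samples, 10):
model.fit(x, [y1, y2], epochs=10, batch_size=32)
# prediction then returns one array per output
pred4, pred10 = model.predict(x)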
The feature maps can be obtained using:
from keras import backend as K
# with a Sequential model
get_3rd_layer_output = K.function([model.layers[0].input],
                                  [model.layers[3].output])
layer_output = get_3rd_layer_output([X])[0]
This is good for visualisation of the data. But I also intend to modify the output of a layer and then feed this output back into the network. Can anyone suggest how I can do this?
Thanks
Assuming you have a model similar to this:
model = Sequential()
model.add(Dense(1000, input_dim=1000))
model.add(Dense(1000))
And you want to run a custom modification on the output of the first layer before passing it to the second layer, you can use a Lambda layer, like so:
f = ...  # some tensor-level function built from backend (K) ops
model = Sequential()
model.add(Dense(1000, input_dim=1000))
model.add(Lambda(lambda x: f(x)))
model.add(Dense(1000))
If you just want to do this once you can do something like this:
modified_layer_output = your_old_function([X])[0] * some_modification
get_final_layer_output = K.function([model.layers[3].input],
                                    [model.layers[-1].output])
result = get_final_layer_output([modified_layer_output])
You could also create a new model to learn on your modified layer output.
Edit:
You could also write your own Keras layer to do whatever you want with the input and pass it to the next layer, as shown here: https://keras.io/layers/writing-your-own-keras-layers/
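As a rough sketch of that approach (the class name ModifyOutput and the clipping operation are placeholders of mine, not from the guide):
from keras import backend as K
from keras.layers import Layer

class ModifyOutput(Layer):
    """Applies some modification to its input and passes it on."""
    def call(self, inputs):
        # replace this with whatever modification you need;
        # clipping is just a stand-in example
        return K.clip(inputs, -1.0, 1.0)
    def compute_output_shape(self, input_shape):
        # this particular modification does not change the shape
        return input_shape

# usage: model.add(ModifyOutput()) between the two Dense layers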