ConvLSTM2D for one-to-many network - python

I want to use ConvLSTM2D layers for a multi-output regression model. One image should be the input and, depending on the image, a certain number of values (padded with zeros) should be the output. My question is: which function can I use to feed the same image as input at every timestep?
If I'm using
import keras.backend as K
K.tile(input, number_timesteps)
I'm getting the error:
AttributeError: 'Tensor' object has no attribute '_keras_history'.
Is there any other way to solve this or do I have to input the same image multiple times?

All keras tensors in a model must be produced by a Layer.
When you use backend functions, you're not using layers.
You can use Lambda layers to wrap custom and backend functions:
tiledOutputs = Lambda(lambda x: K.tile(x, number_timesteps))(imageInputs)
Or add the layer to a sequential model:
model.add(Lambda(lambda x: K.tile(x, number_timesteps)))
But you're probably looking for K.stack([x]*number_timesteps, axis=1).
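For example, a minimal sketch wrapping K.stack in a Lambda layer and feeding the repeated image to a ConvLSTM2D; the image shape and the number_timesteps value here are just assumptions:
import tensorflow.keras.backend as K
from tensorflow.keras.layers import Input, Lambda, ConvLSTM2D
from tensorflow.keras.models import Model

number_timesteps = 5                       # hypothetical number of output steps
image_input = Input(shape=(64, 64, 3))     # hypothetical image shape
# Repeat the same image along a new time axis: (batch, timesteps, H, W, C)
repeated = Lambda(lambda x: K.stack([x] * number_timesteps, axis=1))(image_input)
sequence = ConvLSTM2D(16, 3, padding='same', return_sequences=True)(repeated)
model = Model(image_input, sequence)
model.summary()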

Related

How to get TensorFlow operations contained in Keras model

I have a TensorFlow Keras model (TensorFlow 2.6.0); here's a basic example:
import tensorflow as tf
x = inp = tf.keras.Input((5,))
x = tf.keras.layers.Dense(7, activation="relu")(x)
x = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inp, x)
I would like to get all the tf.Operation objects in the graph for the model, select specific operations, then create a new tf.function or tf.keras.Model to output the values of those tensors on arbitrary inputs.
For example, in my simple model above, I might want to get the outputs of all relu operators. I know in that case, I could redefine the model to include the output of that layer as another output of the model, but the point here is that I already have the model (it's much more complicated than above), and there are specific operators that I want to find to get the outputs of.
Have you tried this:
all_ops = tf.get_default_graph().get_operations()
If you get an empty list and you use TensorFlow 2.x, you may try this:
import tensorflow as tf
print(tf.__version__)
tf.compat.v1.disable_eager_execution() # disable eager execution
a = tf.constant([1],name='aa')
print(tf.compat.v1.get_default_graph().get_operations())
print(tf.compat.v1.get_default_graph().get_tensor_by_name('aa:0'))
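Alternatively, with TF 2.x and eager execution left on, one option (a sketch built from the example model above; filtering on "Relu" is just an illustration) is to trace the model into a concrete function and inspect that graph's operations:
import tensorflow as tf

x = inp = tf.keras.Input((5,))
x = tf.keras.layers.Dense(7, activation="relu")(x)
x = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inp, x)

# Trace the model into a graph and list its operations
concrete = tf.function(model).get_concrete_function(tf.TensorSpec((None, 5), tf.float32))
ops = concrete.graph.get_operations()
print([op.name for op in ops if op.type == "Relu"])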

Is it possible to create a model in Keras Functional API without an input layer?

I would like to create a model consisting of 2 convolutional, one flatten, and one dense layer in Keras. This would be a model with shared weights, so without any predefined input layer.
It is possible to do this the sequential way:
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv2D(10,3,2,'valid',activation=tf.nn.relu))
model.add(tf.keras.layers.Conv2D(20,3,2,'valid',activation=tf.nn.relu))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(200,activation=tf.nn.relu))
However, using the Functional API produces a TypeError:
model2 = tf.keras.layers.Conv2D(10,3,2,'valid',activation=tf.nn.relu)
model2 = tf.keras.layers.Conv2D(20,3,2,'valid',activation=tf.nn.relu)(model2)
model2 = tf.keras.layers.Flatten()(model2)
model2 = tf.keras.layers.Dense(200,activation=tf.nn.relu)(model2)
Error :
TypeError: Inputs to a layer should be tensors. Got: <tensorflow.python.keras.layers.convolutional.Conv2D object at 0x7fb060598100>
Is it impossible to do it this way, or am I missing something?
The Keras Sequential API is designed to be easier to use, and as a result it is less flexible than the Functional API. The benefit of this is that the input 'layer' shape can be inferred automatically from the shape of the data you pass to it. The downside is that this easier-to-use model is simplified, so you can't do things like use multiple inputs.
From the keras docs:
A Sequential model is not appropriate when:
Your model has multiple inputs or multiple outputs
Any of your layers has multiple inputs or multiple outputs
You need to do layer sharing
You want non-linear topology (e.g. a residual connection, a multi-branch model)
The Functional API is designed to be more flexible (e.g. multiple inputs), so it doesn't make any automatic inference for you, hence the error. You must explicitly pass an input layer in this case. For your use case it might seem odd that the shape isn't inferred automatically, but when you consider the wider range of use cases it makes sense.
So the second scenario should be:
model2 = tf.keras.layers.Input((10,3,2)) # specified input layer
model2 = tf.keras.layers.Conv2D(10,3,2,'valid',activation=tf.nn.relu)(model2)
model2 = tf.keras.layers.Conv2D(20,3,2,'valid',activation=tf.nn.relu)(model2)
model2 = tf.keras.layers.Flatten()(model2)
model2 = tf.keras.layers.Dense(200,activation=tf.nn.relu)(model2)
Update
If you want to create two separate models and join them together, you should use the Functional API, and due to its constraints you must therefore use input layers. So you could do something like:
import tensorflow as tf
from tensorflow.keras.layers import Input, Flatten, Dense, concatenate, Conv2D
from tensorflow.keras.models import Model
input1 = Input((10,3,2))
model1 = Dense(200,activation=tf.nn.relu)(input1)
input2 = Input((10,3,2))
model2 = Dense(200,activation=tf.nn.relu)(input2)
merged = concatenate([model1, model2])
merged = Conv2D(10,3,2,'valid',activation=tf.nn.relu)(merged)
merged = Flatten()(merged)
merged = Dense(200,activation=tf.nn.relu)(merged)
model = Model(inputs=[input1, input2], outputs=merged)
Above we have two separate inputs, each followed by a Dense layer. You can build these separate branches however you want; to merge them and pass them through a convolutional layer, you need a tf.keras.layers.concatenate layer, and you can then continue the joint model from there. Wrapping the whole thing inside a Model object then gives you access to training and inference methods like fit/predict etc.
The linking in Keras works by propagating tensors through the layers. So in your second example, at the beginning model2 is an instance of a keras.layers.Layer and not a tf.Tensor; that's why you get the error.
Input creates a tensor which can then be used to link the layers. So unless there is a specific reason not to, you just add one:
model2 = tf.keras.layers.Input((10,3,2))
model2 = tf.keras.layers.Conv2D(10,3,2,'valid',activation=tf.nn.relu)(model2)
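Since the original goal was weight sharing: with the Functional API you can also wrap the layers in a sub-model and call it on several inputs, which reuses the same weights. A rough sketch, assuming a 28x28x1 input shape:
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, Flatten, Dense, concatenate
from tensorflow.keras.models import Model

# Build the shared sub-model once; its weights are created a single time
inp = Input((28, 28, 1))
x = Conv2D(10, 3, 2, 'valid', activation=tf.nn.relu)(inp)
x = Conv2D(20, 3, 2, 'valid', activation=tf.nn.relu)(x)
x = Flatten()(x)
x = Dense(200, activation=tf.nn.relu)(x)
shared = Model(inp, x)

# Calling the same sub-model on two different inputs reuses the same weights
a = Input((28, 28, 1))
b = Input((28, 28, 1))
merged = concatenate([shared(a), shared(b)])
model = Model([a, b], merged)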

Add extra output to existing Chainer network

Let's say I create a simple fully connected network:
import chainer
import chainer.functions as F
import chainer.links as L
from chainer import Sequential
model = Sequential(
L.Linear(n_in, n_hidden),
F.relu,
L.Linear(n_hidden, n_hidden),
F.relu,
L.Linear(n_hidden, n_out)
)
# Compute the forward pass
y = model(x)
I want to train this model with n_out outputs and then, after it is trained, add extra outputs before fine-tuning the network.
I have found ways to remove the last layer in order to retrain a new last layer, but this is not what I want: I want to keep the weights of the existing outputs. The weights of the new outputs would be initialized randomly.
How about introducing an additional linear layer L.Linear(n_hidden, n_extra_out) (without removing any of the existing ones), where n_extra_out is the number of additional outputs? You can then extract the output of the last F.relu (you might want to replace the Sequential object with a chainer.Chain implementation for this, similar to this example: https://github.com/chainer/chainer/blob/master/examples/mnist/train_mnist.py#L16) and pass it as input to both your pretrained last linear layer and the new layer. The two outputs can then be concatenated using F.concat.
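A rough sketch of what that could look like (the class name ExtendedMLP and n_extra_out are made up for illustration; the pretrained weights would be copied into out_old after construction):
import chainer
import chainer.functions as F
import chainer.links as L

class ExtendedMLP(chainer.Chain):
    def __init__(self, n_in, n_hidden, n_out, n_extra_out):
        super().__init__()
        with self.init_scope():
            self.l1 = L.Linear(n_in, n_hidden)
            self.l2 = L.Linear(n_hidden, n_hidden)
            self.out_old = L.Linear(n_hidden, n_out)        # load the pretrained weights here
            self.out_new = L.Linear(n_hidden, n_extra_out)  # randomly initialized extra outputs

    def forward(self, x):
        h = F.relu(self.l1(x))
        h = F.relu(self.l2(h))
        # Concatenate pretrained and new outputs along the feature axis
        return F.concat((self.out_old(h), self.out_new(h)), axis=1)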

What is the difference between these two ways of building a model in keras?

I am new to Keras, and after going through a few tutorials I started building a model and found these two styles of implementation. However, I am getting an error in the first one while the second one works fine. Can someone explain the difference between the two?
First Method:
visible = Embedding(QsVocabSize, 1024, input_length=max_length_inp, mask_zero=True)
encoder = LSTM(100,activation='relu')(visible)
Second Method:
model = Sequential()
model.add(Embedding(QsVocabSize, 1024, input_length=max_length_inp, mask_zero=True))
model.add(LSTM(100,activation ='relu'))
This is the error I get:
ValueError: Layer lstm_59 was called with an input that isn't a symbolic tensor. Received type: <class 'keras.layers.embeddings.Embedding'>. Full input: [<keras.layers.embeddings.Embedding object at 0x00000207BC7DBCC0>]. All inputs to the layer should be tensors.
They're two ways of creating DL models in Keras. The first code snippet follows the functional style. This style is used for creating complex models with multiple inputs/outputs, shared layers, etc.
https://keras.io/getting-started/functional-api-guide/
The second code snippet is the Sequential style. It creates simple models that just involve stacking layers.
https://keras.io/getting-started/sequential-model-guide/
If you read the functional API guide, you'll notice the following point:
'A layer instance is callable (on a tensor), and it returns a tensor'
Now the error you're seeing makes sense: this line only creates the layer and doesn't invoke it by passing it a tensor.
visible = Embedding(QsVocabSize, 1024, input_length=max_length_inp, mask_zero=True)
Subsequently, passing this Embedding object to the LSTM layer throws an error, as the layer expects a tensor.
This is an example from the functional API guide. Notice the output tensors getting passed from one layer to another.
main_input = Input(shape=(100,), dtype='int32', name='main_input')
x = Embedding(output_dim=512, input_dim=10000, input_length=100)(main_input)
lstm_out = LSTM(32)(x)
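Applied to the code in the question, the first method works once an Input tensor is created and the layers are called on tensors (a sketch, assuming QsVocabSize and max_length_inp are defined as in the question):
from keras.layers import Input, Embedding, LSTM

visible = Input(shape=(max_length_inp,), dtype='int32')
embedded = Embedding(QsVocabSize, 1024, input_length=max_length_inp, mask_zero=True)(visible)
encoder = LSTM(100, activation='relu')(embedded)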

Keras Concatenate Layers: Difference between different types of concatenate functions

I just recently started playing around with Keras and got into making custom layers. However, I am rather confused by the many different types of layers with slightly different names but with the same functionality.
For example, there are 3 different forms of the concatenate function from https://keras.io/layers/merge/ and https://www.tensorflow.org/api_docs/python/tf/keras/backend/concatenate
keras.layers.Concatenate(axis=-1)
keras.layers.concatenate(inputs, axis=-1)
tf.keras.backend.concatenate()
I know the 2nd one is used for functional API but what is the difference between the 3? The documentation seems a bit unclear on this.
Also, for the 3rd one, I have seen code that does the following. Why must the ._keras_shape line come after the concatenation?
# Concatenate the summed atom and bond features
atoms_bonds_features = K.concatenate([atoms, summed_bond_features], axis=-1)
# Compute fingerprint
atoms_bonds_features._keras_shape = (None, max_atoms, num_atom_features + num_bond_features)
Lastly, under keras.layers, there always seem to be 2 duplicates. For example, Add() and add(), and so on.
First, the backend: tf.keras.backend.concatenate()
Backend functions are supposed to be used "inside" layers. You'd only use this in Lambda layers, custom layers, custom loss functions, custom metrics, etc.
It works directly on "tensors".
It's not the right choice if you're not going deep into customizing. (And it was a bad choice in your example code; see details at the end.)
If you dive deep into keras code, you will notice that the Concatenate layer uses this function internally:
import keras.backend as K

class Concatenate(_Merge):
    #blablabla
    def _merge_function(self, inputs):
        return K.concatenate(inputs, axis=self.axis)
    #blablabla
Then, the Layer: keras.layers.Concatenate(axis=-1)
As any other keras layers, you instantiate and call it on tensors.
Pretty straightforward:
#in a functional API model:
inputTensor1 = Input(shape) #or some tensor coming out of any other layer
inputTensor2 = Input(shape2) #or some tensor coming out of any other layer
#first parentheses are creating an instance of the layer
#second parentheses are "calling" the layer on the input tensors
outputTensor = keras.layers.Concatenate(axis=someAxis)([inputTensor1, inputTensor2])
This is not suited for sequential models, unless the previous layer outputs a list (this is possible but not common).
Finally, the concatenate function from the layers module: keras.layers.concatenate(inputs, axis=-1)
This is not a layer. This is a function that will return the tensor produced by an internal Concatenate layer.
The code is simple:
def concatenate(inputs, axis=-1, **kwargs):
    #blablabla
    return Concatenate(axis=axis, **kwargs)(inputs)
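So calling the function is equivalent to creating the Concatenate layer and calling it in a single step, e.g. (reusing the tensors from the example above):
outputTensor = keras.layers.concatenate([inputTensor1, inputTensor2], axis=someAxis)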
Older functions
In Keras 1, people had functions that were meant to receive "layers" as input and return an output "layer". Their names were related to the word "merge".
But since Keras 2 doesn't mention or document these, I'd probably avoid using them, and if old code is found, I'd probably update it to proper Keras 2 code.
Why the _keras_shape word?
This backend function was not supposed to be used in high-level code. The coder should have used a Concatenate layer.
atoms_bonds_features = Concatenate(axis=-1)([atoms, summed_bond_features])
#just this line is perfect
Keras layers add the _keras_shape property to all their output tensors, and Keras uses this property for inferring the shapes of the entire model.
If you use any backend function "outside" a layer or loss/metric, your output tensor will lack this property, and an error will appear saying that _keras_shape doesn't exist.
The coder is creating a bad workaround by adding the property manually, when it should have been added by a proper Keras layer. (This may work now, but if Keras is updated, this code will break while proper code keeps working.)
Keras historically supports 2 different interfaces for its layers: the new functional one, and the old one that requires model.add() calls, hence the 2 different functions.
As for TF: its concatenate() function does not do everything that Keras requires, hence the additional call to set the ._keras_shape variable correctly so as not to upset Keras, which expects that variable to have a particular value.
