Keras + Tensorflow: 'ConvLSTM2D' object has no attribute 'outbound_nodes'

I'm trying to have a ConvLSTM as part of my functioning TensorFlow network. Because I had some issues with the TensorFlow ConvLSTM implementation, I settled for using the Keras ConvLSTM2D layer instead.
To make Keras available in my TensorFlow session I followed the suggestion from this blog post (I'm using the TensorFlow backend):
https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html
import tensorflow as tf
sess = tf.Session()
from keras import backend as K
K.set_session(sess)
A snippet of my code (the part that causes the issue):
# state has a shape of [1, 75, 32, 32] with batchsize=1
state = tf.concat([screen, screen2, non_spatial], axis=1)
# Reshaping state to get time=1 to have the right shape for the ConvLSTM
state_reshaped = tf.reshape(state, [1, 1, 75, 32, 32])
# Keras ConvLSTM2D Layer
# I tried leaving out the batch_size for the input_shape but it didn't make a difference for the error and it seems to be fine
lstm_layer = ConvLSTM2D(filters=5, kernel_size=(3, 3), input_shape=(1, 1, 75, 32, 32), data_format='channels_first', stateful=True)(state_reshaped)
fc1 = layers.fully_connected(inputs=layers.flatten(lstm_layer), num_outputs=256, activation_fn=tf.nn.relu)
This gives me the following error:
AttributeError: 'ConvLSTM2D' object has no attribute 'outbound_nodes'
I have no idea what this means. I thought it might have to do with mixing the Keras ConvLSTM and TensorFlow's flatten, so I tried using the Keras Flatten() instead, like this:
# lstm_layer shape is (5, 5, 30, 30)
lstm_layer = Flatten(data_format='channels_first')(lstm_layer)
fc1 = layers.fully_connected(inputs=lstm_layer, num_outputs=256, activation_fn=tf.nn.relu)
and got the following error: ValueError: The last dimension of the inputs to 'Dense' should be defined. Found 'None'.
This error is caused by Flatten(), for whatever reason, having an output shape of (?, ?), while the fully connected layer needs a defined last dimension. I don't understand why it would be undefined; it was defined before.
Using Reshape((4500,))(lstm_layer) instead gives me the same no attribute 'outbound_nodes' error.
I googled the issue and I seem to not be the only one but I couldn't find a solution.
How can I solve this issue?
Is the unknown output shape of Flatten() a bug or intended behavior, and if so, why?

I encountered the same problem and had a bit of a dig into the TensorFlow code. The problem is that some refactoring was done for Keras 2.2.0 and tf.keras hasn't yet been updated to this new API.
The 'outbound_nodes' attribute was renamed to '_outbound_nodes' in Keras 2.2.0. It's pretty easy to fix; there are two references you need to update in base.py:
/site-packages/tensorflow/python/layers/base.py
After updating it works fine for me.

In my case I was getting the error on a custom subclass, but the following solution can be applied nonetheless: subclass ConvLSTM2D and add this to your new class:
@property
def outbound_nodes(self):
    if hasattr(self, '_outbound_nodes'):
        print("outbound_nodes called but _outbound_nodes found")
    return getattr(self, '_outbound_nodes', [])
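For completeness, a minimal sketch of such a subclass (the class name PatchedConvLSTM2D is just illustrative; this assumes the standalone keras package where the attribute was renamed):
from keras.layers import ConvLSTM2D

class PatchedConvLSTM2D(ConvLSTM2D):
    # Forward the old attribute name (looked up by the TensorFlow layer code)
    # to the attribute that Keras 2.2.0 renamed to _outbound_nodes.
    @property
    def outbound_nodes(self):
        return getattr(self, '_outbound_nodes', [])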

I found the solution, even though I don't know why it works.
Currently I'm using TensorFlow 1.8 and Keras 2.2. If you downgrade Keras to ~2.1.1 it works without any problems and you can easily use Keras layers together with TensorFlow. This fixed AttributeError: 'ConvLSTM2D' object has no attribute 'outbound_nodes', and then I just used layers.flatten(lstm_layer) and everything worked.
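For reference, the downgrade is just a version pin, e.g. (the exact 2.1.x release is up to you):
pip install keras==2.1.1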

As others have pointed out, this is because of a mismatch between your installed tensorflow and keras libraries.
Their solutions work, but in my opinion the cleanest and easiest way to solve this is to use the Keras layers contained within the tensorflow package itself rather than the keras library directly.
i.e., replace
from keras.layers import ConvLSTM2D
by
from tensorflow.python.keras.layers import ConvLSTM2D
This will ensure that your tensorflow and keras function calls / objects are always compatible; it solved this issue for me.
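Applied to the snippet from the question, that means only the import changes (a sketch; the rest of the code stays as it was):
from tensorflow.python.keras.layers import ConvLSTM2D

lstm_layer = ConvLSTM2D(filters=5, kernel_size=(3, 3),
                        input_shape=(1, 1, 75, 32, 32),
                        data_format='channels_first',
                        stateful=True)(state_reshaped)
fc1 = layers.fully_connected(inputs=layers.flatten(lstm_layer),
                             num_outputs=256, activation_fn=tf.nn.relu)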

Related

Keras kernel_initializer must either be wrapped in an init_scope or callable error

I noticed that assigning the keras.initializer inside a layer results in an error claiming that variable initializers must either be wrapped in an init_scope or callable. However, I fail to see how my use case below differs from the example provided here. Is there a workaround for this issue, or am I making some obvious error in using Keras initializers?
Here is the minimal example that I could come up with:
import tensorflow as tf
from keras.models import *
from keras.layers import *
from keras.optimizers import *
from tensorflow.keras import initializers
inputs_test=Input((512,512,3))
initializer_truncated_norm = initializers.TruncatedNormal(mean=0, stddev=0.02)
deconv_filter = Conv2DTranspose(3, (2, 2), strides=(2, 2), padding='same', kernel_initializer=initializer_truncated_norm)(inputs_test)
model2 = Model(inputs=inputs_test, outputs=deconv_filter)
optimizer = Adam(lr=1e-4)
model2.compile(optimizer=optimizer, loss='mse')
model2.summary()
Here is the exact error that I get when running this code:
ValueError: Tensor-typed variable initializers must either be wrapped
in an init_scope or callable (e.g., tf.Variable(lambda : tf.truncated_normal([10, 40]))) when building functions. Please file
a feature request if this restriction inconveniences you.
I ran your example without errors with tf==2.3.1 and keras==2.4.0.
I also tried it with tf==2.0 in Colab and it worked OK again. I suggest upgrading to the latest TF and trying again.
Also, change your imports from keras to tensorflow.keras.
If it still fails, post the full error stack and version info here.
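In other words, the minimal example with consistent tensorflow.keras imports would look roughly like this (a sketch, not a guaranteed fix for every version combination):
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Conv2DTranspose
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import initializers

inputs_test = Input((512, 512, 3))
init = initializers.TruncatedNormal(mean=0, stddev=0.02)
deconv_filter = Conv2DTranspose(3, (2, 2), strides=(2, 2), padding='same',
                                kernel_initializer=init)(inputs_test)
model2 = Model(inputs=inputs_test, outputs=deconv_filter)
model2.compile(optimizer=Adam(learning_rate=1e-4), loss='mse')
model2.summary()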

tflite quantization how to change the input dtype

see possible solution at the end of the post
I am trying to fully quantize the keras-vggface model from rcmalli to run on an NPU. The model is a Keras model (not tf.keras).
When using TF 1.15 for quantization with:
print(tf.version.VERSION)
num_calibration_steps=5
converter = tf.lite.TFLiteConverter.from_keras_model_file('path_to_model.h5')
#converter.post_training_quantize = True  # This only makes the weights int8 but does not fully quantize the model
def representative_dataset_gen():
    for _ in range(num_calibration_steps):
        pfad = 'path_to_image(s)'
        img = cv2.imread(pfad)
        # Get sample input data as a numpy array in a method of your choosing.
        yield [img]
converter.representative_dataset = representative_dataset_gen
tflite_quant_model = converter.convert()
open("quantized_model", "wb").write(tflite_quant_model)
The model is converted but as I need full int8 quantization, I add:
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8 # or tf.uint8
converter.inference_output_type = tf.int8 # or tf.uint8
This error message appears:
ValueError: Cannot set tensor: Got value of type UINT8 but expected type FLOAT32 for input 0, name: input_1
Clearly, the input of the model still requires float32.
Questions:
Do I have to adapt the quantization method so that the input dtype is changed? or
Do I have to change the input layer of the model to dtype int8 beforehand?
Or is the error actually reporting that the model is not quantized at all?
If 1 or 2 is the answer, would you also have a best-practice tip for me?
Addition:
Using:
h5_path = 'my_model.h5'
model = keras.models.load_model(h5_path)
model.save(os.getcwd() +'/modelTF2')
to save the h5 as a pb with TF 2.2, and then using converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir),
since TF 2.x tflite takes floats and converts them to uint8 internally, I thought that could be a solution. Unfortunately, this error message appears:
tf.lite.TFLiteConverter.from_keras_model giving 'str' object has no attribute 'call'
Apparently TF 2.x cannot handle pure Keras models.
Using tf.compat.v1.lite.TFLiteConverter.from_keras_model_file() to solve this error just repeats the error from above, as we are back at the "TF 1.15" level again.
Addition 2
Another solution is to transfer the keras model to tf.keras manually. I will look into that if there is no other solution.
Regarding the comment from Meghna Natraj:
To recreate the model (using TF 1.13.x) just:
pip install git+https://github.com/rcmalli/keras-vggface.git
and
from keras_vggface.vggface import VGGFace
pretrained_model = VGGFace(model='resnet50', include_top=False, input_shape=(224, 224, 3), pooling='avg') # pooling: None, avg or max
pretrained_model.summary()
pretrained_model.save("my_model.h5") #using h5 extension
The input layer is connected. Too bad, that looked like a good/easy fix.
Possible Solution
It seems to work using TF 1.15.3; I used 1.15.0 beforehand. I will check whether I accidentally did something else differently.
A possible reason why this fails is that the model has input tensors that are not connected to the output tensor, i.e., they are probably unused.
Here is a colab notebook where I've reproduced this error. Modify the io_type at the beginning of the notebook to tf.uint8 to see an error similar to the one you got.
SOLUTION
You need to manually inspect the model to see whether there are any inputs that are dangling/lost/not connected to the output, and remove them.
Post a link to the model and I can try to debug it as well.
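For reference, a hedged sketch of what a full-integer conversion looks like in TF 2.x once the model loads cleanly (the path, image size and calibration data are placeholders; the representative dataset must yield float32 tensors matching the model input):
import numpy as np
import tensorflow as tf

def representative_dataset_gen():
    for _ in range(5):
        # Placeholder calibration sample; feed real preprocessed images here.
        yield [np.random.random((1, 224, 224, 3)).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model('path_to_saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
open("quantized_model.tflite", "wb").write(converter.convert())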

'tensorflow' has no attribute 'Session'

I am trying to convert a Tensor to a numpy array.
The tensor I have has the shape below:
LastDenseLayer.output.shape
TensorShape([None, 128])
When I run the code below,
with tf.Session() as sess:
    LastLayer = LastDenseLayer.output.eval()
I get the error below:
module 'tensorflow' has no attribute 'Session'
I am running a Keras model and trying to get the values of a specific layer out of it.
I am unable to understand what is wrong here.
Regards
Sachin
TensorFlow 2.x removed tf.Session because eager execution is now the default. Please refer to the TensorFlow migration guide for more information.
It's recommended to update your code to meet TensorFlow 2.0 requirements. As a quick solution you can use tf.compat.v1.Session() instead of tf.Session():
with tf.compat.v1.Session() as sess:
    LastLayer = LastDenseLayer.output.eval()
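Since the goal is to read the values of a specific layer, an alternative that avoids sessions entirely in TF 2.x is to build a small feature-extractor model and call it eagerly (a sketch; `model` and the dummy input are assumptions based on the question):
import numpy as np
import tensorflow as tf

# Model whose output is the layer of interest.
extractor = tf.keras.Model(inputs=model.inputs, outputs=LastDenseLayer.output)
# Calling it on data returns an EagerTensor; .numpy() gives the array.
dummy = np.zeros((1,) + model.input_shape[1:], dtype="float32")
layer_values = extractor(dummy).numpy()
print(layer_values.shape)  # (1, 128), matching TensorShape([None, 128])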

Layer not built error, even after model.build() in tensorflow 2.0.0

Reference I was following:
https://www.tensorflow.org/api_docs/python/tf/keras/Model#save
I really want to run the model, give it some inputs, and grab some layer outputs coming from inside the model.
model = tf.keras.models.load_model('emb_movielens100k_all_cols_dec122019')
input_shape = (None, 10)
model.build(input_shape)
All good so far; no errors no warnings.
model.summary()
ValueError: You tried to call `count_params` on IL, but the layer isn't built. You can build it manually via: `IL.build(batch_input_shape)`
How to fix?
The following code does not fix it:
IL.build(input_shape) # no
model.layer-0.build(input_shape) # no
This seems to work, but it's a long way from my goal of running the model and grabbing some layer outputs. Isn't there an easy way in TF 2.0.0?
layer1 = model.get_layer(index=1)
This throws an error:
model = tf.saved_model.load('emb_movielens100k_all_cols_dec122019')
input_shape = (None, 10)
model.build(input_shape) #AttributeError: '_UserObject' object has no attribute 'build'
The fix was to use save_model(), not model.save(). I also needed to use save_format="h5" during saving, not the default format. Like this:
tf.keras.models.save_model(model, "h5_emb.hp5", save_format="h5")
I also needed to use load_model(), not saved_model.load(), to load the model from disk into memory. Like this:
model = tf.keras.models.load_model('h5_emb.hp5')
The other tutorial and documentation ways of doing save and load returned a model that did not work right for predictions or summary.
This is tensorflow version 2.0.0.
Hope this helps others.
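Once the h5-loaded model works, the original goal of feeding inputs and grabbing an internal layer's outputs can be done with a small wrapper model; a sketch, where the layer index comes from the question and the sample data is purely illustrative:
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model('h5_emb.hp5')
intermediate = tf.keras.Model(inputs=model.input,
                              outputs=model.get_layer(index=1).output)
sample = np.random.random((1, 10)).astype("float32")  # guessed dtype/shape for (None, 10)
layer_outputs = intermediate.predict(sample)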

How to implement stacked RNNs in Tensorflow?

I want to implement an RNN using Tensorflow 1.13 on a GPU. Following the official recommendation, I wrote the following code to get a stack of RNN cells:
import tensorflow.keras as tk

lstm = [tk.layers.CuDNNLSTM(128) for _ in range(2)]
cells = tk.layers.StackedRNNCells(lstm)
However, I receive an error message:
ValueError: ('All cells must have a state_size attribute. received cells:', [< tensorflow.python.keras.layers.cudnn_recurrent.CuDNNLSTM object at 0x13aa1c940>])
How can I correct it?
This may be a TensorFlow bug and I would suggest creating an issue on GitHub. However, if you want to bypass the bug, you can use:
import tensorflow as tf
import tensorflow.keras as tk
lstm = [tk.layers.CuDNNLSTM(128) for _ in range(2)]
stacked_cells = tf.nn.rnn_cell.MultiRNNCell(lstm)
This will work but it will give a deprecation warning that you can suppress.
Thanks @qlzh727. Here, I quote the response:
Both StackedRNNCells and MultiRNNCell only work with a cell, not a layer. The difference between a cell and a layer in an RNN is that a cell processes only one time step within the whole sequence, whereas a layer processes the whole sequence. You can treat an RNN layer as:
for t in whole_time_steps:
    output_t, state_t = cell(input_t, state_t-1)
If you want to stack 2 LSTM layers together with cuDNN in TF 1.x, you can do:
l1 = tf.keras.layers.CuDNNLSTM(128, return_sequences=True)
l2 = tf.keras.layers.CuDNNLSTM(128)
l1_output = l1(input)
l2_output = l2(l1_output)
In TF 2.x, we unified the cuDNN and the normal implementation, so you can just change the example above to tf.keras.layers.LSTM(128, return_sequences=True), which will use the cuDNN implementation if it is available.
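A minimal TF 2.x sketch of the same two-layer stack (shapes are illustrative; the cuDNN kernel is selected automatically on GPU when the default LSTM arguments are used):
import tensorflow as tf

inputs = tf.keras.Input(shape=(None, 64))                      # (timesteps, features)
x = tf.keras.layers.LSTM(128, return_sequences=True)(inputs)   # pass the full sequence down
outputs = tf.keras.layers.LSTM(128)(x)                         # final hidden state only
model = tf.keras.Model(inputs, outputs)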
