KeyError: "Can't open attribute (Can't locate attribute: 'nb_layers')" - python

I have Python code that uses Keras. I didn't post the full code because it is a bit long, and the issue does not seem to be related to the code itself.
This is the error I'm having:
File "h5py\h5a.pyx", line 77, in h5py.h5a.open (D:\Build\h5py\h5py-2.7.0\h5py\h5a.c:2350)
KeyError: "Can't open attribute (Can't locate attribute: 'nb_layers')"
What could be the issue? Is it related to Keras? How can I solve this issue?
EDIT 1
The error seems to be related to this part of code:
# load VGG16 weights
f = h5py.File(weights_path)
for k in range(f.attrs['nb_layers']):
    if k >= len(model.layers):
        break
    g = f['layer_{}'.format(k)]
    weights = [g['param_{}'.format(p)] for p in range(g.attrs['nb_params'])]
    model.layers[k].set_weights(weights)
f.close()
print('Model loaded.')
Thanks.

Use the weights file vgg16_weights_th_dim_ordering_th_kernels.h5 from https://github.com/fchollet/deep-learning-models/releases
This file is in Keras 2 format.
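With a weights file in the Keras 2 format, the manual attribute-reading loop from the question is no longer needed; a minimal sketch, assuming your model replicates the VGG16 architecture so that the layer order matches the saved weights:
# Minimal sketch: load the Keras-2-format weights directly.
# Assumes `model` replicates the VGG16 architecture.
model.load_weights('vgg16_weights_th_dim_ordering_th_kernels.h5')
print('Model loaded.')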

I had the same issue. I solved it by building the VGG16 network directly where I needed it, by adding this line:
Vmodel = applications.VGG16(weights='imagenet', include_top=False, input_shape=(3, img_width, img_height))
print('Model loaded.')
# build a classifier model to put on top of the convolutional model
top_model = Sequential()
top_model.add(Flatten(input_shape=Vmodel.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(1, activation='sigmoid'))
# note that it is necessary to start with a fully-trained
# classifier, including the top classifier,
# in order to successfully do fine-tuning
top_model.load_weights(top_model_weights_path)
# add the model on top of the convolutional base
# model.add(top_model)
model = Model(inputs=Vmodel.input, outputs=top_model(Vmodel.output))
So basically, instead of creating a VGG16 conv net of your own and loading the VGG16 weights into it, I created a VGG16 model via keras.applications and then added the last layers on top of it. I hope this works for you.
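If you then fine-tune, the usual next step in this setup is to freeze the convolutional base before compiling; a minimal sketch using the `model` and `Vmodel` defined above (the optimizer and loss here are illustrative choices, not from the original post):
# Freeze the VGG16 base so only the new top layers are trained at first.
for layer in Vmodel.layers:
    layer.trainable = False
model.compile(loss='binary_crossentropy',   # matches the single sigmoid output above
              optimizer='rmsprop',          # illustrative choice
              metrics=['accuracy'])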

Apparently "nb_layers" refers to the number of layers, so instead you can use a work around.
In this case:
f = h5py.File(filename, 'r')
nb_layers = len(f.attrs["layer_names"])
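A minimal sketch of how the original loading loop could be adapted to that attribute, assuming the weights file was saved by Keras 2, which stores 'layer_names' at the top level and 'weight_names' on each layer group:
import h5py

f = h5py.File(weights_path, 'r')
# Layer names may be stored as bytes; decode them for use as group keys.
layer_names = [n.decode('utf8') if isinstance(n, bytes) else n
               for n in f.attrs['layer_names']]
for k, name in enumerate(layer_names):
    if k >= len(model.layers):
        break
    g = f[name]
    weight_names = [n.decode('utf8') if isinstance(n, bytes) else n
                    for n in g.attrs['weight_names']]
    weights = [g[wn][...] for wn in weight_names]
    model.layers[k].set_weights(weights)
f.close()
print('Model loaded.')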

Related

How to delete last layer from a transformer model (Transformers for Images)?

Can anybody please tell me how I can remove the last layer of a pretrained Swin transformer? I am attaching a screenshot of the model summary, and I am also adding the code. Thanks in advance.
I was getting the following error:
AttributeError: 'Sequential' object has no attribute 'head'
import torch
HUB_URL = "SharanSMenon/swin-transformer-hub:main"
MODEL_NAME = "swin_tiny_patch4_window7_224"
model = torch.hub.load(HUB_URL, MODEL_NAME, pretrained=True)
from torchinfo import summary
summary(model, input_size=(batch_size, 3, 224, 224))
(Screenshot: summary of the Swin Transformer model)
I tried to remove the last layers using this code:
modelnew = torch.nn.Sequential(*(list(model.children())[:-1]))
The last layer was removed, but afterwards I was not able to check the summary of the model. I was also not able to add a custom layer to the model using the following code:
n_inputs = modelnew.head.in_features
modelnew1 = torch.nn.Sequential(
    torch.nn.Linear(n_inputs, 5)
)
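One possible approach is to swap the classification head in place instead of wrapping the model in a Sequential, which loses named attributes such as head. A minimal sketch, assuming the loaded model exposes its final classification layer as model.head (as the attempted code above suggests):
import torch

HUB_URL = "SharanSMenon/swin-transformer-hub:main"
MODEL_NAME = "swin_tiny_patch4_window7_224"
model = torch.hub.load(HUB_URL, MODEL_NAME, pretrained=True)

# Sketch: replace the classification head in place rather than rebuilding the model.
n_inputs = model.head.in_features          # input features of the original head
model.head = torch.nn.Linear(n_inputs, 5)  # new 5-class head; the rest of the model is unchanged

from torchinfo import summary
summary(model, input_size=(1, 3, 224, 224))  # summary still works on the modified model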

How to convert my TensorFlow 2 (tflearn) model to a graph.pb file

I am trying to convert my model to Core ML, so I save my model using this code:
model2.save("modelcnn2.tfl")
which gives the following model files:
checkpoint
modelcnn2.tfl.data-00000-of-00001
modelcnn2.tfl.index
modelcnn2.tfl.meta
So how can I convert these to a single graph.pb and then convert that to Core ML? I use this code:
import tensorflow as tf

meta_path = '/content/drive/MyDrive/check/modelcnn2.tfl.meta'  # Your .meta file
output_node_names = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10',
                     '11', '12', '13', '14', '15', '16', '17', '18', '19',
                     '20', '21', '22', '23', '24', '25', '26', '27', '28']  # Output nodes

with tf.compat.v1.Session() as sess:
    # Restore the graph
    saver = tf.compat.v1.train.import_meta_graph(meta_path)

    # Load weights
    saver.restore(sess, tf.train.latest_checkpoint('path/of/your/.meta/file'))

    # Freeze the graph
    frozen_graph_def = tf.compat.v1.graph_util.convert_variables_to_constants(
        sess,
        sess.graph_def,
        output_node_names)

    # Save the frozen graph
    with open('output_graph.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())
But this error appeared:
KeyError: "The name 'Adam' refers to an Operation not in the graph."
Any suggestion that helps me is appreciated.
The error is telling you exactly what the issue is:
Adam is not a supported operation. You have two options:
(1) Create a new model without an Adam layer.
(2) Implement a custom operator.
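Either way, it can help to check which operation names actually exist in the restored graph before choosing output_node_names; a minimal sketch under the same tf.compat.v1 setup as above:
import tensorflow as tf

meta_path = 'modelcnn2.tfl.meta'  # the same .meta file as above

# Sketch: list every operation in the restored graph so you can pick
# output node names that actually exist (and spot training-only ops like Adam).
with tf.compat.v1.Session() as sess:
    tf.compat.v1.train.import_meta_graph(meta_path)
    for op in sess.graph.get_operations():
        print(op.name)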

convert tf1 .pb saved model to tf2 model

For one of my projects, I need to use this model.
But this model is in tf1. On downloading the "20180402-114759" model, I got a ".pb" file, a ".meta" file, and 2 ".ckpt" files.
I know that the ".pb" file contains the model and weights, but it is in tf1 format, and I want to use this model as part of my tf2 model, which will be something like this:
MODEL_DIR_NAME = "20180402-114759"
loaded_model = tf.saved_model.load(MODEL_DIR_NAME)
input = tf.keras.layers.Input(shape=(160, 160, 3))
output = loaded_model(input)
output = tf.keras.layers.Dense(512)(output)
model = tf.keras.Model(inputs=input, outputs=output)
model.layers[1].trainable = False #setting the loaded_model layer as non-trainable.
model.summary()
...
model.compile(...)
model.fit(...)
model.predict(...)
And finally, save the trained model. Can anyone help me with this?

Can't save in SavedModel format Tensorflow

I am trying to save my ANN model using SavedModel format. The command that I used was:
model.save("my_model")
It is supposed to give me a folder named "my_model" that contains saved_model.pb, variables, and assets; instead, it gives me an HDF5 file named my_model. I am using Keras v2.3.1 and TensorFlow v2.2.0.
Here is a bit of my code:
from keras import optimizers
from keras import backend
from keras.models import Sequential
from keras.layers import Dense
from keras.activations import relu, tanh, sigmoid

network_layout = []
for i in range(3):
    network_layout.append(8)

model = Sequential()

# Adding input layer and first hidden layer
model.add(Dense(network_layout[0],
                name="Input",
                input_dim=inputdim,
                kernel_initializer='he_normal',
                activation=activation))

# Adding the rest of the hidden layers
for numneurons in network_layout[1:]:
    model.add(Dense(numneurons,
                    kernel_initializer='he_normal',
                    activation=activation))

# Adding the output layer
model.add(Dense(outputdim,
                name="Output",
                kernel_initializer="he_normal",
                activation="relu"))

# Compiling the model
model.compile(optimizer=opt, loss='mse', metrics=['mse', 'mae', 'mape'])
model.summary()

# Training the model
history = model.fit(x=Xtrain, y=ytrain, validation_data=(Xtest, ytest), batch_size=32, epochs=epochs)
model.save('my_model')
I have read the API documentation on the TensorFlow website and did what it said, using model.save("my_model") without any file extension, but I can't get it right.
Your help will be very appreciated. Thanks a bunch!
If you would like to use the TensorFlow SavedModel format, then use:
tms_model = tf.saved_model.save(model, "export/1")
This will create a folder export and a subfolder 1 inside it. Inside the 1 folder you can see the assets, variables, and the .pb file.
Hope this will help you out.
Also, make sure to change your imports like this:
from tensorflow.keras import optimizers
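In other words, the likely cause is mixing the standalone keras package with TensorFlow 2; a minimal sketch with consistent tensorflow.keras imports, under which model.save("my_model") should produce a SavedModel directory rather than an HDF5 file (the layer sizes here are just placeholders):
# Minimal sketch, assuming TF 2.x: use tf.keras throughout instead of the
# standalone keras package; model.save() with no extension then writes a
# SavedModel directory (saved_model.pb, variables/, assets/).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([Dense(8, input_dim=4, activation='relu'),
                    Dense(1, activation='relu')])
model.compile(optimizer='adam', loss='mse')
model.save('my_model')       # -> my_model/ directory in SavedModel format
model.save('my_model.h5')    # -> single HDF5 file, if that is what you want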

Layer not built error, even after model.build() in tensorflow 2.0.0

Reference I was following:
https://www.tensorflow.org/api_docs/python/tf/keras/Model#save
I really want to run the model, give it some inputs, and grab some layer outputs coming from inside the model.
model = tf.keras.models.load_model('emb_movielens100k_all_cols_dec122019')
input_shape = (None, 10)
model.build(input_shape)
All good so far; no errors no warnings.
model.summary()
ValueError: You tried to call `count_params` on IL, but the layer isn't built. You can build it manually via: `IL.build(batch_input_shape)`
How to fix?
The following code does not fix it:
IL.build(input_shape) # no
model.layer-0.build(input_shape) # no
This seems to work, but it's a long way from my goal of running the model and grabbing some layer outputs. Isn't there an easy way in TF 2.0.0?
layer1 = model.get_layer(index=1)
This throws an error:
model = tf.saved_model.load('emb_movielens100k_all_cols_dec122019')
input_shape = (None, 10)
model.build(input_shape) #AttributeError: '_UserObject' object has no attribute 'build'
The fix was to use save_model(), not model.save(). I also needed to use save_format="h5" when saving, not the default format. Like this:
tf.keras.models.save_model(model, "h5_emb.hp5", save_format="h5")
I also needed to use load_model(), not saved_model.load(), to load the model back from disk. Like this:
model = tf.keras.models.load_model('h5_emb.hp5')
The other tutorial and documentation ways of doing save and load returned a model that did not work right for predictions or summary.
This is tensorflow version 2.0.0.
Hope this helps others.
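For the original goal of grabbing layer outputs from inside the model, a minimal sketch of the usual tf.keras approach once the model loads correctly (the layer index and dummy input here are only examples):
import tensorflow as tf
import numpy as np

model = tf.keras.models.load_model('h5_emb.hp5')

# Sketch: build a sub-model that exposes an intermediate layer's output.
# Index 1 is only an example; pick the layer you actually want.
layer = model.get_layer(index=1)
feature_extractor = tf.keras.Model(inputs=model.input, outputs=layer.output)

# Dummy input matching the (None, 10) shape; adjust dtype/values to what the model expects.
x = np.zeros((1, 10))
intermediate = feature_extractor(x)
print(intermediate.shape)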
