How to remove multiple layers from pretrained ResNet50V2 Keras model - python

I'm trying to remove multiple layers from a pre-trained Keras model (ResNet50V2), but no matter what I do it's not working. I've read countless other questions on Stack Overflow, GitHub issues, and forum posts related to this topic in the past month, and I still can't make it work... So I'll ask directly: what might I be doing wrong?
from ray.rllib.models.tf.tf_modelv2 import TFModelV2
from ray.rllib.utils.framework import try_import_tf
from ray.rllib.models import ModelCatalog

tf = try_import_tf()

def resnet_core(x):
    x = tf.keras.applications.resnet_v2.preprocess_input(x)
    resnet = tf.keras.applications.ResNet50V2(
        include_top=False,
        weights="imagenet",
    )

    remove_n = 130
    for i in range(remove_n):
        resnet._layers.pop()
        print(len(resnet._layers))
    s = tf.keras.models.Model(resnet.input, resnet._layers[-1].output, name='resnet-core')

    for layer in s.layers:
        print('adding layer', layer.name)
    for layer in s.layers[:]:
        layer.trainable = False
    s.build(None)
    return s(x)
class ImpalaCNN(TFModelV2):
    def __init__(self, obs_space, action_space, num_outputs, model_config, name):
        super().__init__(obs_space, action_space, num_outputs, model_config, name)

        inputs = tf.keras.layers.Input(shape=obs_space.shape, name="observations")
        x = inputs
        x = resnet_core(x)
        x = tf.keras.layers.Flatten()(x)
        x = tf.keras.layers.ReLU()(x)
        x = tf.keras.layers.Dense(units=256, activation="relu", name="hidden")(x)

        logits = tf.keras.layers.Dense(units=num_outputs, name="pi")(x)
        value = tf.keras.layers.Dense(units=1, name="vf")(x)
        self.base_model = tf.keras.Model(inputs, [logits, value])
        self.register_variables(self.base_model.variables)

    def forward(self, input_dict, state, seq_lens):
        obs = tf.cast(input_dict["obs"], tf.float32)
        logits, self._value = self.base_model(obs)
        return logits, state

    def value_function(self):
        return tf.reshape(self._value, [-1])

# Register model in ModelCatalog
ModelCatalog.register_custom_model("impala_cnn_tf", ImpalaCNN)
The error I'm getting is:
...
File "/Users/manu/anaconda3/envs/procgen/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 376, in __init__
self._build_policy_map(policy_dict, policy_config)
File "/Users/manu/anaconda3/envs/procgen/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 859, in _build_policy_map
policy_map[name] = cls(obs_space, act_space, merged_conf)
File "/Users/manu/anaconda3/envs/procgen/lib/python3.7/site-packages/ray/rllib/policy/tf_policy_template.py", line 143, in __init__
obs_include_prev_action_reward=obs_include_prev_action_reward)
File "/Users/manu/anaconda3/envs/procgen/lib/python3.7/site-packages/ray/rllib/policy/dynamic_tf_policy.py", line 163, in __init__
framework="tf")
File "/Users/manu/anaconda3/envs/procgen/lib/python3.7/site-packages/ray/rllib/models/catalog.py", line 317, in get_model_v2
registered))
ValueError: It looks like variables {<tf.Variable 'default_policy/
conv4_block4_1_conv/kernel:0' ... }
were created as part of <impala_cnn_tf.ImpalaCNN object at
0x19a8ccc90> but does not appear in model.variables()
({<tf.Variable 'default_policy/pi/
kernel:0' shape=(256, 15) dtype=float32> ...}). Did you forget to call
model.register_variables() on the variables in question?
The error seems to indicate some variables from the layers I'm trying to skip were not registered, but that's because I don't want to use them! Any ideas?
More context in case it helps:
If I set remove_n = 0 I don't see the error (but of course the whole ResNet50V2 is being used)
I'm a newbie with Keras and ML. This might be a very dumb question.
The reason I'm trying to remove many layers, not just the last one, is that I want the model to fit on a small GPU.
I'm trying to train the model using rllib for the aicrowd procgen competition (using the competition to learn Keras/RL)
Full code here: https://github.com/maraoz/neurips2020-procgen-starter-kit/blob/master/models/impala_cnn_tf.py#L55
Full error log here: https://pastebin.com/X0Dk7wdd
Thanks in advance!

Rather than popping layers off, you could try accessing the 130th layer from the end. Then you can build a new model using the input of your original model and the output of that layer.
model = tf.keras.models.Model(resnet.input, resnet.layers[-130].output)
This does essentially the same thing as what you tried, but it's much easier and safer since you aren't accessing any private properties of the model itself.
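For instance, applied to the question's resnet_core, that could look like the sketch below (the -130 index is illustrative; pick whatever cut point fits your GPU):

def resnet_core(x):
    x = tf.keras.applications.resnet_v2.preprocess_input(x)
    resnet = tf.keras.applications.ResNet50V2(
        include_top=False,
        weights="imagenet",
    )
    # Build a truncated model from the original input to an earlier layer's
    # output, without touching any private attributes.
    s = tf.keras.models.Model(resnet.input, resnet.layers[-130].output, name='resnet-core')
    for layer in s.layers:
        layer.trainable = False
    return s(x)

Because the truncated model is built from public layer outputs, its variables are exactly the ones the model reports, so RLlib's register_variables check should see a consistent set.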

Related

How to copy a tf.keras.models.Model subclass?

I need to copy a Keras model, and as far as I know there is no way to do this if the model is a tf.keras.models.Model() subclass.
Note: copy.deepcopy() will work without raising any errors, but it will result in another error whenever the copy is used.
Example:
import tensorflow as tf

class MyModel(tf.keras.Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)
        self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)
        self.dropout = tf.keras.layers.Dropout(0.5)

    def call(self, inputs, training=False):
        x = self.dense1(inputs)
        if training:
            x = self.dropout(x, training=training)
        return self.dense2(x)

if __name__ == '__main__':
    model1 = MyModel()
    model2 = tf.keras.models.clone_model(model1)
Results in:
Traceback (most recent call last):
  File "/Users/emadboctor/Library/Application Support/JetBrains/PyCharm2020.3/scratches/scratch.py", line 600, in <module>
    model2 = tf.keras.models.clone_model(model1)
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/keras/models.py", line 430, in clone_model
    return _clone_functional_model(
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/keras/models.py", line 171, in _clone_functional_model
    raise ValueError('Expected `model` argument '
ValueError: Expected `model` argument to be a functional `Model` instance, but got a subclass model instead.
Currently, we can't use tf.keras.models.clone_model with the subclassed model API, whereas we can with the sequential and functional APIs. From the doc:
model: Instance of Model (could be a functional model or a Sequential model).
Here is a workaround for your need. It makes sense when we need to copy a trained model, so that we keep its optimized parameters. The main task is to create a new model by copying an existing one, and the most convenient way to do this at the moment is to get the trained weights and set them on a newly created model instance. Let's first build a model, train it, and then get and set the weight matrices on the new model.
import tensorflow as tf
import numpy as np

class ModelSubClassing(tf.keras.Model):
    def __init__(self, num_classes):
        super(ModelSubClassing, self).__init__()
        self.conv1 = tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu")
        self.gap = tf.keras.layers.GlobalAveragePooling2D()
        self.dense = tf.keras.layers.Dense(num_classes)

    def call(self, input_tensor, training=False):
        # forward pass: block 1
        x = self.conv1(input_tensor)
        x = self.gap(x)
        return self.dense(x)

    def build_graph(self, raw_shape):
        x = tf.keras.layers.Input(shape=raw_shape)
        return tf.keras.Model(inputs=[x], outputs=self.call(x))

# compile
sub_classing_model = ModelSubClassing(10)
sub_classing_model.compile(
    loss=tf.keras.losses.CategoricalCrossentropy(),
    metrics=tf.keras.metrics.CategoricalAccuracy(),
    optimizer=tf.keras.optimizers.Adam())

# plot for debug
tf.keras.utils.plot_model(
    sub_classing_model.build_graph(x_train.shape[1:]),
    show_shapes=False,
    show_dtype=False,
    show_layer_names=True,
    expand_nested=False,
    dpi=96,
)
DataSet
(x_train, y_train), (_, _) = tf.keras.datasets.mnist.load_data()
# train set / data
x_train = np.expand_dims(x_train, axis=-1)
x_train = x_train.astype('float32') / 255
# train set / target
y_train = tf.keras.utils.to_categorical(y_train, num_classes=10)
# fit
sub_classing_model.fit(x_train, y_train, batch_size=128, epochs=1)
# 469/469 [==============================] - 2s 2ms/step - loss: 8.2821
New Model / Copy
For the subclassed model, we have to instantiate the class object.
sub_classing_model_copy = ModelSubClassing(10)
sub_classing_model_copy.build((x_train.shape))
sub_classing_model_copy.set_weights(sub_classing_model.get_weights()) # <- get and set wg
# plot for debug; same as the original plot,
# but note: the layer names are no longer the same,
# e.g. old: conv2d_40, new/copy: conv2d_41
tf.keras.utils.plot_model(
    sub_classing_model_copy.build_graph(x_train.shape[1:]),
    show_shapes=False,
    show_dtype=False,
    show_layer_names=True,
    expand_nested=False,
    dpi=96,
)
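To sanity-check the copy, you can compare predictions from the two models on the same batch (a quick sketch; both models have already been built above):

# The copy should produce identical outputs for the same input.
sample = x_train[:8]
print(np.allclose(sub_classing_model.predict(sample),
                  sub_classing_model_copy.predict(sample)))  # expect True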
import copy
from tensorflow.keras import models as KM  # assumption: KM aliases keras.models

def clones(module, N):
    """Creation of N identical layers.

    :param module: module to clone
    :param N: number of copies
    :return: Keras model of module copies
    """
    seqm = KM.Sequential()
    for i in range(N):
        m = copy.deepcopy(module)
        m._name = m.name + str(i)  # Layer.name is read-only in tf.keras; set the backing attribute
        seqm.add(m)
    return seqm
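A quick usage example of the helper (a sketch; deep-copying works for simple, unbuilt layers):

stacked = clones(tf.keras.layers.Dense(4), 3)  # Sequential containing 3 copies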

NotImplementedError: Layers with arguments in `__init__` must override `get_config`

I'm trying to save my TensorFlow model using model.save(); however, I am getting this error.
(The model summary was attached as an image.)
The code for the transformer model:
def transformer(vocab_size, num_layers, units, d_model, num_heads, dropout, name="transformer"):
    inputs = tf.keras.Input(shape=(None,), name="inputs")
    dec_inputs = tf.keras.Input(shape=(None,), name="dec_inputs")

    enc_padding_mask = tf.keras.layers.Lambda(
        create_padding_mask, output_shape=(1, 1, None),
        name='enc_padding_mask')(inputs)

    # mask the future tokens for decoder inputs at the 1st attention block
    look_ahead_mask = tf.keras.layers.Lambda(
        create_look_ahead_mask,
        output_shape=(1, None, None),
        name='look_ahead_mask')(dec_inputs)

    # mask the encoder outputs for the 2nd attention block
    dec_padding_mask = tf.keras.layers.Lambda(
        create_padding_mask, output_shape=(1, 1, None),
        name='dec_padding_mask')(inputs)

    enc_outputs = encoder(
        vocab_size=vocab_size,
        num_layers=num_layers,
        units=units,
        d_model=d_model,
        num_heads=num_heads,
        dropout=dropout,
    )(inputs=[inputs, enc_padding_mask])

    dec_outputs = decoder(
        vocab_size=vocab_size,
        num_layers=num_layers,
        units=units,
        d_model=d_model,
        num_heads=num_heads,
        dropout=dropout,
    )(inputs=[dec_inputs, enc_outputs, look_ahead_mask, dec_padding_mask])

    outputs = tf.keras.layers.Dense(units=vocab_size, name="outputs")(dec_outputs)

    return tf.keras.Model(inputs=[inputs, dec_inputs], outputs=outputs, name=name)
I don't understand why it's giving this error since the model trains perfectly fine.
Any help would be appreciated.
My saving code for reference:
print("Saving the model.")
saveloc = "C:/tmp/solar.h5"
model.save(saveloc)
print("Model saved to: " + saveloc + " succesfully.")
It's not a bug, it's a feature.
This error lets you know that TF can't save your model, because it won't be able to load it.
Specifically, it won't be able to reinstantiate your custom Layer classes: encoder and decoder.
To solve this, just override their get_config method according to the new arguments you've added.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
For example, if your encoder class looks something like this:
class encoder(tf.keras.layers.Layer):
    def __init__(
        self,
        vocab_size, num_layers, units, d_model, num_heads, dropout,
        **kwargs,
    ):
        super().__init__(**kwargs)
        self.vocab_size = vocab_size
        self.num_layers = num_layers
        self.units = units
        self.d_model = d_model
        self.num_heads = num_heads
        self.dropout = dropout

    # Other methods etc.
# Other methods etc.
then you only need to override this method:
def get_config(self):
    config = super().get_config().copy()
    config.update({
        'vocab_size': self.vocab_size,
        'num_layers': self.num_layers,
        'units': self.units,
        'd_model': self.d_model,
        'num_heads': self.num_heads,
        'dropout': self.dropout,
    })
    return config
When TF sees this (for both classes), you will be able to save the model.
Because now when the model is loaded, TF will be able to reinstantiate the same layer from config.
Layer.from_config's source code may give a better sense of how it works:
@classmethod
def from_config(cls, config):
    return cls(**config)
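Once both classes override get_config, a save/load round trip like the following should work (a sketch; the custom layer classes and the Lambda functions from the question are supplied via custom_objects):

model.save("C:/tmp/solar.h5")
loaded = tf.keras.models.load_model(
    "C:/tmp/solar.h5",
    custom_objects={
        "encoder": encoder,
        "decoder": decoder,
        "create_padding_mask": create_padding_mask,
        "create_look_ahead_mask": create_look_ahead_mask,
    },
)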
This problem is caused by mixing imports between the keras and tf.keras libraries, which is not supported.
Use tf.keras.models or use keras.models everywhere.
You should never mix imports between these libraries, as it will not work and produces all kinds of strange error messages. These errors change with versions of keras and tensorflow.
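For example, pick one namespace and use it consistently (a sketch):

import tensorflow as tf

# consistent: every layer/model class comes from tf.keras
model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(4,))])

# avoid mixing in the standalone package, e.g.
# from keras.layers import Dense   # a different library than tf.keras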
I suggest you try the following:
model = tf.keras.Model(...)
model.save_weights("some_path")
...
model.load_weights("some_path")
I think a simple solution is to install tensorflow==2.4.2 (for GPU, tensorflow-gpu==2.4.2). I faced the issue and debugged the whole day, but it was not resolved. Finally I installed the older stable version and the error is gone.

H5py layer - unable to create link('name already exists') after 1st epoch while using a custom keras layer

I am trying to build an RCL block in Keras for recurrent convolution network as described in the paper : https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Liang_Recurrent_Convolutional_Neural_2015_CVPR_paper.pdf
My model using my custom Keras layer compiles; however, after running one epoch, when the model is being saved, I encounter an error thrown by h5py:
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
h5py/h5o.pyx in h5py.h5o.link()
RuntimeError: Unable to create link (name already exists)
I am using already existing keras layers inside my custom layer:
class Recurrent_block(tf.keras.layers.Layer):
    def __init__(self, ch_out, t):
        self.num_outputs = num_outputs
        self.t = t
        self.ch_out = ch_out
        super(Recurrent_block, self).__init__()

    def build(self, input_shape):
        self.shape = tf.TensorShape([input_shape[0].value, input_shape[1].value, input_shape[2].value, self.ch_out])
        self.fc = tf.keras.models.Sequential([
            tf.keras.layers.Conv2D(filters=self.ch_out, kernel_size=(3, 3), strides=(1, 1), padding='same'),
            tf.keras.layers.BatchNormalization(),
            tf.keras.layers.Activation('relu')])
        self.fc.build(self.shape)
        print(self.fc.trainable_weights)
        self._trainable_weights = self.fc.trainable_weights
        super(Recurrent_block, self).build(self.shape)

    def call(self, x):
        x = tf.keras.layers.Conv2D(self.ch_out, kernel_size=1, strides=(1, 1), padding='same')(x)
        for i in range(self.t + 1):
            if i == 0:
                x1 = self.fc(x)
            x1 = self.fc(x + x1)
        return x1
Can someone please help either resolve this error, or just guide me on how to combine multiple Keras layers into one custom layer, and on how to change the call function the way I did?
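For reference, a common way to structure such a block is to create every sublayer once in __init__ or build and only call them in call; instantiating Conv2D inside call creates fresh variables on every invocation. A minimal sketch (not from the original thread; the class name is illustrative, and the t+1 loop count is kept from the question):

class Recurrent_block_v2(tf.keras.layers.Layer):
    def __init__(self, ch_out, t, **kwargs):
        super(Recurrent_block_v2, self).__init__(**kwargs)
        self.t = t
        self.ch_out = ch_out

    def build(self, input_shape):
        # Sublayers are created once here and reused in call().
        self.proj = tf.keras.layers.Conv2D(self.ch_out, kernel_size=1, padding='same')
        self.fc = tf.keras.models.Sequential([
            tf.keras.layers.Conv2D(self.ch_out, kernel_size=(3, 3), padding='same'),
            tf.keras.layers.BatchNormalization(),
            tf.keras.layers.Activation('relu')])
        super(Recurrent_block_v2, self).build(input_shape)

    def call(self, x):
        x = self.proj(x)
        x1 = self.fc(x)
        for _ in range(self.t + 1):
            x1 = self.fc(x + x1)
        return x1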

Loading Weights - NoneType Found

I am working on an LSTM for a final project. I've been following TensorFlow's tutorial here: https://www.tensorflow.org/tutorials/sequences/text_generation for most of it, especially for how to save and load the models. However, it's coming up with this error:
Traceback (most recent call last):
  File "D:\xxx\Documents\Class Coding\Artificial Intelligence\Shelley\Writerbot.py", line 187, in <module>
    restore_progress()
  File "D:\xxx\Documents\Class Coding\Artificial Intelligence\Shelley\Writerbot.py", line 141, in restore_progress
    shelley.load_weights(weights)
  File "C:\Users\xxx\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\keras\engine\network.py", line 1508, in load_weights
    if _is_hdf5_filepath(filepath):
  File "C:\Users\xxx\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\keras\engine\network.py", line 1648, in _is_hdf5_filepath
    return filepath.endswith('.h5') or filepath.endswith('.keras')
AttributeError: 'NoneType' object has no attribute 'endswith'
And here is my code related to loading and restoring weights, as best as I can tell, since the rest of the error's coming from keras:
def create_shelley(vocab, embedding, numunits, batch):
    """This is what actually creates a neural network."""
    shelley = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab, embedding,
                                  batch_input_shape=[batch, None]),
        lstm(numunits,
             return_sequences=True,
             recurrent_initializer='glorot_uniform',
             stateful=True),
        tf.keras.layers.Dense(vocab)
    ])
    return shelley

def train():
    """We create weight checkpoints as we train our neural network on files fed into it."""
    checkpoints = 'D:\\xxx\\Documents\\Class Coding\\Artificial Intelligence\\Shelley\\trainingcheckpoints'
    prefix = os.path.join(checkpoints, "ckpt_{epoch}")
    callback = tf.keras.callbacks.ModelCheckpoint(
        filepath=prefix,
        save_weights_only=True)
    print(epochsteps)
    history = shelley.fit(botfeed.repeat(), epochs=epochs, steps_per_epoch=epochsteps, callbacks=[callback])

def restore_progress():
    """Load the most recent weight checkpoint."""
    trainingcheckpoints = "D:\\Robin Pegau\\Documents\\Class Coding\\Artificial Intelligence\\Shelley\\trainingcheckpoints\\checkpoint"
    weights = tf.train.latest_checkpoint(trainingcheckpoints)
    shelley = create_shelley(vocab, embed, totalunits, batch=1)
    shelley.load_weights(weights)
    shelley.build(tf.TensorShape([1, None]))
restore_progress()
There is a "checkpoint" file that has no filetype. There are also files that look like "ckpt_[x].index" and "ckpt_[x].data-00000-of-00001
Thank you all for your help in advance.
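The traceback indicates that tf.train.latest_checkpoint returned None; it expects the checkpoint directory rather than the "checkpoint" state file inside it. A minimal sketch of that change (paths as in the question):

# Pass the directory, not the "checkpoint" file inside it.
trainingcheckpoints = "D:\\Robin Pegau\\Documents\\Class Coding\\Artificial Intelligence\\Shelley\\trainingcheckpoints"
weights = tf.train.latest_checkpoint(trainingcheckpoints)  # e.g. ".../ckpt_10"
assert weights is not None, "no checkpoint found in the directory"
shelley.load_weights(weights)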

error when using keras' sk-learn API

I'm learning Keras these days, and I met an error when using the scikit-learn API. Here is something that may be useful:
ENVIRONMENT:
python:3.5.2
keras:1.0.5
scikit-learn:0.17.1
CODE
import pandas as pd
from keras.layers import Input, Dense
from keras.models import Model
from keras.models import Sequential
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.cross_validation import train_test_split
from sklearn.cross_validation import cross_val_score
from sqlalchemy import create_engine
from sklearn.cross_validation import KFold

def read_db():
    "get prepared data from mysql."
    con_str = "mysql+mysqldb://root:0000@localhost/nbse?charset=utf8"
    engine = create_engine(con_str)
    data = pd.read_sql_table('data_ml', engine)
    return data

def nn_model():
    "create a model."
    model = Sequential()
    model.add(Dense(output_dim=100, input_dim=105, activation='softplus'))
    model.add(Dense(output_dim=1, input_dim=100, activation='softplus'))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

data = read_db()
y = data.pop('PRICE').as_matrix()
x = data.as_matrix()

model = nn_model()
model = KerasRegressor(build_fn=model, nb_epoch=2)
model.fit(x, y)  # something wrong here!
ERROR
Traceback (most recent call last):
  File "C:/Users/Administrator/PycharmProjects/forecast/gridsearch.py", line 43, in <module>
    model.fit(x,y)
  File "D:\Program Files\Python35\lib\site-packages\keras\wrappers\scikit_learn.py", line 135, in fit
    **self.filter_sk_params(self.build_fn.__call__))
TypeError: __call__() missing 1 required positional argument: 'x'

Process finished with exit code 1
The model works well without wrapping it in KerasRegressor, but I want to use sklearn's GridSearch after this, so I'm here for help. I tried but still have no idea.
Something that may be helpful, from keras/wrappers/scikit_learn.py:
class BaseWrapper(object):
    def __init__(self, build_fn=None, **sk_params):
        self.build_fn = build_fn
        self.sk_params = sk_params
        self.check_params(sk_params)

    def fit(self, X, y, **kwargs):
        '''Construct a new model with build_fn and fit the model according
        to the given training data.

        # Arguments
            X : array-like, shape `(n_samples, n_features)`
                Training samples where n_samples is the number of samples
                and n_features is the number of features.
            y : array-like, shape `(n_samples,)` or `(n_samples, n_outputs)`
                True labels for X.
            kwargs: dictionary arguments
                Legal arguments are the arguments of `Sequential.fit`

        # Returns
            history : object
                details about the training history at each epoch.
        '''
        if self.build_fn is None:
            self.model = self.__call__(**self.filter_sk_params(self.__call__))
        elif not isinstance(self.build_fn, types.FunctionType):
            self.model = self.build_fn(
                **self.filter_sk_params(self.build_fn.__call__))
        else:
            self.model = self.build_fn(**self.filter_sk_params(self.build_fn))

        loss_name = self.model.loss
        if hasattr(loss_name, '__name__'):
            loss_name = loss_name.__name__
        if loss_name == 'categorical_crossentropy' and len(y.shape) != 2:
            y = to_categorical(y)

        fit_args = copy.deepcopy(self.filter_sk_params(Sequential.fit))
        fit_args.update(kwargs)

        history = self.model.fit(X, y, **fit_args)
        return history
The error occurred in these lines:
self.model = self.build_fn(
    **self.filter_sk_params(self.build_fn.__call__))
self.build_fn here is keras.models.Sequential
models.py
class Sequential(Model):
    def call(self, x, mask=None):
        if not self.built:
            self.build()
        return self.model.call(x, mask)
So, what does that x mean, and how can I fix this error?
Thanks!
xiao, I ran into the same issue! Hopefully this helps:
Background and The Issue
The documentation for Keras states that, when implementing Wrappers for scikit-learn, there are two arguments. The first is the build function, which is a "callable function or class instance". Specifically, it states that:
build_fn should construct, compile and return a Keras model, which will then be used to fit/predict. One of the following three values could be passed to build_fn:
A function
An instance of a class that implements the call method
None. This means you implement a class that inherits from either KerasClassifier or KerasRegressor. The call method of the present class will then be treated as the default build_fn.
In your code, you create the model, and then pass the model as the value for the argument build_fn when creating the KerasRegressor wrapper:
model = nn_model()
model = KerasRegressor(build_fn=model, nb_epoch=2)
Herein lies the issue. Rather than passing your nn_model function as the build_fn, you pass an actual instance of the Keras Sequential model. For this reason, when fit() is called, it cannot find the call method, because it is not implemented in the class you returned.
Proposed Solution
What I did to make things work is pass the function as build_fn, rather than an actual model:
data = read_db()
y = data.pop('PRICE').as_matrix()
x = data.as_matrix()
# model = nn_model() # Don't do this!
# set build_fn equal to the nn_model function
model = KerasRegressor(build_fn=nn_model, nb_epoch=2) # note that you do not call the function!
model.fit(x,y) # fixed!
This is not the only solution (you could set build_fn to a class that implements the call method appropriately), but the one that worked for me. I hope it helps you!
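Since the end goal was grid search, the fixed wrapper can then be passed straight to scikit-learn. A sketch using the old sklearn 0.17 module layout from the question's environment (newer versions use sklearn.model_selection):

from sklearn.grid_search import GridSearchCV  # sklearn 0.17; later versions: sklearn.model_selection

# build_fn is the function itself; grid-searchable params go in param_grid
model = KerasRegressor(build_fn=nn_model, nb_epoch=2)
grid = GridSearchCV(model, param_grid={'nb_epoch': [2, 5, 10]})
grid.fit(x, y)
print(grid.best_params_)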
User-defined keyword arguments passed to __init__(), that is to say, all keyword arguments that were given to __init__(), will be passed to model_build_fn directly. For example, calling KerasClassifier(myparam=10) will result in model_build_fn(myparam=10).
Here's an example:
class MyMultiOutputKerasRegressor(KerasRegressor):
    # initializing
    def __init__(self, **kwargs):
        KerasRegressor.__init__(self, **kwargs)

    # simpler fit method
    def fit(self, X, y, **kwargs):
        KerasRegressor.fit(self, X, [y]*3, **kwargs)
    (...)

def get_quantile_reg_rpf_nn(layers_shape=[50, 100, 200, 100, 50], inDim=4, outDim=1, act='relu'):
    # do model stuff...
    (...)

Initialize the Keras regressor:

base_model = MyMultiOutputKerasRegressor(build_fn=get_quantile_reg_rpf_nn,
                                         layers_shape=[50, 100, 200, 100, 50], inDim=4,
                                         outDim=1, act='relu', epochs=numEpochs,
                                         batch_size=batch_size, verbose=0)
