`fit_generator` is not yet enabled for unbuilt Model subclasses

I am trying to use `Model.fit_generator` from Keras in TensorFlow 1.10.
Simplified reproducible code:
import tensorflow as tf
import numpy as np

class TestNet(tf.keras.Model):
    def __init__(self, class_count, name='TestNet', **kwargs):
        super(TestNet, self).__init__(name=name, **kwargs)
        self.convolution = tf.keras.layers.Conv1D(class_count, kernel_size=1, input_shape=(None, 3))

    def call(self, points):
        return self.convolution(points)

def segmentation_loss(labels, logits):
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=labels, logits=logits)
    return tf.reduce_mean(cross_entropy)

def generate():
    while True:
        yield (np.zeros(shape=(100, 3)), np.zeros(shape=(100)))

if __name__ == "__main__":
    test_net = TestNet(class_count=5)
    optimizer = tf.keras.optimizers.Adam()
    test_net.compile(optimizer, loss=segmentation_loss)
    history = test_net.fit_generator(generate, steps_per_epoch=1000, epochs=10)
While this works in TensorFlow 1.14, executing it in 1.10 yields the NotImplementedError from the title:
NotImplementedError: fit_generator is not yet enabled for unbuilt Model subclasses
Does anybody know how to work around this?

There is a mistake in your fit_generator call: you pass the generator function itself instead of calling it. Update to this:
history = test_net.fit_generator(generate(), steps_per_epoch=1000, epochs=10)
Now, I don't know the details of TensorFlow 1.10, but model subclassing is rather new; a Keras model is usually built with the functional API, like this:
# model's inputs
inputs = Input((None, 3))

# model's layers
convolution = Conv1D(class_count, kernel_size=1)

# model's call
outputs = convolution(inputs)

# model finish
test_net = Model(inputs, outputs)
A workaround for you:
inputs = Input(shape)
test_net = TestNet(...)
outputs = test_net.call(inputs)
test_net_model = Model(inputs, outputs)
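Putting the workaround together for this question, a minimal sketch that reuses TestNet, segmentation_loss, and generate from the snippet above (the (None, 3) shape matches the Conv1D input):

inputs = tf.keras.layers.Input(shape=(None, 3))    # symbolic input with the Conv1D's expected shape
test_net = TestNet(class_count=5)                  # the subclassed model from the question
outputs = test_net.call(inputs)                    # run the subclass's forward pass on the symbolic input
test_net_model = tf.keras.Model(inputs, outputs)   # wrap it into a built, functional Model

test_net_model.compile(tf.keras.optimizers.Adam(), loss=segmentation_loss)
history = test_net_model.fit_generator(generate(), steps_per_epoch=1000, epochs=10)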

Related

How to pass fixed hyperparameters as variables for Keras-Tuner?

I'd like to do hyperparameter tuning on a Keras model with Keras Tuner.
import tensorflow as tf
from tensorflow import keras
import keras_tuner as kt

def model_builder(hp):
    model = keras.Sequential()
    model.add(keras.layers.Flatten(input_shape=(28, 28)))
    hp_units = hp.Int('units', min_value=32, max_value=512, step=32)
    model.add(keras.layers.Dense(units=hp_units, activation='relu'))
    model.add(keras.layers.Dense(10))
    hp_learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=hp_learning_rate),
                  loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    return model

tuner = kt.Hyperband(model_builder,
                     objective='val_accuracy',
                     max_epochs=10,
                     factor=3)

tuner.search(train_X, train_y, epochs=50)
So far, so good. However, I additionally want to pass some model parameters (like the input image dimensions) as arguments to model_builder, and I'm clueless how to do that:
def model_builder(hp, img_dim1, img_dim2):
    model = keras.Sequential()
    model.add(keras.layers.Flatten(input_shape=(img_dim1, img_dim2)))
    ...
and
tuner = kt.Hyperband(model_builder(img_dim1, img_dim2),
                     objective='val_accuracy',
                     max_epochs=10,
                     factor=3)
seemingly doesn't work. How can I feed img_dim1 and img_dim2 to the model builder in addition to hp?
A simple solution is to use a partial function in Python, like the following:
from functools import partial

# ...
model_builder_ready = partial(model_builder, img_dim1=value1, img_dim2=value2)

tuner = kt.Hyperband(model_builder_ready,
                     objective='val_accuracy',
                     max_epochs=10,
                     factor=3)
The solution I came up with was to create a function that returns a function (which is essentially what partial does), so it should look like this:
def model_builder(img_dim1, img_dim2):
    def func(hp):
        """
        Your original builder, but here img_dim1 and img_dim2 exist
        in the enclosing scope, so you can use them as parameters.
        """
    return func

tuner = kt.Hyperband(model_builder(img_dim1, img_dim2),
                     objective='val_accuracy',
                     max_epochs=10,
                     factor=3)
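For concreteness, here is a filled-in version of that closure, reusing the builder from the question; only the Flatten input shape changes:

def model_builder(img_dim1, img_dim2):
    def func(hp):
        model = keras.Sequential()
        # fixed parameters come from the enclosing scope
        model.add(keras.layers.Flatten(input_shape=(img_dim1, img_dim2)))
        # tuned parameters come from hp, as before
        hp_units = hp.Int('units', min_value=32, max_value=512, step=32)
        model.add(keras.layers.Dense(units=hp_units, activation='relu'))
        model.add(keras.layers.Dense(10))
        hp_learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4])
        model.compile(optimizer=keras.optimizers.Adam(learning_rate=hp_learning_rate),
                      loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                      metrics=['accuracy'])
        return model
    return func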

How to copy a tf.keras.models.Model subclass?

I need to copy a Keras model, and there is no way that I know of to do this when the model is a tf.keras.models.Model() subclass.
Note: using copy.deepcopy() will work without raising any errors; however, the copy will trigger another error whenever it is used.
Example:
import tensorflow as tf

class MyModel(tf.keras.Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)
        self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)
        self.dropout = tf.keras.layers.Dropout(0.5)

    def call(self, inputs, training=False):
        x = self.dense1(inputs)
        if training:
            x = self.dropout(x, training=training)
        return self.dense2(x)

if __name__ == '__main__':
    model1 = MyModel()
    model2 = tf.keras.models.clone_model(model1)
Results in:
Traceback (most recent call last):
  File "/Users/emadboctor/Library/Application Support/JetBrains/PyCharm2020.3/scratches/scratch.py", line 600, in <module>
    model2 = tf.keras.models.clone_model(model1)
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/keras/models.py", line 430, in clone_model
    return _clone_functional_model(
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/keras/models.py", line 171, in _clone_functional_model
    raise ValueError('Expected `model` argument '
ValueError: Expected `model` argument to be a functional `Model` instance, but got a subclass model instead.
Currently, we can't use tf.keras.models.clone_model for the subclassed model API, whereas we can for the sequential and functional APIs. From the doc:

model: Instance of Model (could be a functional model or a Sequential model).

Here is a workaround for your need. It makes sense when we need to copy a trained model, where we can reuse its optimized parameters. So the main task is to create a new model by copying the existing one, and the most convenient way to do this right now is to get the trained weights and set them on the newly created model instance. Let's first build a model, train it, and then get and set the weight matrices on the new model.
import tensorflow as tf
import numpy as np

class ModelSubClassing(tf.keras.Model):
    def __init__(self, num_classes):
        super(ModelSubClassing, self).__init__()
        self.conv1 = tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu")
        self.gap = tf.keras.layers.GlobalAveragePooling2D()
        self.dense = tf.keras.layers.Dense(num_classes)

    def call(self, input_tensor, training=False):
        # forward pass: block 1
        x = self.conv1(input_tensor)
        x = self.gap(x)
        return self.dense(x)

    def build_graph(self, raw_shape):
        x = tf.keras.layers.Input(shape=raw_shape)
        return tf.keras.Model(inputs=[x], outputs=self.call(x))

# compile
sub_classing_model = ModelSubClassing(10)
sub_classing_model.compile(
    loss=tf.keras.losses.CategoricalCrossentropy(),
    metrics=tf.keras.metrics.CategoricalAccuracy(),
    optimizer=tf.keras.optimizers.Adam())

# plot for debug (x_train is loaded in the DataSet section below)
tf.keras.utils.plot_model(
    sub_classing_model.build_graph(x_train.shape[1:]),
    show_shapes=False,
    show_dtype=False,
    show_layer_names=True,
    expand_nested=False,
    dpi=96,
)
DataSet
(x_train, y_train), (_, _) = tf.keras.datasets.mnist.load_data()

# train set / data
x_train = np.expand_dims(x_train, axis=-1)
x_train = x_train.astype('float32') / 255

# train set / target
y_train = tf.keras.utils.to_categorical(y_train, num_classes=10)

# fit
sub_classing_model.fit(x_train, y_train, batch_size=128, epochs=1)
# 469/469 [==============================] - 2s 2ms/step - loss: 8.2821
New Model / Copy
For the subclassed model, we have to instantiate the class object again.
sub_classing_model_copy = ModelSubClassing(10)
sub_classing_model_copy.build(x_train.shape)
sub_classing_model_copy.set_weights(sub_classing_model.get_weights())  # <- get and set weights

# plot for debug; same as the original plot,
# but note the layer names are no longer the same,
# i.e. old: conv2d_40, new/copy: conv2d_41
tf.keras.utils.plot_model(
    sub_classing_model_copy.build_graph(x_train.shape[1:]),
    show_shapes=False,
    show_dtype=False,
    show_layer_names=True,
    expand_nested=False,
    dpi=96,
)
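As a quick sanity check, you can verify that every weight tensor was copied over exactly:

for w_old, w_new in zip(sub_classing_model.get_weights(),
                        sub_classing_model_copy.get_weights()):
    assert np.array_equal(w_old, w_new)  # each weight matrix matches exactly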
import copy
import keras.models as KM  # the KM alias used below; with tf.keras, use tf.keras.models instead

def clones(module, N):
    """
    Creation of N identical layers.
    :param module: module to clone
    :param N: number of copies
    :return: Keras model of module copies
    """
    seqm = KM.Sequential()
    for i in range(N):
        m = copy.deepcopy(module)
        m.name = m.name + str(i)  # give each copy a unique name
        seqm.add(m)
    return seqm
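A hypothetical usage sketch (the Dense layer is just an example to illustrate the call):

from keras.layers import Dense

base = Dense(4, activation='relu')
stack = clones(base, 3)  # a Sequential of 3 independently-weighted copies of `base`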

Why does tf.executing_eagerly() return False in TensorFlow 2?

Let me explain my set up. I am using TensorFlow 2.1, the Keras version shipped with TF, and TensorFlow Probability 0.9.
I have a function get_model that creates (with the functional API) and returns a model built with Keras and custom layers. In the __init__ method of one of these custom layers, A, I call a method A.m, which executes the statement print(tf.executing_eagerly()), but it prints False. Why?
To be more precise, this is roughly my setup:
def get_model():
    inp = Input(...)
    x = A(...)(inp)
    x = A(...)(x)
    ...
    model = Model(inp, out)
    model.compile(...)
    return model

class A(tfp.layers.DenseFlipout):  # TensorFlow Probability
    def __init__(...):
        self.m()

    def m(self):
        print(tf.executing_eagerly())  # Prints False
The documentation of tf.executing_eagerly says
Eager execution is enabled by default and this API returns True in most of cases. However, this API might return False in the following use cases.
Executing inside tf.function, unless under tf.init_scope or tf.config.experimental_run_functions_eagerly(True) is previously called.
Executing inside a transformation function for tf.dataset.
tf.compat.v1.disable_eager_execution() is called.
But none of these cases apply to me, so tf.executing_eagerly() should return True in my case, and yet it returns False. Why?
Here's a simple complete example (in TF 2.1) that illustrates the problem.
import tensorflow as tf

class MyLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        tf.print("tf.executing_eagerly() =", tf.executing_eagerly())
        return inputs

def get_model():
    inp = tf.keras.layers.Input(shape=(1,))
    out = MyLayer()(inp)
    model = tf.keras.Model(inputs=inp, outputs=out)
    model.summary()
    return model

def train():
    model = get_model()
    model.compile(optimizer="adam", loss="mae")
    x_train = [2, 3, 4, 1, 2, 6]
    y_train = [1, 0, 1, 0, 1, 1]
    model.fit(x_train, y_train)

if __name__ == '__main__':
    train()
This example prints tf.executing_eagerly() = False.
See the related Github issue.
As far as I know, when the input to a custom layer is a symbolic input, the layer is executed in graph (non-eager) mode. However, if the input to the custom layer is an eager tensor (as in example #1 below), the custom layer executes in eager mode. So your model's output tf.executing_eagerly() = False is expected.
Example #1
import tensorflow as tf
from tensorflow.keras import layers

class Linear(layers.Layer):
    def __init__(self, units=32, input_dim=32):
        super(Linear, self).__init__()
        w_init = tf.random_normal_initializer()
        self.w = tf.Variable(initial_value=w_init(shape=(input_dim, units),
                                                  dtype='float32'),
                             trainable=True)
        b_init = tf.zeros_initializer()
        self.b = tf.Variable(initial_value=b_init(shape=(units,),
                                                  dtype='float32'),
                             trainable=True)

    def call(self, inputs):
        print("tf.executing_eagerly() =", tf.executing_eagerly())
        return tf.matmul(inputs, self.w) + self.b

x = tf.ones((1, 2))  # prints tf.executing_eagerly() = True
# x = tf.keras.layers.Input(shape=(2,))  # prints tf.executing_eagerly() = False
linear_layer = Linear(4, 2)
y = linear_layer(x)
print(y)
# output in graph mode: Tensor("linear_9/Identity:0", shape=(None, 4), dtype=float32)
# output in eager mode: tf.Tensor([[-0.03011466  0.02563028  0.01234017  0.02272708]], shape=(1, 4), dtype=float32)
Here is another example with the Keras functional API where a custom layer is used (similar to your setup). This model is executed in graph mode and prints tf.executing_eagerly() = False, as in your case.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class CustomDense(layers.Layer):
    def __init__(self, units=32):
        super(CustomDense, self).__init__()
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer='random_normal',
                                 trainable=True)
        self.b = self.add_weight(shape=(self.units,),
                                 initializer='random_normal',
                                 trainable=True)

    def call(self, inputs):
        print("tf.executing_eagerly() =", tf.executing_eagerly())
        return tf.matmul(inputs, self.w) + self.b

inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
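If you need the layer body to run eagerly during training anyway, for example for debugging, one option in TF 2.x is to compile the model with run_eagerly=True (note that the functional-API build step above still traces in graph mode):

model.compile(optimizer="adam", loss="mae", run_eagerly=True)  # call() then runs eagerly inside fit()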
You might be running in a Colab. If so, try the following immediately after importing Tensorflow:
tf.compat.v1.enable_v2_behavior()
More generally, check the docs at https://www.tensorflow.org/api_docs/python/tf/executing_eagerly for more information on eager execution.

NotImplementedError: Layers with arguments in `__init__` must override `get_config`

I'm trying to save my TensorFlow model using model.save(); however, I am getting this error.
The model summary was provided as an image in the original post (omitted here).
The code for the transformer model:
def transformer(vocab_size, num_layers, units, d_model, num_heads, dropout, name="transformer"):
    inputs = tf.keras.Input(shape=(None,), name="inputs")
    dec_inputs = tf.keras.Input(shape=(None,), name="dec_inputs")

    enc_padding_mask = tf.keras.layers.Lambda(
        create_padding_mask, output_shape=(1, 1, None),
        name='enc_padding_mask')(inputs)

    # mask the future tokens for decoder inputs at the 1st attention block
    look_ahead_mask = tf.keras.layers.Lambda(
        create_look_ahead_mask,
        output_shape=(1, None, None),
        name='look_ahead_mask')(dec_inputs)

    # mask the encoder outputs for the 2nd attention block
    dec_padding_mask = tf.keras.layers.Lambda(
        create_padding_mask, output_shape=(1, 1, None),
        name='dec_padding_mask')(inputs)

    enc_outputs = encoder(
        vocab_size=vocab_size,
        num_layers=num_layers,
        units=units,
        d_model=d_model,
        num_heads=num_heads,
        dropout=dropout,
    )(inputs=[inputs, enc_padding_mask])

    dec_outputs = decoder(
        vocab_size=vocab_size,
        num_layers=num_layers,
        units=units,
        d_model=d_model,
        num_heads=num_heads,
        dropout=dropout,
    )(inputs=[dec_inputs, enc_outputs, look_ahead_mask, dec_padding_mask])

    outputs = tf.keras.layers.Dense(units=vocab_size, name="outputs")(dec_outputs)

    return tf.keras.Model(inputs=[inputs, dec_inputs], outputs=outputs, name=name)
I don't understand why it's giving this error since the model trains perfectly fine.
Any help would be appreciated.
My saving code for reference:
print("Saving the model.")
saveloc = "C:/tmp/solar.h5"
model.save(saveloc)
print("Model saved to: " + saveloc + " succesfully.")
It's not a bug, it's a feature.
This error lets you know that TF can't save your model, because it won't be able to load it.
Specifically, it won't be able to reinstantiate your custom Layer classes: encoder and decoder.
To solve this, just override their get_config method according to the new arguments you've added.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
For example, if your encoder class looks something like this:
class encoder(tf.keras.layers.Layer):
    def __init__(
        self,
        vocab_size, num_layers, units, d_model, num_heads, dropout,
        **kwargs,
    ):
        super().__init__(**kwargs)
        self.vocab_size = vocab_size
        self.num_layers = num_layers
        self.units = units
        self.d_model = d_model
        self.num_heads = num_heads
        self.dropout = dropout

    # Other methods etc.
then you only need to override this method:
def get_config(self):
    config = super().get_config().copy()
    config.update({
        'vocab_size': self.vocab_size,
        'num_layers': self.num_layers,
        'units': self.units,
        'd_model': self.d_model,
        'num_heads': self.num_heads,
        'dropout': self.dropout,
    })
    return config
When TF sees this (for both classes), you will be able to save the model, because now, when the model is loaded, TF will be able to reinstantiate the same layers from their configs.
Layer.from_config's source code may give a better sense of how it works:
@classmethod
def from_config(cls, config):
    return cls(**config)
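Once both classes override get_config, saving works, and loading the model back is a matter of passing the custom objects in. A sketch, with the names taken from the question's code:

model.save("transformer.h5")

loaded = tf.keras.models.load_model(
    "transformer.h5",
    custom_objects={"encoder": encoder,
                    "decoder": decoder,
                    "create_padding_mask": create_padding_mask,
                    "create_look_ahead_mask": create_look_ahead_mask})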
This problem can also be caused by mixing imports between the keras and tf.keras libraries, which is not supported.
Use tf.keras.models or use keras.models everywhere.
You should never mix imports between these libraries, as it will not work and produces all kinds of strange error messages. These errors change with the versions of keras and tensorflow.
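For instance, a consistent set of imports keeps everything on the tf.keras side:

# consistent: everything comes from tensorflow.keras
import tensorflow as tf
from tensorflow.keras import layers, models

# do not mix in `import keras` or `from keras.layers import Dense` alongside these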
I suggest you try the following:
model = tf.keras.Model(...)
model.save_weights("some_path")
...
model.load_weights("some_path")
I think a simple solution is to install tensorflow==2.4.2 (for GPU, tensorflow-gpu==2.4.2). I faced this issue and debugged the whole day, but it was not resolved. Finally I installed the older stable version and the error was gone.

How to call functions of the class Sequential() if those are not in the source code?

I am quite new to machine learning and I am trying to implement a custom layer in Keras. I found a couple of tutorials and it seems comparatively straightforward. What I do not understand, though, is how to plug my new custom layer into Sequential(). See for example this classification problem that I took from the TensorFlow website (https://www.tensorflow.org/tutorials/keras/basic_text_classification), posted here for your convenience:
from __future__ import absolute_import, division, print_function

import tensorflow as tf
from tensorflow import keras
import numpy as np

imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

# A dictionary mapping words to an integer index
word_index = imdb.get_word_index()

# The first indices are reserved
word_index = {k: (v + 3) for k, v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2  # unknown
word_index["<UNUSED>"] = 3

reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])

def decode_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])

train_data = keras.preprocessing.sequence.pad_sequences(train_data,
                                                        value=word_index["<PAD>"],
                                                        padding='post',
                                                        maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
                                                       value=word_index["<PAD>"],
                                                       padding='post',
                                                       maxlen=256)

# input shape is the vocabulary count used for the movie reviews (10,000 words)
vocab_size = 10000

model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation=tf.nn.relu))
model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))
model.summary()

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['acc'])

x_val = train_data[:10000]
partial_x_train = train_data[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]

history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=40,
                    batch_size=512,
                    validation_data=(x_val, y_val),
                    verbose=1)

results = model.evaluate(test_data, test_labels)
print(results)
Do I have to change the source code for keras.Sequential() or is there an easy way?
Furthermore, looking at the source code for the class Sequential() made me wonder: I can't figure out how functions like summary(), compile(), fit() and evaluate() can be called if they are not even defined in this class. Here is the source code for Sequential():
https://github.com/keras-team/keras/blob/a1397169ddf8595736c01fcea084c8e34e1a3884/keras/engine/sequential.py
Sequential is a Model, and not a layer.
The functions you mentioned (summary, compile, fit, evaluate) are implemented in the Model class linked here, as Sequential is a subclass of Model.
If you're writing a custom layer, you should be subclassing Layer instead, and not Model or Sequential.
You would need to implement build, call, and compute_output_shape to create your own layer.
There are a few examples in the Keras documentation:
from keras import backend as K
from keras.layers import Layer

class MyLayer(Layer):
    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Create a trainable weight variable for this layer.
        self.kernel = self.add_weight(name='kernel',
                                      shape=(input_shape[1], self.output_dim),
                                      initializer='uniform',
                                      trainable=True)
        super(MyLayer, self).build(input_shape)  # Be sure to call this at the end

    def call(self, x):
        return K.dot(x, self.kernel)

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.output_dim)
To use it, import the MyLayer class from whichever file you put it in, and then add it like the built-in Keras layers (note that its __init__ requires output_dim):

from custom.layers import MyLayer

model = keras.Sequential()
model.add(MyLayer(16))
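Slotting it into the model from the question would then look like this (a sketch; the output_dim of 16 is an arbitrary choice mirroring the Dense(16) layer it stands in for):

model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(MyLayer(16))  # the custom layer, used like any built-in layer
model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))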
