I am running a hyperparameter search with a new version of Keras Tuner for a NN, and I get an error that 1) didn't exist in the old version and 2) doesn't make sense.
ValueError: Unknown initializer: relu. Please ensure this object is passed to the custom_objects argument.
I don't get why 'relu' is passed as an initializer; the code breaks at the definition of the learning rate (as per the Keras Tuner documentation):
hp_learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=hp_learning_rate),
              loss=self.loss_function,
              metrics=['sparse_categorical_accuracy', 'accuracy'])
However, it works with
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss=self.loss_function,
              metrics=['sparse_categorical_accuracy', 'accuracy'])
I am not specifying any weight initializers, so they should all be the default ones.
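For reference, this is roughly the documented Keras Tuner pattern that the snippet above is based on (a sketch: the model architecture and the Hyperband settings are illustrative, and the newer keras_tuner package is assumed):

import tensorflow as tf
import keras_tuner as kt

def build_model(hp):
    # Illustrative architecture; only the learning-rate choice mirrors the snippet above
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(hp.Int('units', min_value=32, max_value=512, step=32),
                              activation='relu'),
        tf.keras.layers.Dense(10),
    ])
    hp_learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=hp_learning_rate),
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['sparse_categorical_accuracy', 'accuracy'])
    return model

tuner = kt.Hyperband(build_model, objective='val_accuracy', max_epochs=10)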
Any idea why this could be happening?
Thank you very much in advance!
Related
Recently, I had an error in one custom layer of a larger model. The error can be reproduced by the code below. However, after fixing it in the smaller example, the larger model still presented the same error. It is hard to tell which custom layer, or which part of the code in the larger model, is the culprit. How can I debug and find exactly which line of code is creating this error? For instance, in the smaller code below it seems to be the tf.reduce_mean(), but how can I determine this, given that the error only occurs when loading the model? Ultimately, I need to apply this debugging technique to the larger model.
Code:
import tensorflow as tf
import numpy as np
from tensorflow.keras import Input, Model
tf.compat.v1.disable_eager_execution()
#tf.compat.v1.enable_eager_execution()
inputs = Input(shape=(2,))
output_loss = tf.keras.backend.mean(inputs)
outputs = [inputs, output_loss]
model = Model(inputs, outputs)
loss = tf.reduce_mean(output_loss)  # Error
#loss = tf.math.rsqrt(output_loss)  # No Error
model.add_loss(loss)
model.compile(optimizer="adam", loss=[None] * len(model.outputs))
model.fit(np.random.random((5, 2)), epochs=2)
model.save("my_model_.h5")
#Error when loading and loss tf.reduce_mean
model_ = tf.keras.models.load_model("my_model_.h5", compile=False)  # ValueError: Inconsistent values for attr 'Tidx' DT_FLOAT vs. DT_INT32 while building NodeDef 'tf_op_layer_Mean_1/Mean_1'
model_.summary()
Error:
ValueError: Inconsistent values for attr 'Tidx' DT_FLOAT vs. DT_INT32 while building NodeDef
From the TensorFlow docs I have read here, I have tried to minimize the loss with the Adam optimizer.
optimizer = tf.compat.v1.train.AdamOptimizer
print("Using AdamOptimizer...")
train_step = optimizer.minimize(loss, global_step=global_step, var_list=[process_image])
But I receive the error below from this code, even though I have passed the 'loss' argument. I think it may be due to using TensorFlow 2?
Do you have a loss tensor called loss?
In that case, you could simply try:
optimizer.minimize(loss=loss, ...)
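For illustration, a minimal sketch of the v1-style call in graph mode (the tensors here are hypothetical stand-ins for the ones in the snippet above):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # in eager mode, a v1 optimizer expects a callable loss, not a tensor

# Hypothetical stand-ins for the variables used in the question
process_image = tf.Variable(tf.zeros([1, 64, 64, 3]))
global_step = tf.compat.v1.train.get_or_create_global_step()
loss = tf.reduce_mean(tf.square(process_image))

optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=1e-3)  # instantiate the optimizer class
train_step = optimizer.minimize(loss=loss, global_step=global_step, var_list=[process_image])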
I am stuck with TensorFlow 1.12, and I need to use layer normalization. I can't find any examples of this, and as I am new to TensorFlow I am unable to figure out where I am going wrong.
tf.contrib.layers.layer_norm is the function that I want to include in my tf.keras.Sequential() like this:
self.module = K.Sequential([
    tf.contrib.layers.layer_norm(trainable=True),
    K.layers.Activation(self.activation),
    K.layers.Dense(units=self.output_size, activation=None, kernel_initializer=self.initializer)
])
I also tried using
self.ln = tf.contrib.layers.layer_norm(trainable=True)
### and in call()
self.ln(self.module)
In all cases, it throws the error at the line defining tf.contrib.layers.layer_norm(trainable=True):
TypeError: layer_norm() missing 1 required positional argument: 'inputs'
I understand that the inputs need to be given as the argument to layer_norm, but if I want it to be trainable, it can only be defined in __init__(). Where am I going wrong?
I mainly use PyTorch, so it is quite obvious that I have not yet grasped the philosophy of TF. Any suggestions will be very helpful!
Sequential needs to be initialized with a list of Layer instances, such as tf.keras.layers.Activation or tf.keras.layers.Dense. tf.contrib.layers.layer_norm is a function, not a Layer instance.
There is a third-party implementation of layer normalization in Keras style, keras-layer-normalization, but I haven't tested it with TensorFlow.
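If you want to keep the contrib function while still using Sequential, one option is to wrap it in a small custom Layer along these lines (a sketch, untested on 1.12; note that the gamma/beta variables created inside the contrib call may not be tracked as the wrapper's own weights):

import tensorflow as tf
from tensorflow import keras as K

class ContribLayerNorm(K.layers.Layer):
    # Thin wrapper so the functional tf.contrib.layers.layer_norm can sit
    # inside a Sequential model like an ordinary layer.
    def call(self, inputs):
        # layer_norm expects the inputs tensor as its first positional
        # argument, which is why calling it with no arguments fails.
        return tf.contrib.layers.layer_norm(inputs, trainable=True)

    def compute_output_shape(self, input_shape):
        return input_shape  # layer normalization does not change the shape

module = K.Sequential([
    ContribLayerNorm(),
    K.layers.Activation('relu'),
    K.layers.Dense(units=10),
])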
I have seen code like the following:
embed_word = Embedding(params['word_voc_size'], params['embed_dim'], weights=[word_embed_matrix],
                       input_length=params['word_max_size'], trainable=False, mask_zero=True)
When I look up the documentation on the Keras website (https://faroit.github.io/keras-docs/2.1.5/layers/embeddings/), I don't see a weights argument:
keras.layers.Embedding(input_dim, output_dim, embeddings_initializer='uniform', embeddings_regularizer=None, activity_regularizer=None, embeddings_constraint=None, mask_zero=False, input_length=None)
So I am confused: why can we use the weights argument when it is not defined in the Keras documentation?
My Keras version is 2.1.5. I hope someone can help me.
Keras' Embedding layer subclasses the Layer class (every Keras layer does this). The weights attribute is implemented in this base class, so every subclass allows setting this attribute through a weights argument. This is also why you won't find it in the documentation or in the implementation of the Embedding layer itself.
You can check the base layer implementation here (Ctrl + F for 'weight').
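As a quick illustration (a sketch with standalone Keras 2.x imports and a hypothetical pre-trained matrix), the weights argument simply seeds the layer's initial weights once the layer is built:

import numpy as np
from keras.layers import Input, Embedding

word_embed_matrix = np.random.rand(1000, 50)  # hypothetical pre-trained matrix

inp = Input(shape=(30,))
emb = Embedding(input_dim=1000, output_dim=50,
                weights=[word_embed_matrix],  # consumed by the base Layer class
                trainable=False, mask_zero=True)
out = emb(inp)  # building the layer applies the weights passed above

assert np.allclose(emb.get_weights()[0], word_embed_matrix)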
I have developed a neural network classifier to solve the Titanic problem.
from sknn.mlp import Classifier, Layer

nn = Classifier(
    layers=[
        Layer("Maxout", units=100, pieces=2),
        Layer("Softmax")],
    learning_rate=0.001,
    n_iter=25)
nn.fit(X_train, y_train)
I got this error and have tried a lot to fix it, but nothing works for me.
Please help me.
TypeError: __init__() got an unexpected keyword argument 'pieces'
The signature of Layer does not define any argument called pieces. To create two layers with the same parameters, you'll have to define the Layer object twice:
layers=[
    Layer("Sigmoid", units=100),
    Layer("Sigmoid", units=100),
    Layer("Softmax", units=1)]  # The units parameter is not optional
Moreover, "Maxout" does not look like a valid Layer type; I'm not sure where you found that.
Specifically, the options are Rectifier, Sigmoid, Tanh, and ExpLin for non-linear layers, and Linear or Softmax for output layers.