TensorFlow Estimator: model_fn has following not expected args: ['self']

I'm using the TensorFlow (1.1) high-level Estimator API to create my neural net. But I'm using it inside a class, and I need an instance of that class to generate the model of the neural network (here, self.a).
class NeuralNetwork(object):

    def __init__(self):
        """ Create neural net """
        regressor = tf.estimator.Estimator(model_fn=self.my_model_fn,
                                           model_dir="/tmp/data")
        # ...

    def my_model_fn(self, features, labels, mode):
        """ Generate neural net model """
        self.a = a
        predictions = ...
        loss = ...
        train_op = ...
        return tf.estimator.EstimatorSpec(
            mode=mode,
            predictions=predictions,
            loss=loss,
            train_op=train_op)
But I get this error:
ValueError: model_fn [...] has following not expected args: ['self'].
I tried removing self from the args of my model_fn, but then got another error: TypeError: … got multiple values for keyword argument.
Is there any way to use an EstimatorSpec inside a class?

It looks like the Estimator's argument checking is a bit overzealous. As a workaround, you can wrap the member-function model_fn in a lambda like so:
import tensorflow as tf

class ModelClass(object):

    def __init__(self):
        self._constant = 2.
        self.regressor = tf.estimator.Estimator(
            model_fn=lambda features, labels, mode: self._model_fn(
                features, labels, mode))

    def _model_fn(self, features, labels, mode):
        loss = tf.constant(self._constant)
        train_op = tf.no_op()
        return tf.estimator.EstimatorSpec(
            mode=mode,
            loss=loss,
            train_op=train_op)

ModelClass()
However, this is rather annoying. Would you mind filing a feature request on Github to relax this argument checking for member functions?
Update: Should be fixed in TensorFlow 1.3+. Thanks, Yuan!
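If you prefer a named function over a lambda (for example, on versions before the fix), a small closure works the same way. Here is a sketch along the lines of the answer above, not part of the original:
import tensorflow as tf

def make_model_fn(instance):
    # Returns a plain function whose signature is exactly
    # (features, labels, mode), so it passes the argument check
    # while still closing over the class instance.
    def model_fn(features, labels, mode):
        return instance._model_fn(features, labels, mode)
    return model_fn

class ModelClass(object):

    def __init__(self):
        self._constant = 2.
        self.regressor = tf.estimator.Estimator(
            model_fn=make_model_fn(self))

    def _model_fn(self, features, labels, mode):
        loss = tf.constant(self._constant)
        train_op = tf.no_op()
        return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)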

Related

How to get output layer values during training tensorflow

Is it possible to get the output layer values during training in order to build a custom loss function?
To be more specific, I want to get the output value and compute the loss using an external method.
My problem is that I can't call .eval() on a tensor before the variables are initialized with tf.global_variables_initializer().
def run_command(im_path, p):
    s = 'cmd' + p.eval()
    os.system(s)
    im = imread(im_path)
    return im

def cross_corr(y_true, y_pred):
    path1 = 'path_to_input_image'
    true_image = run_command(path1, y_pred)
    path2 = 'path_to_predicted_image'
    predicted_image = run_command(path2, y_true)
    pearson_r, update_op = tf.contrib.metrics.streaming_pearson_correlation(
        predicted_image, true_image, name='pearson_r')
    loss = 1 - tf.math.square(pearson_r)
    return loss

***
***
# Create the network
***
tf.global_variables_initializer()
***
# run training
with tf.Session() as sess:
    ***
If your model has a single output value, you can subclass tf.keras.losses.Loss. For example, here is a trivial custom loss implementation (which wouldn't be very good in training):
import tensorflow as tf

class LossCustom(tf.keras.losses.Loss):

    def __init__(self, some_arg):
        super(LossCustom, self).__init__()
        self.param_some_arg = some_arg

    def get_config(self):
        config = super(LossCustom, self).get_config()
        config.update({
            "some_arg": self.param_some_arg,
        })
        return config

    def call(self, y_true, y_pred):
        result = y_pred - y_true
        return tf.math.reduce_sum(result)
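Usage is then the same as with any built-in loss; the model and data names below are hypothetical, just for illustration:
# 'model' is any single-output tf.keras.Model.
model.compile(optimizer='adam', loss=LossCustom(some_arg=1.0))
model.fit(x_train, y_train, epochs=2)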
However, if you have multiple outputs and want to run them through the same loss function, you need to artificially combine them into a single value.
I do this in one of my models with a dummy layer at the end of the model, like this:
import tensorflow as tf

class LayerCheeseMultipleOut(tf.keras.layers.Layer):

    def __init__(self, **kwargs):
        super(LayerCheeseMultipleOut, self).__init__(**kwargs)

    def call(self, inputs):
        return tf.stack(inputs, axis=1)  # [batch_size, OUTPUTS, ...]
Then, in your custom loss function, unstack again like so:
output_a, output_b = tf.unstack(y_pred, axis=1)
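Putting the two pieces together, a multi-output custom loss might look like this (a sketch assuming two outputs stacked by LayerCheeseMultipleOut, with labels stacked the same way; not part of the original answer):
import tensorflow as tf

class LossMultiOut(tf.keras.losses.Loss):

    def call(self, y_true, y_pred):
        # Undo the stacking done by the dummy output layer.
        output_a, output_b = tf.unstack(y_pred, axis=1)
        target_a, target_b = tf.unstack(y_true, axis=1)
        # Example: plain squared error per output, summed.
        loss_a = tf.math.reduce_mean(tf.math.square(output_a - target_a))
        loss_b = tf.math.reduce_mean(tf.math.square(output_b - target_b))
        return loss_a + loss_b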

NotImplementedError: Layers with arguments in `__init__` must override `get_config`

I'm trying to save my TensorFlow model using model.save(), however - I am getting this error.
The model summary was provided as an image (omitted here).
The code for the transformer model:
def transformer(vocab_size, num_layers, units, d_model, num_heads, dropout, name="transformer"):
    inputs = tf.keras.Input(shape=(None,), name="inputs")
    dec_inputs = tf.keras.Input(shape=(None,), name="dec_inputs")

    enc_padding_mask = tf.keras.layers.Lambda(
        create_padding_mask, output_shape=(1, 1, None),
        name='enc_padding_mask')(inputs)
    # mask the future tokens for decoder inputs at the 1st attention block
    look_ahead_mask = tf.keras.layers.Lambda(
        create_look_ahead_mask,
        output_shape=(1, None, None),
        name='look_ahead_mask')(dec_inputs)
    # mask the encoder outputs for the 2nd attention block
    dec_padding_mask = tf.keras.layers.Lambda(
        create_padding_mask, output_shape=(1, 1, None),
        name='dec_padding_mask')(inputs)

    enc_outputs = encoder(
        vocab_size=vocab_size,
        num_layers=num_layers,
        units=units,
        d_model=d_model,
        num_heads=num_heads,
        dropout=dropout,
    )(inputs=[inputs, enc_padding_mask])

    dec_outputs = decoder(
        vocab_size=vocab_size,
        num_layers=num_layers,
        units=units,
        d_model=d_model,
        num_heads=num_heads,
        dropout=dropout,
    )(inputs=[dec_inputs, enc_outputs, look_ahead_mask, dec_padding_mask])

    outputs = tf.keras.layers.Dense(units=vocab_size, name="outputs")(dec_outputs)

    return tf.keras.Model(inputs=[inputs, dec_inputs], outputs=outputs, name=name)
I don't understand why it's giving this error since the model trains perfectly fine.
Any help would be appreciated.
My saving code for reference:
print("Saving the model.")
saveloc = "C:/tmp/solar.h5"
model.save(saveloc)
print("Model saved to: " + saveloc + " succesfully.")
It's not a bug, it's a feature.
This error lets you know that TF can't save your model, because it won't be able to load it.
Specifically, it won't be able to reinstantiate your custom Layer classes: encoder and decoder.
To solve this, just override their get_config method according to the new arguments you've added.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
For example, if your encoder class looks something like this:
class encoder(tf.keras.layers.Layer):

    def __init__(
        self,
        vocab_size, num_layers, units, d_model, num_heads, dropout,
        **kwargs,
    ):
        super().__init__(**kwargs)
        self.vocab_size = vocab_size
        self.num_layers = num_layers
        self.units = units
        self.d_model = d_model
        self.num_heads = num_heads
        self.dropout = dropout

    # Other methods etc.
then you only need to override this method:
def get_config(self):
    config = super().get_config().copy()
    config.update({
        'vocab_size': self.vocab_size,
        'num_layers': self.num_layers,
        'units': self.units,
        'd_model': self.d_model,
        'num_heads': self.num_heads,
        'dropout': self.dropout,
    })
    return config
When TF sees this (for both classes), you will be able to save the model, because now, when the model is loaded, TF will be able to reinstantiate the same layers from their configs.
Layer.from_config's source code may give a better sense of how it works:
@classmethod
def from_config(cls, config):
    return cls(**config)
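When loading the saved model back, the custom classes still need to be made known to Keras via custom_objects. A usage sketch, assuming the class names above and the save path from the question:
import tensorflow as tf

# Map the class names recorded in the file to the actual classes.
model = tf.keras.models.load_model(
    "C:/tmp/solar.h5",
    custom_objects={"encoder": encoder, "decoder": decoder})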
This problem is caused by mixing imports between the keras and tf.keras libraries, which is not supported.
Use tf.keras.models or keras.models everywhere.
You should never mix imports between these libraries, as it will not work and produces all kinds of strange error messages. These errors change with the versions of keras and tensorflow.
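In other words, pick one namespace and stick to it. An illustrative sketch:
# Consistent: everything comes from tf.keras.
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(10, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Inconsistent (avoid): building with tf.keras but then importing utilities
# from the standalone keras package, e.g. `from keras.models import load_model`.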
I suggest you try the following:
model = tf.keras.Model(...)
model.save_weights("some_path")
...
model.load_weights("some_path")
I think the simple solution is to install tensorflow==2.4.2 (for GPU, tensorflow-gpu==2.4.2). I faced the issue and debugged the whole day, but it was not resolved; finally I installed the older stable version and the error was gone.

Using L-BFGS optimizer with Tensorflow estimator API

I am using the Tensorflow Estimator API but haven't figured out how to use the L-BFGS optimizer available at tf.contrib.opt.ScipyOptimizerInterface.
It seems the Estimator API expects an optimizer from the tf.train module, but no BFGS implementation is available there, and the only one, defined in contrib, does not follow the same interface.
To be more specific, the official tutorial on defining custom estimators shows how to use the AdagradOptimizer:
optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
However, the API of the ScipyOptimizerInterface is as follows:
optimizer = ScipyOptimizerInterface(loss, options={'maxiter': 100})
with tf.Session() as session:
    optimizer.minimize(session)
Taking a full example:
from sklearn import datasets
import numpy as np
import tensorflow as tf

def _custom_model_fn(features, labels, mode, feature_columns):
    predictions = tf.feature_column.linear_model(features, feature_columns)
    predictions = tf.reshape(predictions, [-1])

    if mode == tf.estimator.ModeKeys.PREDICT:
        predictions = {'predictions': predictions}
        return tf.estimator.EstimatorSpec(mode, predictions=predictions)

    loss = tf.losses.mean_squared_error(labels=labels, predictions=predictions,
                                        reduction=tf.losses.Reduction.SUM_BY_NONZERO_WEIGHTS)

    if mode == tf.estimator.ModeKeys.EVAL:
        return tf.estimator.EstimatorSpec(mode, loss=loss, eval_metric_ops=metrics)

    # Create training op.
    assert mode == tf.estimator.ModeKeys.TRAIN
    # train_op = tf.contrib.opt.ScipyOptimizerInterface(loss, options={'maxiter': 10})
    optimizer = tf.train.FtrlOptimizer(learning_rate=1.)
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode,
                                      predictions=predictions,
                                      loss=loss,
                                      train_op=train_op)

class MyRegressor(tf.estimator.Estimator):

    def __init__(self, feature_columns, model_dir=None, config=None):
        def _model_fn(features, labels, mode, config):
            return _custom_model_fn(features, labels, mode, feature_columns)
        super(MyRegressor, self).__init__(model_fn=_model_fn)

# Load the diabetes dataset
diabetes = datasets.load_diabetes()
diabetes_X = diabetes.data[:, np.newaxis, 2]
diabetes_y = diabetes.target

# Create the custom estimator and train it
feature_columns = [tf.feature_column.numeric_column('x')]
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'x': np.array(diabetes.data[:, 2])},
    y=np.array(diabetes.target),
    num_epochs=None,
    shuffle=True)
myregressor = MyRegressor(feature_columns)
myregressor.train(train_input_fn, steps=10000)
If I uncomment the line to use the ScipyOptimizer instead, I obviously get an error as follows:
TypeError: train_op must be Operation or Tensor, given: <tensorflow.contrib.opt.python.training.external_optimizer.ScipyOptimizerInterface object
Is there an easy way to use the Scipy optimizer?
Thanks in advance.

How to save a trained model (Estimator) and Load it back to test it with data in Tensorflow?

I have this snippet for my model:
import pandas as pd
import tensorflow as tf
from tensorflow.contrib import learn
from tensorflow.contrib.learn.python import SKCompat

# Assume my dataset is using X['train'] as input and y['train'] as output

regressor = SKCompat(learn.Estimator(model_fn=lstm_model(TIMESTEPS, RNN_LAYERS, DENSE_LAYERS),
                                     model_dir=LOG_DIR))
validation_monitor = learn.monitors.ValidationMonitor(X['val'], y['val'],
                                                      every_n_steps=PRINT_STEPS,
                                                      early_stopping_rounds=1000)
regressor.fit(X['train'], y['train'],
              monitors=[validation_monitor],
              batch_size=BATCH_SIZE,
              steps=TRAINING_STEPS)

# After training this model I want to save it in a folder, so I can use the
# trained model in my algorithm to predict the output.
# What is the correct format to use here to save my model in a folder called 'saved_model'?
regressor.export_savedmodel('/saved_model/')

# I want to import it later in some other code. How can I import it?
# Is there any function like "import model from file"?
How can I save this estimator? I tried to find some examples for tf.contrib.learn.Estimator.export_savedmodel, but did not have any success. Help appreciated.
The function export_savedmodel requires the argument serving_input_receiver_fn, a function without arguments that defines the input for the model and the predictor. Therefore, you must create your own serving_input_receiver_fn, where the model input type matches the model input in the training script, and the predictor input type matches the predictor input in the testing script.
On the other hand, if you create a custom model, you must define the export_outputs, built with the function tf.estimator.export.PredictOutput, whose input is a dictionary defining a name that has to match the name of the predictor output in the testing script.
For example:
TRAINING SCRIPT
def serving_input_receiver_fn():
    serialized_tf_example = tf.placeholder(dtype=tf.string, shape=[None], name='input_tensors')
    receiver_tensors = {"predictor_inputs": serialized_tf_example}
    feature_spec = {"words": tf.FixedLenFeature([25], tf.int64)}
    features = tf.parse_example(serialized_tf_example, feature_spec)
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)

def estimator_spec_for_softmax_classification(logits, labels, mode):
    predicted_classes = tf.argmax(logits, 1)
    if mode == tf.estimator.ModeKeys.PREDICT:
        export_outputs = {'predict_output': tf.estimator.export.PredictOutput(
            {"pred_output_classes": predicted_classes, 'probabilities': tf.nn.softmax(logits)})}
        return tf.estimator.EstimatorSpec(
            mode=mode,
            predictions={'class': predicted_classes, 'prob': tf.nn.softmax(logits)},
            export_outputs=export_outputs)  # IMPORTANT!!!

    onehot_labels = tf.one_hot(labels, 31, 1, 0)
    loss = tf.losses.softmax_cross_entropy(onehot_labels=onehot_labels, logits=logits)
    if mode == tf.estimator.ModeKeys.TRAIN:
        optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
        train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

    eval_metric_ops = {'accuracy': tf.metrics.accuracy(labels=labels, predictions=predicted_classes)}
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)

def model_custom(features, labels, mode):
    bow_column = tf.feature_column.categorical_column_with_identity("words", num_buckets=1000)
    bow_embedding_column = tf.feature_column.embedding_column(bow_column, dimension=50)
    bow = tf.feature_column.input_layer(features, feature_columns=[bow_embedding_column])
    logits = tf.layers.dense(bow, 31, activation=None)
    return estimator_spec_for_softmax_classification(logits=logits, labels=labels, mode=mode)

def main():
    # ...
    # preprocess -> features_train_set and labels_train_set
    # ...
    classifier = tf.estimator.Estimator(model_fn=model_custom)
    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"words": features_train_set}, y=labels_train_set,
        batch_size=batch_size_param, num_epochs=None, shuffle=True)
    classifier.train(input_fn=train_input_fn, steps=100)
    full_model_dir = classifier.export_savedmodel(
        export_dir_base="C:/models/directory_base",
        serving_input_receiver_fn=serving_input_receiver_fn)
TESTING SCRIPT
def main():
    # ...
    # preprocess -> features_test_set
    # ...
    with tf.Session() as sess:
        tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], full_model_dir)
        predictor = tf.contrib.predictor.from_saved_model(full_model_dir)
        model_input = tf.train.Example(features=tf.train.Features(
            feature={"words": tf.train.Feature(int64_list=tf.train.Int64List(value=features_test_set))}))
        model_input = model_input.SerializeToString()
        output_dict = predictor({"predictor_inputs": [model_input]})
        y_predicted = output_dict["pred_output_classes"][0]
(Code tested in Python 3.6.3, Tensorflow 1.4.0)

error when using keras' sk-learn API

I'm learning Keras these days, and I met an error when using the scikit-learn API. Here is some information that may be useful:
ENVIRONMENT:
python: 3.5.2
keras: 1.0.5
scikit-learn: 0.17.1
CODE
import pandas as pd
from keras.layers import Input, Dense
from keras.models import Model
from keras.models import Sequential
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.cross_validation import train_test_split
from sklearn.cross_validation import cross_val_score
from sqlalchemy import create_engine
from sklearn.cross_validation import KFold

def read_db():
    "get prepared data from mysql."
    con_str = "mysql+mysqldb://root:0000@localhost/nbse?charset=utf8"
    engine = create_engine(con_str)
    data = pd.read_sql_table('data_ml', engine)
    return data

def nn_model():
    "create a model."
    model = Sequential()
    model.add(Dense(output_dim=100, input_dim=105, activation='softplus'))
    model.add(Dense(output_dim=1, input_dim=100, activation='softplus'))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

data = read_db()
y = data.pop('PRICE').as_matrix()
x = data.as_matrix()

model = nn_model()
model = KerasRegressor(build_fn=model, nb_epoch=2)
model.fit(x, y)  # something wrong here!
ERROR
Traceback (most recent call last):
  File "C:/Users/Administrator/PycharmProjects/forecast/gridsearch.py", line 43, in <module>
    model.fit(x,y)
  File "D:\Program Files\Python35\lib\site-packages\keras\wrappers\scikit_learn.py", line 135, in fit
    **self.filter_sk_params(self.build_fn.__call__))
TypeError: __call__() missing 1 required positional argument: 'x'

Process finished with exit code 1
The model works well without wrapping it in KerasRegressor, but I want to use scikit-learn's GridSearch after this, so I'm here for help. I tried but still have no idea.
Something that may be helpful:
keras/wrappers/scikit_learn.py
class BaseWrapper(object):

    def __init__(self, build_fn=None, **sk_params):
        self.build_fn = build_fn
        self.sk_params = sk_params
        self.check_params(sk_params)

    def fit(self, X, y, **kwargs):
        '''Construct a new model with build_fn and fit the model according
        to the given training data.

        # Arguments
            X : array-like, shape `(n_samples, n_features)`
                Training samples where n_samples is the number of samples
                and n_features is the number of features.
            y : array-like, shape `(n_samples,)` or `(n_samples, n_outputs)`
                True labels for X.
            kwargs: dictionary arguments
                Legal arguments are the arguments of `Sequential.fit`

        # Returns
            history : object
                details about the training history at each epoch.
        '''
        if self.build_fn is None:
            self.model = self.__call__(**self.filter_sk_params(self.__call__))
        elif not isinstance(self.build_fn, types.FunctionType):
            self.model = self.build_fn(
                **self.filter_sk_params(self.build_fn.__call__))
        else:
            self.model = self.build_fn(**self.filter_sk_params(self.build_fn))

        loss_name = self.model.loss
        if hasattr(loss_name, '__name__'):
            loss_name = loss_name.__name__
        if loss_name == 'categorical_crossentropy' and len(y.shape) != 2:
            y = to_categorical(y)

        fit_args = copy.deepcopy(self.filter_sk_params(Sequential.fit))
        fit_args.update(kwargs)

        history = self.model.fit(X, y, **fit_args)
        return history
The error occurred in this line:
self.model = self.build_fn(
    **self.filter_sk_params(self.build_fn.__call__))
self.build_fn here is keras.models.Sequential
models.py
class Sequential(Model):

    def call(self, x, mask=None):
        if not self.built:
            self.build()
        return self.model.call(x, mask)
So, what does that x mean, and how do I fix this error?
Thanks!
xiao, I ran into the same issue! Hopefully this helps:
Background and The Issue
The documentation for Keras states that, when implementing Wrappers for scikit-learn, there are two arguments. The first is the build function, which is a "callable function or class instance". Specifically, it states that:
build_fn should construct, compile and return a Keras model, which will then be used to fit/predict. One of the following three values could be passed to build_fn:
A function
An instance of a class that implements the __call__ method
None. This means you implement a class that inherits from either KerasClassifier or KerasRegressor. The __call__ method of the present class will then be treated as the default build_fn. (See the sketch just below.)
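For instance, that third build_fn=None option might look like this (a minimal sketch under the same Keras 1.x API as the question; the names here are my own, not from the original answer):
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor

class AutoBuildRegressor(KerasRegressor):

    # With build_fn=None (the default), the wrapper falls back to this
    # __call__ as the build function.
    def __call__(self):
        model = Sequential()
        model.add(Dense(output_dim=100, input_dim=105, activation='softplus'))
        model.add(Dense(output_dim=1, input_dim=100, activation='softplus'))
        model.compile(loss='mean_squared_error', optimizer='adam')
        return model

model = AutoBuildRegressor(nb_epoch=2)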
In your code, you create the model, and then pass the model as the value for the argument build_fn when creating the KerasRegressor wrapper:
model = nn_model()
model = KerasRegressor(build_fn=model, nb_epoch=2)
Herein lies the issue. Rather than passing your nn_model function as build_fn, you pass an actual instance of the Keras Sequential model. When fit() is then called, the wrapper tries to use the model's call method as a build function; that method is Sequential.call(x, mask) and expects an input tensor x, hence the missing-argument error.
Proposed Solution
What I did to make things work is pass the function as build_fn, rather than an actual model:
data = read_db()
y = data.pop('PRICE').as_matrix()
x = data.as_matrix()

# model = nn_model()  # Don't do this!
# Set build_fn equal to the nn_model function:
model = KerasRegressor(build_fn=nn_model, nb_epoch=2)  # note that you do not call the function!
model.fit(x, y)  # fixed!
This is not the only solution (you could instead set build_fn to a class instance that implements the __call__ method appropriately, as sketched below), but it is the one that worked for me. I hope it helps you!
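What that class-instance variant could look like, sketched on the same nn_model architecture (again an illustration, not from the original answer):
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor

class NNBuilder(object):

    # The wrapper calls this instance like a function to build a
    # fresh compiled model.
    def __call__(self):
        model = Sequential()
        model.add(Dense(output_dim=100, input_dim=105, activation='softplus'))
        model.add(Dense(output_dim=1, input_dim=100, activation='softplus'))
        model.compile(loss='mean_squared_error', optimizer='adam')
        return model

# Pass an instance (not the class, and not an already-built model):
model = KerasRegressor(build_fn=NNBuilder(), nb_epoch=2)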
User-defined keyword arguments passed to __init__(), that is to say, all keyword arguments that were given to __init__(), will be passed to model_build_fn directly. For example, calling KerasClassifier(myparam=10) will result in a call to model_build_fn(myparam=10).
Here's an example:
class MyMultiOutputKerasRegressor(KerasRegressor):

    # initializing
    def __init__(self, **kwargs):
        KerasRegressor.__init__(self, **kwargs)

    # simpler fit method
    def fit(self, X, y, **kwargs):
        KerasRegressor.fit(self, X, [y]*3, **kwargs)

(...)

def get_quantile_reg_rpf_nn(layers_shape=[50, 100, 200, 100, 50], inDim=4, outDim=1, act='relu'):
    # do model stuff...
    (...)

Initialize the Keras regressor:

base_model = MyMultiOutputKerasRegressor(build_fn=get_quantile_reg_rpf_nn,
                                         layers_shape=[50, 100, 200, 100, 50], inDim=4,
                                         outDim=1, act='relu', epochs=numEpochs,
                                         batch_size=batch_size, verbose=0)
