Using a trained Keras model in Google ML Engine - Python

I am trying to use gcloud ml-engine with TensorFlow; more precisely, I would like to serve an already trained Keras model.
I managed to do this with a scikit-learn model, but it is not the same here...
First I train a simple model with Keras:
import numpy as np
from tensorflow import keras
# Creating the dataset
X = np.random.random((500,9))
y = (np.random.random(500)>0.5).astype(int)
# Splitting
idx_train, idx_test = np.arange(400), np.arange(400,500)
X_train, X_test = X[idx_train], X[idx_test]
y_train, y_test = y[idx_train], y[idx_test]
def define_model():
    input1 = keras.layers.Input(shape=(9,), name="values")
    hidden = keras.layers.Dense(50, activation='relu', name="hidden")(input1)
    preds = keras.layers.Dense(1, activation='sigmoid', name="labels")(hidden)
    model = keras.models.Model(inputs=input1,
                               outputs=preds)
    model.compile(loss='binary_crossentropy',
                  optimizer='adam',
                  metrics=["accuracy"])
    model.summary()
    return model

model = define_model()
model.fit(X_train, y_train,
          batch_size=10,
          epochs=10, validation_split=0.2)
I read that I need a SavedModel to use it in ML Engine, as described here: https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models
It seems I have to convert the model to an Estimator:
model.save("./model_trained_test.h5")
estimator_model = keras.estimator.model_to_estimator(keras_model_path="./model_trained_test.h5")
I manage to make predictions with this Estimator:
import tensorflow as tf

def input_function(features, labels=None, shuffle=False):
    input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"values": features},
        y=labels,
        shuffle=shuffle
    )
    return input_fn

score = estimator_model.evaluate(input_function(X_test, labels=y_test.reshape(-1,1)))
In order to export it to a SavedModel I need a serving_input_receiver_fn. I did not find an example of my situation on the internet, which seemed simple to me, so I tried the following function and then saved the model in the "here_are_estimators" folder:
feature_spec = {'values': tf.FixedLenFeature(9, dtype=tf.float32)}

def serving_input_receiver_fn():
    serialized_tf_example = tf.placeholder(dtype=tf.string,
                                           shape=[None],
                                           name='input_tensors')
    receiver_tensors = {'examples': serialized_tf_example}
    features = tf.parse_example(serialized_tf_example, feature_spec)
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)

estimator_model.export_savedmodel("./here_are_estimators",
                                  serving_input_receiver_fn=serving_input_receiver_fn)
My input.json looks like this:
{"examples":[{"values":[[0.2,0.3,0.4,0.5,0.9,1.5,1.6,7.3,1.5]]}]}
I uploaded the contents of the generated export (a variables folder and a saved_model.pb file) to GCS under the directory DEPLOYMENT_SOURCE.
When I try to run a local prediction with gcloud with this command:
gcloud ml-engine local predict --model-dir $DEPLOYMENT_SOURCE --json-instances="input.json" --verbosity debug --framework tensorflow
I get this error:
cloud.ml.prediction.prediction_utils.PredictionError: Failed to run the provided model: Exception during running the graph: Cannot feed value of shape (1, 1) for Tensor 'input_tensors:0', which has shape '(?,)' (Error code: 2)
I guess something is wrong with my input.json or the serving_input_receiver_fn, or both, but I can't find out what. If someone can tell me what is wrong, it would be much appreciated :)

You shouldn't be trying to parse tf.Example since you are sending JSON. Try this for the export:
def serving_input_receiver_fn():
    inputs = {"values": tf.placeholder(dtype=tf.float32,
                                       shape=[None, 9],
                                       name='input_tensors')}
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)

estimator_model.export_savedmodel("./here_are_estimators",
                                  serving_input_receiver_fn=serving_input_receiver_fn)
The input should look like:
{"values":[0.2,0.3,0.4,0.5,0.9,1.5,1.6,7.3,1.5]}
There's also a more concise "shorthand":
[0.2,0.3,0.4,0.5,0.9,1.5,1.6,7.3,1.5]
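For completeness, here is a minimal sketch of how such an input.json could be generated (the file name and the instance values are just the ones from this question; gcloud's --json-instances expects one JSON instance per line):

import json

instances = [
    {"values": [0.2, 0.3, 0.4, 0.5, 0.9, 1.5, 1.6, 7.3, 1.5]},
]

# One instance per line, as expected by `gcloud ml-engine local predict --json-instances`
with open("input.json", "w") as f:
    for instance in instances:
        f.write(json.dumps(instance) + "\n")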

Related

How to properly set up a dataset for training a Keras model

I am trying to create a dataset for audio recognition with a simple Keras sequential model.
This is the function I am using to create the model:
def dnn_model(input_shape, output_shape):
    model = keras.Sequential()
    model.add(keras.Input(input_shape))
    model.add(layers.Flatten())
    model.add(layers.Dense(512, activation="relu"))
    model.add(layers.Dense(output_shape, activation="softmax"))
    model.compile(optimizer='adam',
                  loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
                  metrics=['acc'])
    model.summary()
    return model
And I am generating my training data with this generator function:
def generator(x_dirs, y_dirs, hmm, sampling_rate, parameters):
    window_size_samples = tools.sec_to_samples(parameters['window_size'], sampling_rate)
    window_size_samples = 2**tools.next_pow2(window_size_samples)
    hop_size_samples = tools.sec_to_samples(parameters['hop_size'], sampling_rate)

    for i in range(len(x_dirs)):
        features = fe.compute_features_with_context(x_dirs[i], **parameters)
        praat = tools.praat_file_to_target(y_dirs[i],
                                           sampling_rate,
                                           window_size_samples,
                                           hop_size_samples,
                                           hmm)
        yield features, praat
The variables x_dirs and y_dirs contain lists of paths to the label and audio files. In total I have 8623 files to train my model. This is how I train it:
def train_model(model, model_dir, x_dirs, y_dirs, hmm, sampling_rate, parameters, steps_per_epoch=10, epochs=10):
    model.fit(generator(x_dirs, y_dirs, hmm, sampling_rate, parameters),
              epochs=epochs,
              batch_size=steps_per_epoch)
    return model
My problem now is that if I pass all 8623 files, it uses all 8623 files to train the model in the first epoch and then complains after the first epoch that it needs steps_per_epoch * epochs batches to train the model.
I tested this with only 10 of the 8623 files (a sliced list), but then TensorFlow complains that 100 batches are needed.
So how do I make my generator yield data so that this works? I always thought that steps_per_epoch just limits the data received per epoch.
The fit function is going to exhaust your generator; that is to say, once it has yielded all your 8623 batches, it won't be able to yield any more.
You can solve the issue like this:
def generator(x_dirs, y_dirs, hmm, sampling_rate, parameters, epochs=1):
    for epoch in range(epochs):  # or while True:
        window_size_samples = tools.sec_to_samples(parameters['window_size'], sampling_rate)
        window_size_samples = 2**tools.next_pow2(window_size_samples)
        hop_size_samples = tools.sec_to_samples(parameters['hop_size'], sampling_rate)

        for i in range(len(x_dirs)):
            features = fe.compute_features_with_context(x_dirs[i], **parameters)
            praat = tools.praat_file_to_target(y_dirs[i],
                                               sampling_rate,
                                               window_size_samples,
                                               hop_size_samples,
                                               hmm)
            yield features, praat
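As an alternative sketch (the dtypes below are assumptions; adapt them to whatever compute_features_with_context and praat_file_to_target actually return), you can wrap the single-pass generator in a tf.data.Dataset and let .repeat() restart it, so the number of epochs does not have to be baked into the generator itself:

import tensorflow as tf

# Each yield of the generator is treated as one batch, as in the original code.
dataset = tf.data.Dataset.from_generator(
    lambda: generator(x_dirs, y_dirs, hmm, sampling_rate, parameters),
    output_types=(tf.float32, tf.float32))  # assumed dtypes
dataset = dataset.repeat()  # restart the generator whenever it is exhausted

model.fit(dataset, epochs=10, steps_per_epoch=10)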

What are other weight types for model wrappers in Keras?

from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np
model = ResNet50(weights='imagenet')
In this code, there is the "wrapper" (that's what it's referred to as) ResNet50. What other types of weights can I use for this? I tried looking around, but I don't even understand the source code; there is nothing conclusive there either.
You can find it in the Keras documentation
or in the GitHub code.
There are only two options: None if you just want the architecture without the weights, or 'imagenet' to load the ImageNet weights.
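As a quick illustration of those two options:

from keras.applications.resnet50 import ResNet50

# Architecture only, with randomly initialized weights
model_random = ResNet50(weights=None)

# Architecture plus the weights pre-trained on ImageNet
model_imagenet = ResNet50(weights='imagenet')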
Edit: how to use your own weights:
# Take a DenseNet201
backbone = tf.keras.applications.DenseNet201(input_shape=input_shape, weights=None, include_top=False)

# Change the model a little bit, because why not
input_image = tf.keras.layers.Input(shape=input_shape)
x = backbone(input_image)
x = tf.keras.layers.Conv2D(classes, (3, 3), padding='same', name='final_conv')(x)
x = tf.keras.layers.Activation(activation, name=activation)(x)
model = tf.keras.Model(input_image, x)

# ... some additional code

# Training part
optimizer = tf.keras.optimizers.Adam(lr=FLAGS.learning_rate)
model.compile(loss=loss,
              optimizer=optimizer,
              metrics=['accuracy', f1_m, recall_m, precision_m])
callbacks = [tf.keras.callbacks.ModelCheckpoint(filepath=ckpt_name)]
model.fit(train_generator, validation_data=validation_generator, validation_freq=1, epochs=10, callbacks=callbacks)
# With this callback, the weights are saved to ckpt_name after each epoch

# Inference part: just re-instantiate the model (the lines after the "Change the model" comment)
model.load_weights(ckpt_name)
results = model.predict(test_generator, verbose=1)
You don't need to change the model, obviously; you could simply have used x = backbone(input_image) and then model = tf.keras.Model(input_image, x).
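In other words, a minimal sketch of that simpler wiring (assuming input_shape is defined as before):

backbone = tf.keras.applications.DenseNet201(input_shape=input_shape, weights=None, include_top=False)
input_image = tf.keras.layers.Input(shape=input_shape)
x = backbone(input_image)               # keep the backbone output unchanged
model = tf.keras.Model(input_image, x)  # no extra layers added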

Fine-tune a model using the Keras Functional API

I am fine-tuning VGG16 on my dataset.
Here's the model:
def finetune(self, aux_input):
    model = applications.VGG16(weights='imagenet', include_top=False)
    # return model

    drop_5 = Input(shape=(7, 7, 512))
    flatten = Flatten()(drop_5)
    # aux_input = Input(shape=(1,))
    concat = Concatenate(axis=1)([flatten, aux_input])

    fc1 = Dense(512, kernel_regularizer=regularizers.l2(self.weight_decay))(concat)
    fc1 = Activation('relu')(fc1)
    fc1 = BatchNormalization()(fc1)
    fc1_drop = Dropout(0.5)(fc1)

    fc2 = Dense(self.num_classes)(fc1_drop)
    top_model_out = Activation('softmax')(fc2)

    top_model = Model(inputs=drop_5, outputs=top_model_out)
    output = top_model(model.output)
    complete_model = Model(inputs=[model.input, aux_input], outputs=output)
    return complete_model
I have two inputs to the model. In the above function, I'm using Concatenate for the flattened array and my aux_input.
I'm not sure if this would work with imagenet weights.
When I run this, I get an error:
ValueError: Graph disconnected: cannot obtain value for tensor Tensor("aux_input:0", shape=(?, 1), dtype=float32) at layer "aux_input". The following previous layers were accessed without issue: ['input_2', 'flatten_1']
Not sure where I am going wrong.
If it matters, this is the fit function:
model.fit(x={'input_1': x_train, 'aux_input': y_aux_train}, y=y_train, batch_size=batch_size,
          epochs=maxepoches, validation_data=([x_test, y_aux_test], y_test),
          callbacks=[reduce_lr, tensorboard], verbose=2)
But I get the error before reaching this fit function, when I call model.summary().
The problem is that you are using aux_input in your top_model but you don't declare it as an input in your definition of top_model. Try replacing your definitions of top_model and output with the following:
top_model = Model(inputs=[drop_5, aux_input], outputs=top_model_out)
output = top_model([model.output, aux_input])
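Put together, a sketch of the corrected finetune function (assuming the same imports and the self.weight_decay / self.num_classes attributes from the question):

def finetune(self, aux_input):
    model = applications.VGG16(weights='imagenet', include_top=False)

    drop_5 = Input(shape=(7, 7, 512))
    flatten = Flatten()(drop_5)
    concat = Concatenate(axis=1)([flatten, aux_input])
    fc1 = Dense(512, kernel_regularizer=regularizers.l2(self.weight_decay))(concat)
    fc1 = Activation('relu')(fc1)
    fc1 = BatchNormalization()(fc1)
    fc1_drop = Dropout(0.5)(fc1)
    fc2 = Dense(self.num_classes)(fc1_drop)
    top_model_out = Activation('softmax')(fc2)

    # aux_input is now declared as an input of top_model ...
    top_model = Model(inputs=[drop_5, aux_input], outputs=top_model_out)
    # ... and fed explicitly when the top model is applied to the VGG16 output
    output = top_model([model.output, aux_input])

    complete_model = Model(inputs=[model.input, aux_input], outputs=output)
    return complete_model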

Keras model.fit() with tf.dataset fails while using tf.train works fine

Summary: according to the documentation, Keras model.fit() should accept a tf.data Dataset as input (I am using TF version 1.12.0). I can train my model if I do the training steps manually, but using model.fit() on the same model I get an error I cannot resolve.
Here is a sketch of what I did: my dataset, which is too big to fit in memory, consists of many files, each with a different number of rows of (100 features, label). I'd like to use tf.data to build my data pipeline:
def data_loader(filename):
    '''load a single data file with many rows'''
    features, labels = load_hdf5(filename)
    ...
    return features, labels

def make_dataset(filenames, batch_size):
    '''read files one by one, pick individual rows, batch them and repeat'''
    dataset = tf.data.Dataset.from_tensor_slices(filenames)
    dataset = dataset.map(  # Problem here! See edit for solution
        lambda filename: tuple(tf.py_func(data_loader, [filename], [tf.float32, tf.float32])))
    dataset = dataset.flat_map(
        lambda features, labels: tf.data.Dataset.from_tensor_slices((features, labels)))
    dataset = dataset.batch(batch_size)
    dataset = dataset.repeat()
    dataset = dataset.prefetch(1000)
    return dataset
_BATCH_SIZE = 128
training_set = make_dataset(training_files, batch_size=_BATCH_SIZE)
I'd like to try a very basic logistic regression model:
inputs = tf.keras.layers.Input(shape=(100,))
outputs = tf.keras.layers.Dense(1, activation='softmax')(inputs)
model = tf.keras.Model(inputs, outputs)
If I train it manually, everything works fine, e.g.:
labels = tf.placeholder(tf.float32)
loss = tf.reduce_mean(tf.keras.backend.categorical_crossentropy(labels, outputs))
train_step = tf.train.GradientDescentOptimizer(.05).minimize(loss)

iterator = training_set.make_one_shot_iterator()
next_element = iterator.get_next()
init_op = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init_op)
    for i in range(training_size // _BATCH_SIZE):
        x, y = sess.run(next_element)
        train_step.run(feed_dict={inputs: x, labels: y})
However, if I instead try to use model.fit like this:
model.compile('adam', 'categorical_crossentropy', metrics=['acc'])
model.fit(training_set.make_one_shot_iterator(),
          steps_per_epoch=training_size // _BATCH_SIZE,
          epochs=1,
          verbose=1)
I get the error message ValueError: Cannot take the length of Shape with unknown rank. inside Keras's _standardize_user_data function.
I have tried quite a few things but could not resolve the issue. Any ideas?
Edit: based on @kvish's answer, the solution was to change the map from a lambda to a function that specifies the correct tensor dimensions, e.g.:
def data_loader(filename):
    def loader_impl(filename):
        features, labels, _ = load_hdf5(filename)
        ...
        return features, labels

    features, labels = tf.py_func(loader_impl, [filename], [tf.float32, tf.float32])
    features.set_shape((None, 100))
    labels.set_shape((None, 1))
    return features, labels
and now, all that is needed is to call this function from map:
dataset = dataset.map(data_loader)
Probably tf.py_func produces an unknown shape which Keras cannot infer. We can set the shape of the tensor it returns using the set_shape(your_shape) method, which helps Keras infer the shape of the result.
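A tiny illustration of that point (TF 1.x; the array and shapes here are made up just for demonstration):

import numpy as np
import tensorflow as tf

# tf.py_func wraps an arbitrary Python function, so TensorFlow cannot infer
# the output shape statically: the tensor comes back with unknown rank.
features = tf.py_func(lambda: np.zeros((4, 100), np.float32), [], tf.float32)
print(features.shape)        # <unknown>

# Declaring the shape by hand lets Keras infer what it needs.
features.set_shape((None, 100))
print(features.shape)        # (?, 100)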

Loading weights through a Keras checkpoint gives different results

I'm new to Keras. I built an autoencoder and trained it on part of the Diabetes dataset. Then I use a Keras checkpointer to save the weights so that I can load them later in order to perform some operations on the encoded data vectors (calculating the mean of the encoded data to extract a class representation).
The problem
When I load the weights and then compute the encoded data, I get different results each time I run the code. I comment out the compile and fit statements after training the autoencoder so that the training process is not repeated every time I run the code.
Here is the code:
checkpointer = ModelCheckpoint(filepath="weights.best.h5",
                               verbose=0,
                               save_best_only=True,
                               save_weights_only=True)
tensorboard = TensorBoard(log_dir='/tmp/autoencoder',
                          histogram_freq=0,
                          write_graph=True,
                          write_images=True)

input_enc = Input(shape=(input_size,))
hidden_1 = Dense(hidden_size1, activation='relu')(input_enc)
hidden_11 = Dense(hidden_size2, activation='relu')(hidden_1)
code = Dense(code_size, activation='relu')(hidden_11)
hidden_22 = Dense(hidden_size2, activation='relu')(code)
hidden_2 = Dense(hidden_size1, activation='relu')(hidden_22)
output_enc = Dense(input_size, activation='tanh')(hidden_2)

autoencoder_yes = Model(input_enc, output_enc)
autoencoder_yes.compile(optimizer='adam',
                        loss='mean_squared_error',
                        metrics=['accuracy'])
history_yes = autoencoder_yes.fit(df_noyau_norm_y, df_noyau_norm_y,
                                  epochs=200,
                                  batch_size=batch_size,
                                  shuffle=True,
                                  validation_data=(df_test_norm_y, df_test_norm_y),
                                  verbose=1,
                                  callbacks=[checkpointer, tensorboard]).history
autoencoder_yes.save_weights("weights.best.h5")
autoencoder_yes.load_weights("weights.best.h5")

encoder_yes = Model(inputs=input_enc, outputs=code)

encoded_input = Input(shape=(code_size, ))
encoded_data_yes = encoder_yes.predict(df_noyau_norm_y)
print(encoded_data_yes.tolist())

X_YES = sum(encoded_data_yes) / 7412
print(X_YES)
Can anybody help me find out the reason and how to resolve this issue?
Thanks
