I'm trying to train a TF estimator using the RNNEstimator() class, but I'm having trouble defining the estimator. My goal is the following:
Create a tf.data.Dataset.
Feed it into the RNN estimator.
The first part seems to be working correctly. I define the parse function and the input function as follows:
def _parse_func(record):
    # Takes a tf record as input and returns the following tensors:
    # numeric_tensor.shape = (5, 170) and y.shape = ()
    return {'numerical': numeric_tensor}, y

def input_fn(filenames=['data.tfrecord']):
    # Returns the parsed tf record, i.e. the tf.data.Dataset
    dataset = tf.data.TFRecordDataset(filenames=filenames)
    dataset = dataset.map(map_func=_parse_func)
    dataset = dataset.repeat()
    dataset = dataset.batch(batch_size=BATCH_SIZE)
    return dataset
Now let's move on to the meaty part.
Estimators take care of creating the session and graph. So I simply create the estimator in the following format:
# create the column
column = tf.contrib.feature_column.sequence_numeric_column('numerical')

# create the estimator
estimator = RNNEstimator(
    head=tf.contrib.estimator.regression_head(),
    sequence_feature_columns=[column],
    num_units=[32, 16],
    cell_type='lstm')

# train the estimator
estimator.train(input_fn=input_fn, steps=100)
However, this doesn't work. It gives me a variety of errors! In particular, at the moment I get:
TypeError: Input must be a SparseTensor.
Additionally, I seem to be unable to change the loss to log-loss. I tried setting it by passing it to the head parameter using:
head = tf.contrib.estimator.regression_head(loss_fn=tf.losses.log_loss)
I'm trying to train a pre-trained model in Python 3.8 with Keras 2.3.1 and Tensorflow 2.2.3. Since my dataset is very large, I have to use a data generator. I want to assign sample weights to each sample, to make certain samples more important than others. I've already defined the weights that I want to assign to each sample, but I'm looking for a way to implement them in my training. Here's my custom data generator with the code that I've tried so far:
def __iter__(self):
    batch_token_ids, batch_segment_ids = [], []
    sample_weights = np.ones(self.batch_size)
    for file in dataset:
        for sample in file:
            # Read the input, output, and sample weight from the dataset.
            inputs, outputs, weight = get_data(sample)
            # Encode the input and output using a tokenizer.
            token_ids, segment_ids = tokenizer.encode(inputs, outputs, maxlen=maxlen)
            batch_token_ids.append(token_ids)
            batch_segment_ids.append(segment_ids)
            # Store the weight at the matching position in sample_weights.
            sample_weights[len(batch_token_ids) - 1] = weight
            # Yield after collecting batch_size samples.
            if len(batch_token_ids) == self.batch_size:
                """
                The input format that my model supports is the token and segment ids in x,
                with None as the value for y. Changing it raises an error.
                """
                yield [batch_token_ids, batch_segment_ids], None, sample_weights
                batch_token_ids, batch_segment_ids = [], []
                sample_weights = np.ones(self.batch_size)
According to the Keras documentation, I'm supposed to pass the sample weights as the third value that the generator returns. However, I noticed that nothing changed in the training after implementing this. After some debugging into what happens to the sample weights after they are yielded, I noticed that they never get used, because of this code in training.py of the Keras engine (line 655):
if y is not None:
    # Long code to process the inputs and sample weights.
    # Since I don't have a y input, this doesn't get run.
    ...
else:
    y = []
    sample_weights = []
Is there a way to implement sample weights for my code without changing the input format? Also, if I'm using a custom loss function, would I need to change that as well for the sample weights to take effect?
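For reference, the kind of weight-aware custom loss I have in mind looks roughly like this (a sketch only: it closes over a weight tensor that would have to come in as an extra model input, which my current input format doesn't allow):

from keras import backend as K

def make_weighted_loss(weight_tensor):
    # weight_tensor: a (batch_size,) float tensor supplied as an extra model input.
    def loss(y_true, y_pred):
        per_token = K.sparse_categorical_crossentropy(y_true, y_pred)  # per-token loss, shape (batch, seq)
        per_sample = K.mean(per_token, axis=-1)                        # collapse the sequence dimension
        return K.mean(per_sample * weight_tensor)                      # scale each sample by its weight
    return loss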
I'm using Keras with a Tensorflow backend for building a model for this problem: https://www.kaggle.com/cfpb/us-consumer-finance-complaints (just practicing).
I train my Keras model using the tf.data.Dataset API. Now, I have a Pandas DataFrame, df_testing, whose columns are complaint (strings) and label (also strings). I want to predict on these new samples. I create a tf.data.Dataset object, perform preprocessing, make an Iterator, and call predict on my model:
data = df_testing["complaint"].values
labels = df_testing["label"].values
dataset = tf.data.Dataset.from_tensor_slices((data))
dataset = dataset.map(lambda x: ({'reviews': x}))
dataset = dataset.batch(self.batch_size).repeat()
dataset = dataset.map(lambda x: self.preprocess_text(x, self.data_table))
dataset = dataset.map(lambda x: x['reviews'])
dataset = dataset.make_initializable_iterator()
My training used a tf.data.Dataset where each element was of the form ({'reviews': "movie was great"}, "positive") so I'm mimicking that here for prediction. Also, my preprocessing just turns my string into a Tensor of integers.
When I call:
preds = model.predict(dataset)
the call fails with:
ValueError: When using iterators as input to a model, you should specify the `steps` argument.
So I modify this call to be:
preds = model.predict(dataset, steps=3)
But now I get back:
ValueError: Please provide data as a list or tuple of 2 elements - input and target pair. Received Tensor("IteratorGetNext_2:0", shape=(?, 100), dtype=int32)
What am I doing incorrectly here? I shouldn't have to provide a tuple of 2 elements when predicting (I shouldn't need the label).
Thanks for any help you can offer!
What version of Keras are you on? I cannot find that specific error message in the code base, but I think I found where it used to be.
Here's the error in a version of the code that I think is close to the version you're running: commit
And here's the updated version of that error: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/keras/engine/training_eager.py#L464
The conditions of the input validation have changed (in the newest version your input would be accepted), but what's relevant is that the error message is much clearer:
raise ValueError(
    'Please provide data as a list or tuple of 1, 2, or 3 elements '
    ' - `(input)`, or `(input, target)`, or `(input, target,'
    'sample_weights)`. Received %s. We do not use the `target` or'
    '`sample_weights` value here.' % inputs.output_shapes)
The target value is never used in the predict function, and so can be anything. Looking at the rest of the function, next_element[1] is never used.
[TLDR] Using your current version, add a dummy target value to the data, or update your Keras.
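For the first option, here is a minimal sketch of the dummy-target route, assuming the extra map is added to your pipeline just before make_initializable_iterator() (the constant 0 is arbitrary; predict never reads it):

# Pair each element with a throwaway target so the "input and target pair" check passes.
dataset = dataset.map(lambda x: (x, tf.constant(0)))

With that in place, model.predict(dataset, steps=3) should get past the validation.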
The following code worked for me (tested on tensorflow 1.10.0):
[TLDR] Just pass an empty dictionary as a dummy input and specify the number of steps:
model.predict(x={}, steps=4)
Full code:
import numpy as np
import tensorflow as tf
from tensorflow.data import Dataset
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model
# dummy data:
x = np.arange(4).reshape(-1, 1).astype('float32')
y = np.arange(5, 9).reshape(-1, 1).astype('float32')
# build the Datasets
ds_x = Dataset.from_tensor_slices(x).repeat().batch(4)
it_x = ds_x.make_one_shot_iterator()
ds_y = Dataset.from_tensor_slices(y).repeat().batch(4)
it_y = ds_y.make_one_shot_iterator()
# build compile and train the model
input_vals = Input(tensor=it_x.get_next())
output = Dense(1, activation='relu')(input_vals)
model = Model(inputs=input_vals, outputs=output)
model.compile('rmsprop', 'mse', target_tensors=[it_y.get_next()])
model.fit(steps_per_epoch=1, epochs=5, verbose=2)
# infer using the dataset
model.predict(x={}, steps=4)
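This works because the model's input layer is wired directly to it_x.get_next() when the model is built, so at predict time there is nothing left to feed: the empty dict just satisfies the API, and steps tells Keras how many batches to pull from the iterator.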
I've recently started to play with tensorflow and, more specifically, with the new dataset API.
I've successfully used a dataset to feed training data to my simple model by plugging the dataset's iterators into the nodes of my graph that represent the input and label. Something like:
input = input_dataset.make_one_shot_iterator().get_next()
label = label_dataset.make_one_shot_iterator().get_next()
Now I'm wondering what to do when I have to run inference on user input, that is, the user gives me a single input value and I have to make a prediction. If I had a placeholder I would just put the user input in a feed_dict, but with the dataset API I have very little idea how to do something similar. Shall I have a separate graph only for inference in which my input variable is a placeholder?
I've already tried to make a feedable iterator as described here, but that only works with a placeholder for strings, while my inputs are int32.
Thanks for any advice.
For that specific purpose, tensorflow provides the tf.placeholder_with_default API:
# Create a Dataset
dataset = tf.data.Dataset.zip((input_dataset, label_dataset)).batch(32).repeat(...)

# Create an Iterator and pull its next element
input, label = dataset.make_one_shot_iterator().get_next()

# Create placeholders that default to the iterator's output when not fed
x = tf.placeholder_with_default(input, shape=[...], name='input')
y = tf.placeholder_with_default(label, shape=[...], name='label')

def nn_model(features, labels):
    logits = ...
    loss = tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=logits))
    optimizer = tf.train.AdamOptimizer(learning_rate=0.01).minimize(loss)
    return optimizer, loss, logits

# Create the Model
train_op, loss_op, logits_op = nn_model(x, y)

sess = tf.Session()

# Training: the placeholders fall back to the next Dataset element
sess.run(train_op)

# Inference: feed the user-supplied values instead
sess.run(logits_op, feed_dict={x: ..., y: ...})
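The point of tf.placeholder_with_default is that anything you don't feed falls back to the next element of the Dataset, so the same graph serves both the training loop (no feed_dict) and one-off user inputs (values passed through feed_dict), without a separate inference graph.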
I am using the tf.estimator API to predict punctuation. I trained it with pre-processed data using TFRecords and tf.train.shuffle_batch. Now I want to make predictions. I can do this fine by feeding static NumPy data into a tf.constant and returning it from the input_fn.
However, I am working with sequence data: I need to feed one example at a time, and the next input depends on the previous output. I also want to be able to process input arriving through HTTP requests.
Every time estimator.predict is called, it reloads the checkpoint and recreates the entire graph. This is slow and expensive, so I need a way to feed data to the input_fn dynamically.
My current attempt is roughly this:
feature_input = tf.placeholder(tf.int32, shape=[1, MAX_SUBSEQUENCE_LEN])
q = tf.FIFOQueue(1, tf.int32, shapes=[[1, MAX_SUBSEQUENCE_LEN]])
enqueue_op = q.enqueue(feature_input)

def input_fn():
    return q.dequeue()

estimator = tf.estimator.Estimator(model_fn, model_dir=model_file)
predictor = estimator.predict(input_fn=input_fn)
sess = tf.Session()

output = None
while True:
    x = get_numpy_data(x, output)
    if x is None:
        break
    sess.run(enqueue_op, {feature_input: x})
    output = predictor.next()
    save_to_file(output)

sess.close()
However, I am getting the following error:
ValueError: Input graph and Layer graph are not the same: Tensor("EmbedSequence/embedding_lookup:0", shape=(1, 200, 128), dtype=float32) is not from the passed-in graph.
How can I asynchronously plug data into my existing graph through an input_fn to get predictions one at a time?
It turns out the main problem is that all tensors need to be created inside the input_fn, or they don't get added to the same graph. I needed to run an enqueue operation, but it was impossible to access anything returned from the input function.
I ended up inheriting the Estimator class and creating a custom predict function which allows me to dynamically add data to the prediction queue and return the results:
# async_estimator.py

import six
import tensorflow as tf
from tensorflow.python.estimator.estimator import Estimator
from tensorflow.python.estimator.estimator import _check_hooks_type
from tensorflow.python.estimator import model_fn as model_fn_lib
from tensorflow.python.framework import ops
from tensorflow.python.framework import random_seed
from tensorflow.python.training import saver
from tensorflow.python.training import training


class AsyncEstimator(Estimator):

    def async_predictor(self,
                        dtype,
                        shape=None,
                        predict_keys=None,
                        hooks=None,
                        checkpoint_path=None):
        """Returns a tuple of functions: the first runs predictions on the model, the second cleans up.

        Args:
          dtype: the dtype of the input
          shape: the shape of the input placeholder (optional)
          predict_keys: list of `str`, name of the keys to predict. It is used if
            the `EstimatorSpec.predictions` is a `dict`. If `predict_keys` is used
            then rest of the predictions will be filtered from the dictionary. If
            `None`, returns all.
          hooks: List of `SessionRunHook` subclass instances. Used for callbacks
            inside the prediction call.
          checkpoint_path: Path of a specific checkpoint to predict. If `None`, the
            latest checkpoint in `model_dir` is used.

        Returns:
          (predict, finish): tuple of functions
            predict: runs a single prediction and returns the results
              Args:
                x: NumPy array of input
              Returns:
                Evaluated value of the prediction
            finish: closes the session, allowing the program to exit

        Raises:
          ValueError: Could not find a trained model in model_dir.
          ValueError: if batch length of predictions are not same.
          ValueError: If there is a conflict between `predict_keys` and
            `predictions`. For example if `predict_keys` is not `None` but
            `EstimatorSpec.predictions` is not a `dict`.
        """
        hooks = _check_hooks_type(hooks)
        # Check that model has been trained.
        if not checkpoint_path:
            checkpoint_path = saver.latest_checkpoint(self._model_dir)
        if not checkpoint_path:
            raise ValueError('Could not find trained model in model_dir: {}.'.format(
                self._model_dir))

        with ops.Graph().as_default() as g:
            random_seed.set_random_seed(self._config.tf_random_seed)
            training.create_global_step(g)
            # A one-slot queue lets the caller push inputs into the graph dynamically.
            input_placeholder = tf.placeholder(dtype=dtype, shape=shape)
            queue = tf.FIFOQueue(1, dtype, shapes=shape)
            enqueue_op = queue.enqueue(input_placeholder)
            features = queue.dequeue()
            estimator_spec = self._call_model_fn(features, None,
                                                 model_fn_lib.ModeKeys.PREDICT)
            predictions = self._extract_keys(estimator_spec.predictions, predict_keys)
            # The checkpoint is loaded once here, not on every prediction.
            mon_sess = training.MonitoredSession(
                session_creator=training.ChiefSessionCreator(
                    checkpoint_filename_with_path=checkpoint_path,
                    scaffold=estimator_spec.scaffold,
                    config=self._session_config),
                hooks=hooks)

            def predict(x):
                if mon_sess.should_stop():
                    raise StopIteration
                mon_sess.run(enqueue_op, {input_placeholder: x})
                preds_evaluated = mon_sess.run(predictions)
                if not isinstance(predictions, dict):
                    return preds_evaluated
                else:
                    preds = []
                    for i in range(self._extract_batch_length(preds_evaluated)):
                        preds.append({
                            key: value[i]
                            for key, value in six.iteritems(preds_evaluated)
                        })
                    return preds

            def finish():
                mon_sess.close()

            return predict, finish
And here is the rough code to use it:
import tensorflow as tf
from async_estimator import AsyncEstimator


def doPrediction(model_fn, model_dir, max_seq_length):
    estimator = AsyncEstimator(model_fn, model_dir=model_dir)
    predict, finish = estimator.async_predictor(dtype=tf.int32, shape=(1, max_seq_length))

    output = None
    while True:
        # my input is dependent on the previous output
        x = get_numpy_data(output)
        if x is None:
            break
        output = predict(x)
        save_to_disk(output)

    finish()
Note: this is a simple solution that works for my needs; it may need to be modified for other cases. It is working on TensorFlow 1.2.1.
Hopefully TF will officially adopt something like this to make serving dynamic predictions with Estimator easier.
I am trying to train a simple binary logistic regression classifier using Tensorflow (version 0.9.0) in a very similar way to the beginner's tutorial and am encountering the following error when fitting the model:
ValueError: Tensor("centered_bias_weight:0", shape=(1,), dtype=float32_ref) must be from the same graph as Tensor("linear_14/BiasAdd:0", shape=(?, 1), dtype=float32).
Here is my code:
import tempfile
import tensorflow as tf
import pandas as pd

# Customized training data parsing
train_data = read_train_data()
feature_names = get_feature_names(train_data)
labels = get_labels(train_data)

# Construct a dataframe from the training data features
x_train = pd.DataFrame(train_data, columns=feature_names)
x_train["label"] = labels

y_train = tf.constant(labels)

# Create a SparseColumn for each feature (assume all feature values are integers and either 0 or 1)
feature_cols = [tf.contrib.layers.sparse_column_with_integerized_feature(f, 2) for f in feature_names]

# Create a SparseTensor for each feature based on the data
categorical_cols = {f: tf.SparseTensor(indices=[[i, 0] for i in range(x_train[f].size)],
                                       values=x_train[f].values,
                                       shape=[x_train[f].size, 1]) for f in feature_names}

# Initialize the logistic regression model
model_dir = tempfile.mkdtemp()
model = tf.contrib.learn.LinearClassifier(feature_columns=feature_cols, model_dir=model_dir)

def eval_input_fun():
    return categorical_cols, y_train

# Fit the model - similarly to the tutorial
model.fit(input_fn=eval_input_fun, steps=200)
I feel like I'm missing something critical... maybe something that was assumed in the tutorial but wasn't explicitly mentioned?
Also, I get the following warning every time I call fit():
WARNING:tensorflow:create_partitioned_variables is deprecated. Use tf.get_variable with a partitioner set, or tf.get_partitioned_variable_list, instead.
When you execute model.fit, the LinearClassifier creates a separate tf.Graph based on the Ops contained in your eval_input_fun function. But categorical_cols and y_train were defined globally, in the default graph, so the Ops behind them don't belong to the Graph the classifier builds - hence the "must be from the same graph" error.
Solution: move all the Ops definitions (and their dependencies) inside eval_input_fun.
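A minimal sketch of that change, reusing the objects from your snippet (x_train, feature_names and labels stay plain Python/Pandas data; only the tensor construction moves):

def eval_input_fun():
    # Build every tensor inside the input_fn so it is created in the Graph
    # that LinearClassifier sets up during model.fit.
    categorical_cols = {f: tf.SparseTensor(indices=[[i, 0] for i in range(x_train[f].size)],
                                           values=x_train[f].values,
                                           shape=[x_train[f].size, 1]) for f in feature_names}
    y_train = tf.constant(labels)
    return categorical_cols, y_train

model.fit(input_fn=eval_input_fun, steps=200)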