Minimal DNNRegressor example with TensorFlow - python

I'm new to Python and TensorFlow and I'm trying to build a simple working example with fake data in TensorFlow. My goal is to use the DNNRegressor estimator to predict a real value from a multidimensional input. This is the code I wrote:
import pandas as pd
import tensorflow as tf
import numpy as np

# Number of training samples
m_train = 1000
# Number of test samples
m_test = 100
# Dimensions of each sample
n = 10

def from_dataset(ds):
    return lambda: ds.make_one_shot_iterator().get_next()

# Create random samples with numpy
train_data = (np.random.sample((m_train, n)), np.random.sample((m_train, 1)))
test_data = (np.random.sample((m_test, n)), np.random.sample((m_test, 1)))

# Create two datasets, one for training and the other for testing
train_dataset = tf.data.Dataset.from_tensor_slices(train_data)
test_dataset = tf.data.Dataset.from_tensor_slices(test_data)

feature_columns = [tf.contrib.layers.real_valued_column("", dimension=n)]
model = tf.estimator.DNNRegressor(hidden_units=[20, 20], feature_columns=feature_columns)

# Train the model
model.train(input_fn=from_dataset(train_dataset), steps=1000)
# Evaluate the unseen samples
eval_result = model.evaluate(input_fn=from_dataset(test_dataset))
And this is the error I get:
$ python fake.py
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmp1j5irF
Traceback (most recent call last):
File "fake.py", line 28, in <module>
model.train(input_fn=from_dataset(train_dataset), steps=1000)
File "/usr/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 314, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/usr/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 743, in _train_model
features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
File "/usr/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 725, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/usr/lib/python2.7/site-packages/tensorflow/python/estimator/canned/dnn.py", line 448, in _model_fn
config=config)
File "/usr/lib/python2.7/site-packages/tensorflow/python/estimator/canned/dnn.py", line 153, in _dnn_model_fn
'Given type: {}'.format(type(features)))
ValueError: features should be a dictionary of `Tensor`s. Given type: <class 'tensorflow.python.framework.ops.Tensor'>
I suppose I have to use a dictionary of Tensors, but I'm just beginning with Python and I don't know how to do it.

You need your input_fn to yield the features as a dictionary of `Tensor`s (keyed by feature-column name) rather than a single `Tensor`. Check out https://github.com/tensorflow/tensorflow/blob/r1.8/tensorflow/examples/get_started/regression/dnn_regression.py
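For reference, here is a minimal sketch (untested, TensorFlow 1.x style, using a hypothetical feature name "x") of an input_fn that wraps the features in a dictionary and a matching feature column, along the lines of the linked example:

def input_fn(data, batch_size=32):
    # data is a (features, labels) pair of numpy arrays, as built above
    xs, ys = data
    def _fn():
        # Keying the features by "x" makes the Estimator receive a dict of Tensors
        ds = tf.data.Dataset.from_tensor_slices(({"x": xs}, ys))
        ds = ds.batch(batch_size).repeat()
        return ds.make_one_shot_iterator().get_next()
    return _fn

feature_columns = [tf.feature_column.numeric_column("x", shape=[n])]
model = tf.estimator.DNNRegressor(hidden_units=[20, 20], feature_columns=feature_columns)
model.train(input_fn=input_fn(train_data), steps=1000)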

Related

Problem with layer dimensions when using Keras sequence generator and fit_generator

I am working on a visual question answering task.
The model takes as input image features (which I have saved in an h5py file) and question tokens (which I have pickled), and the outputs are the answers (the whole answer is considered a target, so 3129 answers of one word or more, and 3129 labels).
I am using the Keras Sequence utility to create the generator.
I am getting a dimension error in the output layer when the model is training, and when I change the __len__ function, the training process breaks down depending on its value.
I have copied the __getitem__ function of my generator and also a sample of my model below.
Do I need to change my generator configuration or my model?
Epoch 1/1
Traceback (most recent call last):
File "<ipython-input-45-e55a5853e499>", line 32, in <module>
validation_data=valid_generator)
File "C:\python\envs\tf2-keras\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "C:\python\envs\tf2-keras\lib\site-packages\keras\engine\training.py", line 1732, in fit_generator
initial_epoch=initial_epoch)
File "C:\python\envs\tf2-keras\lib\site-packages\keras\engine\training_generator.py", line 220, in fit_generator
reset_metrics=False)
File "C:\python\envs\tf2-keras\lib\site-packages\keras\engine\training.py", line 1508, in train_on_batch
class_weight=class_weight)
File "C:\python\envs\tf2-keras\lib\site-packages\keras\engine\training.py", line 621, in _standardize_user_data
exception_prefix='target')
File "C:\python\envs\tf2-keras\lib\site-packages\keras\engine\training_utils.py", line 145, in standardize_input_data
str(data_shape))
ValueError: Error when checking target: expected output to have shape (3129,) but got array with shape (1,)
def __len__(self):
    'Denotes the number of batches per epoch'
    # return int(np.floor(len(self.list_IDs) / self.batch_size))
    return 512*866

The __getitem__ of my generator looks like this:

def __getitem__(self, index):
    'Generate one batch of data'
    imfeatures = np.empty((self.batch_size, 2048))
    question_tokens = np.empty((self.batch_size, 14))
    answers = np.empty((self.batch_size, 3129))
    # Generate indexes of the batch
    indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
    # self.T.append(indexes)
    list_IDs_temp = [self.list_IDs[k] for k in indexes]
    # Generate data
    for i, k in enumerate(list_IDs_temp):
        temp = self.Features['image_features'][k]
        imfeatures[i,] = temp[0, :]
        question_tokens[i,] = self.Questions[indexes[i]]
        answers = self.Answer[indexes[i]]
    return [imfeatures, question_tokens], answers
# This is where I instantiate the generators
# train_features is h5py file
# entries is where questions, answers, and ids are saved
batch_size = 512
train_generator = DataGenerator(entries['train'].images,
                                train_fetures,
                                entries['train'].q_token,
                                entries['train'].target,
                                batch_size=batch_size,
                                shuffle=False)
valid_generator = DataGenerator(entries['val'].images,
                                valid_features,
                                entries['val'].q_token,
                                entries['val'].target,
                                batch_size=batch_size,
                                shuffle=False)
# And this is what my model looks like:
ImInput = Input(shape=(2048,), name='image_input')
QInput = Input(shape=(14,), name='question')
# some dense layers and dropouts
# Then the layers are merged
M = Multiply()[ImInput, QInput]
# Some dense layers and dropouts
output = Dense(3129, activation='softmax', name='output')(M)
model = Model([ImInput, QInput], output)
model.compile(optimizer='RMSprop', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(train_generator,
                    epochs=1,
                    verbose=1,
                    validation_data=valid_generator)

Errors appear even though I almost completely copied and pasted the Google tutorial code

Below are the complete errors, and I am confused about what they are trying to say. I was doing a tutorial from Google, and I almost completely copied the code from the tutorial but replaced its dataset with my own, and these errors occur.
Traceback (most recent call last):
File "C:\Users\julia\Anaconda\envs\myenv\lib\site-packages\pandas\core\indexes\base.py", line 2897, in get_loc
return self._engine.get_loc(key)
File "pandas\_libs\index.pyx", line 107, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 131, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\hashtable_class_helper.pxi", line 1607, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas\_libs\hashtable_class_helper.pxi", line 1614, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'Objective 1'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Users/julia/Anaconda/envs/myenv/Mycode.py", line 333, in <module>
validation_targets=validation_targets)
File "C:/Users/julia/Anaconda/envs/myenv/Mycode.py", line 288, in train_nn_regression_model
steps=steps_per_period
File "C:\Users\julia\Anaconda\envs\myenv\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 367, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "C:\Users\julia\Anaconda\envs\myenv\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1158, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "C:\Users\julia\Anaconda\envs\myenv\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1185, in _train_model_default
input_fn, ModeKeys.TRAIN))
File "C:\Users\julia\Anaconda\envs\myenv\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1022, in _get_features_and_labels_from_input_fn
self._call_input_fn(input_fn, mode))
File "C:\Users\julia\Anaconda\envs\myenv\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1113, in _call_input_fn
return input_fn(**kwargs)
File "C:/Users/julia/Anaconda/envs/myenv/Mycode.py", line 268, in <lambda>
training_targets["Objective 1"],
File "C:\Users\julia\Anaconda\envs\myenv\lib\site-packages\pandas\core\frame.py", line 2980, in __getitem__
indexer = self.columns.get_loc(key)
File "C:\Users\julia\Anaconda\envs\myenv\lib\site-packages\pandas\core\indexes\base.py", line 2899, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas\_libs\index.pyx", line 107, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 131, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\hashtable_class_helper.pxi", line 1607, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas\_libs\hashtable_class_helper.pxi", line 1614, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'Objective 1'
I am a machine learning beginner, and when I use Python and follow the code from the Google machine learning tutorial, some errors come up and I am not sure what is going on.
# Step 1 - Set up and import necessary packages
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
# Step 2 - Load our data
zerlite_13X_error = pd.read_csv("zerlite_13x_error.csv", sep=",")
# print(zerlite_13X_error.head()) # Load data done
# We will randomize the data, just to be sure not to get any pathological ordering effects
# that might harm the performance of Stochastic Gradient Descent. We first consider Objective 1
zerlite_13X_error = zerlite_13X_error.reindex(
    np.random.permutation(zerlite_13X_error.index))
# Define features and configure columns
# Define features, which are Parameters 1 to 8
def preprocess_features(zerlite_13X_error):
    """Prepares input features from zerlite_13X_error.
    Args:
        zerlite_13X_error: A pandas DataFrame expected to contain the data
    Returns:
        A DataFrame that contains the features to be used for the model,
        including synthetic features
    """
    selected_features = zerlite_13X_error[
        ["Parameter 1",
         "Parameter 2",
         "Parameter 3",
         "Parameter 4",
         "Parameter 5",
         "Parameter 6",
         "Parameter 7",
         "Parameter 8"]]
    processed_features = selected_features.copy()
    # print(processed_features.head())
    return processed_features

def preprocess_targets(zerlite_13X_error):
    """Prepares target features (i.e. labels) from the zerlite_13X_error set.
    Args:
        zerlite_13X_error: A pandas DataFrame expected to contain data from
            the zerlite_13X_error data set
    Returns:
        A DataFrame that contains the target feature
    """
    output_targets = pd.DataFrame()
    # Create the output targets
    output_targets["Objective 1"] = zerlite_13X_error["Objective 1"]
    print(output_targets.head())
    return output_targets
# For the training set, we will choose 14000 out of 20154 examples, about 70% of the data
training_examples = preprocess_features(zerlite_13X_error.head(14000))
training_examples.describe()
print('-- Training Examples Describe --')
print(training_examples.describe())
training_targets = preprocess_targets(zerlite_13X_error.head(14000))
training_targets.describe()
print('-- Training Targets Describe --')
print(training_targets.describe())
# For Validation Set, we will choose 3000 examples, out of total 20154 examples
validation_examples = preprocess_features(zerlite_13X_error.iloc[14001:17001])
validation_examples.describe()
print('-- Validation Examples Describe --')
print(validation_examples.describe())
validation_targets = preprocess_targets(zerlite_13X_error.iloc[14001:17001])
validation_targets.describe()
print('-- Validation Targets Describe --')
print(validation_targets.describe())
# for Test Set, we will choose the last 3154 examples
test_examples = preprocess_features((zerlite_13X_error.tail(3154)))
test_examples.describe()
print('-- Test Examples Describe --')
print(test_examples.describe())
test_targets = preprocess_targets(zerlite_13X_error.tail(3154))
test_targets.describe()
print('-- Test Targets Describe --')
print(test_targets.describe())
# As we are now working with multiple features, modularize the code for configuring columns into a
# separate function
def construct_feature_columns(input_features):
    """Construct the TensorFlow feature columns.
    Args:
        input_features: The names of the numerical input features to use
    Returns:
        A set of feature columns
    """
    return set([tf.feature_column.numeric_column(my_feature)
                for my_feature in input_features])
# Train and evaluate the model
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
    """Trains a linear regression model of multiple features.
    Args:
        features: pandas DataFrame of features
        targets: pandas DataFrame of targets
        batch_size: Size of batches to be passed to the model
        shuffle: True or False. Whether to shuffle the data
        num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
    Returns:
        Tuple of (features, labels) for the next data batch
    """
    # Convert pandas data into a dict of np arrays
    features = {key: np.array(value) for key, value in dict(features).items()}
    # Construct a dataset, and configure batching/repeating
    ds = Dataset.from_tensor_slices((features, targets))  # Warning: 2GB limit
    ds = ds.batch(batch_size).repeat(num_epochs)
    # Shuffle the data, if specified
    if shuffle:
        ds = ds.shuffle(10000)
    # Return the next batch of data
    features, labels = ds.make_one_shot_iterator().get_next()
    return features, labels
# Now we will create and train a model using a neural network
def train_nn_regression_model(learning_rate, steps, batch_size, hidden_units,
                              training_examples, training_targets,
                              validation_examples, validation_targets):
    """Trains a neural network regression model of multiple features.
    In addition to training, this function also prints training progress information,
    as well as a plot of the training and validation loss over time.
    Args:
        learning_rate: A 'float', the learning rate
        steps: A non-zero 'int', the total number of training steps. A training step
            consists of a forward and backward pass using a single batch.
        batch_size: A non-zero 'int', the batch size.
        hidden_units: A 'list' of int values, specifying the number of neurons in each layer
        training_examples: A 'DataFrame' containing one or more columns from
            'zerlite_13X_error' to use as input features for training
        training_targets: A 'DataFrame' containing exactly one column from
            'zerlite_13X_error' to use as target for training
        validation_examples: A 'DataFrame' containing one or more columns from
            'zerlite_13X_error' to use as input features for validation
        validation_targets: A 'DataFrame' containing exactly one column from
            'zerlite_13X_error' to use as target for validation
    Returns:
        A 'DNNRegressor' object trained on the training data.
    """
    periods = 10
    steps_per_period = steps / periods
    # Create a DNNRegressor object
    my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
    my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
    dnn_regressor = tf.estimator.DNNRegressor(
        feature_columns=construct_feature_columns(training_examples),
        hidden_units=hidden_units,
        optimizer=my_optimizer,
    )
    # Create input functions.
    training_input_fn = lambda: my_input_fn(training_examples,
                                            training_targets["Objective 1"],
                                            batch_size=batch_size)
    predict_training_input_fn = lambda: my_input_fn(training_examples,
                                                    training_targets["Objective 1"],
                                                    num_epochs=1,
                                                    shuffle=False)
    predict_validation_input_fn = lambda: my_input_fn(validation_examples,
                                                      validation_targets["Objective 1"],
                                                      num_epochs=1,
                                                      shuffle=False)
    # Train the model, but do so inside a loop so that we can periodically assess loss metrics
    print("Training Models ............")
    print("RMSE (on training data): ")
    training_rmse = []
    validation_rmse = []
    for period in range(0, periods):  # Python shows the error occurring here
        # Train the model, starting from the prior state
        dnn_regressor.train(
            input_fn=training_input_fn,
            steps=steps_per_period)
        # Take a break and compute predictions
        training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
        training_predictions = np.array([item['predictions'][0] for item in training_predictions])
        validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
        validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
        # Compute training and validation loss
        training_root_mean_squared_error = math.sqrt(
            metrics.mean_squared_error(training_predictions, training_targets))
        validation_root_mean_squared_error = math.sqrt(
            metrics.mean_squared_error(validation_predictions, validation_targets))
        # Occasionally print the current loss
        print("  period %02d: %02f" % (period, training_root_mean_squared_error))
        # Add the loss metrics from this period to our list
        training_rmse.append(training_root_mean_squared_error)
        validation_rmse.append(validation_root_mean_squared_error)
    print("Model training finished")
    # Output a graph of loss metrics over periods
    plt.ylabel("RMSE")
    plt.xlabel("Periods")
    plt.title("Root Mean Squared Error vs. Periods")
    plt.tight_layout()
    plt.plot(training_rmse, label="training")
    plt.plot(validation_rmse, label="validation")
    plt.legend()
    print("Final RMSE (on training data):   %0.2f" % training_root_mean_squared_error)
    print("Final RMSE (on validation data): %0.2f" % validation_root_mean_squared_error)
    return dnn_regressor

# Train NN model
dnn_regressor = train_nn_regression_model(
    learning_rate=0.1,
    steps=5000,
    batch_size=10,
    hidden_units=[10, 2],
    training_examples=training_examples,
    training_targets=training_targets,
    validation_examples=validation_examples,
    validation_targets=validation_targets)  # Python shows the error here

How to fix "TypeError: The added layer must be an instance of class Layer." in Python

I have written a little hello-world kind of neural network. The problem is that I constantly get this error:
"Traceback (most recent call last):
File "C:/Users/Pigeonnn/PycharmProjects/Noss/Network.py", line 21, in <module>
model.add(keras.layers.InputLayer(input_shape))
File "C:\Users\Pigeonnn\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\training\checkpointable\base.py", line 442, in _method_wrapper
method(self, *args, **kwargs)
File "C:\Users\Pigeonnn\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\keras\engine\sequential.py", line 145, in add
'Found: ' + str(layer))
TypeError: The added layer must be an instance of class Layer. Found: <keras.engine.input_layer.InputLayer object at 0x0000015EDB394DA0>"
Here's my code:
import keras
import numpy as np
from sklearn.model_selection import train_test_split
import pandas as pd
from sklearn.utils import shuffle
import tensorflow as tf
seed = 10
np.random.seed(seed)
dataset = np.loadtxt("dataset2.csv",delimiter=',',skiprows=1)
dataset = shuffle(dataset)
X = dataset[:,2:]
Y = dataset[:,1]
(X_train,X_test,Y_train,Y_test) = train_test_split(X, Y, test_size=0.15, random_state=seed)
input_shape = (13,)
model = tf.keras.models.Sequential()
model.add(keras.layers.InputLayer(input_shape))
model.add(keras.layers.core.Dense(128, activation='relu'))
model.add(keras.layers.core.Dense(128, activation='relu'))
model.add(keras.layers.core.Dense(4, activation='sigmoid'))
model.compile(optimizer='adam',loss='sparse_categorical_crossentropy',metrics=['accuracy'])
model.fit(X_train,Y_train,epochs=20)
EDIT: After some tweaks (changing the loss function, removing the tf model), I have another error, this time it's:
Traceback (most recent call last):
File "C:/Users/Pigeonnn/PycharmProjects/Noss/Network.py", line 28, in
model.fit(X_train,Y_train,epochs=20)
File "C:\Users\Pigeonnn\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\engine\training.py", line 952, in fit
batch_size=batch_size)
File "C:\Users\Pigeonnn\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\engine\training.py", line 789, in _standardize_user_data
exception_prefix='target')
File "C:\Users\Pigeonnn\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\engine\training_utils.py", line 138, in standardize_input_data
str(data_shape))
ValueError: Error when checking target: expected dense_3 to have shape (4,) but got array with shape (1,)
You are using both tf.keras and keras modules, which are not compatible. Use only one and be consistent.
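For example, a sketch of the same model written against tf.keras only (untested, same architecture as above, assuming a recent TensorFlow install) would look like this:

import tensorflow as tf

# Every layer comes from tf.keras, so the Sequential model and the layers
# belong to the same package and can be mixed safely.
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape=(13,)))
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dense(4, activation='sigmoid'))
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X_train, Y_train, epochs=20)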

CNTK python API: How to get predictions from the trained model?

I have a trained model which I am loading using the CNTK.load_model() function. I was looking at the MNIST tutorial in the CNTK git repo as a reference for model evaluation code. I have created a data reader (which is a MinibatchSource object) and am trying to run model.eval(mb), where mb = minibatch_source.next_minibatch(...) (similar to this answer).
But, I'm getting the following error message
Traceback (most recent call last):
File "LID_test.py", line 162, in <module>
test_and_evaluate()
File "LID_test.py", line 159, in test_and_evaluate
predictions = model.eval(mb)
File "/home/t-asbahe/anaconda3/envs/cntk-py35/lib/python3.5/site-packages/cntk/ops/functions.py", line 228, in eval
_, output_map = self.forward(arguments, self.outputs, device=device, as_numpy=as_numpy)
File "/home/t-asbahe/anaconda3/envs/cntk-py35/lib/python3.5/site-packages/cntk/utils/swig_helper.py", line 62, in wrapper
result = f(*args, **kwds)
File "/home/t-asbahe/anaconda3/envs/cntk-py35/lib/python3.5/site-packages/cntk/ops/functions.py", line 354, in forward
None, device)
File "/home/t-asbahe/anaconda3/envs/cntk-py35/lib/python3.5/site-packages/cntk/utils/__init__.py", line 393, in sanitize_var_map
if len(arguments) < len(op_arguments):
TypeError: object of type 'Variable' has no len()
I have no input_variable named 'Variable' in my model and I don't see any reason to get this error.
P.S.: My inputs are sparse inputs (one-hots)
You have a few options:
Pass the data as a numpy array (as in the CNTK 202 tutorial), where the one-hot data is passed in as a numpy array:
pred = model.eval({model.arguments[0]: [onehot]})
Read the minibatch data and pass it to the eval function:
eval_input_map = {input: reader_eval.streams.features}
eval_data = reader_eval.next_minibatch(eval_minibatch_size,
                                       input_map=eval_input_map)
mydata = eval_data[input].value
predicted = model.eval(mydata)
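For the sparse one-hot inputs mentioned in the question, option 1 might look roughly like this (a sketch, assuming CNTK 2.x; the class indices and num_classes are placeholders for your own values):

import cntk as C

# Hypothetical class indices, one short sequence per sample
indices = [[3], [7], [1]]
# Value.one_hot builds a one-hot batch whose width matches the input dimension
onehot = C.Value.one_hot(indices, num_classes)
pred = model.eval({model.arguments[0]: onehot})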

Tensorflow error using my own data for text classification

I've been playing with the TensorFlow library, doing the tutorials.
I'm using this example, and I changed a parameter in the example from this: n_classes = 15
to this: n_classes = 2, as I have only two classes to classify.
I read the data like this:
train = pandas.read_csv('tensorflow_feed/test/train_with_abs.csv', header=None)
X_train, y_train = train[1], train[0]
test = pandas.read_csv('tensorflow_feed/test/test_with_abs.csv', header=None)
X_test, y_test = test[1], test[0]
But it gives following error:
Total words: 35
Traceback (most recent call last):
File "/home/sumit/PycharmProjects/experiments/text_classification_save_restore.py", line 94, in <module>
classifier.fit(X_train, y_train)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/estimators/base.py", line 160, in fit
monitors=monitors)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 449, in _train_model
train_op, loss_op = self._get_train_ops(features, targets)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 673, in _get_train_ops
_, loss, train_op = self._call_model_fn(features, targets, ModeKeys.TRAIN)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 656, in _call_model_fn
features, targets, mode=mode)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/estimators/base.py", line 369, in _model_fn
predictions, loss = model_fn(features, targets)
File "/home/sumit/PycharmProjects/experiments/text_classification_save_restore.py", line 73, in rnn_model
word_list = tf.unpack(word_vectors, axis=1)
TypeError: unpack() got an unexpected keyword argument 'axis'
Process finished with exit code 1
The "axis" parameter was just added to tf.unpack on June 23, and the example you're looking at was changed to use it:
https://github.com/tensorflow/tensorflow/commit/eff93149a6dc8e6826898fd9f9c28c81e21c9836
So I suggest either:
use an older version of the example from before that commit, e.g.:
https://github.com/tensorflow/tensorflow/blob/892ca4ddc12852a7b4633fd08f163941356cb4e6/tensorflow/examples/skflow/text_classification_save_restore.py
or build a newer TensorFlow from GitHub HEAD.
I hope that helps!
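If neither option works for you, one possible stopgap (a sketch, not tested against the old API) is to transpose the tensor so that the dimension you want to split ends up first, since the old tf.unpack always unpacks along dimension 0:

# word_vectors has shape [batch, time, embedding]; moving the time axis to the
# front lets the old tf.unpack (which has no axis argument) split along it.
word_list = tf.unpack(tf.transpose(word_vectors, [1, 0, 2]))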
