As a learning tool, I am trying to do something simple.
I have two training CSV files:
One file with 36 columns (3500 records) with 0s and 1s. I am envisioning this file as a flattened 6x6 matrix.
I have another CSV file with 1 column of ground truth 0 or 1 (3500 records) which indicates whether at least 4 of the 6 elements on the 6x6 matrix's diagonal are 1s.
I also have two test CSV files which are the same structure as the training files except there are 500 records in each.
When I step through the program using the debugger, it appears that the...
estimator.train(
    input_fn=lambda: get_inputs(x_paths=[x_train_file], y_paths=[y_train_file], batch_size=32),
    steps=100)
... runs OK. I see files in the checkpoint directory and see a loss function graph in Tensorboard.
But when the program gets to...
eval_result = estimator.evaluate(
    input_fn=lambda: get_inputs(x_paths=[x_test_file], y_paths=[y_test_file], batch_size=32))
... it just hangs.
I have checked the test files, and I also tried running estimator.evaluate using the training files. It still hangs.
I am using TensorFlow 1.6, Python 3.6
The following is all of the code:
import tensorflow as tf
import os
import numpy as np
x_train_file = os.path.join('D:', 'Diag', '6x6_train.csv')
y_train_file = os.path.join('D:', 'Diag', 'HasDiag_train.csv')
x_test_file = os.path.join('D:', 'Diag', '6x6_test.csv')
y_test_file = os.path.join('D:', 'Diag', 'HasDiag_test.csv')
model_chkpt = os.path.join('D:', 'Diag', "checkpoints")
def get_inputs(
        count=None, shuffle=True, buffer_size=1000, batch_size=32,
        num_parallel_calls=8, x_paths=[x_train_file], y_paths=[y_train_file]):
    """
    Get x, y inputs.

    Args:
        count: number of epochs. None indicates infinite epochs.
        shuffle: whether or not to shuffle the dataset
        buffer_size: used in shuffle
        batch_size: size of batch. See outputs below
        num_parallel_calls: used in map. Note if > 1, intra-batch ordering
            will be shuffled
        x_paths: list of paths to x-value files.
        y_paths: list of paths to y-value files.

    Returns:
        x: (batch_size, 6, 6) tensor
        y: (batch_size, 2) tensor of 1-hot labels
    """

    def x_map(line):
        n_dims = 6
        columns = [str(i1) for i1 in range(n_dims**2)]
        # Decode the line into its fields
        fields = tf.decode_csv(line, record_defaults=[[0]] * (n_dims ** 2))
        # Pack the result into a dictionary
        features = dict(zip(columns, fields))
        return features

    def y_map(line):
        y_row = tf.string_to_number(line, out_type=tf.int32)
        return y_row

    def xy_map(x, y):
        return x_map(x), y_map(y)

    x_ds = tf.data.TextLineDataset(x_paths)
    y_ds = tf.data.TextLineDataset(y_paths)
    combined = tf.data.Dataset.zip((x_ds, y_ds))
    combined = combined.repeat(count=count)
    if shuffle:
        combined = combined.shuffle(buffer_size)
    combined = combined.map(xy_map, num_parallel_calls=num_parallel_calls)
    combined = combined.batch(batch_size)
    x, y = combined.make_one_shot_iterator().get_next()
    return x, y
columns = [str(i1) for i1 in range(6 ** 2)]
feature_columns = [
    tf.feature_column.numeric_column(name)
    for name in columns]

estimator = tf.estimator.DNNClassifier(feature_columns=feature_columns,
                                       hidden_units=[18, 9],
                                       activation_fn=tf.nn.relu,
                                       n_classes=2,
                                       model_dir=model_chkpt)

estimator.train(
    input_fn=lambda: get_inputs(x_paths=[x_train_file], y_paths=[y_train_file], batch_size=32),
    steps=100)

eval_result = estimator.evaluate(
    input_fn=lambda: get_inputs(x_paths=[x_test_file], y_paths=[y_test_file], batch_size=32))
print('\nTest set accuracy: {accuracy:0.3f}\n'.format(**eval_result))
There are two parameters that are causing this:
tf.data.Dataset.repeat has a count parameter:
count: (Optional.) A tf.int64 scalar tf.Tensor, representing the number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset to be repeated indefinitely.
In your case, count is always None, so the dataset is repeated indefinitely.
tf.estimator.Estimator.evaluate has the steps parameter:
steps: Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
Steps are set for the training, but not for the evaluation; as a result, the estimator runs until input_fn raises an end-of-input exception, which, as described above, never happens.
You should set either of those; I think count=1 is the most reasonable for evaluation.
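For example, a minimal sketch using the question's own get_inputs (count=1 ends the evaluation dataset after one pass; turning off shuffle for evaluation is optional but typical):

eval_result = estimator.evaluate(
    input_fn=lambda: get_inputs(
        x_paths=[x_test_file], y_paths=[y_test_file],
        batch_size=32, count=1, shuffle=False))

# Alternatively, keep the infinite repeat and bound the evaluation explicitly:
# eval_result = estimator.evaluate(input_fn=..., steps=100)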
Related
I have a text dataset that I want to use for a GAN. It should be one-hot encoded, and this is how I am creating a custom Dataset for my files:
import torch
import torch.nn.functional as F

class Dataset2(torch.utils.data.Dataset):
    def __init__(self, list_, labels):
        'Initialization'
        self.labels = labels
        self.list_IDs = list_

    def __len__(self):
        'Denotes the total number of samples'
        return len(self.list_IDs)

    def __getitem__(self, index):
        'Generates one sample of data'
        # Select sample
        mylist = self.list_IDs[index]
        # Load data and get label
        # (`alphabet` is defined elsewhere in my code)
        X = F.one_hot(mylist, num_classes=len(alphabet))
        y = self.labels[index]
        return X, y
It works well: every time I call it directly, it works just fine. The problem is that when I wrap it in a DataLoader, the shape is no longer the same as what comes straight out of the dataset. This is the shape from the dataset:
x, _ = dataset[1]
x.shape
torch.Size([1274, 22])
and this is the shape that comes out of the DataLoader:
dataloader = DataLoader(dataset, batch_size=64, shuffle=True)
one = []
for epoch in range(epochs):
    for i, (real_data, _) in enumerate(dataloader):
        one.append(real_data)

one[3].shape
torch.Size([4, 1274, 22])
This 4 is the number of samples in my data, but it should not be there. How can I fix this problem?
You confirmed you only have four elements in your dataset, and you have wrapped it in a DataLoader with batch_size=64, which is greater than 4. This means the DataLoader will only ever output a single batch containing 4 elements.
In turn, this means you only append a single element per epoch, and one[3] is a batch (the only batch of the DataLoader), shaped (4, 1274, 22).
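A minimal sketch reproducing the behavior with dummy data (shapes taken from the question):

import torch
from torch.utils.data import DataLoader, TensorDataset

data = torch.zeros(4, 1274, 22)   # 4 samples, each shaped (1274, 22)
labels = torch.zeros(4)
dataset = TensorDataset(data, labels)

dataloader = DataLoader(dataset, batch_size=64, shuffle=True)
print(len(dataloader))            # 1 -- only one (partial) batch exists
for real_data, _ in dataloader:
    print(real_data.shape)        # torch.Size([4, 1274, 22])

If you want to discard incomplete batches instead, DataLoader accepts drop_last=True, though with only 4 samples that would leave you with no batches at all.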
Related
Suppose you have a TensorFlow dataset that has values and labels. In my case I created it from a time series as:
df = pd.read_csv('MY.csv', index_col=0, parse_dates=True)
# extract the column we are interested in
single_col_df = df[['Close']]
# convert to a tf.data.Dataset
WINDOW_SIZE = 10
dataset = tf.data.Dataset.from_tensor_slices(single_col_df.values)
d = dataset.window(WINDOW_SIZE, shift=1, drop_remainder=True)
d2 = d.flat_map(lambda window: window.batch(WINDOW_SIZE + 1))
# create data and ground truth
d3 = d2.map(lambda window: (window[:-1], window[-1:]))
# get the total length of the data and shuffle
len_ds = 0
for item in d2:
    len_ds += 1
d_shuffled = d3.shuffle(buffer_size=len_ds)
# split train/test
train_size = int(0.7 * len_ds)
val_size = int(0.15 * len_ds)
test_size = int(0.15 * len_ds)
train_dataset = d_shuffled.take(train_size)
test_dataset = d_shuffled.skip(train_size)
val_dataset = test_dataset.skip(test_size)
test_dataset = test_dataset.take(test_size)
train_dataset = train_dataset.batch(32).prefetch(2)
val_dataset = val_dataset.batch(32)
Now, for evaluation purposes, I want to get the ground-truth values of the test set, so I am running
y = np.concatenate([y for x, y in test_dataset], axis=0)
but this returns a differently ordered array each time, so it cannot be compared with the model's predictions. For example, when running the above line in a Jupyter notebook and printing the first 5 values of y as `y[:5]`, one time I get
array([[26.04000092],
       [16.39999962],
       [18.98999977],
       [42.31000137],
       [19.82999992]])
another time I get
array([[15.86999989],
       [43.27999878],
       [19.32999992],
       [48.38000107],
       [17.12000084]])
but the length of y remains the same, so I am assuming that the elements are just shuffled around. Either way, I cannot compare these values with the predicted ones, since their order is different:
y_hat = model.predict(test_dataset)
Furthermore, I also get different evaluation results. For example, with
x = []
y = []
for _x, _y in test_dataset:
    x.append(_x)
    y.append(_y)
x = np.array(x)
y = np.array(y)
model.evaluate(x=x, y=y)
each time the loop defining the arrays x and y is re-executed, I get different x and y arrays, which yield a different evaluation score.
Your problem:
By calling shuffle on the whole dataset before splitting it, you actually reshuffle the dataset after each exhaustion. Here is what is happening:
The first call of y = np.concatenate([y for x, y in test_dataset], axis=0) will exhaust the test dataset
The second call of y = np.concatenate([y for x, y in test_dataset], axis=0) will see that test_dataset is exhausted, and will trigger:
A reshuffle of the whole dataset
The call to skip to get a dataset of the right size
You end up with samples that were potentially in the train split during the first exhaustion landing in the test split of the second round.
The solution
If we look at the documentation of tf.data.Dataset.shuffle:
reshuffle_each_iteration (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to True.)
Set it to False to get a deterministic shuffle. If you still want to shuffle your training set each epoch, you need to call shuffle on the train set.
A dummy example:
import tensorflow as tf
tf.random.set_seed(0) # reproducibility
a = tf.range(10)
ds = tf.data.Dataset.from_tensor_slices(a)
ds_shuffled = ds.shuffle(10, reshuffle_each_iteration=False)
ds_train = ds_shuffled.take(7)
ds_train = ds_train.shuffle(7)
ds_test = ds_shuffled.skip(7)
Running it:
>>> [x.numpy() for x in ds_test]
[5, 8, 4]
>>> [x.numpy() for x in ds_test]
[5, 8, 4]
>>> [x.numpy() for x in ds_train]
[1, 3, 7, 2, 6, 9, 0]
>>> [x.numpy() for x in ds_train]
[3, 9, 6, 7, 2, 1, 0]
Try running it with reshuffle_each_iteration=True to see what happened in your own code.
In addition to Lescurel's answer, another working solution seems to be this piece of code from Kaggle that uses sklearn:
from sklearn.model_selection import train_test_split

# Extract target values from the vanilla training dataset.
# Indices are generated along with the target values, which are used to filter the dataset.
y_targets = np.array([target.numpy() for _, target in iter(d_shuffled)])
X_indices = np.arange(len(y_targets))

y_targets = y_targets.reshape((-1,))
y_targets.shape

# stratify: array-like, default=None. If not None, data is split in a stratified fashion, using this as the class labels.
X_train_indices, X_val_indices, y_train_targets, y_val_targets = train_test_split(
    X_indices, y_targets, test_size=0.15, stratify=None, random_state=53)

X_test_indices, X_val_indices, y_test_targets, y_val_targets = train_test_split(
    X_val_indices, y_val_targets, test_size=0.5, stratify=None, random_state=53)
def get_selected_dataset(ds, X_indices_np):
    # Make a tensor of type tf.int64 to match the one returned by Dataset.enumerate().
    X_indices_ts = tf.constant(X_indices_np, dtype=tf.int64)

    def is_index_in(index, rest):
        # Returns True if the specified index value is included in X_indices_ts.
        #
        # '==' compares the specified index value with each value in X_indices_ts.
        # The result is a boolean tensor that looks like [False, True, ..., False].
        # reduce_any() returns True if True is included in the specified tensor.
        return tf.math.reduce_any(index == X_indices_ts)

    def drop_index(index, rest):
        return rest

    # Dataset.enumerate() is similar to Python's enumerate().
    # The method adds indices to each element. Then, the elements are filtered
    # using the specified indices. Finally, the unnecessary indices are dropped.
    selected_ds = ds \
        .enumerate() \
        .filter(is_index_in) \
        .map(drop_index)
    return selected_ds
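For completeness, a usage sketch with the index arrays from train_test_split above (the batch size is assumed from the question; this relies on d_shuffled being shuffled with reshuffle_each_iteration=False, as in the accepted answer, otherwise the enumeration changes between iterations):

train_ds = get_selected_dataset(d_shuffled, X_train_indices).batch(32).prefetch(2)
val_ds = get_selected_dataset(d_shuffled, X_val_indices).batch(32)
test_ds = get_selected_dataset(d_shuffled, X_test_indices).batch(32)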
Related
I need to save the indices of the data that are used in every mini-batch.
For example if my data is:
x = np.array([[1.1], [2.2], [3.3], [4.4]])
and the first mini-batch is [1.1] and [3.3], then I want to store 0 and 2 (since [1.1] is the 0th observation and [3.3] is the 2nd observation).
I am using TensorFlow in eager execution with the Keras Sequential API.
As far as I can tell from reading the source code, this information is not stored anywhere so I was unable to do this with a callback.
I am currently solving my problem by creating an object that stores the indices.
class IndexIterator(object):
    def __init__(self, n, n_epochs, batch_size, shuffle=True):
        data_ix = np.arange(n)
        if shuffle:
            np.random.shuffle(data_ix)
        self.ix_batches = np.array_split(data_ix, int(np.ceil(n / batch_size)))
        self.batch_indices = []

    def generate_arrays(self, x, y):
        batch_ixs = np.arange(len(self.ix_batches))
        while 1:
            np.random.shuffle(batch_ixs)
            for batch in batch_ixs:
                self.batch_indices.append(self.ix_batches[batch])
                yield (x[self.ix_batches[batch], :], y[self.ix_batches[batch], :])
data_gen = IndexIterator(n=32, n_epochs=100, batch_size=16)
dnn.fit_generator(data_gen.generate_arrays(x, y),
                  steps_per_epoch=2,
                  epochs=100)
# This is what I am looking for
print(data_gen.batch_indices)
Is there no way to do this using a tensorflow callback?
I am not sure whether this will be more efficient than your solution, but it is certainly more general.
If you have training data with n indices you can create a secondary Dataset that contains only these indices and zip it with the "real" dataset.
I.E.
real_data = tf.data.Dataset ...  # your original dataset
indices = tf.data.Dataset.from_tensor_slices(tf.range(data_set_length))
total_dataset = tf.data.Dataset.zip((real_data, indices))

# Perform optional pre-processing ops.

iterator = total_dataset.make_one_shot_iterator()

# Next line yields `(original_data_element, index)`
item_and_index_tuple = iterator.get_next()
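In eager execution (which the question uses), a minimal sketch of collecting the indices per mini-batch could look like this (the batch size and variable names are illustrative):

seen_indices = []
for data_batch, index_batch in total_dataset.batch(16):
    seen_indices.append(index_batch.numpy())  # indices used in this mini-batch
    # ... run one training step on data_batch here ...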
Related
I'm building a Keras model, and I have a dataset in the msg format with over 10 million instances and 40 features, all of which are categorical. For the moment I'm using just a sample of it, since reading the whole dataset and encoding it is impossible to fit into memory. Here is part of the code I'm using:
import pandas as pd
from category_encoders import BinaryEncoder as be
from sklearn.preprocessing import StandardScaler
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
def model():
    model = Sequential()
    model.add(Dense(120, input_dim=233, kernel_initializer='uniform', activation='selu'))
    model.add(Dense(12, kernel_initializer='uniform', activation='sigmoid'))
    model.compile(SGD(lr=0.008), loss='mean_squared_error', metrics=['accuracy'])
    return model

def addrDataLoading():
    data = pd.read_msgpack('datum.msg')
    data = data.dropna(subset=['s_address', 'd_address'])
    data = data.sample(300000)  # taking a sample of the dataset to make the encoding possible
    y = data[['s_address', 'd_address']]
    x = data.drop(['s_address', 'd_address'], 1)
    encX = be().fit(x, y)
    numeric_X = encX.transform(x)
    encY = be().fit(y, y)
    numeric_Y = encY.transform(y)
    scaler = StandardScaler()
    X_all = scaler.fit_transform(numeric_X)
    x_train = X_all[0:250000, :]
    y_train = numeric_Y.iloc[0:250000, :]
    x_val = X_all[250000:, :]
    y_val = numeric_Y.iloc[250000:, :]
    return x_train, y_train, x_val, y_val

x_train, y_train, x_val, y_val = addrDataLoading()

model = model()
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=20, batch_size=200)
So my question is: how can I use a custom data generator function to read and process all the data I have, not just a sample, and then use the fit_generator() function to train my model?
EDIT
This is a sample of the data:
[image: netData, a screenshot of the sample data]
I think that taking different samples from the data results in different encoding dimensions.
For this sample there are 16 different categories: 4 addresses (3 bits), 4 hostnames (3 bits), 1 subnet mask (1 bit), 5 infrastructures (3 bits), 1 access zone (1 bit), so the binary encoding gives us 11 bits, and the new dimension of the data is 11 instead of the previous 5. Now say that for another sample the address column has 8 different categories; this gives 4 bits in binary, and if we keep the same number of categories in the other columns, the overall encoding results in 12 dimensions. I believe that is what's causing the problem.
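To make the arithmetic concrete, here is a small sketch of the rule at work (the width formula is my assumption about how BinaryEncoder sizes its output, but it matches the counts above):

import math

# Binary encoding needs roughly ceil(log2(n_categories + 1)) bits per column.
for n in (1, 4, 5, 8):
    print(n, 'categories ->', math.ceil(math.log2(n + 1)), 'bits')
# 1 categories -> 1 bits
# 4 categories -> 3 bits
# 5 categories -> 3 bits
# 8 categories -> 4 bits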
Slightly slow solution (repeating the same actions)
Edit: fit the BinaryEncoder before creating the generators.
Drop NAs first and work with the clean data from then on, to avoid repeated reassignments of the data frame.
data = pd.read_msgpack('datum.msg')
data.dropna(subset=['s_address','d_address']).to_msgpack('datum_clean.msg')
In this solution, data_generator can process the same data multiple times. If that's not critical, you can use this solution.
Define a function which reads the data and splits the index into train and test. It won't consume a lot of memory.
import pandas as pd
from category_encoders import BinaryEncoder as be
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
import numpy as np

def model():
    # some code defining the model
    ...

def train_test_index_split():
    # if there's enough memory to add one more column
    data = pd.read_msgpack('datum_clean.msg')
    train_idx, test_idx = train_test_split(data.index)
    return data, train_idx, test_idx

data, train_idx, test_idx = train_test_index_split()
Define the data generator and initialize it, both for train and validation:
def data_generator(data, encX, encY, batch_size, n_steps, index):
    # EDIT: as the data was cleaned, you don't need dropna
    # data = data.dropna(subset=['s_address','d_address'])
    for i in range(n_steps):
        batch_idx = np.random.choice(index, batch_size)
        sample = data.loc[batch_idx]
        y = sample[['s_address', 'd_address']]
        x = sample.drop(['s_address', 'd_address'], 1)
        numeric_X = encX.transform(x)
        numeric_Y = encY.transform(y)
        scaler = StandardScaler()
        X_all = scaler.fit_transform(numeric_X)
        yield X_all, numeric_Y
Edited part: now train the binary encoders. You should sub-sample your data to create a representative training set for the encoders. I guess the error with the shape of the data was caused by an incorrectly trained BinaryEncoder (Error when checking input: expected dense_9_input to have shape (233,) but got array with shape (234,)):
def get_minimal_unique_frame(df):
    return (pd.Series([df[column].unique() for column in df], index=df.columns)
            .apply(pd.Series)        # transform lists of unique values to pd.Series
            .T                       # transpose the frame: columns are columns again
            .fillna(method='ffill')) # fill NaNs with the last value

x = get_minimal_unique_frame(data.drop(['s_address', 'd_address'], 1))
y = get_minimal_unique_frame(data[['s_address', 'd_address']])
NB: I have never used category_encoders, and my system configuration is incompatible, so I can't install and check it. Therefore, the code above may cause problems. In that case, I guess, you should compare the lengths of the x and y data frames, make them the same, and probably change the index of the data frames.
encX = be().fit(x, y)
encY = be().fit(y, y)
batch_size = 200
train_steps = 100000
val_steps = 5000
train_gen = data_generator(data, encX, encY, batch_size, train_steps, train_idx)
test_gen = data_generator(data, encX, encY, batch_size, val_steps, test_idx)
Edit: please provide an example of x_sample. Run train_gen, save the output, and post x_samples and y_samples:
x_samples = []
y_samples = []
for i in range(10):
    x_sample, y_sample = next(train_gen)
    x_samples.append(x_sample)
    y_samples.append(y_sample)
Note: the data generator stops by itself after n_steps batches, which matches the train_steps passed to the fit_generator method.
Fit model with generators:
model.fit_generator(generator=train_gen, steps_per_epoch=train_steps, epochs=1,
                    validation_data=test_gen, validation_steps=val_steps)
As far as I know, Python does not copy pandas data frames unless you do it explicitly with copy() or similar. Because of that, both generators use the same object. But if you use a Jupyter Notebook, data leaks/uncollected garbage may occur, and memory troubles come with them.
More efficient solution: a sketch
Clean your data
data = pd.read_msgpack('datum.msg')
data.dropna(subset=['s_address','d_address']).to_msgpack('datum_clean.msg')
Create a train/test split, preprocess it, and store it as numpy arrays, if you have enough disk space.
data, train_idx, test_idx = train_test_index_split()

def data_preprocessor(data, path, index):
    # data = data.dropna(subset=['s_address','d_address'])
    sample = data.loc[index]
    y = sample[['s_address', 'd_address']]
    x = sample.drop(['s_address', 'd_address'], 1)
    encX = be().fit(x, y)
    numeric_X = encX.transform(x)
    encY = be().fit(y, y)
    numeric_Y = encY.transform(y)
    scaler = StandardScaler()
    X_all = scaler.fit_transform(numeric_X)
    np.save(path + '_X', X_all)
    np.save(path + '_y', numeric_Y)

data_preprocessor(data, 'train', train_idx)
data_preprocessor(data, 'test', test_idx)
Delete unnecessary data:
del data
Load your files and use the following generator:
train_X = np.load('train_X.npy')
train_y = np.load('train_y.npy')
test_X = np.load('test_X.npy')
test_y = np.load('test_y.npy')

def data_generator(X, y, batch_size, n_steps):
    idxs = np.arange(len(X))
    np.random.shuffle(idxs)
    ptr = 0
    for _ in range(n_steps):
        if ptr + batch_size > len(X):  # wrap around once the data is exhausted
            ptr = 0
        batch_idx = idxs[ptr:ptr + batch_size]
        x_sample = X[batch_idx]
        y_sample = y[batch_idx]
        ptr += batch_size
        yield x_sample, y_sample
Prepare generators:
train_gen = data_generator(train_X, train_y, batch_size, train_steps)
test_gen = data_generator(test_X, test_y, batch_size, val_steps)
And finally fit the model, as sketched below. I hope one of these solutions will help. At least if Python passes arrays and data frames by reference, not by value. Stack Overflow answer about it.
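For completeness, a sketch of that final call, reusing the generators and step counts defined above:

model.fit_generator(generator=train_gen, steps_per_epoch=train_steps, epochs=1,
                    validation_data=test_gen, validation_steps=val_steps)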
Related
I've copied TensorFlow code from Sirajology's GitHub. It should load a .csv into a single-layer neural net.
My question is: how and where do I put the .csv file into the code?
Also, I don't understand whether the code will automatically split the .csv into train and test data, or whether I need to do that with some different code before it is fed to the neural net?
I've been spending a lot of time with Python and TensorFlow and understand some basic concepts, but am still a total newb. Any help is appreciated! Thanks!!!
#I have eliminated all code that is obviously irrelevant to the question
tf.app.flags.DEFINE_string('train', None,
                           'File containing the training data (labels & features).')
tf.app.flags.DEFINE_string('test', None,
                           'File containing the test data (labels & features).')
tf.app.flags.DEFINE_integer('num_epochs', 1,
                            'Number of examples to separate from the training '
                            'data for the validation set.')
tf.app.flags.DEFINE_boolean('verbose', False, 'Produce verbose output.')
FLAGS = tf.app.flags.FLAGS
# Extract numpy representations of the labels and features given rows consisting of:
#   label, feat_0, feat_1, ..., feat_n
def extract_data(filename):
    # Arrays to hold the labels and feature vectors.
    labels = []
    fvecs = []

    # Iterate over the rows, splitting the label from the features. Convert labels
    # to integers and features to floats.
    for line in file(filename):
        row = line.split(",")
        labels.append(int(row[0]))
        fvecs.append([float(x) for x in row[1:]])

    # Convert the array of float arrays into a numpy float matrix.
    fvecs_np = np.matrix(fvecs).astype(np.float32)

    # Convert the array of int labels into a numpy array.
    labels_np = np.array(labels).astype(dtype=np.uint8)

    # Convert the int numpy array into a one-hot matrix.
    labels_onehot = (np.arange(NUM_LABELS) == labels_np[:, None]).astype(np.float32)

    # Return a pair of the feature matrix and the one-hot label matrix.
    return fvecs_np, labels_onehot
def main(argv=None):
    # Be verbose?
    verbose = FLAGS.verbose

    # Get the data.
    train_data_filename = FLAGS.train
    test_data_filename = FLAGS.test

    # Extract it into numpy matrices.
    train_data, train_labels = extract_data(train_data_filename)
    test_data, test_labels = extract_data(test_data_filename)

    # Get the shape of the training data.
    train_size, num_features = train_data.shape

    # Get the number of epochs for training.
    num_epochs = FLAGS.num_epochs

    # This is where training samples and labels are fed to the graph.
    # These placeholder nodes will be fed a batch of training data at each
    # training step using the {feed_dict} argument to the Run() call below.
    x = tf.placeholder("float", shape=[None, num_features])
    y_ = tf.placeholder("float", shape=[None, NUM_LABELS])

    # For the test data, hold the entire dataset in one constant node.
    test_data_node = tf.constant(test_data)

    # Define and initialize the network.

    # These are the weights that inform how much each feature contributes to
    # the classification.
    W = tf.Variable(tf.zeros([num_features, NUM_LABELS]))
    b = tf.Variable(tf.zeros([NUM_LABELS]))
    y = tf.nn.softmax(tf.matmul(x, W) + b)

    # Optimization.
    cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

    # Evaluation.
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

    # Create a local session to run this computation.
    with tf.Session() as s:
        # Run all the initializers to prepare the trainable parameters.
        tf.initialize_all_variables().run()
        if verbose:
            print ('Initialized!')
            print
            print ('Training.')

        # Iterate and train.
        for step in xrange(num_epochs * train_size // BATCH_SIZE):
            if verbose:
                print (step,)

            offset = (step * BATCH_SIZE) % train_size
            batch_data = train_data[offset:(offset + BATCH_SIZE), :]
            batch_labels = train_labels[offset:(offset + BATCH_SIZE)]
            train_step.run(feed_dict={x: batch_data, y_: batch_labels})

            if verbose and offset >= train_size - BATCH_SIZE:
                print

        # Give very detailed output.
        if verbose:
            print
            print ('Weight matrix.')
            print (s.run(W))
            print
            print ('Bias vector.')
            print (s.run(b))
            print
            print ("Applying model to first test instance.")
            first = test_data[:1]
            print ("Point =", first)
            print ("Wx+b = ", s.run(tf.matmul(first, W) + b))
            print ("softmax(Wx+b) = ", s.run(tf.nn.softmax(tf.matmul(first, W) + b)))
            print

        print ("Accuracy:", accuracy.eval(feed_dict={x: test_data, y_: test_labels}))

if __name__ == '__main__':
    tf.app.run()
It's expecting to receive the file paths as arguments in the terminal.
The lines below define the flags for them:
tf.app.flags.DEFINE_string('train', None,
                           'File containing the training data (labels & features).')
tf.app.flags.DEFINE_string('test', None,
                           'File containing the test data (labels & features).')
tf.app.flags.DEFINE_integer('num_epochs', 1,
                            'Number of examples to separate from the training '
                            'data for the validation set.')
So, you just have to run it as:
python YourScript.py --train FileName.csv --test TestName.csv --num_epochs 5 --verbose True
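As for your second question: the script does not split a single .csv for you; it expects separate train and test files (the --train and --test flags above). If you only have one file, you could split it yourself beforehand, e.g. with pandas (a sketch; the file names are placeholders, and it assumes label-first rows with no header, as the script does):

import pandas as pd

df = pd.read_csv('FileName.csv', header=None)
train = df.sample(frac=0.8, random_state=42)  # 80% of rows for training
test = df.drop(train.index)                   # remaining 20% for testing
train.to_csv('train.csv', index=False, header=False)
test.to_csv('test.csv', index=False, header=False)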