Changing a TF Dataset from classified to numeric/regression data - python

This is my first attempt at branching out from ready-made datasets and models to something pieced together on my own. Using TensorFlow, I'm trying to load a dataset of images where each image is assigned a normalized numeric value, so that I can try to build a regression CNN over it.
Unfortunately for me, tf.keras.preprocessing.image_dataset_from_directory expects the dataset to be discretely classified.
Is there a straightforward way to convert the BatchDataset object's labels to numeric values?
For further clarification: if I were to do a dir(my_dataset) or my_dataset.__dict__, where would the labels be?

I can answer this after poking around a little more at the BatchDataset object returned from tf.keras.preprocessing.image_dataset_from_directory.
Because these objects are very specialized iterators, it's not straightforward to access a single "row."
The following is how I got direct access to where this information is located in a BatchDataset.
training_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "assets/",
    validation_split=0.2,
    subset="training",
    seed=123,
    color_mode="rgb",
    image_size=(IMG_SIZE, IMG_SIZE),
    batch_size=32)
# pull one batch from the BatchDataset
one_batch = training_ds.take(1)
# turn the TakeDataset into a Python iterator, then pull one batch using next()
training_data, labels = next(iter(one_batch))
# labels is an int32 tensor (labels.numpy() gives the array) where each value is the index of the class inside training_ds.class_names
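To turn those class indices into the numeric targets the question asks about, one option is to map over the dataset. The sketch below assumes each subdirectory under "assets/" is named with the numeric value it represents (e.g. "0.25", "0.5"); that naming scheme and the helper name are illustrative, not from the original post.

# parse the numeric value each directory name encodes (assumption: names are parseable floats)
class_values = tf.constant([float(name) for name in training_ds.class_names])

def to_regression_target(images, class_indices):
    # replace each integer class index with the corresponding numeric value
    return images, tf.gather(class_values, class_indices)

regression_ds = training_ds.map(to_regression_target)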

Related

Using sklearn's roc_auc_score for OneVsOne Multi-Classification?

So I am working on a model that attempts to use RandomForest to classify samples into 1 of 7 classes. I'm able to build and train the model, but when it comes to evaluating it with the roc_auc_score function, I'm able to perform 'ovr' (one-vs-rest), but 'ovo' is giving me some trouble.
roc_auc_score(y_test, rf_probs, multi_class = 'ovr', average = 'weighted')
The above works wonderfully and I get my output. However, when I switch multi_class to 'ovo', which I understand might be better with class imbalances, I get the following error:
roc_auc_score(y_test, rf_probs, multi_class = 'ovo')
IndexError: too many indices for array
(I pasted the whole traceback below!)
Currently my data is set up as follows:
y_test (61,1)
y_probs (61, 7)
Do I need to reshape my data in a special way to use 'ovo'?
In the documentation, https://thomasjpfan.github.io/scikit-learn-website/modules/generated/sklearn.metrics.roc_auc_score.html, it says "binary y_true, y_score is supposed to be the score of the class with greater label. The multiclass case expects shape = [n_samples, n_classes] where the scores correspond to probability estimates."
Additionally, the whole traceback seems to hint at using a more binary array (hopefully that's the right term! I'm new to this!)
Very, very thankful for any ideas/thoughts!
@Tirth Patel provided the right answer: I needed to reshape my test set using one-hot encoding. Thank you!
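For reference, a minimal sketch of the fix described above, assuming seven classes labeled 0 through 6; the use of label_binarize and the variable names are illustrative, and the exact behavior can depend on the scikit-learn version.

from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_auc_score

# one-hot encode the (61, 1) label column into shape (61, 7)
# (classes 0..6 is an assumption about how the labels are encoded)
y_test_onehot = label_binarize(y_test.ravel(), classes=range(7))
score = roc_auc_score(y_test_onehot, rf_probs, multi_class='ovo', average='weighted')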

What does reset actually mean in Tensorflow 2 dataset?

I'm following the TensorFlow 2 Keras documentation. My input pipeline and model look like this:
train_dataset = tf.data.Dataset.from_tensor_slices((np.array([_my_cus_func(i) for i in X_train]), y_train))
train_dataset = train_dataset.map(lambda vals,lab: _process_tensors(vals,lab), num_parallel_calls=4)
train_dataset = train_dataset.shuffle(buffer_size=10000)
train_dataset = train_dataset.batch(64,drop_remainder=True)
train_dataset = train_dataset.prefetch(1)
model=get_compiled_model()
model.fit(train_dataset, epochs=100)
The documentation says
Note that the Dataset is reset at the end of each epoch, so it can be
reused for the next epoch.
If you want to run training only on a specific number of batches from
this Dataset, you can pass the steps_per_epoch argument, which
specifies how many training steps the model should run using this
Dataset before moving on to the next epoch.
If you do this, the dataset is not reset at the end of each epoch,
instead we just keep drawing the next batches. The dataset will
eventually run out of data (unless it is an infinitely-looping
dataset).
What does the reset actually mean? Will TensorFlow read the data from the tensor slices again after every epoch, or does it only reshuffle and rerun the map function? I want TensorFlow to read the data from numpy after every epoch and run _my_cus_func. I could instead pass _my_cus_func to the dataset's map or apply API, but I'm more comfortable doing this on a Python list or numpy array.
In this context, reset means starting to iterate over the dataset from scratch. In your particular case, the code lacks a repeat() call. So, if you specify the steps_per_epoch parameter like this
model.fit(train_dataset, steps_per_epoch=N, epochs=100)
it will iterate over the dataset for N steps per epoch. Without repeat(), training terminates once the dataset runs out of data: if N is less than the actual number of batches, that happens after a few epochs; if N is larger, it finishes one pass but still terminates when the data runs out. If you add repeat(),
train_dataset = train_dataset.shuffle(buffer_size=10000).repeat()
it will start a new cycle over the dataset when the actual number of examples has been consumed, not when a new epoch starts.
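Putting it together, a minimal sketch of the pattern described above; the steps_per_epoch value is illustrative (one full pass of 64-sample batches), not from the original post.

# shuffle, then repeat indefinitely so training never runs out of data;
# each pass over the data is reshuffled
train_dataset = train_dataset.shuffle(buffer_size=10000).repeat()
train_dataset = train_dataset.batch(64, drop_remainder=True)
train_dataset = train_dataset.prefetch(1)

# with an infinitely repeating dataset, steps_per_epoch defines where each epoch ends
model.fit(train_dataset, steps_per_epoch=len(X_train) // 64, epochs=100)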

Creating a Y_true Dataset in Keras

Here's my current call to model.fit in Keras
history_callback = model.fit(x_train/255.,
                             validation_train_data,
                             validation_split=validation_split,
                             batch_size=batch_size,
                             callbacks=callbacks)
In this example x_train is a list of numpy arrays that contains all of my image data. The way validation_train_data is structured, though, is that it's a list of numpy arrays of totally different sizes that is equal in length to the list of numpy arrays containing my images. The data for each image is contained in validation_train_data such that x_train[i] corresponds to a set containing validation_train_data[0][i], validation_train_data[1][i], validation_train_data[2][i], etc. Is there any way I can reformat my validation_train_data so that it can properly be used as y_true in a custom Keras loss function?
I managed to solve my problem by writing a generator function which generated a batch of x and y data as lists and put them together as a tuple. I then called fit_generator with generator=my_generator and it worked just fine. If you have odd input data, you should consider writing a generator to take care of it.
This is the tutorial I used to do so:
https://stanford.edu/~shervine/blog/keras-how-to-generate-data-on-the-fly
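For reference, a minimal sketch of what such a generator might look like; the batch size, the helper name my_generator, and the slicing scheme are all illustrative, not from the original post.

import numpy as np

def my_generator(x_data, y_data, batch_size=32):
    # x_data: list of image arrays; y_data: list of per-target numpy arrays,
    # indexed as y_data[target_index][image_index]
    n = len(x_data)
    while True:
        for start in range(0, n, batch_size):
            end = min(start + batch_size, n)
            x_batch = np.array(x_data[start:end]) / 255.
            # one array per target, sliced to the same images as x_batch
            y_batch = [y[start:end] for y in y_data]
            yield x_batch, y_batch

history_callback = model.fit_generator(my_generator(x_train, validation_train_data),
                                       steps_per_epoch=len(x_train) // 32,
                                       callbacks=callbacks)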

How can I make predictions from a trained model inside a Tensorflow input pipeline?

I am trying to train a model for emotion recognition, which uses one of VGG's layer's output as an input.
I could manage what I want by running the prediction in a first step, saving the extracted features and then using them as input to my network, but I am looking for a way to do the whole process at once.
The second model uses a concatenated array of feature maps as input (I am working with video data), so I am not able to simply wire it to the output of VGG.
I tried to use a map operation as depicted in the tf.data.Dataset API documentation, this way:
def trimmed_vgg16():
    vgg16 = tf.keras.applications.vgg16.VGG16(input_shape=(224,224,3))
    trimmed = tf.keras.models.Model(inputs=vgg16.get_input_at(0),
                                    outputs=vgg16.layers[-3].get_output_at(0))
    return trimmed

vgg16_model = trimmed_vgg16()

def _extract_vgg_features(images, labels):
    pred = vgg16_model.predict(images, batch_size=batch_size, steps=1)
    return pred, labels
dataset = #load the dataset (image, label) as usual
dataset = dataset.map(_extract_vgg_features)
But I'm getting this error : Tensor Tensor("fc1/Relu:0", shape=(?, 4096), dtype=float32) is not an element of this graph which is pretty explicit. I'm stuck here, as I don't see a good way of inserting the trained model in the same graph and getting predictions "on the fly".
Is there a clean way of doing this or something similar ?
Edit: missed a line.
Edit2: added details
You should be able to connect the layers by first creating the vgg16 model and then retrieving the output tensor you want, as shown below; afterward you can use that tensor as the input to your own network.
vgg16 = tf.keras.applications.vgg16.VGG16(input_shape=(224,224,3))
network_input = vgg16.get_input_at(0)
vgg16_out = vgg16.layers[-3].get_output_at(0) # use this tensor as input to your own network
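A minimal sketch of what wiring that tensor into your own network might look like; the head layers, the 7-class output, and the freezing of the VGG weights are illustrative assumptions, not from the original post.

vgg16 = tf.keras.applications.vgg16.VGG16(input_shape=(224,224,3))
network_input = vgg16.get_input_at(0)
vgg16_out = vgg16.layers[-3].get_output_at(0)

# freeze the pre-trained layers so only the new head is trained (an assumption)
for layer in vgg16.layers:
    layer.trainable = False

# illustrative head; replace with your own emotion-recognition layers
x = tf.keras.layers.Dense(256, activation='relu')(vgg16_out)
predictions = tf.keras.layers.Dense(7, activation='softmax')(x)

full_model = tf.keras.models.Model(inputs=network_input, outputs=predictions)
full_model.compile(optimizer='adam', loss='categorical_crossentropy')
# full_model can now be trained end-to-end on the (image, label) dataset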

Hold out tensorflow 1.4 new dataset API

With the new dataset object, is there a way to divide a dataset into a training and a test dataset, according to a certain ratio, to get a hold-out set? And to do k-fold cross-validation?
In my case I wrote all the data into a single TFRecord file and then imported it with tf.data.TFRecordDataset.
Now, for the hold-out I'd like a way to split this dataset in two according to a ratio. I solved this with data.take() and data.skip(), but for the ratio I need the dataset's length, which is not graceful.
def split_dataset(dataset, ratio, n):
    count_train = (n*ratio)//100
    train = dataset.take(count_train)
    test = dataset.skip(count_train)
    return train, test
filenames = ["dataset_breast.tfrecords"]
dataset = tf.data.TFRecordDataset(filenames)
train_dataset, test_dataset = split_dataset(dataset, 80, 3360)
As for k-fold, I have only found a solution that applies a scikit-learn workaround to the data before importing it with tf.data.TFRecordDataset.
I do not know of any built-in feature like what you're describing. There are, of course, ways to achieve the functionality you're after. Here are two:
"Source" Placeholder
This one comes straight from the API docs. Though it is originally intended for TFRecordDataset, I imagine it could be adapted to other types. I'll copy/paste from the docs:
filenames = tf.placeholder(tf.string, shape=[None])
dataset = tf.data.TFRecordDataset(filenames)
dataset = dataset.map(...) # Parse the record into tensors.
dataset = dataset.repeat() # Repeat the input indefinitely.
dataset = dataset.batch(32)
iterator = dataset.make_initializable_iterator()
# You can feed the initializer with the appropriate filenames for the current
# phase of execution, e.g. training vs. validation.
# Initialize `iterator` with training data.
training_filenames = ["/var/data/file1.tfrecord", "/var/data/file2.tfrecord"]
sess.run(iterator.initializer, feed_dict={filenames: training_filenames})
# Initialize `iterator` with validation data.
validation_filenames = ["/var/data/validation1.tfrecord", ...]
sess.run(iterator.initializer, feed_dict={filenames: validation_filenames})
A little discussion: This works with just one training tfrecord and one validation tfrecord, too. So, you could split your data using sklearn.model_selection.train_test_split before writing the TFRecords. Write one TFRecord dedicated to training data, one dedicated to validation data. With sklearn, you can specify a ratio (or an absolute number).
Two Datasets
Exactly like the name says, forget the filenames = tf.placeholder. Just create two iterators, one for testing and one for training. I usually use TFRecords, but you're free to try another type of dataset. Typically, I put the get_next calls into a tf.cond on a boolean tf.placeholder. If you're especially interested in this method, I could provide an MWE. But the source placeholder seems to be the preferred method (seeing as it's in the docs...).
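A minimal TF 1.x sketch of that two-dataset idea; the file names, parse_fn, and batch size are illustrative assumptions, and depending on the TF version you should verify that tf.cond does not pull from both iterators at once.

train_ds = tf.data.TFRecordDataset(["train.tfrecords"]).map(parse_fn).batch(32).repeat()
test_ds = tf.data.TFRecordDataset(["test.tfrecords"]).map(parse_fn).batch(32).repeat()

train_iter = train_ds.make_one_shot_iterator()
test_iter = test_ds.make_one_shot_iterator()

# boolean placeholder decides which pipeline feeds the model at run time
use_train = tf.placeholder(tf.bool, shape=[])
features, labels = tf.cond(use_train,
                           lambda: train_iter.get_next(),
                           lambda: test_iter.get_next())
# sess.run([...], feed_dict={use_train: True})   # training step
# sess.run([...], feed_dict={use_train: False})  # evaluation step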
