read_train_sets() missing 1 required positional argument: 'classes' - python

I've been looking over a few TensorFlow and Keras guides, and I'm about as much of a beginner as you can get when it comes to Python. Any help with the problem below would be much appreciated. I'm struggling to figure out what is wrong with the line of code below. I'm importing read_train_sets from a separate file, where it is defined as:
def read_train_sets(self, train_path, image_width, image_height, classes, validation_size):
I then called this in a separate file with the following code:
data = read_train_sets(train_path, img_width, img_height, classes, validation_size=0.2)
But then I got an error message that says:
<ipython-input-22-e2aa446e36dd> in <module>
----> 1 data = read_train_sets(train_path, img_width, img_height, classes, validation_size=0.2)
TypeError: read_train_sets() missing 1 required positional argument: 'classes'
Any idea what this means? I thought I was already passing classes, but then again, I could be wrong.

read_train_sets is a method that belongs to the class DataSet.
In your code:
data = read_train_sets(train_path, img_width, img_height, classes, validation_size=0.2)
You are calling it as if it were a standalone function. Because it is a method, Python expects the instance as the implicit self argument; when you call it directly, train_path is consumed as self, each remaining argument shifts one position to the left, and classes ends up with no value, which is why the error reports it as missing.
The correct way to call that function should be something like:
data = dataset.read_train_sets(train_path, img_width, img_height, classes, validation_size=0.2)
You will first need to create a DataSet object (I named it dataset in the example).
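Putting it together, a minimal sketch, assuming the method lives in dataset.py (the DataSet constructor may need its own arguments; check the file linked below):
from dataset import DataSet

dataset = DataSet()  # hypothetical: the real constructor may require arguments
data = dataset.read_train_sets(train_path, img_width, img_height,
                               classes, validation_size=0.2)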
Source: https://github.com/rdcolema/tensorflow-image-classification/blob/master/dataset.py

Related

Where does keras actually initialize the dataset?

I'm trying to figure out the implementation of SGD in tensorflow, especially how keras/tensorflow actually initializes and dispatches the dataset.
In the constructor (__init__ method) of class TensorLikeDataAdapter, self._dataset is initialized by this line
https://github.com/tensorflow/tensorflow/blob/r2.5/tensorflow/python/keras/engine/data_adapter.py#L346
self._dataset = dataset
I tried to print the value out with this line
print('enumerate_epochs self._dataset', list(self._dataset))
and I got
<_OptionsDataset shapes: ((None, 2), (None,)), types: (tf.float32, tf.float32)>
which seems to indicate that the dataset hasn't yet been actually loaded.
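As I understand it, a tf.data.Dataset is evaluated lazily: printing the object shows only its structure, while iterating it materializes elements. A minimal sketch of that behaviour, separate from the Keras internals:
import tensorflow as tf

ds = tf.data.Dataset.from_tensor_slices([[1., 2.], [3., 4.]])
print(ds)        # only shapes/types are shown; nothing is loaded yet
print(list(ds))  # iterating forces evaluation and materializes the tensors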
At the very beginning of the enumerate_epochs method
https://github.com/tensorflow/tensorflow/blob/r2.5/tensorflow/python/keras/engine/data_adapter.py#L1196
I added this line
def enumerate_epochs(self):
    print('enumerate_epochs self._dataset', list(self._dataset))
and I got 3 copies (I set epochs=3) of the actual dataset, which means the dataset has been initialized and randomized somewhere before this point.
I went through the whole data_adapter.py but failed to locate where the dataset is actually initialized.
I also tried these lines
print('data_handler._dataset', data_handler._dataset)
for epoch, iterator in data_handler.enumerate_epochs():
and I got
data_handler._dataset <_OptionsDataset shapes: ((None, 2), (None,)), types: (tf.float32, tf.float32)>
However, adding the same print at the top of _truncate_execution_to_epoch
def _truncate_execution_to_epoch(self):
    print('_truncate_execution_to_epoch self._dataset', list(self._dataset))
gives 3 (epochs=3) passes over the actual dataset, which means the dataset is actually initialized somewhere in between, though I can't see where!
I also tried adding a print inside DataHandler, just before _configure_dataset_and_inferred_steps is called:
print('DataHandler self._dataset', list(self._dataset))
self._configure_dataset_and_inferred_steps(strategy, x, steps_per_epoch,
                                           class_weight, distribute)
and I got this error
AttributeError: 'DataHandler' object has no attribute '_dataset'
Could someone help me see the light at the end of the tunnel?

Keras : ValueError: `y` argument is not supported when using `keras.utils.Sequence` as input

I wanted to do a simple classification task, but when I try to run it I get this error:
ValueError: `y` argument is not supported when using `keras.utils.Sequence` as input.
Use the keyword validation_data. If you pass your validation dataset without the keyword, Keras thinks validation_dataset is your labels.
model.fit(train_dataset, validation_data=validation_dataset, epochs=1)  # batch size is set by the Sequence itself, not here
Let's imagine you want to predict whether a picture shows a dog or a cat, so you have two labels, Dog and Cat. For a CNN dataset, the directory structure most people use (and the one image_generator.flow_from_directory expects) looks like this:
Train/
.........Cat/
...............Image1.jpg
...............Image2.jpg
.........Dog/
...............Image3.jpg
...............Image4.jpg
Use the same structure for your validation directory. By the way, your directory Label should be renamed validation (it's not really important, but it makes more sense).
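For completeness, a minimal sketch of wiring this up with flow_from_directory; the paths, image size, and the compiled model are placeholders and assumptions, not taken from the question:
from tensorflow.keras.preprocessing.image import ImageDataGenerator

image_generator = ImageDataGenerator(rescale=1. / 255)
train_dataset = image_generator.flow_from_directory(
    'Train/', target_size=(224, 224), batch_size=32, class_mode='binary')
validation_dataset = image_generator.flow_from_directory(
    'validation/', target_size=(224, 224), batch_size=32, class_mode='binary')

# model: your already-compiled tf.keras model
model.fit(train_dataset, validation_data=validation_dataset, epochs=1)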

What is the need to return a function object while creating a data set using tensorflow

I am new to machine learning and I am trying to create a model using the TensorFlow API, from the tutorial in the TensorFlow documentation here
But I am having trouble understanding this part of the code
def make_input_fn(data_df, label_df, num_epochs=10, shuffle=True, batch_size=32):
    def input_function():  # inner function, this will be returned
        ds = tf.data.Dataset.from_tensor_slices((dict(data_df), label_df))  # create tf.data.Dataset object with data and its label
        if shuffle:
            ds = ds.shuffle(1000)  # randomize order of data
        ds = ds.batch(batch_size).repeat(num_epochs)  # split dataset into batches of 32 and repeat process for number of epochs
        return ds  # return a batch of the dataset
    return input_function  # return a function object for use
The returned function object is then stored in a variable
train_input_fn = make_input_fn(dftrain, y_train)
And at last the model is trained with the data set
linear_est.train(train_input_fn)
I fail to see what we gain by returning the inner function object from make_input_fn instead of just returning our dataset and passing that to train the model.
I am a beginner in Python and have just started learning machine learning. I haven't been able to find a proper answer to my question, so if anyone can explain it in a beginner-friendly way, I would be much obliged.
I fail to see what we gain by returning the inner function object from make_input_fn instead of just returning our dataset and passing that to train the model.
In Python, this pattern is a closure (often described as currying): the outer function's arguments are captured by the inner function, so a multi-argument call is transformed into a zero-argument function that can be evaluated later, after the arguments have already been fixed.
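A minimal sketch of the idea, independent of TensorFlow:
def make_adder(n):
    def add(x):  # the inner function "closes over" n
        return x + n
    return add   # return the function itself, uncalled

add_five = make_adder(5)  # nothing is computed yet
print(add_five(3))        # 8 -- evaluation happens only at call time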
In tensorflow, based on the documentation (https://www.tensorflow.org/api_docs/python/tf/estimator/LinearClassifier#train).
train(
    input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
The method train of the estimator expects a parameter input_fn. The reason is that every time you call Estimator.train(), it creates a new graph by invoking both input_fn and model_fn and connecting them together. If you supplied a tensor or a dataset directly instead of a function, the estimator could not rebuild the input pipeline inside that fresh graph, and you would get errors.
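In other words, the estimator, not your code, decides when to build the dataset; it simply calls the function you hand it. A sketch reusing the tutorial's names (dftrain, y_train):
train_input_fn = make_input_fn(dftrain, y_train)
ds = train_input_fn()  # roughly what the estimator does internally, inside its own graph
print(ds)              # a fresh, batched, repeated tf.data.Dataset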

How to use layer normalization in tensorflow 1.12?

I am stuck with tensorflow 1.12, and I need to use layer normalization. I can't find any examples of this, and as I am new to TensorFlow I am unable to figure out where I am going wrong.
tf.contrib.layers.layer_norm is the function that I want to include in my tf.keras.Sequential() like this -
self.module = K.Sequential([
    tf.contrib.layers.layer_norm(trainable=True),
    K.layers.Activation(self.activation),
    K.layers.Dense(units=self.output_size, activation=None, kernel_initializer=self.initializer)
])
I also tried using
self.ln = tf.contrib.layers.layer_norm(trainable=True)
### and in call()
self.ln(self.module)
In both cases, it throws the error at the line calling tf.contrib.layers.layer_norm(trainable=True):
TypeError: layer_norm() missing 1 required positional argument: 'inputs'
I understand that the inputs need to be given as an argument to layer_norm, but if I want it to be trainable, it can only be defined in __init__(). Where am I going wrong?
I mainly use PyTorch, so it is quite possible that I have not grasped the TensorFlow way of doing things. Any suggestions would be very helpful!
Sequential needs to be initialized with a list of Layer instances, such as tf.keras.layers.Activation or tf.keras.layers.Dense. tf.contrib.layers.layer_norm is a function, not a Layer instance.
There is a third-party implementation of layer normalization in Keras style, keras-layer-normalization, but I haven't tested it with tensorflow.
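If you need to stay on stock TF 1.12, one possible workaround (a minimal sketch, untested on 1.12) is to wrap the functional call in a small tf.keras.layers.Layer subclass, so the call is deferred until inputs exist:
import tensorflow as tf

class LayerNorm(tf.keras.layers.Layer):
    # Wraps the functional tf.contrib.layers.layer_norm in a Layer instance.
    def call(self, inputs):
        # Variables are created on the first call; verify they end up in
        # your trainable variables before relying on this in training.
        return tf.contrib.layers.layer_norm(inputs, trainable=True)

# Then, inside __init__ from the question:
# self.module = K.Sequential([
#     LayerNorm(),
#     K.layers.Activation(self.activation),
#     K.layers.Dense(units=self.output_size, activation=None,
#                    kernel_initializer=self.initializer),
# ])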

What is the difference between these two ways of building a model in keras?

I am new to Keras, and after going through a few tutorials I started building a model and found these two styles of implementation. However, I am getting an error in the first one, while the second one works fine. Can someone explain the difference between the two?
First Method:
visible = Embedding(QsVocabSize, 1024, input_length=max_length_inp, mask_zero=True)
encoder = LSTM(100,activation='relu')(visible)
Second Method:
model = Sequential()
model.add(Embedding(QsVocabSize, 1024, input_length=max_length_inp, mask_zero=True))
model.add(LSTM(100,activation ='relu'))
This is the error I get:
ValueError: Layer lstm_59 was called with an input that isn't a symbolic tensor. Received type: <class 'keras.layers.embeddings.Embedding'>. Full input: [<keras.layers.embeddings.Embedding object at 0x00000207BC7DBCC0>]. All inputs to the layer should be tensors.
They're two ways of creating DL models in Keras. The first code snippet follows the functional style, which is used for creating complex models with multiple inputs/outputs, shared layers, etc.
https://keras.io/getting-started/functional-api-guide/
The second code snippet uses the Sequential style, which builds simple models by just stacking layers.
https://keras.io/getting-started/sequential-model-guide/
If you read the functional API guide, you'll notice the following point:
'A layer instance is callable (on a tensor), and it returns a tensor'
Now the error you're seeing makes sense: this line only creates the layer and doesn't invoke it by passing a tensor.
visible = Embedding(QsVocabSize, 1024, input_length=max_length_inp, mask_zero=True)
Subsequently, passing this Embedding object to the LSTM layer throws an error, as the layer expects a tensor.
This is an example from the functional API guide. Notice the output tensors getting passed from one layer to another.
main_input = Input(shape=(100,), dtype='int32', name='main_input')
x = Embedding(output_dim=512, input_dim=10000, input_length=100)(main_input)
lstm_out = LSTM(32)(x)
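Applying the same pattern to the first method from the question, the Embedding layer needs an Input tensor to be called on (a sketch; QsVocabSize and max_length_inp are the question's own variables):
from keras.layers import Input, Embedding, LSTM

inputs = Input(shape=(max_length_inp,), dtype='int32')
visible = Embedding(QsVocabSize, 1024, input_length=max_length_inp, mask_zero=True)(inputs)
encoder = LSTM(100, activation='relu')(visible)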
