Keras remove layers after model.fit() - python

I'm using Keras for my modelling work and I wonder whether it is possible to remove certain layers by index or name? Currently the only method I know of is model.pop(), but it just removes the most recently added layer. In addition, layers are of type TensorVariable and I have no idea how to remove a certain element the way I could with a NumPy array or a list. BTW, I'm using the Theano backend.

It is correct that model.pop() just removes the last added layer and there is no other documented way to delete intermediate layers.
You can always get the output of any intermediate layer like so:
from keras.applications.vgg19 import VGG19
from keras.models import Model

base_model = VGG19(weights='imagenet')
model = Model(inputs=base_model.input, outputs=base_model.get_layer('block4_pool').output)
Example taken from here: https://keras.io/applications/
Then add your new layers on top of that.
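For example, a minimal sketch (the pooling choice, layer sizes, and class count are illustrative assumptions):
from keras.applications.vgg19 import VGG19
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

base_model = VGG19(weights='imagenet')
# Keep everything up to (and including) 'block4_pool'.
intermediate = base_model.get_layer('block4_pool').output

# New head on top of the truncated network.
x = GlobalAveragePooling2D()(intermediate)
x = Dense(256, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)  # e.g. 10 target classes

model = Model(inputs=base_model.input, outputs=predictions)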

Related

How to resume prediction(testing) after modifying a hidden layer in TensorFlow?

What I want to test is the difference in the output before and after I modify the last hidden layer's values.
So I can divide my question into two parts.
First:
last_hidden_layer_values=[1,2,3,4,5]
And I want to set it to:
my_hidden_layer_values=[1,1,1,1,1]
Second:
After modifying last_hidden_layer_values into my_hidden_layer_values, how can I resume the model prediction?
I'm using TensorFlow to evaluate this, but I am new to it. I have searched a little; is tf.train.Checkpoint the answer? It seems to be only for training, though.
Let's say you have two different models: one with the original hidden layer values, and one with the altered hidden layer values. If you want to compare the different predictions of these two models on some input, you can use the predict method.
i.e. something like
model_1.predict(input_x)
# and
model_2.predict(input_x)
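For concreteness, here is a hedged sketch of one way to build the two models: split the network at the last hidden layer so its values can be read out and overwritten before prediction. The layer name 'last_hidden', the saved-model path, and the dummy input are illustrative assumptions, and the loop assumes the layers after the hidden layer form a simple chain.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model('my_model.h5')  # hypothetical saved model
hidden = model.get_layer('last_hidden')            # hypothetical layer name
idx = model.layers.index(hidden)

# Sub-model 1: input -> values of the last hidden layer.
to_hidden = tf.keras.Model(inputs=model.input, outputs=hidden.output)

# Sub-model 2: last hidden layer values -> final prediction.
hidden_in = tf.keras.Input(shape=hidden.output_shape[1:])
x = hidden_in
for layer in model.layers[idx + 1:]:
    x = layer(x)
from_hidden = tf.keras.Model(inputs=hidden_in, outputs=x)

input_x = np.random.rand(1, *model.input_shape[1:])              # dummy input
last_hidden_layer_values = to_hidden.predict(input_x)            # e.g. [1, 2, 3, 4, 5]
my_hidden_layer_values = np.ones_like(last_hidden_layer_values)  # e.g. [1, 1, 1, 1, 1]

print(from_hidden.predict(last_hidden_layer_values))  # prediction before the change
print(from_hidden.predict(my_hidden_layer_values))    # prediction after the change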
Hope this helps!

Creating a Text Classifier with Other data features in Tensorflow 2.0/Keras

Main question: How do I create a neural network that can classify text data along with numerical features?
It sounds simple, but I must not be understanding something correctly.
Background
I'm trying to build a text classifier (for the first time) using TensorFlow 2/Keras to look through app store reviews and classify them into the following categories: happy, pricingIssue, techIssue, productIssue, miscIssue
I have a data set that contains: star_rating, review_text and the associated labels.
Problem
My understanding from this tutorial from TensorFlow is that I need to use a TensorFlow Hub layer to embed the sentences as a fixed-shape output.
import tensorflow as tf
import tensorflow_hub as hub

embedding = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1"
hub_layer = hub.KerasLayer(embedding, input_shape=[], dtype=tf.string, trainable=True)
And then I would build the model using that as my input layer.
model = tf.keras.Sequential()
model.add(hub_layer)
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
So my question is: where do I feed the numerical rating into the model?
Potential Solutions?
Use two input layers and merge them somehow? I would think I'd use the hub layer to embed the text, another input layer for the numerical data, and then pipe both into the next layer.
Embed the string first and then append the rating to it? I could also see creating a function that preprocesses the data into an array, appends the rating onto the end of the embedded string, and uses the whole thing as the input object.
I'm stumped and any guidance is helpful!!
After consulting with an expert, I learned that both of the above solutions can work, but they have different trade-offs:
Using two input layers: You can do this, but not with a sequential model, since the data flow is no longer a single sequence. It's a more traditional graph, built with the functional API (see the sketch after this list).
Embed the string first: Because the embedding layer is pre-trained, it doesn't need to run inside the model; the text can be embedded up front and then packed into a tensor along with the numerical rating.
Since I'm most familiar with TensorFlow 2 and Keras, I opted for the second option, so I can continue to use a sequential model.
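Here is a hedged sketch of the two-input (functional) approach; the hub URL is the one from the question, while the layer sizes, input names, and compile settings are illustrative assumptions:
import tensorflow as tf
import tensorflow_hub as hub

embedding = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1"
hub_layer = hub.KerasLayer(embedding, dtype=tf.string, trainable=True)

text_in = tf.keras.Input(shape=(), dtype=tf.string, name='review_text')
rating_in = tf.keras.Input(shape=(1,), dtype=tf.float32, name='star_rating')

embedded = hub_layer(text_in)                                # (batch, 20)
merged = tf.keras.layers.concatenate([embedded, rating_in])  # (batch, 21)

x = tf.keras.layers.Dense(16, activation='relu')(merged)
out = tf.keras.layers.Dense(5, activation='softmax')(x)      # 5 categories

model = tf.keras.Model(inputs=[text_in, rating_in], outputs=out)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
Training then takes a list of inputs, e.g. model.fit([review_texts, star_ratings], labels).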
There’s another option for adding in non-text data to text models: make the data textual. The exact way you do this depends on the tokenizer you are using, and how your model handles words it hasn’t seen before (OOV words). But, similar to how you might see special tokens like __EOS__ to tell the model that one sentence ended and the next is beginning, you could prepend a text version of the rating to the review string: review_string = “_5_stars_ “ + review_string.
This sounds like such a hack it can’t possibly work, but I’ve talked to someone at AWS using it in production to pass metadata to a text model.
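Purely as an illustration (the token format is an assumption, not a convention of any particular tokenizer):
def add_rating_token(review_text, star_rating):
    # Prepend a textual rating token so a pure-text model can "see" the rating.
    return "_{}_stars_ ".format(int(star_rating)) + review_text

add_rating_token("Great app but it crashes on launch", 5)
# -> '_5_stars_ Great app but it crashes on launch'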

Keras LSTM use softmax on every unit

I am creating a model somewhat similar to the one mentioned below:
[figure: model architecture diagram, not included]
I am using Keras to create such a model but have hit a dead end, as I have not been able to find a way to apply a softmax to the outputs of the LSTM units. So far, all the tutorials and material I've found cover outputting a single class, as in the image-captioning example in this link.
So, is it possible to apply a softmax at every timestep of the LSTM (where return_sequences is true), or do I have to move to PyTorch?
The answer is: yes, it is possible to apply a softmax at each timestep of the LSTM, and no, you do not have to move to PyTorch.
While in Keras 1.x you needed an explicit TimeDistributed wrapper, in Keras 2.x you can just write (note return_sequences=True, so there is an output to classify at every timestep):
model.add(LSTM(50, activation='relu', return_sequences=True))
model.add(Dense(number_of_classes, activation='softmax'))
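A minimal runnable sketch, assuming 10 timesteps, 8 input features, and 5 classes per timestep (all illustrative):
from keras.models import Sequential
from keras.layers import LSTM, Dense

number_of_classes = 5

model = Sequential()
# return_sequences=True emits an output at every timestep.
model.add(LSTM(50, activation='relu', return_sequences=True, input_shape=(10, 8)))
# In Keras 2, Dense applied to a 3D tensor acts on the last axis,
# i.e. it is applied independently at every timestep.
model.add(Dense(number_of_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.summary()  # output shape: (None, 10, 5) -- one softmax per timestep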

Wrong version of function getting called in Python

I am working on retrieving the Inception V3 model's top layer in Keras/TensorFlow (in a Jupyter notebook).
I could retrieve the Inception V3 model and its weights correctly.
Now, I am trying to get the fully connected (top) layer using the following code snippet.
from keras.applications.inception_v3 import InceptionV3

base_model = InceptionV3(weights=weights)  # weights defined earlier, e.g. 'imagenet'
base_model.get_layer('flatten')
However, the call fails with:
"ValueError: No such layer: flatten"
When I looked at the stack trace, the get_layer() function being called is the one from topology.py under keras/engine.
I expected the get_layer() function from models.py, directly under keras, to be called instead.
What could the problem be? How can I force Python to call the correct version? Or is there another way to get the weights from the InceptionV3 model?
I just tried enumerating the contents of the base_model.layers list and found that the layer names are different; there is no layer named 'flatten'.
So I replaced 'flatten' with the last layer, named 'mixed10' (presumably the fully connected layer), and the code worked.
Is this the right thing to do, or am I doing something improper?
It turned out that the names of these layers can change. So the best way is to enumerate all the layer names using Model.layers[i].name or Model.summary() and use whichever listed name you want.
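For example (a minimal sketch, assuming ImageNet weights):
from keras.applications.inception_v3 import InceptionV3

base_model = InceptionV3(weights='imagenet')
for i, layer in enumerate(base_model.layers):
    print(i, layer.name)
# or simply:
base_model.summary()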
The InceptionV3 model has no 'flatten' layer.
To load the model without its fully connected top (so you can attach your own), you can just use:
base_model = InceptionV3(weights=weights, include_top=False)
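A hedged sketch of attaching a custom top to the truncated model (the pooling choice, layer sizes, and class count are illustrative assumptions):
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

base_model = InceptionV3(weights='imagenet', include_top=False)
x = GlobalAveragePooling2D()(base_model.output)
x = Dense(1024, activation='relu')(x)
predictions = Dense(200, activation='softmax')(x)  # e.g. 200 classes
model = Model(inputs=base_model.input, outputs=predictions)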

Fetch weights of the previous layer

Is it possible to fetch the weights of the previous layer, modify them, and set them on the next layer? I want to introduce a custom layer in the network which will modify the weights (as per the desired logic) and then set the modified weight values on the next layer, similar to what is depicted in the figure below:
[figure: weights read from one layer, modified, and set on the next; not included]
I am not sure whether this is possible. I know that we can dump a snapshot and then use it to set the new weights, and I can also convert the weights using the snapshots. But I don't know how to do this within the network itself (without taking or using any snapshot).
tl;dr: Load one model (without compiling) and use its weights to initialize another model; create new weights only for the layers you want to change.
Full version:
As per this thread, and as explained by fchollet himself, the canonical way to do this is to load your weights into the previous Keras model (you don't need to compile it, so it's instant) and use that model as a queryable data structure to access the weights.
For a sequential model you can do it like this:
weights = model.layers[5].get_weights()  # e.g. [kernel, bias] for a Dense layer
# ... modify the weight arrays as per the desired logic ...
model.layers[5].set_weights(weights)
See also: another discussion on this topic with fchollet.
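For completeness, a hedged end-to-end sketch: read one layer's weights, apply some logic, and write them into another layer with identical weight shapes. The architecture and the halving transform are illustrative assumptions.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Two hidden Dense layers with identical kernel/bias shapes (32x32 and 32),
# so weights from one can be set on the other.
model = Sequential([
    Dense(32, activation='relu', input_shape=(32,)),
    Dense(32, activation='relu'),
    Dense(1, activation='sigmoid'),
])

kernel, bias = model.layers[0].get_weights()
# Apply the desired logic, e.g. halve the kernel (illustrative).
model.layers[1].set_weights([kernel * 0.5, bias])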
