Wrong version of function getting called in Python

I am working on retrieving Inception V3 model's top layer in Keras/Tensorflow (in Jupyter Notebook).
I could retrieve the Inception V3 model and its weights correctly.
Now, I am trying to get Fully Connected layer (top layer) using following code snippet.
base_model = InceptionV3(weights=weights)
base_model.get_layer('flatten')
However, the call fails with:
"ValueError: No such layer: flatten"
When I looked at the stack trace, the get_layer() function being called is the one from topology.py, under keras/engine.
Instead of that one, the get_layer() function from models.py, directly under keras, should have been called.
What could the problem be? How can I force Python to call the correct version? Or is there another way to get the weights from the InceptionV3 model?
I just tried enumerating the contents of the base_model.layers list and found that the layer names are different; no layer named 'flatten' exists.
So I replaced 'flatten' with 'mixed10', the last layer (presumably the fully connected one), and the code worked.
Is this the right thing to do, or am I doing something improper?

It turned out that the names of these layers keep changing between versions. So the best approach is to enumerate all the layer names using Model.layers[i].name or Model.summary() and use whichever name listed in the output you want.
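A minimal sketch of that enumeration, assuming the standard keras.applications import path:

from keras.applications.inception_v3 import InceptionV3

base_model = InceptionV3(weights='imagenet')

# Print every layer name; model.summary() gives the same names plus shapes.
for layer in base_model.layers:
    print(layer.name)

# Use a name that actually appears in the listing, e.g. 'mixed10'.
layer = base_model.get_layer('mixed10')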

The InceptionV3 model has no 'flatten' layer.
To get the model without the top fully connected part, you can just use
base_model = InceptionV3(weights=weights, include_top=False)
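With include_top=False the classification head is dropped entirely, so there is no 'flatten' or 'predictions' layer to look up; the model's output is the final convolutional feature map. A short sketch:

from keras.applications.inception_v3 import InceptionV3

# include_top=False strips the fully connected head.
base_model = InceptionV3(weights='imagenet', include_top=False)

features = base_model.output          # output of the last convolutional block
print(base_model.layers[-1].name)     # 'mixed10' in current Keras versions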

Related

keras problems loading custom model from yolov2

I've searched around for answers regarding Keras's load_model, but I still have a question.
I am following this model really closely (https://github.com/experiencor/keras-yolo2), and am training on a custom dataset.
I have done the training, which gives me a yolov2.h5 file, basically the model weights to load into the Keras model. But I am encountering some problems when loading the model.
When loading the model (in a separate script, separate.py) with
model = load_model('file_dir/yolov2.h5')
First I encounter the issue
NameError: name 'tf' is not defined
I then searched around and modified my code to add custom objects, like so:
model = load_model('file_dir/yolov2.h5', custom_objects={'tf':tf})
This clears the first error but results in another
ValueError: Unknown loss function : custom_loss
I used the custom_loss function from yolov2 (https://github.com/experiencor/keras-yolo2/blob/master/frontend.py), so I tried to solve it with
from frontend import YOLO
model = load_model('file_dir/yolov2.h5', custom_objects={'tf': tf, 'custom_loss': YOLO.custom_loss})
But ran into another error:
TypeError: custom_loss() missing 1 required positional argument
I'm rather stuck here because I have no idea how to fit in the parameters for custom_loss. I don't particularly understand this part, since I'm loading my model in a different Python script, separate.py. I'd appreciate some help with this. Thank you so much!
(Edit: This fix doesn't work for me either)
model = load_model('file_dir/yolov2.h5', compile = False)
To resolve this problem, since you already have the network definition at hand, save only the trained weights (as the Keras checkpoint callback does).
For testing, build the model (no need to compile it) and then load the trained weights with model.load_weights(path/to/saved/weights).
You can also pass by_name=True if you build the network in a different way; in that case you have to keep the layer names the same.
Another option is to set the weights manually: load the .h5 file with h5py, e.g. h5py.File(path/to/weights, mode='r') (have a look at how Keras does it), then match the layer names of the model against the loaded weights.
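A sketch of the weights-only round trip described above; make_model() is a hypothetical stand-in for whatever function builds your YOLO network:

import h5py

# --- training script ---
model = make_model()                       # hypothetical: builds the YOLO network
model.save_weights('file_dir/yolov2_weights.h5')

# --- separate.py (testing) ---
model = make_model()                       # rebuild the same architecture
# No compile() needed for inference, so custom_loss never comes into play.
model.load_weights('file_dir/yolov2_weights.h5')

# If the network is rebuilt differently but layer names are kept:
model.load_weights('file_dir/yolov2_weights.h5', by_name=True)

# Manual alternative: inspect the saved weights with h5py and match
# the group names against your model's layer names.
with h5py.File('file_dir/yolov2_weights.h5', mode='r') as f:
    print(list(f.keys()))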

get Embeddings by reloading tensorflow model

I saved a TensorFlow model (a .pb file) trained using transfer learning, taking this as a reference, with the following code added at the end:
tf.train.write_graph(graph_name, saving_dir_path, 'output.pb', as_text=False)
It saved successfully. But now, after training, I want to get the embedding output. The following is the last layer defined for training in the graph, under the name scope final_training_ops:
with tf.name_scope('Wx_plus_b'):
    logits = tf.add(tf.matmul(bottleneck_input, layer_weights), layer_biases, name='logits')
After reloading the saved model, I am using tf.get_default_graph().get_tensor_by_name('Wx_plus_b/logits') to access the layer so I can pass an image through it and get the embeddings, but I get an 'invalid operation name' error.
I spent more time on it and found that the correct syntax is of the form <op_name>:<output_index>:
tf.get_default_graph().get_tensor_by_name('Wx_plus_b/logits:0')
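A minimal sketch of fetching that tensor after reloading the frozen graph; the file path and the TF 1.x-style API are assumptions based on the question:

import tensorflow as tf

# Load the frozen graph saved with tf.train.write_graph.
with tf.gfile.GFile('saving_dir_path/output.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

tf.import_graph_def(graph_def, name='')

# ':0' selects the first output of the 'Wx_plus_b/logits' op.
logits = tf.get_default_graph().get_tensor_by_name('Wx_plus_b/logits:0')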

Visualizing the input that maximizes the activation of a layer with dropout in Keras

I'm trying to replicate the code in the blog article "How convolutional neural networks see the world".
It works well in a CNN with no dropout layers, but when there is one (or more) dropout layer, I can't directly use the layer.output line because it expects a learning phase.
When I use the recommended way to extract the output of a layer:
get_layer_output = K.function([model.layers[0].input, K.learning_phase()],
                              [model.layers[layer_index].output])
layer_output = get_layer_output([input_img, 0])[0]
The problem is that I can't pass a placeholder as input_img because the function expects "real" data, but if I pass "real" data directly then the rest of the code doesn't work (creating the loss and gradients and iterating requires a placeholder).
Is there a way I can make this work?
I'm using the Tensorflow backend.
EDIT: I solved my issue by calling the K.set_learning_phase() method before doing anything else, such as building my model (I had to start from a fresh environment and call the method right after the imports).
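A sketch of that fix, assuming the TensorFlow backend:

from keras import backend as K

# Fix the learning phase to 0 (test mode) right after the imports,
# before any layer is built, so dropout is disabled and layer.output
# no longer expects a learning-phase placeholder.
K.set_learning_phase(0)

# ... build the model and run the gradient-ascent loop as in the blog post ...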

Keras - Connecting Functional API models together

I'm trying to connect two functional API models together. Here's the summary of the two models:
The first "input" model (it works just fine on its own):
The second model, which is supposed to be connected to the first model:
I'm trying to connect them together like this:
model = Model(input=generator.input, output=[discriminator.output[0], discriminator.output[1]])
But I get this error:
Graph disconnected: cannot obtain value for tensor discriminator_input at layer "discriminator_input". The following previous layers were accessed without issue: []
I tried to make a model out of them like this:
Model(input=[generator.input, discriminator.input], output=[discriminator.output[0], discriminator.output[1]])
But this code just resulted in the second model (not the two of them together), or at least that is what I gather after printing a summary of the model and plotting its structure.
Can we do this in Keras (connect functional API models), or is there another way?
Thanks
I think Model expects layers, while you are trying to pass tensors?
You could try the following discussions; I too had issues, though mine were with TimeDistributed layers: https://github.com/fchollet/keras/issues/4178
and https://github.com/fchollet/keras/issues/2609
I had a similar problem and got it fixed with some help. Have a look here: Stacking models in Class Model API.
I asked the question on the Keras GitHub page, and here's the thread about how to solve this problem.
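The usual fix is to call the second model on the first model's output tensor instead of listing both inputs. A minimal sketch with stand-in architectures (the layer sizes and single discriminator output are illustrative, not taken from the question):

from keras.models import Model
from keras.layers import Input, Dense

# Stand-ins; replace with your actual generator/discriminator.
gen_in = Input(shape=(100,))
generator = Model(gen_in, Dense(784)(gen_in))

disc_in = Input(shape=(784,))
discriminator = Model(disc_in, Dense(1, activation='sigmoid')(disc_in))

# Calling the discriminator on the generator's output tensor wires the
# two graphs together, so 'discriminator_input' is no longer disconnected.
combined = Model(generator.input, discriminator(generator.output))
combined.summary()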

Keras remove layers after model.fit()

I'm using Keras for my modelling work, and I wonder whether it is possible to remove certain layers by index or name. Currently I only know that model.pop() can do this, but it just removes the most recently added layer. In addition, the layers are TensorVariables, and I have no idea how to remove a specific element the way I could with a NumPy array or a list. BTW, I'm using the Theano backend.
It is correct that model.pop() just removes the last added layer and there is no other documented way to delete intermediate layers.
You can always get the output of any intermediate layer like so:
base_model = VGG19(weights='imagenet')
model = Model(inputs=base_model.input, outputs=base_model.get_layer('block4_pool').output)
Example taken from here: https://keras.io/applications/
Then add your new layers on top of that.
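A sketch of truncating at an intermediate layer and stacking a new head on top; the layer names come from the VGG19 example above, while the new head itself is illustrative:

from keras.applications.vgg19 import VGG19
from keras.layers import Flatten, Dense
from keras.models import Model

base_model = VGG19(weights='imagenet')

# 'Remove' everything after block4_pool by taking that layer's output...
x = base_model.get_layer('block4_pool').output

# ...then add new layers on top of it.
x = Flatten()(x)
x = Dense(10, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=x)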
