Convolution2D vs Conv2D in the Keras library, in Python

Is there any documentation on whether these are the same or different modules and the differences between these?
keras.layers.Conv2D
keras.layers.Convolution2D
keras.layers.convolutional.Conv2D
keras.layers.convolutional.Convolution2D

They're aliases for the same functionality. The reason behind this is the new Keras 2 API, which tries to give users some time to migrate their code to the new, shorter name (and its updated parameters). The other aliases are eventually going to be deprecated; using the old Keras API already shows warnings.
See Keras code
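As a quick check (assuming a Keras 2.x install), you can verify that the two names point to the same class, so either spelling builds an identical layer:

from keras.layers import Conv2D, Convolution2D

print(Conv2D is Convolution2D)               # True: Convolution2D is just an alias
layer = Convolution2D(32, (3, 3), activation='relu')
print(type(layer).__name__)                  # Conv2D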

Related

How can I "see" the model/network when loading a model from tfhub?

I'm new to this topic, so forgive my lack of knowledge. There is a very good model called Inception ResNet v2 that basically works like this: the input is an image and the output is a list of predictions with their positions and bounding rectangles. I find this very useful, and I thought of using the already trained model to recognize things that it currently can't (for example, whether a human is wearing a mask or not). Yes, I wanted to add a new recognition class to the model.
import tensorflow as tf
import tensorflow_hub as hub
mod = hub.load("https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1")
mod is an object of type
tensorflow.python.training.tracking.tracking.AutoTrackable. Reading the documentation (which was only available in the source code) was a bit hard without context,
so I tried to inspect some of its properties to see if I could figure it out by myself.
And well, I didn't. How can I see the network, the layers, the weights, the fit methods? Is it all abstracted away? Can I convert it to Keras? I want to experiment with it, see if I can modify it, and see if I could export the model to another representation, for example PyTorch.
I wanted to do this because I thought it'd be better to modify an already working model instead of creating one from scratch. Also because I'm not good at training models myself.
I've run into this issue too. The TensorFlow Hub guide says:
This error frequently arises when loading models in TF1 Hub format with the hub.load() API in TF2. Adding the correct signature should fix this problem.
mod = hub.load(handle).signatures['default']
As an example, you can see this notebook.
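As a rough sketch of how the fixed handle can then be used (the float32 image batch scaled to [0, 1] and the printed output keys are assumptions based on the model's page on tfhub.dev, not something stated here):

import tensorflow as tf
import tensorflow_hub as hub

detector = hub.load(
    "https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1"
).signatures['default']

image = tf.random.uniform([1, 640, 480, 3], dtype=tf.float32)  # placeholder image
result = detector(image)
print(list(result.keys()))  # detection boxes, scores, class entities, etc.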
You can dir the loaded model asset to see what's defined on it:
m = hub.load(handle)
dir(m)
As mentioned in the other answer, you can also look at the signatures with print(m.signatures)
Hub models are SavedModel assets and do not have a keras .fit method on them. If you want to train the model from scratch, you'll need to go to the source code.
Some models have more extensive exported interfaces including access to individual layers, but this model does not.
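A short inspection sketch along those lines (the attribute names come from TF2's SavedModel/ConcreteFunction API, not from this particular model):

m = hub.load(handle)
print(dir(m))                            # everything tracked on the loaded object
sig = m.signatures['default']
print(sig.structured_input_signature)    # the input spec the signature expects
print(sig.structured_outputs)            # the dict of outputs it produces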

What is the use of tf.keras.backend nowadays, is it safer/more future proof to code w/ or w/o it?

I understand the historical need for keras.backend in the long gone days of multiframework support. But now that we are talking about tf.keras, and since Keras is scheduled to support this toolkit only, I am wondering what is today's use for tf.keras.backend. From what I can see, it exposes only a fraction of the functions available in tf.*, and evolves more slowly.
So, is tf.keras.backend
better be avoided, because it is an obsolete remnant of the past that is likely to be dropped in a future release?
or, a future-proof alternative to tf.* to be preferred whenever possible, because this API changes at a much slower pace than TF itself and is not going down anytime soon?
or something else?
It is difficult to say either is better at this point, because the Keras backend still offers some unique features.
For example, K.rnn is a very valuable function provided by the Keras backend. It can be used to iterate over the temporal dimension of a sequential model's (LSTM/GRU) output. This is pretty useful when you have to apply a map()-like function to each temporal output of a sequential model (e.g. computing an attention vector for each LSTM output of the encoder). It is a very convenient function for this because (as far as I know) doing the same with tf.* involves tf.gather and can become ugly (especially in TF 1.x). I am not really sure about other functions that offer a unique advantage over tf.*, but there are probably a few (e.g. K.foldl).
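As a rough illustration (not part of the original answer), here is a minimal K.rnn sketch that maps a step function over the time axis, keeping a running sum per timestep:

import tensorflow as tf
from tensorflow.keras import backend as K

inputs = tf.random.normal([2, 5, 3])     # (batch, time, features)
init_state = [tf.zeros([2, 3])]          # one state tensor, shaped (batch, features)

def step(x_t, states):
    new_state = states[0] + x_t          # accumulate features at each timestep
    return new_state, [new_state]        # (output_t, new_states)

last_output, all_outputs, final_states = K.rnn(step, inputs, init_state)
print(all_outputs.shape)                 # (2, 5, 3): one output per timestep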
On the other hand, tf.* offers many more functions than the Keras backend does.
In conclusion, I think it's too early to completely avoid the Keras backend. But I do feel the Keras backend will get merged into tf.* at some point in order to offer a more consistent API.

tf.contrib.layers.fully_connected() in Tensorflow 2?

I'm trying to use tf.contrib.layers.fully_connected() in one of my projects, and it's been deprecated in TensorFlow 2.0. Is there an equivalent function, or should I just keep TensorFlow v1.x in my virtual environment for this project?
In TensorFlow 2.0 the package tf.contrib has been removed (and this was a good choice since the whole package was a huge mix of different projects all placed inside the same box), so you can't use it.
In TensorFlow 2.0 we need to use tf.keras.layers.Dense to create a fully connected layer, but more importantly, you have to migrate your codebase to Keras. In fact, you can't define a layer and use it without creating a tf.keras.Model object that uses it.
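For illustration, a minimal sketch of the Keras style (the input size and unit count are placeholders, not taken from the question):

import tensorflow as tf

inputs = tf.keras.Input(shape=(64,))
outputs = tf.keras.layers.Dense(10, activation=None)(inputs)   # the fully connected layer
model = tf.keras.Model(inputs, outputs)
model.summary()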
tf-slim, as a standalone package, already includes tf.contrib.layers. You can install it with pip install tf-slim and call it with from tf_slim.layers import layers as _layers; _layers.fully_connected(..). It is the same as the original, so it is easy to replace.
use: tf.compat.v1.layers.dense
for example, instead of
Z = tf.contrib.layers.fully_connected(F, num_outputs, activation_fn=None)
you can replace it with:
Z = tf.compat.v1.layers.dense(F, num_outputs, activation=None)
tf.contrib.layers.fully_connected() is a perfect mess. It is a very old historical artifact (a prehistoric DNN legacy), and Google has completely deprecated it. There is no direct function in TensorFlow 2.x to replace tf.contrib.layers.fully_connected(), so it is not worth inquiring about or getting to know the function.

Do variables need to be initialized in a session in tflearn?

Maybe this is a stupid question, but I switched from basic TensorFlow recently to tflearn and while I knew little of TensorFlow, I know even less of tflearn as I have just begun to experiment with it. I was able to create a network, train it, and generate a model that achieved a satisfactory metric. I did this all without using a TensorFlow session because a) none of the documentation I was looking at necessarily suggested it and b) I didn't even think to use it.
However, I would like to predict a value for a single input (the model performs regression on images, so I'm trying to get a value for a single image), and now I'm getting an error that the convolutional layers need to be initialized (specifically, "FailedPreconditionError: Attempting to use uninitialized value Conv2D/W").
The only thing I've added, though, are two lines:
model = Evaluator(network)
model.predict(feed_dict={input_placeholder: image_data})
I'm asking this as a general question because my actual code is a bit troublesome to post here; admittedly, I've been very sloppy in writing it. I will mention, however, that even if I start a session and initialize all variables before that second line, then run the line in the session, I get the same error.
Succinctly put, does tflearn require a session if I've not used TensorFlow stuff directly anywhere in my code? If so, does the model need to be trained in the session? And if not, what about those two lines would cause such an error?
I'm hoping it isn't necessary for more code to be posted, but if this isn't a general issue and is actually specific to my code then I can try to format it to be understandable here and then edit the post.
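(For context, a rough sketch of the usual tflearn flow, where tflearn.DNN owns the session and initializes the variables itself; the shapes, layer sizes and checkpoint name below are placeholders, not taken from the question.)

import tflearn
from tflearn.layers.core import input_data, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression

network = input_data(shape=[None, 64, 64, 1])
network = conv_2d(network, 32, 3, activation='relu')
network = max_pool_2d(network, 2)
network = fully_connected(network, 1, activation='linear')
network = regression(network, optimizer='adam', loss='mean_square')

model = tflearn.DNN(network)              # manages its own tf.Session internally
model.load('my_model.tflearn')            # or model.fit(X, Y, ...) to train first
prediction = model.predict([image_data])  # image_data: a single 64x64x1 image array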

Keras vs TensorFlow - does Keras have any actual benefits?

I have been implementing some deep nets in Keras, but have eventually gotten frustrated with some of its limitations (for example: setting floatx to float16 fails on batch normalization layers, and the only way to fix it is to actually edit the Keras source; implementing custom layers requires coding them in backend code, which destroys the ability to switch backends), the apparent lack of parallel training mechanisms [unlike tf.Estimator], and reports that even vanilla programs run 30% slower in Keras than in TF (if one is to trust the interwebs). I was grumbling about moving to TensorFlow, but was pleased to discover that TensorFlow (especially if you use the tf.layers stuff) is not actually any more verbose for anything imaginable you might want to do. Is this a failure of my imagination, or is tf.layers basically a backport of Keras into core TensorFlow, and is there any actual use case for Keras?
Keras used to have an upper hand on TensorFlow in the past, but ever since its author became affiliated with Google, all the features that made it attractive are being implemented into TensorFlow (you can check version 1.8). Like you rightfully pointed out, tf.layers is one such example.
