How to use Tensorflow Keras API - python

I have started learning TensorFlow, but I've noticed there's a lot of confusion about how to use it.
First, some tutorials present models using the low-level API (tf.Variable, scopes, etc.), while other tutorials use Keras instead, for example to invoke TensorBoard callbacks.
Second, what's the purpose of having a ton of duplicate APIs? Really, what's the point of a high-level API like Keras when you have the low-level API to build models like Lego blocks?
Finally, what's the true purpose of eager execution?

You can use these APIs together. E.g. if you have a regular dense network but with one special layer, you can use the higher-level APIs for the dense layers (tf.layers and tf.keras) and the low-level API for your special layer. Furthermore, complex graphs are easier to define in the low-level APIs, e.g. if you want to share variables.
Eager execution helps with fast debugging: it evaluates tensors directly, without needing to invoke a session.

There are different "levels" of APIs (high-level APIs such as Keras and Estimators, and low-level APIs such as tf.Variable, etc.) to suit different developer needs.
For the average industry developer, who already knows approximately which ML model they intend to use, Keras is a good fit. For example, if you know you want to implement a sequential model with two dense layers with softmax activation, you need only do something like:
import tensorflow as tf
from tensorflow import keras

# Two dense layers with softmax activation, as described above.
model = keras.Sequential([
    keras.layers.Dense(128, activation=tf.nn.softmax),
    keras.layers.Dense(10, activation=tf.nn.softmax),
])
Using Keras is generally simpler, as you don't have to think about low-level implementation details such as tf.Variable. For more complete examples, check out the Keras tutorials on tensorflow.org.
The low-level APIs give you finer control over the models you're developing. They are more commonly used by developers and researchers working on novel ML methods; for example, if you need a specialized layer that does something different from canonical ML methods, you can define that layer manually using the low-level APIs, as in the sketch below.
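To make that concrete, here is a minimal sketch (the ScaleShift layer is my own illustration, not from any tutorial) of a custom layer that creates and manages its own weights, mixed freely with stock Keras layers:

import tensorflow as tf

class ScaleShift(tf.keras.layers.Layer):
    """Illustrative custom layer: elementwise scale and shift."""
    def build(self, input_shape):
        # Low-level control: we create and manage our own variables.
        self.scale = self.add_weight("scale", shape=(input_shape[-1],), initializer="ones")
        self.shift = self.add_weight("shift", shape=(input_shape[-1],), initializer="zeros")
    def call(self, inputs):
        return inputs * self.scale + self.shift

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    ScaleShift(),  # custom layer sitting among stock ones
    tf.keras.layers.Dense(10, activation="softmax"),
])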
Finally, eager execution is an imperative programming style. It enables faster debugging, and it has a gentler learning curve for those new to TensorFlow, since it is more "pythonic"/intuitive. Check out the eager guide for more.
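For instance, under eager execution (the default in TensorFlow 2.x) an operation returns a concrete value immediately, with no session involved:

import tensorflow as tf

a = tf.constant([[1.0, 2.0]])
b = tf.constant([[3.0], [4.0]])
print(tf.matmul(a, b))  # tf.Tensor([[11.]], shape=(1, 1), dtype=float32)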

Related

TensorFlow Federated: Keras model with custom learning algorithm

The first tutorial describes how to build a TFF computation from a Keras model.
The second tutorial describes how to build a custom TFF computation from scratch, possibly with a custom federated learning algorithm.
What I need is a combination of these: I want to build a custom federated learning algorithm, and I want to use an existing Keras model. How can this be done?
The second tutorial requires MODEL_TYPE, which is based on MODEL_SPEC, but I don't know how to get it. I can see some variables in model.trainable_variables (where model = tff.learning.from_keras_model(keras_model, ...)), but I doubt that's what I need.
Of course, I could implement the model by hand (as in the second tutorial), but I want to avoid that.
I think you have the correct pointers for writing a custom federated computation, as well as for converting a Keras model to a tff.learning.Model, so let's focus on pulling a TFF type signature from an existing tff.learning.Model.
Once you have such a model in hand, you should be able to use tff.learning.framework.weights_type_from_model to pull out the appropriate TFF type to use for your custom algorithm.
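A minimal sketch of that step; note the exact from_keras_model signature has shifted between TFF versions, so treat the arguments as illustrative, and keras_model and input_spec are placeholders you would supply yourself:

import tensorflow as tf
import tensorflow_federated as tff

# Wrap an existing Keras model as a tff.learning.Model.
tff_model = tff.learning.from_keras_model(
    keras_model,                 # your existing Keras model
    input_spec=input_spec,       # spec describing one batch of client data
    loss=tf.keras.losses.SparseCategoricalCrossentropy())

# The TFF type of the model's weights, usable when declaring the
# signature of a custom federated computation.
model_weights_type = tff.learning.framework.weights_type_from_model(tff_model)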
There is an interesting caveat here: how precisely you use a tff.learning.Model in your custom algorithm is pretty much up to you, and this could affect your desired model weights type. That said, it is unlikely to matter in practice (most likely you will simply be assigning values from incoming tensors to the model variables), so I won't go deeper into this caveat here.
Finally, a few pointers to end-to-end custom algorithm implementations in TFF:
One of the simplest complete examples TFF has is simple_fedavg, which is totally self-contained and includes instructions for running it.
The code for a paper on Adaptive Federated Optimization contains a handwritten implementation of learning rate decay on the clients in TFF.
A similar implementation of adaptive learning rate decay (think Keras' functions to decay learning rate on plateaus) is right next door to the code for AFO.

Neural network NOT organized in layers with TensorFlow or Keras

I need to implement a neural network which is NOT organized in layers, meaning that ANY neuron may be connected to any other neuron and that there is no way to logically organize the neurons into consecutive layers.
What I'm asking for is an example, or a reference to proper and clear documentation, on how to implement this.
Originally I had my own implementation in MATLAB. However, I've been using TensorFlow and Keras to test simple models; they let you tune your networks very quickly, and the implementations are quite efficient. So I decided to try out more complex models, and I got stuck creating this type of network.
HINT: It MAY be OK to create single-neuron layers, as long as you can connect a layer to ANY other layer (without caring whether it is adjacent) and to MORE THAN ONE layer.
I'm new to TF and Keras, so a simple Python example would be appreciated, although pointing me in the right direction would also be OK.
My example network contains loops (the loops are intentional!). I don't need to train it at the moment, just to evaluate models. Keep in mind, though, that evaluating this kind of network is different too: one possible approach is to keep propagating the signal until the outputs stabilize, but that is just an example.
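Along the lines of the HINT above, here is a minimal sketch (my own illustration, not from the post) of single-neuron Dense layers wired to arbitrary, non-adjacent layers with the Keras functional API. Note that the functional API only builds directed acyclic graphs; the loops described above would require a manual update loop that repeatedly feeds outputs back in.

import tensorflow as tf
from tensorflow import keras

inp = keras.Input(shape=(2,))
# Each Dense(1) acts as a single neuron.
n1 = keras.layers.Dense(1, activation="tanh")(inp)
n2 = keras.layers.Dense(1, activation="tanh")(inp)
# n3 reads from n1, n2 *and* the raw input (a skip connection).
n3 = keras.layers.Dense(1, activation="tanh")(keras.layers.Concatenate()([n1, n2, inp]))
# The output neuron reads from the non-adjacent n1 as well as n3.
out = keras.layers.Dense(1)(keras.layers.Concatenate()([n1, n3]))

model = keras.Model(inp, out)
print(model.predict(tf.ones((1, 2))))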

Tensorflow2.0 training: model.compile vs GradientTape

I am starting to learn TensorFlow 2.0, and one major source of my confusion is when to use the Keras-style model.compile vs tf.GradientTape to train a model.
In the TensorFlow 2.0 tutorial for MNIST classification, two similar models are trained: one with model.compile and the other with tf.GradientTape.
Apologies if this is trivial, but when do you use one over the other?
This is really a case-specific thing, and it's difficult to give a definite answer here (it might border on "too opinion-based"). But in general, I would say:
The "classic" Keras interface (using compile, fit, etc.) allows for quick and easy building, training, and evaluation of standard models. However, it is very high-level/abstract, and as such it doesn't give you much low-level control. If you are implementing models with non-trivial control flow, this can be hard to accommodate.
GradientTape gives you full low-level control over all aspects of training/running your model, allowing easier debugging as well as more complex architectures, but you will need to write more boilerplate code for many things that a compiled model hides from you (e.g. training loops). Still, if you do research in deep learning, you will probably work at this level most of the time.
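To illustrate the trade-off, here is a minimal sketch (the model, optimizer, and loss are illustrative choices) of the explicit training step you would write with GradientTape; on the compiled route, model.compile(...) and model.fit(...) would hide all of this:

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

@tf.function
def train_step(x, y):
    # Record the forward pass so gradients can be computed.
    with tf.GradientTape() as tape:
        logits = model(x, training=True)
        loss = loss_fn(y, logits)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss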

Multiple outputs in keras Sequential models

Reading the Keras code for Sequential models, I see that it only allows a single output for any layer defined within the Sequential model. I am aware of how to do this using the functional API (the Model class).
However, I don't see why the Sequential model is limited to layers with a single output. Is there a design limitation enforcing this constraint?
Not really. The Sequential model exists to make things simpler when designing smaller, straightforward neural networks. As noted here, it can be useful for most problems.
The Sequential API allows you to create models layer-by-layer for most problems. It is limited in that it does not allow you to create models that share layers or have multiple inputs or outputs.
But if you need a more complex design, with multiple inputs/outputs or models that share layers, you can use the Functional API to achieve your goal, as sketched below.
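For example, a minimal functional-API model (the layer sizes and names are illustrative) with one input and two outputs, which Sequential cannot express:

from tensorflow import keras

inp = keras.Input(shape=(32,))
h = keras.layers.Dense(64, activation="relu")(inp)
# Two heads branching off the same hidden layer.
out_class = keras.layers.Dense(10, activation="softmax", name="class")(h)
out_score = keras.layers.Dense(1, name="score")(h)
model = keras.Model(inputs=inp, outputs=[out_class, out_score])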

Difference between different tensorflow fully connected layers

What is the difference between the various fully connected layers available in TensorFlow? I understand that there could be two versions, object-oriented and functional, but I was able to find four different layers in TensorFlow:
tf.keras.layers.Dense
tf.layers.dense
tf.layers.Dense
tf.contrib.layers.fully_connected
The documentation contains examples using all of them. I'd also like to know when to use each layer.
tf.keras.layers.Dense: Keras is a deep learning library which functions as a wrapper over 'lower level' libraries such as TensorFlow and Theano. Keras has since been integrated into the TensorFlow code-base. If you are using 'raw' TensorFlow, you should not use this layer.
tf.layers.dense: TensorFlow defines a functional interface; layers and operations that are lowercase are typically part of it. These functions are used as building blocks when defining a custom layer or a loss function.
tf.layers.Dense: This is the layer you should be using.
tf.contrib.layers.fully_connected: This comes from the contrib library, whose features are typically more experimental and volatile. Once a feature is deemed stable, you should use the tf.layers.Dense implementation instead; fully_connected will remain in the library to maintain backwards compatibility.
tf.keras.layers.Dense is a Keras wrapper function; its functionality is the same as tf.layers.Dense. Check out Keras.
tf.layers.dense is a functional interface for TensorFlow.
tf.layers.Dense is the commonly used one.
tf.contrib.layers.fully_connected is a function still under development.
Technically speaking, the first three have the same functionality (same inputs and outputs).
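As a quick illustration of the functional vs object-oriented variants (this uses the TF 1.x API, where tf.layers still exists; the sizes are arbitrary):

import tensorflow as tf  # TF 1.x

x = tf.placeholder(tf.float32, shape=(None, 4))

# Functional interface: builds and applies the layer in one call.
y1 = tf.layers.dense(x, units=8, activation=tf.nn.relu)

# Object-oriented interface: the layer object can be reused or shared.
layer = tf.layers.Dense(units=8, activation=tf.nn.relu)
y2 = layer(x)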
