What is extra with the Keras functional API?

What extra can be done using the Keras functional API that could not be done using Keras Sequential models?
Apart from the fact that a simple model can be reused for time-based data via the "TimeDistributed" layer wrapper?

It is much more than model reuse: the functional API allows you to easily define models where layers connect to more than just the previous and next layers. You can connect any layer to any other layer you wish, so Siamese networks, densely connected networks and the like become possible. The old Graph API allowed the same level of connectivity, but it was a pain because it used layer node names to define connectivity.
The Sequential model is just a linear stack of layers, and new neural network architectures are increasingly moving away from that pattern.
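To make the connectivity point concrete, here is a minimal sketch (assuming TensorFlow 2.x with tf.keras; all layer sizes are arbitrary) of a model the Sequential API cannot express: one tensor feeds two branches whose outputs are then concatenated.

```python
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(32,))
x = layers.Dense(16, activation="relu")(inputs)
# Branch point: both branches read the same tensor `x`.
a = layers.Dense(8, activation="relu")(x)
b = layers.Dense(8, activation="relu")(x)
# Merge the branches; a Sequential model has no way to express this.
merged = layers.Concatenate()([a, b])
outputs = layers.Dense(1, activation="sigmoid")(merged)
model = keras.Model(inputs=inputs, outputs=outputs)
```

The same pattern extends to shared layers (call one layer object on several tensors) and to models with multiple inputs or outputs.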

Related

How to start my model using convolutional neural network

I am new to programming and Python. I have been given a school assignment that requires me to develop and evaluate abusive-language detection models on a given dataset. My proposed model must be a convolutional neural network with an appropriate embedding layer as its first layer. My problem is that I don't know where to start, as I am very new to this and have no prior knowledge.
First, start by reading up on CNNs to build an understanding:
https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53
Then you can check a few sample implementations here:
Keras:
https://towardsdatascience.com/building-a-convolutional-neural-network-cnn-in-keras-329fbbadc5f5
Pytorch:
https://adventuresinmachinelearning.com/convolutional-neural-networks-tutorial-in-pytorch/
TensorFlow:
https://www.tensorflow.org/tutorials/images/cnn
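For the text-classification task described above, a rough starting sketch would be a 1-D CNN with an Embedding layer first (the vocabulary size, sequence length, and filter counts below are placeholder assumptions, not values from the original post):

```python
from tensorflow import keras
from tensorflow.keras import layers

vocab_size = 10000   # assumed vocabulary size after tokenization
seq_len = 100        # assumed (padded) sequence length

model = keras.Sequential([
    keras.Input(shape=(seq_len,)),
    layers.Embedding(vocab_size, 64),           # word index -> dense vector
    layers.Conv1D(128, kernel_size=5, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),      # abusive vs. not abusive
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

You would still need to tokenize and pad your text into integer sequences before calling `model.fit`.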

What is the sense behind setting weights for the layers in Keras?

I have trouble understanding weight transfer in transfer-learning-like tasks...
I trained two networks and saved the weights using Keras with the TensorFlow backend (the two networks are in the same model). I would like to take half of the layers from one network and half of the layers from the other and concatenate them into a new network. Practically, I want to cut the two networks, join them in a new network, and throw away the remaining layers. Since half of the layers are top layers, I couldn't do it with .pop(), so I decided to transfer weights instead.
I tried this by setting corresponding weights from each layer (the ones that I needed) in the old model to corresponding layers in my new model like:
new_model.layers[i].set_weights(model.layers[i].get_weights())
This loads the weights, but it does not seem to work as I expect.
Then I tried get_layer:
new_model.layers[i] = model.get_layer('name').output
This also seems to do a meaningless weight transfer.
What should I transfer from my old network to the new network to carry the sense of actually taking half of the whole network?
Do only weights (and biases) carry all information? What else should I assign to have the theoretically same layers?
What does get_layer return?
Do get_weights/set_weights do the same thing as load_weights?
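For reference, a minimal version of the set_weights pattern I am using, with two toy models whose layers have identical shapes (the sizes here are made up, not my real networks):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

old_model = keras.Sequential([keras.Input(shape=(4,)), layers.Dense(3)])
new_model = keras.Sequential([keras.Input(shape=(4,)), layers.Dense(3)])

# Copy kernel and bias arrays from the old layer into the new one.
# This only works when the two layers have identical weight shapes.
new_model.layers[0].set_weights(old_model.layers[0].get_weights())
```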

Neural network NOT organized in layers with TensorFlow or Keras

I need to implement a neural network that is NOT layer based, meaning that ANY neuron may be connected to any other neuron, and there is no way to logically organize them into consecutive layers.
What I'm asking for is an example or a reference to proper and clear documentation about how to implement the following:
Originally I had my own implementation in MATLAB. However, I've been using TensorFlow and Keras to test simple models; they let you tune networks very quickly and the implementations are quite efficient, so I decided to try more complex models, but I got stuck creating this type of network.
HINT: It MAY be OK to create single-neuron layers, as long as you can connect a layer to ANY layer (without caring if it is not adjacent) and to MORE THAN ONE LAYER.
I'm new to TF and Keras, so a simple Python example would be appreciated, although pointing me in the right direction would also be OK.
This is an example network (loops are intentional!):
I don't need to train at the moment, just to evaluate models. However, keep in mind that evaluating this kind of network is different too; one possible way is to keep propagating the signal until the output stabilizes, but that is just an example.
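A sketch of the hint from the question: single-neuron Dense layers wired with the functional API, where each "neuron" can read from any earlier neuron(s), not just the adjacent layer. The loops mentioned above cannot be expressed in a feed-forward functional graph and would need a manual iterative update scheme; they are not shown here, and the wiring below is purely illustrative.

```python
from tensorflow import keras
from tensorflow.keras import layers

inp = keras.Input(shape=(2,))
n1 = layers.Dense(1, activation="tanh")(inp)
n2 = layers.Dense(1, activation="tanh")(inp)
# n3 reads from n1 and n2; n4 reads from n1 and the raw input,
# skipping n2 and n3 entirely -- non-adjacent connectivity.
n3 = layers.Dense(1, activation="tanh")(layers.Concatenate()([n1, n2]))
n4 = layers.Dense(1, activation="tanh")(layers.Concatenate()([n1, inp]))
out = layers.Dense(1)(layers.Concatenate()([n3, n4]))
model = keras.Model(inp, out)
```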

Multiple outputs in keras Sequential models

As I read the Keras code for Sequential models, I see that it only allows a single output for any layer defined within the Sequential model. I am aware of how to do this using the functional API (the Model class).
However, I don't see why the Sequential model is limited to layers with a single output. Is there a design limitation for enforcing such constraint?
Not really. The Sequential model exists to make things simpler when designing smaller, straightforward neural networks. As noted here, it is sufficient for most problems.
The Sequential API allows you to create models layer-by-layer for most
problems. It is limited in that it does not allow you to create models
that share layers or have multiple inputs or outputs.
But if you need a more complex design, with multiple inputs/outputs or models that share layers, you can use the Functional API to achieve your goal.
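A minimal sketch of that Functional-API alternative: one shared trunk with two named outputs (the layer sizes and output names here are arbitrary assumptions).

```python
from tensorflow import keras
from tensorflow.keras import layers

inp = keras.Input(shape=(10,))
x = layers.Dense(32, activation="relu")(inp)
# Two heads branching off the shared trunk.
out_a = layers.Dense(1, name="regression")(x)
out_b = layers.Dense(3, activation="softmax", name="classes")(x)
model = keras.Model(inp, [out_a, out_b])
```

At compile time you can then give each named output its own loss, which is impossible with a Sequential model.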

DenseNet in Tensorflow

I am fairly new to TensorFlow and I am interested in developing a DenseNet architecture. I have found implementations from scratch on GitHub. I was wondering whether the TensorFlow API happens to implement the dense blocks. Is TensorFlow's tf.layers.dense the same as the dense blocks in DenseNet?
Thanks!
No, tf.layers.dense implements what is more commonly known as a fully connected layer, i.e. the basic building block of multilayer perceptrons. If you want dense blocks, you will need to write your own implementation or use one of those you found on GitHub.
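For orientation, a rough sketch of one DenseNet-style dense block in tf.keras (not tf.layers.dense): each convolution's output is concatenated with everything that came before it. The growth rate and depth below are illustrative assumptions, not the paper's exact configuration.

```python
from tensorflow import keras
from tensorflow.keras import layers

def dense_block(x, num_layers=4, growth_rate=12):
    """Stack conv layers, concatenating each output onto the running input."""
    for _ in range(num_layers):
        y = layers.BatchNormalization()(x)
        y = layers.Activation("relu")(y)
        y = layers.Conv2D(growth_rate, 3, padding="same")(y)
        x = layers.Concatenate()([x, y])  # dense connectivity
    return x

inp = keras.Input(shape=(32, 32, 16))
out = dense_block(inp)
model = keras.Model(inp, out)
```

Note how the channel count grows by `growth_rate` per layer (16 + 4 × 12 = 64 here), which is the defining feature of a dense block.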
