DenseNet in Tensorflow - python

I am fairly new to TensorFlow and I am interested in developing a DenseNet architecture. I have found implementations from scratch on GitHub. I was wondering if the TensorFlow API happens to implement dense blocks. Is TensorFlow's tf.layers.dense the same as the dense blocks in DenseNet?
Thanks!

No, tf.layers.dense implements what is more commonly known as a fully-connected layer, i.e. the basic building block of multilayer perceptrons. If you want dense blocks, you will need to write your own implementation or use one of those you found on GitHub.
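If it helps, here is a minimal sketch of what such a block could look like when written with tf.keras layers. The function name and hyperparameters are placeholders, and it omits the bottleneck and transition layers of the full DenseNet-BC; it only shows the defining concatenation pattern.

```python
import tensorflow as tf

def dense_block(x, num_layers=4, growth_rate=12):
    # Each layer receives the concatenation of the block input and all
    # previous layer outputs (the defining property of a dense block).
    features = [x]
    for _ in range(num_layers):
        y = features[0] if len(features) == 1 else tf.keras.layers.Concatenate()(features)
        y = tf.keras.layers.BatchNormalization()(y)
        y = tf.keras.layers.Activation("relu")(y)
        y = tf.keras.layers.Conv2D(growth_rate, 3, padding="same")(y)
        features.append(y)
    return tf.keras.layers.Concatenate()(features)

inputs = tf.keras.Input(shape=(32, 32, 16))
outputs = dense_block(inputs)            # final channel count: 16 + 4 * 12 = 64
model = tf.keras.Model(inputs, outputs)
model.summary()
```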

Related

Compute gradients in a custom layer in Keras

I have written code that computes Choquet pooling in a custom Keras layer. Below is the Colab link to the notebook:
https://colab.research.google.com/drive/1lCrUb2Jm680JRnACPxWpxkOSkP_DlHGj
As you can see, the code crashes during gradient computation, specifically inside the function custom_grad. This should not be possible, because I'm returning zero gradients with the same shape as the previous layer.
So I have 2 questions:
Is there a way in Keras (or TensorFlow) to compute the gradient between a layer's input and its output?
If I am passing a tensor with the same shape as the previous layer, but filled with zeros, why is the code not working?
Thanks in advance for your help.
Since no one was interested in this question, I'll answer it myself: after several trials, I found a solution. The problem is that, as posted by Mainak431 in this GitHub repo:
link to diff and non-diff ops in tensorflow
There are differentiable TensorFlow operations and non-differentiable operations. In the Colab notebook I used, as an example, scatter_nd_update, which is non-differentiable.
So, if you want to create your own custom layer in Keras, I suggest taking a look at the lists above so that you only use operations that allow Keras to auto-differentiate for you.
Anyway, I'm still working on this and will share as much as I can on this open research topic. Keep in mind that "LEGO-ing" new operations into a neural network has its limits, and I know for sure that many of you are interested in adding your own operations (aggregation or something else) to a deep neural network model.
Special thanks to Mainak431, I love you <3
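To make the point about differentiable operations concrete, here is a minimal sketch of a custom Keras layer (a made-up top-k pooling, not Choquet pooling) that uses only differentiable ops, so Keras auto-differentiates it without any custom_grad function:

```python
import tensorflow as tf

class TopKSumPooling(tf.keras.layers.Layer):
    """Illustrative pooling layer built only from differentiable ops."""

    def __init__(self, k=3, **kwargs):
        super().__init__(**kwargs)
        self.k = k

    def call(self, inputs):
        # tf.nn.top_k and tf.reduce_sum are both differentiable,
        # so gradients flow through this layer automatically.
        top_values, _ = tf.nn.top_k(inputs, k=self.k)
        return tf.reduce_sum(top_values, axis=-1, keepdims=True)

x = tf.keras.Input(shape=(10,))
y = TopKSumPooling(k=3)(x)
model = tf.keras.Model(x, y)
model.compile(optimizer="adam", loss="mse")  # trains without a custom gradient
```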

Neural network NOT organized in layers with TensorFlow or Keras

I need to implement a neural network which is NOT layer based, meaning that ANY neuron may be connected to any other neuron, and that there's no way to logically organize them in consecutive layers.
What I'm asking for is an example or a reference to proper and clear documentation about how to implement the following:
Originally I had my own implementation in MATLAB. However, I've been using TensorFlow and Keras to test simple models, since they let you tune your networks very quickly and the implementations are pretty efficient, so I decided to try out more complex models; that is where I got stuck creating this type of network.
HINT: It MAY be OK to create single-neuron layers, as long as you can connect a layer to ANY layer (without caring if it is not adjacent) and to MORE THAN ONE LAYER.
I'm new to TF and Keras, so a simple Python example would be appreciated, although pointing me in the right direction would be OK.
This is an example network (loops are intentional!):
I don't need to train at the moment, just to evaluate models. However, keep in mind that evaluation of this kind of network is different too; one possible way is to keep propagating the signal until the output stabilizes, but that is just an example.
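Since the question already sketches an evaluation scheme (propagate signals until the output stabilizes), here is a minimal NumPy sketch of that idea; the sizes, indices and update rule are made up for illustration, and the same update could be written with TensorFlow ops if gradients were needed later:

```python
import numpy as np

n_neurons = 5
rng = np.random.default_rng(0)

# W[i, j] is the weight of the connection from neuron j to neuron i;
# any entry may be non-zero, so loops and skip connections are allowed.
W = rng.normal(scale=0.3, size=(n_neurons, n_neurons))
b = rng.normal(scale=0.1, size=n_neurons)
input_idx, output_idx = [0, 1], [4]      # neurons that receive input / are read out

def evaluate(x, n_steps=100, tol=1e-6):
    state = np.zeros(n_neurons)
    for _ in range(n_steps):
        drive = W @ state + b
        drive[input_idx] += x            # inject the external input
        new_state = np.tanh(drive)
        if np.max(np.abs(new_state - state)) < tol:   # output has stabilized
            return new_state[output_idx]
        state = new_state
    return state[output_idx]             # give up after n_steps without convergence

print(evaluate(np.array([1.0, -0.5])))
```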

Implementing fast dense feature extraction in PyTorch

I am trying to implement this paper in PyTorch: Fast Dense Feature Extractor. But I am having trouble converting the Torch implementation example they provide into PyTorch.
My attempt so far has the issue that when I add an additional dimension to the feature map, the convolutional weights no longer match the feature shape. How is this handled in Torch? (From their implementation it seems that Torch doesn't care about this, but PyTorch does.) My code: https://gist.github.com/system123/c4b8ef3824f2230f181f8cfba84f0cfd
Any other solutions to this problem would be great too. Basically, I have a feature extractor that converts a 128x128 patch into an embedding and I'd like to apply this in a dense manner across a larger image without using a for loop to evaluate the CNN on each location as that has a lot of duplicate computation.
It is your lucky day as I have recently uploaded a PyTorch and TF implementation of the paper Fast Dense Feature Extraction with CNNs with Pooling Layers.
It is an approach to compute patch-based local feature descriptors efficiently for whole images at once, in the presence of pooling and striding layers.
See https://github.com/erezposner/Fast_Dense_Feature_Extraction for details.
It contains simple instructions that will explain how to use the Fast Dense Feature Extraction (FDFE) project.
Good luck
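As a side note, here is a minimal PyTorch sketch of the simpler, related trick of making a patch encoder fully convolutional by rewriting its fully connected head as a convolution. This is not the FDFE algorithm from the repository above (FDFE additionally recovers outputs at every pixel position despite pooling and striding, whereas this only yields embeddings at strided locations), and the architecture is made up:

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
    # the "fully connected" embedding head rewritten as a 16x16 convolution
    nn.Conv2d(64, 128, kernel_size=16),
)

patch = torch.randn(1, 3, 128, 128)
image = torch.randn(1, 3, 512, 512)

print(encoder(patch).shape)  # torch.Size([1, 128, 1, 1]): one embedding per patch
print(encoder(image).shape)  # torch.Size([1, 128, 49, 49]): embeddings every 8 px, no Python loop
```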

Difference between different tensorflow fully connected layers

What is the difference between the different fully connected layers available in TensorFlow? I understand that there could be two versions, object-oriented and functional, but I was able to find four different layers in TensorFlow:
1. tf.keras.layers.Dense
2. tf.layers.dense
3. tf.layers.Dense
4. tf.contrib.layers.fully_connected
The documentation contains examples using all of them. I'd also like to know when to use each layer.
1. tf.keras.layers.Dense: Keras is a deep learning library which functions as a wrapper over 'lower level' libraries such as TensorFlow and Theano. Keras has recently been integrated as a TensorFlow project and is part of its code-base. If you are using 'raw' TensorFlow, you should not use this layer.
2. tf.layers.dense: TensorFlow defines a functional interface; layers and operations that are lowercase are typically part of it. These functions are used as building blocks when defining a custom layer or a loss function.
3. tf.layers.Dense: This is the layer you should be using.
4. tf.contrib.layers.fully_connected: This comes from the contrib library, whose features are typically more experimental and volatile. Once a feature is deemed stable, you should use its other implementation (3); (4) will still be present in the library to maintain backwards compatibility.
1. tf.keras.layers.Dense is a Keras wrapper function; its functionality is the same as (3). Check out Keras.
2. tf.layers.dense is the functional interface for TensorFlow.
3. tf.layers.Dense is the commonly used one.
4. tf.contrib.layers.fully_connected is a function under development.
Technically speaking, the first three have the same functionality (same inputs and outputs).
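As a rough illustration of how the four variants are called (assuming the TensorFlow 1.x API the question refers to):

```python
import tensorflow as tf  # TensorFlow 1.x

x = tf.placeholder(tf.float32, shape=(None, 128))

# (2) functional interface: builds and applies the layer in one call
y_functional = tf.layers.dense(x, units=64, activation=tf.nn.relu)

# (3) object-oriented interface: create the layer object, then call it
dense_layer = tf.layers.Dense(units=64, activation=tf.nn.relu)
y_oo = dense_layer(x)

# (1) Keras layer: same idea as (3), lives under tf.keras
y_keras = tf.keras.layers.Dense(units=64, activation=tf.nn.relu)(x)

# (4) contrib version: different argument names, and the activation
# defaults to ReLU if not specified
y_contrib = tf.contrib.layers.fully_connected(x, num_outputs=64,
                                              activation_fn=tf.nn.relu)
```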

Keras vs TensorFlow - does Keras have any actual benefits?

I have been implementing some deep nets in Keras, but have eventually gotten frustrated with some limitations: for example, setting floatx to float16 fails on batch normalization layers, and the only way to fix it is to actually edit the Keras source; implementing custom layers requires coding them in backend code, which destroys the ability to switch backends; there appear to be no parallel training mechanisms [unlike tf.Estimator]; and even vanilla programs run 30% slower in Keras than in TF (if one is to trust the interwebs). I was grumbling about moving to TensorFlow, but was pleased to discover that TensorFlow (especially if you use the tf.layers stuff) is not actually any longer to write for anything imaginable you might want to do. Is this a failure of my imagination, or is tf.layers basically a backporting of Keras into core TensorFlow, and is there any actual use case for Keras?
Keras used to have the upper hand over TensorFlow in the past, but ever since its author became affiliated with Google, all the features that made it attractive are being implemented into TensorFlow (you can check version 1.8). As you rightfully pointed out, tf.layers is one such example.
