How to implement SegNet in Keras while preserving the max-pooling indices - python

I'm trying to implement SegNet in Keras (tf backend) to do semantic segmentation.
The most impressive trick of SegNet is passing the max-pooling indices on to the upsampling layers. However, the many implementations of SegNet in Keras (e.g.) that I find on GitHub just use simple UpSampling (the so-called SegNet-Basic).
I notice that this can be achieved in TensorFlow with tf.nn.max_pool_with_argmax. So I want to know whether there is a similar way in Keras to get the max-pooling indices and reuse them during upsampling.
Thanks in advance.

Well, I think I've found the answer.
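The trick is to wrap tf.nn.max_pool_with_argmax in a custom layer that returns both the pooled tensor and the argmax indices, and to feed those indices into a matching unpooling layer in the decoder. Below is a minimal sketch of that idea (the layer names MaxPoolingWithArgmax2D and MaxUnpooling2D are placeholders of mine; it assumes TensorFlow 1.14+ for include_batch_in_index=True and spatial sizes divisible by the pool size), not the exact code of any particular SegNet repository:

import tensorflow as tf
from tensorflow.keras import layers

class MaxPoolingWithArgmax2D(layers.Layer):
    # Max pooling that also returns the flattened positions of the maxima.
    def __init__(self, pool_size=2, **kwargs):
        super().__init__(**kwargs)
        self.pool_size = pool_size

    def call(self, inputs):
        k = [1, self.pool_size, self.pool_size, 1]
        pooled, argmax = tf.nn.max_pool_with_argmax(
            inputs, ksize=k, strides=k, padding='SAME',
            include_batch_in_index=True)
        return pooled, argmax

class MaxUnpooling2D(layers.Layer):
    # Scatters the pooled values back to the positions recorded by argmax.
    def __init__(self, pool_size=2, **kwargs):
        super().__init__(**kwargs)
        self.pool_size = pool_size

    def call(self, inputs):
        pooled, argmax = inputs
        shape = tf.shape(pooled, out_type=tf.int64)
        b, h, w, c = shape[0], shape[1], shape[2], shape[3]
        out_h, out_w = h * self.pool_size, w * self.pool_size
        flat_size = b * out_h * out_w * c
        flat = tf.scatter_nd(
            tf.reshape(argmax, [-1, 1]),   # where each max value goes
            tf.reshape(pooled, [-1]),      # the max values themselves
            tf.reshape(flat_size, [1]))    # flat output size
        return tf.reshape(flat, tf.stack([b, out_h, out_w, c]))

In the encoder you would then write something like pooled, idx = MaxPoolingWithArgmax2D()(features), keep idx around, and in the decoder call MaxUnpooling2D()([decoder_features, idx]) instead of UpSampling2D, so the non-maximum positions stay zero as described in the SegNet paper.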

Related

Does Tensorflow use specific image preprocessing normalization for each keras.application network?

I'm trying to understand what kind of image preprocessing is required when using one of the base networks provided by keras.applications with the TensorFlow compat.v1 module.
In particular, I'm interested in the functions that convert each pixel channel value into the range [-1, 1] or similar. I have dug into the code and it seems TensorFlow relies on Keras, which in turn has three different functions: one for tf, one for caffe and one for torch, i.e. not a specific one for each base network.
Up until now I have just re-implemented the function for tensorflow (value = value/127.5 - 1), but I have also read about others doing something else (e.g. value = value/255), nothing "official" though. I have started to have some doubts about what I'm doing because, after switching to ResNet50, I can't seem to obtain decent results, in contrast to several papers I'm following. I would like to have a definitive idea about the topic; any help would be much appreciated.
TensorFlow provides a preprocessing function for each model in keras.applications, called preprocess_input. For example, an image can be preprocessed for InceptionV3 using tf.keras.applications.inception_v3.preprocess_input.
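For illustration, here is a short sketch of how these functions are typically used (array sizes are arbitrary). Note that different model families use different conventions: InceptionV3 scales pixels to [-1, 1], while ResNet50 expects the "caffe" style BGR ordering with the ImageNet channel means subtracted.

import numpy as np
import tensorflow as tf

# A dummy batch of images in the usual 0-255 range.
images = np.random.randint(0, 256, size=(4, 299, 299, 3)).astype('float32')

# InceptionV3: scales each pixel to [-1, 1] (the value/127.5 - 1 rule).
x = tf.keras.applications.inception_v3.preprocess_input(images)

# ResNet50: converts RGB to BGR and subtracts the ImageNet channel means
# (no [-1, 1] scaling), which may explain the poor ResNet50 results above.
y = tf.keras.applications.resnet50.preprocess_input(images.copy())

So the safest approach is to always call the preprocess_input that lives in the same module as the model you instantiate.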

Compute gradients in a custom layer in Keras

I have written code that computes Choquet pooling in a custom layer in Keras. Below is the Colab link to the notebook:
https://colab.research.google.com/drive/1lCrUb2Jm680JRnACPxWpxkOSkP_DlHGj
As you can see, the code crashes during gradient computation, precisely inside the function custom_grad. This should not happen, because I'm returning zero gradients with the same shape as the previous layer.
So I have 2 questions:
Is there a way in Keras (or in TensorFlow) to compute the gradient between a layer's input and its output?
If I have passed a tensor with the same shape as the previous layer, but filled with zeros, why is the code not working?
Thanks for your attention; I look forward to your help.
Thanks in advance
It seems no one was interested in this question.
After several trials, I have found a solution. The problem is that, as posted by Mainak431 in this GitHub repo:
link to diff and non-diff ops in tensorflow
There are differentiable TensorFlow operations and non-differentiable ones. In the Colab notebook I used, as an example, scatter_nd_update, which is non-differentiable.
So, if you want to create your own custom layer in Keras, I suggest taking a look at the lists above, so that you only use operations that allow Keras to auto-differentiate for you.
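To illustrate the point, here is a minimal custom layer sketch that aggregates its inputs using only differentiable ops (tf.sort, multiplication, reduce_sum), so no custom gradient is needed. This is a toy ordered-weighted-average pooling of my own, not the Choquet layer from the notebook:

import tensorflow as tf
from tensorflow.keras import layers

class OrderedWeightedPool(layers.Layer):
    # Aggregates the last axis as a learnable weighted sum of its sorted values.
    def build(self, input_shape):
        n = int(input_shape[-1])
        self.w = self.add_weight(name='w', shape=(n,),
                                 initializer='ones', trainable=True)

    def call(self, inputs):
        # tf.sort is differentiable with respect to the sorted values,
        # so gradients flow back to both the inputs and self.w automatically.
        s = tf.sort(inputs, axis=-1, direction='DESCENDING')
        return tf.reduce_sum(s * tf.nn.softmax(self.w), axis=-1, keepdims=True)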
Anyway, I'm still working on this and will share as much as possible on that open research topic. Remember that with neural networks the "LEGO-ing" possibilities are almost limitless, and I know for sure that many of you are interested in adding your own operations (aggregation or something else) to a deep neural network model.
Special thanks to Mainak431, I love you <3

Keras LSTM use softmax on every unit

I am creating a model somewhat similar to the one mentioned below:
model
I am using Keras to create such a model, but I have hit a dead end, as I have not been able to find a way to add a softmax to the outputs of the LSTM units. So far, all the tutorials and other material I found only cover outputting a single class, as in the image captioning case provided in this link.
So, is it possible to apply a softmax to every output of the LSTM (where return_sequences is true), or do I have to move to PyTorch?
The answer is: yes, it is possible to apply a softmax to every output of the LSTM, and no, you do not have to move to PyTorch.
While in Keras 1.X you needed to explicitly wrap the output layer in a TimeDistributed layer, in Keras 2.X you can just write:
model.add(LSTM(50, activation='relu', return_sequences=True))
model.add(Dense(number_of_classes, activation='softmax'))
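A small self-contained sketch (timesteps, features and number_of_classes are placeholder values of mine) showing that the Dense layer is applied at every timestep, so the output carries one softmax distribution per step:

import tensorflow as tf
from tensorflow.keras import layers, models

timesteps, features, number_of_classes = 20, 8, 5

model = models.Sequential()
# return_sequences=True keeps one output vector per timestep.
model.add(layers.LSTM(50, activation='relu', return_sequences=True,
                      input_shape=(timesteps, features)))
# On a 3D tensor, Dense acts on the last axis, i.e. independently per
# timestep, which is what TimeDistributed(Dense(...)) did in Keras 1.x.
model.add(layers.Dense(number_of_classes, activation='softmax'))

model.summary()  # output shape: (None, 20, 5) -> one softmax per timestep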

How does the (None, None, 1) input shape work in Keras?

I am working on a project using a Keras deep learning model that I need to port to PyTorch.
The goal of the project is to localize certain elements in images. To train the model, I first use patches extracted from my images and then run inference on the full images. I read that this is possible with the (None, None, 1) input shape for the Keras input layer, and it is currently working. However, the same training setup does not seem to work in PyTorch. So I was wondering: does the (None, None, 1) input shape do something specific when I start inferring on full images?
Thanks for your answers
As in the discussion in the link, and quoting fchollet:
Of course, it's not always possible to have such free dimensions (for instance it's not possible to have variable-length sequences with TensorFlow, but it is with Theano).
One can assume that this is due to the architecture of the framework. As you stated, it may be accepted in Keras but not in PyTorch.
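To make the Keras side concrete, here is a minimal sketch of what the (None, None, 1) input shape buys you: as long as the network is fully convolutional (no Flatten or Dense acting on the spatial axes), the very same weights accept any patch or image size. The layer sizes below are arbitrary placeholders of mine, not your model.

import numpy as np
from tensorflow.keras import layers, models

inp = layers.Input(shape=(None, None, 1))            # height and width left free
x = layers.Conv2D(16, 3, padding='same', activation='relu')(inp)
x = layers.Conv2D(16, 3, padding='same', activation='relu')(x)
out = layers.Conv2D(1, 1, activation='sigmoid')(x)   # per-pixel localization map
model = models.Model(inp, out)

# Train on small patches...
patches = np.random.rand(8, 64, 64, 1).astype('float32')
print(model.predict(patches).shape)      # (8, 64, 64, 1)

# ...then run the same model on a full-size image at inference time.
full_image = np.random.rand(1, 512, 512, 1).astype('float32')
print(model.predict(full_image).shape)   # (1, 512, 512, 1)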

Implementing fast dense feature extraction in PyTorch

I am trying to implement the paper Fast Dense Feature Extractor in PyTorch, but I am having trouble converting the Torch implementation example they provide into PyTorch.
My attempt so far has the issue that, when I add an additional dimension to the feature map, the convolutional weights no longer match the feature shape. How is this managed in Torch? (From their implementation it seems that Torch doesn't care about this, but PyTorch does.) My code: https://gist.github.com/system123/c4b8ef3824f2230f181f8cfba84f0cfd
Any other solution to this problem would be great too. Basically, I have a feature extractor that converts a 128x128 patch into an embedding, and I'd like to apply it in a dense manner across a larger image without using a for loop to evaluate the CNN at each location, since that involves a lot of duplicate computation.
It is your lucky day, as I have recently uploaded PyTorch and TF implementations of the paper Fast Dense Feature Extraction with CNNs with Pooling Layers.
It is an approach to compute patch-based local feature descriptors efficiently, in the presence of pooling and striding layers, for whole images at once.
See https://github.com/erezposner/Fast_Dense_Feature_Extraction for details.
It contains simple instructions that will explain how to use the Fast Dense Feature Extraction (FDFE) project.
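For a rough feel of the "no for loop" part only, the standard trick is to turn the patch network's fully connected head into an equivalent convolution, so the whole extractor can be slid over a larger image in one pass. The sketch below uses an assumed toy architecture of mine and does not reproduce the paper's multi-pooling step, which additionally recovers the patch positions that the pooling strides would otherwise skip (that part is what the repository above implements):

import torch
import torch.nn as nn

# Toy patch embedder for 128x128 patches (assumed architecture).
patch_net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(4),                         # 128 -> 32
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(4),                         # 32 -> 8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 64),               # 64-d embedding per patch
)

# Dense version: reuse the convolutional trunk and replace Flatten + Linear
# with an 8x8 convolution carrying exactly the same weights.
dense_net = nn.Sequential(
    *list(patch_net.children())[:6],
    nn.Conv2d(32, 64, kernel_size=8),
)
with torch.no_grad():
    dense_net[6].weight.copy_(patch_net[7].weight.view(64, 32, 8, 8))
    dense_net[6].bias.copy_(patch_net[7].bias)

image = torch.rand(1, 3, 512, 512)
features = dense_net(image)   # one 64-d descriptor every 16 input pixels
print(features.shape)         # torch.Size([1, 64, 25, 25])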
Good luck
