Tensorflow: Different activation values for same image - python

I'm trying to retrain (i.e. fine-tune) a MobileNet image classifier.
The retraining script provided by TensorFlow here (from the tutorial) updates only the weights of the newly added fully connected layer. I modified this script to update the weights of all layers of the pre-trained model. I'm using the MobileNet architecture with a depth multiplier of 0.25 and an input size of 128.
However, while retraining I observed a strange thing: if I give a particular image as input for inference in a batch with some other images, the activation values after some layers are different from those when the image is passed alone. Also, activation values for the same image from different batches are different. For example, for two batches
batch_1 : [img1, img2, img3]; batch_2 : [img1, img4, img5], the activations for img1 differ between the two batches.
Here is the code I use for inference -
import numpy as np
import tensorflow as tf
from tensorflow.python.platform import gfile

with tf.Session(graph=tf.get_default_graph()) as sess:
    image_path = '/tmp/images/10dsf00003.jpg'
    id_ = gfile.FastGFile(image_path, 'rb').read()
    # The line below loads the jpeg using tf.decode_jpeg and does some preprocessing
    id = sess.run(decoded_image_tensor, {jpeg_data_tensor: id_})
    input_image_tensor = graph.get_tensor_by_name('input:0')
    layerXname = 'MobilenetV1/MobilenetV1/Conv2d_1_depthwise/Relu:0'  # Name of the layer whose activations to inspect
    layerX = graph.get_tensor_by_name(layerXname)
    layerXactivations = sess.run(layerX, {input_image_tensor: id})
The above code is executed once as-is, and once with the following change to the last line:
layerXactivations_batch=sess.run(layerX, {input_image_tensor: np.asarray([np.squeeze(id), np.squeeze(id), np.squeeze(id)])})
Following are some nodes in the graph :
[u'input', u'MobilenetV1/Conv2d_0/weights', u'MobilenetV1/Conv2d_0/weights/read', u'MobilenetV1/MobilenetV1/Conv2d_0/convolution', u'MobilenetV1/Conv2d_0/BatchNorm/beta', u'MobilenetV1/Conv2d_0/BatchNorm/beta/read', u'MobilenetV1/Conv2d_0/BatchNorm/gamma', u'MobilenetV1/Conv2d_0/BatchNorm/gamma/read', u'MobilenetV1/Conv2d_0/BatchNorm/moving_mean', u'MobilenetV1/Conv2d_0/BatchNorm/moving_mean/read', u'MobilenetV1/Conv2d_0/BatchNorm/moving_variance', u'MobilenetV1/Conv2d_0/BatchNorm/moving_variance/read', u'MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/batchnorm/add/y', u'MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/batchnorm/add', u'MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/batchnorm/Rsqrt', u'MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/batchnorm/mul', u'MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/batchnorm/mul_1', u'MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/batchnorm/mul_2', u'MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/batchnorm/sub', u'MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/batchnorm/add_1', u'MobilenetV1/MobilenetV1/Conv2d_0/Relu6', u'MobilenetV1/Conv2d_1_depthwise/depthwise_weights', u'MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read', ... ...]
Now when layerXname = 'MobilenetV1/MobilenetV1/Conv2d_0/convolution', the activations are the same in both of the cases described above (i.e. layerXactivations and layerXactivations_batch[0] are identical).
But after this layer, all layers have different activation values. My feeling is that the batchNorm operations after the 'MobilenetV1/MobilenetV1/Conv2d_0/convolution' layer behave differently for batch inputs than for a single image. Or is the issue caused by something else?
Any help/pointers would be appreciated.

When you build the MobileNet there is a parameter called is_training. If you don't set it to False, the dropout layer and the batch normalization layers will give you different results in different iterations. Batch normalization will probably change the values only slightly, but dropout will change them a lot, as it drops some of the input values.
Take a look at the signature of mobilenet_v1 (a sketch of building it with is_training=False follows the docstring below):
def mobilenet_v1(inputs,
                 num_classes=1000,
                 dropout_keep_prob=0.999,
                 is_training=True,
                 min_depth=8,
                 depth_multiplier=1.0,
                 conv_defs=None,
                 prediction_fn=tf.contrib.layers.softmax,
                 spatial_squeeze=True,
                 reuse=None,
                 scope='MobilenetV1'):
  """Mobilenet v1 model for classification.

  Args:
    inputs: a tensor of shape [batch_size, height, width, channels].
    num_classes: number of predicted classes.
    dropout_keep_prob: the percentage of activation values that are retained.
    is_training: whether is training or not.
    min_depth: Minimum depth value (number of channels) for all convolution ops.
      Enforced when depth_multiplier < 1, and not an active constraint when
      depth_multiplier >= 1.
    depth_multiplier: Float multiplier for the depth (number of channels)
      for all convolution ops. The value must be greater than zero. Typical
      usage will be to set this value in (0, 1) to reduce the number of
      parameters or computation cost of the model.
    conv_defs: A list of ConvDef namedtuples specifying the net architecture.
    prediction_fn: a function to get predictions out of logits.
    spatial_squeeze: if True, logits is of shape is [B, C], if false logits is
      of shape [B, 1, 1, C], where B is batch_size and C is number of classes.
    reuse: whether or not the network and its variables should be reused. To be
      able to reuse 'scope' must be given.
    scope: Optional variable_scope.

  Returns:
    logits: the pre-softmax activations, a tensor of size
      [batch_size, num_classes]
    end_points: a dictionary from components of the network to the corresponding
      activation.

  Raises:
    ValueError: Input rank is invalid.
  """

This is due to Batch Normalisation.
How are you running inference? Are you loading the model from the checkpoint files, or are you using a frozen protobuf model? If you use a frozen model you can expect similar results for different formats of inputs.
Check this out. A similar issue for a different application is raised here.
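If you go the frozen-model route, a minimal TF1-style sketch (the output node name and checkpoint path are assumptions; check your own graph):

import tensorflow as tf

output_node = 'MobilenetV1/Predictions/Reshape_1'  # assumed output node name
checkpoint_path = '/tmp/retrain_checkpoint'        # hypothetical checkpoint prefix

with tf.Session(graph=tf.get_default_graph()) as sess:
    tf.train.Saver().restore(sess, checkpoint_path)  # load the trained variables
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), [output_node])  # bake variables (incl. BN statistics) into constants
    with tf.gfile.GFile('/tmp/frozen_mobilenet.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())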

Related

How to increase the rank (ndim) of input of BERT keras hub layer for learning-to-rank

I am trying to implement a learning-to-rank model using a pre-trained BERT available on tensorflow hub. I am using a variation of ListNet loss function, which requires each training instance to be a list of several ranked documents in relation to a query. I need the model to be able to accept data in a shape (batch_size, list_size, sentence_length), where the model loops over the 'list_size' axis in each training instance, returns the ranks and passes them to the loss function. In a simple model that only consists of dense layers, this is easily done by augmenting the dimensions of the input layer. For example:
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras import Model
input = Input([6,10])
x = Dense(20,activation='relu')(input)
output = Dense(1, activation='sigmoid')(x)
model = Model(inputs=input, outputs=output)
Now the model will perform 6 forward passes over vectors of length 10 before calculating the loss and updating the gradients.
I am trying to do the same with the BERT model and its preprocessing layer:
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text as text
bert_preprocess_model = hub.KerasLayer('https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3')
bert_model = hub.KerasLayer('https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1')
text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name='text')
processed_input = bert_preprocess_model(text_input)
output = bert_model(processed_input)
model = tf.keras.Model(text_input, output)
But when I try to change the shape of 'text_input' to, say, (6), or meddle with it in any way really, it always results in the same type of error:
ValueError: Could not find matching function to call loaded from the SavedModel. Got:
Positional arguments (3 total):
* Tensor("inputs:0", shape=(None, 6), dtype=string)
* False
* None
Keyword arguments: {}
Expected these arguments to match one of the following 4 option(s):
Option 1:
Positional arguments (3 total):
* TensorSpec(shape=(None,), dtype=tf.string, name='sentences')
* False
* None
Keyword arguments: {}
....
As per https://www.tensorflow.org/hub/api_docs/python/hub/KerasLayer, it seems like you can configure the input shape of hub.KerasLayer via tf.keras.layers.InputSpec. In my case, I guess it would be something like this:
bert_preprocess_model.input_spec = tf.keras.layers.InputSpec(ndim=2)
bert_model.input_spec = tf.keras.layers.InputSpec(ndim=2)
When I run the above code, the attributes indeed get changed, but when trying to build the model, the same exact error appears.
Is there any way to easily resolve this without having to create a custom training loop?
Suppose you have a batch of B examples, each with exactly N text strings, which makes a 2-dimensional Tensor of shape [B, N]. With tf.reshape(), you can turn that into a 1-dimensional tensor of shape [B*N], send it through BERT (which preserves the order of inputs) and then reshape it back to [B,N]. (There's also tf.keras.layers.Reshape, but that hides the batch dimension from you.)
If it's not exactly N text strings each time, you'll have to do some bookkeeping on the side (e.g., store inputs in a tf.RaggedTensor, run BERT on its .values, and construct a new RaggedTensor with the same .row_splits from the result.)
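Building on that, a minimal sketch of the reshape approach; the list size N=6, the Dense scoring head and the use of 'pooled_output' are assumptions for illustration:

import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text as text  # registers the ops the preprocess model needs

N = 6  # assumed list size
preprocess = hub.KerasLayer('https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3')
encoder = hub.KerasLayer('https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1')

text_input = tf.keras.layers.Input(shape=(N,), dtype=tf.string, name='text')  # [B, N]
flat = tf.reshape(text_input, [-1])                    # [B*N]; BERT preserves the order of inputs
encoded = encoder(preprocess(flat))['pooled_output']   # [B*N, hidden_size]
scores = tf.keras.layers.Dense(1)(encoded)             # [B*N, 1]: one score per document
ranks = tf.reshape(scores, [-1, N])                    # back to [B, N] for the listwise loss
model = tf.keras.Model(text_input, ranks)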

Resizing layer in Tensorflow crashes because of different picture shapes

I'm new to TensorFlow. I have an image classification problem with images of different sizes. In the documentation I read that it is beneficial to do the resizing inside the model instead of in the dataset.map() function.
I batch my dataset like this:
ds_train = ds_train\
    .batch(BATCH_SIZE)\
    .prefetch(tf.data.experimental.AUTOTUNE)
My model is very simple:
base_model = tf.keras.applications.ResNet50V2(
    include_top=True, weights=None, input_tensor=None, input_shape=(224,224,3),
    pooling=None, classes=NUM_CLASSES, classifier_activation='softmax')
seed = 42
model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, None, 3)),
    tf.keras.layers.experimental.preprocessing.Resizing(224, 224),
    tf.keras.layers.experimental.preprocessing.Rescaling(1./255),
    tf.keras.layers.experimental.preprocessing.RandomFlip(mode='horizontal_and_vertical', seed=seed),
    base_model
])
This gives me the error:
InvalidArgumentError: Cannot add tensor to the batch: number of elements does not match. Shapes are: [tensor]: [95,116,3], [batch]: [108,112,3].
How can I use the resize layer with batching?
The error is that you cannot batch elements of different sizes. There's unfortunately no way around that. The documentation specifies that preprocessing inside the model is useful at inference (i.e. when you call model.predict()).
The key benefit to doing this is that it makes your model portable [...] When all data preprocessing is part of the model, other people can load and use your model without having to be aware of how each feature is expected to be encoded & normalized. Your inference model will be able to process raw images or raw structured data, and will not require users of the model to be aware of the details of e.g. the tokenization scheme used for text, the indexing scheme used for categorical features, whether image pixel values are normalized to [-1, +1] or to [0, 1], etc.
During training, if you want to use a batch size greater than 1 and the images have different sizes, you will need to do the resizing yourself. You can do that with tf.data.Dataset.map(), as in the sketch below.
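A minimal sketch of that, assuming ds_train yields (image, label) pairs and reusing the names from the question:

import tensorflow as tf

IMG_SIZE = 224  # assumed target size, matching the ResNet50V2 input above

def resize_image(image, label):
    # Resize before batching so every element in a batch has the same shape.
    image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
    return image, label

ds_train = (ds_train
            .map(resize_image, num_parallel_calls=tf.data.experimental.AUTOTUNE)
            .batch(BATCH_SIZE)
            .prefetch(tf.data.experimental.AUTOTUNE))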

Adding softmax layer to LSTM network "freezes" output

I've been trying to teach myself the basics of RNNs with a personal project in PyTorch. I want to build a simple network that is able to predict the next character in a sequence (the idea comes mainly from this article http://karpathy.github.io/2015/05/21/rnn-effectiveness/ but I wanted to do most of it myself).
My idea is this: I take a batch of B input sequences of size n (np array of n integers), one-hot encode them and pass them through my network, composed of several LSTM layers, one fully connected layer and one softmax unit.
I then compare the output to the target sequences, which are the input sequences shifted one step ahead.
My issue is that when I include the softmax layer, the output is the same every single epoch for every single batch. When I don't include it, the network seems to learn appropriately. I can't figure out what's wrong.
My implementation is the following :
class Model(nn.Module):
    def __init__(self, one_hot_length, dropout_prob, num_units, num_layers):
        super().__init__()
        self.num_units = num_units
        self.LSTM = nn.LSTM(one_hot_length, num_units, num_layers, batch_first=True, dropout=dropout_prob)
        self.dropout = nn.Dropout(dropout_prob)
        self.fully_connected = nn.Linear(num_units, one_hot_length)
        self.softmax = nn.Softmax(dim=1)
        # dim = 1 as the tensor is of shape (batch_size*seq_length, one_hot_length) when entering the softmax unit

    def forward_pass(self, input_seq, hc_states):
        output, hc_states = self.LSTM(input_seq, hc_states)
        output = output.view(-1, self.num_units)
        output = self.fully_connected(output)
        # I simply comment out the next line when I run the network without the softmax layer
        output = self.softmax(output)
        return output, hc_states
one_hot_length is the size of my character dictionary (~200, also the size of a one-hot encoded vector),
num_units is the number of hidden units in an LSTM cell, and num_layers is the number of LSTM layers in the network.
The inside of the training loop (simplified) goes as follows :
input, target = next_batches(data, batch_pointer)
input = nn.functional.one_hot(input, num_classes=one_hot_length).float()
for state in hc_states:
    state.detach_()
optimizer.zero_grad()
output, hc_states = net.forward_pass(input, hc_states)
loss = nn.CrossEntropyLoss()(output, target.view(-1))
loss.backward()
nn.utils.clip_grad_norm_(net.parameters(), MaxGradNorm)
optimizer.step()
Here hc_states is a tuple with the hidden states tensor and the cell states tensor, input is a tensor of size (B, n, one_hot_length), and target is (B, n).
I'm training on a really small dataset (sentences in a .txt of ~400 KB) just to tune my code, and did 4 different runs with different parameters; each time the outcome was the same: the network doesn't learn at all when it has the softmax layer, and trains somewhat appropriately without it.
I don't think it is an issue with tensor shapes, as I'm almost sure I checked everything.
My understanding of my problem is that I'm trying to do classification, and that the usual approach is to put a softmax unit at the end to get "probabilities" of each character appearing, but clearly this isn't right.
Any ideas to help me?
I'm also fairly new to PyTorch and RNNs, so I apologize in advance if my architecture/implementation is some kind of monstrosity to a knowledgeable person. Feel free to correct me, and thanks in advance.
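For what it's worth, the usual pattern is to feed raw logits to nn.CrossEntropyLoss (which applies log-softmax internally) and apply softmax only when probabilities are actually needed, e.g. when sampling characters. A minimal self-contained sketch with assumed sizes:

import torch
import torch.nn as nn

B, n, one_hot_length = 4, 10, 200             # assumed sizes matching the question
logits = torch.randn(B * n, one_hot_length)   # what fully_connected would return (no softmax)
target = torch.randint(one_hot_length, (B, n))

criterion = nn.CrossEntropyLoss()             # applies log-softmax + NLL internally
loss = criterion(logits, target.view(-1))     # raw logits, targets flattened to (B*n,)

probs = torch.softmax(logits, dim=1)          # softmax only for sampling/inspection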

Layers to be used after using a pretrained model: When to add GlobalAveragePooling2D()

I am using pretrained models to classify images. My question is what kind of layers I have to add after using the pretrained model structure in my model, and why these two implementations differ. To be specific:
Consider two examples, one using the cats and dogs dataset:
One implementation can be found here. The crucial point is that the base model:
# Create the base model from the pre-trained model MobileNet V2
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')
base_model.trainable = False
is frozen and a GlobalAveragePooling2D() is added, before a final tf.keras.layers.Dense(1) is added. So the model structure looks like:
model = tf.keras.Sequential([
    base_model,
    global_average_layer,
    prediction_layer
])
which is equivalent to:
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1)
])
So they added not only a final Dense(1) layer, but also a GlobalAveragePooling2D() layer before it.
The other using the tf flowers dataset:
In this implementation it is different. A GlobalAveragePooling2D() is not added.
feature_extractor_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/2"
feature_extractor_layer = hub.KerasLayer(feature_extractor_url,
                                         input_shape=(224,224,3))
feature_extractor_layer.trainable = False
model = tf.keras.Sequential([
    feature_extractor_layer,
    layers.Dense(image_data.num_classes)
])
Here image_data.num_classes is 5, representing the different flower classes. So in this example a GlobalAveragePooling2D() layer is not added.
I do not understand this. Why is this different? When should a GlobalAveragePooling2D() be added or not? And what is better / what should I do?
I am not sure if the reason is that in one case the cats and dogs dataset is a binary classification problem and in the other it is a multiclass classification problem. Or the difference is that in one case tf.keras.applications.MobileNetV2 was used to load MobileNetV2 and in the other implementation hub.KerasLayer was used to get the feature_extractor. When I check the model in the first implementation:
I can see that the last layer is a relu activation layer.
When I check the feature_extractor:
model = tf.keras.Sequential([
    feature_extractor,
    tf.keras.layers.Dense(1)
])
model.summary()
I get the model summary as output.
So maybe the reason is also that I do not understand the difference between tf.keras.applications.MobileNetV2 and hub.KerasLayer. The hub.KerasLayer just gives me the feature extractor. I know this, but I still think I did not get the difference between these two methods.
I cannot check the layers of the feature_extractor itself, since feature_extractor.summary() and feature_extractor.layers do not work. How can I inspect the layers here? And how can I know whether I should add GlobalAveragePooling2D or not?
Summary
Why is this different? When to add a GlobalAveragePooling2D() or not? And what is better / should I do?
In the first case, the model outputs 4-dimensional tensors that are the raw outputs of the last convolutional layer, so you need to flatten them somehow; in this example you are using GlobalAveragePooling2D (but you could use any other strategy). I can't tell which is better: it depends on your problem, and depending on how the hub.KerasLayer version implemented the flattening, they could be exactly the same. That said, I'd just pick one of them and go on: I don't see huge differences between them.
Long answer: understanding Keras implementation
The difference is in the output of both base models: in your Keras example, outputs are of shape (bz, hh, ww, nf), where bz is the batch size, hh and ww are the height and width of the last convolutional layer in the model, and nf is the number of filters (or convolutions) applied in this last layer.
So: this is the output of the last convolutions (or filters) of the base model.
Hence, you need to convert those outputs (which you can think of as images) to vectors of shape (bz, n_feats), where n_feats is the number of features the base model is computing. Once this conversion is done, you can stack your classification layer (or as many layers as you want), because at this point you have vectors.
How do you compute this conversion? Some common alternatives are taking the average or maximum over the convolutional output (which reduces its size), or you could just reshape it into a single row, or add more convolutional layers until you get a vector as output (I strongly suggest following the usual practices of average or maximum pooling), as sketched below.
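As a rough sketch of those alternatives (the shapes are made up for illustration):

import tensorflow as tf

features = tf.random.normal([8, 7, 7, 1280])  # (bz, hh, ww, nf) output of a base model

avg = tf.keras.layers.GlobalAveragePooling2D()(features)  # (8, 1280): average over hh and ww
mx = tf.keras.layers.GlobalMaxPooling2D()(features)       # (8, 1280): maximum over hh and ww
flat = tf.keras.layers.Flatten()(features)                # (8, 7*7*1280): simply reshaped into one row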
In your first example, when calling tf.keras.applications.MobileNetV2, you are using the default policy with respect to this last layer, and hence the last convolutional layer is left "as is": some convolutions. You can modify this behaviour with the parameter pooling, as documented here:
pooling: Optional pooling mode for feature extraction when include_top is False.
None (default) means that the output of the model will be the 4D tensor output of the last convolutional block.
avg means that global average pooling will be applied to the output of the last convolutional block, and thus the output of the model will be a 2D tensor.
max means that global max pooling will be applied.
In summary, in your first example you are building the base model without telling it explicitly what to do with the last layer, so the model keeps returning 4-dimensional tensors that you immediately convert to vectors with average pooling. You can avoid this explicit average pooling if you tell Keras to do it:
# Create the base model from the pre-trained model MobileNet V2
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               pooling='avg',  # Tell keras to average last layer
                                               weights='imagenet')
base_model.trainable = False
model = tf.keras.Sequential([
    base_model,
    # global_average_layer, -> not needed any more
    prediction_layer
])
TFHub implementation
Finally, when you use the TensorFlow Hub implementation, since you picked the feature_vector version of the model, it already implements some kind of pooling (I haven't found exactly which) to make sure the model outputs vectors rather than 4-dimensional tensors. So you don't need to add the conversion layer explicitly, because it is already done.
In my opinion, I prefer the Keras implementation, since it gives you more freedom to pick the strategy you want (in fact, you can keep stacking whatever you want).
Let's say there is a model taking [1, 208, 208, 3] images that has 6 pooling layers with kernels [2, 2, 2, 2, 2, 7], which would result in a feature column for the image of [1, 1, 1, 2048] for 2048 filters in the last conv layer. Note how the last pooling layer accepts [1, 7, 7, 2048] inputs.
If we relax the constraints on the input image (which is typically the case for object detection models), then after the same set of pooling layers an image of size [1, 104, 208, 3] would produce a pre-last-pooling output of [1, 4, 7, 2048] and [1, 256, 408, 3] would yield [1, 8, 13, 2048]. These maps would have about the same amount of information as the original [1, 7, 7, 2048], but the original pooling layer would not produce a feature column of [1, 1, 1, N]. That is why we switch to a global pooling layer.
In short, a global pooling layer is important if we don't have a strict restriction on the input image size (and don't resize the image as the first op in the model).
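A quick sketch of that point, reusing the shapes from the example above: the spatial dimensions differ, but global pooling always yields one value per filter.

import tensorflow as tf

gap = tf.keras.layers.GlobalAveragePooling2D()
print(gap(tf.random.normal([1, 7, 7, 2048])).shape)   # (1, 2048)
print(gap(tf.random.normal([1, 4, 7, 2048])).shape)   # (1, 2048)
print(gap(tf.random.normal([1, 8, 13, 2048])).shape)  # (1, 2048)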
I think the difference is in the output of the models:
"https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/2" outputs a 1-D vector per example (times batch_size), so you just can't apply Conv2D to it.
The output of tf.keras.applications.MobileNetV2 is probably more complex, so you have more capability to transform it.
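One way to see the difference (and to inspect the shapes asked about above) is simply to call both on a dummy batch and compare the outputs; this is a minimal sketch, and the commented shapes are what I would expect for 224x224 inputs:

import tensorflow as tf
import tensorflow_hub as hub

base_model = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights='imagenet')
feature_extractor = hub.KerasLayer(
    "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/2",
    input_shape=(224, 224, 3))

dummy = tf.zeros([1, 224, 224, 3])
print(base_model(dummy).shape)         # 4-D, e.g. (1, 7, 7, 1280): still needs pooling/flattening
print(feature_extractor(dummy).shape)  # 2-D, e.g. (1, 1280): already a feature vector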

pycaffe get gradients/weights/biases

So I have initialized a caffe.Net object with
network = caffe.Net('path/to/lenet.prototxt', caffe.TEST)
and I want to get the activations, weights, biases, and gradients for each layer with parameters. My current approach is to do a step(100) to go through 100 iterations and then look at every layer:
for layer_name in network._layer_names:
    if layer_name in network.params:
        x = layer_name
        output = np.array(network.blobs[x].data)
        weight = np.array(network.params[x][0].data)
        bias = np.array(network.params[x][1].data)
This should give me the activations, the weights, and the biases of each layer. Then I save them. I have no idea how to get the gradients, though.
Is this approach for the weights/biases/activations the right one?
Instead of .data use .diff
Implementation Details
As we are often interested in the values as well as the gradients of the blob, a Blob stores two chunks of memories, data and diff. The former is the normal data that we pass along, and the latter is the gradient computed by the network.
http://caffe.berkeleyvision.org/tutorial/net_layer_blob.html
Note that these will be initialized to zero, unless you have run some training steps.
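A minimal sketch of reading the diffs, assuming a solver is driving training so that backward passes actually run (the solver prototxt path is hypothetical):

import numpy as np
import caffe

solver = caffe.SGDSolver('path/to/lenet_solver.prototxt')  # hypothetical solver definition
solver.step(100)  # forward/backward passes fill both .data and .diff
net = solver.net

for layer_name in net._layer_names:
    if layer_name in net.params:
        activation = np.array(net.blobs[layer_name].data)      # layer output (activations)
        act_grad = np.array(net.blobs[layer_name].diff)        # gradient w.r.t. the layer output
        weight_grad = np.array(net.params[layer_name][0].diff) # gradient w.r.t. the weights
        bias_grad = np.array(net.params[layer_name][1].diff)   # gradient w.r.t. the biases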
