Tensorflow upsampling by zero insertion with multiple dimensions - python

I have a series of 1D time series that, through a series of convolutional layers, end up in the form of:
(batch_size, time_series_length, num_filters)
I would like to manually upsample the tensors by inserting alternating zeros (much like a transposed convolution), such that the new dimensionality becomes
(batch_size, 2*time_series_length, num_filters)
in order to be able to include an additional step before a convolutional layer. It is simple to do this in numpy, for example, with np.insert, but how does one do it with tensors?
I have looked at a few similar posts such as this, but I don't understand how to do this with multiple dimensions while preserving the other dimensions. Any thoughts?
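For reference, a minimal NumPy sketch of the kind of zero insertion I mean (the array sizes here are just placeholders):
import numpy as np

x = np.random.rand(2, 6, 3)            # (batch_size, time_series_length, num_filters)
idx = np.arange(1, x.shape[1] + 1)     # insert a zero after every time step
x_up = np.insert(x, idx, 0.0, axis=1)  # shape becomes (2, 12, 3)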

I was working on a similar problem with images. I wanted to go from batch, height, width, in_channels to batch, 2*height, 2*width, in_channels. Like you said this is very much like a transposed convolution so I ended up using tf.nn.conv2d_transpose with strides=2 and filters=tf.ones([1, 1, 1, 1]):
upsampled_images = tf.nn.conv2d_transpose(images, tf.ones([1, 1, 1, 1]), output_shape, strides=2, padding='VALID')
This worked perfectly so I think the same will be true for 1d by just using tf.nn.conv1d_transpose with filters=tf.ones([1, 1, 1]).
I know this question is old and you probably figured out a way since, but I was looking for the answer for long myself, so it will probably help others.
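For the 1D case in the question, a minimal sketch of that idea might look like this (sizes are placeholders and a single channel is assumed; see the edit below for multiple channels):
import tensorflow as tf

x = tf.random.normal([4, 8, 1])                    # (batch, time_series_length, 1)
output_shape = [4, 16, 1]                          # twice the time dimension
upsampled = tf.nn.conv1d_transpose(
    x, tf.ones([1, 1, 1]),                         # width-1 identity filter
    output_shape, strides=2, padding='VALID')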
EDIT
As pointed out by @A Roebel, this answer works only for single-channel images.
Here is an extension to the multi-channel case, with a complete example:
import tensorflow as tf
image = tf.random.normal(shape=[1, 2, 2, 2])
def enlarge_one_channel_images(images):
    batch_size, height, width, n_channels = tf.shape(images)  # might not work in graph mode
    output_shape = [batch_size, 2*height, 2*width, 1]
    upsampled_images = tf.nn.conv2d_transpose(images, tf.ones([1, 1, 1, 1]), output_shape, strides=2, padding='VALID')
    return upsampled_images

image_reshaped = tf.transpose(image, [3, 0, 1, 2])[..., None]
batch_size, height, width, n_channels = tf.shape(image)  # might not work in graph mode
expected_output_shape = [batch_size, 2*height, 2*width, 1]
image_reshaped_enlarged = tf.map_fn(
    enlarge_one_channel_images,
    image_reshaped,
    fn_output_signature=tf.TensorSpec(expected_output_shape)
)
image_enlarged = tf.transpose(image_reshaped_enlarged[..., 0], [1, 2, 3, 0])
As also pointed out by @A Roebel in his answer, this might not be the most efficient solution, however.
I have not run the tests myself, but I agree that the additional convolution with the identity filter will surely slow things down, although I am not sure exactly how much acceleration to expect from using tf.function.

The short answer is: use tf.scatter_nd
The tricky part is constructing the indices for this operation.
The following code example shows how you can do this for Tensors with arbitrarily many dimensions.
import itertools
import numpy as np
import tensorflow as tf
def pad_strided(x, strides, name=None):
    # Preparatory steps and sanity checks.
    input_shape = x.shape.as_list()
    # Because life gets easier, we let the consumer specify a striding value for EACH dimension
    assert len(strides) == len(input_shape), "Rank of strides and x.shape must be the same"
    output_shape = [s_in * s for s_in, s in zip(input_shape, strides)]

    """
    Calculate the striding indices for EACH dimension.
    """
    index_ranges = [list(range(0, s_out, s)) for s_out, s in zip(output_shape, strides)]

    """
    Expand the indices per dimension. The resulting array has shape [n_elements, n_dims].
    n_elements is the number of values in the input tensor x, i.e. the product of the input
    shape. n_dims is the number of input (and output) dimensions.
    """
    indices_flat = np.array(list(itertools.product(*index_ranges)))

    """
    Reshape the flat index array to have the same dimensions as the input plus an additional
    dimension. If the input had shape [s0, s1, ..., sn], then indices will have shape
    [s0, s1, ..., sn, n_dims], i.e. the rank will be 1 higher than that of the input tensor.
    """
    indices = np.reshape(indices_flat, input_shape + [-1])

    """ Now we simply call the TensorFlow operator """
    with tf.variable_scope(name, default_name="pad_strided"):
        t_indices = tf.constant(indices, dtype=tf.int32, name="indices")
        t_output_shape = tf.constant(output_shape, name="output_shape")
        return tf.scatter_nd(t_indices, x, t_output_shape)
session = tf.Session()
batch_size = 1
time_series_length = 6
num_filters = 3
t_in = tf.random.uniform((batch_size, time_series_length, num_filters))
# Specify a stride 2 for the time_series dimension
t_out = pad_strided(t_in, strides=[1, 2, 1])
original, strided = session.run([t_in, t_out])
print(f"Input Tensor:\n{original[:,:,:]}")
print(f"Output Tensor:\n{strided[:,:,:]}")
The output would then be for instance
Input Tensor:
[[[0.0678339 0.07883668 0.49193358]
[0.5029118 0.8639555 0.74302936]
[0.995087 0.6315181 0.11990702]
[0.95606446 0.29059124 0.12656784]
[0.8278991 0.8518325 0.4033165 ]
[0.78434443 0.7894305 0.6251142 ]]]
Output Tensor:
[[[0.0678339 0.07883668 0.49193358]
[0. 0. 0. ]
[0.5029118 0.8639555 0.74302936]
[0. 0. 0. ]
[0.995087 0.6315181 0.11990702]
[0. 0. 0. ]
[0.95606446 0.29059124 0.12656784]
[0. 0. 0. ]
[0.8278991 0.8518325 0.4033165 ]
[0. 0. 0. ]
[0.78434443 0.7894305 0.6251142 ]
[0. 0. 0. ]]]

I just had the same problem and found an issue in the solution shared by zaccharie-ramzi. The given solution does not work for signals with more than a single channel. Below I suggest a fix for the solution with convXd_transpose, together with a more efficient solution by means of reshaping and padding.
If you store the code below in a script named ./upsample_with_padding.py, you can reproduce the following experiments. The script starts with the tensor
sig = tf.ones((60,10000,args.n_channels))
that is supposed to be upsampled by a factor upfac by inserting 0s in the time direction for all channels. The default upfac is 4 and the default number of channels is 2.
You can run it with the --check argument to see the shapes and verify that the results obtained with the padding solution and with the corrected transposed-convolution implementation of the answer above are equivalent.
> ./upsample_with_padding.py --check
upsig_conv (60, 40000, 2)
upsig_pad (60, 40000, 2)
diff: tf.Tensor(0.0, shape=(), dtype=float32)
Comparing the computational speed, we can see that the use of padding is much more efficient:
> ./upsample_with_padding.py
timeit conv: 9.84551206199103
timeit pad : 1.459020125999814
This is expected because the convXd_transpose operation will perform padding as well, but then has to convolve with an identity filter.
Here is the script:
#! /usr/bin/env python3

import os
# silence verbose TF feedback
if 'TF_CPP_MIN_LOG_LEVEL' not in os.environ:
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = "2"
from argparse import ArgumentParser
import tensorflow as tf
import timeit

def up_pad(sig, upfac):
    upsigp = tf.expand_dims(sig, axis=2)
    upsigp = tf.pad(upsigp, ((0, 0), (0, 0), (0, upfac-1), (0, 0)))
    return tf.reshape(upsigp, shape=(sig.shape[0], sig.shape[1]*upfac, sig.shape[2]))

def up_conv(sig, upfac):
    upsigc = tf.expand_dims(sig, axis=-1)
    filter = tf.ones([1, 1, 1, 1])
    return tf.nn.conv2d_transpose(upsigc, filters=filter, strides=(upfac, 1), padding="VALID", data_format="NHWC",
                                  output_shape=(sig.shape[0], sig.shape[1]*upfac, sig.shape[2], 1))[:, :, :, 0]

parser = ArgumentParser()
parser.add_argument("--check", action="store_true")
parser.add_argument("--upfac", default=4, type=int)
parser.add_argument("--n_channels", default=2, type=int)
args = parser.parse_args()

sig = tf.ones((60, 10000, args.n_channels))

if args.check:
    upsig_conv = up_conv(sig, upfac=args.upfac)
    upsig_pad = up_pad(sig, upfac=args.upfac)
    print(f"upsig_conv {upsig_conv.shape}")
    print(f"upsig_pad {upsig_pad.shape}")
    print("diff:", tf.reduce_max(tf.abs(upsig_conv - upsig_pad)))
else:
    print("timeit conv:", timeit.timeit(f'up_conv(sig, upfac={args.upfac})', globals=globals(), number=3000))
    print("timeit pad :", timeit.timeit(f'up_pad(sig, upfac={args.upfac})', globals=globals(), number=3000))

Here is a solution which inserts factor - 1 zeros in between time samples for a tensor of shape (batch_size, time_series_length, num_channels):
def upsample(x, factor):
    # x has shape (batch_size, time_series_length, num_channels)
    L = tf.shape(x)[1]  # time series length
    ## repeat each sample `factor` times
    x = tf.repeat(x, tf.repeat(factor, L), axis=1)
    ## create a mask in order to replace the inserted samples by zeroes
    mask = tf.reshape(tf.repeat([tf.concat([[float(factor)], tf.zeros(factor - 1)], 0)], L, axis=0), [-1])
    # mask looks like [factor, 0, 0, 0, factor, 0, 0, 0, ...] (here factor = 4);
    # casting factor to float keeps the dtypes consistent for tf.concat
    ## multiply by mask
    x = x * mask[tf.newaxis, :, tf.newaxis]  # mask is reshaped to broadcast the multiplication along axis 1
    ## low-pass filtering:
    # from scipy.signal import firwin2
    # filters = tf.convert_to_tensor(firwin2(32*factor, [0.0, 0.95/factor, 1.0/factor, 1.0], [1.0, 1.0, 0.0, 0.0], window="blackman"), tf.float32)[:, tf.newaxis, tf.newaxis]
    # x = tf.nn.conv1d(x, filters, 1, 'SAME')
    return x

Related

Tensorflow: Keep 10% of the largest entries of a tensor

I want to filter a tensor by keeping the 10% largest entries. Is there a Tensorflow function to do that? What would a possible implementation look like? I am looking for something that can handle tensors of shape [N,W,H,C] and [N,W*H*C].
By filter I mean that the shape of the tensor remains the same but only the largest 10% are kept. Thus all entries become zero except the 10% largest.
Is that possible?
The correct way of doing this would be computing the 90th percentile, for example with tf.contrib.distributions.percentile:
import tensorflow as tf
images = ... # [N, W, H, C]
n = tf.shape(images)[0]
images_flat = tf.reshape(images, [n, -1])
p = tf.contrib.distributions.percentile(images_flat, 90, axis=1, interpolation='higher')
images_top10 = tf.where(images >= tf.reshape(p, [n, 1, 1, 1]),
                        images, tf.zeros_like(images))
If you want to be ready for TensorFlow 2.x, where tf.contrib will be removed, you can instead use TensorFlow Probability, which is where the percentile function will be permanently in the future.
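For example, a sketch of the same computation with TensorFlow Probability (assuming the tensorflow_probability package is installed; tfp.stats.percentile mirrors the tf.contrib call above, and the images tensor here is just a placeholder):
import tensorflow as tf
import tensorflow_probability as tfp

images = tf.random.uniform([8, 32, 32, 3])   # placeholder input [N, W, H, C]
n = tf.shape(images)[0]
images_flat = tf.reshape(images, [n, -1])
p = tfp.stats.percentile(images_flat, 90.0, axis=1, interpolation='higher')
images_top10 = tf.where(images >= tf.reshape(p, [n, 1, 1, 1]),
                        images, tf.zeros_like(images))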
EDIT: If you want to do the filtering per channel, you can modify the code slightly like this:
import tensorflow as tf
images = ... # [N, W, H, C]
shape = tf.shape(images)
n, c = shape[0], shape[3]
images_flat = tf.reshape(images, [n, -1, c])
p = tf.contrib.distributions.percentile(images_flat, 90, axis=1, interpolation='higher')
images_top10 = tf.where(images >= tf.reshape(p, [n, 1, 1, c]),
                        images, tf.zeros_like(images))
I've not found any built-in method yet. Try this workaround:
import numpy as np
import tensorflow as tf
def filter(tensor, ratio):
    num_entries = tf.reduce_prod(tensor.shape)
    num_to_keep = tf.cast(tf.multiply(ratio, tf.cast(num_entries, tf.float32)), tf.int32)
    # Calculate threshold
    x = tf.contrib.framework.sort(tf.reshape(tensor, [num_entries]))
    threshold = x[-num_to_keep]
    # Filter the tensor
    mask = tf.cast(tf.greater_equal(tensor, threshold), tf.float32)
    return tf.multiply(tensor, mask)
tensor = tf.constant(np.arange(40).reshape(2, 4, 5), dtype=tf.float32)
filtered_tensor = filter(tensor, 0.1)
# Print result
tf.InteractiveSession()
print(tensor.eval())
print(filtered_tensor.eval())
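A contrib-free sketch of the same idea, using tf.math.top_k to find the threshold instead of tf.contrib.framework.sort (the helper name keep_top_fraction is just illustrative):
import tensorflow as tf

def keep_top_fraction(tensor, ratio=0.1):
    # Flatten, find the k-th largest value, and zero out everything below it.
    flat = tf.reshape(tensor, [-1])
    k = tf.maximum(tf.cast(tf.cast(tf.size(flat), tf.float32) * ratio, tf.int32), 1)
    threshold = tf.math.top_k(flat, k).values[-1]
    mask = tf.cast(tensor >= threshold, tensor.dtype)
    return tensor * mask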

How to implement tf.nn.top_k with Numpy?

How can I implement the tensorflow function tf.nn.top_k with Numpy? Suppose the input is an ndarray in height x width x channel format?
You can use the answer here with Numpy 1.8 and up.
I spent more time on this than I wanted, because the other answers treated the whole multidimensional array as a single search, whereas top_k only looks at the last dimension. There's more information here, where argpartition is used to sort along a specific axis.
To summarize, based upon the tensorflow signature (without name):
def top_k(input, k=1, sorted=True):
    """Top k max pooling
    Args:
        input(ndarray): convolutional feature in height x width x channel format
        k(int): if k==1, it is equal to normal max pooling
        sorted(bool): whether to return the array sorted by channel value
    Returns:
        ndarray: k x (height x width)
        ndarray: k
    """
    ind = np.argpartition(input, -k)[..., -k:]

    def get_entries(input, ind, sorted):
        if len(ind.shape) == 1:
            if sorted:
                ind = ind[np.argsort(-input[ind])]
            return input[ind], ind
        output, ind = zip(*[get_entries(inp, id, sorted) for inp, id in zip(input, ind)])
        return np.array(output), np.array(ind)

    return get_entries(input, ind, sorted)
Keep in mind, for your answer, you tested with
arr = np.random.rand(3, 3, 3)
arr1, ind1 = top_k(arr)
arr2 = np.max(arr, axis=(0,1))
arr3, ind3 = tf.nn.top_k(arr)
print(arr1)
print(arr2)
print(arr3.numpy())
but arr2.shape is (3,) and arr3.numpy().shape is (3, 3, 1).
If you really want tf.nn.top_k like functionality, you should use np.array_equal(arr3, np.max(arr, axis=-1, keepdims=True)) as the test. I ran this with tf.enable_eager_execution() executed, hence the .numpy() instead of .eval().
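If what you actually need is tf.nn.top_k-style behavior along the last axis, a small NumPy sketch could look like this (the helper name top_k_last_axis is just illustrative; it assumes NumPy >= 1.15 for np.take_along_axis):
import numpy as np

def top_k_last_axis(a, k=1):
    # indices of the k largest entries along the last axis (unordered)
    idx = np.argpartition(a, -k, axis=-1)[..., -k:]
    vals = np.take_along_axis(a, idx, axis=-1)
    # sort the k entries in descending order, like tf.nn.top_k does
    order = np.argsort(-vals, axis=-1)
    return np.take_along_axis(vals, order, axis=-1), np.take_along_axis(idx, order, axis=-1)

arr = np.random.rand(3, 3, 3)
values, indices = top_k_last_axis(arr, k=1)   # values.shape == (3, 3, 1)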
import numpy as np

def top_k(input, k=1):
    """Top k max pooling
    Args:
        input(ndarray): convolutional feature in height x width x channel format
        k(int): if k==1, it is equal to normal max pooling
    Returns:
        ndarray: k x (height x width)
    """
    input = np.reshape(input, [-1, input.shape[-1]])
    input = np.sort(input, axis=0)[::-1, :][:k, :]
    return input
arr = np.random.rand(3, 3, 3)
arr1 = top_k(arr)
arr2 = np.max(arr, axis=(0,1))
print(arr1)
print(arr2)
assert np.array_equal(top_k(arr)[0], np.max(arr, axis=(0,1)))

Changing the scale of a tensor in tensorflow

Sorry if I messed up the title, I didn't know how to phrase this. Anyways, I have a tensor of a set of values, but I want to make sure that every element in the tensor has a range from 0 - 255, (or 0 - 1 works too). However, I don't want to make all the values add up to 1 or 255 like softmax, I just want to down scale the values.
Is there any way to do this?
Thanks!
You are trying to normalize the data. A classic normalization formula is this one:
normalize_value = (value − min_value) / (max_value − min_value)
The implementation in TensorFlow will look like this:
tensor = tf.div(
    tf.subtract(
        tensor,
        tf.reduce_min(tensor)
    ),
    tf.subtract(
        tf.reduce_max(tensor),
        tf.reduce_min(tensor)
    )
)
All the values of the tensor will be between 0 and 1.
IMPORTANT: make sure the tensor has float/double values, or the output tensor will have just zeros and ones. If you have an integer tensor, call this first:
tensor = tf.to_float(tensor)
Update: as of tensorflow 2, tf.to_float() is deprecated and instead, tf.cast() should be used:
tensor = tf.cast(tensor, dtype=tf.float32) # or any other tf.dtype, that is precise enough
According to the feature scaling article on Wikipedia, you can also try scaling to unit length:
It can be implemented using this segment of code:
In [3]: a = tf.constant([2.0, 4.0, 6.0, 1.0, 0])
In [4]: b = a / tf.norm(a)
In [5]: b.eval()
Out[5]: array([ 0.26490647, 0.52981293, 0.79471946, 0.13245323, 0. ], dtype=float32)
sigmoid(tensor) * 255 should do it.
Let the input be
X = tf.constant([[0.65,0.61, 0.59, 0.62, 0.6 ],[0.25,0.31, 0.89, 0.52, 0.6 ]])
We can define a scaling function
def rescale(X, a=0, b=1):
    repeat = X.shape[1]
    xmin = tf.repeat(tf.reshape(tf.math.reduce_min(X, axis=1), shape=[-1, 1]), repeats=repeat, axis=1)
    xmax = tf.repeat(tf.reshape(tf.math.reduce_max(X, axis=1), shape=[-1, 1]), repeats=repeat, axis=1)
    X = (X - xmin) / (xmax - xmin)
    return X * (b - a) + a
This outputs X in range [0,1]
>>rescale(X)
<tf.Tensor: shape=(2, 5), dtype=float32, numpy=
array([[1. , 0.333334 , 0. , 0.5000005 , 0.16666749],
[0. , 0.09375001, 1. , 0.42187497, 0.54687506]],
dtype=float32)>
To scale in range [0, 255]
>> rescale(X, 0, 255)
<tf.Tensor: shape=(2, 5), dtype=float32, numpy=
array([[255. , 85.00017 , 0. , 127.50012 , 42.50021 ],
[ 0. , 23.906252, 255. , 107.57812 , 139.45314 ]],
dtype=float32)>
In some contexts, you need to normalize each image separately - for example adversarial datasets where each image has noise. The following normalizes each image according to its own min and max, assuming the inputs have typical size Batch x YDim x XDim x Channels:
cast_input = tf.cast(inputs,dtype=tf.float32) # e.g. MNIST is integer
input_min = tf.reduce_min(cast_input,axis=[1,2]) # result B x C
input_max = tf.reduce_max(cast_input,axis=[1,2])
ex_min = tf.expand_dims(input_min,axis=1) # put back inner dimensions
ex_max = tf.expand_dims(input_max,axis=1)
ex_min = tf.expand_dims(ex_min,axis=1) # one at a time - better way?
ex_max = tf.expand_dims(ex_max,axis=1) # Now Bx1x1xC
input_range = tf.subtract(ex_max, ex_min)
floored = tf.subtract(cast_input,ex_min) # broadcast
scale_input = tf.divide(floored,input_range)
I would like to expand the dimensions in one shot like you can in NumPy, but tf.expand_dims seems to only accept one dimension at a time - open to suggestions here. Thanks!
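Regarding that open question: one way to add both singleton dimensions in one shot is plain indexing with tf.newaxis (a small sketch; the input tensor here is just a placeholder):
import tensorflow as tf

inputs = tf.random.uniform([4, 28, 28, 3])                            # B x Y x X x C placeholder
input_min = tf.reduce_min(tf.cast(inputs, tf.float32), axis=[1, 2])   # B x C
ex_min = input_min[:, tf.newaxis, tf.newaxis, :]                      # B x 1 x 1 x C in a single step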
If you want the maximum value to be the effective upper bound of the 0-1 range and there's a meaningful zero, then use this:
import tensorflow as tf
tensor = tf.constant([0, 1, 5, 10])
tensor = tf.divide(tensor, tf.reduce_max(tensor))
tf.print(tensor)
would result in:
[0 0.1 0.5 1]

What is the difference between 'SAME' and 'VALID' padding in tf.nn.max_pool of tensorflow?

What is the difference between 'SAME' and 'VALID' padding in tf.nn.max_pool of tensorflow?
In my opinion, 'VALID' means there will be no zero padding outside the edges when we do max pool.
According to A guide to convolution arithmetic for deep learning, there is no padding in the pool operator, i.e. just use 'VALID' in TensorFlow.
But what is 'SAME' padding of max pool in tensorflow?
If you like ascii art:
"VALID" = without padding:
inputs: 1 2 3 4 5 6 7 8 9 10 11 (12 13)
|________________| dropped
|_________________|
"SAME" = with zero padding:
pad| |pad
inputs: 0 |1 2 3 4 5 6 7 8 9 10 11 12 13|0 0
|________________|
|_________________|
|________________|
In this example:
Input width = 13
Filter width = 6
Stride = 5
Notes:
"VALID" only ever drops the right-most columns (or bottom-most rows).
"SAME" tries to pad evenly left and right, but if the amount of columns to be added is odd, it will add the extra column to the right, as is the case in this example (the same logic applies vertically: there may be an extra row of zeros at the bottom).
Edit:
About the name:
With "SAME" padding, if you use a stride of 1, the layer's outputs will have the same spatial dimensions as its inputs.
With "VALID" padding, there's no "made-up" padding inputs. The layer only uses valid input data.
When stride is 1 (more typical with convolution than pooling), we can think of the following distinction:
"SAME": the output size is the same as the input size. This requires the filter window to slip outside the input map, hence the need to pad.
"VALID": the filter window stays at valid positions inside the input map, so the output size shrinks by filter_size - 1. No padding occurs.
I'll give an example to make it clearer:
x: input image of shape [2, 3], 1 channel
valid_pad: max pool with 2x2 kernel, stride 2 and VALID padding.
same_pad: max pool with 2x2 kernel, stride 2 and SAME padding (this is the classic way to go)
The output shapes are:
valid_pad: here, no padding so the output shape is [1, 1]
same_pad: here, we pad the image to the shape [2, 4] (with -inf) and then apply max pool, so the output shape is [1, 2]
x = tf.constant([[1., 2., 3.],
[4., 5., 6.]])
x = tf.reshape(x, [1, 2, 3, 1]) # give a shape accepted by tf.nn.max_pool
valid_pad = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='VALID')
same_pad = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
valid_pad.get_shape() == [1, 1, 1, 1] # valid_pad is [5.]
same_pad.get_shape() == [1, 1, 2, 1] # same_pad is [5., 6.]
The TensorFlow Convolution example gives an overview of the difference between SAME and VALID:
For the SAME padding, the output height and width are computed as:
out_height = ceil(float(in_height) / float(strides[1]))
out_width = ceil(float(in_width) / float(strides[2]))
And
For the VALID padding, the output height and width are computed as:
out_height = ceil(float(in_height - filter_height + 1) / float(strides[1]))
out_width = ceil(float(in_width - filter_width + 1) / float(strides[2]))
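As a quick sanity check, plugging the ASCII-art example above (input width 13, filter width 6, stride 5) into these formulas:
import math

in_width, filter_width, stride = 13, 6, 5
same_out = math.ceil(in_width / stride)                        # 3 windows, thanks to zero padding
valid_out = math.ceil((in_width - filter_width + 1) / stride)  # 2 windows, right-most columns dropped
print(same_out, valid_out)  # 3 2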
Complementing YvesgereY's great answer, I found this visualization extremely helpful:
Padding 'valid' is the first figure. The filter window stays inside the image.
Padding 'same' is the third figure. The output is the same size.
Found it on this article
Visualization credits: vdumoulin@GitHub
Padding is an operation to increase the size of the input data. In case of 1-dimensional data you just append/prepend the array with a constant, in 2-dim you surround matrix with these constants. In n-dim you surround your n-dim hypercube with the constant. In most of the cases this constant is zero and it is called zero-padding.
For example, zero-padding with p=1 applied to a 2-d tensor surrounds the matrix with a one-element-wide border of zeros.
You can use arbitrary padding for your kernel, but some padding values are used much more frequently than others. They are:
VALID padding. The easiest case: it means no padding at all. Just leave your data as it was.
SAME padding, sometimes called HALF padding. It is called SAME because for a convolution with stride=1 (or for pooling) it should produce output of the same size as the input. It is called HALF because the amount of padding is roughly half of the kernel size (⌊k/2⌋ per side for an odd kernel of size k).
FULL padding is the maximum padding which does not result in a convolution over just padded elements. For a kernel of size k, this padding is equal to k - 1.
To use arbitrary padding in TF, you can use tf.pad()
Quick Explanation
VALID: Don't apply any padding, i.e., assume that all dimensions are valid so that the input image gets fully covered by the filter and stride you specified.
SAME: Apply padding to the input (if needed) so that the input image gets fully covered by the filter and stride you specified. For stride 1, this will ensure that the output image size is the same as the input.
Notes
This applies to conv layers as well as max pool layers in the same way.
The term "valid" is a bit of a misnomer because things don't become "invalid" if you drop part of the image. Sometimes you might even want that. This should probably have been called NO_PADDING instead.
The term "same" is a misnomer too because it only makes sense for a stride of 1, when the output dimension is the same as the input dimension. For a stride of 2, the output dimensions will be halved, for example. This should probably have been called AUTO_PADDING instead.
In SAME (i.e. auto-pad mode), Tensorflow will try to spread the padding evenly on both left and right.
In VALID (i.e. no padding mode), Tensorflow will drop right and/or bottom cells if your filter and stride don't fully cover the input image.
I am quoting this answer from official tensorflow docs https://www.tensorflow.org/api_guides/python/nn#Convolution
For the 'SAME' padding, the output height and width are computed as:
out_height = ceil(float(in_height) / float(strides[1]))
out_width = ceil(float(in_width) / float(strides[2]))
and the padding on the top and left are computed as:
pad_along_height = max((out_height - 1) * strides[1] +
filter_height - in_height, 0)
pad_along_width = max((out_width - 1) * strides[2] +
filter_width - in_width, 0)
pad_top = pad_along_height // 2
pad_bottom = pad_along_height - pad_top
pad_left = pad_along_width // 2
pad_right = pad_along_width - pad_left
For the 'VALID' padding, the output height and width are computed as:
out_height = ceil(float(in_height - filter_height + 1) / float(strides[1]))
out_width = ceil(float(in_width - filter_width + 1) / float(strides[2]))
and the padding values are always zero.
There are three choices of padding: valid (no padding), same (or half), full. You can find explanations (in Theano) here:
http://deeplearning.net/software/theano/tutorial/conv_arithmetic.html
Valid or no padding:
The valid padding involves no zero padding, so it covers only the valid input, not including artificially generated zeros. The length of output is ((the length of input) - (k-1)) for the kernel size k if the stride s=1.
Same or half padding:
The same padding makes the size of the outputs the same as that of the inputs when s=1. If s=1, the number of zeros padded is (k-1).
Full padding:
The full padding means that the kernel runs over the whole input, so at the ends the kernel may meet only one input value and zeros everywhere else. The number of zeros padded is 2(k-1) if s=1. The length of the output is ((the length of input) + (k-1)) if s=1.
Therefore, the number of paddings: (valid) <= (same) <= (full)
VALID padding: this means no padding at all (a padding amount of zero). Hope there is no confusion.
x = tf.constant([[1., 2., 3.], [4., 5., 6.],[ 7., 8., 9.], [ 7., 8., 9.]])
x = tf.reshape(x, [1, 4, 3, 1])
valid_pad = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='VALID')
print (valid_pad.get_shape()) # output-->(1, 2, 1, 1)
SAME padding: This is kind of tricky to understand in the first place because we have to consider two conditions separately as mentioned in the official docs.
Let's take the input size as n_i, the output size as n_o, the padding as p, the stride as s and the kernel size as k (only a single dimension is considered).
Case 01: n_i mod s == 0 : p = max(k - s, 0)
Case 02: n_i mod s != 0 : p = max(k - (n_i mod s), 0)
p is calculated such that it is the minimum amount of padding needed. Since the value of n_o is known (n_o = ceil(n_i / s)), the value of p can also be found using the formula p = max((n_o - 1) * s + k - n_i, 0).
Let's work out this example:
x = tf.constant([[1., 2., 3.], [4., 5., 6.],[ 7., 8., 9.], [ 7., 8., 9.]])
x = tf.reshape(x, [1, 4, 3, 1])
same_pad = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
print (same_pad.get_shape()) # --> output (1, 2, 2, 1)
Here the spatial dimensions of x are 4 x 3 (height x width). If the horizontal direction (3) is taken: 3 mod 2 != 0, so p = max(2 - (3 mod 2), 0) = 1 and the output width is ceil(3 / 2) = 2.
If the vertical direction (4) is taken: 4 mod 2 == 0, so p = max(2 - 2, 0) = 0 and the output height is ceil(4 / 2) = 2.
Hope this will help to understand how actually SAME padding works in TF.
To sum up, 'valid' padding means no padding. The output size of the convolutional layer shrinks depending on the input size & kernel size.
On the contrary, 'same' padding means using padding. When the stride is set to 1, the output size of the convolutional layer is kept equal to the input size by appending a certain number of rows and columns of zeros (a '0-border') around the input data when calculating the convolution.
Hope this intuitive description helps.
Based on the explanation here and following up on Tristan's answer, I usually use these quick functions for sanity checks.
import numpy as np

# a function to help us stay clean
def getPaddings(pad_along_height, pad_along_width):
    # if even.. easy..
    if pad_along_height % 2 == 0:
        pad_top = pad_along_height / 2
        pad_bottom = pad_top
    # if odd
    else:
        pad_top = np.floor(pad_along_height / 2)
        pad_bottom = np.floor(pad_along_height / 2) + 1
    # check if width padding is odd or even
    # if even.. easy..
    if pad_along_width % 2 == 0:
        pad_left = pad_along_width / 2
        pad_right = pad_left
    # if odd
    else:
        pad_left = np.floor(pad_along_width / 2)
        pad_right = np.floor(pad_along_width / 2) + 1
    #
    return pad_top, pad_bottom, pad_left, pad_right

# strides [image index, y, x, depth]
# padding 'SAME' or 'VALID'
# bottom and right sides always get the one additional padded pixel (if padding is odd)
def getOutputDim(inputWidth, inputHeight, filterWidth, filterHeight, strides, padding):
    if padding == 'SAME':
        out_height = np.ceil(float(inputHeight) / float(strides[1]))
        out_width = np.ceil(float(inputWidth) / float(strides[2]))
        #
        pad_along_height = ((out_height - 1) * strides[1] + filterHeight - inputHeight)
        pad_along_width = ((out_width - 1) * strides[2] + filterWidth - inputWidth)
        #
        # now get padding
        pad_top, pad_bottom, pad_left, pad_right = getPaddings(pad_along_height, pad_along_width)
        #
        print('output height', out_height)
        print('output width', out_width)
        print('total pad along height', pad_along_height)
        print('total pad along width', pad_along_width)
        print('pad at top', pad_top)
        print('pad at bottom', pad_bottom)
        print('pad at left', pad_left)
        print('pad at right', pad_right)
    elif padding == 'VALID':
        out_height = np.ceil(float(inputHeight - filterHeight + 1) / float(strides[1]))
        out_width = np.ceil(float(inputWidth - filterWidth + 1) / float(strides[2]))
        #
        print('output height', out_height)
        print('output width', out_width)
        print('no padding')

# use like so
getOutputDim(80, 80, 4, 4, [1, 1, 1, 1], 'SAME')
Padding on/off. Determines the effective size of your input.
VALID: No padding. Convolution etc. ops are only performed at locations that are "valid", i.e. not too close to the borders of your tensor. With a kernel of 3x3 and image of 10x10, you would be performing convolution on the 8x8 area inside the borders.
SAME: Padding is provided. Whenever your operation references a neighborhood (no matter how big), zero values are provided when that neighborhood extends outside the original tensor to allow that operation to work also on border values. With a kernel of 3x3 and image of 10x10, you would be performing convolution on the full 10x10 area.
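A short sketch of that 3x3-kernel, 10x10-image example, using tf.keras layers to confirm the output sizes (the layer parameters are otherwise arbitrary):
import tensorflow as tf

x = tf.random.normal([1, 10, 10, 1])
valid_out = tf.keras.layers.Conv2D(1, 3, padding='valid')(x)
same_out = tf.keras.layers.Conv2D(1, 3, padding='same')(x)
print(valid_out.shape)  # (1, 8, 8, 1)   -> only the "valid" 8x8 interior
print(same_out.shape)   # (1, 10, 10, 1) -> zero padding keeps the full 10x10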
Here, W and H are the width and height of the input, F is the filter dimension, and P is the padding size (i.e., the number of rows or columns padded on each side).
For SAME padding (with stride 1): P = (F - 1) / 2, so the output stays W x H.
For VALID padding: P = 0, and the output shrinks to (W - F + 1) x (H - F + 1).
Tensorflow 2.0 Compatible Answer: Detailed Explanations have been provided above, about "Valid" and "Same" Padding.
However, I will specify different Pooling Functions and their respective Commands in Tensorflow 2.x (>= 2.0), for the benefit of the community.
Functions in 1.x:
tf.nn.max_pool
tf.keras.layers.MaxPool2D
Average Pooling => None in tf.nn, tf.keras.layers.AveragePooling2D
Functions in 2.x:
tf.nn.max_pool if used in 2.x and tf.compat.v1.nn.max_pool_v2 or tf.compat.v2.nn.max_pool, if migrated from 1.x to 2.x.
tf.keras.layers.MaxPool2D if used in 2.x and
tf.compat.v1.keras.layers.MaxPool2D or tf.compat.v1.keras.layers.MaxPooling2D or tf.compat.v2.keras.layers.MaxPool2D or tf.compat.v2.keras.layers.MaxPooling2D, if migrated from 1.x to 2.x.
Average Pooling => tf.nn.avg_pool2d or tf.keras.layers.AveragePooling2D if used in TF 2.x and
tf.compat.v1.nn.avg_pool_v2 or tf.compat.v2.nn.avg_pool or tf.compat.v1.keras.layers.AveragePooling2D or tf.compat.v1.keras.layers.AvgPool2D or tf.compat.v2.keras.layers.AveragePooling2D or tf.compat.v2.keras.layers.AvgPool2D , if migrated from 1.x to 2.x.
For more information about Migration from Tensorflow 1.x to 2.x, please refer to this Migration Guide.
valid padding is no padding.
same padding is padding in a way the output has the same size as input.

How to implement the Softmax function in Python

From Udacity's deep learning class, the softmax of y_i is simply the exponential divided by the sum of exponentials of the whole Y vector:
S(y_i) = e^(y_i) / sum_j e^(y_j)
where S(y_i) is the softmax function of y_i, e is the exponential, and j indexes the columns of the input vector Y.
I've tried the following:
import numpy as np
def softmax(x):
    """Compute softmax values for each sets of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()
scores = [3.0, 1.0, 0.2]
print(softmax(scores))
which returns:
[ 0.8360188 0.11314284 0.05083836]
But the suggested solution was:
def softmax(x):
    """Compute softmax values for each sets of scores in x."""
    return np.exp(x) / np.sum(np.exp(x), axis=0)
which produces the same output as the first implementation, even though the first implementation explicitly takes the difference of each column and the max and then divides by the sum.
Can someone show mathematically why? Is one correct and the other one wrong?
Are the implementation similar in terms of code and time complexity? Which is more efficient?
They're both correct, but yours is preferred from the point of view of numerical stability.
You start with
e ^ (x - max(x)) / sum(e ^ (x - max(x)))
By using the fact that a^(b - c) = (a^b)/(a^c) we have
= e ^ x / (e ^ max(x) * sum(e ^ x / e ^ max(x)))
= e ^ x / sum(e ^ x)
Which is what the other answer says. You could replace max(x) with any variable and it would cancel out.
(Well... much confusion here, both in the question and in the answers...)
To start with, the two solutions (i.e. yours and the suggested one) are not equivalent; they happen to be equivalent only for the special case of 1-D score arrays. You would have discovered this if you had also tried the 2-D score array provided in the Udacity quiz example.
Results-wise, the only actual difference between the two solutions is the axis=0 argument. To see that this is the case, let's try your solution (your_softmax) and one where the only difference is the axis argument:
import numpy as np
# your solution:
def your_softmax(x):
    """Compute softmax values for each sets of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()

# correct solution:
def softmax(x):
    """Compute softmax values for each sets of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum(axis=0)  # only difference
As I said, for a 1-D score array, the results are indeed identical:
scores = [3.0, 1.0, 0.2]
print(your_softmax(scores))
# [ 0.8360188 0.11314284 0.05083836]
print(softmax(scores))
# [ 0.8360188 0.11314284 0.05083836]
your_softmax(scores) == softmax(scores)
# array([ True, True, True], dtype=bool)
Nevertheless, here are the results for the 2-D score array given in the Udacity quiz as a test example:
scores2D = np.array([[1, 2, 3, 6],
                     [2, 4, 5, 6],
                     [3, 8, 7, 6]])
print(your_softmax(scores2D))
# [[ 4.89907947e-04 1.33170787e-03 3.61995731e-03 7.27087861e-02]
# [ 1.33170787e-03 9.84006416e-03 2.67480676e-02 7.27087861e-02]
# [ 3.61995731e-03 5.37249300e-01 1.97642972e-01 7.27087861e-02]]
print(softmax(scores2D))
# [[ 0.09003057 0.00242826 0.01587624 0.33333333]
# [ 0.24472847 0.01794253 0.11731043 0.33333333]
# [ 0.66524096 0.97962921 0.86681333 0.33333333]]
The results are different - the second one is indeed identical with the one expected in the Udacity quiz, where all columns indeed sum to 1, which is not the case with the first (wrong) result.
So, all the fuss was actually for an implementation detail - the axis argument. According to the numpy.sum documentation:
The default, axis=None, will sum all of the elements of the input array
while here we want to sum row-wise, hence axis=0. For a 1-D array, the sum of the (only) row and the sum of all the elements happen to be identical, hence your identical results in that case...
The axis issue aside, your implementation (i.e. your choice to subtract the max first) is actually better than the suggested solution! In fact, it is the recommended way of implementing the softmax function - see here for the justification (numeric stability, also pointed out by some other answers here).
So, this is really a comment to desertnaut's answer but I can't comment on it yet due to my reputation. As he pointed out, your version is only correct if your input consists of a single sample. If your input consists of several samples, it is wrong. However, desertnaut's solution is also wrong. The problem is that it only happens to work for his 1-dimensional example; as soon as the input is 2-dimensional, it normalizes over the wrong axis. Let me show this to you.
import numpy as np

# your solution:
def your_softmax(x):
    """Compute softmax values for each sets of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()

# desertnaut solution (copied from his answer):
def desertnaut_softmax(x):
    """Compute softmax values for each sets of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum(axis=0)  # only difference

# my (correct) solution:
def softmax(z):
    assert len(z.shape) == 2
    s = np.max(z, axis=1)
    s = s[:, np.newaxis]  # necessary step to do broadcasting
    e_x = np.exp(z - s)
    div = np.sum(e_x, axis=1)
    div = div[:, np.newaxis]  # ditto
    return e_x / div
Let's take desertnaut's example:
x1 = np.array([[1, 2, 3, 6]]) # notice that we put the data into 2 dimensions(!)
This is the output:
your_softmax(x1)
array([[ 0.00626879, 0.01704033, 0.04632042, 0.93037047]])
desertnaut_softmax(x1)
array([[ 1., 1., 1., 1.]])
softmax(x1)
array([[ 0.00626879, 0.01704033, 0.04632042, 0.93037047]])
You can see that desertnaut's version would fail in this situation. (It would not if the input was just one-dimensional, like np.array([1, 2, 3, 6]).)
Let's now use 3 samples, since that's the reason why we use a 2-dimensional input. The following x2 is not the same as the one from desertnaut's example.
x2 = np.array([[1, 2, 3, 6],  # sample 1
               [2, 4, 5, 6],  # sample 2
               [1, 2, 3, 6]]) # sample 1 again(!)
This input consists of a batch with 3 samples. But sample one and three are essentially the same. We now expect 3 rows of softmax activations where the first should be the same as the third and also the same as our activation of x1!
your_softmax(x2)
array([[ 0.00183535, 0.00498899, 0.01356148, 0.27238963],
[ 0.00498899, 0.03686393, 0.10020655, 0.27238963],
[ 0.00183535, 0.00498899, 0.01356148, 0.27238963]])
desertnaut_softmax(x2)
array([[ 0.21194156, 0.10650698, 0.10650698, 0.33333333],
[ 0.57611688, 0.78698604, 0.78698604, 0.33333333],
[ 0.21194156, 0.10650698, 0.10650698, 0.33333333]])
softmax(x2)
array([[ 0.00626879, 0.01704033, 0.04632042, 0.93037047],
[ 0.01203764, 0.08894682, 0.24178252, 0.65723302],
[ 0.00626879, 0.01704033, 0.04632042, 0.93037047]])
I hope you can see that this is only the case with my solution.
softmax(x1) == softmax(x2)[0]
array([[ True, True, True, True]], dtype=bool)
softmax(x1) == softmax(x2)[2]
array([[ True, True, True, True]], dtype=bool)
Additionally, here are the results of TensorFlow's softmax implementation:
import tensorflow as tf
import numpy as np
batch = np.asarray([[1,2,3,6],[2,4,5,6],[1,2,3,6]])
x = tf.placeholder(tf.float32, shape=[None, 4])
y = tf.nn.softmax(x)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(y, feed_dict={x: batch})
And the result:
array([[ 0.00626879, 0.01704033, 0.04632042, 0.93037045],
[ 0.01203764, 0.08894681, 0.24178252, 0.657233 ],
[ 0.00626879, 0.01704033, 0.04632042, 0.93037045]], dtype=float32)
I would say that while both are correct mathematically, implementation-wise, first one is better. When computing softmax, the intermediate values may become very large. Dividing two large numbers can be numerically unstable. These notes (from Stanford) mention a normalization trick which is essentially what you are doing.
sklearn also offers implementation of softmax
from sklearn.utils.extmath import softmax
import numpy as np
x = np.array([[ 0.50839931, 0.49767588, 0.51260159]])
softmax(x)
# output
array([[ 0.3340521 , 0.33048906, 0.33545884]])
From a mathematical point of view both sides are equal.
And you can easily prove this. Let m = max(x). Now your function softmax returns a vector whose i-th coordinate is equal to
e^(x_i - m) / sum_j e^(x_j - m) = (e^(x_i) / e^m) / ((1 / e^m) * sum_j e^(x_j)) = e^(x_i) / sum_j e^(x_j)
Notice that this works for any m, because for all (even complex) numbers e^m != 0.
From a computational complexity point of view they are also equivalent and both run in O(n) time, where n is the size of the vector.
From a numerical stability point of view, the first solution is preferred, because e^x grows very fast and even for pretty small values of x it will overflow. Subtracting the maximum value allows you to get rid of this overflow. To practically experience the stuff I was talking about, try to feed x = np.array([1000, 5]) into both of your functions. One will return the correct probabilities, the second will overflow with nan.
Your solution works only for vectors (the Udacity quiz wants you to calculate it for matrices as well). In order to fix it you need to use sum(axis=0).
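A quick sketch of the overflow experiment suggested above:
import numpy as np

x = np.array([1000.0, 5.0])
naive = np.exp(x) / np.sum(np.exp(x), axis=0)                   # overflow: [nan, 0.] plus a RuntimeWarning
stable = np.exp(x - np.max(x)) / np.sum(np.exp(x - np.max(x)))  # [1., 0.] (the second term underflows harmlessly)
print(naive, stable)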
EDIT. As of version 1.2.0, scipy includes softmax as a special function:
https://scipy.github.io/devdocs/generated/scipy.special.softmax.html
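A small usage sketch (SciPy >= 1.2.0), reusing the 2-D score array from the discussion above:
import numpy as np
from scipy.special import softmax

scores2D = np.array([[1, 2, 3, 6],
                     [2, 4, 5, 6],
                     [3, 8, 7, 6]])
print(softmax(scores2D, axis=0))  # columns sum to 1, matching the Udacity quiz expectation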
I wrote a function applying the softmax over any axis:
def softmax(X, theta=1.0, axis=None):
    """
    Compute the softmax of each element along an axis of X.

    Parameters
    ----------
    X: ND-Array. Probably should be floats.
    theta (optional): float parameter, used as a multiplier
        prior to exponentiation. Default = 1.0
    axis (optional): axis to compute values along. Default is the
        first non-singleton axis.

    Returns an array the same size as X. The result will sum to 1
    along the specified axis.
    """
    # make X at least 2d
    y = np.atleast_2d(X)

    # find axis
    if axis is None:
        axis = next(j[0] for j in enumerate(y.shape) if j[1] > 1)

    # multiply y against the theta parameter,
    y = y * float(theta)

    # subtract the max for numerical stability
    y = y - np.expand_dims(np.max(y, axis=axis), axis)

    # exponentiate y
    y = np.exp(y)

    # take the sum along the specified axis
    ax_sum = np.expand_dims(np.sum(y, axis=axis), axis)

    # finally: divide elementwise
    p = y / ax_sum

    # flatten if X was 1D
    if len(X.shape) == 1:
        p = p.flatten()

    return p
Subtracting the max, as other users described, is good practice. I wrote a detailed post about it here.
Here you can find out why they subtract the max.
From there:
"When you’re writing code for computing the Softmax function in practice, the intermediate terms may be very large due to the exponentials. Dividing large numbers can be numerically unstable, so it is important to use a normalization trick."
I was curious to see the performance difference between these
import numpy as np

def softmax(x):
    """Compute softmax values for each sets of scores in x."""
    return np.exp(x) / np.sum(np.exp(x), axis=0)

def softmaxv2(x):
    """Compute softmax values for each sets of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()

def softmaxv3(x):
    """Compute softmax values for each sets of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / np.sum(e_x, axis=0)

def softmaxv4(x):
    """Compute softmax values for each sets of scores in x."""
    return np.exp(x - np.max(x)) / np.sum(np.exp(x - np.max(x)), axis=0)

x = [10, 10, 18, 9, 15, 3, 1, 2, 1, 10, 10, 10, 8, 15]
Using
print("----- softmax")
%timeit a=softmax(x)
print("----- softmaxv2")
%timeit a=softmaxv2(x)
print("----- softmaxv3")
%timeit a=softmaxv3(x)
print("----- softmaxv4")
%timeit a=softmaxv4(x)
Increasing the values inside x (+100 +200 +500...) I get consistently better results with the original numpy version (here is just one test)
----- softmax
The slowest run took 8.07 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 17.8 µs per loop
----- softmaxv2
The slowest run took 4.30 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 23 µs per loop
----- softmaxv3
The slowest run took 4.06 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 23 µs per loop
----- softmaxv4
10000 loops, best of 3: 23 µs per loop
Until.... the values inside x reach ~800, then I get
----- softmax
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:4: RuntimeWarning: overflow encountered in exp
after removing the cwd from sys.path.
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:4: RuntimeWarning: invalid value encountered in true_divide
after removing the cwd from sys.path.
The slowest run took 18.41 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 23.6 µs per loop
----- softmaxv2
The slowest run took 4.18 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 22.8 µs per loop
----- softmaxv3
The slowest run took 19.44 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 23.6 µs per loop
----- softmaxv4
The slowest run took 16.82 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 22.7 µs per loop
As some said, your version is more numerically stable 'for large numbers'. For small numbers it could be the other way around.
A more concise version is:
def softmax(x):
    return np.exp(x) / np.exp(x).sum(axis=0)
To offer an alternative solution, consider the cases where your arguments are extremely large in magnitude such that exp(x) would underflow (in the negative case) or overflow (in the positive case). Here you want to remain in log space as long as possible, exponentiating only at the end where you can trust the result will be well-behaved.
import scipy.special as sc
import numpy as np
def softmax(x: np.ndarray) -> np.ndarray:
    return np.exp(x - sc.logsumexp(x))
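A quick usage check with extreme inputs, reusing the softmax and imports defined just above (values chosen only to provoke overflow/underflow):
print(softmax(np.array([1000.0, 5.0])))       # [1. 0.], no overflow warning
print(softmax(np.array([-1000.0, -1005.0])))  # approx. [0.9933 0.0067], no spurious all-zero result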
I needed something compatible with the output of a dense layer from Tensorflow.
The solution from @desertnaut does not work in this case because I have batches of data. Therefore, I came up with another solution that should work in both cases:
def softmax(x, axis=-1):
    e_x = np.exp(x - np.max(x))  # same code
    return e_x / e_x.sum(axis=axis, keepdims=True)
Results:
logits = np.asarray([
[-0.0052024, -0.00770216, 0.01360943, -0.008921], # 1
[-0.0052024, -0.00770216, 0.01360943, -0.008921] # 2
])
print(softmax(logits))
#[[0.2492037 0.24858153 0.25393605 0.24827873]
# [0.2492037 0.24858153 0.25393605 0.24827873]]
Ref: Tensorflow softmax
I would suggest this:
def softmax(z):
    z_norm = np.exp(z - np.max(z, axis=0, keepdims=True))
    return np.divide(z_norm, np.sum(z_norm, axis=0, keepdims=True))
It will work for stochastic (single-sample) inputs as well as batches.
For more detail see:
https://medium.com/@ravish1729/analysis-of-softmax-function-ad058d6a564d
To maintain numerical stability, max(x) should be subtracted. The following is the code for the softmax function:
def softmax(x):
    if len(x.shape) > 1:
        tmp = np.max(x, axis=1)
        x -= tmp.reshape((x.shape[0], 1))
        x = np.exp(x)
        tmp = np.sum(x, axis=1)
        x /= tmp.reshape((x.shape[0], 1))
    else:
        tmp = np.max(x)
        x -= tmp
        x = np.exp(x)
        tmp = np.sum(x)
        x /= tmp
    return x
This is already answered in much detail in the answers above; max is subtracted to avoid overflow. I am adding one more implementation here in Python 3.
import numpy as np
def softmax(x):
    mx = np.amax(x, axis=1, keepdims=True)
    x_exp = np.exp(x - mx)
    x_sum = np.sum(x_exp, axis=1, keepdims=True)
    res = x_exp / x_sum
    return res
x = np.array([[3,2,4],[4,5,6]])
print(softmax(x))
Everybody seems to post their solution so I'll post mine:
def softmax(x):
    e_x = np.exp(x.T - np.max(x, axis=-1))
    return (e_x / e_x.sum(axis=0)).T
I get the exact same results as the one imported from sklearn:
from sklearn.utils.extmath import softmax
import tensorflow as tf
import numpy as np
def softmax(x):
    return (np.exp(x).T / np.exp(x).sum(axis=-1)).T
logits = np.array([[1, 2, 3], [3, 10, 1], [1, 2, 5], [4, 6.5, 1.2], [3, 6, 1]])
sess = tf.Session()
print(softmax(logits))
print(sess.run(tf.nn.softmax(logits)))
sess.close()
Based on all the responses and CS231n notes, allow me to summarise:
def softmax(x, axis):
    x -= np.max(x, axis=axis, keepdims=True)
    return np.exp(x) / np.exp(x).sum(axis=axis, keepdims=True)
Usage:
x = np.array([[1, 0, 2, -1],
              [2, 4, 6, 8],
              [3, 2, 1, 0]])
softmax(x, axis=1).round(2)
Output:
array([[0.24, 0.09, 0.64, 0.03],
[0. , 0.02, 0.12, 0.86],
[0.64, 0.24, 0.09, 0.03]])
The softmax function is an activation function that turns numbers into probabilities which sum to one. The softmax function outputs a vector that represents the probability distributions of a list of outcomes. It is also a core element used in deep learning classification tasks.
The softmax function is used when we have multiple classes.
It is useful for finding out the class which has the maximum probability.
The softmax function is ideally used in the output layer, where we are actually trying to obtain the probabilities that define the class of each input.
Its outputs range from 0 to 1.
The softmax function turns logits [2.0, 1.0, 0.1] into probabilities [0.7, 0.2, 0.1], and the probabilities sum to 1. Logits are the raw scores output by the last layer of a neural network, before activation takes place. To understand the softmax function, we must look at the output of the (n-1)th layer.
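As a quick numeric check of that example (the [0.7, 0.2, 0.1] above is rounded; a minimal sketch):
import numpy as np

logits = np.array([2.0, 1.0, 0.1])
probs = np.exp(logits) / np.sum(np.exp(logits))
print(probs.round(2))  # [0.66 0.24 0.1 ], summing to 1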
The softmax function can be seen as a soft arg max function: rather than returning only the largest value from the input, it returns a probability distribution whose mass is concentrated at the position of the largest value.
For example:
Before softmax
X = [13, 31, 5]
After softmax
array([1.52299795e-08, 9.99999985e-01, 5.10908895e-12])
Code:
import numpy as np

# your solution:
def your_softmax(x):
    """Compute softmax values for each sets of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()

# correct solution:
def softmax(x):
    """Compute softmax values for each sets of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum(axis=0)  # only difference
This also works with np.reshape.
def softmax(scores):
    """
    Compute softmax scores given the raw output from the model

    :param scores: raw scores from the model (N, num_classes)
    :return:
        prob: softmax probabilities (N, num_classes)
    """
    prob = None
    exponential = np.exp(
        scores - np.max(scores, axis=1).reshape(-1, 1)
    )  # subtract the largest number https://jamesmccaffrey.wordpress.com/2016/03/04/the-max-trick-when-computing-softmax/
    prob = exponential / exponential.sum(axis=1).reshape(-1, 1)
    return prob
I would like to add a little more understanding of the problem. Here it is correct to subtract the max of the array. But if you run the code in the other post, you will find it does not give the right answer when the array is 2D or higher dimensional.
Here I give you some suggestions:
To get the max, take it along the x-axis; you will get a 1D array.
Reshape your max array to the original shape.
Use np.exp to get the exponential values.
Use np.sum along the axis.
Get the final results.
Following these steps you will get the correct answer by doing vectorization. Since it is related to college homework, I cannot post the exact code here, but I would like to give more suggestions if you don't understand.
The goal was to achieve similar results using NumPy and TensorFlow. The only change from the original answer is the axis parameter of the np.sum API.
Initial approach: axis=0 - this, however, does not provide the intended results when the number of dimensions is N.
Modified approach: axis=len(e_x.shape)-1 - always sum on the last dimension. This provides results similar to TensorFlow's softmax function.
def softmax_fn(input_array):
    """
    | **#author**: Prathyush SP
    |
    | Calculate Softmax for a given array

    :param input_array: Input Array
    :return: Softmax Score
    """
    e_x = np.exp(input_array - np.max(input_array))
    return e_x / e_x.sum(axis=len(e_x.shape)-1)
Here is a generalized solution using NumPy, with a comparison for correctness against TensorFlow and SciPy:
Data preparation:
import numpy as np
np.random.seed(2019)
batch_size = 1
n_items = 3
n_classes = 2
logits_np = np.random.rand(batch_size,n_items,n_classes).astype(np.float32)
print('logits_np.shape', logits_np.shape)
print('logits_np:')
print(logits_np)
Output:
logits_np.shape (1, 3, 2)
logits_np:
[[[0.9034822 0.3930805 ]
[0.62397 0.6378774 ]
[0.88049906 0.299172 ]]]
Softmax using tensorflow:
import tensorflow as tf
logits_tf = tf.convert_to_tensor(logits_np, np.float32)
scores_tf = tf.nn.softmax(logits_np, axis=-1)
print('logits_tf.shape', logits_tf.shape)
print('scores_tf.shape', scores_tf.shape)
with tf.Session() as sess:
    scores_np = sess.run(scores_tf)
print('scores_np.shape', scores_np.shape)
print('scores_np:')
print(scores_np)
print('np.sum(scores_np, axis=-1).shape', np.sum(scores_np,axis=-1).shape)
print('np.sum(scores_np, axis=-1):')
print(np.sum(scores_np, axis=-1))
Output:
logits_tf.shape (1, 3, 2)
scores_tf.shape (1, 3, 2)
scores_np.shape (1, 3, 2)
scores_np:
[[[0.62490064 0.37509936]
[0.4965232 0.5034768 ]
[0.64137274 0.3586273 ]]]
np.sum(scores_np, axis=-1).shape (1, 3)
np.sum(scores_np, axis=-1):
[[1. 1. 1.]]
Softmax using scipy:
from scipy.special import softmax
scores_np = softmax(logits_np, axis=-1)
print('scores_np.shape', scores_np.shape)
print('scores_np:')
print(scores_np)
print('np.sum(scores_np, axis=-1).shape', np.sum(scores_np, axis=-1).shape)
print('np.sum(scores_np, axis=-1):')
print(np.sum(scores_np, axis=-1))
Output:
scores_np.shape (1, 3, 2)
scores_np:
[[[0.62490064 0.37509936]
[0.4965232 0.5034768 ]
[0.6413727 0.35862732]]]
np.sum(scores_np, axis=-1).shape (1, 3)
np.sum(scores_np, axis=-1):
[[1. 1. 1.]]
Softmax using numpy (https://nolanbconaway.github.io/blog/2017/softmax-numpy) :
def softmax(X, theta=1.0, axis=None):
    """
    Compute the softmax of each element along an axis of X.

    Parameters
    ----------
    X: ND-Array. Probably should be floats.
    theta (optional): float parameter, used as a multiplier
        prior to exponentiation. Default = 1.0
    axis (optional): axis to compute values along. Default is the
        first non-singleton axis.

    Returns an array the same size as X. The result will sum to 1
    along the specified axis.
    """
    # make X at least 2d
    y = np.atleast_2d(X)

    # find axis
    if axis is None:
        axis = next(j[0] for j in enumerate(y.shape) if j[1] > 1)

    # multiply y against the theta parameter,
    y = y * float(theta)

    # subtract the max for numerical stability
    y = y - np.expand_dims(np.max(y, axis=axis), axis)

    # exponentiate y
    y = np.exp(y)

    # take the sum along the specified axis
    ax_sum = np.expand_dims(np.sum(y, axis=axis), axis)

    # finally: divide elementwise
    p = y / ax_sum

    # flatten if X was 1D
    if len(X.shape) == 1:
        p = p.flatten()

    return p
scores_np = softmax(logits_np, axis=-1)
print('scores_np.shape', scores_np.shape)
print('scores_np:')
print(scores_np)
print('np.sum(scores_np, axis=-1).shape', np.sum(scores_np, axis=-1).shape)
print('np.sum(scores_np, axis=-1):')
print(np.sum(scores_np, axis=-1))
Output:
scores_np.shape (1, 3, 2)
scores_np:
[[[0.62490064 0.37509936]
[0.49652317 0.5034768 ]
[0.64137274 0.3586273 ]]]
np.sum(scores_np, axis=-1).shape (1, 3)
np.sum(scores_np, axis=-1):
[[1. 1. 1.]]
The purpose of the softmax function is to preserve the ratio of the vectors as opposed to squashing the end-points with a sigmoid as the values saturate (i.e. tend to +/- 1 (tanh) or from 0 to 1 (logistical)). This is because it preserves more information about the rate of change at the end-points and thus is more applicable to neural nets with 1-of-N Output Encoding (i.e. if we squashed the end-points it would be harder to differentiate the 1-of-N output class because we can't tell which one is the "biggest" or "smallest" because they got squished.); also it makes the total output sum to 1, and the clear winner will be closer to 1 while other numbers that are close to each other will sum to 1/p, where p is the number of output neurons with similar values.
The purpose of subtracting the max value from the vector is that when you do e^y exponents you may get very high value that clips the float at the max value leading to a tie, which is not the case in this example. This becomes a BIG problem if you subtract the max value to make a negative number, then you have a negative exponent that rapidly shrinks the values altering the ratio, which is what occurred in poster's question and yielded the incorrect answer.
The answer supplied by Udacity is HORRIBLY inefficient. The first thing we need to do is calculate e^y_j for all vector components, KEEP THOSE VALUES, then sum them up, and divide. Where Udacity messed up is they calculate e^y_j TWICE!!! Here is the correct answer:
def softmax(y):
    e_to_the_y_j = np.exp(y)
    return e_to_the_y_j / np.sum(e_to_the_y_j, axis=0)
This generalizes and assumes you are normalizing the trailing dimension.
def softmax(x: np.ndarray) -> np.ndarray:
    e_x = np.exp(x - np.max(x, axis=-1)[..., None])
    e_y = e_x.sum(axis=-1)[..., None]
    return e_x / e_y
I used these three simple lines:
x_exp=np.exp(x)
x_sum=np.sum(x_exp, axis = 1, keepdims = True)
s=x_exp / x_sum
