Why are embeddings necessary in PyTorch?

I (think) I understand the basic principle behind embeddings: they're a shortcut to quickly perform the operation (one-hot encoded vector) * matrix without actually performing it, by exploiting the fact that this operation is equivalent to indexing into the matrix, while still producing the same gradient as if the operation had actually been performed.
I know in the following example:
e = Embedding(3, 2)
n = e(torch.LongTensor([0, 2]))
n will be a tensor of shape (2, 2).
But, we could also do:
p = nn.Parameter(torch.zeros([3, 2]).normal_(0, 0.01))
p[torch.LongTensor([0, 2])]
and get the same result without an embedding.
This in and of itself wouldn't be confusing, since in the first example n has a grad_fn called EmbeddingBackward, whereas in the second example the indexing result has a grad_fn called IndexBackward. That is what we'd expect, since we know embeddings simulate a different derivative.
The confusing part is in chapter 8 of the fastbook they use embeddings to compute movie recommendations. But then, they do it without embeddings in basically the same manner & the model still works. I would expect the version without embeddings to fail because the derivative would be incorrect.
Version with embeddings:
class DotProductBias(Module):
    def __init__(self, n_users, n_movies, n_factors, y_range=(0, 5.5)):
        self.user_factors = Embedding(n_users, n_factors)
        self.user_bias = Embedding(n_users, 1)
        self.movie_factors = Embedding(n_movies, n_factors)
        self.movie_bias = Embedding(n_movies, 1)
        self.y_range = y_range

    def forward(self, x):
        users = self.user_factors(x[:, 0])
        movies = self.movie_factors(x[:, 1])
        res = (users * movies).sum(dim=1, keepdim=True)
        res += self.user_bias(x[:, 0]) + self.movie_bias(x[:, 1])
        return sigmoid_range(res, *self.y_range)
Version without:
def create_params(size):
    return nn.Parameter(torch.zeros(*size).normal_(0, 0.01))

class DotProductBias(Module):
    def __init__(self, n_users, n_movies, n_factors, y_range=(0, 5.5)):
        self.user_factors = create_params([n_users, n_factors])
        self.user_bias = create_params([n_users])
        self.movie_factors = create_params([n_movies, n_factors])
        self.movie_bias = create_params([n_movies])
        self.y_range = y_range

    def forward(self, x):
        users = self.user_factors[x[:, 0]]
        movies = self.movie_factors[x[:, 1]]
        res = (users * movies).sum(dim=1)
        res += self.user_bias[x[:, 0]] + self.movie_bias[x[:, 1]]
        return sigmoid_range(res, *self.y_range)
Does anyone know what is going on?

Besides the points provided in the comments:
I (think) I understand the basic principle behind embeddings: they're
a shortcut to quickly perform the operation (one hot encoded vector) *
matrix without actually performing that operation
This one is only partially correct. Indexing into a matrix is a fully differentiable operation: it just takes part of the data and propagates the gradient along that path only. A multiplication here would be wasteful and unnecessary.
and get the same result without an embedding.
This one is true.
called EmbeddingBackward whereas in the 2nd example p has a grad_fn called IndexBackward, which is what we expect since we know embeddings simulate a different derivative. (emphasis mine)
This one isn't (or often is not). An Embedding also selects part of the data, just like you did; the grad_fn is different due to extra functionality, but in principle it is the same operation (choosing some vector(s) from the matrix). Think of it as IndexBackward on steroids.
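A quick sketch (assuming a current PyTorch install) showing that both approaches produce the same values and the same gradients for the weight matrix:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
weights = torch.randn(3, 2)

# Approach 1: nn.Embedding, initialized with the same weights.
emb = nn.Embedding(3, 2)
emb.weight.data.copy_(weights)

# Approach 2: a plain nn.Parameter indexed directly.
par = nn.Parameter(weights.clone())

idx = torch.LongTensor([0, 2])
out_emb = emb(idx)
out_par = par[idx]
assert torch.equal(out_emb, out_par)

# Backprop the same upstream gradient through both paths.
out_emb.sum().backward()
out_par.sum().backward()
assert torch.equal(emb.weight.grad, par.grad)  # identical gradients
```

Despite the differently named grad_fns, the accumulated gradients on the weight matrices come out identical.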
The confusing part is in chapter 8 of the fastbook they use embeddings
to compute movie recommendations. But then, they do it without
embeddings in basically the same manner & the model still works. I
would expect the version without embeddings to fail because the
derivative would be incorrect. (emphasis mine)
In this exact case, both approaches are equivalent (give or take different results from random initialization).
Why nn.Embedding at all?
Easier to understand your intent when reading the code
Common utilities and functionality added if needed (as pointed out in the comments)
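As an illustration of those utilities, here is a quick sketch of two standard nn.Embedding features that plain parameter indexing does not give you for free:

```python
import torch
import torch.nn as nn

# padding_idx: the row for a padding token is initialized to zeros
# and receives no gradient updates during training.
emb = nn.Embedding(5, 3, padding_idx=0)
assert torch.equal(emb(torch.LongTensor([0])), torch.zeros(1, 3))

# from_pretrained: wrap an existing weight matrix (frozen by default).
pretrained = torch.arange(6, dtype=torch.float32).view(3, 2)
frozen = nn.Embedding.from_pretrained(pretrained)
assert not frozen.weight.requires_grad
assert torch.equal(frozen(torch.LongTensor([1])), pretrained[1:2])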

Related

What does indexing on a connected layer do?

I have been trying to understand this bit of code. I don't quite understand what a slicing operator does to a fully connected layer. Here's the code for context.
def generate_condition(self, embedding):
    conditions = fc(embedding, self.embedd_dim * 2, 'gen_cond/fc', activation_fn=tf.nn.leaky_relu)
    mean = conditions[:, :self.embedd_dim]
    log_sigma = conditions[:, self.embedd_dim:]
    return [mean, log_sigma]
where fc returns:
return tf.contrib.layers.fully_connected(inputs, num_out, activation_fn=activation_fn, weights_initializer=w_init,reuse=reuse, scope=name)
It simply splits your array. The way this works is that the layer is supposed to produce 2 things: mu and sigma. You could achieve this by first having one shared layer and then 2 separate heads, one for mu and one for sigma. Another way to implement exactly the same thing is to have one head whose size is the sum of the sizes of mu and sigma, and then just split it back into 2 parts. In your case, the first self.embedd_dim features will be interpreted as mu, and the remaining features/neurons as log_sigma.
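The split can be illustrated with a small NumPy sketch (a hypothetical embedd_dim of 4 and a batch of 2):

```python
import numpy as np

embedd_dim = 4
# A batch of 2 outputs from a layer of size embedd_dim * 2.
conditions = np.arange(16).reshape(2, embedd_dim * 2)

mean = conditions[:, :embedd_dim]       # first half  -> mu
log_sigma = conditions[:, embedd_dim:]  # second half -> log_sigma

assert mean.shape == (2, 4) and log_sigma.shape == (2, 4)
assert (mean[0] == [0, 1, 2, 3]).all()
assert (log_sigma[0] == [4, 5, 6, 7]).all()
```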

Avoid getting nonsensical values (eg. -0.0) when classifying very long texts with Naive Bayes

I've implemented a fairly efficient multinomial Naive Bayes classifier and it works like a charm. That is, until the classifier encounters very long messages (on the order of 10k words), where the prediction results are nonsensical (e.g. -0.0) and I get a math domain error when using Python's math.log function. The reason I'm using log is that when working with very small floats, multiplying them yields even smaller floats, and the log helps avoid infinitesimally small numbers that would break the predictions.
Some context
I'm using the bag-of-words approach without any sort of vectorization (like TF-IDF, because I can't figure out how to implement it properly and balance 0-occurring words; a snippet for that would be appreciated too ;) ), and I'm using frequency counts with Laplace additive smoothing (basically adding 1 to each frequency count so it's never 0).
I could just take the log out, but that would mean that in the case of such long messages the engine would fail to detect them properly anyway, so it's not the point.
There is no multiplication in Naive Bayes if you work in log space: products become sums, so underflow is unlikely. And if you use smoothing (as you say you do), you should never get undefined behavior from log.
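A minimal sketch of why summing logs is safer than multiplying raw probabilities:

```python
import math

probs = [1e-5] * 100  # 100 word likelihoods from a long message

# The naive product underflows to exactly 0.0 ...
product = 1.0
for p in probs:
    product *= p
assert product == 0.0  # math.log(product) would now raise a domain error

# ... while the sum of logs stays a perfectly ordinary float.
log_score = sum(math.log(p) for p in probs)
assert math.isclose(log_score, 100 * math.log(1e-5))
```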
This stats stackexchange answer describes the underlying math. For reference implementation I have a snippet of mine lying around for MultinomialNaiveBayes (analogous to sklearn's sklearn.naive_bayes.MultinomialNB and with similar API):
import numpy as np
import scipy.sparse

class MultinomialNaiveBayes:
    def __init__(self, alpha: float = 1.0):
        self.alpha = alpha

    def fit(self, X, y):
        # Calculate priors from data
        self.log_priors_ = np.log(np.bincount(y) / y.shape[0])
        # Get indices where data belongs to separate class, creating a slicing mask.
        class_indices = np.array(
            np.ma.make_mask([y == current for current in range(len(self.log_priors_))])
        )
        # Divide dataset based on class indices
        class_datasets = np.array([X[indices] for indices in class_indices])
        # Calculate how often each class occurs and add alpha smoothing.
        # Reshape into [n_classes, features]
        classes_metrics = (
            np.array([dataset.sum(axis=0) for dataset in class_datasets]).reshape(
                len(self.log_priors_), -1
            )
            + self.alpha
        )
        # Calculate log likelihoods
        self.log_likelihoods_ = np.log(
            classes_metrics / classes_metrics.sum(axis=1)[:, np.newaxis]
        )
        return self

    def predict(self, X):
        # Return most likely class
        return np.argmax(
            scipy.sparse.csr_matrix.dot(X, self.log_likelihoods_.T) + self.log_priors_,
            axis=1,
        )
BTW, -0.0 compares equal to 0.0 and is a perfectly sensible value.

Implementing a trainable generalized Bump function layer in Keras/Tensorflow

I'm trying to code the following variant of the Bump function, applied component-wise:
f_σ(x) = exp(-σ / (σ - x²)) for -σ < x < σ, and 0 otherwise,
where σ is trainable; but it's not working (errors reported below).
My attempt:
Here's what I've coded up so far (if it helps). Suppose I have two functions (for example):
def f_True(x):
    # Compute Bump Function
    bump_value = 1 - tf.math.pow(x, 2)
    bump_value = -tf.math.pow(bump_value, -1)
    bump_value = tf.math.exp(bump_value)
    return bump_value

def f_False(x):
    # Compute Bump Function
    x_out = 0 * x
    return x_out

class trainable_bump_layer(tf.keras.layers.Layer):
    def __init__(self, *args, **kwargs):
        super(trainable_bump_layer, self).__init__(*args, **kwargs)

    def build(self, input_shape):
        self.threshold_level = self.add_weight(name='threshlevel',
                                               shape=[1],
                                               initializer='GlorotUniform',
                                               trainable=True)

    def call(self, input):
        # Determine Thresholding Logic
        The_Logic = tf.math.less(input, self.threshold_level)
        # Apply Logic
        output_step_3 = tf.cond(The_Logic,
                                lambda: f_True(input),
                                lambda: f_False(input))
        return output_step_3
Error Report:
Train on 100 samples
Epoch 1/10
WARNING:tensorflow:Gradients do not exist for variables ['reconfiguration_unit_steps_3_3/threshlevel:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['reconfiguration_unit_steps_3_3/threshlevel:0'] when minimizing the loss.
32/100 [========>.....................] - ETA: 3s
...
tensorflow:Gradients do not exist for variables
Moreover, it does not seem to be applied component-wise (besides the non-trainable problem). What could be the problem?
Unfortunately, no operation to check whether x is within (-σ, σ) will be differentiable and therefore σ cannot be learnt via any gradient descent method. Specifically, it is not possible to compute the gradients with respect to self.threshold_level because tf.math.less is not differentiable with respect to the condition.
Regarding the element-wise conditional, you can instead use tf.where to select elements from f_True(input) or f_False(input) according to the component-wise boolean values of the condition. For example:
output_step_3 = tf.where(The_Logic, f_True(input), f_False(input))
NOTE: I answered based on the provided code, where self.threshold_level is not used in f_True nor f_False. If self.threshold_level is used in those functions as in the provided formula, the function will, of course, be differentiable with respect to self.threshold_level.
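For intuition, the same element-wise selection can be sketched in NumPy, whose np.where mirrors tf.where here (the threshold value is hypothetical):

```python
import numpy as np

x = np.array([-2.0, -0.5, 0.5, 2.0])
threshold = 1.0

condition = np.abs(x) < threshold            # element-wise boolean mask
output = np.where(condition, np.exp(x), 0.0 * x)

assert (output[[0, 3]] == 0.0).all()         # outside the range: zero branch
assert np.allclose(output[[1, 2]], np.exp(x[[1, 2]]))  # inside: function branch
```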
Updated 19/04/2020: Thank you #today for the clarification.
I suggest you try a normal distribution instead of a bump.
In my tests here, this bump function is not behaving well (I can't find a bug, but don't rule one out): my graph shows two very sharp bumps, which is not good for networks.
With a normal distribution, you would get a regular and differentiable bump whose height, width and center you can control.
So, you may try this function:
y = a * exp ( - b * (x - c)²)
Try it in some graph and see how it behaves.
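A quick NumPy sketch of this curve (with hypothetical values a = 1, b = 4, c = 0):

```python
import numpy as np

a, b, c = 1.0, 4.0, 0.0
x = np.linspace(-3, 3, 601)
y = a * np.exp(-b * (x - c) ** 2)

assert np.isclose(y.max(), a)                  # peak height is a, at x == c
assert np.all(y > 0)                           # smooth decay, never exactly 0
assert np.all(np.diff(y[: len(y) // 2]) > 0)   # monotonically rising to the center
```

Unlike the hard-thresholded bump, this is differentiable everywhere, so all three parameters can be learned by gradient descent.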
For this:
class trainable_bump_layer(tf.keras.layers.Layer):
    def __init__(self, *args, **kwargs):
        super(trainable_bump_layer, self).__init__(*args, **kwargs)

    def build(self, input_shape):
        # suggested shape (has a different kernel for each input feature/channel)
        shape = tuple(1 for _ in input_shape[:-1]) + input_shape[-1:]
        # for your desired shape of only 1:
        shape = tuple(1 for _ in input_shape)  # all ones

        # height
        self.kernel_a = self.add_weight(name='kernel_a',
                                        shape=shape,
                                        initializer='ones',
                                        trainable=True)
        # inverse width
        self.kernel_b = self.add_weight(name='kernel_b',
                                        shape=shape,
                                        initializer='ones',
                                        trainable=True)
        # center
        self.kernel_c = self.add_weight(name='kernel_c',
                                        shape=shape,
                                        initializer='zeros',
                                        trainable=True)

    def call(self, input):
        # K is the Keras backend: from tensorflow.keras import backend as K
        exp_arg = -self.kernel_b * K.square(input - self.kernel_c)
        return self.kernel_a * K.exp(exp_arg)
I am a bit surprised that no one has mentioned the main (and only) reason for the given warning! As it seems, that code is supposed to implement the generalized variant of Bump function; however, just take a look at the functions implemented again:
def f_True(x):
    # Compute Bump Function
    bump_value = 1 - tf.math.pow(x, 2)
    bump_value = -tf.math.pow(bump_value, -1)
    bump_value = tf.math.exp(bump_value)
    return bump_value

def f_False(x):
    # Compute Bump Function
    x_out = 0 * x
    return x_out
The error is evident: there is no usage of the trainable weight of the layer in these functions! So it's no surprise that you get the message saying no gradients exist for it: you are not using it at all, so there is no gradient to update it! Rather, this is exactly the original Bump function (i.e. with no trainable weight).
But you might say: "at least I used the trainable weight in the condition of tf.cond, so there must be some gradients?!" However, it's not like that; let me clear up the confusion:
First of all, as you have noticed as well, we are interested in element-wise conditioning. So instead of tf.cond you need to use tf.where.
The other misconception is to claim that since tf.less is used as the condition, and since it is not differentiable i.e. it has no gradient with respect to its inputs (which is true: there is no defined gradient for a function with boolean output w.r.t. its real-valued inputs!), then that results in the given warning!
That's simply wrong! The derivative here is taken of the output of the layer w.r.t. the trainable weight, and the selection condition is NOT present in the output. Rather, it's just a boolean tensor which determines the output branch to be selected. That's it! The derivative of the condition is not taken and will never be needed. So that's not the reason for the given warning; the reason is only what I mentioned above: no contribution of the trainable weight to the output of the layer. (Note: if the point about the condition is a bit surprising to you, think about a simple example: the ReLU function, defined as relu(x) = 0 if x < 0 else x. If the derivative of the condition, i.e. x < 0, were considered/needed, which does not exist, then we would not be able to use ReLU in our models and train them using gradient-based optimization at all!)
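The ReLU point can be sketched in PyTorch (where torch.where behaves analogously to tf.where): gradients flow through the selected branch, never through the boolean condition.

```python
import torch

x = torch.tensor([-2.0, -0.5, 1.0, 3.0], requires_grad=True)
threshold = torch.tensor(0.0, requires_grad=True)

# The boolean condition is non-differentiable, and that's fine.
y = torch.where(x < threshold, torch.zeros_like(x), x)
y.sum().backward()

# Gradient w.r.t. x exists: 0 where the zero branch was taken, 1 elsewhere.
assert x.grad.tolist() == [0.0, 0.0, 1.0, 1.0]

# threshold only appears inside the condition, so it receives no gradient,
# just like the threshold weight in the original layer.
assert threshold.grad is None
```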
(Note: starting from here, I would refer to and denote the threshold value as sigma, like in the equation).
All right! We found the reason behind the error in implementation. Could we fix this? Of course! Here is the updated working implementation:
import tensorflow as tf
from tensorflow.keras.initializers import RandomUniform
from tensorflow.keras.constraints import NonNeg

class BumpLayer(tf.keras.layers.Layer):
    def __init__(self, *args, **kwargs):
        super(BumpLayer, self).__init__(*args, **kwargs)

    def build(self, input_shape):
        self.sigma = self.add_weight(
            name='sigma',
            shape=[1],
            initializer=RandomUniform(minval=0.0, maxval=0.1),
            trainable=True,
            constraint=tf.keras.constraints.NonNeg()
        )
        super().build(input_shape)

    def bump_function(self, x):
        return tf.math.exp(-self.sigma / (self.sigma - tf.math.pow(x, 2)))

    def call(self, inputs):
        greater = tf.math.greater(inputs, -self.sigma)
        less = tf.math.less(inputs, self.sigma)
        condition = tf.logical_and(greater, less)
        output = tf.where(
            condition,
            self.bump_function(inputs),
            0.0
        )
        return output
A few points regarding this implementation:
We have replaced tf.cond with tf.where in order to do element-wise conditioning.
Further, as you can see, unlike your implementation, which only checked one side of the inequality, we are using tf.math.less, tf.math.greater and tf.logical_and to find out whether the input values have magnitudes less than sigma (alternatively, we could do this using just tf.math.abs and tf.math.less; no difference!). And let us repeat it: using boolean-output functions in this way does not cause any problems and has nothing to do with derivatives/gradients.
We are also using a non-negativity constraint on the sigma value learned by the layer. Why? Because sigma values less than zero do not make sense (i.e. the range (-sigma, sigma) is ill-defined when sigma is negative).
And considering the previous point, we take care to initialize the sigma value properly (i.e. to a small non-negative value).
And also, please don't do things like 0.0 * inputs! It's redundant (and a bit weird), and it is equivalent to 0.0; both have a gradient of 0.0 (w.r.t. inputs). Multiplying a tensor by zero does not add anything or solve any existing issue, at least not in this case!
Now, let's test it to see how it works. We write some helper functions to generate training data based on a fixed sigma value, and also to create a model which contains a single BumpLayer with input shape of (1,). Let's see if it could learn the sigma value which is used for generating training data:
import numpy as np

def generate_data(sigma, min_x=-1, max_x=1, shape=(100000, 1)):
    assert sigma >= 0, 'Sigma should be non-negative!'
    x = np.random.uniform(min_x, max_x, size=shape)
    xp2 = np.power(x, 2)
    condition = np.logical_and(x < sigma, x > -sigma)
    y = np.where(condition, np.exp(-sigma / (sigma - xp2)), 0.0)
    dy = np.where(condition, xp2 * y / np.power((sigma - xp2), 2), 0)
    return x, y, dy

def make_model(input_shape=(1,)):
    model = tf.keras.Sequential()
    model.add(BumpLayer(input_shape=input_shape))
    model.compile(loss='mse', optimizer='adam')
    return model
# Generate training data using a fixed sigma value.
sigma = 0.5
x, y, _ = generate_data(sigma=sigma, min_x=-0.1, max_x=0.1)
model = make_model()
# Store initial value of sigma, so that it could be compared after training.
sigma_before = model.layers[0].get_weights()[0][0]
model.fit(x, y, epochs=5)
print('Sigma before training:', sigma_before)
print('Sigma after training:', model.layers[0].get_weights()[0][0])
print('Sigma used for generating data:', sigma)
# Sigma before training: 0.08271004
# Sigma after training: 0.5000002
# Sigma used for generating data: 0.5
Yes, it could learn the value of sigma used for generating the data! But is it guaranteed to work for all values of training data and initializations of sigma? The answer is: NO! Actually, it is possible to run the code above and get nan as the value of sigma after training, or inf as the loss value! So what's the problem? Why might these nan or inf values be produced? Let's discuss it below...
Dealing with numerical stability
One of the important things to consider, when building a machine learning model and using gradient-based optimization methods to train them, is the numerical stability of operations and calculations in a model. When extremely large or small values are generated by an operation or its gradient, almost certainly it would disrupt the training process (for example, that's one of the reasons behind normalizing image pixel values in CNNs to prevent this issue).
So, let's take a look at this generalized bump function (and let's discard the thresholding for now). It's obvious that this function has singularities (i.e. points where either the function or its gradient is not defined) at x² = sigma (i.e. when x = sqrt(sigma) or x = -sqrt(sigma)). The animated diagram below shows the bump function (the solid red line), its derivative w.r.t. sigma (the dotted green line), and the x = sigma and x = -sigma lines (two vertical dashed blue lines), as sigma starts from zero and is increased to 5:
As you can see, around the region of singularities the function is not well-behaved for all values of sigma, in the sense that both the function and its derivative take extremely large values at those regions. So given an input value at those regions for a particular value of sigma, exploding output and gradient values would be generated, hence the issue of inf loss value.
Even further, there is a problematic behavior of tf.where which causes the issue of nan values for the sigma variable in the layer: surprisingly, if the value produced in the inactive branch of tf.where is extremely large or inf (which, with the bump function, results in extremely large or inf gradient values), then the gradient of tf.where will be nan, despite the fact that the inf is in the inactive branch and is not even selected (see this GitHub issue, which discusses exactly this)!!
So is there any workaround for this behavior of tf.where? Yes, actually, there is a trick to resolve this issue, explained in this answer: basically, we can use an additional tf.where to prevent the function from being applied in these regions. In other words, instead of applying self.bump_function to every input value, we filter out the values that are NOT in the range (-self.sigma, self.sigma) (i.e. the range where the function should actually be applied), and feed the function zero instead (which always produces a safe value, namely exp(-1)):
output = tf.where(
    condition,
    self.bump_function(tf.where(condition, inputs, 0.0)),
    0.0
)
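The same behavior, and the same fix, can be reproduced in PyTorch, whose where has the analogous gradient quirk; here is a minimal sketch with 1/x standing in for the bump function:

```python
import torch

# The masked-out branch still gets its gradient computed; 0 * inf = nan.
x = torch.tensor([0.0, 2.0], requires_grad=True)
y = torch.where(x != 0.0, 1.0 / x, torch.zeros_like(x))
y.sum().backward()
assert torch.isnan(x.grad[0])  # nan leaked from the inactive branch

# Fix: feed the unsafe function a safe value via an inner where.
x2 = torch.tensor([0.0, 2.0], requires_grad=True)
safe = torch.where(x2 != 0.0, x2, torch.ones_like(x2))
y2 = torch.where(x2 != 0.0, 1.0 / safe, torch.zeros_like(x2))
y2.sum().backward()
assert x2.grad[0] == 0.0      # clean gradient where the branch was inactive
assert x2.grad[1] == -0.25    # d(1/x)/dx = -1/x^2 at x = 2
```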
Applying this fix would entirely resolve the issue of nan values for sigma. Let's evaluate it on training data values generated with different sigma values and see how it would perform:
true_learned_sigma = []
for s in np.arange(0.1, 10.0, 0.1):
    model = make_model()
    x, y, dy = generate_data(sigma=s, shape=(100000, 1))
    model.fit(x, y, epochs=3 if s < 1 else (5 if s < 5 else 10), verbose=False)
    sigma = model.layers[0].get_weights()[0][0]
    true_learned_sigma.append([s, sigma])
    print(s, sigma)

# Check if the learned values of sigma
# are actually close to true values of sigma, for all the experiments.
res = np.array(true_learned_sigma)
print(np.allclose(res[:, 0], res[:, 1], atol=1e-2))
# True
It could learn all the sigma values correctly! That's nice, the workaround worked! Although there is one caveat: this is only guaranteed to work properly and learn any sigma value if the input values to this layer are greater than -1 and less than 1 (i.e. the default case of our generate_data function); otherwise, there is still the issue of inf loss values, which might happen if the input values have a magnitude greater than 1 (see points #1 and #2 below).
Here is some food for thought for the curious and interested mind:
It was just mentioned that if the input values to this layer are greater than 1 or less than -1, then it may cause problems. Can you argue why this is the case? (Hint: use the animated diagram above and consider cases where sigma > 1 and the input value is between sqrt(sigma) and sigma (or between -sigma and -sqrt(sigma)).)
Can you provide a fix for the issue in point #1, i.e. such that the layer could work for all input values? (Hint: like the workaround for tf.where, think about how you can further filter-out the unsafe values which the bump function could be applied on and produce exploding output/gradient.)
However, if you are not interested to fix this issue, and would like to use this layer in a model as it is now, then how would you guarantee that the input values to this layer are always between -1 and 1? (Hint: as one solution, there is a commonly-used activation function which produces values exactly in this range and could be potentially used as the activation function of the layer which is before this layer.)
If you take a look at the last code snippet, you will see that we have used epochs=3 if s < 1 else (5 if s < 5 else 10). Why is that? Why large values of sigma need more epochs to be learned? (Hint: again, use the animated diagram and consider the derivative of function for input values between -1 and 1 as sigma value increases. What are their magnitude?)
Do we also need to check the generated training data for any nan, inf or extremely large values of y and filter them out? (Hint: yes, if sigma > 1 and range of values, i.e. min_x and max_x, fall outside of (-1, 1); otherwise, no that's not necessary! Why is that? Left as an exercise!)

Clear Implementation of Softmax and Its Derivative

I'm currently writing my first multilayer neural net with python 3.7 and numpy, and I'm having trouble implementing softmax (I intend to use my network for classification, so having a working implementation of softmax is pretty crucial). I copied this code off of a different thread:
def softmax(x):
    return exp(x) / np.sum(exp(x), axis=0)
I think I have a basic understanding of the intended function of the softmax function; that is, to take a vector and turn its elements into probabilities so that they sum to 1. Please correct my understanding if I'm wrong. I don't quite understand how this code accomplishes that function, but I found similar code on multiple other threads, so I believe it to be correct. Please confirm.
Unfortunately, in none of these threads could I find a clear implementation of the derivative of the softmax function. I understand it to be more complicated than that of most activation functions, and to require more parameters than just x, but I have no idea how to implement it myself. I'm looking for an explanation of what those other parameters are, as well as for an implementation (or mathematical expression) of the derivative of the softmax function.
Answer for how this code accomplishes that function:
Here, we make use of a concept known as broadcasting.
When you use the function exp(x), then, assuming x is a vector, you actually perform an operation similar to what can be accomplished by the following code:
exps = []
for i in x:
    exps.append(exp(i))
return exps
The above code is the longer version of what broadcasting does automatically here.
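A NumPy sketch of this claim: np.exp applied to an array matches the explicit loop element for element, and the resulting softmax sums to 1.

```python
import math
import numpy as np

x = np.array([1.0, 2.0, 3.0])

# Vectorized: np.exp maps over every element at once ...
vectorized = np.exp(x)

# ... which is equivalent to the explicit loop version.
looped = np.array([math.exp(v) for v in x])
assert np.allclose(vectorized, looped)

# And the softmax built from it turns the vector into probabilities.
s = np.exp(x) / np.sum(np.exp(x), axis=0)
assert math.isclose(s.sum(), 1.0)
```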
As for the implementation of the derivative, that's a bit more complicated, as you say.
An untested implementation for computing the vector of derivatives with respect to every parameter:
def softmax_derivative(X):
    # input : a vector X
    # output : a vector containing derivatives of softmax(X) wrt every element in X

    # List of derivatives
    derivs = []
    # denominator after differentiation
    denom = np.sum(exp(X), axis=0)
    for x in X:
        # Derivative of softmax(x) w.r.t. x itself is
        # exp(x) * (denom - exp(x)) / denom**2
        comm = exp(x) / (denom ** 2)
        factor = 0
        # Add exp of every element except the current element
        for other in X:
            if other == x:
                continue
            factor += exp(other)
        derivs.append(comm * factor)
    return derivs
You can also use broadcasting in the above function, but I think it's clearer this way.
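For completeness, here is a vectorized sketch of the full softmax Jacobian; the "other parameters" are just the remaining components of x, and the Jacobian works out to diag(s) - s sᵀ:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))  # shift for numerical stability
    return e / e.sum(axis=0)

def softmax_jacobian(x):
    # J[i, j] = s_i * (delta_ij - s_j)
    s = softmax(x)
    return np.diag(s) - np.outer(s, s)

x = np.array([1.0, 2.0, 3.0])
J = softmax_jacobian(x)

# Sanity check against a central-difference numerical gradient.
eps = 1e-6
num = np.empty((3, 3))
for j in range(3):
    d = np.zeros(3)
    d[j] = eps
    num[:, j] = (softmax(x + d) - softmax(x - d)) / (2 * eps)
assert np.allclose(J, num, atol=1e-6)
```

The diagonal entries J[i, i] = s_i (1 - s_i) match the per-element derivatives computed by the loop version above.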

pytorch gradient / derivative / difference along axis like numpy.diff

I have been struggling with this for quite some time. All I want is a torch.diff() function. However, many matrix operations do not appear to be easily compatible with tensor operations.
I have tried an enormous amount of various pytorch operation combinations, yet none of them work.
Since PyTorch hasn't implemented this basic feature, I started by simply trying to subtract element i+1 from element i along a specific axis.
However, you can't simply do this element-wise (due to the tensor limitations), so I tried to construct another tensor, with the elements shifted along one axis:
ix_plus_one = [0]+list(range(0,prediction.size(1)-1))
ix_differential_tensor = torch.LongTensor(ix_plus_one)
diff_one_tensor = prediction[:,ix_differential_tensor]
But now we have a different problem: indexing doesn't really mimic NumPy in PyTorch as advertised, so you can't index with a "list-like" tensor like this. I also tried using the tensor scatter functions.
So I'm still stuck with this simple problem of trying to compute a difference along an axis of a PyTorch tensor.
All of my searching leads to the marvelous capabilities of PyTorch's autograd, which has nothing to do with this problem.
A 1D convolution with a fixed filter should do the trick:
filter = torch.nn.Conv1d(in_channels=1, out_channels=1, kernel_size=2, stride=1, padding=1, groups=1, bias=False)
kernel = np.array([-1.0, 1.0])
kernel = torch.from_numpy(kernel).float().view(1, 1, 2)  # cast to float32 to match Conv1d's weights
filter.weight.data = kernel
filter.weight.requires_grad = False
Then use filter like you would any other layer in torch.nn.
Also, you might want to change padding to suit your specific needs.
There appears to be a simpler solution to this (as I needed something similar), referenced here: https://discuss.pytorch.org/t/equivalent-function-like-numpy-diff-in-pytorch/35327/2
diff = x[1:] - x[:-1]
which can be done along different dimensions such as
diff = polygon[:, 1:] - polygon[:, :-1]
I would recommend writing a unit test that verifies identical behavior though.
For all those running into the question after March 2021
As of torch 1.8 there's torch.diff that works exactly as expected by the OP
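A quick sketch comparing the slicing approach with torch.diff (available in torch >= 1.8):

```python
import torch

x = torch.tensor([1.0, 4.0, 9.0, 16.0])

# Manual slicing, as suggested above.
manual = x[1:] - x[:-1]

# Built-in torch.diff, with a choice of dimension.
built_in = torch.diff(x)
assert torch.equal(manual, built_in)
assert manual.tolist() == [3.0, 5.0, 7.0]

# Along a chosen dim for 2-D data:
polygon = torch.arange(6.0).view(2, 3)
assert torch.equal(torch.diff(polygon, dim=1), polygon[:, 1:] - polygon[:, :-1])
```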
