I am new to PyTorch and am looking for a quick get_score function that, given a batch of samples and a distribution, outputs a tensor containing the score of each individual sample. For instance, consider the following code:
norm = torch.distributions.multivariate_normal.MultivariateNormal(torch.zeros(2),torch.eye(2))
samples = norm.sample((1000,))
samples.requires_grad_(True)
Using samples, I would like to create a score tensor of size [1000, 2] whose ith row score[i] is the gradient of log p(samples[i]) with respect to samples[i], where p is the density of the given distribution. The method I have come up with is the following:
def get_score(samples, distribution):
    log_probs = distribution.log_prob(samples)
    for i in range(log_probs.size()[0]):
        log_probs[i].backward(retain_graph=True)
The resulting score tensor is then samples.grad. The issue is that my method is quite slow for larger samples (e.g. for a sample of size [50000,2] it takes about 25-30 seconds on my CPU). Is this as fast as it can get?
The only alternative I can think of is to hard-code the score function for each distribution I will use, but that doesn't seem like a good solution!
From experimentation, for 50000 samples, the following is about 50% quicker:
for i in range(50000):
    sample = norm.sample((1,))
    sample.requires_grad_(True)
    log_prob = norm.log_prob(sample)
    log_prob.backward()
This indicates that there should be a better way!
I'm assuming that log_probs is stored as a PyTorch tensor.
You can take advantage of the linearity of differentiation to calculate the derivative for all samples at once: log_probs.sum().backward(retain_graph = True)
At least with GPU acceleration this will be a lot faster.
If log_probs is not a tensor but a list of scalars (represented as pytorch tensors of rank 0), you can use log_probs = torch.stack(log_probs) first.
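For reference, here is a minimal sketch of that vectorized approach, using the same MultivariateNormal setup as in the question; because each log_probs[i] depends only on samples[i], the single summed backward pass leaves exactly the per-sample scores in samples.grad:
import torch

# Minimal sketch of the vectorized score computation (same setup as the question).
norm = torch.distributions.MultivariateNormal(torch.zeros(2), torch.eye(2))
samples = norm.sample((1000,))
samples.requires_grad_(True)

log_probs = norm.log_prob(samples)   # shape [1000]
log_probs.sum().backward()           # one backward pass for all samples
score = samples.grad                 # shape [1000, 2]; score[i] is the gradient of log p(samples[i])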
I am working with the REINFORCE algorithm in PyTorch. I noticed that the batch predictions of my simple network with Softmax don't sum to 1 (not even close to 1). I am attaching a minimal working example so that you can reproduce it. What am I missing here?
import numpy as np
import torch
obs_size = 9
HIDDEN_SIZE = 9
n_actions = 2
np.random.seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(obs_size, HIDDEN_SIZE),
    torch.nn.ReLU(),
    torch.nn.Linear(HIDDEN_SIZE, n_actions),
    torch.nn.Softmax(dim=0)
)
state_transitions = np.random.rand(3, obs_size)
state_batch = torch.Tensor(state_transitions)
pred_batch = model(state_batch) # WRONG PREDICTIONS!
print('wrong predictions:\n', *pred_batch.detach().numpy())
# [0.34072137 0.34721774] [0.30972624 0.30191955] [0.3495524 0.3508627]
# DOES NOT SUM TO 1 !!!
pred_batch = [model(s).detach().numpy() for s in state_batch] # CORRECT PREDICTIONS
print('correct predictions:\n', *pred_batch)
# [0.5955179 0.40448207] [0.6574412 0.34255883] [0.624833 0.37516695]
# DOES SUM TO 1 AS EXPECTED
Although PyTorch lets us get away with it, we don’t actually provide an input with the right dimensionality. We have a model that takes one input and produces one output, but PyTorch nn.Module and its subclasses are designed to do so on multiple samples at the same time. To accommodate multiple samples, modules expect the zeroth dimension of the input to be the number of samples in the batch.
Deep Learning with PyTorch
That your model works on each individual sample is an implementation nicety. You have incorrectly specified the dimension for the softmax (across batches instead of across the variables), and hence when given a batch dimension it is computing the softmax across samples instead of within samples:
nn.Softmax requires us to specify the dimension along which the softmax function is applied:
softmax = nn.Softmax(dim=1)
In this case, we have two input vectors in two rows (just like when we work with batches), so we initialize nn.Softmax to operate along dimension 1.
Change torch.nn.Softmax(dim=0) to torch.nn.Softmax(dim=1) to get appropriate results.
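For example, here is a minimal sketch of the corrected model (same sizes as in the question, obs_size = 9 and n_actions = 2); with dim=1 every row of the batched output sums to 1:
import torch

# Sketch: same architecture as in the question, but with softmax over dim=1 (the action dimension).
model = torch.nn.Sequential(
    torch.nn.Linear(9, 9),
    torch.nn.ReLU(),
    torch.nn.Linear(9, 2),
    torch.nn.Softmax(dim=1)
)
state_batch = torch.rand(3, 9)
pred_batch = model(state_batch)
print(pred_batch.sum(dim=1))   # each entry is 1 (up to floating-point error)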
I have two tensors from which I am calculating the Spearman's rank correlation, and I would like PyTorch to automatically adjust the values in these tensors in a way that makes my Spearman's rank correlation as high as possible.
I have explored autograd but nothing I've found has explained it simply enough.
Initialized tensors:
a=Var(torch.randn(20,1),requires_grad=True)
psfm_s=Var(torch.randn(12,20),requires_grad=True)
How can I set up a loop that keeps adjusting the values in these two tensors so as to maximize the Spearman's rank correlation between the two lists I build from them, while letting PyTorch do the work? I just need a pointer in the right direction. Thank you!
I'm not familiar with Spearman's Rank Correlation, but if I understand your question you're asking how to use PyTorch to solve problems other than deep networks?
If that's the case then I'll provide a simple least squares example which I believe should be informative to your effort.
Consider a set of 200 measurements of 10 dimensional vectors x and y. Say we want to find a linear transform from x to y.
The least squares approach dictates that we can accomplish this by finding the matrix M and vector b which minimize ||y - (Mx + b)||^2.
The following example code generates some example data and then uses pytorch to perform this minimization. I believe the comments are sufficient to help you understand what is occurring here.
import torch
from torch.nn.parameter import Parameter
from torch import optim
# define some fake data
M_true = torch.randn(10, 10)
b_true = torch.randn(10, 1)
x = torch.randn(200, 10, 1)
noise = torch.matmul(M_true, 0.05 * torch.randn(200, 10, 1))
y = torch.matmul(M_true, x) + b_true + noise
# begin optimization
# define the parameters we want to optimize (using random starting values in this case)
M = Parameter(torch.randn(10, 10))
b = Parameter(torch.randn(10, 1))
# define the optimizer and provide the parameters we want to optimize
optimizer = optim.SGD((M, b), lr=0.1)
for i in range(500):
    # compute loss that we want to minimize
    y_hat = torch.matmul(M, x) + b
    loss = torch.mean((y - y_hat)**2)
    # zero the gradients of the parameters referenced by the optimizer (M and b)
    optimizer.zero_grad()
    # compute new gradients
    loss.backward()
    # update parameters M and b
    optimizer.step()
    if (i + 1) % 100 == 0:
        # scale learning rate by factor of 0.9 every 100 steps
        optimizer.param_groups[0]['lr'] *= 0.9
        print('step', i + 1, 'mse:', loss.item())
# final parameter values (data contains a torch.tensor)
print('Resulting parameters:')
print(M.data)
print(b.data)
print('Compare to the "real" values')
print(M_true)
print(b_true)
Of course this problem has a simple closed-form solution, but this numerical approach is just to demonstrate how to use PyTorch's autograd to solve problems that are not necessarily neural-network related. I also chose to explicitly define the matrix M and vector b here rather than using an equivalent nn.Linear layer, since I think that would just confuse things.
In your case you want to maximize something so make sure to negate your objective function before calling backward.
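As a toy illustration of that last point (a hedged sketch; toy_objective is a hypothetical stand-in for whatever differentiable objective you end up using), maximizing the objective is the same as minimizing its negation:
import torch
from torch import optim

# Hedged sketch: maximize an objective with a minimizer by negating it.
a = torch.randn(20, 1, requires_grad=True)

def toy_objective(t):
    # Hypothetical differentiable objective, maximized when every entry equals 3.
    return -torch.sum((t - 3.0) ** 2)

optimizer = optim.SGD([a], lr=0.1)
for _ in range(100):
    loss = -toy_objective(a)   # negate: maximizing the objective == minimizing the loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(a.mean().item())   # close to 3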
I wrote two types of linear classifier in PyTorch:
torch.manual_seed(0)
fc = []
for i in range(n):
    fc.append(nn.Linear(feature_size, 1))
The other:
torch.manual_seed(0)
fc = nn.Linear(feature_size, n)
And different results were obtained using these two types of fc in a multi-label classification model.
Actually, the two variants of fc are initialized differently, and that leads to different results. Which one is correct, and what should I do if I want similar results from both types of fc?
Additional Information:
I found out the reason for the bad results:
The first type of fc does not update during training!
But I don't know why there is no updating; my code is as follows:
x = self.features(input)
res = []
for i in range(self.num_classes):
    res.append(self.fc[i](x.cpu()))
res = torch.cat(res, 1)
return res.cuda()
Any idea about this?
What happens if you initialize the two types to the exact same values? Do they still learn different classifications?
What loss function are you using on top of these classifiers? Is it the same loss function?
In terms of computations, both types are performing the same: they multiply the input feature vector with n weight vectors. Thus, if the weight vectors have the same values, both types should output the same classifications.
In terms of runtime and efficiency, I suppose it is better to use one n-dimensional classifier as opposed to n one-dimensional ones: this, I believe, allows for more hardware-acceleration options.
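To see that equivalence concretely, here is a small sketch (with hypothetical sizes) that copies the rows of one n-output nn.Linear into n single-output nn.Linear layers and checks that both variants produce identical outputs:
import torch
import torch.nn as nn

# Sketch: one Linear(feature_size, n) vs. n copies of Linear(feature_size, 1)
# holding the same weights; their outputs match exactly.
torch.manual_seed(0)
feature_size, n = 4, 3
big = nn.Linear(feature_size, n)
small = [nn.Linear(feature_size, 1) for _ in range(n)]
with torch.no_grad():
    for i, layer in enumerate(small):
        layer.weight.copy_(big.weight[i:i + 1])
        layer.bias.copy_(big.bias[i:i + 1])

x = torch.randn(5, feature_size)
out_big = big(x)
out_small = torch.cat([layer(x) for layer in small], dim=1)
print(torch.allclose(out_big, out_small))   # True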
This is my first post here, so I hope it complies with the guidelines and is interesting to other people besides myself.
I am building a CNN autoencoder that takes matrices of fixed size as input, with the goal of getting a lower-dimensional representation of them (I call them hashes here). I want these hashes to be similar when the matrices are similar. Since only a few of my data points are labeled, I want the loss function to be a combination of two separate functions. One part is the reconstruction error of the autoencoder (this part is working correctly). The other part is for the labeled data: since I will have three different classes, I want, on each batch, to calculate the distance between hash values belonging to the same class (this is the part I am having trouble implementing).
My effort so far:
X = tf.placeholder(shape=[None, 512, 128, 1], dtype=tf.float32)
class1_indices = tf.placeholder(shape=[None], dtype=tf.int32)
class2_indices = tf.placeholder(shape=[None], dtype=tf.int32)
hashes, reconstructed_output = self.conv_net(X, weights, biases_enc, biases_dec, keep_prob)
class1_hashes = tf.gather(hashes, class1_indices)
class1_cost = self.calculate_within_class_loss(class1_hashes)
class2_hashes = tf.gather(hashes, class2_indices)
class2_cost = self.calculate_within_class_loss(class2_hashes)
loss_all = tf.reduce_sum(tf.square(reconstructed_output - X))
loss_labeled = class1_cost + class2_cost
loss_op = loss_all + loss_labeled
optimizer = tf.train.AdagradOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
Here calculate_within_class_loss is a separate function that I created. I have currently implemented it only for the difference between the first hash of a class and the other hashes of that class in the same batch; however, I am not happy with my current implementation and it looks like it is not working.
def calculate_within_class_loss(self, hash_values):
    first_hash = tf.slice(hash_values, [0, 0], [1, 256])
    total_loss = tf.foldl(lambda d, e: d + tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(e, first_hash)))), hash_values, initializer=0.0)
    return total_loss
So, I have two questions / issues:
Is there any easy way to calculate the distance of every row to all other rows in a tensor?
My current implementation of the within-class distance, even though it only compares the first element with the other elements, gives me a 'nan' when I try to optimize it.
Thanks for your time and help :)
In the sample code, you are calculating the sum of Euclidean distances between the points.
For this, you would have to loop over the entire dataset and do O(n^2 * m) calculations, using O(n^2 * m) space, i.e. TensorFlow graph operations.
Here, n is the number of vectors and m is the size of the hash, i.e. 256.
However, if you could change your objective from the sum of Euclidean distances to the sum of squared Euclidean distances, then you can use the nifty relationship between the squared Euclidean distance and the variance and rewrite the same calculation as
sum_{i,j} ||x_i - x_j||^2 = 2 * n * sum_k sum_i (x_{i,k} - mu_k)^2,
where mu_k is the average value of the kth coordinate for the cluster.
This will allow you to compute the value in O(n * m) time and O(n * m) Tensorflow operations.
This would be the way to go if you think this change (i.e. from Euclidean distance to squared Euclidean distance) will not adversely affect your loss function.
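If that change is acceptable, a possible sketch of the O(n * m) version (assuming TensorFlow 1.x with import tensorflow as tf, as in the question, and a hash size of 256) is to sum the squared distances to the per-class mean, which is proportional to the sum of all pairwise squared distances:
def calculate_within_class_loss(self, hash_values):
    # Sketch: sum of squared Euclidean distances from each hash to the class mean.
    # The sum over all ordered pairs of squared distances equals 2 * n times this value.
    mu = tf.reduce_mean(hash_values, axis=0, keepdims=True)   # [1, 256]
    return tf.reduce_sum(tf.square(hash_values - mu))         # scalar, O(n * m) operations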
I am trying to implement a solution to Ridge regression in Python using Stochastic gradient descent as the solver. My code for SGD is as follows:
def fit(self, X, Y):
    # Assumes: import numpy as np, import pandas as pd, from random import shuffle
    # Convert to data frame in case X is a numpy matrix
    X = pd.DataFrame(X)
    # Prepend a column of 1s to the data for the intercept
    X.insert(0, 'intercept', np.array([1.0] * X.shape[0]))
    # Find dimensions of train
    m, d = X.shape
    # Initialize weights to random
    beta = self.initializeRandomWeights(d)
    beta_prev = None
    epochs = 0
    prev_error = None
    while beta_prev is None or epochs < self.nb_epochs:
        print("## Epoch: " + str(epochs))
        indices = list(range(m))
        shuffle(indices)
        for i in indices:  # Pick a training example from a randomly shuffled set
            beta_prev = beta
            xi = X.iloc[i]
            errori = sum(beta * xi) - Y[i]  # Error[i] = sum(beta*x) - y = error of ith training example
            gradient_vector = xi * errori + self.l * beta_prev
            beta = beta_prev - self.alpha * gradient_vector
        epochs += 1
The data I'm testing this on is not normalized, and my implementation always ends up with all the weights being infinity, even though I initialize the weights vector to low values. Only when I set the learning rate alpha to a very small value (~1e-8) does the algorithm end up with valid values for the weight vector.
My understanding is that normalizing/scaling input features only helps reduce convergence time. But the algorithm should not fail to converge as a whole if the features are not normalized. Is my understanding correct?
You can check from scikit-learn's Stochastic Gradient Descent documentation that one of the disadvantages of the algorithm is that it is sensitive to feature scaling. In general, gradient based optimization algorithms converge faster on normalized data.
Also, normalization is advantageous for regression methods.
The updates to the coefficients during each step will depend on the ranges of each feature. Also, the regularization term will be affected heavily by large feature values.
SGD may converge without data normalization, but that depends on the data at hand. Therefore, your assumption is not correct.
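For instance, a minimal sketch of feature standardization with plain NumPy (zero mean, unit variance per column) before running SGD:
import numpy as np

# Minimal sketch: standardize each feature column so that no single feature
# dominates the gradient updates or the regularization term.
X = np.random.rand(100, 5) * np.array([1.0, 10.0, 100.0, 1000.0, 10000.0])
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
print(X_std.mean(axis=0).round(6))   # ~0 for every column
print(X_std.std(axis=0).round(6))    # 1 for every column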
Your assumption is not correct.
It's hard to answer this because there are so many different methods/environments, but I will try to mention some points.
Normalization
When a method is not scale-invariant (I think every linear regression is not), you really should normalize your data.
I take it that you are ignoring this only for debugging/analysis purposes.
Normalizing your data is not only relevant for convergence time; the results will differ too (think about the effect within the loss function: large values contribute much more loss than small ones)!
Convergence
There is probably much to tell about convergence of many methods on normalized/non-normalized data, but your case is special:
SGD's convergence theory only guarantees convergence to some local minimum (= the global minimum in your convex optimization problem) for certain choices of hyper-parameters (learning rate and learning schedule/decay).
Even optimizing normalized data can fail with SGD when those parameters are bad!
This is one of the most important downsides of SGD: its dependence on hyper-parameters.
Since SGD is based on gradients and step sizes, non-normalized data can have a huge effect on whether this convergence is achieved!
In order for SGD to converge in linear regression, the step size should be smaller than 2/s, where s is the largest singular value of the matrix (see the "Convergence and stability in the mean" section of https://en.m.wikipedia.org/wiki/Least_mean_squares_filter); in the case of ridge regression it should be less than 2*(1 + p/s^2)/s, where p is the ridge penalty.
Normalizing the rows of the matrix (or the gradients) changes the loss function to give each sample an equal weight, and it changes the singular values of the matrix such that you can choose a step size near 1 (see the NLMS section in https://en.m.wikipedia.org/wiki/Least_mean_squares_filter). Depending on your data, it might require smaller step sizes or allow for larger ones; it all depends on whether the normalization increases or decreases the largest singular value of the matrix.
Note that when deciding whether or not to normalize the rows, you shouldn't just think about the convergence rate (which is determined by the ratio between the largest and smallest singular values) or stability in the mean, but also about how it changes the loss function and whether or not that fits your needs. Sometimes it makes sense to normalize, but sometimes (for example, when you want to give different importance to different samples, or when you think that a larger signal energy means a better SNR) it does not.
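As a rough illustration of the 2/s criterion mentioned above (a hedged sketch that simply evaluates the bound on a random data matrix):
import numpy as np

# Hedged sketch: evaluate the 2/s step-size criterion on a random data matrix,
# where s is the largest singular value.
X = np.random.randn(200, 10)
s = np.linalg.svd(X, compute_uv=False)[0]   # singular values are sorted in descending order
print('largest singular value:', s)
print('step size should stay below:', 2.0 / s)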