I am trying to normalize some data over its last dimensions.
# sample data
import numpy
x = numpy.random.random((3, 1, 4, 16, 16))
x[1] = x[1]*2
x[2] = x[2]*4
I can get the mean,
m = x.mean((-3, -2, -1))
Now, x.shape is (3, 1, 4, 16, 16) and m.shape is (3, 1), and I want to subtract the mean from each sample. So far I have:
for i in range(x.shape[0]):
    for j in range(x.shape[1]):
        x[i, j] = x[i, j] - m[i, j]
That works, but it has two drawbacks: it uses explicit loops, and it requires the array to have exactly 5 dimensions.
Simply keep the reduced dimensions with the keepdims arg and then subtract -
m = x.mean((-3, -2, -1), keepdims=True)
x -= m
This would work regardless of the axes that are used for the reduction and should be a clean solution.
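As a quick sanity check (a minimal sketch reusing the sample data from the question), every per-sample mean should be numerically zero after the subtraction:
import numpy
x = numpy.random.random((3, 1, 4, 16, 16))
x[1] = x[1] * 2
x[2] = x[2] * 4
m = x.mean((-3, -2, -1), keepdims=True)    # m.shape is (3, 1, 1, 1, 1)
x -= m                                     # broadcasts over the last three axes
print(numpy.allclose(x.mean((-3, -2, -1)), 0))  # True: each sample now has zero mean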
I need to calculate the dot product of two matrices. Probably tensordot would do the job; however, I am struggling to figure out the exact solution.
The simple option
res = np.dot(x, fullkernel[:, :-1].transpose())
works fine, where x is of shape (9999,), fullkernel of shape (980,10000), and res is of shape (1, 980).
Now I need to do similar thing with 2 dimensions. Thus my x now has shape (9999, 2), fullkernel (2, 980, 10000).
Literally, I want my result "res" to have 2 dimensions, where each row is the dot product of one column of x and one slice of fullkernel.
You can do that like this:
res = np.einsum('ki,ijk->ij', x, fullkernel[:, :, :-1])
print(res.shape)
# (2, 980)
If you want to have the additional singleton dimension in the middle just do:
res = np.expand_dims(res, 1)
An equivalent solution with @ / np.matmul would be:
res = np.expand_dims(x.T, 1) @ np.moveaxis(fullkernel[:, :, :-1], 2, 1)
print(res.shape)
# (2, 1, 980)
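If you want to convince yourself that the two variants agree, here is a small sketch with smaller random arrays of the same layout (the shapes are illustrative stand-ins for those in the question):
import numpy as np

x = np.random.random((99, 2))               # smaller stand-ins with the same layout
fullkernel = np.random.random((2, 98, 100))

res_einsum = np.einsum('ki,ijk->ij', x, fullkernel[:, :, :-1])
res_matmul = np.expand_dims(x.T, 1) @ np.moveaxis(fullkernel[:, :, :-1], 2, 1)

print(res_einsum.shape)                              # (2, 98)
print(res_matmul.shape)                              # (2, 1, 98)
print(np.allclose(res_einsum, res_matmul[:, 0, :]))  # True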
I want to vectorize the following code:
def style_noise(self, y, style):
    n = torch.randn(y.shape)
    for i in range(n.shape[0]):
        n[i] = (n[i] - n.mean(dim=(1, 2, 3))[i]) * style.std(dim=(1, 2, 3))[i] / n.std(dim=(1, 2, 3))[i] + style.mean(dim=(1, 2, 3))[i]
    noise = Variable(n, requires_grad=False).to(y.device)
    return noise
I didn't find a nice way of doing so.
y and style are 4d tensors, say style.shape = y.shape = [64, 3, 128, 128].
I want to return the noise tensor, noise.shape = [64, 3, 128, 128].
Please let me know in the comments if the question is not clear.
Your use case is exactly why the .mean and .std methods come with a keepdim parameter. You can make use of this to enable broadcasting semantics to vectorize things for you:
def style_noise(self, y, style):
    n = torch.randn(y.shape)
    n_mean = n.mean(dim=(1, 2, 3), keepdim=True)
    n_std = n.std(dim=(1, 2, 3), keepdim=True)
    style_mean = style.mean(dim=(1, 2, 3), keepdim=True)
    style_std = style.std(dim=(1, 2, 3), keepdim=True)
    n = (n - n_mean) * style_std / n_std + style_mean
    noise = Variable(n, requires_grad=False).to(y.device)
    return noise
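As a quick sanity check (a minimal sketch with made-up shapes, dropping self and Variable for brevity), the per-sample statistics of the resulting noise should match those of style:
import torch

y = torch.randn(64, 3, 128, 128)
style = torch.randn(64, 3, 128, 128) * 2 + 5

n = torch.randn(y.shape)
n_mean = n.mean(dim=(1, 2, 3), keepdim=True)
n_std = n.std(dim=(1, 2, 3), keepdim=True)
style_mean = style.mean(dim=(1, 2, 3), keepdim=True)
style_std = style.std(dim=(1, 2, 3), keepdim=True)
noise = (n - n_mean) * style_std / n_std + style_mean

print(torch.allclose(noise.mean(dim=(1, 2, 3)), style.mean(dim=(1, 2, 3)), atol=1e-4))  # True
print(torch.allclose(noise.std(dim=(1, 2, 3)), style.std(dim=(1, 2, 3)), atol=1e-4))    # True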
To calculate the mean and std for the whole tensor, set no arguments:
m = t.mean(); print(m)  # mean of the whole tensor when dim is not set
s = t.std(); print(s)   # std of the whole tensor when dim is not set
Then, if your shape is (2, 2, 2) for instance, create tensors for the broadcasting subtraction and division:
ss = torch.empty(2,2,2).fill_(s)
print(ss)
mm = torch.empty(2,2,2).fill_(m)
print(mm)
At the moment keepdim is not working as expected when you don't set the dim.
m = t.mean(); print(m)              # for the whole tensor
s = t.std(); print(s)               # for the whole tensor
m = t.mean(dim=0); print(m)         # dim=0: mean of each column
s = t.std(dim=0); print(s)          # dim=0: std of each column
m = t.mean(dim=1); print(m)         # dim=1: mean of each row
s = t.std(dim=1); print(s)          # dim=1: std of each row
m = t.mean(keepdim=True); print(m)  # will not work
s = t.std(keepdim=True); print(s)   # will not work
If you set dim as a tuple, it will return the mean over just those axes, not over the whole tensor.
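For example (a small sketch on a (2, 2, 2) tensor), passing a tuple of dims together with keepdim=True gives a result that broadcasts against t directly, without building the full-size mm and ss tensors:
import torch

t = torch.randn(2, 2, 2)
m = t.mean(dim=(1, 2), keepdim=True)  # shape (2, 1, 1): one mean per slice along dim 0
s = t.std(dim=(1, 2), keepdim=True)   # shape (2, 1, 1)
normalized = (t - m) / s              # broadcasts over the last two dims
print(normalized.shape)               # torch.Size([2, 2, 2])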
When I have a tensor m of shape [12, 10] and a vector s of scalars with shape [12], how can I multiply each row of m with the corresponding scalar in s?
You need to add a corresponding singleton dimension:
m * s[:, None]
s[:, None] has shape (12, 1). When multiplying a (12, 10) tensor by a (12, 1) tensor, PyTorch knows to broadcast s along the second, singleton dimension and performs the "element-wise" product correctly.
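For example (a small illustrative sketch with made-up values):
import torch

m = torch.arange(12.0 * 10).reshape(12, 10)  # shape (12, 10)
s = torch.arange(1.0, 13.0)                  # shape (12,)
out = m * s[:, None]                         # s[:, None] has shape (12, 1) and broadcasts
print(out.shape)                             # torch.Size([12, 10])
print(torch.equal(out[3], m[3] * s[3]))      # True: each row is scaled by its scalar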
You can broadcast a vector to a higher dimensional tensor like so:
def row_mult(input, vector):
    # add one trailing singleton dimension per extra dimension of the input
    extra_dims = (1,) * (input.dim() - 1)
    return input * vector.view(-1, *extra_dims)
A technique that is slightly hard to understand at first, but very powerful, is Einstein summation:
torch.einsum('i,ij->ij', s, m)
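For comparison, a quick sketch checking that this einsum matches the broadcasting approach from the other answer:
import torch

s = torch.rand(12)
m = torch.rand(12, 10)
out = torch.einsum('i,ij->ij', s, m)
print(torch.allclose(out, m * s[:, None]))  # True: same result as the broadcasting approach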
Shai's answer works if you know the number of dimensions in advance and can hardcode the correct number of None's. This can be extended to extra dimensions if required:
mask = (torch.rand(12) > 0.5).int()
data = (torch.rand(12, 2, 3, 4))
result = data * mask[:,None,None,None]
result.shape # torch.Size([12, 2, 3, 4])
mask[:,None,None,None].shape # torch.Size([12, 1, 1, 1])
If you are dealing with data of variable or unknown dimensions, then you may need to manually extend mask to the correct shape:
mask = (torch.rand(12) > 0.5).int()
while mask.dim() < data.dim(): mask.unsqueeze_(1)
result = data * mask
result.shape # torch.Size([12, 2, 3, 4])
mask.shape # torch.Size([12, 1, 1, 1])
This is a bit of an ugly solution, but it does work. There is probably a much more elegant way to correctly reshape the mask tensor inline for a variable number of dimensions.
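One possible inline variant (a sketch, reusing the view idea from the row_mult answer above) is to build the trailing singleton dimensions from data.dim() directly:
import torch

mask = (torch.rand(12) > 0.5).int()
data = torch.rand(12, 2, 3, 4)
result = data * mask.view(-1, *(1,) * (data.dim() - 1))  # mask reshaped to (12, 1, 1, 1)
print(result.shape)  # torch.Size([12, 2, 3, 4])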
The Problem:
I want to calculate the dot product of a very large set of data. I am able to do this in a nested for-loop, but this is way too slow.
Here is a small example:
import numpy as np
points = np.array([[0.5, 2, 3, 5.5, 8, 11], [1, 2, -1.5, 0.5, 4, 5]])
lines = np.array([[0, 2, 4, 6, 10, 10, 0, 0], [0, 0, 0, 0, 0, 4, 4, 0]])
x1 = lines[0][0:-1]
y1 = lines[1][0:-1]
L1 = np.asarray([x1, y1])
# calculate the relative length of the projection
# of each point onto each line
a = np.diff(lines)
b = points[:,:,None] - L1[:,None,:]
print(a.shape)
print(b.shape)
[rows, cols, pages] = np.shape(b)
Z = np.zeros((cols, pages))
for k in range(cols):
    for l in range(pages):
        Z[k][l] = a[0][l]*b[0][k][l] + a[1][l]*b[1][k][l]
N = np.linalg.norm(a, axis=0)**2
relativeProjectionLength = np.squeeze(np.asarray(Z/N))
In this example, the first dimension of both a and b holds the x- and y-coordinates that I need for the dot product.
The shape of a is (2, 7) and the shape of b is (2, 6, 7). Since the dot product reduces the first dimension, I would expect the result to have shape (6, 7). How can I calculate this without the slow loops?
What I have tried:
I think that numpy.dot with correct broadcasting could do the job, however I have trouble setting up the dimensions correctly.
a = a[:, None, :]
Z = np.dot(a,b)
This one gives me the following error:
shapes (2,1,7) and (2,6,7) not aligned: 7 (dim 2) != 6 (dim 1)
You can use np.einsum -
np.einsum('ij,ikj->kj',a,b)
Explanation:
Keep the last axes aligned for the two inputs.
Sum-reduce the first from those.
Let the rest stay, which is the second axis of b.
Usual rules on whether to use einsum or stick to a loopy-dot based method apply here.
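A quick way to check the einsum result against the loop from the question (a small sketch with the same data):
import numpy as np

points = np.array([[0.5, 2, 3, 5.5, 8, 11], [1, 2, -1.5, 0.5, 4, 5]])
lines = np.array([[0, 2, 4, 6, 10, 10, 0, 0], [0, 0, 0, 0, 0, 4, 4, 0]])
a = np.diff(lines)                            # (2, 7)
b = points[:, :, None] - lines[:, None, :-1]  # (2, 6, 7)

# loop-based reference from the question
rows, cols, pages = b.shape
Z_loop = np.zeros((cols, pages))
for k in range(cols):
    for l in range(pages):
        Z_loop[k, l] = a[0, l] * b[0, k, l] + a[1, l] * b[1, k, l]

Z_einsum = np.einsum('ij,ikj->kj', a, b)
print(Z_einsum.shape)                 # (6, 7)
print(np.allclose(Z_einsum, Z_loop))  # True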
numpy.dot does not reduce the first dimension. From the docs:
For N dimensions it is a sum product over the last axis of a and the second-to-last of b:
dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m])
That is exactly what the error is telling you: it is attempting to match axis 2 in the first vector to axis 1 in the second.
You can fix this using numpy.rollaxis or better yet numpy.moveaxis. Instead of a = a[:, None, :], do
a = np.moveaxis(a, 0, -1)
b = np.moveaxis(b, 0, -2)
Z = np.dot(a, b)
Better yet, you can construct your arrays to have the correct shape up front. For example, transpose lines and do a = np.diff(lines, axis=0).
I have a 4d theano tensor (with the shape (1, 700, 16, 95000) for example) and a 4d 'mask' tensor with the shape (1, 700, 16, 1024) such that every element in the mask is an index that I need from the original tensor. How can I use my mask to index my tensor? Things like sample[mask] or sample[:, :, :, mask] don't really seem to work.
I also tried using a binary mask but since the tensor is rather large I get a 'device out of memory' exception.
Other ideas on how to get my indices from the tensor would also be very appreciated.
Thanks
So, in the absence of an answer, I've decided to use the more computationally intensive solution, which is unfolding both my data and indices tensors, adding an offset to the indices to bring them to global positions, indexing the data, and reshaping it back to the original shape.
I'm adding here my test code, including a (commented-out) solution for matrices.
import numpy as np
import theano
import theano.tensor as T

def theano_convertion(els, inds, offsets):
    els = T.flatten(els)
    inds = T.flatten(inds) + offsets
    return T.reshape(els[inds], (2, 3, 16, 5))

if __name__ == '__main__':
    # command: np.transpose(t[range(2), indices])
    # t = np.random.randint(0, 10, (2, 20))
    # indices = np.random.randint(0, 10, (5, 2))
    t = np.random.randint(0, 10, (2, 3, 16, 20)).astype('int32')
    indices = np.random.randint(0, 10, (2, 3, 16, 5)).astype('int32')
    offsets = np.asarray(range(1, 2 * 3 * 16 + 1), dtype='int32')
    offsets = (offsets * 20) - 20
    offsets = np.repeat(offsets, 5)

    offsets_tens = T.ivector('offsets')
    inds_tens = T.itensor4('inds')
    t_tens = T.itensor4('t')

    func = theano.function(
        [t_tens, inds_tens, offsets_tens],
        [theano_convertion(t_tens, inds_tens, offsets_tens)]
    )

    shaped_elements = []
    flattened_elements = []
    [tmp] = func(t, indices, offsets)
    for i in range(2):
        for j in range(3):
            for k in range(16):
                shaped_elements.append(t[i, j, k, indices[i, j, k, :]])
                flattened_elements.append(tmp[i, j, k, :])
                print(shaped_elements[-1] == flattened_elements[-1])