TL;DR: How to scale part of a tensor by 2 (for row indices present in a TF tensor)
Details:
indices_of_scaling_ids: Stores list of row_ids
Tensor("Squeeze:0", dtype=int64, device=/device:GPU:0)
[1, 4, 5, 6, 12]
emb_inputs = tf.nn.embedding_lookup(embedding, self.all_rows)
#tensor with shape (batch_size=4, all_row_len, emb_size=128)
So emb_inputs is evaluated for every row in self.all_rows.
Question / challenge: I need to scale emb_inputs by 2.0 for every row id listed in indices_of_scaling_ids.
I have tried various slicing approaches, but I can't seem to find a clean solution. Can someone suggest one? Thanks.
N.B. I am a beginner at TensorFlow.
Try something like this:
SCALE = 2
emb_inputs = ...
indices_of_scaling_ids = ...
emb_shape = tf.shape(emb_inputs)
# Select indices in boolean array
r = tf.range(emb_shape[1])
mask = tf.reduce_any(tf.equal(r[:, tf.newaxis], indices_of_scaling_ids), axis=1)
# Tile the mask
mask = tf.tile(mask[tf.newaxis, :, tf.newaxis], (emb_shape[0], 1, emb_shape[2]))
# Choose scaled or not depending on indices
result = tf.where(mask, SCALE * emb_inputs, emb_inputs)
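For reference, here is a self-contained toy run of the approach above (a sketch assuming TF 1.x graph mode and made-up shapes; note that in your setup the int64 indices from Squeeze:0 may need a tf.cast so they match the int32 output of tf.range):
import tensorflow as tf

SCALE = 2.0
emb_inputs = tf.ones((4, 7, 128))                             # (batch_size, all_row_len, emb_size)
indices_of_scaling_ids = tf.constant([1, 4, 5, 6], tf.int32)  # toy row ids

emb_shape = tf.shape(emb_inputs)
r = tf.range(emb_shape[1])
mask = tf.reduce_any(tf.equal(r[:, tf.newaxis], indices_of_scaling_ids), axis=1)
mask = tf.tile(mask[tf.newaxis, :, tf.newaxis], (emb_shape[0], 1, emb_shape[2]))
result = tf.where(mask, SCALE * emb_inputs, emb_inputs)

with tf.Session() as sess:
    print(sess.run(result)[0, :, 0])  # [1. 2. 1. 1. 2. 2. 2.] -- rows 1, 4, 5, 6 doubled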
Arrays of object labels and distances to those objects are given. I want to apply kNN to find the predicted label, and I want to use np.bincount for that. However, I don't understand how to use it.
Here is an example:
labels = [[1,1,2,0,0,3,3,3,5,1,3],
[1,1,2,0,0,3,3,3,5,1,3]]
weights= [[0,0,0,0,0,0,0,0,1,0,0],
[0,0,0,0,0,0,0,0,1,0,0]]
Imagine that the nearest neighbors of 2 objects are given, with their labels and weights shown above. I want the output to be [5, 5], because only neighbours with that label have a nonzero weight. I am doing the following:
eps = 1e-5
lab_weight = np.array(list(zip(labels, weights)))
predict = np.apply_along_axis(lambda x: np.bincount(x[0], weights=x[1]).argmax(), 2, lab_weight)
I expect x to correspond to [[1,1,2,0,0,3,3,3,5,1,3], [0,0,0,0,0,0,0,0,1,0,0]], but it doesn't. Other axis values don't work either. How can I achieve this? I want to use NumPy functions and avoid Python loops.
The following solution gives me the desired result:
labels = [[1,1,2,0,0,3,3,3,5,1,3],
[1,1,2,0,0,3,3,3,5,1,3]]
weights= [[0,0,0,0,0,0,0,0,1,0,0],
[0,0,0,0,0,0,0,0,1,0,0]]
length = len(labels[0])
lab_weight = np.hstack((labels, weights))
predict = np.apply_along_axis(lambda x: np.bincount(x[:length], weights=x[length:]).argmax(), 1, lab_weight)
The problem with your code is that you attempt to apply your function to 2-D slices of your array, whereas apply_along_axis applies the given function to 1-D slices.
So your code raises an exception: ValueError: object of too small depth for desired array.
To apply your function to 2-D slices, use a list comprehension based on np.rollaxis and then create a NumPy array from it:
result = np.array([ np.bincount(x[0], weights=x[1]).argmax()
for x in np.rollaxis(lab_weight, 2) ])
The result, for your array, is:
array([1, 1, 2, 0, 0, 3, 3, 3, 5, 1, 3], dtype=int64)
To trace, for each iteration, the source array, intermediate results, and the final result, run:
i = 0
for x in np.rollaxis(lab_weight, 2):
    print(f' i: {i}\n{x}'); i += 1
    bc = np.bincount(x[0], weights=x[1])
    bcm = bc.argmax()
    print(bc, bcm)
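If you want to stay completely loop-free, an alternative sketch (not part of the answer above, just one possible approach on the same data) is to scatter-add the weights into a per-object table of label counts with np.add.at and take the argmax per row:
import numpy as np

labels = np.array([[1, 1, 2, 0, 0, 3, 3, 3, 5, 1, 3],
                   [1, 1, 2, 0, 0, 3, 3, 3, 5, 1, 3]])
weights = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]])

n_objects = labels.shape[0]
counts = np.zeros((n_objects, labels.max() + 1))
# accumulate each neighbour's weight into the bin of its label, per object
np.add.at(counts, (np.arange(n_objects)[:, None], labels), weights)
predict = counts.argmax(axis=1)
print(predict)  # [5 5]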
Let's say I want to create a torch.tensor object of size [2, 3] filled with random elements, and I intend to use this matrix in a network and optimize its values. However, I want to update only some of the values in the matrix.
I know that this can be done for a whole tensor by setting the requires_grad parameter to True or False. However, the following code
z = torch.rand([2,3], requires_grad=True)
z[-1][-1].requires_grad=False
does not work as expected
RuntimeError: you can only change requires_grad flags of leaf variables. If you want to use a computed variable in a subgraph that doesn't require differentiation use var_no_grad = var.detach().
How do I fix this RuntimeError? How can I initialize a torch tensor and then define which of its elements have requires_grad=True?
If I write code in a similar manner:
z = torch.rand([2,3], requires_grad=False)
z[-1][-1].requires_grad=True
There is no error, but requires_grad does not change either.
It does not really make much sense to have a single tensor that requires grad for only part of its entries.
Why not have two separate tensors: one that is updated (requires_grad=True) and another that is fixed (requires_grad=False)? You can then merge them for computational ease:
fixed = torch.rand([2, 3], requires_grad=False)
upd = torch.rand([2, 3], requires_grad=True)
# binary mask deciding how to combine the two: 1 -> take from fixed, 0 -> take from upd
mask = torch.tensor([[0., 1., 0.], [1., 0., 1.]], requires_grad=False)
# combine them using the fixed "mask":
z = mask * fixed + (1 - mask) * upd
You can obviously combine fixed and upd by methods other than a binary mask.
For example, if upd occupies the first two columns of z and fixed the rest, then:
fixed = torch.rand([2, 1], requires_grad=False)
upd = torch.rand([2, 2], requires_grad=True)
# combine them using concatenation
z = torch.cat((upd, fixed), dim=1)
Or, if you know the flat indices of each group:
fidx = torch.tensor([0, 2], dtype=torch.long)
uidx = torch.tensor([1, 3, 4, 5], dtype=torch.long)
fixed = torch.rand([2], requires_grad=False)
upd = torch.rand([4], requires_grad=True)
# fill a flat tensor by index, then reshape to the desired [2, 3] shape
z = torch.empty([6])
z[fidx] = fixed
z[uidx] = upd
z = z.view(2, 3)
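Whichever way you combine them, only upd ends up receiving gradients. A quick sanity check (a sketch using the concatenation variant above):
import torch

fixed = torch.rand(2, 1)                    # requires_grad=False by default
upd = torch.rand(2, 2, requires_grad=True)

z = torch.cat((upd, fixed), dim=1)          # (2, 3) combined tensor
loss = (z ** 2).sum()
loss.backward()

print(upd.grad)    # gradient w.r.t. the trainable entries
print(fixed.grad)  # None -- the fixed entries never get a gradient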
I need to select only the non-zero 3D portions of a 3D binary array (or, alternatively, the True values of a boolean array). Currently I am able to do so with a series of for loops that use np.any; this works, but it seems awkward and slow, so I am investigating a more direct way to accomplish the task.
I am rather new to NumPy, so the approaches I have tried include a) np.nonzero, which returns indices that I am at a loss to know what to do with for my purposes, b) boolean array indexing, and c) boolean masks. I can generally understand each of those approaches for simple 2D arrays, but I am struggling to understand the differences between them, and I cannot get them to return the right values for a 3D array.
Here is my current function that returns a 3D array with nonzero values:
def real_size(arr3):
    true_0 = []
    true_1 = []
    true_2 = []
    print(f'The input array shape is: {arr3.shape}')
    for zero_ in range(0, arr3.shape[0]):
        if arr3[zero_].any() == True:
            true_0.append(zero_)
    for one_ in range(0, arr3.shape[1]):
        if arr3[:, one_, :].any() == True:
            true_1.append(one_)
    for two_ in range(0, arr3.shape[2]):
        if arr3[:, :, two_].any() == True:
            true_2.append(two_)
    arr4 = arr3[min(true_0):max(true_0) + 1, min(true_1):max(true_1) + 1, min(true_2):max(true_2) + 1]
    print(f'The nonzero area is: {arr4.shape}')
    return arr4
# Then use it on a small test array:
test_array = np.zeros([2, 3, 4], dtype = int)
test_array[0:2, 0:2, 0:2] = 1
#The function call works and prints out as expected:
non_zero = real_size(test_array)
>> The input array shape is: (2, 3, 4)
>> The nonzero area is: (2, 2, 2)
# So, the array is correct, but likely not the best way to get there:
non_zero
>> array([[[1, 1],
[1, 1]],
[[1, 1],
[1, 1]]])
The code works, but I am using it on much larger and more complex arrays, and I don't think this is the right approach. Any thoughts on a more direct method would be greatly appreciated. I am also concerned about errors and the results if the input array contains, for example, two separate non-zero 3D regions.
To clarify the problem: starting from a larger original array, I need to return one or more 3D portions as one or more 3D arrays. The returned arrays should not include any exterior plane that is entirely zero (or False) in three-dimensional space. Just getting the indices of the nonzero values (or vice versa) doesn't by itself solve the problem.
Assuming you want to eliminate all rows, columns, etc. that contain only zeros, you could do the following:
nz = (test_array != 0)
non_zero = test_array[nz.any(axis=(1, 2))][:, nz.any(axis=(0, 2))][:, :, nz.any(axis=(0, 1))]
An alternative solution using np.nonzero:
i = [np.unique(_) for _ in np.nonzero(test_array)]
non_zero = test_array[i[0]][:, i[1]][:, :, i[2]]
This can also be generalized to arbitrary dimensions, but requires a bit more work (only showing the first approach here):
def real_size(arr):
    nz = (arr != 0)
    result = arr
    axes = np.arange(arr.ndim)
    for axis in range(arr.ndim):
        zeros = nz.any(axis=tuple(np.delete(axes, axis)))
        result = result[(slice(None),) * axis + (zeros,)]
    return result
non_zero = real_size(test_array)
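If what you actually want is the single bounding box around all non-zero values (the min/max logic of your original function), a vectorized sketch of that idea using np.nonzero could look like the following. The helper name is made up, and it assumes one enclosing box is acceptable, so two separate non-zero regions would come back inside one common box:
import numpy as np

def crop_nonzero(arr):
    # crop arr to the smallest box containing all of its non-zero values
    idx = np.nonzero(arr)
    slices = tuple(slice(i.min(), i.max() + 1) for i in idx)
    return arr[slices]

test_array = np.zeros([2, 3, 4], dtype=int)
test_array[0:2, 0:2, 0:2] = 1
print(crop_nonzero(test_array).shape)  # (2, 2, 2)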
My question is in two connected parts:
How do I calculate the max along a certain axis of a tensor? For example, if I have
x = tf.constant([[1,220,55],[4,3,-1]])
I want something like
x_max = tf.max(x, axis=1)
print sess.run(x_max)
output: [220,4]
I know there is a tf.argmax and a tf.maximum, but neither gives the maximum value along an axis of a single tensor. For now I have a workaround:
x_max = tf.slice(x, begin=[0,0], size=[-1,1])
for a in range(1, 3):
    x_max = tf.maximum(x_max, tf.slice(x, begin=[0, a], size=[-1, 1]))
But it looks less than optimal. Is there a better way to do this?
Given the indices of an argmax of a tensor, how do I index into another tensor using those indices? Using the example of x above, how do I do something like the following:
ind_max = tf.argmax(x, dimension=1) #output is [1,0]
y = tf.constant([[1,2,3], [6,5,4]])
y_ = y[:, ind_max] #y_ should be [2,6]
I know slicing, like the last line, does not exist in TensorFlow yet (#206).
My question is: what is the best workaround for my specific case (maybe using other methods like gather, select, etc.)?
Additional information: I know x and y are going to be two dimensional tensors only!
The tf.reduce_max() operator provides exactly this functionality. By default it computes the global maximum of the given tensor, but you can specify a list of reduction_indices, which has the same meaning as axis in NumPy. To complete your example:
x = tf.constant([[1, 220, 55], [4, 3, -1]])
x_max = tf.reduce_max(x, reduction_indices=[1])
print sess.run(x_max) # ==> "array([220, 4], dtype=int32)"
If you compute the argmax using tf.argmax(), you can obtain the values from a different tensor y by flattening y using tf.reshape(), converting the argmax indices into vector indices as follows, and using tf.gather() to extract the appropriate values:
ind_max = tf.argmax(x, dimension=1)
y = tf.constant([[1, 2, 3], [6, 5, 4]])
flat_y = tf.reshape(y, [-1]) # Reshape to a vector.
# N.B. Handles 2-D case only.
flat_ind_max = ind_max + tf.cast(tf.range(tf.shape(y)[0]) * tf.shape(y)[1], tf.int64)
y_ = tf.gather(flat_y, flat_ind_max)
print sess.run(y_) # ==> "array([2, 6], dtype=int32)"
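In later TensorFlow versions the same lookup can also be written with tf.gather_nd by pairing each row index with its argmax column (a sketch, assuming 2-D x and y and a TF version where tf.argmax accepts axis):
ind_max = tf.argmax(x, axis=1)                            # e.g. [1, 0], dtype int64
rows = tf.cast(tf.range(tf.shape(y)[0]), ind_max.dtype)   # [0, 1]
y_ = tf.gather_nd(y, tf.stack([rows, ind_max], axis=1))   # picks y[0, 1] and y[1, 0] -> [2, 6]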
As of TensorFlow 1.10.0-dev20180626, tf.reduce_max accepts axis and keepdims keyword arguments, offering functionality similar to numpy.max.
In [55]: x = tf.constant([[1,220,55],[4,3,-1]])
In [56]: tf.reduce_max(x, axis=1).eval()
Out[56]: array([220, 4], dtype=int32)
To have a resultant tensor of the same dimension as the input tensor, use keepdims=True
In [57]: tf.reduce_max(x, axis=1, keepdims=True).eval()
Out[57]:
array([[220],
       [  4]], dtype=int32)
If the axis argument is not specified explicitly, the maximum over the entire tensor is returned (i.e. all axes are reduced).
In [58]: tf.reduce_max(x).eval()
Out[58]: 220
I am trying to get the x and y coordinates of a given value in a numpy image array.
I can do it by running through the rows and columns manually with a for statement, but this seems rather slow and I am positive there is a better way to do this.
I was trying to modify a solution I found in this post: Finding the (x,y) indexes of specific (R,G,B) color values from images stored in NumPy ndarrays.
a = image
c = intensity_value
y_locs = np.where(np.all(a == c, axis=0))
x_locs = np.where(np.all(a == c, axis=1))
return np.int64(x_locs), np.int64(y_locs)
I use np.int64 to convert the values back to int64.
I was also looking at the numpy.where documentation.
I don't quite understand the problem. The axis parameter in all() should run over the colour channels (axis 2 or -1), not over the row and column axes as in your code. Then where() gives you the coordinates of the matching values in the image:
>>> # set up data
>>> image = np.zeros((5, 4, 3), dtype=int)
>>> image[2, 1, :] = [7, 6, 5]
>>> # find indices
>>> np.where(np.all(image == [7, 6, 5], axis=-1))
(array([2]), array([1]))
>>>
This is really just repeating the answer you linked to, but it is a bit too long for a comment. Maybe you could explain a bit more why you need to modify the previous answer? It doesn't seem like you need to.
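If coordinate pairs are more convenient than two separate index arrays, np.argwhere returns the same information row by row (a small sketch on the same example data):
>>> np.argwhere(np.all(image == [7, 6, 5], axis=-1))
array([[2, 1]])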