Multidimensional Broadcasting in Python / NumPy - or inverse of `numpy.squeeze()`

What would be the best way of broadcasting two arrays together when a simple call to np.broadcast_to() would fail?
Consider the following example:
import numpy as np
arr1 = np.arange(2 * 3 * 4 * 5 * 6).reshape((2, 3, 4, 5, 6))
arr2 = np.arange(3 * 5).reshape((3, 5))
arr1 + arr2
# ValueError: operands could not be broadcast together with shapes (2,3,4,5,6) (3,5)
arr2_ = np.broadcast_to(arr2, arr1.shape)
# ValueError: operands could not be broadcast together with remapped shapes
arr2_ = arr2.reshape((1, 3, 1, 5, 1))
arr1 + arr2_
# now this works because the singletons trigger automatic broadcasting
This only works if I manually pick a shape for which automatic broadcasting will succeed.
What would be the most efficient way of doing this automatically?
Is there an alternative to reshaping with a cleverly constructed broadcast-compatible shape?
Note the relation to np.squeeze(): that function performs the inverse operation by removing singletons. So what I need is some sort of inverse of np.squeeze().
The official documentation (as of NumPy 1.13.0) suggests that the inverse of np.squeeze() is np.expand_dims(), but that is not nearly as flexible as I'd need it to be; in fact np.expand_dims() is roughly equivalent to np.reshape(array, shape + (1,)) or array[:, None].
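For illustration, here is a minimal sketch of that rough equivalence (using a small throwaway array; only the shapes matter here):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

# expand_dims at the last position matches reshape(shape + (1,)) ...
assert np.expand_dims(a, -1).shape == (2, 3, 1)
assert np.reshape(a, a.shape + (1,)).shape == (2, 3, 1)

# ... while `[:, None]` inserts the singleton after the first axis instead
assert a[:, None].shape == (2, 1, 3)
```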
This issue is also related to the keepdims keyword accepted by e.g. sum:
import numpy as np
arr1 = np.arange(2 * 3 * 4 * 5 * 6).reshape((2, 3, 4, 5, 6))
# not using `keepdims`
arr2 = np.sum(arr1, (0, 2, 4))
arr2.shape
# : (3, 5)
arr1 + arr2
# ValueError: operands could not be broadcast together with shapes (2,3,4,5,6) (3,5)
# now using `keepdims`
arr2 = np.sum(arr1, (0, 2, 4), keepdims=True)
arr2.shape
# : (1, 3, 1, 5, 1)
arr1 + arr2
# now this works because it has the correct shape
EDIT: Obviously, in cases where the np.newaxis or keepdims mechanisms are an appropriate choice, there would be no need for an unsqueeze() function.
Yet there are use cases where neither is an option.
For example, consider the case of the weighted average as implemented in numpy.average() over an arbitrary number of dimensions specified by axis.
Right now, the weights parameter must have the same shape as the input.
However, there is no need to specify the weights along the non-reduced dimensions: they simply repeat, and NumPy's broadcasting mechanism would take care of them appropriately.
So if we would like to have such a functionality, we would need to code something like (where some consistency checks are just omitted for simplicity):
def weighted_average(arr, weights=None, axis=None):
    if weights is not None and weights.shape != arr.shape:
        weights = unsqueeze(weights, ...)
        weights = np.zeros_like(arr) + weights
    result = np.sum(arr * weights, axis=axis)
    result /= np.sum(weights, axis=axis)
    return result
or, equivalently:
def weighted_average(arr, weights=None, axis=None):
    if weights is not None and weights.shape != arr.shape:
        weights = unsqueeze(weights, ...)
        weights = np.zeros_like(arr) + weights
    return np.average(arr, axis=axis, weights=weights)
In either version, it is not possible to replace unsqueeze() with a weights[:, np.newaxis]-like statement, because we do not know beforehand where the new axes will be needed; nor can we use the keepdims feature of sum, because the code would already fail at arr * weights.
This case could be handled relatively nicely if np.expand_dims() supported an iterable of ints for its axis parameter, but as of NumPy 1.13.0 it does not.
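(Note for future readers: newer NumPy releases, 1.18 and later as far as I know, do accept a tuple of ints in np.expand_dims(), which covers exactly this case:)

```python
import numpy as np

arr2 = np.arange(3 * 5).reshape((3, 5))

# NumPy >= 1.18: `axis` may be a tuple of positions for the new singletons
assert np.expand_dims(arr2, (0, 2, 4)).shape == (1, 3, 1, 5, 1)
```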

My way of achieving this is to define the following unsqueezing() function, which handles the cases where this can be done automatically and issues a warning when the inputs may be ambiguous (i.e. when some elements of the source shape may match multiple elements of the target shape):
import warnings


def unsqueezing(
        source_shape,
        target_shape):
    """
    Generate a broadcasting-compatible shape.

    The resulting shape contains *singletons* (i.e. `1`) for non-matching dims.
    Assumes all elements of the source shape are contained in the target shape
    (except for singletons) in the correct order.

    Warning! The generated shape may not be unique if some of the elements
    from the source shape are present multiple times in the target shape.

    Args:
        source_shape (Sequence): The source shape.
        target_shape (Sequence): The target shape.

    Returns:
        shape (tuple): The broadcast-safe shape.

    Raises:
        ValueError: if elements of `source_shape` are not in `target_shape`.

    Examples:
        For non-repeating elements, `unsqueezing()` is always well-defined:

        >>> unsqueezing((2, 3), (2, 3, 4))
        (2, 3, 1)
        >>> unsqueezing((3, 4), (2, 3, 4))
        (1, 3, 4)
        >>> unsqueezing((3, 5), (2, 3, 4, 5, 6))
        (1, 3, 1, 5, 1)
        >>> unsqueezing((1, 3, 5, 1), (2, 3, 4, 5, 6))
        (1, 3, 1, 5, 1)

        If there is nothing to unsqueeze, the `source_shape` is returned:

        >>> unsqueezing((1, 3, 1, 5, 1), (2, 3, 4, 5, 6))
        (1, 3, 1, 5, 1)
        >>> unsqueezing((2, 3), (2, 3))
        (2, 3)

        If some elements in `source_shape` are repeating in `target_shape`,
        a user warning will be issued:

        >>> unsqueezing((2, 2), (2, 2, 2, 2, 2))
        (2, 2, 1, 1, 1)
        >>> unsqueezing((2, 2), (2, 3, 2, 2, 2))
        (2, 1, 2, 1, 1)

        If some elements of `source_shape` are not present in `target_shape`,
        an error is raised:

        >>> unsqueezing((2, 3), (2, 2, 2, 2, 2))
        Traceback (most recent call last):
            ...
        ValueError: Target shape must contain all source shape elements\
        (in correct order). (2, 3) -> (2, 2, 2, 2, 2)
        >>> unsqueezing((5, 3), (2, 3, 4, 5, 6))
        Traceback (most recent call last):
            ...
        ValueError: Target shape must contain all source shape elements\
        (in correct order). (5, 3) -> (2, 3, 4, 5, 6)
    """
    shape = []
    j = 0
    for i, dim in enumerate(target_shape):
        if j < len(source_shape):
            shape.append(dim if dim == source_shape[j] else 1)
            if i + 1 < len(target_shape) and dim == source_shape[j] \
                    and dim != 1 and dim in target_shape[i + 1:]:
                text = ('Multiple positions (e.g. {} and {})'
                        ' for source shape element {}.'.format(
                            i, target_shape[i + 1:].index(dim) + (i + 1), dim))
                warnings.warn(text)
            if dim == source_shape[j] or source_shape[j] == 1:
                j += 1
        else:
            shape.append(1)
    if j < len(source_shape):
        raise ValueError(
            'Target shape must contain all source shape elements'
            ' (in correct order). {} -> {}'.format(source_shape, target_shape))
    return tuple(shape)
This can be used to define unsqueeze() as a more flexible inverse of np.squeeze(), compared to np.expand_dims(), which can only add one singleton at a time:
def unsqueeze(
        arr,
        axis=None,
        shape=None,
        reverse=False):
    """
    Add singletons to the shape of an array to broadcast-match a given shape.

    In some sense, this function implements the inverse of `numpy.squeeze()`.

    Args:
        arr (np.ndarray): The input array.
        axis (int|Iterable|None): Axis or axes in which to operate.
            If None, a valid set of axes is generated from `shape` when this
            is defined and the shape can be matched by `unsqueezing()`.
            If int or Iterable, specifies how singletons are added.
            This depends on the value of `reverse`.
            If `shape` is not None, the `axis` and `shape` parameters must be
            consistent.
            Values must be in the range [-(ndim+1), ndim+1].
            At least one of `axis` and `shape` must be specified.
        shape (int|Iterable|None): The target shape.
            If None, no safety checks are performed.
            If int, this is interpreted as the number of dimensions of the
            output array.
            If Iterable, the result must be broadcastable to an array with
            the specified shape.
            If `axis` is not None, the `axis` and `shape` parameters must be
            consistent.
            At least one of `axis` and `shape` must be specified.
        reverse (bool): Interpret the `axis` parameter as its complementary.
            If True, the dims of the input array are placed at the positions
            indicated by `axis`, singletons are placed everywhere else, the
            length of `axis` must equal the number of dimensions of the input
            array, and the `shape` parameter cannot be None.
            If False, the singletons are added at the position(s) specified
            by `axis`.
            If `axis` is None, `reverse` has no effect.

    Returns:
        arr (np.ndarray): The reshaped array.

    Raises:
        ValueError: if the `arr` shape cannot be reshaped correctly.

    Examples:
        Let's define some input array `arr`:

        >>> arr = np.arange(2 * 3 * 4).reshape((2, 3, 4))
        >>> arr.shape
        (2, 3, 4)

        A call to `unsqueeze()` can be reversed by `np.squeeze()`:

        >>> arr_ = unsqueeze(arr, (0, 2, 4))
        >>> arr_.shape
        (1, 2, 1, 3, 1, 4)
        >>> arr = np.squeeze(arr_, (0, 2, 4))
        >>> arr.shape
        (2, 3, 4)

        The order of the axes does not matter:

        >>> arr_ = unsqueeze(arr, (0, 4, 2))
        >>> arr_.shape
        (1, 2, 1, 3, 1, 4)

        If `shape` is an int, `axis` must be consistent with it:

        >>> arr_ = unsqueeze(arr, (0, 2, 4), 6)
        >>> arr_.shape
        (1, 2, 1, 3, 1, 4)
        >>> arr_ = unsqueeze(arr, (0, 2, 4), 7)
        Traceback (most recent call last):
            ...
        ValueError: Incompatible `[0, 2, 4]` axis and `7` shape for array of\
        shape (2, 3, 4)

        It is possible to reverse the meaning of `axis` to add singletons
        everywhere except where specified (but this requires `shape` to be
        defined and the length of `axis` to match the array dims):

        >>> arr_ = unsqueeze(arr, (0, 2, 4), 10, True)
        >>> arr_.shape
        (2, 1, 3, 1, 4, 1, 1, 1, 1, 1)
        >>> arr_ = unsqueeze(arr, (0, 2, 4), reverse=True)
        Traceback (most recent call last):
            ...
        ValueError: When `reverse` is True, `shape` cannot be None.
        >>> arr_ = unsqueeze(arr, (0, 2), 10, True)
        Traceback (most recent call last):
            ...
        ValueError: When `reverse` is True, the length of axis (2) must match\
        the num of dims of array (3).

        Axes values must be valid:

        >>> arr_ = unsqueeze(arr, 0)
        >>> arr_.shape
        (1, 2, 3, 4)
        >>> arr_ = unsqueeze(arr, 3)
        >>> arr_.shape
        (2, 3, 4, 1)
        >>> arr_ = unsqueeze(arr, -1)
        >>> arr_.shape
        (2, 3, 4, 1)
        >>> arr_ = unsqueeze(arr, -4)
        >>> arr_.shape
        (1, 2, 3, 4)
        >>> arr_ = unsqueeze(arr, 10)
        Traceback (most recent call last):
            ...
        ValueError: Axis (10,) out of range.

        If `shape` is specified, `axis` can be omitted (USE WITH CARE!) or
        its value is used for additional safety checks:

        >>> arr_ = unsqueeze(arr, shape=(2, 3, 4, 5, 6))
        >>> arr_.shape
        (2, 3, 4, 1, 1)
        >>> arr_ = unsqueeze(
        ...     arr, (3, 6, 8), (2, 5, 3, 2, 7, 2, 3, 2, 4, 5, 6), True)
        >>> arr_.shape
        (1, 1, 1, 2, 1, 1, 3, 1, 4, 1, 1)
        >>> arr_ = unsqueeze(
        ...     arr, (3, 7, 8), (2, 5, 3, 2, 7, 2, 3, 2, 4, 5, 6), True)
        Traceback (most recent call last):
            ...
        ValueError: New shape [1, 1, 1, 2, 1, 1, 1, 3, 4, 1, 1] cannot be\
        broadcasted to shape (2, 5, 3, 2, 7, 2, 3, 2, 4, 5, 6)
        >>> arr = unsqueeze(arr, shape=(2, 5, 3, 7, 2, 4, 5, 6))
        >>> arr.shape
        (2, 1, 3, 1, 1, 4, 1, 1)
        >>> arr = np.squeeze(arr)
        >>> arr.shape
        (2, 3, 4)
        >>> arr = unsqueeze(arr, shape=(5, 3, 7, 2, 4, 5, 6))
        Traceback (most recent call last):
            ...
        ValueError: Target shape must contain all source shape elements\
        (in correct order). (2, 3, 4) -> (5, 3, 7, 2, 4, 5, 6)

        The behavior is consistent with other NumPy functions and the
        `keepdims` mechanism:

        >>> axis = (0, 2, 4)
        >>> arr1 = np.arange(2 * 3 * 4 * 5 * 6).reshape((2, 3, 4, 5, 6))
        >>> arr2 = np.sum(arr1, axis, keepdims=True)
        >>> arr2.shape
        (1, 3, 1, 5, 1)
        >>> arr3 = np.sum(arr1, axis)
        >>> arr3.shape
        (3, 5)
        >>> arr3 = unsqueeze(arr3, axis)
        >>> arr3.shape
        (1, 3, 1, 5, 1)
        >>> np.all(arr2 == arr3)
        True
    """
    # calculate `new_shape`
    if axis is None and shape is None:
        raise ValueError(
            'At least one of `axis` and `shape` parameters must be specified.')
    elif axis is None and shape is not None:
        new_shape = unsqueezing(arr.shape, shape)
    elif axis is not None:
        if isinstance(axis, int):
            axis = (axis,)
        # calculate the dim of the result
        if shape is not None:
            if isinstance(shape, int):
                ndim = shape
            else:  # shape is a sequence
                ndim = len(shape)
        elif not reverse:
            ndim = len(axis) + arr.ndim
        else:
            raise ValueError('When `reverse` is True, `shape` cannot be None.')
        # check that axis is properly constructed
        if any([ax < -ndim - 1 or ax > ndim + 1 for ax in axis]):
            raise ValueError('Axis {} out of range.'.format(axis))
        # normalize axis using `ndim`
        axis = sorted([ax % ndim for ax in axis])
        # manage reverse mode
        if reverse:
            if len(axis) == arr.ndim:
                axis = [i for i in range(ndim) if i not in axis]
            else:
                raise ValueError(
                    'When `reverse` is True, the length of axis ({})'
                    ' must match the num of dims of array ({}).'.format(
                        len(axis), arr.ndim))
        elif len(axis) + arr.ndim != ndim:
            raise ValueError(
                'Incompatible `{}` axis and `{}` shape'
                ' for array of shape {}'.format(axis, shape, arr.shape))
        # generate the new shape from axis, ndim and shape
        new_shape = []
        i, j = 0, 0
        for l in range(ndim):
            if i < len(axis) and l == axis[i] or j >= arr.ndim:
                new_shape.append(1)
                i += 1
            else:
                new_shape.append(arr.shape[j])
                j += 1
    # check that `new_shape` is consistent with `shape`
    if shape is not None:
        if isinstance(shape, int):
            if len(new_shape) != ndim:
                raise ValueError(
                    'Length of new shape {} does not match '
                    'expected length ({}).'.format(len(new_shape), ndim))
        else:
            if not all([new_dim == 1 or new_dim == dim
                        for new_dim, dim in zip(new_shape, shape)]):
                raise ValueError(
                    'New shape {} cannot be broadcasted to shape {}'.format(
                        new_shape, shape))
    return arr.reshape(new_shape)
Using these, one can write:
import numpy as np
arr1 = np.arange(2 * 3 * 4 * 5 * 6).reshape((2, 3, 4, 5, 6))
arr2 = np.arange(3 * 5).reshape((3, 5))
arr3 = unsqueeze(arr2, (0, 2, 4))
arr1 + arr3
# now this works because it has the correct shape
arr3 = unsqueeze(arr2, shape=arr1.shape)
arr1 + arr3
# this also works because the shape can be expanded unambiguously
So dynamic broadcasting can now happen, and it is consistent with the behavior of keepdims:
import numpy as np
axis = (0, 2, 4)
arr1 = np.arange(2 * 3 * 4 * 5 * 6).reshape((2, 3, 4, 5, 6))
arr2 = np.sum(arr1, axis, keepdims=True)
arr3 = np.sum(arr1, axis)
arr3 = unsqueeze(arr3, axis)
np.all(arr2 == arr3)
# : True
Effectively, this extends np.expand_dims() to handle more complex scenarios.
Improvements over this code are obviously more than welcome.

Related

How to reshape matrices using index instead of shape inputs?

Given an array of shape (8, 3, 4, 4), reshape it into an arbitrary new shape, e.g. (8, 4, 4, 3), by supplying the new axis positions relative to the old ones, (0, 2, 3, 1).
Bonus: compute numpy.dot between a non-last axis of said array and a 1-D second array, i.e. numpy.dot(<array with shape (8, 3, 4, 4)>, [1, 2, 3]) # raises a shape mismatch as it is
NumPy's transpose "reverses or permutes" the axes:
ni = (0, 2, 3, 1)
arr = arr.transpose(ni)
Old solution:
ni = (0, 2, 3, 1)
s = arr.shape
arr = arr.reshape(s[ni[0]], s[ni[1]], s[ni[2]], s[ni[3]])
Maybe this is what you are looking for:
arr = np.array([[[1, 2], [3, 4], [5, 6]]])
s = arr.shape
new_indexes = (1, 0, 2) # permutation
new_arr = arr.reshape(*[s[index] for index in new_indexes])
print(arr.shape) # (1, 3, 2)
print(new_arr.shape) # (3, 1, 2)
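The bonus part of the question (contracting a non-last axis with a 1-D array) is not covered above; one possible sketch, assuming the shapes from the question, uses np.tensordot or np.einsum instead of plain np.dot:

```python
import numpy as np

arr = np.arange(8 * 3 * 4 * 4).reshape(8, 3, 4, 4)
w = np.array([1, 2, 3])

# contract axis 1 of `arr` with the only axis of `w`
out1 = np.tensordot(arr, w, axes=([1], [0]))
out2 = np.einsum('ijkl,j->ikl', arr, w)

assert out1.shape == (8, 4, 4)
assert np.array_equal(out1, out2)
```

Note that plain reshape cannot achieve this: the contracted axis has to be matched explicitly, which is what the axes specification above does.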

Appending multi-dimensional arrays of different shapes in Python

Arrays C1 and C2 have shapes (1, 2, 2) and (1, 1, 2) respectively. I want to append these arrays into a new array C3. But I am getting an error. The desired output is attached.
import numpy as np
arC1 = []
arC2 = []
C1=np.array([[[0, 1],[0,2]]])
arC1.append(C1)
C2=np.array([[[1,1]]])
C2.shape
arC2.append(C2)
C3=np.append(C1,C2,axis=0)
The error is
<module>
C3=np.append(C1,C2,axis=0)
File "<__array_function__ internals>", line 5, in append
File "C:\Users\USER\anaconda3\lib\site-packages\numpy\lib\function_base.py", line 4745, in append
return concatenate((arr, values), axis=axis)
File "<__array_function__ internals>", line 5, in concatenate
ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 2 and the array at index 1 has size 1
The desired output is
C3[0]=np.array([[[0, 1], [0, 2]]])
C3[1]=np.array([[2,3]])
C1 = np.array([[[0, 1],[0,2]]])
print(C1.shape) # (1, 2, 2)
C2 = np.array([[[1,1]]])
print(C2.shape) # (1, 1, 2)
If you run
C3 = np.append(C1,C2,axis=1)
print(C3.shape) # (1, 3, 2)
It means like this: (1, 2, 2) + (1, 1, 2) = (1, 3, 2)
If you run
C3 = np.append(C1,C2,axis=0)
it would have to mean (1, 2, 2) + (1, 1, 2) = (2, ??, 2).
Since NumPy arrays must be rectangular, the arrays cannot be appended, because the remaining dimensions differ.
Simply put, when you append two arrays along a given axis, the sizes of all dimensions must match exactly, except for the dimension corresponding to that axis.
example 1
C1 = np.array([[[[[0, 1]]],[[[0, 2]]], [[[0, 2]]]]])
C2 = np.array([[[[[1]]],[[[2]]], [[[3]]]]])
C3 = np.append(C1, C2, axis=4)
and if the dimensions are as follows,
(1, 3, 1, 1, 2) # C1
(1, 3, 1, 1, 1) # C2
-->
(1, 3, 1, 1, 3) # C3
Only the axis 4 dimension is different, so you can do np.append(C1, C2, axis=4)
example 2
C1 = np.array([[[[[0, 1]]],[[[0, 2]]], [[[0, 2]]]]])
C2 = np.array([[[[[1, 1]]],[[[2, 2]]], [[[3, 3]]]], [[[[1, 1]]],[[[2, 2]]], [[[3, 3]]]]])
C3 = np.append(C1, C2, axis=0)
and if the dimensions are as follows,
(1, 3, 1, 1, 2) # C1
(2, 3, 1, 1, 2) # C2
-->
(3, 3, 1, 1, 2) # C3
Only the axis 0 dimension is different, so you can do np.append(C1, C2, axis=0)
example 3
C1 = np.array([[[[[0, 1]]],[[[0, 2]]], [[[0, 2]]]]])
C2 = np.array([[[[[1]]],[[[2]]], [[[3]]]], [[[[1]]],[[[2]]], [[[3]]]]])
and if the dimensions are as follows,
(1, 3, 1, 1, 2) # C1
(2, 3, 1, 1, 1) # C2
-->
ERROR
Besides the axis you specify, at least one other dimension differs, so it is impossible to append these arrays along any axis.
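If you really need to keep the differently-shaped pieces together (as in the desired output of the question), a plain Python list is the usual workaround, since a regular NumPy array must be rectangular; a minimal sketch:

```python
import numpy as np

C1 = np.array([[[0, 1], [0, 2]]])  # shape (1, 2, 2)
C2 = np.array([[[1, 1]]])          # shape (1, 1, 2)

# a list can hold arrays of different shapes side by side
C3 = [C1[0], C2[0]]
assert C3[0].shape == (2, 2)
assert C3[1].shape == (1, 2)
```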

How to add a new dimension to a PyTorch tensor?

In NumPy, I would do
a = np.zeros((4, 5, 6))
a = a[:, :, np.newaxis, :]
assert a.shape == (4, 5, 1, 6)
How to do the same in PyTorch?
a = torch.zeros(4, 5, 6)
a = a[:, :, None, :]
assert a.shape == (4, 5, 1, 6)
You can add a new axis with torch.unsqueeze() (first argument being the index of the new axis):
>>> a = torch.zeros(4, 5, 6)
>>> a = a.unsqueeze(2)
>>> a.shape
torch.Size([4, 5, 1, 6])
Or using the in-place version: torch.unsqueeze_():
>>> a = torch.zeros(4, 5, 6)
>>> a.unsqueeze_(2)
>>> a.shape
torch.Size([4, 5, 1, 6])
x = torch.tensor([1, 2, 3, 4])
y = torch.unsqueeze(x, 0)
y will be -> tensor([[ 1, 2, 3, 4]])
EDIT: see more details here: https://pytorch.org/docs/stable/generated/torch.unsqueeze.html
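Since torch.unsqueeze() only adds one axis per call, several new axes can be obtained by chaining calls or by indexing with None at each desired position; a small sketch:

```python
import torch

a = torch.zeros(2, 3)

# chaining unsqueeze() calls ...
assert a.unsqueeze(0).unsqueeze(2).shape == torch.Size([1, 2, 1, 3])

# ... is equivalent to indexing with None at each new position
assert a[None, :, None, :].shape == torch.Size([1, 2, 1, 3])
```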

numpy.dstack for 3D arrays?

np.dstack works as expected for 2D arrays, but for 3D arrays it does not stack along the last dimension (it always stacks along the third axis).
What is the proper way to stack along the last dimension for ND arrays?
Example:
import numpy as np
#2D
a = np.zeros((2,2,1))
a.shape
(2, 2, 1)
np.dstack([a] * 3).shape
(2, 2, 3)
#3D
b = np.zeros((8,2,2,1))
b.shape
(8, 2, 2, 1)
np.dstack([b] * 3).shape
(8, 2, 6, 1)
If you want to stack an array against itself, like in your example, you can use np.repeat
b = np.zeros((8,2,2,1))
n_stacks = 3
np.repeat(b, n_stacks, axis=b.ndim-1).shape
(8, 2, 2, 3)
If you want to stack two different arrays along their last dimension you can use np.concatenate
b = np.zeros((8,2,2,1))
c = np.ones((8,2,2,1))
np.concatenate((b,c),axis=b.ndim-1).shape
(8, 2, 2, 2)
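More generally, axis=-1 addresses the last dimension whatever the number of dimensions, so the same calls can be written without referring to b.ndim; a quick sketch, also showing np.stack, which creates a new trailing axis instead of joining an existing one:

```python
import numpy as np

b = np.zeros((8, 2, 2, 1))

# join along the existing last axis, regardless of ndim
assert np.concatenate([b] * 3, axis=-1).shape == (8, 2, 2, 3)

# np.stack instead creates a brand-new trailing axis
assert np.stack([b] * 3, axis=-1).shape == (8, 2, 2, 1, 3)
```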

How to gather a tensor with unknown first (batch) dimension?

I have a tensor of shape (?, 3, 2, 5). I want to supply pairs of indices to select from the first and second dimensions of that tensor, that have shape (3, 2).
If I supply 4 such pairs, I would expect the resulting shape to be (?, 4, 5). I thought this is what batch_gather is for: to "broadcast" gathering indices over the first (batch) dimension. But this is not what it is doing:
import tensorflow as tf
data = tf.placeholder(tf.float32, (None, 3, 2, 5))
indices = tf.constant([
[2, 1],
[2, 0],
[1, 1],
[0, 1]
], tf.int32)
tf.batch_gather(data, indices)
Which results in <tf.Tensor 'Reshape_3:0' shape=(4, 2, 2, 5) dtype=float32> instead of the shape that I was expecting.
How can I do what I want without explicitly indexing the batches (which have an unknown size)?
I wanted to avoid transpose and Python loops, and I think this works. This was the setup:
import numpy as np
import tensorflow as tf
shape = None, 3, 2, 5
data = tf.placeholder(tf.int32, shape)
idxs_list = [
[2, 1],
[2, 0],
[1, 1],
[0, 1]
]
idxs = tf.constant(idxs_list, tf.int32)
This allows us to gather the results:
batch_size, num_idxs, num_channels = tf.shape(data)[0], tf.shape(idxs)[0], shape[-1]
batch_idxs = tf.math.floordiv(tf.range(0, batch_size * num_idxs), num_idxs)[:, None]
nd_idxs = tf.concat([batch_idxs, tf.tile(idxs, (batch_size, 1))], axis=1)
gathered = tf.reshape(tf.gather_nd(data, nd_idxs), (batch_size, num_idxs, num_channels))
When we run with a batch size of 4, we get a result with shape (4, 4, 5), which is (batch_size, num_idxs, num_channels).
vals_shape = 4, *shape[1:]
vals = np.arange(int(np.prod(vals_shape))).reshape(vals_shape)
with tf.Session() as sess:
result = gathered.eval(feed_dict={data: vals})
Which ties out with numpy indexing:
x, y = zip(*idxs_list)
assert np.array_equal(result, vals[:, x, y])
Essentially, gather_nd wants batch indices in the first dimension, and those have to be repeated once for each index pair (i.e., [0, 0, 0, 0, 1, 1, 1, 1, 2, ...] if there are 4 index pairs).
Since there doesn't seem to be a tf.repeat, I used range and floordiv, and then concatenated the batch indices with the desired (x, y) indices (which are themselves tiled batch_size times).
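The batch-index construction described above can be sketched in plain NumPy (with hypothetical small sizes, just to show the repeating pattern):

```python
import numpy as np

batch_size, num_idxs = 3, 4

# floordiv over a flat range yields each batch index repeated num_idxs times
batch_idxs = np.arange(batch_size * num_idxs) // num_idxs
assert batch_idxs.tolist() == [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2]
```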
When using tf.batch_gather, the leading dimensions of the tensor's shape must match the leading dimensions of the indices tensor's shape.
import tensorflow as tf
data = tf.placeholder(tf.float32, (2, 3, 2, 5))
print(data.shape)  # (2, 3, 2, 5)
# shape of indices, [2, 3]
indices = tf.constant([
[1, 1, 1],
[0, 0, 1]
])
print(tf.batch_gather(data, indices).shape) # (2, 3, 2, 5)
# if shape of indice was (2, 3, 1) the output would be 2, 3, 1, 5
What you want instead is tf.gather_nd, as follows:
data_transpose = tf.transpose(data, perm=[2, 1, 0, 3])
t_transpose = tf.gather_nd(data_transpose, indices)
t = tf.transpose(t_transpose, perm=[1, 0, 2])
print(t.shape) # (?, 4, 5)
