I would like to select part of the following tensor.
A = tf.constant([[[1,1],[2,2],[3,3]], [[4,4],[5,5],[6,6]]])
The output of A will be
[[[1 1]
  [2 2]
  [3 3]]

 [[4 4]
  [5 5]
  [6 6]]]
The indices I want to select from A are [1, 0], meaning [2 2] from the first block and [4 4] from the second block, so my expected result is
[2 2]
[4 4]
How can I do this with the embedding_lookup function? I have already tried
B = tf.nn.embedding_lookup(A, [1, 0])
but the result is not what I expect:
[[[4 4]
  [5 5]
  [6 6]]

 [[1 1]
  [2 2]
  [3 3]]]
Can anyone help me and explain how to do it?
Try the following:
import numpy as np
import tensorflow as tf

A = tf.constant([[[1,1],[2,2],[3,3]], [[4,4],[5,5],[6,6]]])
B = [1, 0]
inds = [(a, b) for a, b in zip(np.arange(len(B)), B)]  # [(0, 1), (1, 0)]
C = tf.gather_nd(params=A, indices=inds)
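Since the index pairs are known here, you can also write them out directly (a minimal one-line sketch of the same tf.gather_nd call):
C = tf.gather_nd(A, [[0, 1], [1, 0]])  # -> [[2 2], [4 4]]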
Broadcasting is only possible (as far as I know) with arrays whose shapes match from the end (shape [4,3,2] is broadcastable with shapes [2], [3,2], [4,3,2]). But why?
Consider the following example:
np.zeros([4,3,2])
[[[0 0]
  [0 0]
  [0 0]]

 [[0 0]
  [0 0]
  [0 0]]

 [[0 0]
  [0 0]
  [0 0]]

 [[0 0]
  [0 0]
  [0 0]]]
Why isn't broadcasting with [1,2,3] or [1,2,3,4] possible?
Adding with [1,2,3] (shape: [3], target shape: [4,3,2]) expected result:
[[[1 1]
  [2 2]
  [3 3]]

 [[1 1]
  [2 2]
  [3 3]]

 [[1 1]
  [2 2]
  [3 3]]

 [[1 1]
  [2 2]
  [3 3]]]
Adding with [1,2,3,4] (shape: [4], target shape: [4,3,2]) expected result:
[[[1 1]
  [1 1]
  [1 1]]

 [[2 2]
  [2 2]
  [2 2]]

 [[3 3]
  [3 3]
  [3 3]]

 [[4 4]
  [4 4]
  [4 4]]]
Or, if there are concerns about multi-dimensional broadcasting this way, adding with:
[[ 1 2 3]
[ 4 5 6]
[ 7 8 9]
[10 11 12]]
(shape: [4,3], target shape: [4,3,2]) expected result:
[[[ 1  1]
  [ 2  2]
  [ 3  3]]

 [[ 4  4]
  [ 5  5]
  [ 6  6]]

 [[ 7  7]
  [ 8  8]
  [ 9  9]]

 [[10 10]
  [11 11]
  [12 12]]]
So basically, I can't see a reason why it couldn't find the matching dimension and perform the operation accordingly. If multiple dimensions of the target array match, it could just select the last one automatically, or offer an option to specify which dimension the operation should be performed along.
Any ideas/suggestions?
The broadcasting rules are simple and unambiguous:
add leading size-1 dimensions as needed to match the total number of dimensions
adjust all size-1 dimensions as needed to match
With (4,3,2):
(2,)   => (1,1,2) => (4,3,2)
(3,2)  => (1,3,2) => (4,3,2)
(3,)   => (1,1,3) => (4,3,3) ERROR
(4,)   => (1,1,4) => (4,3,4) ERROR
(4,3)  => (1,4,3) => (4,4,3) ERROR
With reshape or np.newaxis we can add explicit new dimensions in the right place:
(3,1) => (1,3,1) => (4,3,2)
(4,1,1) => (4,3,2)
(4,3,1) => (4,3,2)
Why doesn't it do this last step automatically? Potential ambiguity. Without those rules, especially the 'add only leading dimensions' one, it would be possible to add the extra dimension in several different places.
e.g.
(2,3,3) + (3,) => is that (1,1,3) or (1,3,1)?
(2,3,3,3) + (3,3)
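To illustrate the reshape/np.newaxis fix described above, here is a small NumPy sketch (my own example, following the shapes in the question) showing how an explicit size-1 axis makes the intended broadcasts against a (4,3,2) array work:

import numpy as np

z = np.zeros([4, 3, 2])

v3 = np.array([1, 2, 3])
print((z + v3[:, np.newaxis]).shape)              # (3,1) -> (1,3,1) -> (4,3,2)

v4 = np.array([1, 2, 3, 4])
print((z + v4[:, np.newaxis, np.newaxis]).shape)  # (4,1,1) -> (4,3,2)

m = np.arange(1, 13).reshape(4, 3)
print((z + m[..., np.newaxis]).shape)             # (4,3,1) -> (4,3,2)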
From tensor a, whose shape is [2,3,2]:
a = [[[1 1]
      [2 2]
      [3 3]]

     [[4 4]
      [5 5]
      [6 6]]]
I want to select rows with indices = [[0],[0,2]]
The expected output is:
b = [[[1 1]]

     [[4 4]
      [6 6]]]
I have tried tf.gather_nd, but it cannot select rows when the index lists have different lengths.
Does that mean I cannot have a list of lists with different sizes as a tensor? If there is no way to get the expected result as tensor b, is there any way to get a result like c below?
c = [[[1 1]
      [0 0]
      [0 0]]

     [[4 4]
      [0 0]
      [6 6]]]
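One possible way to build the zero-padded result c (a sketch, assuming the ragged indices are first written out as explicit (batch, row) pairs) is to combine tf.gather_nd with tf.scatter_nd:

import tensorflow as tf

a = tf.constant([[[1, 1], [2, 2], [3, 3]],
                 [[4, 4], [5, 5], [6, 6]]])

# indices = [[0],[0,2]] spelled out as (batch, row) pairs
keep = tf.constant([[0, 0], [1, 0], [1, 2]])

rows = tf.gather_nd(a, keep)                # the selected rows
c = tf.scatter_nd(keep, rows, tf.shape(a))  # scatter them back into zeros

with tf.Session() as sess:
    print(sess.run(c))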
In a single line, how can I get the product of the elements in each row of an array?
I need this to work for cases with multiple columns.
2 columns example:
X = [[1 4]
[2 3]
[0 2]
[1 5]
[3 1]
[3 6]]
sol = [4 6 0 5 3 18]
4 columns example:
X = [[1 4 2 3]
[2 3 1 5]
[0 2 3 4]
[1 5 2 2]
[3 1 1 6]
[3 6 3 1]]
sol = [24 30 0 20 18 54]
This is a row-wise multiplication. You can perform this with:
X.prod(axis=1)
for example:
>>> X
array([[1, 4],
       [2, 3],
       [0, 2],
       [1, 5],
       [3, 1],
       [3, 6]])
>>> X.prod(axis=1)
array([ 4,  6,  0,  5,  3, 18])
You can also use numpy.multiply.reduce:
np.multiply.reduce(X, axis=1)
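For the two-column example above, this gives the same result:
>>> np.multiply.reduce(X, axis=1)
array([ 4,  6,  0,  5,  3, 18])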
Suppose I have the array:
[[2,1,5,2],
[1,4,2,1],
[4,5,5,7],
[1,5,9,3]]
I am trying to transform the array to shape (16, 3), where the first two elements of each row are the index numbers and the last is the value, e.g.:
[[0, 0, 2], [1, 0, 1], [2, 0, 5], [3, 0, 2], [0, 1, 1], ....]
Is this possible with a numpy function or similar, or do I have to do it with my own function?
Working example code:
import numpy as np

src = np.array([[2,1,5,2],
                [1,4,2,1],
                [4,5,5,7],
                [1,5,9,3]])

dst = np.array([])
for x in range(src.shape[0]):
    for y in range(src.shape[1]):
        dst = np.append(dst, [[y, x, src[x][y]]])

print(dst.reshape(16,3))
I don't know if there is a function in numpy for that, but you can use list comprehension to easily build that array:
import numpy as np

src = np.array([[2,1,5,2],
                [1,4,2,1],
                [4,5,5,7],
                [1,5,9,3]])

dst = np.array([[y, x, src[x][y]]
                for x in range(src.shape[0])
                for y in range(src.shape[1])])

print(dst.reshape(16,3))
Hope this can help.
Update: There is a numpy function for that:
You can use numpy.ndenumerate:
dst = np.array([[*reversed(x), y] for x, y in np.ndenumerate(src)])
print(dst)
#[[0 0 2]
# [1 0 1]
# [2 0 5]
# [3 0 2]
# [0 1 1]
# [1 1 4]
# [2 1 2]
# [3 1 1]
# [0 2 4]
# [1 2 5]
# [2 2 5]
# [3 2 7]
# [0 3 1]
# [1 3 5]
# [2 3 9]
# [3 3 3]]
ndenumerate returns an iterator yielding pairs of array coordinates and values. You first need to reverse the coordinates to get your desired output. Next, unpack the reversed coordinates into a list¹ together with the value, and use a list comprehension to consume the iterator.
Original Answer
You can try:
dst = np.column_stack(zip(*[*reversed(np.indices(src.shape)), src])).T
print(dst)
#[[0 0 2]
# [1 0 1]
# [2 0 5]
# [3 0 2]
# [0 1 1]
# [1 1 4]
# [2 1 2]
# [3 1 1]
# [0 2 4]
# [1 2 5]
# [2 2 5]
# [3 2 7]
# [0 3 1]
# [1 3 5]
# [2 3 9]
# [3 3 3]]
Explanation
First, use numpy.indices to get an array representing the indices of a grid with the shape of src.
print(np.indices(src.shape))
#[[[0 0 0 0]
# [1 1 1 1]
# [2 2 2 2]
# [3 3 3 3]]
#
# [[0 1 2 3]
# [0 1 2 3]
# [0 1 2 3]
# [0 1 2 3]]]
We can reverse these (since that's the order you want in your final output), and unpack into a list¹ that also contains src.
Then zip all of the elements of this list to get the (col, row, val) triples. We can stack these together using numpy.column_stack.
list(zip(*[*reversed(np.indices(src.shape)), src]))
#[(array([0, 1, 2, 3]), array([0, 0, 0, 0]), array([2, 1, 5, 2])),
# (array([0, 1, 2, 3]), array([1, 1, 1, 1]), array([1, 4, 2, 1])),
# (array([0, 1, 2, 3]), array([2, 2, 2, 2]), array([4, 5, 5, 7])),
# (array([0, 1, 2, 3]), array([3, 3, 3, 3]), array([1, 5, 9, 3]))]
Finally transpose (numpy.ndarray.T) to get the final output.
Notes:
¹ Unpacking into a list is only available in Python 3.5+.
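For completeness, the same (col, row, value) layout can also be built without any Python-level loop, by raveling the index grids from numpy.indices alongside src (a small sketch using the same src as above):
rows, cols = np.indices(src.shape)
dst = np.column_stack([cols.ravel(), rows.ravel(), src.ravel()])
print(dst)  # same 16x3 output as above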
Suppose that I have a 1*3 vector [[1,3,5]] (or a list like [1,3,5] if you wish). How do I generate a 9*2 matrix [[1,1],[1,3],[1,5],[3,1],[3,3],[3,5],[5,1],[5,3],[5,5]]?
The elements of the new matrix are the pairwise combinations of the elements of the original matrix.
Also, the original matrix could contain zeros, like this: [[0,1],[0,3],[0,5]].
The implementation should generalise to vectors of any dimensionality.
Many thanks!
You can use tf.meshgrid() and tf.transpose() to generate two matrices. Then reshape and concat them.
import tensorflow as tf
a = tf.constant([[1,3,5]])
A,B=tf.meshgrid(a,tf.transpose(a))
result = tf.concat([tf.reshape(B,(-1,1)),tf.reshape(A,(-1,1))],axis=-1)
with tf.Session() as sess:
    print(sess.run(result))

[[1 1]
 [1 3]
 [1 5]
 [3 1]
 [3 3]
 [3 5]
 [5 1]
 [5 3]
 [5 5]]
You can use product from itertools:
import numpy as np
from itertools import product

np.array([np.array(item) for item in product([1,3,5], repeat=2)])
array([[1, 1],
       [1, 3],
       [1, 5],
       [3, 1],
       [3, 3],
       [3, 5],
       [5, 1],
       [5, 3],
       [5, 5]])
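As a small simplification, the inner np.array call is not needed; np.array can consume the tuples yielded by product directly:
np.array(list(product([1, 3, 5], repeat=2)))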
I also came up with an answer, similar to @giser_yugang's, but without using tf.meshgrid and tf.concat.
import tensorflow as tf
inds = tf.constant([1,3,5])
num = tf.shape(inds)[0]
ind_flat_lower = tf.tile(inds,[num])
ind_mat = tf.reshape(ind_flat_lower,[num,num])
ind_flat_upper = tf.reshape(tf.transpose(ind_mat),[-1])
result = tf.transpose(tf.stack([ind_flat_upper,ind_flat_lower]))
with tf.Session() as sess:
    print(sess.run(result))

[[1 1]
 [1 3]
 [1 5]
 [3 1]
 [3 3]
 [3 5]
 [5 1]
 [5 3]
 [5 5]]