Splitting a 4-dimensional tensor into odd and even rows - python

I want to put the odd and even rows of a 4-dimensional input tensor into separate variables.
e.g.
tensor = [[[-1.8391453 ],
           [ 1.9224693 ]],
          [[-0.7931502 ],
           [-0.16963768]]]
tensorodd = [[[-1.8391453], [-0.7931502]]]
tensoreven = [[[1.9224693], [-0.16963768]]]
I couldn't extend this to four dimensions, and I'm not sure what I wrote is correct.
This is not exactly what I want: I want rows 1, 3, 5 in one variable and rows 0, 2, 4, 6 in another. What I actually want to do is apply the MAE (mean absolute error) formula to the tensor, so I want to split the rows into y1 and y2 and apply the formula to them.

I'm not sure I've understood what you want to get.
I'm assuming that you have a tensor of shape (3, 128, 128, 3) and you want to take:
The even rows of that tensor, leaving you with a tensor of shape (2, 128, 128, 3)
The odd rows of that tensor, leaving you with a tensor of shape (1, 128, 128, 3)
Then you could work with indices:
import tensorflow as tf
import numpy as np
X = tf.convert_to_tensor(np.ones((3, 128, 128, 3)))
# creating a list of indices for 0 axis, in this case [0, 1, 2]
indices = tf.range(start=0, limit=tf.shape(X)[0], dtype=tf.int32)
# separating the even and odd numbers
even_indices = [x for x in indices if x % 2 == 0]
odd_indices = [x for x in indices if x % 2 != 0]
even_X = tf.gather(X, even_indices)
odd_X = tf.gather(X, odd_indices)
print('Even tensors', even_X.shape) # prints (2, 128, 128, 3)
print('Odd tensors', odd_X.shape) # prints (1, 128, 128, 3)
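Since the even and odd positions are just strided slices, plain slicing gives the same split without building index lists (a minimal sketch of the same idea, assuming TF 2.x):
import tensorflow as tf
import numpy as np
X = tf.convert_to_tensor(np.ones((3, 128, 128, 3)))
even_X = X[::2]  # rows 0, 2, 4, ... -> shape (2, 128, 128, 3)
odd_X = X[1::2]  # rows 1, 3, 5, ... -> shape (1, 128, 128, 3)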
Updated answer after new info:
# your input tensor has shape (3, 2, 3)
tensor = tf.constant([[[0, 0, 0],
                       [1, 0, 0]],
                      [[2, 0, 0],
                       [3, 1, 1]],
                      [[4, 1, 1],
                       [5, 1, 1]]])
even_tensor = [x[0] for x in tensor]
# even_tensor = [<tf.Tensor: [0, 0, 0]>, <tf.Tensor: [2, 0, 0]>, <tf.Tensor: [4, 1, 1]>]
odd_tensor = [x[1] for x in tensor]
# odd_tensor = [<tf.Tensor: [1, 0, 0]>, <tf.Tensor: [3, 1, 1]>, <tf.Tensor: [5, 1, 1]>]
mae = tf.keras.losses.MeanAbsoluteError()
result = mae(even_tensor, odd_tensor).numpy()
Instead of working with tensors you can convert to lists, e.g.:
odd_tensor = [list(x[1].numpy()) for x in tensor]
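Alternatively, since the two rows you want sit at fixed positions along the second axis, a vectorized sketch of the same MAE computation (assuming TF 2.x eager execution) avoids the Python loop entirely:
import tensorflow as tf
tensor = tf.constant([[[0., 0., 0.], [1., 0., 0.]],
                      [[2., 0., 0.], [3., 1., 1.]],
                      [[4., 1., 1.], [5., 1., 1.]]])
even_tensor = tensor[:, 0]  # first row of each pair, shape (3, 3)
odd_tensor = tensor[:, 1]   # second row of each pair, shape (3, 3)
mae = tf.keras.losses.MeanAbsoluteError()
result = mae(even_tensor, odd_tensor).numpy()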

Related

Update a tensor based on the highest value of another tensor when passed through a mask

I'm learning some transformations in tensorflow and want to know the possible ways to achieve the following.
tensor_1 = [0, 1, 2, 3]
tensor_2 = [0, 0, 0, 0]
mask = [True, False, True, False]
Expected outcome: tensor_2 = [0, 0, 1, 0]. Essentially, I want to pass tensor_1 through the mask and, for the positions that are True, update the same index in tensor_2 to 1. In the example above, the highest value when passed through the mask is 2, so we update the third index of tensor_2.
Also, I need to do this for a batch of images (tensor_1 in our example) of shape (batch_size, 128, 128, 3), where each image has 3 channels. We need to find the maximum over the flattened image (128, 128, 3) and apply the transformation to all 3 channels of that pixel in tensor_2, so that that pixel has 1 in all 3 channels; the final shape is (batch_size, 128, 128, 3). The mask is also of shape (batch_size, 128, 128, 3).
I understand this is very specific, but I want to understand transformations and am not sure where to begin other than trying out some scenarios.
Making some assumptions about your question because there's a bit of contradictory information. I will update this answer if you feel I missed something.
I want to pass tensor_1 through the mask and for values that are True, I want to update the same index in tensor_2 as 1
If you just want to use a preexisting mask to update values, you can use tf.where
tensor_1 = [0, 1, 2, 3]
tensor_2 = [0, 0, 0, 0]
mask = [True, False, True, False]
tf.where(mask, 1, tensor_2)
>>>
<tf.Tensor: shape=(4,), dtype=int32, numpy=array([1, 0, 1, 0], dtype=int32)>
For example, in the example above, the highest value when passed through the mask is 2, so we update the third index in tensor_2
The example you provided and your end goal don't match: the highest value in tensor_1 is 3, not 2. But you can use tf.where and tf.reduce_max directly, without having to create a separate mask.
tf.where(tensor_1 == tf.math.reduce_max(tensor_1), 1, tensor_2)
>>>
<tf.Tensor: shape=(4,), dtype=int32, numpy=array([0, 0, 0, 1], dtype=int32)>
You can also do this to your 3-D tensor without needing to flatten it:
xx = tf.random.uniform((2, 2, 3), maxval=10, dtype=tf.int32)
xx
>>>
<tf.Tensor: shape=(2, 2, 3), dtype=int32, numpy=
array([[[9, 8, 6],
        [7, 0, 2]],
       [[3, 8, 2],
        [2, 6, 7]]], dtype=int32)>
tf.where(xx == tf.math.reduce_max(xx), -5, 0)
>>>
<tf.Tensor: shape=(2, 2, 3), dtype=int32, numpy=
array([[[-5,  0,  0],
        [ 0,  0,  0]],
       [[ 0,  0,  0],
        [ 0,  0,  0]]], dtype=int32)>
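For the batched case described in the question, where the maximum is taken per image and the 1 should land in all 3 channels of the winning pixel, a sketch along the same lines (assuming TF 2.x; the small shapes stand in for (batch_size, 128, 128, 3)):
import tensorflow as tf
batch = tf.random.uniform((2, 4, 4, 3), maxval=10, dtype=tf.int32)
# Per-image maximum over height, width and channels; keepdims so it broadcasts
per_image_max = tf.reduce_max(batch, axis=[1, 2, 3], keepdims=True)
# Mark the pixel(s) holding the maximum, then spread the mark across channels
pixel_hit = tf.reduce_any(batch == per_image_max, axis=-1, keepdims=True)
result = tf.where(tf.broadcast_to(pixel_hit, tf.shape(batch)), 1, 0)
print(result.shape)  # (2, 4, 4, 3)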

How to gather a tensor with unknown first (batch) dimension?

I have a tensor of shape (?, 3, 2, 5). I want to supply pairs of indices to select from its second and third dimensions, which have sizes (3, 2).
If I supply 4 such pairs, I would expect the resulting shape to be (?, 4, 5). I'd thought this was what batch_gather is for: to "broadcast" gathering indices over the first (batch) dimension. But this is not what it does:
import tensorflow as tf
data = tf.placeholder(tf.float32, (None, 3, 2, 5))
indices = tf.constant([
    [2, 1],
    [2, 0],
    [1, 1],
    [0, 1]
], tf.int32)
tf.batch_gather(data, indices)
Which results in <tf.Tensor 'Reshape_3:0' shape=(4, 2, 2, 5) dtype=float32> instead of the shape that I was expecting.
How can I do what I want without explicitly indexing the batches (which have an unknown size)?
I wanted to avoid transpose and Python loops, and I think this works. This was the setup:
import numpy as np
import tensorflow as tf
shape = None, 3, 2, 5
data = tf.placeholder(tf.int32, shape)
idxs_list = [
    [2, 1],
    [2, 0],
    [1, 1],
    [0, 1]
]
idxs = tf.constant(idxs_list, tf.int32)
This allows us to gather the results:
batch_size, num_idxs, num_channels = tf.shape(data)[0], tf.shape(idxs)[0], shape[-1]
batch_idxs = tf.math.floordiv(tf.range(0, batch_size * num_idxs), num_idxs)[:, None]
nd_idxs = tf.concat([batch_idxs, tf.tile(idxs, (batch_size, 1))], axis=1)
gathered = tf.reshape(tf.gather_nd(data, nd_idxs), (batch_size, num_idxs, num_channels))
When we run with a batch size of 4, we get a result with shape (4, 4, 5), which is (batch_size, num_idxs, num_channels).
vals_shape = 4, *shape[1:]
vals = np.arange(int(np.prod(vals_shape))).reshape(vals_shape)
with tf.Session() as sess:
    result = gathered.eval(feed_dict={data: vals})
Which ties out with numpy indexing:
x, y = zip(*idxs_list)
assert np.array_equal(result, vals[:, x, y])
Essentially, gather_nd wants batch indices in the first dimension, and those have to be repeated once for each index pair (i.e., [0, 0, 0, 0, 1, 1, 1, 1, 2, ...] if there are 4 index pairs).
Since there didn't seem to be a tf.repeat at the time (newer TensorFlow versions do provide one), I used range and floordiv, and then concatenated the batch indices with the desired (x, y) indices (which are themselves tiled batch_size times).
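On a TensorFlow version that does have tf.repeat (available in recent 2.x releases), the same batch-index construction shrinks to the following sketch (assuming eager execution; the concrete batch size 4 stands in for the unknown one):
import tensorflow as tf
data = tf.random.uniform((4, 3, 2, 5))
idxs = tf.constant([[2, 1], [2, 0], [1, 1], [0, 1]], tf.int32)
batch_size, num_idxs = tf.shape(data)[0], tf.shape(idxs)[0]
# [0, 0, 0, 0, 1, 1, 1, 1, ...]: each batch index repeated once per index pair
batch_idxs = tf.repeat(tf.range(batch_size), num_idxs)[:, None]
nd_idxs = tf.concat([batch_idxs, tf.tile(idxs, (batch_size, 1))], axis=1)
gathered = tf.reshape(tf.gather_nd(data, nd_idxs),
                      tf.stack([batch_size, num_idxs, tf.shape(data)[-1]]))
print(gathered.shape)  # (4, 4, 5)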
With tf.batch_gather, the leading dimensions of the tensor's shape must match the leading dimensions of the indices' shape.
import tensorflow as tf
data = tf.placeholder(tf.float32, (2, 3, 2, 5))
print(data.shape)  # (2, 3, 2, 5)
# shape of indices: (2, 3)
indices = tf.constant([
    [1, 1, 1],
    [0, 0, 1]
])
print(tf.batch_gather(data, indices).shape)  # (2, 3, 2, 5)
# if the indices had shape (2, 3, 1), the output shape would be (2, 3, 1, 5)
What you want instead is tf.gather_nd. Going back to the question's data of shape (?, 3, 2, 5) and its (4, 2) index pairs, transpose the two indexed dimensions to the front, in the order the pairs index them, so that perm=[1, 2, 0, 3] maps (?, 3, 2, 5) to (3, 2, ?, 5):
indices = tf.constant([[2, 1], [2, 0], [1, 1], [0, 1]], tf.int32)
data = tf.placeholder(tf.float32, (None, 3, 2, 5))
data_transpose = tf.transpose(data, perm=[1, 2, 0, 3])
t_transpose = tf.gather_nd(data_transpose, indices)
t = tf.transpose(t_transpose, perm=[1, 0, 2])
print(t.shape)  # (?, 4, 5)

using tf.where() to select 3d tensor by 2d conditions & replacing elements in a 2d indices with keys and values

There are two questions in the title. I am confused by both because TensorFlow is such a static programming language (I really want to go back to either PyTorch or Chainer).
I give two examples below; please answer with TensorFlow code or links to the relevant functions.
1) tf.where()
data0 = tf.zeros([2, 3, 4], dtype=tf.float32)
data1 = tf.ones([2, 3, 4], dtype=tf.float32)
cond = tf.constant([[0, 1, 1], [1, 0, 0]])
# cond.shape == (2, 3)
# tf.where() works for a 1-d condition with 2-d data,
# but not for a 2-d condition with a 3-d tensor
# currently, what I am doing is:
# cond = tf.stack([cond] * 4, 2)
data = tf.where(cond > 0, data1, data0)
# data[i, j] should be all ones where cond[i, j] == 1, i.e. the
# pattern [[0., 1., 1.], [1., 0., 0.]] broadcast over the last axis
(I don't know how to broadcast cond to a 3-d tensor.)
2) change element in 2d tensor
# all dtype == tf.int64
t2d = tf.Variable([[0, 1, 2], [3, 4, 5]])
k, v = tf.constant([[0, 2], [1, 0]]), tf.constant([-2, -3])
# TODO: change values at positions k to v
# I cannot do [t2d.copy()[i] = j for i, j in k, v]
t3d == [[[0, 1, -2], [3, 4, 5]],
        [[0, 1, 2], [-3, 4, 5]]]
Thank you so much in advance. XD
These are two quite different questions, and they should probably have been posted as such, but anyway.
1)
Yes, you need to manually broadcast all the inputs to tf.where (https://www.tensorflow.org/api_docs/python/tf/where) if their shapes differ. For what it's worth, there is an (old) open issue about it, but implicit broadcasting was not implemented in TF 1.x (the TF 2.x tf.where does broadcast). You can use tf.stack like you suggest, although tf.tile would probably be more obvious (and may save memory, although I'm not sure how it is really implemented):
cond = tf.tile(tf.expand_dims(cond, -1), (1, 1, 4))
Or simply with tf.broadcast_to:
cond = tf.broadcast_to(tf.expand_dims(cond, -1), tf.shape(data1))
2)
This is one way to do that:
import tensorflow as tf
t2d = tf.constant([[0, 1, 2], [3, 4, 5]])
k, v = tf.constant([[0, 2], [1, 0]]), tf.constant([-2, -3])
# Tile t2d
n = tf.shape(k)[0]
t2d_tile = tf.tile(tf.expand_dims(t2d, 0), (n, 1, 1))
# Add an additional batch coordinate to each index
idx = tf.concat([tf.expand_dims(tf.range(n), 1), k], axis=1)
# Make updates tensor
s = tf.shape(t2d_tile)
t2d_upd = tf.scatter_nd(idx, v, s)
# Make updates mask
upd_mask = tf.scatter_nd(idx, tf.ones_like(v, dtype=tf.bool), s)
# Make final tensor
t3d = tf.where(upd_mask, t2d_upd, t2d_tile)
# Test
with tf.Session() as sess:
    print(sess.run(t3d))
Output:
[[[ 0  1 -2]
  [ 3  4  5]]

 [[ 0  1  2]
  [-3  4  5]]]
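On newer TensorFlow versions, tf.tensor_scatter_nd_update can replace the scatter-plus-mask combination above; a sketch of part 2 under that assumption:
import tensorflow as tf
t2d = tf.constant([[0, 1, 2], [3, 4, 5]])
k, v = tf.constant([[0, 2], [1, 0]]), tf.constant([-2, -3])
n = tf.shape(k)[0]
# One copy of t2d per update, with a batch coordinate prepended to each index
t2d_tile = tf.tile(tf.expand_dims(t2d, 0), (n, 1, 1))
idx = tf.concat([tf.expand_dims(tf.range(n), 1), k], axis=1)
t3d = tf.tensor_scatter_nd_update(t2d_tile, idx, v)
# t3d == [[[0, 1, -2], [3, 4, 5]], [[0, 1, 2], [-3, 4, 5]]]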

Tensorflow compute multiplication by binary matrix

I have a data tensor of shape [batch_size, 512] and a constant matrix of 0s and 1s of shape [256, 512].
For each batch element, I would like to efficiently compute, for every row of the matrix, the product of the data entries at the positions where that row is 1, and then sum these products over the rows.
An example:
Let's say I have a batch of size 1: the data tensor has the values [5, 4, 3, 7, 8, 2] and my constant matrix has the values:
[0,1,1,0,0,0]
[1,0,0,0,0,0]
[1,1,1,0,0,1]
This means I would like to compute 4*3 for the first row, 5 for the second, and 5*4*3*2 for the third.
In total for this batch I get 4*3 + 5 + 5*4*3*2, which equals 137.
Currently I do it by iterating over the rows, computing the elementwise product of my data with each matrix row, and then summing, which runs pretty slowly.
How about something like this:
import tensorflow as tf
# Two-element batch
data = [[5, 4, 3, 7, 8, 2],
        [4, 2, 6, 1, 6, 8]]
mask = [[0, 1, 1, 0, 0, 0],
        [1, 0, 0, 0, 0, 0],
        [1, 1, 1, 0, 0, 1]]
with tf.Graph().as_default(), tf.Session() as sess:
    # Data as tensors
    d = tf.constant(data, dtype=tf.int32)
    m = tf.constant(mask, dtype=tf.int32)
    # Tile data as needed
    dd = tf.tile(d[:, tf.newaxis], (1, tf.shape(m)[0], 1))
    mm = tf.tile(m[tf.newaxis, :], (tf.shape(d)[0], 1, 1))
    # Replace values with 1 wherever the mask is 0
    w = tf.where(tf.cast(mm, tf.bool), dd, tf.ones_like(dd))
    # Multiply row-wise and sum
    result = tf.reduce_sum(tf.reduce_prod(w, axis=-1), axis=-1)
    print(sess.run(result))
Output:
[137 400]
EDIT:
If your input data is a single vector, then you would just have:
import tensorflow as tf
# Single data vector
data = [5, 4, 3, 7, 8, 2]
mask = [[0, 1, 1, 0, 0, 0],
        [1, 0, 0, 0, 0, 0],
        [1, 1, 1, 0, 0, 1]]
with tf.Graph().as_default(), tf.Session() as sess:
    # Data as tensors
    d = tf.constant(data, dtype=tf.int32)
    m = tf.constant(mask, dtype=tf.int32)
    # Tile data as needed
    dd = tf.tile(d[tf.newaxis], (tf.shape(m)[0], 1))
    # Replace values with 1 wherever the mask is 0
    w = tf.where(tf.cast(m, tf.bool), dd, tf.ones_like(dd))
    # Multiply row-wise and sum
    result = tf.reduce_sum(tf.reduce_prod(w, axis=-1), axis=-1)
    print(sess.run(result))
Output:
137
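As a side note, the TF 2.x tf.where broadcasts its arguments, so the same computation can be written without the explicit tf.tile calls; a sketch under that assumption (TF >= 2.0, eager execution):
import tensorflow as tf
data = tf.constant([[5, 4, 3, 7, 8, 2],
                    [4, 2, 6, 1, 6, 8]])
mask = tf.constant([[0, 1, 1, 0, 0, 0],
                    [1, 0, 0, 0, 0, 0],
                    [1, 1, 1, 0, 0, 1]])
# (batch, 1, 6) against (3, 6) broadcasts to (batch, 3, 6)
w = tf.where(tf.cast(mask, tf.bool), data[:, tf.newaxis, :], 1)
result = tf.reduce_sum(tf.reduce_prod(w, axis=-1), axis=-1)
print(result.numpy())  # [137 400]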

Creating a tensor of ordered integers of shape (None, 1)

Given an input batch of shape (None, 1), is it possible to create a tensor of ordered integers of the same shape?
ex:
input = [3, 2, 3, 7], output = [0, 1, 2, 3]
ex:
input = [9, 3, 12, 4, 34 .....], output = [0, 1, 2, 3, ....]
tf.range() does what you need; you just need to provide the size based on the size of your input tensor. Since others have already shown that, I will show another approach:
tf.cumsum() on a vector of ones:
import tensorflow as tf
x = tf.placeholder(tf.int32, shape=(None,))
y = tf.cumsum(tf.ones_like(x)) - 1
with tf.Session() as sess:
    print(sess.run(y, {x: [4, 3, 2, 6, 3]}))
You could try this:
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(None, 1))
op = tf.range(tf.size(x))[:, tf.newaxis]
# test with different sizes
with tf.Session() as sess:
    print(sess.run(op, {x: np.expand_dims(range(10), axis=-1)}))
    print(sess.run(op, {x: np.expand_dims(range(3), axis=-1)}))
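In TF 2.x, with eager execution, the same thing is a two-liner over the leading dimension (a sketch; the constant stands in for a (None, 1) batch):
import tensorflow as tf
x = tf.constant([[9.], [3.], [12.], [4.]])
out = tf.range(tf.shape(x)[0])[:, tf.newaxis]  # [[0], [1], [2], [3]], shape (4, 1)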
