Tensorflow multiple X values to one Y value - python

Is it possible to map a list of input values X to a single label Y?
I'm working with ECG data split into one-second time series, and for each second I have a label for the emotion that was displayed.
So each sample is something like an array of 100 values, with a single binary value for Y.
What can I do?

Difficult to tell if that's what you're looking for without seeing your code so far. But here's an example.
import numpy as np
import tensorflow as tf

tf.reset_default_graph()

x_len = 3  # length of each X; in your case 100
xs = tf.placeholder(shape=[None, x_len], dtype=tf.float32)  # feed an arbitrary number of X's
ys = tf.placeholder(shape=[None], dtype=tf.float32)         # feed the Y's corresponding to the X's
outs = tf.reduce_sum(xs, axis=1) + ys                       # do something with the X's and Y's

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    x = np.array([[1, 2, 3], [4, 5, 6]])  # 2 X's of x_len == 3 each
    y = [10, 20]                          # 2 Y's, one corresponding to each X
    outs = sess.run(outs, feed_dict={xs: x, ys: y})  # run the graph to get the output
    print(outs)
This takes several X's of a specified length (3 here, 100 in your case), together with a corresponding Y for each X, and feeds them through the graph. The outs operation sums all values in each X and adds the corresponding Y to the sum.
Output:
[16. 35.]
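If the end goal is to train a classifier that maps each 100-value ECG window to its binary emotion label, a minimal tf.keras sketch could look like the following; the layer sizes, activations, and binary-crossentropy loss are assumptions for illustration, and the data here is random placeholder data.

import numpy as np
import tensorflow as tf

# Hypothetical data: 500 one-second ECG windows of 100 samples each,
# with one binary emotion label per window.
X = np.random.rand(500, 100).astype(np.float32)
Y = np.random.randint(0, 2, size=(500,)).astype(np.float32)

# Minimal binary classifier sketch; the architecture is illustrative only.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(100,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),  # one probability per 100-value window
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, Y, epochs=5, batch_size=32)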

Related

PyTorch: efficiently interleave two tensors in a custom order

I want to create a new tensor z from two tensors, say x and y with dimensions [N_samples, S, N_feats] and [N_samples, T, N_feats] respectively. The aim is to combine both tensors on the 2nd dim by mixing the elements of the 2nd dim in a specific ordering, which is stored in a variable order with dim [N_samples, U].
The ordering is different for every sample and is basically which index to take from which tensor. For a given sample it looks like order[0] = [x_0, x_1, y_0, x_2, y_1, ...], where the letter indicates the tensor and the number indicates the index along the 2nd dim. So z[0] would be
z[0] = [x[0, 0, :], x[0, 1, :], y[0, 0, :], x[0, 2, :], y[0, 1, :] ... ]
How would I achieve this? I've written something using torch.gather that tries to do this.
import torch

x = torch.rand((2, 4, 5))
y = torch.rand((2, 3, 5))

# new ordering of the second dim:
# a positive n means take the (n-1)th element from x,
# a negative n means take the (n-1)th element from y
order = [[1, 2, -1, 3, -2, 4, 3],
         [1, -1, -2, 2, 3, 4, -3]]

# simple concat for gather
combined = torch.cat([x, y], dim=1)
# add a zero padding on top of the combined tensor to ease gather
zero = torch.zeros_like(x)[:, 1:2]
combined = torch.cat([zero, combined], dim=1)

def _create_index_for_gather(index, offset, n_feats):
    new_index = [abs(i) + offset if i < 0 else i for i in index]
    # need to repeat each index across the feature dim for torch.gather
    new_index = [[i] * n_feats for i in new_index]
    return new_index

_, offset, n_feats = x.shape
index_for_gather = [_create_index_for_gather(i, offset, n_feats) for i in order]
z = combined.gather(dim=1, index=torch.tensor(index_for_gather))
Is there a more efficient way of doing this?
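One way to avoid the Python loops (a sketch, assuming the same zero-padded combined layout and the same sign convention for order as in the code above) is to build the gather index with tensor ops:

import torch

x = torch.rand((2, 4, 5))
y = torch.rand((2, 3, 5))
order = torch.tensor([[1, 2, -1, 3, -2, 4, 3],
                      [1, -1, -2, 2, 3, 4, -3]])

# Same layout as above: index 0 is the zero row, x occupies 1..4, y occupies 5..7.
zero = torch.zeros_like(x)[:, :1]
combined = torch.cat([zero, x, y], dim=1)

# Map the signed order values to indices into `combined`, then expand them
# across the feature dim so torch.gather can consume them directly.
gather_idx = torch.where(order > 0, order, order.abs() + x.shape[1])
gather_idx = gather_idx.unsqueeze(-1).expand(-1, -1, x.shape[-1])
z = combined.gather(dim=1, index=gather_idx)  # shape (2, 7, 5)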

Multidimensional Tensor slicing

First things first: I'm relatively new to TensorFlow.
I'm trying to implement a custom layer in tensorflow.keras, and I'm having a relatively hard time trying to achieve the following:
I've got 3 Tensors (x,y,z) of shape (?,49,3,3,32) [where ? is the batch size]
On each Tensor I compute the sum over the 3rd and 4th axes [thus I end up with 3 Tensors of shape (?,49,32)]
By taking an argmax over the above 3 Tensors of shape (?,49,32) I get a single (?,49,32) Tensor A
Now I want to use this tensor to select slices from the initial x, y, z Tensors in the following form:
Each element in the last dimension of A indicates which Tensor to select from
(i.e. 0 = X, 1 = Y, 2 = Z)
and the position of that element along the last dimension of A is the slice of the selected Tensor's last dimension that I would like to extract.
I've tried to achieve the above using tf.gather but I had no luck. Then I tried using a series of tf.map_fn, which is ugly and computationally costly.
To simplify the above:
let's say we've got an A array of shape (3,3,3,32). Then the numpy equivalent of what I try to achieve is this:
import numpy as np

x = np.random.rand(3, 3, 32)
y = np.random.rand(3, 3, 32)
z = np.random.rand(3, 3, 32)

x_sums = np.sum(np.sum(x, axis=0), 0)
y_sums = np.sum(np.sum(y, axis=0), 0)
z_sums = np.sum(np.sum(z, axis=0), 0)
max_sums = np.argmax([x_sums, y_sums, z_sums], 0)

A = np.array([x, y, z])
tmp = []
for i in range(0, len(max_sums)):
    tmp.append(A[max_sums[i], :, :, i])
output = np.transpose(np.stack(tmp))
Any suggestions?
ps: I tried tf.gather_nd but I had no luck
This is how you can do something like that with tf.gather_nd:
import tensorflow as tf
# Make example data
tf.random.set_seed(0)
b = 10 # Batch size
x = tf.random.uniform((b, 49, 3, 3, 32))
y = tf.random.uniform((b, 49, 3, 3, 32))
z = tf.random.uniform((b, 49, 3, 3, 32))
# Stack tensors together
data = tf.stack([x, y, z], axis=2)
# Put reduction axes last
data_t = tf.transpose(data, (0, 1, 5, 2, 3, 4))
# Reduce
s = tf.reduce_sum(data_t, axis=(4, 5))
# Find largest sums
idx = tf.argmax(s, 3)
# Make gather indices
data_shape = tf.shape(data_t, idx.dtype)
bb, ii, jj = tf.meshgrid(*(tf.range(data_shape[i]) for i in range(3)), indexing='ij')
# Gather result
output_t = tf.gather_nd(data_t, tf.stack([bb, ii, jj, idx], axis=-1))
# Reorder axes
output = tf.transpose(output_t, (0, 1, 3, 4, 2))
print(output.shape)
# TensorShape([10, 49, 3, 3, 32])
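An alternative sketch that avoids tf.gather_nd, reusing data_t and idx from the snippet above: turn the argmax into a one-hot mask over the choice axis and contract it away with tf.einsum.

# sel has shape (b, 49, 32, 3): a one-hot mask over the x/y/z choice axis.
sel = tf.one_hot(idx, 3, dtype=data_t.dtype)
# Contract the choice axis c of data_t (b, 49, 32, c, 3, 3) against the mask.
output_t = tf.einsum('bswcij,bswc->bswij', data_t, sel)
output = tf.transpose(output_t, (0, 1, 3, 4, 2))  # back to (b, 49, 3, 3, 32)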

How to change the values of a tensor row-wise based on predefined row indices

Consider a tensor called x with shape [1, batch_size]. I want to change the rows of another tensor called my_tensor, with shape [batch_size, seq_length], wherever the respective value in x is less than or equal to zero.
I can probably explain this better with some code:
import tensorflow as tf
batch_size = 3
seq_length = 5
x = tf.constant([-1, 4, 0]) # size is [1, batch_size]
# select the indices of the rows to be changed
candidate_rows = tf.where(tf.less_equal(x, 0))
my_tensor = tf.random.uniform(shape=(batch_size, seq_length), minval=10, maxval=30, seed=123)
sess = tf.InteractiveSession()
print(sess.run(candidate_rows))
print(sess.run(my_tensor))
which will produce:
candidate_rows =
[[0]
[2]]
my_tensor =
[[10.816193 14.168425 11.83606 24.044014 24.146267]
[17.929298 11.330187 15.837727 10.592653 29.098463]
[10.122135 16.338099 24.35467 15.236387 10.991222]]
and I would like to change rows [0] and [2] in my_tensor to another value, say all equal to 1:
[[1 1 1 1 1]
[17.929298 11.330187 15.837727 10.592653 29.098463]
[1 1 1 1 1]]
Perhaps the problem arises from how I use tf.where. I appreciate any assistance :)
One solution to your problem is to use tf.where to select between elements of two tensors.
t = tf.ones(shape=my_tensor.shape, dtype=my_tensor.dtype)
my_tensor = tf.where(x > 0, my_tensor, t)
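Note that relying on a rank-1 condition matching the first dimension is the older tf.where behaviour; with TF 2.x broadcasting semantics you can make the row-wise selection explicit (a sketch, assuming eager execution):

import tensorflow as tf

batch_size, seq_length = 3, 5
x = tf.constant([-1, 4, 0])
my_tensor = tf.random.uniform((batch_size, seq_length), minval=10, maxval=30, seed=123)

# Broadcast the per-row condition across the sequence dimension, then pick
# row by row between the original values and ones.
mask = tf.expand_dims(x > 0, axis=1)  # shape (batch_size, 1)
result = tf.where(mask, my_tensor, tf.ones_like(my_tensor))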

How to curve fit multiple y vals for single x value?

I'm trying to use numpy to curve fit (polyfit) a data set I have - it's multiple y vals for discrete x vals, i.e.:
data = [[2, 3], [3, 4], [5, 4]]
where the index is x, and the arrays are the y vals.
I tried the average/median of each array, but I get the feeling that's ignoring a lot of useful data.
TL;DR: I need to fit a curve to the scatter plot of these points (plot not shown here).
You could flatten your data out:
x = []
y = []
for i, ydata in enumerate(data):
    x += [i] * len(ydata)
    y += ydata
Now you can fit to x and y and it will account for all points in the set.
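For example, a sketch with np.polyfit on the flattened points (the polynomial degree here is an arbitrary illustrative choice):

import numpy as np

data = [[2, 3], [3, 4], [5, 4]]
x, y = [], []
for i, ydata in enumerate(data):
    x += [i] * len(ydata)
    y += ydata

# Fit a polynomial to every (x, y) pair; degree 2 is just an example.
coeffs = np.polyfit(x, y, deg=2)
print(coeffs, np.polyval(coeffs, sorted(set(x))))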

Tensorflow: Slice a 3D tensor with list of indices along the second axis

I have a placeholder tensor with shape [batch_size, sentence_length, word_dim] and a list of indices with shape [batch_size, num_indices]. The indices are along the second axis and are indices of words in the sentence; batch_size and sentence_length are only known at runtime.
How do I extract a tensor with shape [batch_size, num_indices, word_dim]?
I was reading about tf.gather, but it seems like gather only slices along the first axis. Am I correct?
Edit: I managed to get it to work with constants:
import tensorflow as tf

def tile_repeat(n, repTime):
    '''
    Create something like 111..122..2333..33 ..... n..nn,
    where each number appears repTime times consecutively.
    This is for flattening the indices.
    '''
    print n, repTime
    idx = tf.range(n)
    idx = tf.reshape(idx, [-1, 1])         # convert to an n x 1 matrix
    idx = tf.tile(idx, [1, int(repTime)])  # create repTime columns; each row repeats one number
    y = tf.reshape(idx, [-1])
    return y

def gather_along_second_axis(x, idx):
    '''
    x has shape [batch_size, sentence_length, word_dim],
    idx has shape [batch_size, num_indices].
    In each batch, gather the words of the sentence at the indices given in idx.
    Since tf.gather only works along the first axis, we flatten the input,
    gather, then reshape back.
    '''
    reshapedIdx = tf.reshape(idx, [-1])  # [batch_size * num_indices]
    idx_flattened = tile_repeat(tf.shape(x)[0], tf.shape(x)[1]) * tf.shape(x)[1] + reshapedIdx
    y = tf.gather(tf.reshape(x, [-1, int(tf.shape(x)[2])]),  # flatten input
                  idx_flattened)
    y = tf.reshape(y, tf.shape(x))
    return y

x = tf.constant([
    [[1, 2, 3], [3, 5, 6]],
    [[7, 8, 9], [10, 11, 12]],
    [[13, 14, 15], [16, 17, 18]]
])
idx = tf.constant([[0, 1], [1, 0], [1, 1]])
y = gather_along_second_axis(x, idx)
with tf.Session(''):
    print y.eval()
    print tf.Tensor.get_shape(y)
And the output is:
[[[ 1 2 3]
[ 3 5 6]]
[[10 11 12]
[ 7 8 9]]
[[16 17 18]
[16 17 18]]]
shape: (3, 2, 3)
However, when the inputs are placeholders it does not work and returns an error:
idx = tf.tile(idx, [1, int(repTime)])
TypeError: int() argument must be a string or a number, not 'Tensor'
Python 2.7, tensorflow 0.12
Thank you in advance.
Thanks to @AllenLavoie's comments, I could eventually come up with the solution:
def tile_repeat(n, repTime):
    '''
    Create something like 111..122..2333..33 ..... n..nn,
    where each number appears repTime times consecutively.
    This is for flattening the indices.
    '''
    print n, repTime
    idx = tf.range(n)
    idx = tf.reshape(idx, [-1, 1])    # convert to an n x 1 matrix
    idx = tf.tile(idx, [1, repTime])  # create repTime columns; each row repeats one number
    y = tf.reshape(idx, [-1])
    return y

def gather_along_second_axis(x, idx):
    '''
    x has shape [batch_size, sentence_length, word_dim],
    idx has shape [batch_size, num_indices].
    In each batch, gather the words of the sentence at the indices given in idx.
    Since tf.gather only works along the first axis, we flatten the input,
    gather, then reshape back.
    '''
    reshapedIdx = tf.reshape(idx, [-1])  # [batch_size * num_indices]
    idx_flattened = tile_repeat(tf.shape(x)[0], tf.shape(x)[1]) * tf.shape(x)[1] + reshapedIdx
    y = tf.gather(tf.reshape(x, [-1, tf.shape(x)[2]]),  # flatten input
                  idx_flattened)
    y = tf.reshape(y, tf.shape(x))
    return y

x = tf.constant([
    [[1, 2, 3], [3, 5, 6]],
    [[7, 8, 9], [10, 11, 12]],
    [[13, 14, 15], [16, 17, 18]]
])
idx = tf.constant([[0, 1], [1, 0], [1, 1]])
y = gather_along_second_axis(x, idx)
with tf.Session(''):
    print y.eval()
    print tf.Tensor.get_shape(y)
@Hoa Vu's answer was very helpful. The code works with the example x and idx, where sentence_length == len(indices), but it gives an error when sentence_length != len(indices).
I slightly changed the code and now it works when sentence_length >= len(indices).
I tested with new x and idx on Python 3.x.
import tensorflow as tf

def tile_repeat(n, repTime):
    '''
    Create something like 111..122..2333..33 ..... n..nn,
    where each number appears repTime times consecutively.
    This is for flattening the indices.
    '''
    idx = tf.range(n)
    idx = tf.reshape(idx, [-1, 1])    # convert to an n x 1 matrix
    idx = tf.tile(idx, [1, repTime])  # create repTime columns; each row repeats one number
    y = tf.reshape(idx, [-1])
    return y

def gather_along_second_axis(x, idx):
    '''
    x has shape [batch_size, sentence_length, word_dim],
    idx has shape [batch_size, num_indices].
    In each batch, gather the words of the sentence at the indices given in idx.
    Since tf.gather only works along the first axis, we flatten the input,
    gather, then reshape back.
    '''
    reshapedIdx = tf.reshape(idx, [-1])  # [batch_size * num_indices]
    idx_flattened = tile_repeat(tf.shape(x)[0], tf.shape(idx)[1]) * tf.shape(x)[1] + reshapedIdx
    y = tf.gather(tf.reshape(x, [-1, tf.shape(x)[2]]),  # flatten input
                  idx_flattened)
    y = tf.reshape(y, [tf.shape(x)[0], tf.shape(idx)[1], tf.shape(x)[2]])
    return y

x = tf.constant([
    [[1, 2, 3], [1, 2, 3], [3, 5, 6], [3, 5, 6]],
    [[7, 8, 9], [7, 8, 9], [10, 11, 12], [10, 11, 12]],
    [[13, 14, 15], [13, 14, 15], [16, 17, 18], [16, 17, 18]]
])
idx = tf.constant([[0, 1], [1, 2], [0, 3]])
y = gather_along_second_axis(x, idx)
with tf.Session(''):
    print(y.eval())
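In more recent TensorFlow versions (1.14+ / 2.x), the same batched selection can be done directly with the batch_dims argument of tf.gather, which may be worth trying instead of the manual flatten-and-gather (a sketch with the same example values):

import tensorflow as tf

x = tf.constant([
    [[1, 2, 3], [1, 2, 3], [3, 5, 6], [3, 5, 6]],
    [[7, 8, 9], [7, 8, 9], [10, 11, 12], [10, 11, 12]],
    [[13, 14, 15], [13, 14, 15], [16, 17, 18], [16, 17, 18]]
])
idx = tf.constant([[0, 1], [1, 2], [0, 3]])

# batch_dims=1 gathers along axis 1 independently for each batch element.
gathered = tf.gather(x, idx, axis=1, batch_dims=1)  # shape (3, 2, 3)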
