Tensor multiplication in Keras - python

I have two tensors:
A <tf.Tensor 'sequential_12/my_layer_56/add:0' shape=(?, 300, 2) dtype=float32>
and
B <tf.Tensor 'input_82:0' shape=(?, 2, 2) dtype=float32>
Now, I would like to multiply them with the usual matrix row-column product to obtain
A * B of size (?, 300, 2), i.e. the matrix product should run only over the second and third dimensions. How can I achieve this?
I tried tf.tensordot with different axes specifications, but so far nothing has worked. For example, I tried
tf.tensordot(A,B,axes=[[2], [0]])
but this produces a tensor of the following form
<tf.Tensor 'Tensordot_10:0' shape=(?, 300, 2, 2) dtype=float32>

Maybe try tf.matmul:
import tensorflow as tf
samples = 1
A = tf.random.normal((samples, 300, 2))
B = tf.random.normal((samples, 2, 2))
print(tf.matmul(A, B).shape)
# (1, 300, 2)
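For context, tf.tensordot contracts the requested axes but does not treat the leading dimension as a batch, which is why it produced a rank-4 tensor, whereas tf.matmul performs the matrix product over the last two axes and broadcasts over the rest. The same product can also be written with tf.einsum (a minimal sketch):
import tensorflow as tf
samples = 1
A = tf.random.normal((samples, 300, 2))
B = tf.random.normal((samples, 2, 2))
# 'bij,bjk->bik': contract the shared inner axis j per batch element b
C = tf.einsum('bij,bjk->bik', A, B)
print(C.shape)  # (1, 300, 2)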

Addition of unequal sized tensors in Keras

I am working in Keras with a tensor of the form
A = <tf.Tensor 'lambda_87/strided_slice:0' shape=(?, 40, 2) dtype=float32>
Now, for each of the 40 "rows", I would like to add the index-0 row of a tensor with dimensions
B = <tf.Tensor 'lambda_92/mul:0' shape=(?, 2, 2) dtype=float32>
In short, for the present step I only need B[:,0,:] from the second tensor, i.e., excluding the first dimension, the first "row" of the matrix B.
The Add() layer seems to work only with equally-sized tensors. Any suggestion on how I could specify a Lambda function that does the job?
Thanks for reading!
Maybe try something like this:
import tensorflow as tf
samples = 1
A = tf.random.normal((samples, 40, 2))
B = tf.random.normal((samples, 2, 2))
B = tf.expand_dims(B[:, 0, :], axis=1) # or just B = B[:, 0, :]
C = A + B
print(C.shape)
# (1, 40, 2)
Or with a Lambda layer:
import tensorflow as tf
samples = 1
A = tf.random.normal((samples, 40, 2))
B = tf.random.normal((samples, 2, 2))
lambda_layer = tf.keras.layers.Lambda(lambda x: x[0] + x[1][:, 0, :])
print(lambda_layer([A, B]))
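In both snippets the slice of B has shape (batch, 2) (or (batch, 1, 2) after expand_dims), which broadcasts against A's shape (batch, 40, 2) during the addition, so each of the 40 rows receives the same row of B.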

How to make SHAP's DeepExplainer work with deepctr library

I am using the DeepCTR (version 0.7.5) Keras library to predict CTR (using DeepFM):
https://deepctr-doc.readthedocs.io/en/latest/deepctr.models.deepfm.html.
Here is a small example of the code to fit the model:
# Imports, then feature preparation...
model = DeepFM(linear_feature_columns, dnn_feature_columns, task='binary')
model.compile(optimizer, loss)
train_model_input = [train_df[name] for name in feature_names]
model.fit(x=train_model_input, y=train_df[TARGET].values, validation_split=0.3)
But when I try the following:
e = shap.DeepExplainer(model, test_df.head(50))
I get the following error:
ValueError: Cannot feed value of shape (50, 17) for Tensor 'x:0', which has shape '(?, 1)'
I looked all over Google and tried playing a lot with the input shapes and the SHAP API, but nothing worked for me.
Additional info:
The model's input format (17 inputs) is:
[<tf.Tensor 'x:0' shape=(?, 1) dtype=int32>, <tf.Tensor 'x1:0' shape=(?, 1) dtype=int32>...
And the output is:
<tf.Tensor 'prediction_layer/Reshape:0' shape=(?, 1) dtype=float32>
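One direction to try (a sketch, not a verified fix): the error suggests SHAP is feeding the whole (50, 17) frame to a single input tensor of shape (?, 1), while the model actually takes a list of 17 separate inputs. Passing the background data in the same list-of-arrays format used for model.fit might resolve the mismatch; feature_names below is assumed to be the same list used to build train_model_input:
# Hypothetical sketch: one (n, 1) array per model input, mirroring model.fit
background = [test_df[name].values.reshape(-1, 1) for name in feature_names]
e = shap.DeepExplainer(model, background)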

TensorFlow metric: top-N accuracy

I'm using add_metric and trying to create a custom metric that computes top-3 accuracy for a classifier. Here's as far as I got:
def custom_metrics(labels, predictions):
    # labels => Tensor("fifo_queue_DequeueUpTo:422", shape=(?,), dtype=int64)
    # predictions => {
    #     'logits': <tf.Tensor 'dnn/logits/BiasAdd:0' shape=(?, 26) dtype=float32>,
    #     'probabilities': <tf.Tensor 'dnn/head/predictions/probabilities:0' shape=(?, 26) dtype=float32>,
    #     'class_ids': <tf.Tensor 'dnn/head/predictions/ExpandDims:0' shape=(?, 1) dtype=int64>,
    #     'classes': <tf.Tensor 'dnn/head/predictions/str_classes:0' shape=(?, 1) dtype=string>
    # }
Looking at the implementation of the existing tf.metrics, everything is implemented using TF ops. How could I implement top-3 accuracy?
If you want to implement it yourself, tf.nn.in_top_k is very useful: it returns a boolean array indicating whether the target is within the top k predictions. You just have to take the mean of the result:
def custom_metrics(labels, predictions):
    # predictions is a dict here, so pass the (?, 26) logits tensor to in_top_k
    return tf.metrics.mean(tf.nn.in_top_k(predictions=predictions['logits'], targets=labels, k=3))
You can also use the built-in Keras metric (note that the importable module path is tensorflow.keras, not the tf alias):
from tensorflow.keras.metrics import top_k_categorical_accuracy
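Note that top_k_categorical_accuracy defaults to k=5 and expects one-hot labels, so a small wrapper (a sketch) pins it to top 3:
def top_3_accuracy(y_true, y_pred):
    # y_true: one-hot labels, y_pred: class scores, e.g. shape (batch, 26)
    return top_k_categorical_accuracy(y_true, y_pred, k=3)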

Using rnn_cell inside tf.while_loop: ValueError: The two structures don't have the same number of elements

Given data = tf.placeholder(tf.float32, [2, None, 3]) (batch_size * time_step * feature_size), ideally I want to do tf.unstack(data, axis=1) to get a number of tensors, each of shape [2, 3], and later feed them to an RNN with a for loop like
for rnn_input in rnn_inputs:
    state = rnn_cell(rnn_input, state)
Using a high-level API like tf.nn.dynamic_rnn is off the table, so I created a workaround:
import tensorflow as tf

data = tf.placeholder(tf.float32, [2, None, 3])
step_number = tf.placeholder(tf.int32, None)
loop_counter_inital = tf.constant(0)
initi_state = tf.zeros([2, 3], tf.float32)

def while_condition(loop_counter, rnn_states):
    return loop_counter < step_number

def while_body(loop_counter, rnn_states):
    loop_counter_current = loop_counter
    current_states = tf.gather_nd(
        data,
        tf.stack([tf.range(0, 2),
                  tf.zeros([2], tf.int32) + loop_counter_current], axis=1))
    cell = tf.nn.rnn_cell.BasicRNNCell(3)
    rnn_states = cell(current_states, rnn_states)
    return [loop_counter_current, rnn_states]

_, _states = tf.while_loop(
    while_condition, while_body,
    loop_vars=[loop_counter_inital, initi_state],
    shape_invariants=[loop_counter_inital.shape, tf.TensorShape([2, 3])])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(_states, feed_dict={data: [[[3, 1, 6], [4, 1, 2]],
                                              [[5, 8, 1], [0, 5, 2]]],
                                       step_number: 2}))
The idea is to loop through the rows of each 2D tensor in data to get the features for each time step. I got an error:
First structure (2 elements): [<tf.Tensor 'while/Identity:0' shape=() dtype=int32>, <tf.Tensor 'while/Identity_1:0' shape=(2, 3) dtype=float32>]
Second structure (3 elements): [<tf.Tensor 'while/Identity:0' shape=() dtype=int32>, (<tf.Tensor 'while/basic_rnn_cell/Tanh:0' shape=(2, 3) dtype=float32>, <tf.Tensor 'while/basic_rnn_cell/Tanh:0' shape=(2, 3) dtype=float32>)]
There seem to be some related posts, but none actually worked. Can anyone help?
You need to know that every BasicRNNCell implements call() with the signature (output, next_state) = call(input, state). This means the cell returns a tuple of two tensors, each of shape (batch, units), rather than a single state tensor. So you need to do the following:
rnn_states = cell(current_states, rnn_states)[1]
You also made a mistake here: you forgot to add 1 to loop_counter_current.
return [loop_counter_current+1, rnn_states]
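Putting both fixes together, the loop body becomes (a sketch of the corrected version):
def while_body(loop_counter, rnn_states):
    current_states = tf.gather_nd(
        data,
        tf.stack([tf.range(0, 2),
                  tf.zeros([2], tf.int32) + loop_counter], axis=1))
    cell = tf.nn.rnn_cell.BasicRNNCell(3)
    # cell(...) returns (output, next_state); keep only the state so the loop
    # variables keep the same structure as loop_vars
    rnn_states = cell(current_states, rnn_states)[1]
    # advance the counter so the loop terminates
    return [loop_counter + 1, rnn_states]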
Addendum:
The first structure represents the initial value of the loop_vars parameter you passed in, i.e. the initial values loop_counter_inital and initi_state. Its structure corresponds to the following:
[
    <tf.Tensor 'while/Identity:0' shape=() dtype=int32>,         # ---> loop_counter_inital
    <tf.Tensor 'while/Identity_1:0' shape=(2, 3) dtype=float32>  # ---> initi_state
]
The second structure represents loop_vars after one iteration. Based on the error above, it corresponds to the following:
[
    <tf.Tensor 'while/Identity:0' shape=() dtype=int32>,                    # ---> loop_counter_current
    (<tf.Tensor 'while/basic_rnn_cell/Tanh:0' shape=(2, 3) dtype=float32>,  # ---> output
     <tf.Tensor 'while/basic_rnn_cell/Tanh:0' shape=(2, 3) dtype=float32>)  # ---> next state
]

How to build a tensor from 2 scalars in Tensorflow?

I have two scalars resulting from the following operations:
a = tf.reduce_sum(tensor1) and b = tf.matmul(tf.transpose(tensor2), tensor3); the latter is a dot product, since tensor2 and tensor3 have the same dimensions (1-D vectors). Since these tensors have shape [None, dim1], it becomes difficult to deal with the shapes.
I want to build a tensor that has shape (2,1) using a and b.
I tried tf.Tensor([a, b], dtype=tf.float64, value_index=0), but it raises the error
TypeError: op needs to be an Operation: [<tf.Tensor 'Sum_5:0' shape=() dtype=float32>, <tf.Tensor 'MatMul_67:0' shape=(?, ?) dtype=float32>]
Any easier way to build that tensor/vector?
This should do it; change axis based on what you need:
a = tf.constant(1)
b = tf.constant(2)
# stack the scalars into shape (2,), then add a trailing axis for shape (2, 1)
c = tf.expand_dims(tf.stack([a, b], axis=0), axis=1)
Output:
array([[1],
       [2]], dtype=int32)
You can use concat or stack to achieve this:
import tensorflow as tf

t1 = tf.constant([1])
t2 = tf.constant([2])
c = tf.reshape(tf.concat([t1, t2], 0), (2, 1))
with tf.Session() as sess:
    print(sess.run(c))
In a similar way you can achieve it with tf.stack.
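For instance, a minimal sketch of the stack variant that yields the same (2, 1) result:
import tensorflow as tf

t1 = tf.constant([1])
t2 = tf.constant([2])
# stacking two (1,)-shaped tensors along axis 0 gives shape (2, 1) directly
c = tf.stack([t1, t2], axis=0)
with tf.Session() as sess:
    print(sess.run(c))  # [[1]
                        #  [2]]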
