Keras for N-tuple Network (sparse input) - python

I'm trying to train an N-tuple network using Keras. An N-tuple network is just a sparse array of one-hot activated patterns. Imagine a chess board with 64 squares, where each square can contain one of N piece types: there will always be exactly 64 activated ones out of 64*N possible parameters, stored as a 2D array [64][N]. Or take every possible 2x2 square, giving N^4 possible configurations for each such square. Such a network is linear and outputs a single value. The training is good old SGD and the likes.
I successfully trained the network with my own C++ code, using lookup tables and summing. But I wanted to try Keras, since it allows for different optimization algorithms, use of GPUs, etc. For starters I flattened the 2D array into one big vector, but that soon became impractical: there are thousands of possible parameters, of which only a handful (a fixed number) are ones and the rest are zeros.
I was wondering whether in Keras (or a similar library) it is possible to use training data like this: 13,16,11,11,5,...,3, where those numbers are the indexes of the active features, instead of one big vector of 0,0,0,1,0,0,......,1,0,0,0,....,1,0,0,0,...

You could use tf.sparse.SparseTensor(...) and then set sparse=True for tf.keras.Input(...).
import tensorflow as tf

def sparse_one_hot(y):
    m = len(y)
    # infer the number of classes from the distinct values in y
    n_classes = len(tf.unique(tf.squeeze(y))[0])
    rows = tf.range(m, dtype='int64')[:, None]
    # indices are (row, class) pairs, matching dense_shape=(m, n_classes)
    indices = tf.concat([rows, y], axis=1)
    ones = tf.ones(shape=(m,), dtype='float32')
    sparse_y = tf.sparse.SparseTensor(indices, ones, dense_shape=(m, n_classes))
    return sparse_y

y = tf.random.uniform(shape=(10, 1), minval=0, maxval=4, dtype=tf.int64)
sparse_y = sparse_one_hot(y)  # sparse_y.values, sparse_y.indices

# set sparse=True for Input:
# tf.keras.Input(..., sparse=True, ...)
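On the model side, a minimal sketch of a linear network consuming such sparse input might look like this (assuming TF 2.x; the feature count is an illustrative placeholder, and whether Dense accepts sparse tensors directly can depend on the TF version):
import tensorflow as tf

N_FEATURES = 64 * 12  # illustrative: 64 squares x 12 piece types

# Dense(1, use_bias=False) on one-hot features is exactly the linear
# N-tuple evaluation: a sum of looked-up weights.
inp = tf.keras.Input(shape=(N_FEATURES,), sparse=True)
out = tf.keras.layers.Dense(1, use_bias=False)(inp)
model = tf.keras.Model(inp, out)
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01), loss='mse')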

Related

Batch inference of softmax does not sum to 1

I am working with the REINFORCE algorithm in PyTorch. I noticed that the batch inference/predictions of my simple network with Softmax don't sum to 1 (not even close to 1). I am attaching a minimal working example so that you can reproduce it. What am I missing here?
import numpy as np
import torch

obs_size = 9
HIDDEN_SIZE = 9
n_actions = 2

np.random.seed(0)

model = torch.nn.Sequential(
    torch.nn.Linear(obs_size, HIDDEN_SIZE),
    torch.nn.ReLU(),
    torch.nn.Linear(HIDDEN_SIZE, n_actions),
    torch.nn.Softmax(dim=0)
)

state_transitions = np.random.rand(3, obs_size)
state_batch = torch.Tensor(state_transitions)

pred_batch = model(state_batch)  # WRONG PREDICTIONS!
print('wrong predictions:\n', *pred_batch.detach().numpy())
# [0.34072137 0.34721774] [0.30972624 0.30191955] [0.3495524 0.3508627]
# DOES NOT SUM TO 1 !!!

pred_batch = [model(s).detach().numpy() for s in state_batch]  # CORRECT PREDICTIONS
print('correct predictions:\n', *pred_batch)
# [0.5955179 0.40448207] [0.6574412 0.34255883] [0.624833 0.37516695]
# DOES SUM TO 1 AS EXPECTED
Although PyTorch lets us get away with it, we don’t actually provide an input with the right dimensionality. We have a model that takes one input and produces one output, but PyTorch nn.Module and its subclasses are designed to do so on multiple samples at the same time. To accommodate multiple samples, modules expect the zeroth dimension of the input to be the number of samples in the batch.
Deep Learning with PyTorch
That your model works on each individual sample is an implementation nicety. You have incorrectly specified the dimension for the softmax (across the batch instead of across the action values), and hence when given a batch it computes the softmax across samples instead of within each sample:
nn.Softmax requires us to specify the dimension along which the softmax function is applied:
softmax = nn.Softmax(dim=1)
In this case, we have two input vectors in two rows (just like when we work with batches), so we initialize nn.Softmax to operate along dimension 1.
Change torch.nn.Softmax(dim=0) to torch.nn.Softmax(dim=1) to get appropriate results.
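For intuition, here is a quick sketch of what the two dim choices do on a (batch, actions) shaped tensor (the values are random; only the sums matter):
import torch

logits = torch.randn(3, 2)          # (batch=3, actions=2)
p0 = torch.softmax(logits, dim=0)   # normalizes each column, across samples
p1 = torch.softmax(logits, dim=1)   # normalizes each row, within a sample
print(p0.sum(dim=0))  # tensor([1., 1.])     - columns sum to 1
print(p1.sum(dim=1))  # tensor([1., 1., 1.]) - rows sum to 1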

"Tensor objects are only iterable when eager execution is enabled" error when trying to create custom loss function in Keras

I am trying to do SVD using a neural network. My input is a matrix (let's say 4x4 matrices only) and the output is a vector representing the decomposed form (given that the input is 4x4, this would be a 36-element vector, with 16 elements for U, 4 elements for S, and 16 elements for V.T).
I am trying to define a custom loss function instead of using something like MSE on the decomposed form. So instead of comparing the 36 length vectors for loss, I want to compute the loss between the reconstructed matrices. So if A = U * S * V.T (actual) and A' = U' * S' * V.T' (predicted), I want to compute the loss between A and A'.
I am pretty new to TensorFlow and Keras, so I may be doing some naive things, but here is what I have so far. While the logic seems okay to me, I get a TypeError: Tensor objects are only iterable when eager execution is enabled. To iterate over this tensor use tf.map_fn. I am not sure why this is the case or how to fix it. Also, do I need to flatten the output from reconstruct_matrix, as I am currently doing, or should I just leave it as is?
# This function takes the decomposed matrix (vector of U, S, V.T)
# and reconstructs the original matrix
def reconstruct_matrix(decomposed_vector):
    example = decomposed_vector
    s = np.zeros((4, 4))
    for en, i in enumerate(example[16:20]):
        s[en, en] = i
    u = example[:16].reshape(4, 4)
    vt = example[20:].reshape(4, 4)
    orig = np.matmul(u, s)
    orig = np.matmul(orig, vt)
    return orig.flatten()  # Given that matrices are 4x4, this will be a length-16 vector

# Custom loss that essentially computes MSE on reconstructed matrices
def custom_loss(y_true, y_pred):
    Y = reconstruct_matrix(y_true)
    Y_prime = reconstruct_matrix(y_pred)
    return K.mean(K.square(Y - Y_prime))

model.compile(optimizer='adam',
              loss=custom_loss)
Note: My keras version is 2.2.4 and my tensorflow version is 1.14.0.
In TF 1.x eager execution is disabled by default (it's on from version 2 onwards).
You have to enable it by calling at the top of your script:
import tensorflow as tf
tf.enable_eager_execution()
This mode allows you to use Python-like abstractions for flow control (e.g. the if statements and for loops you've been using in your code). If it's disabled, you need to use the TensorFlow equivalents (tf.cond and tf.while_loop for if and for respectively).
More information about it in the docs.
BTW, I'm not sure about the flatten, but remember that y_true and y_pred need the same shape and the samples have to correspond to each other; if that's fulfilled you should be fine.
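Alternatively, if you prefer to keep eager execution disabled, the reconstruction can be written with TensorFlow ops only; here is a sketch under that assumption (batched 4x4 matrices, as in your setup):
import tensorflow as tf
from keras import backend as K

def reconstruct_matrix_tf(decomposed):
    # decomposed: (batch, 36) = 16 (U) + 4 (diagonal of S) + 16 (V.T)
    u = tf.reshape(decomposed[:, :16], (-1, 4, 4))
    s = tf.linalg.diag(decomposed[:, 16:20])  # (batch, 4, 4) diagonal matrices
    vt = tf.reshape(decomposed[:, 20:], (-1, 4, 4))
    return tf.matmul(tf.matmul(u, s), vt)

def custom_loss(y_true, y_pred):
    return K.mean(K.square(reconstruct_matrix_tf(y_true) -
                           reconstruct_matrix_tf(y_pred)))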

How to feed input with changing size in Tensorflow

I want to train a network with planar curves, which I represent as numpy arrays with shape (L,2).
The number 2 stands for x,y coordinates and L is the number of points which is changing in my dataset. I treat x,y as 2 different "channels".
I implemented a function, next_batch(batch_size), that provides the next batch as a 1D numpy array of shape (batch_size,), whose elements are 2D arrays of shape (L,2). These are my curves and, as mentioned before, L differs between elements (I didn't want to confine the curves to a fixed number of points).
My question:
How can I manipulate the output of next_batch() so that I can feed the network the input curves, using a scheme similar to what appears in the TensorFlow tutorial: https://www.tensorflow.org/get_started/mnist/pros
i.e., using the feed_dict mechanism.
In the given tutorial the input size was fixed; in the tutorial's code line:
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
batch[0] has a fixed shape: (50,784) (50 = #samples,784 = #pixels)
I cannot transform my input into a numpy array of shape (batch_size, L, 2),
since an array must have a fixed size in every dimension.
So what can I do?
I already defined a placeholder (that can have unknown size):
#first dimension is the sample dim, second is curve length, third:x,y coordinates
x = tf.placeholder(tf.float32, [None, None,2])
but how can I feed it properly?
Short answer that you're probably looking for: you can't, without padding or grouping samples by length.
To elaborate a bit: in tensorflow, dimensions must be fixed throughout a batch, and jagged arrays are not natively supported.
Dimensions may be unknown a priori (in which case you set the placeholders' dimensions to None) but are still inferred at runtime, so
your solution of having a placeholder:
x = tf.placeholder(tf.float32, [None, None, 2])
couldn't work because it's semantically equivalent to saying "I don't know the constant length of the curves in a batch a priori, infer it at runtime from the data".
This is not to say that your model in general can't accept inputs of different dimensions, if you structure it accordingly, but the data that you feed it each time you call sess.run() must have fixed dimensions.
Your options, then, are as follows:
Pad your batches along the second dimension.
Say you have 2 curves of shape (4, 2) and (5, 2), and you know the maximum curve length in your dataset is 6; you could use np.pad as follows:
In [1]: import numpy as np
   ...: max_len = 6
   ...: curve1 = np.random.rand(4, 2)
   ...: curve2 = np.random.rand(5, 2)
   ...: batch = [curve1, curve2]

In [2]: for b in batch:
   ...:     dim_difference = max_len - b.shape[0]
   ...:     print(np.pad(b, [(0, dim_difference), (0, 0)], 'constant'))
   ...:
[[ 0.92870128  0.12910409]
 [ 0.41894655  0.59203704]
 [ 0.3007023   0.52024492]
 [ 0.47086336  0.72839691]
 [ 0.          0.        ]
 [ 0.          0.        ]]
[[ 0.71349902  0.0967278 ]
 [ 0.5429274   0.19889411]
 [ 0.69114597  0.28624011]
 [ 0.43886002  0.54228625]
 [ 0.46894651  0.92786989]
 [ 0.          0.        ]]
Have your next_batch() function return batches of curves grouped by length.
These are the standard ways of doing things when dealing with jagged arrays.
Another possibility, if your task allows for it, is to concatenate all your points in a single tensor of shape (None, 2) and change your model to operate on single points as if they were samples in a batch. If you save the original sample lengths in a separate array, you can then restore the model outputs by slicing them correctly. This is highly inefficient and requires all sorts of assumptions on your problem, but it's a possibility.
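For instance, a minimal NumPy sketch of that concatenate-and-restore idea (names are illustrative):
import numpy as np

curves = [np.random.rand(4, 2), np.random.rand(5, 2), np.random.rand(3, 2)]
lengths = [len(c) for c in curves]
flat = np.concatenate(curves, axis=0)  # (12, 2), feedable to a [None, 2] placeholder

outputs = flat  # stand-in for the per-point model outputs

# split back into per-curve arrays using the saved lengths
restored = np.split(outputs, np.cumsum(lengths)[:-1])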
Cheers and good luck!
You can use inputs with different sizes in TF. Just feed the data in the same way as in the tutorial you listed, but make sure to define the changing dimensions in the placeholder as None.
Here's a simple example of feeding a placeholder with different shapes:
import tensorflow as tf
import numpy as np

array1 = np.arange(9).reshape((3, 3))
array2 = np.arange(16).reshape((4, 4))
array3 = np.arange(25).reshape((5, 5))

model_input = tf.placeholder(dtype='float32', shape=[None, None])
sqrt_result = tf.sqrt(model_input)

with tf.Session() as sess:
    print(sess.run(sqrt_result, feed_dict={model_input: array1}))
    print(sess.run(sqrt_result, feed_dict={model_input: array2}))
    print(sess.run(sqrt_result, feed_dict={model_input: array3}))
You can create a placeholder initialized with [None, ..., None]. Each None tells the compiler that data of any length may be fed along that dimension. For example, [None, None] means a matrix with any number of rows and columns. However, you should take care about which kind of NN you use, because with a CNN the convolution and pooling layers must know the specific size of their tensors.
Tensorflow Fold might be of interest to you.
From the Tensorflow Fold README:
TensorFlow Fold is a library for creating TensorFlow models that consume structured data, where the structure of the computation graph depends on the structure of the input data. Fold implements dynamic batching. Batches of arbitrarily shaped computation graphs are transformed to produce a static computation graph. This graph has the same structure regardless of what input it receives, and can be executed efficiently by TensorFlow.
The graph structure can be set up so as to accept an arbitrary L value so that any structured input can be read in. This is especially helpful when building architectures such as recursive neural nets. The overall structure is very similar to what you are used to (feed dicts, etc). Since you need a dynamic computational graph for your application, this might be a good move for you in the long run.

Creating dynamic matrix for every element in a batch in TensorFlow

I am working on a siamese CNN with attention in TensorFlow.
The CNN structure consists of an embedding lookup table shared by two CNNs with tied weights.
The inputs to the network are two matrices, containing the indices for the question and for the answer to be fed into the network (batch_size x sentence_length):
self.input_q = tf.placeholder(tf.int32, [None, sentence_length], name="input_q")
self.input_a = tf.placeholder(tf.int32, [None, sentence_length], name="input_a")
After embedding each sentence (a row of the input matrix) I end up with two tensors (question and answer), each of size batch_size x sentence_length x embedding_size.
Let's forget about the batch dimension for now, to make things easier. That is to say, we have two matrices, Qemb and Aemb, both of size sentence_length x embedding_size.
From these two matrices I would like to construct a third one, an attention matrix A, used later for a learnable attention feature matrix; using numpy it would be defined as follows:
A[i,j] = 1.0 / (1.0 + np.linalg.norm(Qemb[i,:]-Aemb[j,:]))
This matrix is built for each input pair, so it should be part of the graph, but apparently this cannot be done in TensorFlow, as there is no assign-by-index operation for a Tensor.
Am I right?
I thought I could run the ops for embedding the question and answer, build the A matrix outside the graph given the computed embeddings, and then feed the A matrix back to the graph to continue the next operations based on it.
self.attention_matrix = \
    tf.placeholder(tf.float32,
                   [None, sentence_length, sentence_length],
                   name="Attention_matrix")
Is there any problem with this approach that I might not be aware of?
(Apart from running the embedding ops twice, which doesn't seem optimal, but that's not a big deal.)
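For what it's worth, the formula above can be computed without any assign-by-index operation, by broadcasting the pairwise differences; a NumPy sketch of the idea (the same pattern maps to TensorFlow with tf.norm and an axis argument):
import numpy as np

def attention_matrix(Qemb, Aemb):
    # (L_q, 1, E) - (1, L_a, E) broadcasts to (L_q, L_a, E)
    diff = Qemb[:, None, :] - Aemb[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)  # (L_q, L_a) pairwise norms
    return 1.0 / (1.0 + dists)             # A[i, j] as defined above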

TensorFlow Multi-Layer Perceptron

I am learning TensorFlow, and my goal is to implement a multilayer perceptron for my needs. I checked the MNIST tutorial with a multilayer perceptron implementation and everything was clear to me except this:
_, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
                                              y: batch_y})
I guess x is the image itself (28*28 pixels, so the input is 784 neurons) and y is a label, which is a 1x10 array:
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
They feed whole batches (which are packs of data points and labels)! How does TensorFlow interpret this "batch" input? And how does it update the weights: simultaneously after each element in a batch, or after running through the whole batch?
And, if I need to input one number (input_shape = [1,1]) and output four numbers (output_shape = [1,4]), how should I change the tf.placeholders, and in which form should I feed them into the session?
When I ask how TensorFlow interprets it, I want to know how TensorFlow splits the batch into single elements. For example, a batch is a 2-D array, right? In which direction does it split the array? Or does it use matrix operations and not split anything?
When I ask how I should feed my data, I want to know whether it should be a 2-D array with samples in its rows and features in its columns, or whether it could be a 2-D list.
When I feed my float numpy array X_train to x, which is :
x = tf.placeholder("float", [1, n_input])
I receive an error:
ValueError: Cannot feed value of shape (1, 18) for Tensor 'Placeholder_10:0', which has shape '(1, 1)'
It appears that I have to create my data as a Tensor too?
When I tried [18x1]:
Cannot feed value of shape (18, 1) for Tensor 'Placeholder_12:0', which has shape '(1, 1)'
They feed whole batches (which are packs of data points and labels)!
Yes, this is how neural networks are usually trained (due to some nice mathematical properties you get the best of both worlds: a better gradient approximation than in per-sample SGD on one hand, and much faster convergence than full-batch GD on the other).
How does tensorflow interpret this "batch" input?
It "interprets" it according to operations in your graph. You probably have reduce mean somewhere in your graph, which calculates average over your batch, thus causing this to be the "interpretation".
And how does it update the weights: 1. simultaneously after each element in a batch? 2. After running through the whole batch?
As in the previous answer - there is nothing "magical" about a batch; it is just another dimension, and each internal operation of the neural net is well defined for a batch of data, so there is still a single update in the end. Since you use a reduce_mean operation (or maybe reduce_sum?), you are updating according to the mean of the "small" per-sample gradients (or their sum if there is a reduce_sum instead). Again - you can control it (up to the aggregation behaviour; you cannot force a per-sample update unless you introduce a while loop into the graph).
And, if I need to input one number (input_shape = [1,1]) and output four numbers (output_shape = [1,4]), how should I change the tf.placeholders and in which form should I feed them into the session? THANKS!!
Just set the variables n_input=1 and n_classes=4, and push your data as before, as [batch, n_input] and [batch, n_classes] arrays (in your case batch=1, if by "1x1" you mean "one sample of dimension 1"; though your edit suggests that you actually do have a batch, and by 1x1 you meant a 1-d input).
EDIT: 1. When I ask how TensorFlow interprets it, I want to know how TensorFlow splits the batch into single elements. For example, a batch is a 2-D array, right? In which direction does it split the array? Or does it use matrix operations and not split anything? 2. When I ask how I should feed my data, I want to know whether it should be a 2-D array with samples in its rows and features in its columns, or whether it could be a 2-D list.
It does not split anything. It is just a matrix, and each operation is perfectly well defined for matrices as well. Usually you put examples in rows, i.e. in the first dimension, and that is exactly what [batch, n_inputs] says: you have batch rows, each with n_inputs columns. But again - there is nothing special about it, and you could also create a graph which accepts column-wise batches if you really needed to.
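To make the shapes concrete, here is a small sketch of the 1-input / 4-output case (TF 1.x placeholders, matching the question; the data values are illustrative):
import numpy as np
import tensorflow as tf

n_input, n_classes = 1, 4
x = tf.placeholder("float", [None, n_input])    # None: any batch size
y = tf.placeholder("float", [None, n_classes])

# e.g. 18 samples of dimension 1, and their 4-dimensional targets
X_train = np.random.rand(18, n_input).astype(np.float32)
Y_train = np.random.rand(18, n_classes).astype(np.float32)
# feed_dict={x: X_train, y: Y_train} then works for any batch size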
