TensorFlow - feeding examples with different lengths - python

Each of my training examples is a list of a different length.
I am trying to find a way to feed those examples into the graph.
Below is my attempt to do so by creating a list whose elements are placeholders with unknown dimensions.
graph2 = tf.Graph()
with graph2.as_default():
    A = list()
    for i in np.arange(3):
        A.append(tf.placeholder(tf.float32, shape=[None, None]))
    A_size = tf.shape(A)

with tf.Session(graph=graph2) as session:
    tf.initialize_all_variables().run()
    feed_dict = {A[0]: np.zeros((3, 7)), A[1]: np.zeros((3, 2)), A[2]: np.zeros((3, 2))}
    print(type(feed_dict))
    B = session.run(A_size, feed_dict=feed_dict)
    print(type(B))
However I got the following error:
InvalidArgumentError: Shapes of all inputs must match: values[0].shape = [3,7] != values[1].shape = [3,2]
Any idea on how to solve it?

From the documentation of tf.placeholder:
shape: The shape of the tensor to be fed (optional). If the shape is not specified, you can feed a tensor of any shape.
You need to write shape=None instead of shape=[None, None]. With your code, TensorFlow doesn't know you are dealing with variable-size input.
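As a minimal sketch of that fix (keeping the three-placeholder setup from the question): each placeholder is created with shape=None, and each placeholder's shape is fetched separately, since stacking differently sized tensors into a single tf.shape(A) call would still fail at run time:
import numpy as np
import tensorflow as tf

graph2 = tf.Graph()
with graph2.as_default():
    # shape=None means "any shape may be fed", so every example
    # can have different dimensions
    A = [tf.placeholder(tf.float32, shape=None) for _ in range(3)]
    # one shape op per placeholder; tf.shape(A) on the whole list
    # would pack the tensors and require matching shapes
    A_sizes = [tf.shape(a) for a in A]

with tf.Session(graph=graph2) as session:
    feed_dict = {A[0]: np.zeros((3, 7)), A[1]: np.zeros((3, 2)), A[2]: np.zeros((3, 2))}
    print(session.run(A_sizes, feed_dict=feed_dict))
    # [array([3, 7]), array([3, 2]), array([3, 2])]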

Related

How do I multiply each vector in a tensor by each element of a vector

What I want exactly: given a matrix W and a vector V such as
V=[1,2,3,4]
W=[[1,1,1,1],[1,1,1,1],[1,1,1,1],[1,1,1,1]]
we should get the result:
result=[[1,1,1,1],[2,2,2,2],[3,3,3,3],[4,4,4,4]]
I found this method online:
V = tf.constant([1,2,4], dtype=tf.float32)
W = tf.constant([[1,2,3,4],[1,2,3,4],[1,2,3,4]], dtype=tf.float32)
tf.multiply(tf.expand_dims(V,1),W)
## produce: [[1,2,3,4],[2,4,6,8],[4,8,12,16]]
which is exactly what I want, but when I use it in my model the vector also carries a batch dimension, which results in an error like
with input shapes: [?,1,297], [?,297,300].
which I assume is the same error that this code produces:
V = tf.constant([[1,2,4]], dtype=tf.float32)
W = tf.constant([[[1,2,3,4],[1,2,3,4],[1,2,3,4]]], dtype=tf.float32)
tf.multiply(tf.expand_dims(V,1),W)
I want to know the standard procedure for taking each element of the softmax output vector and multiplying it, as a weight, with each corresponding vector in the feature tensor.
I found that by using
V = tf.constant([[1,2,4]], dtype=tf.float32)
W = tf.constant([[[1,2,3,4],[1,2,3,4],[1,2,3,4]]], dtype=tf.float32)
h2=tf.keras.layers.multiply([W,tf.expand_dims(V,2)])
The Keras layer handles the batch dimension for us, but the axis argument of expand_dims has to change (from 1 to 2), because we still have to account for the batch dimension of V before feeding it to the layer.
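For reference, a minimal sketch (using the same batched constants as above, with a batch size of 1): plain tf.multiply broadcasts the same way once the extra axis is inserted at position 2 instead of 1, so the Keras layer is not strictly required:
import tensorflow as tf

# batched versions: V has shape [batch, 3], W has shape [batch, 3, 4]
V = tf.constant([[1, 2, 4]], dtype=tf.float32)
W = tf.constant([[[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]], dtype=tf.float32)

# expand_dims on axis 2 gives V the shape [batch, 3, 1],
# which broadcasts against W's shape [batch, 3, 4]
weighted = tf.multiply(tf.expand_dims(V, 2), W)

with tf.Session() as sess:
    print(sess.run(weighted))
    # [[[ 1.  2.  3.  4.]
    #   [ 2.  4.  6.  8.]
    #   [ 4.  8. 12. 16.]]]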

Use tf.nn.l2_loss on a collection of differently shaped vectors

I want to calculate the L2 loss over all the weights and biases in my neural network. Therefore I add all weights and biases to the tf.GraphKeys.REGULARIZATION_LOSSES collection and want to compute the L2 loss with the function TensorFlow provides:
W = tf.Variable(tf.truncated_normal([inputDim, outputDim], stddev=0.1), name='W')
b = tf.Variable(tf.ones([outputDim])/10, name='b')
tf.add_to_collection(tf.GraphKeys.REGULARIZATION_LOSSES, W)
tf.add_to_collection(tf.GraphKeys.REGULARIZATION_LOSSES, b)
...
and later in the code:
...
vars = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
l2_loss = tf.nn.l2_loss(vars) * config.L2PENALTY
I get this error when using the function on a network with 3 layers, and couldn't find a solution to it:
ValueError: Tried to convert 't' to a tensor and failed. Error: Shapes must be equal rank, but are 2 and 1
From merging shape 4 with other shapes. for 'l2_loss/L2Loss/packed' (op: 'Pack') with input shapes: [784,512], [512], [512,256], [256], [256,10], [10].
tf.nn.l2_loss takes a single Tensor as its argument, but you passed a list of Tensors to it. So the error message means that l2_loss cannot convert the list (of differently shaped tensors) into one Tensor.
We should calculate the L2 loss like this:
vars = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
list_l2_loss = []
for v in vars:
    list_l2_loss.append(tf.nn.l2_loss(v))
total_l2_loss = tf.add_n(list_l2_loss)
Additional information
You can add an n-D Tensor to a collection with tf.add_to_collection. But tf.GraphKeys.REGULARIZATION_LOSSES is the name reserved for regularizers created by tf.get_variable, so we should use our own name, such as "MyRegularizers".
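A minimal sketch of that advice, using a hypothetical collection name 'my_regularizers' (any name not reserved by tf.GraphKeys works):
import tensorflow as tf

W = tf.Variable(tf.truncated_normal([784, 512], stddev=0.1), name='W')
b = tf.Variable(tf.ones([512]) / 10, name='b')

# use a custom collection name instead of the reserved
# tf.GraphKeys.REGULARIZATION_LOSSES key
tf.add_to_collection('my_regularizers', W)
tf.add_to_collection('my_regularizers', b)

vars = tf.get_collection('my_regularizers')
total_l2_loss = tf.add_n([tf.nn.l2_loss(v) for v in vars])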

TensorFlow decode_csv shape error

I read in a *.csv file using tf.data.TextLineDataset and apply map on it:
dataset = tf.data.TextLineDataset(os.path.join(data_dir, subset, 'label.txt'))
dataset = dataset.map(lambda value: parse_record_fn(value, is_training),
                      num_parallel_calls=num_parallel_calls)
The parse function parse_record_fn looks like this:
def parse_record(raw_record, is_training):
    default_record = ["./", -1]
    filename, label = tf.decode_csv([raw_record], default_record)
    # do something
    return image, label
But a ValueError is raised at tf.decode_csv in the parse function:
ValueError: Shape must be rank 1 but is rank 0 for 'DecodeCSV' (op: 'DecodeCSV') with input shapes: [1], [], [].
My *.csv file example:
/data/1.png, 5
/data/2.png, 7
Question:
Where does it go wrong?
What does shapes: [1], [], [] mean?
Reproduce
This error can be reproduced in this code:
import tensorflow as tf
import os

def parse_record(raw_record, is_training):
    default_record = ["./", -1]
    filename, label = tf.decode_csv([raw_record], default_record)
    # do something
    return image, label

with tf.Session() as sess:
    csv_path = './labels.txt'
    dataset = tf.data.TextLineDataset(csv_path)
    dataset = dataset.map(lambda value: parse_record(value, True))
    sess.run(dataset)
Looking at the documentation of tf.decode_csv, it says about the default records:
record_defaults: A list of Tensor objects with specific types.
Acceptable types are float32, float64, int32, int64, string. One
tensor per column of the input record, with either a scalar default
value for that column or empty if the column is required.
I believe the error you are getting originates from how you define default_record. Your default_record certainly is a list of tensor objects (or objects convertible to tensors), but the error message is telling you that they should be rank-1 tensors, not rank-0 tensors as in your case.
You can fix the issue by making the default records rank 1 tensors. See the following toy example:
import tensorflow as tf

my_line = 'filename.png, 10'

default_record_1 = [['./'], [-1]]  # do this!
default_record_2 = ['./', -1]      # this is what you do now

decoded_1 = tf.decode_csv(my_line, default_record_1)

with tf.Session() as sess:
    d = sess.run(decoded_1)
    print(d)

# This will cause an error
decoded_2 = tf.decode_csv(my_line, default_record_2)
The error produced on the last line is familiar:
ValueError: Shape must be rank 1 but is rank 0 for 'DecodeCSV_1' (op:
'DecodeCSV') with input shapes: [], [], [].
In the message, the input shapes, the three brackets [], refer to the shapes of the input arguments records, record_defaults, and field_delim of tf.decode_csv. In your case the first of these shapes is [1] since you input [raw_record]. I agree that the message for this case is not very informative...
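Applied to the original parse function, the fix might look like the following sketch (the image-loading step remains a placeholder, as in the question):
def parse_record(raw_record, is_training):
    # rank-1 defaults: a single-element list per CSV column
    default_record = [['./'], [-1]]
    filename, label = tf.decode_csv([raw_record], default_record)
    # do something, e.g. load and decode the image behind `filename`
    return image, label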

How to slice a tensor with a None dimension in TensorFlow

I want to slice a tensor in "None" dimension.
For example,
tensor = tf.placeholder(tf.float32, shape=[None, None, 10], name="seq_holder")
sliced_tensor = tensor[:,1:,:] # it works well!
but
# Assume that tensor's shape will be [3,10, 10]
tensor = tf.placeholder(tf.float32, shape=[None, None, 10], name="seq_holder")
sliced_seq = tf.slice(tensor, [0,1,0], [3, 9, 10]) # it doesn't work!
I get the same message when I use another placeholder to feed the size parameter of tf.slice().
The second method gives me an "Input size (depth of inputs) must be accessible via shape inference" error message.
I'd like to know what the difference between the two methods is, and which is the more TensorFlow-ish way.
[Edited]
The whole code is below:
import tensorflow as tf
import numpy as np

print("Tensorflow for tests!")

vec_dim = 5
num_hidden = 10

# method 1
input_seq1 = np.random.random([3, 7, vec_dim])

# method 2
input_seq2 = np.random.random([5, 10, vec_dim])
shape_seq2 = [5, 9, vec_dim]

# seq: [batch, seq_len]
seq = tf.placeholder(tf.float32, shape=[None, None, vec_dim], name="seq_holder")

# Method 1
sliced_seq = seq[:, 1:, :]

# Method 2
seq_shape = tf.placeholder(tf.int32, shape=[3])
sliced_seq = tf.slice(seq, [0, 0, 0], seq_shape)

cell = tf.contrib.rnn.GRUCell(num_units=num_hidden)
init_state = cell.zero_state(tf.shape(seq)[0], tf.float32)
outputs, last_state = tf.nn.dynamic_rnn(cell, sliced_seq, initial_state=init_state)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # method 1
    # states = sess.run([sliced_seq], feed_dict={seq: input_seq1})
    # print(states[0].shape)
    # method 2
    states = sess.run([sliced_seq], feed_dict={seq: input_seq2, seq_shape: shape_seq2})
    print(states[0].shape)
Your problem is exactly described by issue #4590
The problem is that tf.nn.dynamic_rnn needs to know the size of the last dimension in the input (the "depth"). Unfortunately, as the issue points out, currently tf.slice cannot infer any output size if any of the slice ranges are not fully known at graph construction time; therefore, sliced_seq ends up having a shape (?, ?, ?).
In your case, the first issue is that you are using a placeholder of three elements to determine the size of the slice; this is not the best approach, since the last dimension should never change (even if you later pass vec_dim, it could cause errors). The easiest solution would be to turn seq_shape into a placeholder of size 2 (or even two separate placeholders), and then do the slicing like:
sliced_seq = seq[:seq_shape[0], :seq_shape[1], :]
For some reason, the NumPy-style indexing seems to have better shape inference capabilities, and this will preserve the size of the last dimension in sliced_seq.
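A minimal sketch of that suggestion (assuming, as above, that only the first two dimensions vary): seq_shape becomes a placeholder of size 2, and the NumPy-style slice preserves the last dimension:
import numpy as np
import tensorflow as tf

vec_dim = 5
seq = tf.placeholder(tf.float32, shape=[None, None, vec_dim], name="seq_holder")

# only batch size and sequence length are fed; the depth stays vec_dim,
# so shape inference keeps the last dimension known
seq_shape = tf.placeholder(tf.int32, shape=[2])
sliced_seq = seq[:seq_shape[0], :seq_shape[1], :]

with tf.Session() as sess:
    out = sess.run(sliced_seq,
                   feed_dict={seq: np.random.random([5, 10, vec_dim]),
                              seq_shape: [5, 9]})
    print(out.shape)  # (5, 9, 5)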

How to prevent TensorFlow's unpack() method from casting to float64

I'm trying to set up a sequential RNN in TensorFlow with seq2seq.rnn_decoder(). The input that rnn_decoder() wants is a list of tensors, so to generate this I've passed in a rank-3 tensor and used tf.unpack() to make it into a list. The problem arises when the float32 array that I pass in is turned into a float64 tensor by tf.unpack(), making it incompatible with the rest of the model. Here's the code I put together to convince myself that the culprit is tf.unpack():
inputDat = loader.getSequential(BATCH_SIZE)
print(inputDat.shape)
output (BATCH_SIZE is five, sequence length is ten):
(10, 5, 3)
Then I can load this data in a Tensorflow session:
sess = tf.InteractiveSession()
input_tensor = tf.constant(inputDat.astype('float32'), dtype=tf.float32)
print("Input tensor type: " + str(type(input_tensor.eval()[0,0,0])))
input_tensor = tf.unpack(inputDat)
print("Input tensor shape: " + str(len(input_tensor)) + "x" + str(input_tensor[0].eval().shape))
print("Input tensor type: " + str(type(input_tensor[0].eval()[0,0])))
Output:
Input tensor type: <type 'numpy.float32'>
Input tensor shape: 10x(5, 3)
Input tensor type: <type 'numpy.float64'>
What's going on here? Using a for loop to iterate through each of the sequential entries and re-cast it seems like the wrong way to do this, and I can't find a method inside TensorFlow to cast every member of a list.
You don't need a for-loop: you can use tf.cast().
Example:
input_tensor = tf.unpack(inputDat) # result is 64-bit
input_tensor = tf.cast(input_tensor, tf.float32) # now it's 32-bit
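One caveat, as a hedged note: applying tf.cast to a whole list packs it back into a single tensor, so if rnn_decoder() needs a list, either cast element-wise or unpack the float32 constant instead of the raw NumPy array (NumPy arrays default to float64, which is where the float64 values come from):
# option 1: cast element-wise, preserving the list structure
input_list = [tf.cast(t, tf.float32) for t in tf.unpack(inputDat)]

# option 2: unpack the float32 tensor rather than the float64 NumPy array
input_tensor = tf.constant(inputDat.astype('float32'), dtype=tf.float32)
input_list = tf.unpack(input_tensor)  # each element is already float32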
