Variables with dynamic shape in TensorFlow - python

I need to create a matrix in TensorFlow to store some values. The trick is that the matrix has to support a dynamic shape.
I am trying to do the same thing I would do in numpy:
myVar = tf.Variable(tf.zeros((x, y)), validate_shape=False)
where x=(?) and y=2. But this does not work because tf.zeros does not support a 'partially known TensorShape', so how should I do this in TensorFlow?

1) You could use tf.fill(dims, value=0.0) which works with dynamic shapes.
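For illustration, a minimal sketch (assuming the dynamic shape is taken from another tensor via tf.shape):
import tensorflow as tf

t = tf.placeholder(tf.float32, shape=[None, 2])  # first dimension unknown
# tf.fill accepts a shape tensor, so the dims can be dynamic
zeros = tf.fill(tf.shape(t), value=0.0)
with tf.Session() as sess:
    print(sess.run(zeros, feed_dict={t: [[1., 2.], [3., 4.]]}))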
2) You could use a placeholder for the variable dimension, like e.g.:
m = tf.placeholder(tf.int32, shape=[])
x = tf.zeros(shape=[m])
with tf.Session() as sess:
    print(sess.run(x, feed_dict={m: 5}))

If you only know the shape at session run time, this could help:
import tensorflow as tf
import numpy as np
v = tf.Variable([], validate_shape=False)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(v, feed_dict={v: np.zeros((3,4))}))
    print(sess.run(v, feed_dict={v: np.zeros((2,2))}))

Related

Custom layer needs to convert a tf.keras.Input tensor into a numpy ndarray - 'Tensor' object has no attribute 'numpy' error

I've written a tf.keras custom layer that uses some functions which only work with numpy arrays, so when I try to use my layer in a model with tf.keras.Input, the functions raise the error: input data must be a numpy ndarray.
tf.keras.backend.eval(x) and x.numpy() both result in the error 'Tensor' object has no attribute 'numpy', even though eager execution is enabled.
Using sess = tf.compat.v1.Session() and sess.run(x) gives: The Session graph is empty. Add operations to the graph before calling run()
Here's a sample model I'm testing with:
inputs = keras.Input(shape=(48,48,1))
x = keras.layers.Conv2D(64, (3,3), padding='same', activation='relu')(inputs)
# y = tf.keras.backend.eval(x)
# init_op = tf.compat.v1.global_variables_initializer()
# with tf.compat.v1.Session() as sess:
#     sess.run(init_op)
#     y = sess.run(x)
# y = x.numpy()
z = Mylayer.My_custom_layer()(y)
outputs = z
model = keras.Model(inputs=inputs, outputs=outputs)
The commented lines are what I've tested.
Is there any way I can convert this tensor input to a numpy array before it enters my custom layer?
I don't think you can use numpy functions when creating the computation graph in Keras or TensorFlow; you should use the equivalent built-in functions from tensorflow or keras instead. Can you post the code for your custom layer?
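For illustration, a minimal sketch with a hypothetical MyCustomLayer whose call method uses tf ops (tf.square, tf.clip_by_value) in place of their numpy counterparts:
import tensorflow as tf
from tensorflow import keras

class MyCustomLayer(keras.layers.Layer):
    # hypothetical layer: np.square/np.clip replaced by their tf equivalents
    def call(self, inputs):
        return tf.clip_by_value(tf.square(inputs), 0.0, 1.0)

inputs = keras.Input(shape=(48, 48, 1))
x = keras.layers.Conv2D(64, (3, 3), padding='same', activation='relu')(inputs)
z = MyCustomLayer()(x)
model = keras.Model(inputs=inputs, outputs=z)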

How to generate a static random constant in Tensorflow?

I want to generate a constant tensor in Tensorflow that is initialized with a specified mechanism, e.g., random_uniform or random_normal.
I know that I can generate a random numpy array according to these mechanisms (random_uniform, random_normal, etc.) and then feed the resulting numpy array as the value argument of tf.constant.
The problem is that the numpy versions of these random mechanisms require a concrete shape. I don't want to pre-specify the shape; I want it to be flexible, just as when we write TensorFlow code like shape = tf.shape(some_previous_tensor).
Way 1 I tried: there is no need to pre-specify the concrete shape of the constant in the graph construction phase. However, the generated tensor is re-sampled on every run rather than static, which is not what I expected.
var = tf.random.normal(
    [2,2], mean=0.0, stddev=0.5, dtype=tf.float32,
)
with tf.Session() as sess:
    print('var:', sess.run(var))
    print('var:', sess.run(var))
Output:
var: [[ 0.21260215 0.13721827]
[ 0.7704196 -0.48304045]]
var: [[-0.63397115 -0.0956466 ]
[ 0.0761982 0.54037064]]
Way 2 I tried: I can get a static constant, but it is necessary to give a concrete size to np.random.normal, which is not what I expected.
var_np = np.random.normal(0, 0.5, size=(2,2))
var = tf.constant(value=var_np)
with tf.Session() as sess:
    print('var:', sess.run(var))
    print('var:', sess.run(var))
Output:
var: [[-0.73357953 -0.10277695]
[ 0.57406473 0.32157612]]
var: [[-0.73357953 -0.10277695]
[ 0.57406473 0.32157612]]
You can use tf.Variable / tf.get_variable with trainable=False and validate_shape=False, and use a value that depends on a placeholder for the shape as the initial value. Then, when you initialize the variable (either through its initializer attribute or something more common like tf.global_variables_initializer), you just have to feed the shape for initialization. After initialization, the value of the variable is kept the same for the whole session, as long as it is not initialized again or assigned a different value.
import tensorflow as tf
shape = tf.placeholder(tf.int32, [None])
var_init = tf.random.normal(
    shape, mean=0.0, stddev=0.5, dtype=tf.float32,
)
var = tf.Variable(var_init, validate_shape=False, trainable=False, name='Var')
with tf.Session() as sess:
    tf.random.set_random_seed(0)
    sess.run(var.initializer, feed_dict={shape: [2, 3]})
    print('var:', sess.run(var), sep='\n')
    print('var:', sess.run(var), sep='\n')
Output:
var:
[[-0.4055751 0.7597851 -0.04810145]
[ 0.92776746 -0.3747548 -0.03715562]]
var:
[[-0.4055751 0.7597851 -0.04810145]
[ 0.92776746 -0.3747548 -0.03715562]]
Alternatively, just run tf.shape(t) for the tensor t whose shape you want your static random tensor to have. Feed the resulting value as the size argument to np.random.normal and you're all set.
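A minimal sketch of that idea, for illustration (assuming the shape comes from a placeholder t that is fed once up front):
import numpy as np
import tensorflow as tf

t = tf.placeholder(tf.float32, shape=[None, 3])
with tf.Session() as sess:
    # evaluate the dynamic shape once, using a concrete value for t
    concrete_shape = sess.run(tf.shape(t), feed_dict={t: np.zeros((2, 3))})

# bake a fixed random numpy array into the graph as a constant
var = tf.constant(np.random.normal(0, 0.5, size=concrete_shape), dtype=tf.float32)
with tf.Session() as sess:
    print(sess.run(var))  # same values on every run
    print(sess.run(var))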

Tensorflow: Can not convert a function into a Tensor or Operation

I have read through previous threads. My data are in the form of an array fed to a placeholder. Trying to convert the data to a tensor before feeding produces a different (inverse) error message. Other solutions similarly do not seem to work in this situation. Here is the minimal code.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from tensorflow.contrib.factorization import KMeans

X = tf.placeholder(tf.float32, shape=[None, 10], name="X")
data = np.random.randn(2,10)

def lump(X):
    # Build KMeans graph
    kmeans = KMeans(inputs=X, num_clusters=k, distance_metric='cosine',
                    use_mini_batch=True)
    (all_scores, cluster_idx, scores, cluster_centers_initialized,
     cluster_centers_var, init_op, train_op) = kmeans.training_graph()
    cluster_idx = cluster_idx[0]  # fix for cluster_idx being a tuple
    avg_distance = tf.reduce_mean(scores)
    return cluster_idx, scores

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    idx, d = sess.run(lump, feed_dict={X: data})
Correct, you can't evaluate just lump, because it's a function (returning tensors), not a tensor or an op. You probably meant to do something like this:
cluster_idx, scores = lump(X)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    idx, d = sess.run([cluster_idx, scores], feed_dict={X: data})
Note that lump() is invoked before tf.global_variables_initializer(), because it defines new variables in the graph, and those must be initialized too.
The code will still fail, because lump is clearly not finished (for one thing, k is never defined) and has issues with dimensions, but this is the right way to evaluate something in a session.

Tensorflow : Shape mismatch issue with one dimensional data

I am trying to pass x_data as feed_dict but I get the error below; I am not sure what is wrong in the code.
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'x_12' with dtype int32 and shape [1000]
[[Node: x_12 = Placeholder[dtype=DT_INT32, shape=[1000], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
My Code:
import tensorflow as tf
import numpy as np

model = tf.global_variables_initializer()

#define x and y
x = tf.placeholder(shape=[1000],dtype=tf.int32,name="x")
y = tf.Variable(5*x**2-3*x+15,name = "y")
x_data = tf.pack(np.random.randint(0,100,size=1000))
print(x_data)
print(x)
with tf.Session() as sess:
    sess.run(model)
    print(sess.run(y,feed_dict={x:x_data}))
I checked the shapes of x and x_data and they are the same:
Tensor("pack_8:0", shape=(1000,), dtype=int32)
Tensor("x_14:0", shape=(1000,), dtype=int32)
I am working with one dimensional data.
Any help is appreciated, Thanks!
To make it work I changed two things: first, I changed y to be a Tensor. And second, I did not convert x_data to a Tensor, as the documentation for feed_dict explains:
The optional feed_dict argument allows the caller to override the value of tensors in the graph. Each key in feed_dict can be one of the following types:
If the key is a Tensor, the value may be a Python scalar, string, list, or numpy ndarray that can be converted to the same dtype as that tensor. Additionally, if the key is a placeholder, the shape of the value will be checked for compatibility with the placeholder.
The changed code which works for me:
import tensorflow as tf
import numpy as np

model = tf.global_variables_initializer()

#define x and y
x = tf.placeholder(shape=[1000],dtype=tf.int32,name="x")
y = 5*x**2-3*x+15  # without tf.Variable, making it a tf.Tensor
x_data = np.random.randint(0,100,size=1000)  # without tf.pack
print(x_data)
print(x)
with tf.Session() as sess:
    sess.run(model)
    print(sess.run(y,feed_dict={x:x_data}))

Tensorflow slicing based on variable

I've found that indexing is still an open issue in tensorflow (#206), so I'm wondering what I could use as a workaround at the moment. I want to index/slice a row/column of a matrix based on a variable that changes for every training example.
What I've tried so far:
Slicing based on placeholder (doesn't work)
The following (working) code slices based on a fixed number.
import tensorflow as tf
import numpy as np
x = tf.placeholder("float")
y = tf.slice(x,[0],[1])
#initialize
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
#run
result = sess.run(y, feed_dict={x:[1,2,3,4,5]})
print(result)
However, it seems that I can't simply replace one of these fixed numbers with a tf.placeholder. The following code gives me the error "TypeError: List of Tensors when single Tensor expected."
import tensorflow as tf
import numpy as np
x = tf.placeholder("float")
i = tf.placeholder("int32")
y = tf.slice(x,[i],[1])
#initialize
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
#run
result = sess.run(y, feed_dict={x:[1,2,3,4,5],i:0})
print(result)
It sounds like the brackets around [i] are one too many, but removing them doesn't help either. How can I use a placeholder/variable as an index?
Slicing based on python variable (doesn't backprop/update properly)
I've also tried using a normal python variable as the index. This does not lead to an error, but the network doesn't learn anything while training. I suppose that because the changing variable is not registered in the graph, the graph is malformed and the updates don't work?
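For illustration, a minimal sketch of the pitfall (not the original code): the Python value is baked into the graph at construction time, so later changes never reach the graph.
import tensorflow as tf

idx = 0
x = tf.placeholder(tf.float32, shape=[None])
y = tf.slice(x, [idx], [1])  # the graph permanently slices at position 0

idx = 2  # reassigning the Python variable does NOT change the graph
sess = tf.Session()
print(sess.run(y, feed_dict={x: [1, 2, 3, 4, 5]}))  # still [1.0]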
Slicing via one-hot vector + multiplication (works, but is slow)
One workaround I found is using a one-hot vector: build a one-hot vector in numpy, pass it in through a placeholder, then do the slicing via matrix multiplication. This works, but is quite slow.
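For illustration, a minimal sketch of that workaround, selecting one row of a 3x5 matrix:
import numpy as np
import tensorflow as tf

m = tf.placeholder(tf.float32, shape=[3, 5])
one_hot = tf.placeholder(tf.float32, shape=[1, 3])
row = tf.matmul(one_hot, m)  # picks the row where one_hot is 1, shape [1, 5]

sess = tf.Session()
idx = 1
hot = np.zeros((1, 3), dtype=np.float32)
hot[0, idx] = 1.0
print(sess.run(row, feed_dict={m: np.arange(15.0).reshape(3, 5), one_hot: hot}))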
Any ideas how to efficiently slice/index based on a variable?
Slicing based on a placeholder should work just fine. It looks like you are running into a type error, due to some subtle issues of shapes and types. Where you have the following:
x = tf.placeholder("float")
i = tf.placeholder("int32")
y = tf.slice(x,[i],[1])
...you should instead have:
x = tf.placeholder("float")
i = tf.placeholder("int32")
y = tf.slice(x,i,[1])
...and then you should feed i as [0] in the call to sess.run().
To make this a little clearer, I would recommend rewriting the code as follows:
import tensorflow as tf
import numpy as np
x = tf.placeholder(tf.float32, shape=[None]) # 1-D tensor
i = tf.placeholder(tf.int32, shape=[1])
y = tf.slice(x, i, [1])
#initialize
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
#run
result = sess.run(y, feed_dict={x: [1, 2, 3, 4, 5], i: [0]})
print(result)
The additional shape arguments to the tf.placeholder op help to ensure that the values you feed have the appropriate shapes, and also that TensorFlow will raise an error if the shapes are not correct.
If you have an extra dimension, this works.
import tensorflow as tf
import numpy as np
def reorder0(e, i, length):
    '''
    e: a two dimensional tensor
    i: a one dimensional int32 tensor, of shape (e.shape[0])
    returns: a tensor of the same shape as e, where the jth entry is entry i[j] from e
    '''
    return tf.concat(
        [tf.expand_dims(e[i[j], :], axis=0) for j in range(length)],
        axis=0
    )

e = tf.placeholder(tf.float32, shape=(2,3,5), name='e')  # sentences, words, embedding
i = tf.placeholder(tf.int32, shape=(2,3), name='i')      # for each word, index of parent
p = tf.concat(
    [tf.expand_dims(reorder0(e[k, :, :], i[k, :], 3), axis=0) for k in range(2)],
    axis=0,
    name='p'
)
#initialize
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
#run
result = sess.run(p, feed_dict={
    e: [
        ((1.0,1.1,1.2,1.3,1.4), (2.0,2.1,2.2,2.3,2.4), (3.0,3.1,3.2,3.3,3.4)),
        ((21.0,21.1,21.2,21.3,21.4), (22.0,22.1,22.2,22.3,22.4), (23.0,23.1,23.2,23.3,23.4)),
    ],
    i: [(1,1,1), (2,0,2)]
})
print(result)
If the sizes are not known when building the model, use TensorArray.
e = tf.placeholder(tf.float32, shape=(3,5))  # words, embedding
i = tf.placeholder(tf.int32, shape=(3))      # for each word, index of parent
#p = reorder0(e, i, 3)
a = tf.TensorArray(
    tf.float32,
    size=e.get_shape()[0],
    dynamic_size=True,
    infer_shape=True,
    element_shape=e.get_shape()[1],
    clear_after_read=False
)
#initialize
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
#run
result = sess.run(
    a.unstack(e).gather(i),
    feed_dict={
        e: ((1.0,1.1,1.2,1.3,1.4), (2.0,2.1,2.2,2.3,2.4), (3.0,3.1,3.2,3.3,3.4)),
        i: (2,0,2)
    }
)
print(result)
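As an aside (not from the answers above): for plain index-based selection, tf.gather also accepts a tensor of indices, which avoids both the one-hot trick and the TensorArray. A minimal sketch:
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None])
i = tf.placeholder(tf.int32, shape=[1])
y = tf.gather(x, i)  # selects the elements of x at the given indices

sess = tf.Session()
print(sess.run(y, feed_dict={x: [1, 2, 3, 4, 5], i: [0]}))  # [1.0]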
