I am using tf 1.14, and following the official guide to pass my own sparse tensor that has a dense shape of 2000 x 2000 as input to a model, like so:
input = layers.Input((2000,), sparse=True)
print(input.get_shape().as_list())
output = some_layers(input)
model = models.Model(inputs=input, outputs=output)
However when I print the input shape, it returns [None, None], and I get the error:
ValueError: Error when checking input: expected input_10 to have 2 dimensions, but got array with shape (None, None, None)
If I then specify a batch size, input = layers.Input((2000,), batch_size=1, sparse=True), the shape is (1, 2000) and it runs without error.
And if I specify the input as 2D, like so: input = layers.Input((2000, 2000), batch_size=1, sparse=True), the shape is (1, 2000, 2000), and again it runs without error.
Which approach is correct? Ultimately, I want to use the sparse tensor as an adjacency matrix for a GCN layer. Therefore I want it without the batch dimension.
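For reference, here is a consolidated sketch of the three variants described above (assuming the tf 1.14 tf.keras API; the printed shapes are the ones reported in my tests):

from tensorflow.keras import layers

# Variant 1: feature shape only -- a sparse input reports [None, None]
a = layers.Input((2000,), sparse=True)
print(a.get_shape().as_list())   # [None, None]

# Variant 2: feature shape plus an explicit batch size -- reports [1, 2000]
b = layers.Input((2000,), batch_size=1, sparse=True)
print(b.get_shape().as_list())   # [1, 2000]

# Variant 3: the full 2000 x 2000 shape plus a batch size -- reports [1, 2000, 2000]
c = layers.Input((2000, 2000), batch_size=1, sparse=True)
print(c.get_shape().as_list())   # [1, 2000, 2000]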
When passing the output of my embedding layer to the LSTM layer I'm running into a ValueError that I cannot figure out. My model is:
def lstm_mod(self, n_cells, batch_size):
    input = tf.keras.Input((self.n_seq, self.n_features))
    embedding = tf.keras.layers.Embedding(batch_size, self.n_seq, input_length=self.n_clusters)(input)
    x = tf.keras.layers.LSTM(n_cells)(embedding)
    out = tf.keras.layers.Dense(1)(x)
    model = tf.keras.Model(input, out, name="LSTM")
    model.compile(loss='mse', optimizer='Adam')
    return model
The error is:
ValueError: Input 0 of layer lstm is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: [None, 128, 7, 128]
Given that the dimensions passed to the model input and the embedding layer are consistent with the arguments of the model, I'm puzzled by this. Any guidance is appreciated.
Keras adds an additional dimension (None) when you feed your data through your model because it processes your data in batches.
In this line :
input = tf.keras.Input((self.n_seq, self.n_features))
You've defined a 2-dimensional input, and Keras adds a 3rd dimension (the batch), hence expected ndim=3.
However, the tensor that actually reaches the LSTM is 4-dimensional: the Embedding layer appends an embedding axis to every element of its input, so your 3-dimensional (batch, n_seq, n_features) tensor becomes (batch, n_seq, n_features, embedding_dim), i.e. [None, 128, 7, 128].
To fix this you need to either feed the Embedding layer a 2-D (batch, sequence) tensor of integer indices, or drop/reshape the extra dimension before the LSTM so that it receives 3-D data.
Print out the values of self.n_seq and self.n_features and compare them with the shape [None, 128, 7, 128]; that should show you which dimension is the extra one.
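As a concrete illustration, here is a minimal sketch (the values n_seq = 128 and n_features = 7 are inferred from the error message; the vocabulary size, embedding size and n_cells are arbitrary placeholders):

import tensorflow as tf

n_seq, n_features, n_cells = 128, 7, 64

# Current setup: a 3-D input (batch, n_seq, n_features) goes through Embedding,
# which appends an embedding axis, so the LSTM would see 4-D data and fail.
inp_3d = tf.keras.Input((n_seq, n_features))
emb_4d = tf.keras.layers.Embedding(input_dim=1000, output_dim=128)(inp_3d)
print(emb_4d.shape)   # (None, 128, 7, 128) -- the ndim=4 shape from the error

# One possible fix: feed integer token indices of shape (n_seq,) instead, so the
# Embedding output is the 3-D (batch, n_seq, embedding_dim) the LSTM expects.
inp_2d = tf.keras.Input((n_seq,))
emb_3d = tf.keras.layers.Embedding(input_dim=1000, output_dim=128)(inp_2d)
x = tf.keras.layers.LSTM(n_cells)(emb_3d)
print(x.shape)        # (None, 64)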
I have a simple network:
input_layer = Input(1)
inner_layer = Dense(4, activation='relu')(input_layer)
output_layer = Dense(1, activation='linear')(inner_layer)
model = Model(input_layer, output_layer)
optimizer = Adam(learning_rate=0.01)
model.compile(optimizer=optimizer, loss='mse')
Intuitively, inference for input 0 would simply be model.predict(0). However, this raises the error: expected input_2 to have 2 dimensions, but got array with shape ()
I understand it expects the input (which is a single number) to be two-dimensional, but I don't understand what Tensorflow accepts as valid input. I tried many different combinations of inputs; some work and some don't, it seems quite inconsistent, and the warnings/errors are usually not helpful:
When calling model.predict():
model.predict(0) - Throws
model.predict([0]) - Works
model.predict([[0]]) - Works
When calling model() (I saw here that's needed to get the gradients):
model(0) - Throws
model([0]) - Throws
model([[0]]) - Throws
When using np.reshape:
model(np.reshape(0,[1,1])) - Works
model(np.reshape([0],[1,1])) - Works
model(np.reshape([[0]],[1,1])) - Works
What seems to work consistently is numpy's reshape function: it works both for model.predict() and for model() on all the inputs above, as long as they are reshaped to a [1,1] shape.
My questions:
What are the guidelines to feeding inputs into tensorflow models in regards to inputs shapes/types?
What does "shape ()" mean?
What does "(None, 1)" mean?
Why does reshape work but [[0]] does not? Both create a 2-dimensional collection.
Why when calling model(0)/model([0])/model([[0]]) does this warning show: WARNING:tensorflow:Model was constructed with shape Tensor("input_1:0", shape=(None, 1), dtype=float32) for input (None, 1), but it was re-called on a Tensor with incompatible shape ()?
The shape of the tensor inputs = tf.keras.layers.Input(1) is (None, 1) (run inputs.get_shape().as_list()). The None means any size that is determined dynamically (batch size). The 1 is the shape of your data point. For example, this is a tensor of shape (3, 1):
[[1], [2], [1]]
This is a tensor of shape (3,)
[1, 2, 1]
If you define a tensor of shape (None, 1), you must feed it data of the same shape.
The [[0]] has the correct shape, (1, 1), and won't throw any error or warning if you pass it as a numpy array of the expected data type:
import tensorflow as tf
import numpy as np
input_layer = tf.keras.layers.Input(1)
inner_layer = tf.keras.layers.Dense(4, activation='relu')(input_layer)
output_layer = tf.keras.layers.Dense(1, activation='linear')(inner_layer)
model = tf.keras.models.Model(input_layer, output_layer)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
model.compile(optimizer=optimizer, loss='mse')
print(model(np.array([[0.]], dtype=np.float32)).numpy()) # [[0.]]
print(model.predict(np.array([[0.], [1]], dtype=np.float32))) # [[0. ]
# [0.08964952]]
np.reshape() works because it automatically converts your list to a numpy array. For more about np.reshape, refer to the official documentation.
model.predict() also expects the same shape as model.__call__(), but it can perform automatic reshaping (it expands the dimension on the left, i.e. [1] -> [[1]]).
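For example (a quick sketch, reusing the model defined above):

# model.predict() tolerates a 1-D batch and expands it to (1, 1) internally,
# whereas calling the model directly is strict about the declared (None, 1) shape.
print(model.predict(np.array([0.], dtype=np.float32)))     # works: treated as shape (1, 1)
print(model(np.array([[0.]], dtype=np.float32)).numpy())   # works: already shape (1, 1)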
I have a Keras model whose last layer has output shape (None, 574, 6), where None is the batch size fed into the model.
I also have a 2D numpy array called anchors with shape (574,6).
What I want is to subtract that numpy array element-wise from the model output for each sample.
import keras.backend as K
anchor_tensor = K.cast(anchors, tf.float32)
print(K.int_shape(anchor_tensor))
#(576, 4)
print(K.int_shape(y_pred))
#(None, 574, 6)
y_pred - anchor_tensor
The above code raises the following error because the batch size is unknown.
InvalidArgumentError: Dimensions must be equal, but are 574 and 576
for 'sub_6' (op: 'Sub') with input shapes: [?,574,6], [576,4].
How can I repeat anchor_tensor None (batch-size) times so that its shape matches y_pred?
TensorFlow will readily do what it calls "broadcasting", which is automatically repeating the missing elements when possible. But for this to happen, it must first confirm that the shapes allow it.
The safest way to ensure the shapes are compatible is to give them the same number of dimensions, with value 1 in the dimension you want repeated.
So, it's as simple as:
anchor_tensor = K.expand_dims(anchor_tensor, axis=0) #new shape is (1, 576, 4)
result = y_pred - anchor_tensor
Now Tensorflow can match the shapes and will repeat the tensor for the entire batch size.
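A minimal sketch of the idea with concrete values (the anchors here are just a zero-filled stand-in, and the batch of predictions is random data):

import numpy as np
import keras.backend as K
import tensorflow as tf

anchors = np.zeros((574, 6), dtype=np.float32)      # stand-in for the real anchors
y_pred = K.constant(np.random.rand(3, 574, 6))      # pretend batch of 3 predictions

anchor_tensor = K.expand_dims(K.cast(anchors, tf.float32), axis=0)   # shape (1, 574, 6)
result = y_pred - anchor_tensor                     # broadcasts across the batch
print(K.int_shape(result))                          # (3, 574, 6)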
I want to create a shallow network that takes a vector and passes it through the network.
I have a vector that is of size 6: vec = [0, 1, 4, 5, 1, 4]
My network:
vec_a = Input(shape=(6,))
x_1 = Convolution1D(nb_filter=10, filter_length=1, input_shape=(1, 6), activation='relu')(vec_a)
x_1 = Dense(16, activation='relu')(x_1)
But I keep getting:
ValueError: Input 0 is incompatible with layer conv1d_1: expected ndim=3, found ndim=2
The shape of the training data to the fit function is:
(36400, 6)
You have to reshape the input data to have the correct input dimension, e.g.:
your_input_array.reshape(-1, 6, 1)
In addition your input layer should look like:
vec_a = Input(shape=(6,1))
The reason is that the 1D in Conv1D refers to operating over a sequence, and that sequence can carry a vector of multiple values at each position. In your case it is the same, except that you have "only" a vector of length 1 at each position, hence the trailing dimension of 1.
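Put together, a minimal sketch of the fix (using the Keras 2 filters/kernel_size argument names in place of nb_filter/filter_length, and random data in place of the real training set):

import numpy as np
from keras.layers import Input, Dense, Convolution1D
from keras.models import Model

vec_a = Input(shape=(6, 1))                           # a sequence of 6 steps, 1 value per step
x_1 = Convolution1D(filters=10, kernel_size=1, activation='relu')(vec_a)
x_1 = Dense(16, activation='relu')(x_1)
model = Model(vec_a, x_1)

x_train = np.random.rand(36400, 6).reshape(-1, 6, 1)  # reshape the (36400, 6) data to (36400, 6, 1)
print(model.predict(x_train[:2]).shape)               # (2, 6, 16) -- builds and runs without error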
I'm trying to set up a neural network using Keras in Python.
I get this error when trying to predict with my neural network:
ValueError: Error when checking : expected input_1 to have shape (12,) but got array with shape (1,)
However, if I print(x.shape), it returns (12,).
This is the code block:
def predict(str):
    y = convert(str)
    x = data = np.array(y, dtype='int64')
    with graph.as_default():
        print(x.shape)
        # perform the prediction
        out = model.predict(x)
        print(out)
        print(np.argmax(out, axis=1))
        print("debug3")
        # convert the response to a string
        response = np.array_str(np.argmax(out, axis=1))
        return response
Keras model input shapes leave out the batch size, so the expected input is really (samples, 12): each sample has 12 features. In your case, Keras interprets your (12,) array as 12 samples with one feature each; hence it sees shape (1,) per sample.
Either your data is a single data point, in which case you need to create a 2-D array of shape (1, 12), or you should change your model to input_shape=(1,).
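For example, if x really is a single 12-feature sample, reshape it into a batch of one before calling predict (a sketch; model and convert() are as in the question):

x = np.array(y, dtype='int64').reshape(1, -1)   # (12,) -> (1, 12): a batch of one sample
out = model.predict(x)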