I'm trying to set up a neural network using Keras in Python.
I get this error when trying to predict with my neural network:
ValueError: Error when checking : expected input_1 to have shape (12,) but got array with shape (1,)
However, if I print(x.shape), it returns (12,).
This is the code block:
def predict(str):
    y = convert(str)
    x = data = np.array(y, dtype='int64')
    with graph.as_default():
        print(x.shape)
        # perform the prediction
        out = model.predict(x)
        print(out)
        print(np.argmax(out, axis=1))
        print("debug3")
        # convert the response to a string
        response = np.array_str(np.argmax(out, axis=1))
    return response
Keras hides the batch dimension in its reported shapes, so the expected input shape is really (samples, 12): each sample has 12 features. When you pass an array of shape (12,), Keras treats it as 12 samples with one feature each, which is why it reports shape (1,).
Either your data is a single sample and you need to add a batch dimension to make it a 2D array of shape (1, 12), or you change your model to input_shape=(1,).
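If the first case applies (x really is one 12-feature sample), a minimal sketch of adding the batch dimension before calling predict (assuming the model and convert from the question) could be:
import numpy as np
x = np.array(y, dtype='int64')  # y comes from convert(...) and has shape (12,)
x = np.expand_dims(x, axis=0)   # shape (1, 12): one sample with 12 features
out = model.predict(x)          # one row of predictions for the single sample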
I am using tf 1.14, and following the official guide to pass my own sparse tensor that has a dense shape of 2000 x 2000 as input to a model, like so:
input = layers.Input((2000,), sparse=True)
print(input.get_shape().as_list())
output = some_layers(input)
model = models.Model(inputs=input, outputs=output)
However when I print the input shape, it returns [None, None], and I get the error:
ValueError: Error when checking input: expected input_10 to have 2 dimensions, but got array with shape (None, None, None)
If I then specify a batch size, input = layers.Input((2000,), batch_size=1, sparse=True), the shape is (1, 2000) and it runs without error.
And if I specify a 2D input, input = layers.Input((2000, 2000), batch_size=1, sparse=True), I get a shape of (1, 2000, 2000), and again it runs without error.
Which approach is correct? Ultimately, I want to use the sparse tensor as an adjacency matrix for a GCN layer. Therefore I want it without the batch dimension.
I followed the code examples for structured data classification at keras.io to build a model for classifying a rather simple dataset, similar to the one in the example.
I wanted to extend the model to handle a second output, but I cannot train this model. The dataset is generated as in the example (but with two result columns):
res1 = dataframe.pop("result1")
res2 = dataframe.pop("result2")
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe),(res1,res2)))
The model is also similar to the example but using a two-dimensional output:
x = layers.Dense(32, activation="relu")(all_features)
x = layers.Dropout(0.5)(x)
output = layers.Dense(2, activation="sigmoid")(x)
model = keras.Model(all_inputs, output)
model.compile("adam", "binary_crossentropy", metrics=["accuracy"])
It compiles, but when I try to run fit...
model.fit(train_ds,epochs=30)
I get an error message:
ValueError: logits and labels must have the same shape ((None, 2) vs (None, 1))
How can I prepare the dataset to meet the shape constraints?
I believe you should use the zip() function:
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), list(zip(res1, res2))))
This way, from_tensor_slices() sees a single label array of shape (N, 2) rather than a tuple of two separate label vectors, so the targets match the model's (None, 2) output.
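An equivalent, more explicit way to build the (N, 2) label array (a sketch assuming res1 and res2 are the pandas Series popped from the dataframe, as in the question):
import numpy as np
import tensorflow as tf
labels = np.stack([res1.to_numpy(), res2.to_numpy()], axis=1)  # shape (N, 2)
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))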
I have a Keras LSTM model that contains multiple outputs.
The model is defined as follows:
outputs = []
main_input = Input(shape=(seq_length, feature_cnt), name='main_input')
lstm = LSTM(32, return_sequences=True)(main_input)
for _ in range(output_branches):  # output_branches is the number of output branches of the model
    prediction = LSTM(8, return_sequences=False)(lstm)
    out = Dense(1)(prediction)
    outputs.append(out)
model = Model(inputs=main_input, outputs=outputs)
model.compile(optimizer='rmsprop', loss='mse')
I have a problem when reshaping the output data.
The code for reshaping the output data is:
y=y.reshape((len(y),output_branches,1))
I got the following error:
ValueError: Error when checking model target: the list of Numpy arrays
that you are passing to your model is not the size the model expected.
Expected to see 5 array(s), but instead got the following list of 1
arrays: [array([[[0.29670931],
[0.16652206],
[0.25114482],
[0.36952324],
[0.09429612]],
[[0.16652206],
[0.25114482],
[0.36952324],
[0.09429612],...
How can I correctly reshape the output data?
It depends on how y is structured initially. Here I assume that y holds a single-valued label for each sequence in the batch.
When a model has multiple inputs/outputs, model.fit() expects a corresponding list of input/output arrays. np.split(y, output_branches, axis=-1) in the following fully reproducible example does exactly this: it splits the single (batch, output_branches) label array into a list of separate outputs, where each output (in this case) is a (batch, 1) array:
import tensorflow as tf
import numpy as np
tf.enable_eager_execution()
batch_size = 100
seq_length = 10
feature_cnt = 5
output_branches = 3
# Say we've got:
# - 100-element batch
# - of 10-element sequences
# - where each element of a sequence is a vector describing 5 features.
X = np.random.random_sample([batch_size, seq_length, feature_cnt])
# Every sequence of a batch is labelled with `output_branches` labels.
y = np.random.random_sample([batch_size, output_branches])
# Here y.shape == (100, 3)
# Here we split the last axis of y (output_branches) into `output_branches` separate lists.
y = np.split(y, output_branches, axis=-1)
# Here y is not a numpy matrix anymore, but a list of matrices.
# E.g. y[0].shape == (100, 1); y[1].shape == (100, 1) etc...
outputs = []
main_input = tf.keras.layers.Input(shape=(seq_length, feature_cnt), name='main_input')
lstm = tf.keras.layers.LSTM(32, return_sequences=True)(main_input)
for _ in range(output_branches):
    prediction = tf.keras.layers.LSTM(8, return_sequences=False)(lstm)
    out = tf.keras.layers.Dense(1)(prediction)
    outputs.append(out)
model = tf.keras.models.Model(inputs=main_input, outputs=outputs)
model.compile(optimizer='rmsprop', loss='mse')
model.fit(X, y)
You might need to play around with the axes, as you didn't specify exactly what your data looks like.
EDIT:
As the author is looking for an answer drawing from official sources, it's mentioned here (not explicitly, though; it only describes what the Dataset should yield, and hence what kind of input structure model.fit() expects):
When calling fit with a Dataset object, it should yield either a tuple of lists like ([title_data, body_data, tags_data], [priority_targets, dept_targets]) or a tuple of dictionaries like ({'title': title_data, 'body': body_data, 'tags': tags_data}, {'priority': priority_targets, 'department': dept_targets}).
Since you have output_branches outputs, your output data must be a list with the same number of arrays.
Basically, if the output values sit along the middle dimension, as your reshape suggests:
y = [y[:, i] for i in range(output_branches)]
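As a quick sanity check, a sketch assuming y was reshaped to (len(y), output_branches, 1) as in the question, with the 5 branches mentioned in the error message:
import numpy as np
output_branches = 5
y = np.random.random_sample((100, output_branches, 1))
y_list = [y[:, i] for i in range(output_branches)]
print(len(y_list), y_list[0].shape)  # 5 (100, 1): one (N, 1) array per output head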
I'm new to Keras and am trying to test out a model I've just trained.
I'm using Tensorflow backend and Python 3.
However, the shape my input has and the shape Keras says it has in an error are completely different. Here's my code:
testnote = np.zeros((3,))
testnote[0] = 70
testnote[1] = 70
print(testnote.shape)
pred = model.predict(testnote)
print(pred)
My consistent output is "(3,)" for the shape of testnote and then an error for my predict line: "ValueError: Error when checking input: expected dense_1_input to have shape (3,) but got array with shape (1,)"
How is it that Keras reads testnote as having shape (1,) when I've just confirmed that the shape is (3,)? Is it using some sort of different standard for what "shape" means? I've tried reshaping and adding brackets and a bunch of other things, but I don't really know what the problem is.
For additional context, the model takes in an array with 3 scalar inputs (representing pitch, velocity, and instrument class) and outputs an array with 1025 scalar outputs. I am deliberately not using the word "dimension" since I think this is where I'm getting confused, and technically both are only 1-dimensional. I'm sure there are many problems with my model which I will have to fix after this. However, I'd like to just get this prediction function working so I can understand what my output looks like.
Thanks in advance for any help.
A Keras model implicitly expects your data (passed as a NumPy array) to have a batch dimension. Currently, your model is interpreting testnote as 3 examples of shape (1,). Try adding the batch dimension to testnote as follows:
testnote = testnote.reshape(1,-1)
This will reshape testnote to shape (1, 3), so that you explicitly define the batch size to be 1.
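For completeness, a sketch of how the prediction call would then look (assuming the trained model from the question, with 3 input features and 1025 outputs):
import numpy as np
testnote = np.zeros((3,))
testnote[0] = 70
testnote[1] = 70
batch = testnote.reshape(1, -1)  # shape (1, 3): one sample with 3 features
pred = model.predict(batch)      # one row of 1025 values for the single sample
print(pred.shape)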
I want to create a shallow network that would take a vector and pass it through a network.
I have a vector of size 6: vec = [0, 1, 4, 5, 1, 4]
My network:
vec_a = Input(shape=(6,))
x_1 = Convolution1D(nb_filter=10, filter_length=1, input_shape=(1, 6), activation='relu')(vec_a)
x_1 = Dense(16, activation='relu')(x_1)
But I keep getting:
ValueError: Input 0 is incompatible with layer conv1d_1: expected
ndim=3, found ndim=2
The shape of the training data to the fit function is:
(36400, 6)
You have to reshape the input data to have the correct input dimension, e.g.:
your_input_array.reshape(-1, 6, 1)
In addition, your input layer should look like:
vec_a = Input(shape=(6,1))
The reason is that the 1D in Conv1D refers to convolving over a sequence, and each position in that sequence can hold a vector of multiple values (channels). Your case has the same structure, except that the vector at each position has length 1, hence the trailing dimension of 1.
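Putting both changes together, a minimal sketch (using the current Conv1D argument names rather than the older Convolution1D ones; the random array stands in for the (36400, 6) training data from the question):
import numpy as np
from keras.layers import Input, Conv1D, Dense
from keras.models import Model
your_input_array = np.random.random((36400, 6))        # placeholder for the real training data
your_input_array = your_input_array.reshape(-1, 6, 1)  # (samples, steps, channels)
vec_a = Input(shape=(6, 1))
x_1 = Conv1D(10, 1, activation='relu')(vec_a)  # 10 filters, kernel size 1
x_1 = Dense(16, activation='relu')(x_1)        # Dense here acts on the last axis
model = Model(inputs=vec_a, outputs=x_1)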