I'm new to Keras and am trying to test out a model I've just trained.
I'm using Tensorflow backend and Python 3.
However, the shape my input has and the shape Keras says it has in an error are completely different. Here's my code:
import numpy as np

testnote = np.zeros((3,))
testnote[0] = 70
testnote[1] = 70
print(testnote.shape)
pred = model.predict(testnote)
print(pred)
My consistent output is "(3,)" for the shape of testnote and then an error for my predict line: "ValueError: Error when checking input: expected dense_1_input to have shape (3,) but got array with shape (1,)"
How is it that Keras reads testnote as having shape (1,) when I've just confirmed that the shape is (3,)? Is it using some sort of different standard for what "shape" means? I've tried reshaping and adding brackets and a bunch of other things, but I don't really know what the problem is.
For additional context, the model takes in an array of 3 scalar inputs (representing pitch, velocity, and instrument class) and outputs an array of 1025 scalar outputs. I am carefully avoiding the word "dimension", since I think that is where I'm getting confused: technically both arrays have only 1 dimension. I'm sure there are many problems with my model which I will have to fix after this, but I'd first like to get this prediction call working so I can see what my output looks like.
Thanks in advance for any help.
A Keras Model implicitly expects that your data (passed as a np array) has a dimension for the batch size. Currently, your model is interpreting testnote as being 3 examples of shape 1. Try adding the batch dimension to 'testnote' as follows:
testnote = testnote.reshape(1,-1)
This will reshape testnote to shape (1, 3), so that you explicitly define the batch size to be 1.
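For reference, a minimal sketch of the fixed call (model is the trained model from the question; np.expand_dims is equivalent to the reshape above):

import numpy as np

testnote = np.zeros((3,))
testnote[0] = 70
testnote[1] = 70

# Add the batch dimension: shape (3,) becomes (1, 3), i.e. 1 example with 3 features
batch = np.expand_dims(testnote, axis=0)
print(batch.shape)  # (1, 3)

pred = model.predict(batch)  # per the question, pred then has shape (1, 1025)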
I keep getting an error that my input should have 3 dimensions, but it has 2, and I don't know how to reshuffle it to make it work. I've checked similar questions, but here I'll lay out my specific problem.
My dataset is a series of .wav audio files, each of which I have a path to and have already matched with its corresponding word and MFCC.
I have 75859 arrays, each consisting of 99 lists, each of which holds 13 values.
Here's my x_train:
x_train = x_train.reshape(x_train.shape[0], coeff, time_step)
len(x_train[1]) = 99
len(x_train[1][0]) = 13
x_train[1][0][0] = a single number i.e. 0.10
x_train.shape[0] = 75859
(I do trust my Conv1D model and so far I have no suspicions about it)
Here's the error I get:
ValueError: Error when checking input: expected conv1d_61_input to have 3 dimensions, but got array with shape (18965, 1)
The input_shape parameter of the first layer of your neural network needs to match the shape of a single input sample. Set it to input_shape=x_train.shape[1:]. If that doesn't work, update your post with your entire model architecture.
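As an illustration only (the question does not show the actual architecture, and the filter count, kernel size, and 10-class output head below are placeholders), a minimal Conv1D model whose first layer derives input_shape from the training data:

from keras.models import Sequential
from keras.layers import Conv1D, GlobalMaxPooling1D, Dense

# x_train is assumed to have shape (75859, 99, 13) after the reshape above
model = Sequential()
model.add(Conv1D(32, kernel_size=3, activation="relu",
                 input_shape=x_train.shape[1:]))  # (99, 13): batch dimension excluded
model.add(GlobalMaxPooling1D())
model.add(Dense(10, activation="softmax"))  # placeholder head; size it to your classes
model.compile(optimizer="adam", loss="categorical_crossentropy")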
I'm playing a little with deep learning and Keras has been my choice due to its simplicity.
I've built a simple multilayer perceptron model for binary classification and fitted it on input data (the same that I'm using for other ML models and which are working ok).
The following picture displays the model summary:
The first dense layer was defined as such:
model.add(Dense(18, input_dim=len(X_encoded.columns), activation="relu", kernel_initializer="uniform"))
When I attempt to predict over a loop like so:
for vals in X_encoded.values:
    print("Survives?", model.predict([vals], batch_size=1))
I get the following error:
ValueError: Error when checking input: expected dense_90_input to have shape (35,) but got array with shape (1,)
These are my variable sizes:
print("Shape of vals:", vals.shape, "Number of Columns and First Layer Dimension:", len(X_encoded.columns))
Result:
Shape of vals: (35,) Number of Columns and First Layer Dimension: 35
As you can see, these match in size which is the expected input.
What is going on? When I pass the entire dataframe to "predict" it works correctly, but not when I pass a single row...
You need an array, not a list. You only use a list for multiple input tensors.
model.predict(np.array([vals]), batch_size=1)
But why not:
model.predict(X_encoded.values, batch_size=1)
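Put together, a sketch of the corrected loop, plus the vectorized alternative (X_encoded as in the question):

import numpy as np

# Row-by-row: wrap each row in an array so it carries a batch dimension of 1
for vals in X_encoded.values:
    print("Survives?", model.predict(np.array([vals]), batch_size=1))

# Usually preferable: one call over the whole dataset, which Keras batches internally
preds = model.predict(X_encoded.values)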
I have an implementation of a many-to-one RNN with variable sequence length (a sentence classification problem).
I am trying to implement a sampled softmax loss since I have 500 classes and want to speed up the training.
These are the shapes of my input parameters:
WLast.shape
TensorShape([Dimension(500), Dimension(500)])
bLast.shape
TensorShape([Dimension(500)])
labels.shape
TensorShape([Dimension(None), Dimension(500)])
pred_out.shape
TensorShape([Dimension(None), Dimension(500)])
pred_out is the last prediction from the RNN.
Problem is that when I run:
cost = tf.nn.sampled_softmax_loss(WLast, bLast, labels, pred_out, 10, 500)
it gives me this error:
InvalidArgumentError: Dimension must be 1 but is 500 for 'sampled_softmax_loss/ComputeAccidentalHits' (op: 'ComputeAccidentalHits') with input shapes: [?,500], [10].
I don't understand: the shapes match the arguments of the function. Does someone know what I could be doing wrong?
Thanks in advance!
I found this implementation: https://github.com/olirice/sampled_softmax_loss
and solved the problem by reshaping the labels: sampled_softmax_loss expects class indices of shape [batch_size, num_true] rather than one-hot rows, so the one-hot labels are collapsed to indices first
labels = tf.reshape(tf.argmax(labels, 1), [-1,1])
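A sketch of how that fix slots into the cost from the question (TF 1.x API; shapes as listed above):

import tensorflow as tf

# sampled_softmax_loss wants class indices of shape [batch_size, num_true],
# so collapse the one-hot labels to indices first
sparse_labels = tf.reshape(tf.argmax(labels, axis=1), [-1, 1])

cost = tf.reduce_mean(
    tf.nn.sampled_softmax_loss(weights=WLast,        # [500, 500]
                               biases=bLast,         # [500]
                               labels=sparse_labels, # [batch, 1]
                               inputs=pred_out,      # [batch, 500]
                               num_sampled=10,
                               num_classes=500))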
I tried to do some customized padding before feeding into a conv1D net, as follows.
x = tf.placeholder("float", [None, 50, 1])
padding = tf.constant([[0, 0], [5, 0], [0, 0]])  # paddings needs one [before, after] pair per dimension
y = tf.pad(x, padding)
However, after the above manipulation, y is a tensor of shape (?, ?, ?), so when it is fed to tf.layers.conv1d I get the error "The channel dimension of the inputs should be defined. Found 'None'".
My question is: why does the result of pad have a None shape? It should not be hard to calculate; my guess is that it is only computed at run time, but that is not convenient, right? And can I use reshape before passing to conv1d?
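For what it's worth, a workaround sketch (assuming TF 1.x): the padded length is computable by hand (50 + 5 = 55), so the static shape can be re-attached before conv1d, either with set_shape or with a reshape as the question suggests:

import tensorflow as tf

x = tf.placeholder("float", [None, 50, 1])
padding = tf.constant([[0, 0], [5, 0], [0, 0]])
y = tf.pad(x, padding)

# Re-attach the statically known shape: padding dim 1 by 5 gives 50 + 5 = 55
y.set_shape([None, 55, 1])
# Alternatively: y = tf.reshape(y, [-1, 55, 1])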
I created a CNN with Python and Keras which compresses 2D input of various lengths into a single output. All images have a height of 80 pixels but different lengths, e.g. shape (80, length_of_image_i, 2), where 2 is the number of color channels.
I have 5000 images; the shape of the training data array X in numpy is (5000, 1) and the array has dtype object, because content with different shapes cannot be stored in a single regular numpy array. Each object in the array has shape (80, length_of_image_i, 2).
With this said, when I call the model.fit(X,y) function of the sequential model, I get the following error:
ValueError: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (5000, 1)
Converting the numpy array to a Python list of numpy arrays also doesn't work:
AttributeError: 'list' object has no attribute 'ndim'
Zero padding or transformations of my data to get all of my images to the same shape is not an option.
My question now is: how can I call the model.fit(X, y) function when my data does not have a fixed shape?
Thank you in advance!
Edit: Note that I do not have a problem with the architecture of my network (since I am not using dense layers). My problem is that I cannot call the fit function, due to problems with the shape of the numpy array.
My model is a replicate of this network: http://machine-listening.eecs.qmul.ac.uk/wp-content/uploads/sites/26/2017/01/sparrow.pdf
You need to pass "numpy arrays" to fit, of type "float". That is the only possibility.
So, you will probably have to group batches of images with the same length, or train each sample individually:
for image, output in zip(images, outputs):
    model.train_on_batch(image.reshape((1, 80, -1, 2)),
                         output.reshape((1,) + output.shape))
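To avoid single-sample updates, a hedged sketch of the grouping idea (samples keyed by image length so each group stacks into one dense array; images and outputs as in the loop above):

from collections import defaultdict
import numpy as np

groups = defaultdict(list)
for image, output in zip(images, outputs):
    groups[image.shape[1]].append((image, output))  # key on the variable length

for length, samples in groups.items():
    batch_x = np.stack([img for img, _ in samples])  # (n, 80, length, 2)
    batch_y = np.stack([out for _, out in samples])
    model.train_on_batch(batch_x, batch_y)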