self.model = Sequential()
self.model.add(Dense(units=20, input_dim=9))
self.model.add(Activation('relu'))
self.model.add(Dense(units=len(labels)))
self.model.add(Activation('softmax'))
self.model.compile(optimizer='sgd',  # rmsprop
                   loss='categorical_crossentropy',
                   metrics=['accuracy'])
x = np.array([[0] * 9])
print('x {} {}'.format(x.shape, x))
a = self.model.predict(x)
That gives:
TypeError: only integer scalar arrays can be converted to a scalar index
It does not make sense to me at all. If I instead use
x = np.array([0] * 9)
I get:
ValueError: Error when checking : expected dense_1_input to have shape (None, 9) but got array with shape (9, 1)
Please help.
The error message is a bit confusing in my opinion but the solution is simple. If your input array has the shape (x, y, z), the expected input shape for predict is (n, x, y, z) where n is the number of samples (1 in your case).
Just use
self.model.predict(x.reshape((1, ) + x.shape))
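As a minimal sketch of the idea (assuming the 9-feature model above; the variable names are just for illustration):
import numpy as np

x = np.array([0] * 9)              # shape (9,): the features of a single sample
batch = x.reshape((1,) + x.shape)  # shape (1, 9): a batch containing that one sample
# self.model.predict(batch) then returns an array of shape (1, len(labels))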
I'm stuck converting a list to a NumPy array. The list's size is (33, n, 428), where n varies and I don't know how the lengths are determined. Here is the error.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
C:\Users\HILAB_~1\AppData\Local\Temp/ipykernel_22960/872733971.py in <module>
----> 1 X_train = np.array(X_train, dtype=np.float64)
2
3 for epoch in range(EPOCH):
4 X_train_ten, y_train_ten = Variable(torch.from_numpy(X_train)), Variable(torch.tensor(y_train, dtype=torch.float32, requires_grad=True))
5 print(X_train_ten.size())
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (33,) + inhomogeneous part.
and the problematic code is here:
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=42, shuffle=True
)
print("[SIZE]\t\tTrain X size : {}, Train y size : {}\n\t\tTest X size : {}, Test y size : {}"\
.format(len(X_train), len(y_train), len(X_test), len(y_test)))
train_dataloader = DataLoader(X_train)
test_dataloader = DataLoader(X_test)
X_train = np.array(X_train, dtype=np.float64)
I can't understand what the error means. Please help. Thanks :D
It means that whatever sequences X contains, they are not all of the same length. You can check {len(e) for e in X}; this is the set of all different lengths found in X.
Consider the following example:
>>> import numpy as np
>>> x = [[1, 2], [2, 3, 4]]
>>> np.array(x, dtype=np.float64)
[...]
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part.
Here, the list x contains two other lists, one of length 2 and the other of length 3. They can't be combined into one array since the "column" dimension doesn't match.
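Continuing the example, the set of lengths mentioned above makes the mismatch visible:
>>> {len(e) for e in x}
{2, 3}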
I have written a generator function with Keras. Before returning X, y from __getitem__ I double-checked the shapes of the X's and y's and they are fine, but the generator still produces dimension-mismatch errors and warnings.
(Colab Code to reproduce: https://colab.research.google.com/drive/1bSJm44MMDCWDU8IrG2GXKBvXNHCuY70G?usp=sharing)
My training and validation generators are pretty much the same as this:
class ValidGenerator(Sequence):
    def __init__(self, df, batch_size=64, num_classes=None, shuffle=True):
        self.batch_size = batch_size
        self.df = df
        self.indices = self.df.index.tolist()
        self.num_classes = num_classes
        self.shuffle = shuffle
        self.on_epoch_end()

    def __len__(self):
        return int(len(self.indices) // self.batch_size)

    def __getitem__(self, index):
        index = self.index[index * self.batch_size:(index + 1) * self.batch_size]
        batch = [self.indices[k] for k in index]
        X, y = self.__get_data(batch)
        return X, y

    def on_epoch_end(self):
        self.index = np.arange(len(self.indices))
        if self.shuffle == True:
            np.random.shuffle(self.index)

    def __get_data(self, batch):
        # some logic is written here
        # that prepares 3 X features and 3 Y outputs
        X = [input_array_1, input_array_2, input_array_3]
        y = [out_1, out_2, out_3]
        # print(len(X))
        return X, y
I am returning a tuple of X, y, where X has 3 input features and y has 3 output features, so the shape of X is (3, 32, 10, 1).
I am using the functional API to build the model (I have things like concatenation and multiple inputs/outputs, which isn't possible with Sequential) with the following structure.
When I try to fit the model with the generator using the following code
train_datagen = TrainGenerator(df=train_df, batch_size=32, num_classes=None, shuffle=True)
valid_datagen = ValidGenerator(df=train_df, batch_size=32, num_classes=None, shuffle=True)
model.fit(train_datagen, epochs=2, verbose=1, callbacks=[checkpoint, es])
I get these warnings and errors, which don't go away:
Epoch 1/2
WARNING:tensorflow:Model was constructed with shape (None, 10) for input Tensor("input_1:0", shape=(None, 10), dtype=float32), but it was called on an input with incompatible shape (None, None, None).
WARNING:tensorflow:Model was constructed with shape (None, 10) for input Tensor("input_2:0", shape=(None, 10), dtype=float32), but it was called on an input with incompatible shape (None, None, None).
WARNING:tensorflow:Model was constructed with shape (None, 10) for input Tensor("input_3:0", shape=(None, 10), dtype=float32), but it was called on an input with incompatible shape (None, None, None).
...
...
call
    return super(RNN, self).call(inputs, **kwargs)
/home/eduardo/.virtualenvs/kgpu3/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py:975 call
    input_spec.assert_input_compatibility(self.input_spec, inputs,
/home/eduardo/.virtualenvs/kgpu3/lib/python3.8/site-packages/tensorflow/python/keras/engine/input_spec.py:176 assert_input_compatibility
    raise ValueError('Input ' + str(input_index) + ' of layer ' +
ValueError: Input 0 of layer lstm is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: [None, None, None, 88]
I have rechecked the whole code and it isn't possible to have an input like (None, None, None) as in the warning or the error; my input dimension is (3, 32, 10, 1).
Update
I have also tried to write a plain Python generator function and got exactly the same error.
My generator function:
def generate_arrays_from_file(batchsize, df):
    # print(bat)
    inputs = []
    targets = []
    batchcount = 0
    while True:
        df3 = df.loc[np.arange(batchcount * batchsize, (batchcount * batchsize) + batchsize)]
        # Some pre-processing
        X = [input_array_1, input_array_2, input_array_3]
        y = [out_1, out_2, out_3]
        yield X, y
        batchcount = batchcount + 1
It seems like something is wrong internally with Keras (maybe due to the fact that I am using the functional API).
Update 2
I also tried outputting a tuple
X = (input1_X, input2_X, input3_X)
y = (output1_y, output2_y, output3_y)
and also named inputs/outputs, but it doesn't work either:
X = {"input_1": input1_X, "input_2": input2_X, "input_3": input3_X}
y = {"output_1": output1_y, "output_2": output2_y, "output_3": output3_y}
Note about problem formulation:
Changing the individual X features to shape (32, 10) instead of (32, 10, 1) might help to get rid of this error, but that is not what I want; it changes my problem (I would no longer have 10 time steps with one feature each).
Keras uses 'None' for dynamic dimensions.
As you can see from the model.summary() chart, the model expects shape (None, 10) for each of your inputs, which is two-dimensional. With the batch dimension, you should feed three-dimensional data to the model.
But you are feeding four-dimensional data.
I would guess that your model doesn't split your input list into three inputs. Try changing your inputs to a tuple:
X = (input_array_1,input_array_2,input_array_3)
In order to resolve this error:
ValueError: Input 0 of layer lstm is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: [None, None, None, 88]
TrainGenerator should be changed in the following way.
Current code:
input1_X = np.array(df3['input1_X'].to_list()).reshape(dlen,pad_len,1)
input2_X = np.array(df3['input2_X'].to_list()).reshape(dlen,pad_len,1)
input3_X = np.array(df3['input3_X'].to_list()).reshape(dlen,pad_len,1)
Should be changed to:
input1_X = np.array(df3['input1_X'].to_list()).reshape(dlen,pad_len)
input2_X = np.array(df3['input2_X'].to_list()).reshape(dlen,pad_len)
input3_X = np.array(df3['input3_X'].to_list()).reshape(dlen,pad_len)
The reason is that each of the 3 Inputs expects a 2-dimensional array, but the generator provides a 3-dimensional one. The expected shape is (batch_size, 10).
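As a rough sketch of the difference (batch size 32 and pad length 10 assumed, as in the question; the list below is just a stand-in for df3['input1_X'].to_list()):
import numpy as np

dlen, pad_len = 32, 10
rows = [[0.0] * pad_len for _ in range(dlen)]  # stand-in data

a3 = np.array(rows).reshape(dlen, pad_len, 1)  # shape (32, 10, 1): one axis too many for an Input expecting (batch_size, 10)
a2 = np.array(rows).reshape(dlen, pad_len)     # shape (32, 10): matches the expected (batch_size, 10)
print(a3.shape, a2.shape)                      # (32, 10, 1) (32, 10)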
I had a similar issue with a custom generator that just had to pass a numpy array of size 10 as input and one single output.
To solve this problem I had to transform the shape of the 2 vectors passed to the neural network like this:
def slides_generator(integer_list):
    # stuff happens
    x = np_ts[np_index:np_index + 10]  # numpy array
    y = np_ts[np_index + 10]           # numpy array
    yield tf.convert_to_tensor(x)[np.newaxis, ...], tf.convert_to_tensor(y)[np.newaxis, ...]

doge_gen = slides_generator(integer_list)  # next(doge_gen)
Basically you need to pass the 2 arrays with shape (None, size),
so in my case they were (None, 10) and (None, 1), and to achieve this I just passed 2 reshaped tensors.
You need the None dimension as the batch size.
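A minimal sketch of what the [np.newaxis, ...] indexing does to the shapes (the 10-step window and the np_ts series are assumed, as in the snippet above):
import numpy as np

np_ts = np.arange(20.0)                 # hypothetical time series
np_index = 0
x = np_ts[np_index:np_index + 10]       # shape (10,)
y = np_ts[np_index + 10]                # scalar
x_batch = x[np.newaxis, ...]            # shape (1, 10): batch dimension added
y_batch = np.array(y)[np.newaxis, ...]  # shape (1,)
print(x_batch.shape, y_batch.shape)     # (1, 10) (1,)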
I have coded a mini-batch creator for mini-batch gradient descent.
The code is here:
# function to create a list containing mini-batches
def create_mini_batches(X, y, batch_size):
    print(X.shape, y.shape)  # gives (280, 34) (280,)
    splitData = []
    splitDataResults = []
    batchCount = X.shape[0] // batch_size  # using floor division so the batch count is an integer
    for i in range(batchCount):
        splitData.append(X[(i) * batch_size:(i + 1) * batch_size, :])
        splitDataResults.append(y[(i) * batch_size:(i + 1) * batch_size, :])  # GIVES ERROR
    splitData = np.asarray(splitData)
    splitDataResults = np.asarray(splitDataResults)
    return splitData, splitDataResults, batchCount
The error says:
splitDataResults.append(y[(i) * batch_size : (i+1) * batch_size, :])
IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed
I am sure that the shape is correct but it gives me an error. What is wrong?
Try reshaping y:
print(X.shape, y.shape)  # gives (280, 34) (280,)
y = y.reshape(-1, 1)
This should fix your problem, since y will become two-dimensional.
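A minimal sketch of why the extra dimension matters (shapes from the question assumed):
import numpy as np

y = np.zeros(280)        # shape (280,): 1-D, so y[0:32, :] raises IndexError (too many indices)
y = y.reshape(-1, 1)     # shape (280, 1): 2-D, so the slice with ", :" works
print(y[0:32, :].shape)  # (32, 1)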
I am trying to construct a model that looks like this.
Notice that the output shape of the padding layer is 1 * 48 * 48 * 32, while the input shape to the padding layer is 1 * 48 * 48 * 16. Which type of padding operation does that?
My code:
prelu3 = tf.keras.layers.PReLU(shared_axes = [1, 2])(add2)
deptconv3 = tf.keras.layers.DepthwiseConv2D(3, strides=(2, 2), padding='same')(prelu3)
conv4 = tf.keras.layers.Conv2D(32, 1, strides=(1, 1), padding='same')(deptconv3)
maxpool1 = tf.keras.layers.MaxPool2D()(prelu3)
pad1 = tf.keras.layers.ZeroPadding2D(padding=(1, 1))(maxpool1) # This is the padding layer where problem lies.
This is the part of the code that is trying to replicate that block. However, I get a model that looks like this.
Am I missing something here or am I using the wrong layer?
By default, the Keras ZeroPadding2D layer takes in:
Input shape: a 4D tensor with shape (batch_size, rows, cols, channels).
Output shape: (batch_size, padded_rows, padded_cols, channels).
Please have a look at the zero_padding2d layer docs in Keras.
In that respect you are trying to double the dimension that is getting treated as the channel here.
Your input looks more like (batch, x, y, z) and you want to have (batch, x, y, 2*z).
Why do you want zero padding to double your z? I would rather suggest using a Dense layer like
tf.keras.layers.Dense(32)(maxpool1)
That would increase z shape from 16 to 32.
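For instance, a rough sketch of that suggestion (the 48 x 48 x 16 shape from the question is assumed; Dense applied to a 4-D tensor acts on the last axis):
import tensorflow as tf

x = tf.keras.layers.Input(shape=(48, 48, 16))
y = tf.keras.layers.Dense(32)(x)  # (None, 48, 48, 16) -> (None, 48, 48, 32)
print(y.shape)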
Edited:
I got something which can help you.
tf.keras.layers.ZeroPadding2D(
padding=(0, 8), data_format="channels_first"
)(maxpool1)
What this does is treat your (y, z) as the spatial (rows, cols) dimensions and x as the channel (because of data_format="channels_first"), then pad (0, 8) around (y, z), which turns z = 16 into 32.
Demo:
import tensorflow as tf
input_shape = (4, 28, 28, 3)
x = tf.keras.layers.Input(shape=input_shape[1:])
y = tf.keras.layers.Conv2D(16, 3, activation='relu', dilation_rate=2, input_shape=input_shape[1:])(x)
x = tf.keras.layers.ZeroPadding2D(
    padding=(0, 8), data_format="channels_first"
)(y)
print(y.shape, x.shape)
(None, 24, 24, 16) (None, 24, 24, 32)
When I run my code, I get a ValueError with the following message:
ValueError: Input dimension mis-match. (input[0].shape[1] = 1, input[2].shape[1] = 20)
Apply node that caused the error: Elemwise{Composite{((i0 + i1) - i2)}}[(0, 0)](Dot22.0, InplaceDimShuffle{x,0}.0, InplaceDimShuffle{x,0}.0)
Toposort index: 18
Inputs types: [TensorType(float64, matrix), TensorType(float64, row), TensorType(float64, row)]
Inputs shapes: [(20, 1), (1, 1), (1, 20)]
Inputs strides: [(8, 8), (8, 8), (160, 8)]
Inputs values: ['not shown', array([[ 0.]]), 'not shown']
Outputs clients: [[Elemwise{Composite{((i0 * i1) / i2)}}(TensorConstant{(1, 1) of 2.0}, Elemwise{Composite{((i0 + i1) - i2)}}[(0, 0)].0, Elemwise{mul,no_inplace}.0), Elemwise{Sqr}[(0, 0)](Elemwise{Composite{((i0 + i1) - i2)}}[(0, 0)].0)]]
My training data is a matrix of entries such as:
[ 815.257786 320.447 310.841]
And the batches I'm inputting to my training function have a shape of (BATCH_SIZE, 3) and type TensorType(float64, matrix)
My neural net is very simple:
self.inpt = T.dmatrix('inpt')
self.out = T.dvector('out')
self.network_in = nnet.layers.InputLayer(shape=(BATCH_SIZE, 3), input_var=self.inpt)
self.l0 = nnet.layers.DenseLayer(self.network_in, num_units=40,
nonlinearity=nnet.nonlinearities.rectify,
)
self.network = nnet.layers.DenseLayer(self.l0, num_units=1,
nonlinearity=nnet.nonlinearities.linear
)
My loss function is:
pred = nnet.layers.get_output(self.network)
loss = nnet.objectives.squared_error(pred, self.out)
loss = loss.mean()
I'm a bit confused as to why I'm getting a dimension mismatch. I'm passing in the correct input and label types (as per my symbolic variables), and the shape of my input data corresponds to the expected 'shape' parameter that I'm giving my InputLayer. I believe it's a problem with how I'm specifying the batch size, as when I use a batch size of 1 then my network can train without any problem, and the input[2].shape[1] value from the error message is my batch size. I'm quite new to machine learning, and any help would be greatly appreciated!
Turns out the problem was that my labels had the wrong dimensionality.
My data had shapes:
x_train.shape == (batch_size, 3)
y_train.shape == (batch_size,)
And the symbolic inputs to my net were:
self.inpt = T.dmatrix('inpt')
self.out = T.dvector('out')
I was able to solve my problem by reshaping y_train. I then changed the symbolic output variable to a matrix to account for these changes.
y_train = np.reshape(y_train, y_train.shape + (1,))
# y_train.shape == (batch_size, 1)
self.out = T.dmatrix('out')
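A minimal sketch of the reshape itself (a batch size of 20, as in the error message, is assumed):
import numpy as np

batch_size = 20
y_train = np.zeros(batch_size)                       # shape (20,): matches T.dvector, not T.dmatrix
y_train = np.reshape(y_train, y_train.shape + (1,))  # shape (20, 1): now a matrix for T.dmatrix('out')
print(y_train.shape)                                 # (20, 1)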